\section{Experts' load percentages}
\label{sec:percent}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{EMNLP 2022/Figs/percent.pdf}
\caption{Experts' load percentages for different encoder layers}
\label{fig:percent}
\end{figure*}
We compare the distribution of tokens over different experts for each encoder layer of $16K32E512D$. The results are shown in Figure~\ref{fig:percent}. The load percentages of the experts in different layers are relatively balanced.
\section{Computational Complexity Proof}
\label{sec:complex}
Given a Mixture of Attention Heads with $E$ attention experts, an MoA layer has $(2E+2)d_{\rm head}d_{\rm model}$ parameters, while a multi-head attention layer has $4d_{\rm model}^2$ parameters.
To compare these two complexities, we take the ratio between them.
\begin{align*}
&\frac{(2E+2)d_{\rm head}d_{\rm model}}{4d_{\rm model}^2} \\
=&\frac{(E+1)}{2E} \cdot \frac{Ed_{\rm head}}{d_{\rm model}}
\end{align*}
Writing $q=\frac{Ed_{\rm head}}{d_{\rm model}}$, the ratio becomes
\begin{align*}
\left(\frac{1}{2} + \frac{1}{2E}\right)q
\end{align*}
When $Ed_{\rm head}\simeq d_{\rm model}$, we have $q\simeq 1$, and
\begin{align*}
\frac{1}{2} + \frac{1}{2E}
\end{align*}
which is a decreasing, hyperbola-like function of $E$ whose value equals 1 when $E = 1$.
Therefore, if $E>1$, the ratio between the number of parameters of MoA and that of multi-head attention is less than 1. Thus, an MoA layer contains fewer parameters than a multi-head attention layer.
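As a quick numeric check of this derivation, the ratio can be evaluated directly. Below is a minimal Python sketch (the function names are ours, not from the paper's codebase):

```python
# Numeric check of the parameter-count ratio derived above.
# MoA: (2E + 2) * d_head * d_model parameters; MHA: 4 * d_model**2.
def moa_params(E, d_head, d_model):
    return (2 * E + 2) * d_head * d_model

def mha_params(d_model):
    return 4 * d_model ** 2

d_model = 512
for E in (1, 8, 32):
    d_head = d_model // E  # chosen so that E * d_head == d_model, i.e. q = 1
    ratio = moa_params(E, d_head, d_model) / mha_params(d_model)
    # The ratio should match 1/2 + 1/(2E) when E * d_head == d_model.
    assert abs(ratio - (0.5 + 0.5 / E)) < 1e-12
```

For $E=8$ the ratio is $0.5625$, already well below 1, and it approaches $1/2$ as $E$ grows.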
\section{Training Details}
\label{app:training}
\begin{table*}[htbp]
\centering
\small
\begin{tabular}{cccccc}
\toprule
Dataset & Model & Emb Size & FFD Size & Encoder Layers & Decoder Layers \\ \midrule
\multirow{2}{*}{WMT14} & MoA base & 512 & 2048 & 6 & 6 \\
& MoA big & 512 & 2048 & 6 & 6 \\
Wikitext-103 & MoA & 512 & 2048 & 8 & - \\
\bottomrule
\end{tabular}
\caption{Hyperparameters for different models.
}
\label{tab:model_hyperparameters}
\end{table*}
\begin{table*}[htbp]
\centering
\small
\begin{tabular}{ccccccccc}
\toprule
Dataset & Model & BSZ & LR & warmup & Dropout & DropATT & DropFFD & Epochs \\ \midrule
\multirow{2}{*}{WMT14 EN-DE}
& MoA base & 8092 $\times$ 32 & 7e-4 & 4000 & 0.2 & 0.2 & 0.1 & 100 \\
& MoA big & 4096 $\times$ 64 & 7e-4 & 4000 & 0.2 & 0.2 & 0.1 & 100 \\
\multirow{2}{*}{WMT14 EN-FR}
& MoA base & 8092 $\times$ 32 & 7e-4 & 8000 & 0.1 & 0 & 0.1 & 50 \\
& MoA big & 4096 $\times$ 64 & 7e-4 & 8000 & 0.1 & 0.1 & 0.1 & 100 \\
Wikitext-103 & MoA & 16384 $\times$ 32 & 6e-4 & 2000 & 0.1 & 0 & 0 & 60 \\
\bottomrule
\end{tabular}
\caption{Training hyperparameters for different models.
BSZ denotes the maximum number of tokens in each batch.
}
\label{tab:hyperparameters}
\end{table*}
All of our models are trained on 32 V100 GPUs.
We use the Adam Optimizer~\citep{DBLP:journals/corr/KingmaB14} with $\beta_1= 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-9$.
We use an inverse square root learning rate scheduler for the translation tasks and a linear scheduler for the masked language modeling task.
During training, we employed label smoothing~\citep{DBLP:conf/cvpr/SzegedyVISW16} of value 0.1.
More training hyperparameters can be found in Table~\ref{tab:hyperparameters}.
\section{Utility of different auxiliary losses}
\label{sec:losses}
We adopted two different auxiliary losses to balance the experts' loads, one is $L_a$ proposed by~\citet{DBLP:journals/corr/abs-2101-03961}, the other is $L_z$ proposed by~\citet{DBLP:journals/corr/abs-2202-08906}. To validate the utility of these two auxiliary losses, we conducted several ablation tests. The results are shown in Table~\ref{tab:losses}. With different combinations of auxiliary losses and different coefficients, we found that 0.01$L_a$ + 0.001$L_z$ achieved the best BLEU score on WMT14 EnDe test set.
\begin{table}[htbp]
\centering
\small
\begin{tabular}{cccccc} \toprule
MoA & 0.01$L_a$ & 0.01$L_z$ & 0.001$L_z$ & 0.01$L_a$+0.001$L_z$ & 0.01$L_a$+0.01$L_z$ \\ \midrule
$8K8E128D$ & 28.95 & 28.73 & 28.78 & 28.94 & 28.73\\
$8K16E128D$ & 28.53 & 28.68 & 28.61 & 28.77 & 28.62\\
$8K32E128D$ & 28.45 & 28.31 & 28.38 & 28.32 & 28.4\\ \bottomrule
\end{tabular}
\caption{Ablation test for different auxiliary losses}
\label{tab:losses}
\end{table}
\section{MACs calculation}
\label{app:macs}
\textsc{ptflops} runs a given model on a random tensor with pre-defined input shapes and estimates the amount of computation (multiply--accumulate operations) performed during inference. We must therefore define the input shapes when using \textsc{ptflops} to calculate MACs. For translation models, we set both the encoder and decoder sequence lengths to 10 and the batch size to 1. For language modeling models, we set the sequence length to 128 and the batch size to 1. With these pre-defined input shapes, \textsc{ptflops} performs the forward pass of the given model and reports its MACs.
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{Figs/MoAillus.pdf}
\caption{Simple illustration of MoA. MoA consists of a set of attention heads named attention experts. For each token in the input, a Router selects $k$ attention heads among all attention experts with different confidences. The output is a weighted sum of the selected attention heads given the confidence calculated by the Router.}
\label{fig:MoAillus}
\end{figure}
In recent years, large models have become a popular trend in the research of Natural Language Processing, especially large-scale Transformer~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}.
The model's capacity has increased from millions of parameters~\citep{DBLP:conf/naacl/DevlinCLT19,DBLP:journals/corr/abs-1907-11692}, to billions of parameters~\citep{DBLP:journals/corr/abs-1909-08053,DBLP:journals/jmlr/RaffelSRLNMZLL20,DBLP:journals/corr/abs-2203-00555}, even to trillions of parameters~\citep{DBLP:journals/corr/abs-2112-06905,DBLP:journals/corr/abs-2101-03961}. However, these large-scale models demand substantially more computations than small-scale models. A popular trend is to utilize conditional computation with a sparsely activated model to seek greater computational efficiency. Thus, only a part of the model’s parameters is used for a specific input during the forward computation, which alleviates the computational load.
Among these attempts, the Mixture of Experts (MoE)~\citep{DBLP:journals/neco/JacobsJNH91,DBLP:journals/neco/JordanJ94} is an essential technique.
Since the mixture of experts was first applied to the Transformer architecture~\citep{DBLP:conf/nips/ShazeerCPTVKHLH18}, researchers have mainly focused on combining the Feed-Forward Network layer with the Mixture of Experts.
Recent works have discussed how to get a better routing strategy~\citep{DBLP:conf/iclr/ShazeerMMDLHD17,DBLP:journals/corr/abs-2110-08246,DBLP:conf/icml/LewisBDGZ21,DBLP:journals/corr/abs-2112-14397} or how to scale up the Mixture of Experts on different nodes of GPUs~\citep{DBLP:conf/iclr/LepikhinLXCFHKS21,DBLP:journals/corr/abs-2101-03961}.
However, few attempts have explored the possibility of combining MoE with the Multi-Head Attention (MHA) mechanism.
Since MHA is another essential module of the Transformer architecture, combining MoE with the attention mechanism could also help achieve better performance while restraining the computational cost.
Besides, previous research has investigated the utility of different attention heads. \citet{DBLP:conf/acl/PengSLS20} found that recombining (reallocating) a subset of attention heads helps the translation task, since useless attention heads are pruned. In the field of dependency parsing, researchers have unveiled that some attention heads in BERT-like language models~\citep{DBLP:conf/naacl/DevlinCLT19,DBLP:journals/corr/abs-1907-11692} model individual dependency types~\citep{DBLP:journals/corr/abs-1911-12246} and syntactic functions~\citep{shen2020unsupervised}. \citet{DBLP:conf/acl/VoitaTMST19} claimed that attention heads have different functions that can be categorized into three types.
An input token need not pass through all the attention heads if we can select the relevant heads with suitable functions.
Thus, we conceive an attention mechanism that selects different attention heads for each token.
Based on the above discussion, we propose the Mixture of Attention Heads (MoA) (Section~\ref{sec:moa}), an attention mechanism that selects different attention heads for different inputs. A simple illustration of this idea is shown in Figure~\ref{fig:MoAillus}. MoA includes a set of attention heads with different parameters. Given an input, a routing network dynamically selects a subset of $k$ attention heads for each token. The output is a weighted sum of the selected attention heads, weighted by the confidences calculated by the routing network.
We conducted experiments on two tasks: Machine Translation and Masked Language Modeling (Section~\ref{sec:results}). Experiments show promising results against several strong baselines. On all tasks, our proposed Mixture of Attention Heads outperforms the original Transformer architecture~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}. Our model surpasses many large models or achieves comparable results with only half the computational cost. Our contributions are threefold:
1) We propose a new attention mechanism called Mixture of Attention Heads, combining the idea of the Mixture of Experts with the attention mechanism.
2) MoA improves the model's performance without substantially adding parameters or computational cost.
3) MoA is easy to scale up while keeping the computational complexity restrained, leading to further performance gains.
\section{Related Work}
\paragraph{Mixture of Experts}
The Mixture of Experts (MoE) was first introduced in the 1990s~\citep{DBLP:journals/neco/JacobsJNH91,DBLP:journals/neco/JordanJ94}.
\citet{DBLP:conf/iclr/ShazeerMMDLHD17} adopted this method in modern deep learning architectures (LSTM;~\citealt{DBLP:journals/neco/HochreiterS97}) and proved its effectiveness in language modeling and machine translation. The MoE was used to substitute the FFN layers of the Transformer architecture~\citep{DBLP:conf/nips/VaswaniSPUJGKP17} in the Mesh TensorFlow library~\citep{DBLP:conf/nips/ShazeerCPTVKHLH18}. GShard~\citep{DBLP:conf/iclr/LepikhinLXCFHKS21} is a lightweight module that helps scale a multilingual neural machine translation Transformer with a Sparsely-Gated Mixture of Experts beyond 600 billion parameters. In Switch Transformer~\citep{DBLP:journals/corr/abs-2101-03961}, the authors scaled the MoE-integrated Transformer architecture toward trillion-parameter models. GLaM~\citep{DBLP:journals/corr/abs-2112-06905} utilized a decoder-only architecture for language model pre-training. \citet{DBLP:journals/corr/abs-2201-05596} proposed a Pyramid-Residual-MoE for smaller model size and fast inference.
Various routing strategies~\citep{DBLP:conf/iclr/ShazeerMMDLHD17,DBLP:journals/corr/abs-2110-08246,DBLP:conf/icml/LewisBDGZ21,DBLP:journals/corr/abs-2112-14397} have been investigated to stabilize MoE training and balance the expert loads. \citet{DBLP:journals/corr/abs-2204-09179} pointed out the representation collapse issue in sparse Mixture of Experts models and addressed it with a two-stage routing strategy.
\paragraph{Machine Translation Architectures}
With the original Transformer architecture~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}, \citet{ott-etal-2018-scaling} found that training with reduced precision and large batches could improve translation performance.
Some models achieve better translation performance by scaling up the Transformer.
\citet{DBLP:conf/emnlp/LiuLGCH20} deepened the encoder and decoder of the Transformer by adequately initializing the model.
DeepNet~\citep{DBLP:journals/corr/abs-2203-00555} scaled Transformers up to 1,000 layers by introducing a new normalization function. However, these methods require a great amount of computational cost.
Some models make changes to the self-attention module. \citet{DBLP:conf/acl/PengSLS20} proposed the MAE model, whose reallocation of attention heads achieves better translation performance by pruning useless attention heads. However, their method is difficult to scale up for further improvement because it must use all the attention heads in the model rather than sparsely activating them; it also requires complicated block coordinate descent training steps. \citet{wu2018pay} proposed DynamicConv and LightConv, which replace the self-attention mechanism with a lightweight convolution.
\paragraph{Specialization of Attention Heads}
Since the publication of the Transformer architecture~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}, many researchers have been interested in analyzing how the attention mechanism works. \citet{DBLP:conf/acl/VoitaTMST19} systematically analyzed the attention heads in the encoder and categorized them into three functional subsets: positional, syntactic, and rare words. For dependency parsing, researchers observed the same phenomenon: different heads capture different syntactic functions~\citep{DBLP:journals/corr/abs-1911-12246,shen2020unsupervised}.
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.78\linewidth]{Figs/MoA.pdf}
\caption{Mixture of Attention Heads (MoA) architecture. MoA contains two mixtures of experts: one for the query projection, the other for the output projection. These two mixtures of experts select the same expert indices. A single routing network calculates the probability of each selected expert. The output of the MoA is the weighted sum of the outputs of the selected experts.}
\label{fig:moa}
\end{figure*}
\section{Preliminaries}
\subsection{Mixture of Experts}
MoE~\citep{DBLP:conf/iclr/ShazeerMMDLHD17} contains a set of expert networks $E_1, E_2, \dots, E_N$ and a routing network $G$.
The output of the MoE is the weighted sum of the output of each expert. The routing network calculates the probability for each expert. Formally, the output of the MoE can be written as:
\begin{equation}
\label{eq:moe}
y = \sum_{i=1}^N G(x)_i E_i(x)
\end{equation}
The routing network $G$ is a noisy top-$k$ routing network. Before the softmax function, Gaussian noise is added to the gate logits (Equation~\ref{eq:gate}). Then, only the top $k$ values are kept, and the remaining gate values are set to 0 (Equation~\ref{eq:noisytopk}).
\begin{equation}
\label{eq:noisytopk}
G(x) = \operatorname{Softmax}(\operatorname{TopK}(H(x), k))
\end{equation}
\begin{align}
\label{eq:gate}
H(x)_i =& (x\cdot W_g)_i + \mathcal{N}(0,1)\cdot\\ \notag
&\operatorname{Softplus}((x\cdot W_{noise})_i)
\end{align}
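This noisy top-$k$ gating can be sketched as follows. The snippet is a minimal NumPy illustration for a single token (all names are ours): logits outside the top $k$ are masked to $-\infty$ before the softmax, so the corresponding gate values become exactly 0.

```python
import numpy as np

def softplus(z):
    # Numerically stable softplus: log(1 + e^z).
    return np.logaddexp(0.0, z)

def noisy_top_k_gate(x, W_g, W_noise, k, rng):
    """Noisy top-k routing: Gaussian noise scaled by a learned softplus term
    is added to the gate logits, then only the k largest logits are kept
    before the softmax, so all other gate values are exactly 0."""
    clean = x @ W_g
    noisy = clean + rng.standard_normal(clean.shape) * softplus(x @ W_noise)
    topk = np.argsort(noisy)[-k:]              # indices of the k largest logits
    masked = np.full_like(noisy, -np.inf)
    masked[topk] = noisy[topk]
    e = np.exp(masked - masked[topk].max())    # stable softmax over kept logits
    return e / e.sum()

rng = np.random.default_rng(0)
d_model, N, k = 16, 8, 2
x = rng.standard_normal(d_model)
W_g = rng.standard_normal((d_model, N))
W_noise = rng.standard_normal((d_model, N))
g = noisy_top_k_gate(x, W_g, W_noise, k, rng)
assert np.isclose(g.sum(), 1.0) and int((g > 0).sum()) == k
```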
\subsection{Multi-head Attention}
\citet{DBLP:conf/nips/VaswaniSPUJGKP17} proposed an encoder-decoder architecture Transformer, which contains the multi-head attention module. Different heads from the multi-head attention module attend to information from different representation subspaces, which learn the input from various perspectives.
When performing multi-head attention with $k$ heads, $Q$, $K$, and $V$ are linearly projected $k$ times into subspaces with different learned projections. On each projected $Q$ and $K$, attention scores are calculated via Equation~\ref{eq:multiatt}. The values from different heads are projected back to the model dimension and summed up, as in Equation~\ref{eq:attn}.
\begin{equation}
\label{eq:multiatt}
W^{\rm att}_i = \text{Softmax} \left(\frac{QW_i^q(KW_i^{k})^T}{\sqrt{d_k}} \right)
\end{equation}
\begin{equation}
\label{eq:attn}
y = \sum_{i=1}^k\left(W^{\rm att}_iVW_i^v\right)W_i^o
\end{equation}
where $W_i^q,W_i^k,W_i^v \in \mathbb{R}^{d_m\times d_h}$ and $W_i^o \in \mathbb{R}^{d_h\times d_m}$, $d_k$ is the dimension of the key $K$.
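Equations~\ref{eq:multiatt} and~\ref{eq:attn} can be sketched as below. This is a minimal NumPy illustration with random weights, not a training implementation; the function and variable names are ours.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo):
    """Multi-head attention: per-head projections, scaled dot-product
    scores, and summed per-head output projections.
    Wq, Wk, Wv have shape (k, d_m, d_h); Wo has shape (k, d_h, d_m)."""
    k_heads, _, d_h = Wq.shape
    y = np.zeros((Q.shape[0], Wo.shape[2]))
    for i in range(k_heads):
        scores = softmax((Q @ Wq[i]) @ (K @ Wk[i]).T / np.sqrt(d_h))
        y += (scores @ (V @ Wv[i])) @ Wo[i]
    return y

rng = np.random.default_rng(0)
T, d_m, d_h, heads = 5, 16, 4, 4
Q = rng.standard_normal((T, d_m)); K = rng.standard_normal((T, d_m)); V = K
Wq = rng.standard_normal((heads, d_m, d_h)); Wk = rng.standard_normal((heads, d_m, d_h))
Wv = rng.standard_normal((heads, d_m, d_h)); Wo = rng.standard_normal((heads, d_h, d_m))
y = multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo)
assert y.shape == (T, d_m)
```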
\section{Mixture of Attention Heads}
\label{sec:moa}
In this work, we propose a variant of multi-head attention for Transformer called Mixture of Attention Heads (MoA), illustrated in Figure~\ref{fig:moa}.
MoA consists of two major components, the routing network $G$ and a group of $N$ attention experts $\left\{ E_1, ..., E_N \right\}$.
Similar to standard multi-head self-attention, the input of MoA includes three sequences, query sequence $Q$, key sequence $K$, and value sequence $V$.
We denote the query vector at time step $t$ by $q_t$.
For each $q_t$, the routing network $G$ selects a subset of $k$ experts $G(q_t) \subseteq \left\{ E_i \right\}$ based on $q_t$ and assigns a weight $w_{i,t}$ to each selected expert.
Then, these selected experts take $q_t$, $K$, and $V$ as inputs and compute an output $E_i(q_t,K,V)$.
The output of the MoA is the weighted sum of the selected experts' outputs.
Formally, the MoA output at time step $t$ can be written as:
\begin{equation}
y_t = \sum_{i \in G(q_t)} w_{i,t}\cdot E_i(q_t,K,V)
\end{equation}
\subsection{Routing Network}
Similar to previous mixture-of-experts methods, the routing network assigns attention experts to each input query.
In order to select $k$ experts for query $q_t$, we compute a routing probability $p_i$ for each expert $E_i$.
The routing probability is modeled with a linear layer $W_g$ and a softmax function:
\begin{equation}
p_{i,t} = \operatorname{Softmax}_i(q_t \cdot W_g)
\end{equation}
Based on the routing probability $p$, we select the top-$k$ attention experts among all $N$ attention experts with the largest probabilities.
Formally, the routing network is defined as:
\begin{equation}
\label{eq:topk}
G(q_t)=\operatorname{TopK}_i(p_{i,t}, k)
\end{equation}
where $W_g\in \mathbb{R}^{d_m \times N}$ is the routing matrix.
Then, we renormalize the routing probability of the selected experts to get normalized expert weights:
\begin{equation}
w_{i,t} = \frac{p_{i,t}}{\operatorname{Detach} \left( \sum_{j \in G(q_t)} p_{j,t} \right)}
\end{equation}
where $\operatorname{Detach}(\cdot)$ is a function that stops the gradient backpropagation.
In other words, the denominator receives zero gradient during the training process.
We empirically find that this trick helps the routing network learn better routing probability.
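The routing computation above can be sketched as follows (a NumPy illustration for one query; the names are ours). Numerically, $\operatorname{Detach}$ is the identity, so it appears only as a comment marking where an autograd framework would stop the gradient.

```python
import numpy as np

def route(q_t, W_g, k):
    """Select the top-k experts for one query and renormalize their
    routing probabilities. In an autograd framework the denominator
    would be wrapped in a stop-gradient call (e.g. .detach() in PyTorch)
    so that it receives zero gradient during training."""
    logits = q_t @ W_g
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # routing probabilities p_{i,t}
    selected = np.argsort(p)[-k:]         # top-k expert indices
    w = p[selected] / p[selected].sum()   # renormalized weights w_{i,t}
    return selected, w

rng = np.random.default_rng(0)
d_m, N, k = 16, 32, 8
q_t = rng.standard_normal(d_m)
W_g = rng.standard_normal((d_m, N))
idx, w = route(q_t, W_g, k)
assert len(idx) == k and np.isclose(w.sum(), 1.0)
```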
\subsection{Attention Expert}
An attention expert contains four different projection matrices, $W^q$, $W^k$, $W^v$ and $W^o$.
The attention calculation is similar to multi-head attention.
We first compute the attention weight for keys.
\begin{equation}
\label{eq:att}
W^{\rm att}_{i,t} = \text{Softmax}\left(\frac{q_t W_i^{q}(KW^k)^T}{\sqrt{d_h}}\right)
\end{equation}
where $W_i^q \in \mathbb{R}^{d_m\times d_h}$ is the query projection matrix, $W^k \in \mathbb{R}^{d_m\times d_h}$ is the key projection matrix, $d_m$ is the hidden state size, and $d_h$ is the head dimension.
We then compute the weighted sum of values:
\begin{equation}
o_{i,t} = W^{\rm att}_{i,t} VW^v
\end{equation}
where $W^v \in \mathbb{R}^{d_m\times d_h}$ is the value projection matrix.
Finally, the attention output is obtained by projecting $o_{i,t}$ back to the hidden state space:
\begin{equation}
\label{eq:moa}
E_i \left( q_t,K,V \right) = o_{i,t} W_i^o
\end{equation}
where $W_i^o \in \mathbb{R}^{d_h\times d_m}$ is the output projection matrix.
In the multi-head attention, the projection matrices $W^q$, $W^k$, $W^v$, and $W^o$ are all different across attention heads.
The MoA shares $W^k$ and $W^v$ across attention experts to reduce the computational complexity.
Attention experts are only differentiated by $W^q_i$ and $W^o_i$.
Thus, the expensive matrix projection of key sequence $KW^k$ and value sequence $VW^v$ can be pre-computed and shared for all attention experts.
Each expert only needs to compute the vector projections of the query, $q_t W_i^q$, and of the output, $o_{i,t} W_i^o$.
This design significantly reduces the computational and space complexity when the number of experts is large.
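The shared-projection design can be sketched as follows. This is a minimal NumPy illustration for one query token (the names are ours); the selected experts and their weights are assumed to come from the routing network.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moa_output(q_t, K, V, Wq, Wo, Wk, Wv, selected, weights):
    """MoA forward pass for one query token. W^k and W^v are shared, so
    K @ Wk and V @ Wv are computed once and reused by every selected
    expert; expert i only applies its own Wq[i] and Wo[i]."""
    d_h = Wk.shape[1]
    Kp = K @ Wk                                  # shared key projection
    Vp = V @ Wv                                  # shared value projection
    y_t = np.zeros(Wo.shape[2])
    for i, w in zip(selected, weights):
        att = softmax(q_t @ Wq[i] @ Kp.T / np.sqrt(d_h))
        y_t += w * (att @ Vp) @ Wo[i]
    return y_t

rng = np.random.default_rng(0)
T, d_m, d_h, N = 6, 16, 8, 4
K = rng.standard_normal((T, d_m)); V = K
q_t = rng.standard_normal(d_m)
Wq = rng.standard_normal((N, d_m, d_h)); Wo = rng.standard_normal((N, d_h, d_m))
Wk = rng.standard_normal((d_m, d_h)); Wv = rng.standard_normal((d_m, d_h))
y = moa_output(q_t, K, V, Wq, Wo, Wk, Wv, [0, 3], [0.6, 0.4])
assert y.shape == (d_m,)
```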
\subsection{Training Losses}
Previous work~\citep{DBLP:conf/iclr/ShazeerMMDLHD17} has observed that the routing network tends to converge to a state where it always produces large weights for the same few experts, which indicates the insufficient utility of all the experts.
Following \citet{DBLP:conf/iclr/ShazeerMMDLHD17} and \citet{DBLP:journals/corr/abs-2101-03961}, we add an auxiliary loss to balance the loads of different experts.
Given $N$ experts and a sequence with $T$ queries $Q=\{q_1,q_2,\dots ,q_T\}$, the auxiliary loss $L_a$ can be computed as:
\begin{equation}
L_a(Q) = N\cdot \sum_{i=1}^N f_i \cdot P_i
\end{equation}
where $f_i$ is the number of tokens attributed to the $i$-th expert,
\begin{equation}
\label{eq:fi}
f_i = \sum_{t=1}^T \delta_{i \in G(q_t)}
\end{equation}
where $\delta_{i \in G(q_t)}$ is an indicator that equals 1 if expert $i$ is selected for $q_t$ and 0 otherwise.
$P_i$ is the sum of router probability allocated for the $i$-th expert,
\begin{equation}
\label{eq:pi}
P_i = \sum_{t=1}^T p_{i,t}
\end{equation}
Both $f_i$ and $P_i$ are then normalized to sum to 1 over the experts.
Mathematically, $f_i$ is not differentiable while $P_i$ is, so the gradient flows only through $P_i$. A larger $f_i$ thus results in a larger derivative, which penalizes the corresponding $P_i$ and pushes it down. Moreover, since $P_i$ is computed by a softmax, decreasing the probability of overloaded experts increases that of the others.
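The auxiliary loss can be sketched as follows (a NumPy illustration with our own names); a perfectly balanced router yields the value 1.

```python
import numpy as np

def load_balancing_loss(probs, topk_indices, N):
    """Auxiliary loss L_a = N * sum_i f_i * P_i, where f_i is the fraction
    of tokens routed to expert i and P_i the fraction of router probability
    mass it receives (both normalized to sum to 1 over the experts).
    probs: (T, N) routing probabilities; topk_indices: (T, k) selections."""
    T = probs.shape[0]
    f = np.zeros(N)
    for t in range(T):
        f[topk_indices[t]] += 1.0
    f /= f.sum()                 # normalized token counts
    P = probs.sum(axis=0)
    P /= P.sum()                 # normalized probability mass
    return N * float(f @ P)

# A uniform router with tokens spread evenly over experts yields 1.0.
T, N = 8, 4
uniform = np.full((T, N), 1.0 / N)
idx = np.arange(T).reshape(T, 1) % N   # each expert receives T/N tokens
assert np.isclose(load_balancing_loss(uniform, idx, N), 1.0)
```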
\citet{DBLP:journals/corr/abs-2202-08906} introduced a router z-loss (Equation~\ref{eq:zloss}) to penalize large logits entering the gating network, which stabilizes training and improves performance.
\begin{equation}
\label{eq:zloss}
L_{z}(x)=\frac{1}{T} \sum_{t=1}^{T}\left(\log \sum_{i=1}^{N} e^{x_{i,t}}\right)^{2}
\end{equation}
where $x_{i,t}$ is the pre-softmax logit computed by the router for the $i$-th expert and input query $q_t$.
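The z-loss can be computed as below (a NumPy sketch; the names are ours):

```python
import numpy as np

def router_z_loss(logits):
    """Router z-loss: the squared log-sum-exp of the pre-softmax router
    logits, averaged over the T queries. logits: (T, N) array x_{i,t}."""
    lse = np.log(np.exp(logits).sum(axis=1))   # log-sum-exp per query
    return float((lse ** 2).mean())

# Rows whose exponentiated logits sum to 1 incur zero loss; larger
# logits are penalized quadratically through the log-sum-exp.
logits = np.log(np.full((3, 4), 0.25))   # each row's exp-sum is 1 -> lse = 0
assert np.isclose(router_z_loss(logits), 0.0)
```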
Each Mixture of Attention Heads module has an auxiliary loss and a router z-loss. We sum them over all MoA modules, with multiplicative coefficients $\alpha$ and $\beta$ respectively, and add them to the total model loss during training. Throughout this work, we use $\alpha = 0.01$ and $\beta=0.001$, which makes the two added losses effective without disturbing the primary cross-entropy loss.
\begin{equation}
L = L_{\text{model}} + \sum_{\forall \ \text{MoA module}} (\alpha L_a + \beta L_z)
\end{equation}
To validate the utility of these auxiliary losses, we conducted ablation tests and the results are shown in Appendix~\ref{sec:losses}.
\subsection{Computational Complexity and Number of Parameters}
On the one hand, given a sequence with $T$ tokens, the amount of computation required by an MoA layer that selects top-$k$ experts is
\begin{equation}
C_{MoA} = kT^2d_h + 2(k+1)Td_hd_m
\end{equation}
where $kd_h$ is the sum of the head dimensions of the selected experts.
It represents the maximum amount of information that can be collected by an MoA layer for a token.
On the other hand, the amount of computation required by a standard Multi-Head Attention (MHA) is
\begin{equation}
C_{MHA} = T^2d_m + 4Td_m^2
\end{equation}
where $d_m$ is the sum of the head dimensions.
If $kd_h\simeq d_m$, the computational complexity of MoA is smaller than that of MHA.
In other words, the MoA could collect more information for each token while maintaining a similar level of computational complexity as the MHA.
As for the number of parameters, given a Mixture of Attention Heads with $E$ attention experts, the number of parameters in MoA and MHA are:
\begin{equation}
M_{MoA} = (2E+2)d_hd_m, \quad M_{MHA} = 4d_m^2
\end{equation}
When $k = E$ and $Ed_h\simeq d_m$, the number of parameters in MoA is smaller than that in MHA.
In other words, MoA could collect more information for each token while maintaining a similar number of parameters as MHA.
More details of the calculation are in Appendix~\ref{sec:complex}.
The above discussion suggests that, from an information collection point of view, the MoA is more computation- and parameter-efficient than the standard MHA.
Our experimental results in Section~\ref{sec:results} also empirically support the hypothesis.
Additionally, the time complexity of MoA is determined by the number of selected attention heads $k$ and the attention head dimension $d_h$, not by the model's total number of parameters.
One could arbitrarily increase the number of parameters in MoA without increasing its computational complexity.
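These counts can be compared directly with a short Python sketch of the two formulas above (the function names are ours):

```python
# Per-sequence computation counts from the formulas above:
# C_MoA = k*T^2*d_h + 2*(k+1)*T*d_h*d_m,  C_MHA = T^2*d_m + 4*T*d_m^2.
def c_moa(T, k, d_h, d_m):
    return k * T**2 * d_h + 2 * (k + 1) * T * d_h * d_m

def c_mha(T, d_m):
    return T**2 * d_m + 4 * T * d_m**2

T, d_m = 128, 512
k, d_h = 8, 64                       # chosen so that k * d_h == d_m
assert c_moa(T, k, d_h, d_m) < c_mha(T, d_m)
# The total number of experts E does not appear in C_MoA, so adding
# experts adds parameters without adding computation.
```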
\section{Experiments}
\label{sec:results}
\subsection{Machine Translation}
\paragraph{Dataset}
We train our Mixture of Attention model on WMT 2014 English-German and English-French datasets~\citep{DBLP:conf/wmt/BojarBFHKLMPPSS14}.
Following the experimental settings used in \citet{liu2020very}, all sentences were encoded using byte-pair encoding~\citep{sennrich-etal-2016-neural}.
For both tasks, we use a joined dictionary and share all word embeddings of the encoder and the decoder.
For English-German, their shared vocabulary size is set to be 32k.
For English-French, their shared vocabulary size is set to be 40k.
\begin{table*}[htbp]
\centering
\small
\begin{tabular}{rccccc}
\toprule
\multirow{2}{*}{Model} &
\multicolumn{2}{c}{WMT14 EnDe} &
\multicolumn{2}{c}{WMT14 EnFr} &
\multirow{2}{*}{MACs\footnotemark[3]}\\
& \#Params & BLEU & \#Params & BLEU \\
\midrule
Transformer base~\citep{DBLP:conf/nips/VaswaniSPUJGKP17} & 65M & 27.3 & 62M & 38.1 & 604M \\
Admin 6L-6L~\citep{liu2020very} & 61M & 27.7 & 67M & 41.5 & 604M \\
MAE-7~\citep{DBLP:conf/acl/PengSLS20} & 63M & 28.4 & - & - & -\\ \midrule
MoA Base ($8K8E128D$) & 65M & 28.4 & 69M & 42.5 & 628M \\ \midrule
Transformer big~\citep{DBLP:conf/nips/VaswaniSPUJGKP17} & 213M & 28.4 & 210M & 41.8 & 2090M\\
Transformer big~\citep{ott-etal-2018-scaling} & 210M & 29.3 & 222M & 43.2 & 2090M \\
LightConv~\citep{wu2018pay} & 202M & 28.9 & - & 43.1 & 1750M\footnotemark[4]\\
DynamicConv~\citep{wu2018pay} & 213M & 29.7 & - & 43.2 & 1790M\footnotemark[4] \\
Admin 18L-18L~\citep{liu2020very} & 151M & 29.0 & - & - & 1490M \\
Admin 60L-12L~\citep{liu2020very} & 256M & 30.1 & 262M & 43.8 & 2550M \\
\midrule
MoA Big ($16K32E256D$) & 200M & 29.4 & 204M & 43.7 & 1220M \\ \bottomrule
\end{tabular}
\caption{BLEU score on WMT14 translation datasets.
MACs (Multiply–Accumulate Operations)\footnotemark[3] measures the computational complexity of each model.
For different models, their MACs are computed on a source sentence of length $T_{src}=10$ and a target sentence of length $T_{tgt}=10$.
}
\label{tab:bleu}
\end{table*}
\paragraph{Training and Evaluation Details}
We use the Adam Optimizer~\citep{DBLP:journals/corr/KingmaB14} with a learning rate of $7e^{-4}$ and the inverse square root learning rate scheduler.
During training, we employed label smoothing~\citep{DBLP:conf/cvpr/SzegedyVISW16} of value 0.1.
More training details can be found in Appendix~\ref{app:training}.
For the evaluation, we average the last 10 epochs' checkpoints.
We list BLEU score~\citep{DBLP:conf/acl/PapineniRWZ02} computed with \textsc{multi-bleu.perl}, and apply the compound split post-processing\footnote{\url{https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/get_ende_bleu.sh}} introduced in \citet{DBLP:conf/nips/VaswaniSPUJGKP17}.
We use MACs (Multiply–Accumulate Operations)\footnotemark[3] to evaluate the computational complexity of different models on a fixed input.
Details of the MACs calculation are in Appendix~\ref{app:macs}.
\footnotetext[3]{We adopt the open-source tool \textsc{ptflops} (\url{https://github.com/sovrasov/flops-counter.pytorch}) to calculate the MACs.}
\footnotetext[4]{These MACs values are underestimated because \textsc{ptflops} does not support the customized convolution layers in DynamicConv and LightConv.}
\paragraph{Baselines}
We compare with several strong baselines: Transformer base and big~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}; Transformer big~\citep{ott-etal-2018-scaling} trained with reduced precision and large batches; DynamicConv~\citep{wu2018pay}, which replaces the self-attention mechanism with a lightweight convolution; MAE-7~\citep{DBLP:conf/acl/PengSLS20}, which reallocates attention heads; and Admin~\citep{DBLP:conf/emnlp/LiuLGCH20}, which deepens the Transformer architecture.
For our model, three hyperparameters differentiate its variants: the number of activated attention heads per token ($K$), the total number of experts ($E$), and the attention expert dimension ($D$).
For example, our MoA base model is noted as $8K8E128D$, because it has 8 attention experts, 128 dimension per expert, and all 8 experts are activated for each token.
Our MoA big model is $16K32E256D$ as it has 32 attention experts and sparsely activates the top 16 experts for each token.
\paragraph{Results}
The results on the test set of WMT14 EnDe and WMT14 EnFr datasets are shown in Table~\ref{tab:bleu}. The table is split into 2 parts, the upper part is for base models and the lower part is for large models.
On all datasets, MoA base outperforms Transformer base and Admin 6L-6L by at least 0.6 BLEU.
On the WMT14 EnFr dataset, MoA base also outperforms Transformer big.
On the WMT14 EnDe dataset, MoA base reaches comparable results with the Mixture of Attention Experts model (MAE-7), which is the state-of-the-art performance for base-level models. MACs of MAE-7 and our model are comparable in the setting of 8 attention heads.
While both models leverage the idea of weighting the attention heads, MoA is easier to implement and does not require the complicated block coordinate descent training steps.
Compared to standard multi-head self-attention, the routing mechanism pays more attention to the more informative attention heads for each token, thus enabling the MoA base model to achieve better computation and parameter efficiency.
In the big-scale setting, MoA big consistently outperforms standard transformer big models, despite requiring significantly less computation.
Compared to the models with more parameters, MoA is still very competitive.
Only Admin 60L-12L outperforms MoA big on both datasets.
However, that model has more parameters and requires about twice the MACs.
The MACs of MoA big is 1220M, which is the lowest amount among big-scale models.
This result shows that our proposed method could easily scale up to a large amount of parameters and achieve good results without substantially burdening the computation system.
\begin{table}[t]
\centering
\small
\begin{tabular}{ccccc}
\toprule
\multicolumn{2}{c}{Model} & \#Params & PPL & MACs \\
\midrule
\multicolumn{2}{c}{Transformer} & 51.34M & 4.95 & 6.55G \\
\midrule
\multirow{5}{*}{MoA} & $8K8E128D$ & 52.45M & 4.82 & 7.27G \\
& $8K8E256D$ & 61.89M & 4.64 & 8.97G \\
& $8K16E256D$ & 78.75M & 4.48 & 8.97G \\
& $8K32E256D$ & 112.47M & 4.25 & 8.98G \\
& $8K64E256D$ & 179.91M & 4.21 & 8.98G\\
\bottomrule
\end{tabular}
\caption{Perplexity on wikitext-103 corpus test data for masked language modeling.
MACs are computed on an input sequence of length $T=128$.}
\label{tab:ppl}
\end{table}
\begin{table*}[htbp]
\small
\centering
\begin{tabular}{rccccccc}
\toprule
MoA Model & $K$ & $E$ & $D$ & \#Params & PPL(Valid) & BLEU(Test) & MACs \\
\midrule
Base & 8 & 8 & 128 & 65M & 4.68 & 28.4 & 628M \\
\midrule
(A) & 8 & 8 & 256 & 87M & 4.51 & 28.7 & 841M \\
(B) & 8 & 16 & 256 & 125M & 4.45 & 28.4 & 841M \\
(C) & 8 & 32& 64 & 83M & 4.79 & 27.9 & 524M \\
(D) & 8 & 32& 128 & 123M & 4.55 & 28.4 & 631M \\
(E) & 8 & 32& 256 & 200M & 4.44 & 28.8 & 841M \\
(F) & 4 & 32& 256 & 200M & 4.65 & 27.5 & 654M \\
\midrule
Big & 16 & 32& 256 & 200M & 4.35 & 29.4 & 1220M \\
\bottomrule
\end{tabular}
\caption{BLEU score of different MoA models on the WMT14 EnDe Dataset.
}
\label{tab:moaarch}
\end{table*}
\subsection{Masked Language Modeling}
Masked Language Modeling is the standard training objective for many Pretrained Language Models (PLMs), including BERT~\citep{DBLP:conf/naacl/DevlinCLT19} and RoBERTa~\citep{DBLP:journals/corr/abs-1907-11692}.
The task replaces a random sample of tokens in the input sequence with a special token \texttt{[MASK]}.
The training objective is a cross-entropy loss on predicting the masked tokens.
To better mimic the procedure of training PLMs, we adopt the setting introduced in RoBERTa~\citep{DBLP:journals/corr/abs-1907-11692} to conduct the masked language modeling experiment.
\paragraph{Dataset}
We conducted the masked language modeling on the wikitext-103 dataset~\citep{merity2016pointer}.
The corpus includes over 100 million tokens collected from verified Good and Featured articles on English Wikipedia.
Following the settings in \citet{merity2016pointer}, the training/validation/test set has 103M/218K/246K words.
The corpus is tokenized with the 50K subword vocabulary used in RoBERTa and initially introduced in GPT~\citep{radford2019language}.
\paragraph{Settings}
We train the model with the dynamic masking strategy and the full-sentences input format.
To avoid overfitting on the training corpus, we adopt a medium-size RoBERTa model as the base model, with 512-dim word embedding, 2048-dim feed-forward network, 8 heads, and 8 layers.
Training details can be found in Appendix~\ref{app:training}.
The perplexity is used as the evaluation metric.
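To make the masking procedure concrete, the sketch below shows RoBERTa-style dynamic masking (an illustrative toy, not our training code; the function name and toy vocabulary are ours). The mask pattern is re-sampled every epoch rather than fixed at preprocessing time:

```python
import random

MASK = "[MASK]"
VOCAB = ["a", "b", "c", "d"]  # toy vocabulary; real runs use the 50K BPE vocabulary

def dynamic_mask(tokens, p=0.15, seed=None):
    """Select ~p of the positions; 80% become [MASK], 10% a random token,
    10% stay unchanged. Re-sampling per epoch gives 'dynamic' masking."""
    rng = random.Random(seed)
    inputs, targets = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < p:
            targets[i] = tok                   # the loss is computed only here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK               # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.choice(VOCAB)  # 10%: random token
            # remaining 10%: keep the original token
    return inputs, targets
```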
\paragraph{Results}
Table~\ref{tab:ppl} shows the perplexity on WikiText-103 test data.
While using a similar amount of parameters, MoA outperforms the standard transformer model by 0.13 perplexity.
Furthermore, the performance improves as the number of experts $E$ and the head size $D$ increase, while the number of selected heads $K$ and the computational complexity remain the same.
This observation shows that our model can improve performance while maintaining the computational complexity.
\subsection{Model Analysis}
\paragraph{MoA parameter influence}
We study the influence of three parameters, $K$, $E$, and $D$, on the WMT14 En-De Dataset. The results are shown in Table~\ref{tab:moaarch}.
For the expert dimension $D$, we fix $K=8$ and $E=32$ and vary $D$ over 64, 128, and 256. As the expert dimension $D$ increases (rows C, D, E in Table~\ref{tab:moaarch}), both the PPL on the validation set and the BLEU score on the test set improve. This improvement comes from the additional parameters: with a larger expert dimension, each expert has more parameters, and the computational cost increases accordingly. We believe this increase in computational cost is acceptable: as shown in Table~\ref{tab:bleu}, the Transformer big model requires 2090M MACs to reach a BLEU score of 28.4, whereas enlarging the expert hidden size lets MoA reach a BLEU score of 28.8 with only 841M MACs ($\ll$ 2090M).
For the number of attention experts $E$, we fix $K=8$ and $D=256$ and test three values of $E$: 8, 16, and 32. As the number of experts grows, the PPL on the validation set decreases, indicating that the model keeps benefiting from scaling up. The BLEU score on the test set does not track the PPL, possibly because the training objective is not directly linked to the BLEU score calculation.
Nevertheless, 32 experts still achieve a better BLEU score than the other two settings.
As the number of selected attention heads $K$ remains unchanged, the MACs for these three settings are the same.
Thus, MoA allows us to improve the model ability by adding more parameters without changing the computational complexity.
For the number of selected attention heads $K$, we test $K$ values of 4, 8, and 16, fixing $E=32$ and $D=256$. As the number of selected attention heads increases, the PPL on the validation set decreases and the BLEU score on the test set rises. Since the number of attention experts remains the same, the model's total parameter count stays at 200M.
This result shows the trade-off between computational efficiency and performance: the model needs more computation for better performance, with MACs ranging from 654M to 1220M.
\paragraph{MoA Expert loads}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{EMNLP 2022/Figs/percent4.pdf}
\caption{Experts' load percentages for encoder layer 4.
Experts are indexed by their order of percentages.
}
\label{fig:percent4}
\end{figure}
Load balancing is a long-standing problem of MoE models~\citep{DBLP:journals/corr/abs-2101-03961}.
Figure~\ref{fig:percent4} shows the experts' load of the fourth layer of the MoA big model.
It plots the percentage of each expert used in the development set of WMT14 EN-DE.
For each input token, MoA big selects 16 attention heads among 32 attention experts.
The figure shows a relatively balanced load: the most used expert is selected by 5\% of the tokens, the least used by 1\%, and most experts' loads lie between 2\% and 4\%.
This observation suggests that the input tokens are attributed equally among attention experts.
Experts are assigned to different roles with substantially different groups of input tokens.
The load for every expert of different layers is shown in Appendix~\ref{sec:percent}.
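These load percentages reduce to simple bookkeeping over the router's top-$k$ selections. The sketch below (our own, with illustrative names) computes them:

```python
from collections import Counter

def expert_loads(selections, n_experts):
    """selections: one list of selected expert indices (top-k) per token.
    Returns the percentage of selections that went to each expert."""
    counts = Counter(i for sel in selections for i in sel)
    total = sum(counts.values())
    return [100.0 * counts.get(e, 0) / total for e in range(n_experts)]
```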
\paragraph{MoA Interpretability: Specialize the Experts}
\begin{table}[t]
\centering
\small
\begin{tabular}{ccccccc} \toprule
Expert 5 &Expert 29 & Expert 10 \\ \midrule
\textbf{Tech.} & \textbf{adv.} & \textbf{Location} \\ \midrule
DSL & likely & Costasur \\
JPEG &tastefully & Kaliningrad\\
Module & environmentally & Freiburg \\
DSLR & heavily & Brava \\
screen & certainly & Jesolo \\ \midrule
Expert 7 & Expert 24 & Expert 23 \\ \midrule
\textbf{Computer} & \textbf{Reaction} & \textbf{Name} \\ \midrule
Outlook & supportive & Marx\\
Excel & misunderstanding & Jarzembowski \\
IO & advocating & Reding\\
emails & confirming & Donald\\
monitors & excitement &Socrates \\
\bottomrule
\end{tabular}
\caption{Indicative tokens of each expert for the first encoder layer of MoA}
\label{tab:indicative}
\end{table}
In this section, we study whether different experts possess different ``expertises''.
We try to find the tokens most likely to co-occur with each expert.
We compute the pointwise mutual information (PMI;~\citealt{DBLP:journals/coling/ChurchH90}) between tokens and experts:
\begin{equation*}
\operatorname{PMI}({\rm token}_i, {\rm expert}_j) = \log \frac{p({\rm token}_i, {\rm expert}_j)}{p({\rm token}_i)\cdot p({\rm expert}_j)} \text{.}
\end{equation*}
For each expert, the larger the PMI, the more relevant the token is to that expert.
Table~\ref{tab:indicative} lists the most indicative tokens of each expert for the first encoder layer of $16K32E512D$.
Many experts are associated with nouns in the same topic, e.g., Location, Name, Tech, etc.
We also found that some other experts are associated with adjectives and adverbs.
For example, Expert 29 is related to adverbs, and Expert 24 is connected to people's reactions, where some tokens are adjectives.
We also study the relation between expert and input tokens for other layers of the encoder, but it is hard to find clear patterns in other layers.
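The PMI statistics above can be computed directly from (token, expert) co-occurrence counts. The sketch below is illustrative (names are ours) and uses the standard log-ratio form of PMI:

```python
import math
from collections import Counter

def pmi_table(pairs):
    """pairs: observed (token, expert) co-occurrences.
    Returns PMI(t, e) = log p(t, e) / (p(t) p(e)) for each observed pair."""
    joint = Counter(pairs)
    tok = Counter(t for t, _ in pairs)
    exp = Counter(e for _, e in pairs)
    n = len(pairs)
    return {(t, e): math.log((c / n) / ((tok[t] / n) * (exp[e] / n)))
            for (t, e), c in joint.items()}
```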
\section{Conclusion}
This work introduces the Mixture of Attention Heads (MoA).
MoA contains a set of attention experts and a routing network.
Given an input, the MoA attributes a probability to each expert by the routing network and selects the top-K experts.
The output of MoA is a weighted sum of the selected attention experts.
The weighting mechanism allows different tokens to focus on different experts, thus improving the model's parameter and computation efficiency.
Experimental results show that a base-scale MoA model could achieve comparable or better performance than a transformer big model.
Furthermore, MoA could improve its performance by adding more attention experts while maintaining a relatively small computational complexity.
In this way, MoA can achieve comparable performance with deeper and computationally more expensive models.
The interpretability analysis shows that different attention experts tend to specialize in a specific type of input tokens.
\section*{Limitations}
In this work, we scale MoA up to at most 64 experts. However, works combining the mixture of experts with the FFN layer have expanded the number of experts to thousands. In the future, we will explore the limit of MoA's scaling ability.
Our implementation of MoA is not yet optimized: it does not fully exploit the parallel computing capability of GPUs, and it spends extra time on memory copy operations.
Although the computational complexity (MACs) of MoA is relatively low compared to other baselines, the running time of our implementation is therefore not optimal. If we optimized the implementation at the CUDA kernel level to remove the memory copy operations, we would expect to at least halve the wall-clock time, making an MoA block as fast as a standard attention block.
Similar to Transformer architecture, MoA needs a careful hyperparameter search to reach satisfying results.
\section*{Acknowledgments}
This work is supported in part by the State Key Laboratory of the Software Development Environment of China under Grant SKLSDE-2021ZX-16.
\section{Experts' load percentages}
\label{sec:percent}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{EMNLP 2022/Figs/percent.pdf}
\caption{Experts' load percentages for different encoder layers}
\label{fig:percent}
\end{figure*}
We compare the distribution of tokens over the different experts for each encoder layer of $16K32E512D$. The results are shown in Figure~\ref{fig:percent}. The load percentages of the experts in each layer are relatively balanced.
\section{Computational Complexity Proof}
\label{sec:complex}
Given a Mixture of Attention Heads with $E$ attention experts, an MoA layer has $(2E+2)d_{\rm head}d_{\rm model}$ parameters, while a multi-head attention layer has $4d_{\rm model}^2$ parameters.
To compare the two, we take their ratio:
\begin{align*}
&\frac{2(E+1)d_{\rm head}d_{\rm model}}{4d_{\rm model}^2} \\
=&\frac{E+1}{2E} \cdot \frac{Ed_{\rm head}}{d_{\rm model}}
\end{align*}
Writing $q=\frac{Ed_{\rm head}}{d_{\rm model}}$, the ratio becomes
\begin{align*}
\left(\frac{1}{2} + \frac{1}{2E}\right)q
\end{align*}
When $Ed_{\rm head}\simeq d_{\rm model}$, we have $q\simeq 1$, and the ratio reduces to
\begin{align*}
\frac{1}{2} + \frac{1}{2E}
\end{align*}
which, as a function of $E$, is a hyperbola-like curve whose value equals 1 when $E = 1$.
Therefore, for $E>1$, the ratio between the number of parameters of MoA and that of multi-head attention is smaller than 1. Thus, the MoA layer contains fewer parameters than a multi-head attention layer.
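The ratio can be checked numerically. The following sketch (our own, for verification only) confirms that with $Ed_{\rm head} = d_{\rm model}$ the parameter ratio equals $1/2 + 1/(2E)$:

```python
def moa_params(E, d_head, d_model):
    # E query + E output projections, plus the shared key and value projections
    return (2 * E + 2) * d_head * d_model

def mha_params(d_model):
    # query, key, value, and output projections
    return 4 * d_model ** 2

# With E * d_head == d_model, the ratio reduces to 1/2 + 1/(2E):
for E in (1, 8, 32):
    d_model, d_head = 512, 512 // E
    ratio = moa_params(E, d_head, d_model) / mha_params(d_model)
    assert abs(ratio - (0.5 + 0.5 / E)) < 1e-12
```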
\section{Training Details}
\label{app:training}
\begin{table*}[htbp]
\centering
\small
\begin{tabular}{ccccccccccccc}
\toprule
Dataset & Model & Emb Size & FFD Size & Encoder Layers & Decoder Layers \\ \midrule
\multirow{2}{*}{WMT14} & MoA base & 512 & 2048 & 6 & 6 \\
& MoA big & 512 & 2048 & 6 & 6 \\
Wikitext-103 & MoA & 512 & 2048 & 8 & - \\
\bottomrule
\end{tabular}
\caption{Hyperparameters for different models.
}
\label{tab:model_hyperparameters}
\end{table*}
\begin{table*}[htbp]
\centering
\small
\begin{tabular}{ccccccccccccc}
\toprule
Dataset & Model & BSZ & LR & warmup & Dropout & DropATT & DropFFD & Epochs \\ \midrule
\multirow{2}{*}{WMT14 EN-DE}
& MoA base & 8092 $\times$ 32 & 7e-4 & 4000 & 0.2 & 0.2 & 0.1 & 100 \\
& MoA big & 4096 $\times$ 64 & 7e-4 & 4000 & 0.2 & 0.2 & 0.1 & 100 \\
\multirow{2}{*}{WMT14 EN-FR}
& MoA base & 8092 $\times$ 32 & 7e-4 & 8000 & 0.1 & 0 & 0.1 & 50 \\
& MoA big & 4096 $\times$ 64 & 7e-4 & 8000 & 0.1 & 0.1 & 0.1 & 100 \\
Wikitext-103 & MoA & 16384 $\times$ 32 & 6e-4 & 2000 & 0.1 & 0 & 0 & 60 \\
\bottomrule
\end{tabular}
\caption{Training Hyperparameters for different models.
BSZ denotes the maximum number of tokens in each batch.
}
\label{tab:hyperparameters}
\end{table*}
All of our models are trained on 32 V100 GPUs.
We use the Adam optimizer~\citep{DBLP:journals/corr/KingmaB14} with $\beta_1 = 0.9$, $\beta_2 = 0.98$, and $\epsilon = 10^{-9}$.
We use an inverse square root learning rate scheduler for the translation tasks and a linear scheduler for the masked language modeling task.
During training, we employ label smoothing~\citep{DBLP:conf/cvpr/SzegedyVISW16} with a value of 0.1.
More training hyperparameters can be found in Table~\ref{tab:hyperparameters}.
\section{Utility of different auxiliary losses}
\label{sec:losses}
We adopted two auxiliary losses to balance the experts' loads: $L_a$, proposed by \citet{DBLP:journals/corr/abs-2101-03961}, and $L_z$, proposed by \citet{DBLP:journals/corr/abs-2202-08906}. To validate their utility, we conducted several ablation tests; the results are shown in Table~\ref{tab:losses}. Across different combinations of auxiliary losses and coefficients, we found that 0.01$L_a$ + 0.001$L_z$ achieved the best BLEU score on the WMT14 EnDe test set.
\begin{table}[htbp]
\centering
\small
\begin{tabular}{cccccc} \toprule
MoA & 0.01$L_a$ & 0.01$L_z$ & 0.001$L_z$ & 0.01$L_a$+0.001$L_z$ & 0.01$L_a$+0.01$L_z$ \\ \midrule
$8K8E128D$ & 28.95 & 28.73 & 28.78 & 28.94 & 28.73\\
$8K16E128D$ & 28.53 & 28.68 & 28.61 & 28.77 & 28.62\\
$8K32E128D$ & 28.45 & 28.31 & 28.38 & 28.32 & 28.4\\ \bottomrule
\end{tabular}
\caption{Ablation test for different auxiliary losses}
\label{tab:losses}
\end{table}
\section{MACs calculation}
\label{app:macs}
\textsc{ptflops} runs a given model on a random tensor with pre-defined input shapes and estimates the amount of computation (multiply-add operations) during inference. We need to define the input shapes when using \textsc{ptflops} to calculate MACs. For the translation models, we set both the encoder and decoder sequence lengths to 10 and the batch size to 1. For the language modeling models, we set the sequence length to 128 and the batch size to 1. With these pre-defined input shapes, \textsc{ptflops} performs the forward pass of the given model and reports the MACs.
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{Figs/MoAillus.pdf}
\caption{Simple illustration of MoA. MoA consists of a set of attention heads named attention experts. For each token in the input, a Router selects $k$ attention heads among all attention experts with different confidences. The output is a weighted sum of the selected attention heads given the confidence calculated by the Router.}
\label{fig:MoAillus}
\end{figure}
In recent years, large models have become a popular trend in the research of Natural Language Processing, especially large-scale Transformer~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}.
The model's capacity has increased from millions of parameters~\citep{DBLP:conf/naacl/DevlinCLT19,DBLP:journals/corr/abs-1907-11692}, to billions of parameters~\citep{DBLP:journals/corr/abs-1909-08053,DBLP:journals/jmlr/RaffelSRLNMZLL20,DBLP:journals/corr/abs-2203-00555}, and even to trillions of parameters~\citep{DBLP:journals/corr/abs-2112-06905,DBLP:journals/corr/abs-2101-03961}. However, these large-scale models demand substantially more computation than small-scale models. A popular trend is to utilize conditional computation with a sparsely activated model to seek greater computational efficiency: only a part of the model's parameters is used for a given input during the forward computation, which alleviates the computational load.
Among these attempts, the Mixture of Experts (MoE)~\citep{DBLP:journals/neco/JacobsJNH91,DBLP:journals/neco/JordanJ94} is an essential technique.
Since first applying the mixture of experts to Transformer architecture~\citep{DBLP:conf/nips/ShazeerCPTVKHLH18}, researchers have mainly focused on combining the Feed-Forward Network layer and the Mixture of Experts.
Recent works have discussed how to get a better routing strategy~\citep{DBLP:conf/iclr/ShazeerMMDLHD17,DBLP:journals/corr/abs-2110-08246,DBLP:conf/icml/LewisBDGZ21,DBLP:journals/corr/abs-2112-14397} or how to scale up the Mixture of Experts on different nodes of GPUs~\citep{DBLP:conf/iclr/LepikhinLXCFHKS21,DBLP:journals/corr/abs-2101-03961}.
However, few attempts have explored the possibility of combining MoE with the Multi-Head Attention (MHA) mechanism.
Since the MHA is another compulsory module in the Transformer architecture, combining MoE and the attention mechanism could also help achieve better performance while restraining the computational cost.
Besides, previous research has investigated the utility of different attention heads. \citet{DBLP:conf/acl/PengSLS20} found that the combination (reallocation) of a subset of attention heads helps the Translation task since they prune the useless attention heads. In the field of dependency parsing, researchers have unveiled that some attention heads in BERT-like language models~\citep{DBLP:conf/naacl/DevlinCLT19,DBLP:journals/corr/abs-1907-11692} model individual dependency types~\citep{DBLP:journals/corr/abs-1911-12246} and syntactic functions~\citep{shen2020unsupervised}. ~\citet{DBLP:conf/acl/VoitaTMST19} claimed that the attention heads have different functions that could be categorized into three types.
For an input token, there is no need to pass through all the attention heads if we can select a few relevant heads with the proper functions.
We therefore conceive an attention mechanism that selects different attention heads for each token.
Based on the above discussion, we propose the Mixture of Attention Heads (MoA) (Section~\ref{sec:moa}), an attention mechanism that selects different attention heads for different inputs. A simple illustration of this idea is shown in Figure~\ref{fig:MoAillus}. MoA includes a set of attention heads with different parameters. Given an input, a routing network dynamically selects a subset of $k$ attention heads for each token. The output is a weighted sum of the selected attention heads, weighted by the confidence calculated by the routing network.
We conducted experiments on two tasks: Machine Translation and Masked Language Modeling (Section~\ref{sec:results}). Experiments show promising results against several strong baselines. In all tasks, our proposed Mixture of Attention Heads outperforms the original Transformer architecture~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}. Our model surpasses many large models or achieves comparable results with only half the computational cost. Our contributions are threefold:
1) We proposed a new attention mechanism called Mixture of Attention Heads, combining the idea of Mixture of Experts with the attention mechanism.
2) MoA can improve the model's performance without substantially adding parameters and computational cost.
3) MoA is easy to scale up while maintaining a restrained computational complexity, resulting in further performance improvements.
\section{Related Work}
\paragraph{Mixture of Experts}
The Mixture of Experts (MoE) was first introduced in the 1990s~\citep{DBLP:journals/neco/JacobsJNH91,DBLP:journals/neco/JordanJ94}.
\citet{DBLP:conf/iclr/ShazeerMMDLHD17} adopted this method into modern deep learning architectures (LSTM;~\citealt{DBLP:journals/neco/HochreiterS97}) and proved its effectiveness in Language Modeling and Machine Translation. The MoE was used to substitute the FFN layers in Transformer architecture~\citep{DBLP:conf/nips/VaswaniSPUJGKP17} by the Mesh Tensorflow library~\citep{DBLP:conf/nips/ShazeerCPTVKHLH18}. Gshard~\citep{DBLP:conf/iclr/LepikhinLXCFHKS21} is a lightweight module that helps scale up multilingual neural machine translation Transformer with a Sparsely-Gated Mixture of Experts beyond 600 billion parameters. In Switch Transformer~\citep{DBLP:journals/corr/abs-2101-03961}, the authors scaled the MoE-integrated Transformer architecture toward trillion parameter models. GLaM~\citep{DBLP:journals/corr/abs-2112-06905} utilized a decoder-only architecture to do language model pre-training. \citet{DBLP:journals/corr/abs-2201-05596} proposed a Pyramid-Residual-MoE for smaller model size and fast inference.
Various routing strategies~\citep{DBLP:conf/iclr/ShazeerMMDLHD17,DBLP:journals/corr/abs-2110-08246,DBLP:conf/icml/LewisBDGZ21,DBLP:journals/corr/abs-2112-14397} have been investigated for stabilizing MoE training and balancing the expert loads. \citet{DBLP:journals/corr/abs-2204-09179} pointed out the representation collapse issue in sparse Mixture of Experts models and solved it with a two-stage routing strategy.
\paragraph{Machine Translation Architectures}
With the original Transformer architecture~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}, \citet{ott-etal-2018-scaling} found that training with reduced precision and large batches improves translation performance.
Some models achieve better translation performance by using larger-scale Transformers.
\citet{DBLP:conf/emnlp/LiuLGCH20} deepened the encoder and decoder of the Transformer by adequately initializing the model.
DeepNet~\citep{DBLP:journals/corr/abs-2203-00555} scaled Transformers up to 1,000 layers by introducing a new normalization function. However, these methods require a great amount of computational cost.
Some models make changes to the self-attention module. \citet{DBLP:conf/acl/PengSLS20} proposed the MAE model, in which a reallocation of attention heads improves translation by pruning useless attention heads. However, their method is difficult to scale up for further improvements because it uses all the attention heads in the model rather than sparsely activating them, and it requires complicated block coordinate descent training steps. \citet{wu2018pay} proposed DynamicConv and LightConv, which replace the self-attention mechanism with a lightweight convolution.
\paragraph{Specialization of Attention Heads}
Since the publication of Transformer architecture~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}, many researchers have been interested in analyzing how the attention mechanism works. ~\citet{DBLP:conf/acl/VoitaTMST19} systematically analyzed the attention heads in the encoder and categorized them into three functional subsets: positional, syntactic, and rare words. When dealing with dependency parsing, researchers also observed the same phenomenon that different heads could capture different syntactic functions~\citep{DBLP:journals/corr/abs-1911-12246,shen2020unsupervised}.
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.78\linewidth]{Figs/MoA.pdf}
\caption{Mixture of Attention Heads (MoA) architecture. MoA contains two mixtures of experts. One is for query projection, the other is for output projection. These two mixture of experts select the same indices of experts. One routing network calculates the probabilities for each selected experts. The output of the MoA is the weighted sum of the outputs of each selected experts.}
\label{fig:moa}
\end{figure*}
\section{Preliminaries}
\subsection{Mixture of Experts}
MoE~\citep{DBLP:conf/iclr/ShazeerMMDLHD17} contains a set of expert networks $E_1, E_2, \dots, E_N$ and a routing network $G$.
The output of the MoE is the weighted sum of the output of each expert. The routing network calculates the probability for each expert. Formally, the output of the MoE can be written as:
\begin{equation}
\label{eq:moe}
y = \sum_{i=1}^N G(x)_i E_i(x)
\end{equation}
The routing network $G$ is a Noisy Top-k Routing network. Before the softmax function, Gaussian noise is added to the gating logits (Equation~\ref{eq:gate}). Then, only the top $k$ values are kept, and the remaining logits are masked out so that their gate values equal 0 after the softmax (Equation~\ref{eq:noisytopk}).
\begin{equation}
\label{eq:noisytopk}
G(x) = \operatorname{Softmax}(\operatorname{TopK}(H(x), k))
\end{equation}
\begin{align}
\label{eq:gate}
H(x)_i =& (x\cdot W_g)_i + \mathcal{N}(0,1)\cdot\\ \notag
&\operatorname{Softplus}((x\cdot W_{noise})_i)
\end{align}
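A minimal NumPy sketch of this noisy top-$k$ gate (illustrative only; restricting the softmax to the top-$k$ logits is equivalent to masking the rest to $-\infty$ before the softmax):

```python
import numpy as np

def noisy_topk_gate(x, W_g, W_noise, k, rng):
    """Noisy top-k gating: add noise scaled by Softplus(x W_noise) to the logits,
    then apply a softmax restricted to the k largest noisy logits."""
    softplus = lambda z: np.log1p(np.exp(z))
    h = x @ W_g + rng.standard_normal(W_g.shape[1]) * softplus(x @ W_noise)
    gates = np.zeros_like(h)
    top = np.argsort(h)[-k:]              # indices of the k largest noisy logits
    e = np.exp(h[top] - h[top].max())
    gates[top] = e / e.sum()              # softmax over the kept logits only
    return gates
```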
\subsection{Multi-head Attention}
\citet{DBLP:conf/nips/VaswaniSPUJGKP17} proposed an encoder-decoder architecture Transformer, which contains the multi-head attention module. Different heads from the multi-head attention module attend to information from different representation subspaces, which learn the input from various perspectives.
Performing multi-head attention with $k$ heads, the $Q,K,V$ are linearly projected $k$ times with different, learned linear projections to subspaces. On each projected $Q$ and $K$, the attention scores are calculated, via Equation~\ref{eq:multiatt}. Values deriving from different heads are projected back to the model dimension size and summed up, with Equation~\ref{eq:attn}.
\begin{equation}
\label{eq:multiatt}
W^{\rm att}_i = \text{Softmax} \left(\frac{QW_i^q(KW_i^{k})^T}{\sqrt{d_k}} \right)
\end{equation}
\begin{equation}
\label{eq:attn}
y = \sum_{i=1}^k\left(W^{\rm att}_iVW_i^v\right)W_i^o
\end{equation}
where $W_i^q,W_i^k,W_i^v \in \mathbb{R}^{d_m\times d_h}$ and $W_i^o \in \mathbb{R}^{d_h\times d_m}$, $d_k$ is the dimension of the key $K$.
\section{Mixture of Attention Heads}
\label{sec:moa}
In this work, we propose a variant of multi-head attention for Transformer called Mixture of Attention Heads (MoA), illustrated in Figure~\ref{fig:moa}.
MoA consists of two major components, the routing network $G$ and a group of $N$ attention experts $\left\{ E_1, ..., E_N \right\}$.
Similar to standard multi-head self-attention, the input of MoA includes three sequences, query sequence $Q$, key sequence $K$, and value sequence $V$.
We note $q_t$ as the query vector at time step $t$.
For each $q_t$, the routing network $G$ selects a subset of $k$ experts $G(q_t) \subseteq \left\{ E_i \right\}$ based on $q_t$ and assigns a weight $w_i$ to each selected expert.
Then, these selected experts take $q_t$, $K$, and $V$ as inputs and compute an output $E_i(q_t,K,V)$.
The output of the MoA is the weighted sum of the selected experts' outputs.
Formally, the MoA output at time step $t$ can be written as:
\begin{equation}
y_t = \sum_{i \in G(q_t)} w_{i,t}\cdot E_i(q_t,K,V)
\end{equation}
\subsection{Routing Network}
Similar to previous mixture-of-experts methods, the routing network assigns attention experts to each input query.
In order to select $k$ experts for query $q_t$, we compute a routing probability $p_i$ for each expert $E_i$.
The routing probability is modeled with a linear layer $W_g$ and a softmax function:
\begin{equation}
p_{i,t} = \operatorname{Softmax}_i(q_t \cdot W_g)
\end{equation}
Based on the routing probability $p$, we select the top-$k$ attention experts among all $N$ attention experts with the largest probabilities.
Formally, the routing network is defined as:
\begin{equation}
\label{eq:topk}
G(q_t)=\operatorname{TopK}\left(\{p_{i,t}\}_{i=1}^N, k\right)
\end{equation}
where $W_g\in \mathbb{R}^{d_m \times N}$, representing the routing matrix.
Then, we renormalize the routing probability of the selected experts to get normalized expert weights:
\begin{equation}
w_{i,t} = \frac{p_{i,t}}{\operatorname{Detach} \left( \sum_{j \in G(q_t)} p_{j,t} \right)}
\end{equation}
where $\operatorname{Detach}(\cdot)$ is a function that stops the gradient backpropagation.
In other words, the denominator receives zero gradient during the training process.
We empirically find that this trick helps the routing network learn better routing probability.
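The routing step can be sketched as follows (a forward-only NumPy toy with our own names; in real training code the denominator would be detached, e.g. with PyTorch's `.detach()`, so that it receives no gradient):

```python
import numpy as np

def route(q_t, W_g, k):
    """Top-k routing for one query: softmax probabilities, top-k selection,
    and renormalized expert weights. Forward pass only; here the 'detached'
    denominator is just a plain sum."""
    logits = q_t @ W_g
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # routing probabilities p_{i,t}
    selected = np.argsort(p)[-k:]         # G(q_t): indices of the top-k experts
    w = p[selected] / p[selected].sum()   # renormalized expert weights w_{i,t}
    return selected, w
```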
\subsection{Attention Expert}
An attention expert contains four different projection matrices, $W^q$, $W^k$, $W^v$ and $W^o$.
The attention calculation is similar to multi-head attention.
We first compute the attention weight for keys.
\begin{equation}
\label{eq:att}
W^{\rm att}_{i,t} = \text{Softmax}\left(\frac{q_t W_i^{q}(KW^k)^T}{\sqrt{d_h}}\right)
\end{equation}
where $W_i^q \in \mathbb{R}^{d_m\times d_h}$ is the query projection matrix, $W^k \in \mathbb{R}^{d_m\times d_h}$ is the key projection matrix, $d_m$ is the hidden state size, $d_h$ is named as head dimension.
We then compute the weighted sum of values:
\begin{equation}
o_{i,t} = W^{\rm att}_{i,t} VW^v
\end{equation}
where $W^v \in \mathbb{R}^{d_m\times d_h}$ is the value projection matrix.
Finally, the attention output is obtained by projecting $o_{i,t}$ back to the hidden state space:
\begin{equation}
\label{eq:moa}
E_i \left( q_t,K,V \right) = o_{i,t} W_i^o
\end{equation}
where $W_i^o \in \mathbb{R}^{d_h\times d_m}$ is the output projection matrix.
In the multi-head attention, the projection matrices $W^q$, $W^k$, $W^v$, and $W^o$ are all different across attention heads.
The MoA shares $W^k$ and $W^v$ across attention experts to reduce the computational complexity.
Attention experts are only differentiated by $W^q_i$ and $W^o_i$.
Thus, the expensive matrix projection of key sequence $KW^k$ and value sequence $VW^v$ can be pre-computed and shared for all attention experts.
Each expert only needs to compute the vector projections of the query, $q_t W_i^q$, and of the output, $o_{i,t} W_i^o$.
This design significantly reduces the computational and space complexity when the number of experts is large.
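Putting the pieces together, a forward pass of one MoA layer can be sketched as follows (an unbatched NumPy toy with our own notation; $W^k$ and $W^v$ are projected once and shared across experts):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moa_forward(Q, K, V, Wq, Wo, Wk, Wv, Wg, k):
    """One MoA layer on a single unbatched sequence.
    Wq: (E, d_m, d_h) and Wo: (E, d_h, d_m) are per-expert;
    Wk, Wv: (d_m, d_h) are shared across experts."""
    Kp, Vp = K @ Wk, V @ Wv                   # shared projections, computed once
    d_h = Kp.shape[-1]
    out = np.zeros_like(Q)
    for t, q_t in enumerate(Q):
        p = softmax(q_t @ Wg)                 # routing probabilities
        sel = np.argsort(p)[-k:]              # top-k experts for this token
        w = p[sel] / p[sel].sum()             # renormalized expert weights
        for w_i, i in zip(w, sel):
            att = softmax((q_t @ Wq[i]) @ Kp.T / np.sqrt(d_h))
            out[t] += w_i * ((att @ Vp) @ Wo[i])
    return out
```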
\subsection{Training Losses}
Previous work~\citep{DBLP:conf/iclr/ShazeerMMDLHD17} has observed that the routing network tends to converge to a state where it always produces large weights for the same few experts, which indicates the insufficient utility of all the experts.
Following \citet{DBLP:conf/iclr/ShazeerMMDLHD17} and \citet{DBLP:journals/corr/abs-2101-03961}, we add an auxiliary loss to balance the loads of different experts.
Given $N$ experts and a sequence with $T$ queries $Q=\{q_1,q_2,\dots ,q_T\}$, the auxiliary loss $L_a$ can be computed as:
\begin{equation}
L_a(Q) = N\cdot \sum_{i=1}^N f_i \cdot P_i
\end{equation}
where $f_i$ is the number of tokens attributed to the $i$-th expert,
\begin{equation}
\label{eq:fi}
f_i = \sum_{t=1}^T \delta_{i \in G(q_t)}
\end{equation}
where $\delta_{i \in G(q_t)}$ is an indicator that equals 1 if expert $i$ is selected for query $q_t$ and 0 otherwise.
$P_i$ is the sum of router probability allocated for the $i$-th expert,
\begin{equation}
\label{eq:pi}
P_i = \sum_{t=1}^T p_{i,t}
\end{equation}
Both $f_i$ and $P_i$ are then normalized to sum to 1 over the expert axis.
Mathematically, $f_i$ is non-differentiable while $P_i$ is differentiable, so the gradient of $L_a$ flows through $P_i$: a larger $f_i$ yields a larger derivative, which pushes the corresponding $P_i$ down. Moreover, since $P_i$ is computed by a softmax, decreasing the larger $P_i$ increases the smaller ones.
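A minimal sketch of this auxiliary loss (our own, illustrative; for perfectly uniform routing it evaluates to 1):

```python
import numpy as np

def load_balance_loss(p, topk_idx):
    """L_a = N * sum_i f_i * P_i, with f and P normalized over the expert axis.
    p: (T, N) routing probabilities; topk_idx: (T, k) selected expert indices
    (indices within a row are distinct, as produced by a top-k selection)."""
    T, N = p.shape
    f = np.zeros(N)
    for sel in topk_idx:
        f[sel] += 1                       # tokens attributed to each expert
    P = p.sum(axis=0)                     # router probability mass per expert
    f, P = f / f.sum(), P / P.sum()
    return N * float(f @ P)
```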
\citet{DBLP:journals/corr/abs-2202-08906} introduced a router z-loss (Equation \ref{eq:zloss}) to penalize large logits into the gating network, which could stabilize the training and improve the performance.
\begin{equation}
\label{eq:zloss}
L_{z}(x)=\frac{1}{T} \sum_{t=1}^{T}\left(\log \sum_{i=1}^{N} e^{x_{i,t}}\right)^{2}
\end{equation}
where $x_{i,t}$ is the pre-softmax logit computed by router for $i$-th expert and input query $q_t$.
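The z-loss is a one-liner over the router logits (a sketch with our own names; logits arranged as a $T \times N$ array):

```python
import numpy as np

def router_z_loss(logits):
    """L_z = (1/T) * sum_t (log sum_i exp(x_{i,t}))^2 over a (T, N) logit array."""
    lse = np.log(np.exp(logits).sum(axis=1))   # logsumexp over the expert axis
    return float((lse ** 2).mean())
```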
Each Mixture of Attention Heads module has an auxiliary loss and a router z-loss. We sum them over all MoA modules and add them to the total model loss during training, with multiplicative coefficients $\alpha$ and $\beta$ respectively. Throughout this work, we use $\alpha = 0.01$ and $\beta=0.001$, so that the two added losses are effective without disturbing the primary cross-entropy loss.
\begin{equation}
L = L_{\text{model}} + \sum_{\forall \ \text{MoA module}} (\alpha L_a + \beta L_z)
\end{equation}
To validate the utility of these auxiliary losses, we conducted ablation tests and the results are shown in Appendix~\ref{sec:losses}.
\subsection{Computational Complexity and Number of Parameters}
On the one hand, given a sequence with $T$ tokens, the amount of computation required by an MoA layer that selects top-$k$ experts is
\begin{equation}
C_{MoA} = kT^2d_h + 2(k+1)Td_hd_m
\end{equation}
where $k d_h$ is the total head dimension of the selected experts.
It represents the maximum amount of information that can be collected by an MoA layer for a token.
On the other hand, the amount of computation required by a standard Multi-Head Attention (MHA) is
\begin{equation}
C_{MHA} = T^2d_m + 4Td_m^2
\end{equation}
where $d_m$ is the total head dimension.
If $kd_h\simeq d_m$, the computational complexity of MoA is smaller than that of MHA.
In other words, the MoA could collect more information for each token while maintaining a similar level of computational complexity as the MHA.
As for the number of parameters, given a Mixture of Attention Heads with $E$ attention experts, the number of parameters in MoA and MHA are:
\begin{equation}
M_{MoA} = (2E+2)d_hd_m, \quad M_{MHA} = 4d_m^2
\end{equation}
When $k = E$ and $Ed_h\simeq d_m$, the number of parameters in MoA is smaller than that of MHA.
In other words, MoA could collect more information for each token while maintaining a similar number of parameters as MHA.
More details of the calculation are in Appendix~\ref{sec:complex}.
The above discussion suggests that, from an information collection point of view, MoA is more computation- and parameter-efficient than the standard MHA.
Our experimental results in Section~\ref{sec:results} also empirically support the hypothesis.
Additionally, the time complexity of MoA is determined by the number of selected attention heads $k$ and the attention head dimension $d_h$, not by the model's total parameter count.
One could arbitrarily increase the number of parameters in MoA without increasing its computational complexity.
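The two MAC counts above can be compared directly with a small calculation (ours, for illustration; it simply evaluates the two formulas):

```python
def macs_moa(T: int, k: int, d_h: int, d_m: int) -> int:
    """Per-layer multiply-accumulates for an MoA layer selecting top-k experts:
    C_MoA = k*T^2*d_h + 2*(k+1)*T*d_h*d_m."""
    return k * T**2 * d_h + 2 * (k + 1) * T * d_h * d_m

def macs_mha(T: int, d_m: int) -> int:
    """Per-layer multiply-accumulates for standard multi-head attention:
    C_MHA = T^2*d_m + 4*T*d_m^2."""
    return T**2 * d_m + 4 * T * d_m**2
```

For instance, with $T=10$, $d_m=512$, $k=8$, and $d_h=64$ (so that $kd_h = d_m$), `macs_moa` gives 5,949,440 versus 10,536,960 for `macs_mha`, consistent with the claim that MoA is cheaper when $kd_h\simeq d_m$.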
\section{Experiments}
\label{sec:results}
\subsection{Machine Translation}
\paragraph{Dataset}
We train our Mixture of Attention model on WMT 2014 English-German and English-French datasets~\citep{DBLP:conf/wmt/BojarBFHKLMPPSS14}.
Following the experimental settings used in \citet{liu2020very}, all sentences were encoded using byte-pair encoding~\citep{sennrich-etal-2016-neural}.
For both tasks, we use a joined dictionary and share all word embeddings of the encoder and the decoder.
For English-German, their shared vocabulary size is set to be 32k.
For English-French, their shared vocabulary size is set to be 40k.
\begin{table*}[htbp]
\centering
\small
\begin{tabular}{rccccc}
\toprule
\multirow{2}{*}{Model} &
\multicolumn{2}{c}{WMT14 EnDe} &
\multicolumn{2}{c}{WMT14 EnFr} &
\multirow{2}{*}{MACs\footnotemark[3]}\\
& \#Params & BLEU & \#Params & BLEU \\
\midrule
Transformer base~\citep{DBLP:conf/nips/VaswaniSPUJGKP17} & 65M & 27.3 & 62M & 38.1 & 604M \\
Admin 6L-6L~\citep{liu2020very} & 61M & 27.7 & 67M & 41.5 & 604M \\
MAE-7~\citep{DBLP:conf/acl/PengSLS20} & 63M & 28.4 & - & - & -\\ \midrule
MoA Base ($8K8E128D$) & 65M & 28.4 & 69M & 42.5 & 628M \\ \midrule
Transformer big~\citep{DBLP:conf/nips/VaswaniSPUJGKP17} & 213M & 28.4 & 210M & 41.8 & 2090M\\
Transformer big~\citep{ott-etal-2018-scaling} & 210M & 29.3 & 222M & 43.2 & 2090M \\
LightConv~\citep{wu2018pay} & 202M & 28.9 & - & 43.1 & 1750M\footnotemark[4]\\
DynamicConv~\citep{wu2018pay} & 213M & 29.7 & - & 43.2 & 1790M\footnotemark[4] \\
Admin 18L-18L~\citep{liu2020very} & 151M & 29.0 & - & - & 1490M \\
Admin 60L-12L~\citep{liu2020very} & 256M & 30.1 & 262M & 43.8 & 2550M \\
\midrule
MoA Big ($16K32E256D$) & 200M & 29.4 & 204M & 43.7 & 1220M \\ \bottomrule
\end{tabular}
\caption{BLEU score on WMT14 translation datasets.
MACs (Multiply–Accumulate Operations)\footnotemark[3] measures the computational complexity of each model.
For different models, their MACs are computed on a source sentence of length $T_{src}=10$ and a target sentence of length $T_{tgt}=10$.
}
\label{tab:bleu}
\end{table*}
\paragraph{Training and Evaluation Details}
We use the Adam Optimizer~\citep{DBLP:journals/corr/KingmaB14} with a learning rate of $7\times10^{-4}$ and the inverse square root learning rate scheduler.
During training, we employed label smoothing~\citep{DBLP:conf/cvpr/SzegedyVISW16} of value 0.1.
More training details can be found in Appendix~\ref{app:training}.
For the evaluation, we average the last 10 epochs' checkpoints.
We list BLEU score~\citep{DBLP:conf/acl/PapineniRWZ02} computed with \textsc{multi-bleu.perl}, and apply the compound split post-processing\footnote{\url{https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/get_ende_bleu.sh}} introduced in \citet{DBLP:conf/nips/VaswaniSPUJGKP17}.
We use MACs (Multiply–Accumulate Operations)\footnotemark[3] to evaluate the computational complexity of different models on a fixed input.
Details of the MACs calculation are in Appendix~\ref{app:macs}.
\footnotetext[3]{We adopt the open-source tool \textsc{ptflops} (\url{https://github.com/sovrasov/flops-counter.pytorch}) to calculate the MACs.}
\footnotetext[4]{These MACs values are underestimated.
Because the \textsc{ptflops} does not support the customized convolution layers in DynamicConv and LightConv. }
\paragraph{Baselines}
We compare with several strong baselines: Transformer base and big~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}; Transformer big~\citep{ott-etal-2018-scaling} with reduced precision and large-batch training; LightConv and DynamicConv~\citep{wu2018pay}, which replace the self-attention mechanism with lightweight and dynamic convolutions; MAE-7~\citep{DBLP:conf/acl/PengSLS20}, which reallocates attention heads; and Admin~\citep{DBLP:conf/emnlp/LiuLGCH20}, which deepens the Transformer architecture.
For our model, three parameters differentiate its variants: the number of activated attention heads per token ($K$), the total number of experts ($E$), and the attention expert dimension ($D$).
For example, our MoA base model is denoted $8K8E128D$, because it has 8 attention experts, 128 dimensions per expert, and all 8 experts are activated for each token.
Our MoA big model is $16K32E256D$ as it has 32 attention experts and sparsely activates the top 16 experts for each token.
\paragraph{Results}
The results on the test sets of the WMT14 EnDe and WMT14 EnFr datasets are shown in Table~\ref{tab:bleu}. The table is split into two parts: the upper part is for base models and the lower part for big models.
On all datasets, MoA base outperforms Transformer base and Admin 6L-6L by at least 0.6 BLEU.
On the WMT14 EnFr dataset, MoA base also outperforms Transformer big.
On the WMT14 EnDe dataset, MoA base reaches comparable results with the Mixture of Attention Experts model (MAE-7), which is the state-of-the-art performance for base-level models. MACs of MAE-7 and our model are comparable in the setting of 8 attention heads.
While both models leverage the idea of weighting the attention heads, MoA is easier to implement and does not require the complicated block coordinate descent training steps.
Compared to standard multi-head self-attention, the routing mechanism pays more attention to the more informative attention heads for each token, thus enabling the MoA base model to achieve better computation and parameter efficiency.
In the big-scale setting, MoA big consistently outperforms standard transformer big models, despite requiring significantly less computation.
Compared to the models with more parameters, MoA is still very competitive.
Only Admin 60L-12L outperforms MoA big on both datasets.
However, that model has more parameters and requires about twice the MACs.
The MACs of MoA big is 1220M, which is the lowest amount among big-scale models.
This result shows that our proposed method can easily scale up to a large number of parameters and achieve good results without substantially burdening the computation system.
\begin{table}[t]
\centering
\small
\begin{tabular}{ccccc}
\toprule
\multicolumn{2}{c}{Model} & \#Params & PPL & MACs \\
\midrule
\multicolumn{2}{c}{Transformer} & 51.34M & 4.95 & 6.55G \\
\midrule
\multirow{5}{*}{MoA} & $8K8E128D$ & 52.45M & 4.82 & 7.27G \\
& $8K8E256D$ & 61.89M & 4.64 & 8.97G \\
& $8K16E256D$ & 78.75M & 4.48 & 8.97G \\
& $8K32E256D$ & 112.47M & 4.25 & 8.98G \\
& $8K64E256D$ & 179.91M & 4.21 & 8.98G\\
\bottomrule
\end{tabular}
\caption{Perplexity on wikitext-103 corpus test data for masked language modeling.
MACs are computed on an input sequence of length $T=128$.}
\label{tab:ppl}
\end{table}
\begin{table*}[htbp]
\small
\centering
\begin{tabular}{rccccccc}
\toprule
MoA Model & $K$ & $E$ & $D$ & \#Params & PPL(Valid) & BLEU(Test) & MACs \\
\midrule
Base & 8 & 8 & 128 & 65M & 4.68 & 28.4 & 628M \\
\midrule
(A) & 8 & 8 & 256 & 87M & 4.51 & 28.7 & 841M \\
(B) & 8 & 16 & 256 & 125M & 4.45 & 28.4 & 841M \\
(C) & 8 & 32& 64 & 83M & 4.79 & 27.9 & 524M \\
(D) & 8 & 32& 128 & 123M & 4.55 & 28.4 & 631M \\
(E) & 8 & 32& 256 & 200M & 4.44 & 28.8 & 841M \\
(F) & 4 & 32& 256 & 200M & 4.65 & 27.5 & 654M \\
\midrule
Big & 16 & 32& 256 & 200M & 4.35 & 29.4 & 1220M \\
\bottomrule
\end{tabular}
\caption{BLEU score of different MoA models on the WMT14 EnDe Dataset.
}
\label{tab:moaarch}
\end{table*}
\subsection{Masked Language Modeling}
Masked Language Modeling is the standard training objective for many Pretrained Language Models (PLMs), including BERT~\citep{DBLP:conf/naacl/DevlinCLT19} and RoBERTa~\citep{DBLP:journals/corr/abs-1907-11692}.
The task replaces a random sample of tokens in the input sequence with a special token \texttt{[MASK]}.
The training objective is a cross-entropy loss on predicting the masked tokens.
To better mimic the procedure of training PLMs, we adopt the setting introduced in RoBERTa~\citep{DBLP:journals/corr/abs-1907-11692} to conduct the masked language modeling experiment.
\paragraph{Dataset}
We conducted the masked language modeling on the wikitext-103 dataset~\citep{merity2016pointer}.
The corpus includes over 100 million tokens collected from verified Good and Featured articles on English Wikipedia.
Following the settings in \citet{merity2016pointer}, the training/validation/test set has 103M/218K/246K words.
The corpus is tokenized with the 50K subword vocabulary used in RoBERTa and initially introduced in GPT~\citep{radford2019language}.
\paragraph{Settings}
We train the model with the dynamic masking strategy and the full-sentences input format.
To avoid overfitting on the training corpus, we adopt a medium-size RoBERTa model as the base model, with 512-dim word embedding, 2048-dim feed-forward network, 8 heads, and 8 layers.
Training details can be found in Appendix~\ref{app:training}.
The perplexity is used as the evaluation metric.
\paragraph{Results}
Table~\ref{tab:ppl} shows the perplexity on WikiText-103 test data.
While using a similar amount of parameters, MoA outperforms the standard transformer model by 0.13 perplexity.
Furthermore, the performance keeps improving as the number of experts $E$ and the head size $D$ increase; notably, once $D$ is fixed, growing $E$ improves the perplexity while the number of selected heads $K$ and the computational complexity remain unchanged.
This observation shows that our model can improve performance while maintaining the same computational complexity.
\subsection{Model Analysis}
\paragraph{MoA parameter influence}
We study the influence of three parameters, $K$, $E$, and $D$, on the WMT14 En-De Dataset. The results are shown in Table~\ref{tab:moaarch}.
For the expert dimension $D$, we fix $K=8$ and $E=32$ and vary $D$ over 64, 128, and 256. As the expert dimension $D$ increases (rows C, D, E in Table~\ref{tab:moaarch}), both the PPL on the validation set and the BLEU score on the test set improve. This improvement is due to the increase in parameters: with a larger expert dimension, each expert has more parameters and the computational cost increases. We believe this increase in computational cost is acceptable. As shown in Table~\ref{tab:bleu}, the Transformer big model requires 2090M MACs to reach a BLEU of 28.4, whereas, by enlarging the expert hidden size, MoA reaches a BLEU of 28.8 with only 841M MACs ($\ll$2090M).
For the number of attention experts $E$, we fix $K=8$ and $D=256$ and select three values of $E$: 8, 16, and 32. As the number of experts grows, the PPL on the validation set goes down, indicating our model's ability to keep scaling up. The BLEU score on the test set does not track the PPL, possibly because the training objective is not directly linked to the BLEU score calculation.
However, we still observe that 32 experts can achieve better BLEU than the other two settings.
As the number of selected attention heads $K$ remains unchanged, the MACs for these three settings are the same.
Thus, MoA allows us to improve the model ability by adding more parameters without changing the computational complexity.
For the number of selected attention heads $K$, we test $K \in \{4, 8, 16\}$, fixing $E=32$ and $D=256$. As the number of selected attention heads increases, the PPL on the validation set decreases and the BLEU score on the test set goes up. Since the number of attention experts remains the same, the model's total parameter count stays at 200M.
This result shows the trade-off between computation efficiency and performance. The model needs more computations for better performance as the MACs vary from 654M to 1220M.
\paragraph{MoA Expert loads}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{EMNLP 2022/Figs/percent4.pdf}
\caption{Experts' load percentages for encoder layer 4.
Experts are indexed by their order of percentages.
}
\label{fig:percent4}
\end{figure}
Load balancing is a long-standing problem of MoE models~\citep{DBLP:journals/corr/abs-2101-03961}.
Figure~\ref{fig:percent4} shows the experts' load of the fourth layer of the MoA big model.
It plots the percentage of each expert used in the development set of WMT14 EN-DE.
For each input token, MoA big selects 16 attention heads among 32 attention experts.
This figure shows a relatively balanced load: the most used expert is selected by 5\% of the tokens, the least used by 1\%, and most experts' loads are between 2\% and 4\%.
This observation suggests that the input tokens are distributed fairly evenly among the attention experts.
Experts are assigned to different roles with substantially different groups of input tokens.
The load for every expert of different layers is shown in Appendix~\ref{sec:percent}.
\paragraph{MoA Interpretability: Specialize the Experts}
\begin{table}[t]
\centering
\small
\begin{tabular}{ccccccc} \toprule
Expert 5 &Expert 29 & Expert 10 \\ \midrule
\textbf{Tech.} & \textbf{adv.} & \textbf{Location} \\ \midrule
DSL & likely & Costasur \\
JPEG &tastefully & Kaliningrad\\
Module & environmentally & Freiburg \\
DSLR & heavily & Brava \\
screen & certainly & Jesolo \\ \midrule
Expert 7 & Expert 24 & Expert 23 \\ \midrule
\textbf{Computer} & \textbf{Reaction} & \textbf{Name} \\ \midrule
Outlook & supportive & Marx\\
Excel & misunderstanding & Jarzembowski \\
IO & advocating & Reding\\
emails & confirming & Donald\\
monitors & excitement &Socrates \\
\bottomrule
\end{tabular}
\caption{Indicative tokens of each expert for the first encoder layer of MoA}
\label{tab:indicative}
\end{table}
We study in this section whether the different experts possess different ``expertises''.
We try to find the most likely tokens to co-occur with each expert.
We compute the pointwise mutual information (PMI;~\citealt{DBLP:journals/coling/ChurchH90}) between tokens and experts:
\begin{equation*}
\operatorname{PMI}({\rm token}_i, {\rm expert}_j) = \log \frac{p({\rm token}_i, {\rm expert}_j)}{p({\rm token}_i)\cdot p({\rm expert}_j)} \text{.}
\end{equation*}
For each expert, the higher the PMI, the more strongly the token is associated with this expert.
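This estimation reduces to counting (token, expert) routing co-occurrences; a minimal sketch (our own, with a hypothetical `pmi_table` helper) is:

```python
import math
from collections import Counter

def pmi_table(token_expert_pairs):
    """Pointwise mutual information between tokens and experts, estimated
    from a list of observed (token, expert) routing decisions."""
    joint = Counter(token_expert_pairs)          # co-occurrence counts
    tok = Counter(t for t, _ in token_expert_pairs)
    exp = Counter(e for _, e in token_expert_pairs)
    n = len(token_expert_pairs)
    return {(t, e): math.log((c / n) / ((tok[t] / n) * (exp[e] / n)))
            for (t, e), c in joint.items()}
```

Sorting each expert's entries by PMI then yields its most indicative tokens, as reported in Table~\ref{tab:indicative}.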
Table~\ref{tab:indicative} lists the most indicative tokens of each expert for the first encoder layer of $16K32E512D$.
Many experts are associated with nouns on the same topic, e.g., Location, Name, and Tech.
We also found that some other experts are associated with adjectives and adverbs.
For example, Expert 29 is related to adverbs, and Expert 24 is connected to people's reactions, where some tokens are adjectives.
We also study the relation between expert and input tokens for other layers of the encoder, but it is hard to find clear patterns in other layers.
\section{Conclusion}
This work introduces the Mixture of Attention Heads (MoA).
MoA contains a set of attention experts and a routing network.
Given an input, the MoA attributes a probability to each expert by the routing network and selects the top-K experts.
The output of MoA is a weighted sum of the selected attention experts.
The weighting mechanism allows different tokens to focus on different experts, thus improving the model's parameter and computation efficiency.
Experimental results show that a base-scale MoA model could achieve comparable or better performance than a transformer big model.
Furthermore, MoA could improve its performance by adding more attention experts while maintaining a relatively small computational complexity.
In this way, MoA can achieve comparable performance with deeper and computationally more expensive models.
The interpretability analysis shows that different attention experts tend to specialize in a specific type of input tokens.
\section*{Limitations}
In this work, we scale MoA up to at most 64 experts. However, works that combine mixture of experts with FFN layers have expanded the number of experts to thousands. In the future, we will explore the limits of MoA's ability to scale up.
Our implementation of MoA is not fully optimized: it does not fully exploit the parallel computing capability of GPUs and spends some extra time on memory copy operations.
Although the computational complexity (MACs) of MoA is relatively low compared to other baselines, the running time of our implementation is not optimal. In the future, if we optimize the implementation at the CUDA kernel level to remove the memory copy operations, we expect to at least halve the wall-clock time. This would make an MoA block as fast as a standard attention block.
Similar to Transformer architecture, MoA needs a careful hyperparameter search to reach satisfying results.
\section*{Acknowledgments}
This work is supported in part by the State Key Laboratory of the Software Development Environment of China under Grant SKLSDE-2021ZX-16.
Selective rationalization \citep{lei2016rationalizing, bastings2019interpretable, swanson2020rationalizing} is a powerful explainability method, in which we construct models (\textit{rationalizers}) that produce an explanation or \textit{rationale} (e.g: text highlights or alignments; \citealt{zaidan-eisner-piatko:2007:disc}) along with the decision.
One, if not the main, drawback of rationalizers is that it is difficult to train the generator and the predictor jointly under instance-level supervision \citep{jain2020learning}. Hard attention mechanisms that stochastically sample rationales employ regularization to encourage sparsity and contiguity, and make it necessary to estimate gradients using the score function estimator (SFE), also known as REINFORCE \citep{Williams1992SimpleSG}, or reparameterized gradients \citep{kingma2014autoencoding, jang2017categorical}. Both of these factors substantially complicate training by requiring sophisticated hyperparameter tuning and lead to brittle and fragile models that exhibit high variance over multiple runs. Other works use strategies such as top-$k$ to map token-level scores to rationales, but also require
gradient estimations to train both modules jointly \citep{paranjape-etal-2020-information, chang2020invariant}. In turn, sparse attention mechanisms \citep{treviso-martins-2020-explanation} are deterministic and have exact gradients, but
lack a direct way to control sparsity and contiguity in the rationale extraction. This raises the question: \textit{how can we build an easy-to-train fully differentiable rationalizer that allows for flexible constrained rationale extraction?}
\begin{table*}[t]
\centering
\footnotesize
\renewcommand\arraystretch{.6}
\begin{tabular}{
>{\arraybackslash}m{3.8cm} >{\arraybackslash}m{2.4cm}
>{\arraybackslash}m{2.6cm}
>{\arraybackslash}m{2.6cm}
>{\arraybackslash}m{2.5cm}}
\toprule
Method & Deterministic Training & Exact \newline Gradients & Constrained \newline Extraction & Encourages \newline Contiguity\\ \midrule
\citet{lei2016rationalizing} & \scriptsize \textcolor{black!70}{\XSolidBrush} & \scriptsize \textcolor{black!70}{\XSolidBrush} & \scriptsize \textcolor{black!70}{\XSolidBrush} & \scriptsize \textcolor{black!100}{\Checkmark} \\
\citet{bastings2019interpretable} & \scriptsize \textcolor{black!70}{\XSolidBrush} & \scriptsize \textcolor{black!70}{\XSolidBrush} & \scriptsize \textcolor{black!100}{\Checkmark} & \scriptsize \textcolor{black!100}{\Checkmark} \\
\citet{treviso-martins-2020-explanation} & \scriptsize \textcolor{black!100}{\Checkmark} & \scriptsize \textcolor{black!100}{\Checkmark} & \scriptsize \textcolor{black!70}{\XSolidBrush} & \scriptsize \textcolor{black!70}{\XSolidBrush} \\
SPECTRA (ours) & \scriptsize \textcolor{black!100}{\Checkmark} & \scriptsize \textcolor{black!100}{\Checkmark} & \scriptsize \textcolor{black!100}{\Checkmark} & \scriptsize \textcolor{black!100}{\Checkmark} \\ \bottomrule
\end{tabular}
\caption{Positioning of our approach in the literature of rationalization for highlights extraction. Our method is an easy-to-train fully differentiable deterministic rationalizer that allows for flexible rationale regularization.}
\label{Tab:highlightspositioning}
\end{table*}
To answer this question, we introduce \textbf{\underline{sp}ars\underline{e} stru\underline{c}tured \underline{t}ext \underline{ra}tionalization} (\textbf{SPECTRA}), which employs LP-SparseMAP \citep{niculae2020lpsparsemap}, a constrained structured prediction algorithm, to provide a deterministic, flexible and modular rationale extraction process. We exploit our method's inherent flexibility to extract highlights and interpretable text matchings with a diverse set of constraints.
Our contributions are:
\begin{itemize}
\item We present a unified framework for deterministic extraction of structured rationales (\S\ref{sec:approach}) such as constrained highlights and matchings;
\item We show how to add constraints on the rationale extraction, and experiment with several structured and hard constraint factors, exhibiting the modularity of our strategy;
\item We conduct a rigorous comparison between deterministic and stochastic rationalizers (\S\ref{sec:experiments}) for both highlights and matchings extraction.
\end{itemize}
Experiments on selective rationalization for sentiment classification and natural language inference (NLI) tasks show that our proposed approach achieves better or competitive performance and similarity with human rationales, while exhibiting less variability and easing rationale regularization when compared to previous approaches.\footnote{Our library for rationalization is available at \href{https://github.com/deep-spin/spectra-rationalization}{\textsf{https://github.com/deep-spin/spectra-rationalization}}.}
\section{Background}
\label{sec:background}
\subsection{Rationalization for Highlights Extraction}
Rationalization models for highlights extraction, also known as \textit{select-predict} or \textit{explain-predict} models \citep{aligningsocial, Zhang_2021}, are based on a cooperative framework between a rationale generator and a predictor: the generator component encodes the input text and extracts a ``rationale'' (e.g., a subset of highlighted words), and the predictor classifies the input conditioned only on the extracted rationale. Typically, this is done by obfuscating the words that are not in the rationale with a binary mask.
\paragraph{Highlights Extraction.} We consider a standard text classification or regression setup, in which we are given an input sequence $\bm{x} \in \mathbb{R}^{D \times L}$, where $D$ is the embedding size and $L$ is the sequence length (number of words), and we want to predict its corresponding label $y \in \mathbb{R}$ for regression or $y \in \{1, \ldots, C\}$ for classification. A generator model, $\mathsf{gen}$, encodes the input text $\bm{x}$ into token-level scores. Then, a rationale $\bm{z}$, e.g. a binary mask over the tokens, is extracted based on these scores. Subsequently, the predictor model makes predictions conditioned only on the rationale $\hat{y} = \mathsf{pred}(\bm{z} \odot \bm{x})$, where $\odot$ denotes the Hadamard (elementwise) product.
\paragraph{End-to-end Training and Testing Procedure.}
While most rationalization methods deterministically select the rationale at test time, there are differences on how these models are \textbf{trained}. For instance, \citet{lei2016rationalizing} and \citet{bastings2019interpretable} use stochastic binary variables (Bernoulli and HardKuma, respectively), and sample the rationale $\bm{z} \sim \mathsf{gen}(\bm{x}) \in \{0,1\}^L$, whereas \citet{treviso-martins-2020-explanation} make a continuous relaxation of these binary variables and define the rationale as a sparse probability distribution over the tokens, $\bm{z} = \mathsf{sparsemax}(\mathsf{gen}(\bm{x}))$ or $\bm{z} = \alpha\text{-}\mathsf{entmax}(\mathsf{gen}(\bm{x}))$. In the latter approach, instead of a binary vector, we have $\bm{z} \in \triangle^{L-1}$, where $\triangle^{L-1}$ is the $L-1$ probability simplex $\triangle^{L-1} := \{\bm{p} \in \mathbb{R}^L: \bm{1}^\top \bm{p} = 1, \bm{p} \geq 0 \}$. Words receiving non-zero probability are considered part of the rationale.
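As a minimal illustration of this relaxation (our own numpy implementation, not the authors' code), sparsemax computes the Euclidean projection of the generator scores onto the simplex; tokens receiving non-zero probability form the rationale:

```python
import numpy as np

def sparsemax(s: np.ndarray) -> np.ndarray:
    """Project a score vector onto the probability simplex
    (Martins & Astudillo, 2016); returns a sparse distribution z."""
    z = np.sort(s)[::-1]                       # scores in decreasing order
    css = np.cumsum(z)
    ks = np.arange(1, len(s) + 1)
    support = 1.0 + ks * z > css               # support condition
    k = ks[support][-1]                        # support size
    tau = (css[k - 1] - 1.0) / k               # threshold
    return np.maximum(s - tau, 0.0)
```

For instance, `sparsemax([1.0, 0.8, 0.1])` gives `[0.6, 0.4, 0.0]`: the third token is dropped from the rationale entirely, unlike with softmax.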
Rationalizers that use hard attention mechanisms or heuristics to extract the rationales are distinctively hard to train end-to-end, as they require marginalization over all possible rationales, which is intractable in practice. Thus, recourse to sampling-based gradient estimations is a necessity, either via REINFORCE-style training, which exhibits high variance \citep{lei2016rationalizing, chang2020invariant}, or via reparameterized gradients \citep{bastings2019interpretable, paranjape-etal-2020-information}. This renders training these models a complex and cumbersome task. These approaches are often brittle and fragile due to their high sensitivity to hyperparameter changes and to sampling variability. On the other hand, existing rationalizers that use sparse attention mechanisms \citep{treviso-martins-2020-explanation} such as sparsemax attention, while being deterministic and end-to-end differentiable, do not have a direct handle to constrain the rationale in terms of sparsity and contiguity. We endow them with these capabilities in this paper, as shown in Table \ref{Tab:highlightspositioning}, where we position our work in the literature for highlights extraction.
\paragraph{Constrained Rationale Extraction.} Existing rationalizers are \emph{extractive}: they select and extract words or word pairs to form the rationale. Since a rationalizer that extracts the whole input would be meaningless as an explainer, they must have a length constraint or a sparsity inducing component. Moreover, rationales are idealized to encourage selection of contiguous words, as there is some evidence that this improves readibility \citep{jain2020learning}. Some works opt to introduce regularization terms placed on the binary mask such as the $\ell_1$ norm and the fused-lasso penalty to encourage sparse and compact rationales \citep{lei2016rationalizing, bastings2019interpretable}. Others use hard constraints through heuristics such as top-$k$, which is not contiguous but sparse, or select a chunk of text with a pre-specified length that corresponds to the highest total score over all possible spans of that length \cite{chang2020invariant, paranjape-etal-2020-information, jain2020learning}. Sparse attention mechanisms can also be used to extract rationales, but since the rationales are constrained to be in the simplex, controlling the number of selected tokens and simultaneously promoting contiguity is non-trivial.
\subsection{Rationalization for Matchings Extraction}
For this task, we consider a natural language inference setup in which classification is made based on two input sentences: a premise $\bm{x}_P \in \mathbb{R}^{D \times L_P}$ and a hypothesis $\bm{x}_H \in \mathbb{R}^{D \times L_H}$, where $L_P$ and $L_H$ are the sequence lengths of the premise and hypothesis, respectively, and $D$ is the embedding size. A generator model ($\mathsf{gen}$) encodes $\bm{x}_P$ and $\bm{x}_H$ separately and then computes pairwise costs between the encoded representations to produce a score matrix $\bm{S} \in \mathbb{R}^{L_P \times L_H}$. The score matrix $\bm{S}$ is then used to compute an alignment matrix $\bm{Z} \in \mathbb{R}^{L_P \times L_H}$, where $z_{ij}=1$ if the $i\textsuperscript{th}$ premise word is aligned to the $j\textsuperscript{th}$ word in the hypothesis. $\bm{Z}$ subsequently acts as a sparse mask to obtain text representations that are aggregated with the original encoded sequences and fed to a predictor to obtain the output predictions.
\subsection{Structured Prediction on Factor Graphs}
\label{background:lpsparsemap}
Finding the highest scored rationale under the constraints described above is a structured prediction problem, which involves searching over a very large and combinatorial space.
We assume that a rationale $\bm{z}$ can be represented as an $L$-dimensional binary vector.
For example, in highlights extraction, $L$ is the number of words in the document and $\bm{z}$ is a binary mask selecting the relevant words; and in the extraction of matchings, $L = L_P\times L_H$ and $\bm{z}$ is a flattened binary vector whose entries indicate if a premise word is aligned to a word in the hypothesis.
We let $\mathcal{Z} \subseteq \{0,1\}^{L}$ be the set of rationales that satisfy the given constraints, and let $\bm{s} = \mathsf{gen}(\bm{x}) \in \mathbb{R}^L$ be a vector of scores.
\paragraph{Factor Graph.} In the sequel, we consider problems that consist of multiple interacting subproblems. \citet{niculae2020lpsparsemap} present structured differentiable layers, which decompose a given problem into simpler subproblems, instantiated as local factors that must agree when overlapped.
Formally, we assume a factor graph $\mathcal{F}$, where each factor $f \in \mathcal{F}$ corresponds to a subset of variables. We denote by $\bm{z}_f = (\bm{z}_i)_{i \in f}$ the vector of variables corresponding to factor $f$.
Each factor has a local score function $h_f(\bm{z}_f)$.
Examples are \textbf{hard constraint factors}, which take the form
\begin{equation}
h_f(\bm{z}_f) = \left\{
\begin{array}{ll}
0 & \text{if $\bm{z}_f \in \mathcal{Z}_f$} \\
-\infty & \text{otherwise},
\end{array}
\right.
\end{equation}
where $\mathcal{Z}_f$ is a polyhedral set imposing hard constraints (see Table \ref{Tab:constraints} for examples); and \textbf{structured factors}, which define more complex functions with structural dependencies on $\bm{z}_f$, such as
\begin{equation}
h_f(\bm{z}_f) = \sum_{i=1}^{L-1} r_{i, i+1} z_{i, i+1},
\end{equation}
where $r_{i, i+1} \in \mathbb{R}$ are edge scores,
which together define a \textbf{sequential factor}.
We require that for any factor the following local subproblem is tractable:
\begin{equation}
\hat{\bm{z}}_f = \arg\max_{\bm{z}_f \in \{0,1\}^{|f|}} \bm{s}_f^\top \bm{z}_f + h_f(\bm{z}_f).
\end{equation}
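As a toy example of such a local oracle (ours; `budget_factor_map` is a hypothetical name), consider a budget-style hard constraint factor allowing at most $B$ active variables: its MAP simply keeps the highest positive scores.

```python
import numpy as np

def budget_factor_map(s: np.ndarray, budget: int) -> np.ndarray:
    """MAP oracle for a budget factor (at most `budget` active variables):
    maximize s^T z over binary z with sum(z) <= budget, i.e., activate
    the highest-scoring entries whose score is positive."""
    z = np.zeros_like(s)
    top = np.argsort(-s)[:budget]              # indices of the top scores
    z[top] = (s[top] > 0).astype(s.dtype)      # keep only positive scores
    return z
```

Factors like this one can be plugged into a larger factor graph as long as each factor exposes such an oracle.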
\paragraph{MAP inference.} The problem of identifying the highest-scoring global structure, known as \textbf{maximum \textit{a posteriori}} (MAP) \textbf{inference}, is written as:
\begin{equation}\label{eq:map}
\hat{\bm{z}} = \arg\max_{\bm{z} \in \{0,1\}^L} \underbrace{\bigl( \bm{s}^\top \bm{z} + \sum_{f \in \mathcal{F}} h_f(\bm{z}_f) \bigr)}_{\mathrm{score}(\bm{z}; \bm{s})}.
\end{equation}
The objective being maximized is the global score function $\mathrm{score}(\bm{z}; \bm{s})$, which combines information coming from all factors. The solution of the MAP problem is a vector $\hat{\bm{z}}$ whose entries are zeros and ones. However, it is often difficult to obtain an exact maximization algorithm for complex structured problems that involve interacting subproblems that impose global agreement constraints.
\paragraph{Gibbs distribution and sampling.} The global score function can be used to define a Gibbs distribution $p(\bm{z}; \bm{s}) \propto \exp(\mathrm{score}(\bm{z}; \bm{s}))$. The MAP in \eqref{eq:map} is the mode of this distribution.
Sometimes (e.g. in stochastic rationalizers) we want to sample from this distribution, $\hat{\bm{z}} \sim p(\bm{z}; \bm{s})$. Exact, unbiased samples are often intractable to obtain, and approximate sampling strategies have to be used, such as perturb-and-MAP \citep{perturbandMAP, corro2019differentiable, corro2019learning}. These strategies necessitate gradient estimators for end-to-end training, which are often obtained via REINFORCE \citep{Williams1992SimpleSG} or reparametrized gradients \citep{kingma2014autoencoding, jang2017categorical}.
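For intuition only, the relation between the score function, the Gibbs distribution, and the MAP can be made concrete by brute-force enumeration (ours; tractable only for tiny $L$, which is exactly why approximate inference is needed in practice):

```python
import itertools
import numpy as np

def gibbs_and_map(score_fn, L):
    """Enumerate all z in {0,1}^L, form the Gibbs distribution
    p(z) ∝ exp(score(z)), and return (p, MAP structure)."""
    zs = [np.array(z, dtype=float)
          for z in itertools.product([0, 1], repeat=L)]
    scores = np.array([score_fn(z) for z in zs])
    p = np.exp(scores - scores.max())          # stable normalization
    p /= p.sum()
    return p, zs[int(np.argmax(scores))]       # MAP is the mode of p
```

With a simple linear score and no factors, the MAP just activates the positively-scored variables, and sampling from `p` recovers the stochastic rationalizers' behavior.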
\paragraph{LP-MAP inference.}
In many cases, the MAP problem \eqref{eq:map} is intractable due to the overlapping interaction of the factors $f \in \mathcal{F}$.
A commonly used relaxation is to replace the integer constraints $\bm{z} \in \{0,1\}^L$ by continuous constraints, leading to:
\begin{equation}\label{eq:lpmap}
\hat{\bm{z}} = \arg\max_{\bm{z} \in \color{myblue}{[0,1]^L}} \mathrm{score}(\bm{z}; \bm{s}).
\end{equation}
The problem above is known as LP-MAP inference \citep{wainwright2008graphical}. In some cases (for example, when the factor graph $\mathcal{F}$ does not have cycles), LP-MAP inference is \emph{exact}, i.e., it gives the same results as MAP inference. In general, this does not happen, but for many problems in NLP, LP-MAP relaxations are often nearly optimal \citep{koo-etal-2010-dual, martins2015ad3}. Importantly, computing the solution of these problems in a hidden layer may render the network unsuitable for gradient-based training, as with MAP inference.
\paragraph{LP-SparseMAP inference.} The optimization problem respective to LP-SparseMAP is the $\ell_2$ regularized LP-MAP \cite{niculae2020lpsparsemap}:
\begin{equation}\label{eq:lpsparsemap}
\hat{\bm{z}} = \arg\max_{\bm{z} \in \color{myblue}{[0,1]^L}} \bigl( \mathrm{score}(\bm{z}; \bm{s}) {\color{myblue}{- \nicefrac{1}{2}\|\bm{z}\|^2}} \bigr).
\end{equation}
Unlike MAP and LP-MAP, the LP-SparseMAP relaxation is suitable for training with gradient backpropagation. Moreover, it favors sparse vectors $\hat{\bm{z}}$, i.e., vectors that have only a few non-zero entries. One of the most appealing features of this method is that it is modular: an arbitrarily complex factor graph can be instantiated as long as a MAP oracle for each of the constituting factors is provided. This approach generalizes SparseMAP \citep{niculae2018sparsemap}, which requires an exact MAP oracle for the factor graph in its entirety. In fact, LP-SparseMAP recovers SparseMAP when there is a single factor $\mathcal{F} = \{f\}$. By only requiring a MAP oracle for each $f \in \mathcal{F}$, LP-SparseMAP makes it possible to instantiate more expressive factor graphs for which MAP is typically intractable. Table \ref{Tab:constraints} lists several logic constraint factors which are used in this paper.
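A concrete special case helps build intuition: with a single $\mathsf{XOR}$ factor and no pairwise scores, the constraint set is the probability simplex, so LP-SparseMAP (equivalently, SparseMAP) reduces to the Euclidean projection of the scores onto the simplex, i.e., the sparsemax transformation. A minimal NumPy sketch of this special case (not the general LP-SparseMAP solver):

```python
import numpy as np

def sparsemax(s):
    """LP-SparseMAP with a single XOR factor and no pairwise scores:
    argmax_z s^T z - 0.5 ||z||^2  s.t.  z >= 0, sum(z) = 1,
    i.e. the Euclidean projection of s onto the probability simplex."""
    s = np.asarray(s, dtype=float)
    srt = np.sort(s)[::-1]                 # scores in decreasing order
    cssv = np.cumsum(srt) - 1.0            # cumulative sums minus the budget
    k = np.arange(1, len(s) + 1)
    support = srt - cssv / k > 0           # coordinates kept in the solution
    rho = k[support][-1]                   # size of the support
    tau = cssv[support][-1] / rho          # threshold
    return np.maximum(s - tau, 0.0)

z = sparsemax([1.5, 0.4, 1.2, -0.3])
print(z)  # sums to 1; low-scoring entries are exactly zero
```

Unlike softmax, the output assigns exactly zero to the low-scoring entries, which is the sparsity property that makes $\hat{\bm{z}}$ usable directly as a (soft) rationale mask.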
\begin{table}[t]
\centering
\footnotesize
\renewcommand\arraystretch{1}
\begin{tabularx}{0.8\columnwidth}{p{0.3\columnwidth} p{0.4\columnwidth}}
\toprule
Factor Name & Imposed Constraint \\ \midrule
$\mathsf{XOR}$ & $\sum z_k = 1$ \\
$\mathsf{AtMostOne}$ & $\sum z_k \leq 1$ \\
$\mathsf{BUDGET}$ & $\sum z_k \leq B$ \\
\bottomrule
\end{tabularx}
\caption{Collection of logic factors and their imposed constraints. Each of these factors defines a constraint set $\mathcal{Z}_f$ as described in \S\ref{background:lpsparsemap}.}
\label{Tab:constraints}
\end{table}
\section{Deterministic Structured Rationalizers}
\label{sec:approach}
The idea behind our approach for selective rationalization is very simple: leverage the inherent flexibility and modularity of LP-SparseMAP for constrained, deterministic and fully differentiable rationale extraction.
\subsection{Highlights Extraction}
\paragraph{Model Architecture.} We use the model setting described in \S\ref{sec:background}. First, a generator model produces token-level scores $s_i, i \in \{1, \dots, L\}$. We propose replacing the current rationale extraction mechanisms (e.g. sampling from a Bernoulli distribution, or using sparse attention mechanisms) with an LP-SparseMAP extraction layer that computes token-level values $\hat{\bm{z}} \in [0,1]^L$, which are then used to mask the original sequence for prediction. Due to LP-SparseMAP's propensity for sparsity, many entries in $\hat{\bm{z}}$ will be zero, which approaches what is expected from a binary mask.
\paragraph{Factor Graphs.} The definition of the factor graph $\mathcal{F}$ is central to the rationale extraction, as each of the local factors $f \in \mathcal{F}$ will impose constraints on the highlight. We start by instantiating a factor graph with $L$ binary variables (one for each token) and a pairwise factor for every pair of contiguous tokens:
\begin{align}
\mathcal{F} = \{{\mathsf{PAIR}}(&z_i, z_{i+1}; r_{i, i+1}): 1 \leq i < L\},
\end{align}
which yields the binary pairwise MRF (\S\ref{background:lpsparsemap}) \begin{align}
\mathrm{score}(\bm{z}; \bm{s}) =
\sum_{i=1}^L s_i z_i + \sum_{i=1}^{L-1} r_{i, i+1} z_i z_{i+1}.
\end{align}
Instantiating this factor with non-negative edge scores, $r_{i, i+1} \geq 0$, encourages contiguity in the extracted rationale. Making use of the modularity of the method, we impose sparsity by further adding a $\mathsf{BUDGET}$ factor (see Table \ref{Tab:constraints}):
\begin{align}
\mathcal{F} = &\{{\mathsf{PAIR}}(z_i, z_{i+1}; r_{i, i+1}): 1 \leq i < L\}\nonumber \\
&\cup\, \{\mathsf{BUDGET}(z_1, \dots, z_L; B)\}.
\label{eq: factorpair}
\end{align}
The size of the rationale is constrained to be, at most, $B\%$ of the input document size. Intuitively, the lower the $B$, the shorter the extracted rationales will be. Notice that this graph is composed of $L$ local factors. Thus, LP-SparseMAP would have to enforce agreement between all these factors in order to compute $\bm{z}$. Interestingly, factor graph representations are usually not unique. In our work, we instantiate an equivalent formulation of the factor graph in Eq. \ref{eq: factorpair} that consists of a single factor, \textbf{\textsf{H:SeqBudget}}. This factor can be seen as an extension of that of the LP-Sequence model in \citet{niculae2020lpsparsemap}: a linear-chain Markov factor with MAP provided by the Viterbi algorithm \citep{viterbi_orig, Viterbi}. The difference resides in the additional budget constraints that are incorporated in the MAP decoding. These constraints can be handled by augmenting the number of states in the dynamic program to incorporate how many words in the budget have already been consumed at each time step, leading to time complexity $\mathcal{O}(LB)$.
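The budget-augmented Viterbi decoding described above can be sketched as the following dynamic program (a MAP oracle for \textsf{H:SeqBudget}; function and variable names are ours and purely illustrative). The state tracks how many tokens have been selected so far together with the last label, giving the stated $\mathcal{O}(LB)$ complexity:

```python
NEG_INF = float("-inf")

def viterbi_budget(s, r, B):
    """MAP for score(z) = sum_i s_i z_i + sum_i r_i z_i z_{i+1}
    subject to sum_i z_i <= B: a Viterbi-style dynamic program whose
    state is (budget used so far, last label)."""
    L = len(s)
    # dp[b][v]: best score of a prefix with b selected tokens, last label v.
    dp = [[NEG_INF, NEG_INF] for _ in range(B + 1)]
    dp[0][0] = 0.0
    if B >= 1:
        dp[1][1] = s[0]
    backptrs = []
    for i in range(1, L):
        ndp = [[NEG_INF, NEG_INF] for _ in range(B + 1)]
        bp = [[None, None] for _ in range(B + 1)]
        for b in range(B + 1):
            for v in (0, 1):
                if dp[b][v] == NEG_INF:
                    continue
                # extend the prefix with z_i = 0
                if dp[b][v] > ndp[b][0]:
                    ndp[b][0], bp[b][0] = dp[b][v], (b, v)
                # extend with z_i = 1 (consumes one unit of budget)
                if b + 1 <= B:
                    cand = dp[b][v] + s[i] + (r[i - 1] if v == 1 else 0.0)
                    if cand > ndp[b + 1][1]:
                        ndp[b + 1][1], bp[b + 1][1] = cand, (b, v)
        backptrs.append(bp)
        dp = ndp
    # pick the best terminal state, then backtrack to recover z
    _, b, v = max((dp[b][v], b, v) for b in range(B + 1) for v in (0, 1))
    z = [v]
    for bp in reversed(backptrs):
        b, v = bp[b][v]
        z.append(v)
    return list(reversed(z))
```

For instance, with positive edge scores and a tight budget the decoder drops weak tokens that the unconstrained chain would have kept, trading contiguity bonuses against the budget constraint.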
\subsection{Matchings Extraction}
\label{sec:matchings-extraction}
\paragraph{Model Architecture.}
Our architecture is inspired by ESIM \citep{chen-etal-2017-enhanced}. First, a generator model encodes two documents $\bm{x}_P$, $\bm{x}_H$ separately to obtain the encodings $(\Tilde{\bm{h}}_1^P, \dots, \Tilde{\bm{h}}_{L_P}^P)$ and $(\Tilde{\bm{h}}_1^H, \dots, \Tilde{\bm{h}}_{L_H}^H)$, respectively. Then, we compute pairwise dot-product alignment scores between the encoded representations to produce a score matrix $\bm{S} \in \mathbb{R}^{L_P \times L_H}$ such that $s_{ij} = \langle\Tilde{\bm{h}}_i^P, \Tilde{\bm{h}}_j^H\rangle$. We use LP-SparseMAP to obtain a constrained structured symmetrical alignment $\bm{Z}$ in which $z_{ij} \in [0,1]$, as described later. Then, we ``augment'' each word in the premise and hypothesis with the corresponding aligned weighted average by computing $\Bar{\bm{h}}_i^P = \left[\Tilde{\bm{h}}_i^P, \sum_{j} z_{ij}\, \Tilde{\bm{h}}_j^H\right]$ and $\Bar{\bm{h}}_j^H = \left[\Tilde{\bm{h}}_j^H, \sum_{i} z_{ji}\, \Tilde{\bm{h}}_i^P\right]$, and separately feed these vectors to another encoder and pool to find representations $\bm{r}^P$ and $\bm{r}^H$. Finally, the feature vector $\bm{r} = [\bm{r}^P, \bm{r}^H, \bm{r}^P-\bm{r}^H, \bm{r}^P \odot \bm{r}^H]$ is fed to a classification head for the final prediction. We also experiment with a strategy in which we assume that the hypothesis is known and the premise is masked for \textit{faithful} prediction. In this case, we consider $\Bar{\bm{h}}_i^P = \left[\sum_{j} z_{ij}\, \Tilde{\bm{h}}_j^H\right]$, so that the only information about the premise available to the model comes from the alignment and its masking of the encoded representation.
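The encoding-and-augmentation pipeline above can be sketched in NumPy as follows. The alignment $\bm{Z}$ below is a row-softmax stand-in used purely for illustration; in the model it is instead the output of the constrained LP-SparseMAP layer.

```python
import numpy as np

rng = np.random.default_rng(0)
L_P, L_H, d = 5, 4, 8                      # toy sizes (illustrative)
H_P = rng.normal(size=(L_P, d))            # encoded premise tokens
H_H = rng.normal(size=(L_H, d))            # encoded hypothesis tokens

# Pairwise dot-product alignment scores s_ij = <h_i^P, h_j^H>.
S = H_P @ H_H.T                            # shape (L_P, L_H)

# Stand-in for the structured alignment: a row-softmax of the scores.
Z = np.exp(S - S.max(axis=1, keepdims=True))
Z = Z / Z.sum(axis=1, keepdims=True)

# "Augment" each token with its aligned weighted average.
H_P_aug = np.concatenate([H_P, Z @ H_H], axis=1)    # (L_P, 2d)
H_H_aug = np.concatenate([H_H, Z.T @ H_P], axis=1)  # (L_H, 2d)
```

In the faithful variant, the premise half of the concatenation is dropped, so the predictor only sees `Z @ H_H`, i.e., the premise enters the prediction exclusively through the alignment.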
\paragraph{Factor Graphs.} We instantiate three different factor graphs for matchings extraction. The first -- \textbf{\textsf{M:XorAtMostOne}} -- is the same as the LP-Matching factor used in \citet{niculae2020lpsparsemap} with one \textsf{XOR} factor per row and one \textsf{AtMostOne} factor per column:
\begin{align}
\mathcal{F} =\, &\{{\mathsf{XOR}}(z_{i1}, \dots, z_{iL_H}): 1 \leq i \leq L_P\} \nonumber\\
&\cup\, \{\mathsf{AtMostOne}(z_{1j}, \dots, z_{L_P j}): 1 \leq j \leq L_H\},
\label{eq: xoratmostone}
\end{align}
which requires exactly one active alignment for each word of the premise, since the $i^{\textrm{th}}$ word in the premise \textbf{must} be connected to the hypothesis. The $j^{\textrm{th}}$ word in the hypothesis, however, is not constrained to be aligned to any word in the premise. In the second factor graph -- \textbf{\textsf{M:AtMostOne2}} -- we relax the \textsf{XOR} restriction on the premise words to an $\mathsf{AtMostOne}$ restriction. The expected output is a sparser matching, since there is no requirement of an active alignment for each word of the premise. The third factor graph -- \textsf{\textbf{M:Budget}} -- allows us to have more refined control over the sparsity of the resulting matching, by adding an extra global $\mathsf{BUDGET}$ factor (with budget $B$) to the factor graph of {\textsf{M:AtMostOne2}}, so that the resulting matching will have at most $B$ active alignments.
\paragraph{Stochastic Matchings Extraction.} Prior work for selective rationalization of text matching uses constrained variants of optimal transport to obtain the rationale \citep{swanson2020rationalizing}. Their model is end-to-end differentiable using the Sinkhorn algorithm \citep{sinkhorn}. Thus, in order to provide a comparative study of stochastic and deterministic methods for rationalization of text matchings, we implement a perturb-and-MAP rationalizer (\S \ref{background:lpsparsemap}). We perturb the scores $s_{ij}$ by computing $\bm{\Tilde{S}} = \bm{S} + \bm{P}$, in which each element of $\bm{P}$ contains random samples from the Gumbel distribution, $p_{ij} \sim \mathcal{G}(0,1)$. We utilize these perturbed scores to compute non-symmetrical alignments from the premise to the hypothesis and vice-versa, such that their entries are in $[0,1]$. At test time, we obtain the most probable matchings, such that their entries are in $\{0,1\}$. These matchings are such that every word in the premise \textbf{must} be connected to a single word in the hypothesis and vice-versa.
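A minimal sketch of this perturb-and-MAP matching extraction, assuming Gumbel noise added to the score matrix and a row-wise argmax as the MAP oracle for the premise-to-hypothesis direction (the reverse direction applies the same procedure to $\bm{S}^\top$):

```python
import numpy as np

def gumbel_matching(S, rng):
    """One perturb-and-MAP sample of a premise-to-hypothesis matching:
    add Gumbel(0,1) noise to the score matrix, then take a row-wise
    argmax so each premise word aligns to exactly one hypothesis word."""
    S_noisy = S + rng.gumbel(size=S.shape)
    Z = np.zeros_like(S)
    Z[np.arange(S.shape[0]), S_noisy.argmax(axis=1)] = 1.0
    return Z

rng = np.random.default_rng(0)
S = rng.normal(size=(5, 4))   # toy premise-hypothesis scores
Z = gumbel_matching(S, rng)   # binary, one active alignment per row
```

At test time the noise is dropped and the plain argmax yields the most probable matching; during training, relaxed (real-valued) alignments and reparametrized gradients are used instead, as described in the text.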
\section{Experimental Setup}\label{sec:experiments}
\subsection{Highlights for Sentiment Classification}
\paragraph{Data and Evaluation.} We used the SST, AgNews, IMDB, and Hotels datasets for text classification and the BeerAdvocate dataset for regression. The statistics and details of all datasets can be found in \S\ref{sec:data_highlights}. The specified rationale lengths, as a percentage of each document, for the strategies that impose fixed sparsity are 20\% for the SST, AgNews and IMDB datasets, 15\% for the Hotels dataset, and 10\% for the BeerAdvocate dataset. We evaluate end task performance (Macro $F_1$ for classification tasks and MSE for regression) and matching with human annotations through the token-level $F_1$ score \citep{deyoung2019eraser} for the datasets that contain human annotations.
\paragraph{Baselines.} We compare our results with four variants of the stochastic rationalizer of \citet{lei2016rationalizing}: the original one -- \textbf{SFE} -- which uses the score function estimator to estimate the gradients; a second one -- \textbf{SFE w/ Baseline} -- which uses SFE with a moving average baseline variance reduction technique; a third -- \textbf{Gumbel} -- in which we employ the Gumbel-Softmax reparameterization \citep{jang2017categorical} to reparameterize the Bernoulli variables; and a fourth -- \textbf{HardKuma} -- in which we employ HardKuma variables \citep{bastings2019interpretable} instead of Bernoulli variables and use reparameterized gradients for training end-to-end. Moreover, the latter rationalizer employs a Lagrangian relaxation to solve the constrained optimization problem of targeting specific sparsity rates. We also experimented with two deterministic strategies that use sparse attention mechanisms: a first that utilizes \textbf{sparsemax} \citep{martinssparsemax}, and a second that utilizes \textbf{fusedmax} \citep{niculae2019regularized}, which encourages the network to pay attention to contiguous segments of text by adding an additional total variation regularizer, inspired by the fused lasso. Fusedmax is a natural deterministic counterpart of the constrained rationalizer proposed by \citet{lei2016rationalizing}, since the regularization encourages both sparsity and contiguity; to the best of our knowledge, the use of fusedmax for this task is new. Similarly to \citet{jain2020learning}, we found that the stochastic rationalizer of \citet{lei2016rationalizing} and its variants (SFE, SFE w/ Baseline and Gumbel) require cumbersome hyperparameter search and tend to degenerate in such a way that the generated rationales are either the whole input text or empty text.
Thus, at inference time, we follow the strategy proposed by \citet{jain2020learning} and restrict the generated rationale to a specified length $\ell$ via two mappings: \textbf{contiguous}, in which the span of length $\ell$ with the highest cumulative sum of token-level scores is selected; and \textbf{top-$k$}, in which the $\ell$ tokens with the highest token-level scores are selected. Contrary to \citet{jain2020learning}, for the rationalizer of \citet{bastings2019interpretable} (HardKuma), we carefully tuned both the model hyperparameters and the Lagrangian relaxation algorithm hyperparameters, so as to use the deterministic policy at test time that they propose.\footnote{We have found that using the deterministic policy at test time proposed by \citet{bastings2019interpretable} instead of the top-$k$ or contiguous strategies is critical to achieve good performance with the HardKuma rationalizer.} All implementation details can be found in \S\ref{sec:implementation_details}. We also report the full-text baselines for each dataset in \S\ref{app:vanilla_classifiers}.
\subsection{Matchings for Natural Language Inference}
\paragraph{Data and Evaluation.} We used the English language SNLI and MNLI datasets \citep{bowman2015large, chen-etal-2017-enhanced}. We evaluate end task performance for both datasets. For the experiments with the {\textsf{M:Budget}}, we used a fixed budget of $B=4$ for SNLI and $B=6$ for MNLI. We also conduct further experiments with the HANS dataset \citep{mccoy2019right}, which aims to analyse the use of linguistic heuristics (lexical overlap, constituent and subsequence heuristics) by NLI systems. The statistics and details of each dataset can be found in \S\ref{sec:data_matchings}.
\paragraph{Baselines.} We compare our results with variants of constrained optimal transport for selective rationalization employed by \citet{swanson2020rationalizing}: relaxed 1:1, which is similar in nature to our proposed {\textsf{M:AtMostOne2}} factor; and exact $k=4$ similar to our proposed {\textsf{M:Budget}} with budget $B=4$. We also replicate the LP-matching implementation of \citet{niculae2020lpsparsemap} which consists of the original ESIM model described in \S \ref{sec:matchings-extraction} with $\bm{Z}$ as the output of the LP-SparseMAP problem with a {\textsf{M:XorAtMostOne}} factor. Importantly, both these models aggregate the encoded premise representation with the information that comes from the alignment. All implementation details can be found in \S\ref{sec:implementation_details}. We also report the ESIM baselines in \S\ref{app:vanilla_classifiers}.
\section{Results and Analysis}
\label{sec:results}
\subsection{Extraction of Text Highlights}
\renewcommand{\arraystretch}{.95}
\begin{table*}[t]
\footnotesize
\centering
\begin{tabular}{>{\arraybackslash}m{0.01cm}
>{\arraybackslash}m{2.35cm} >{\arraybackslash}m{1.5cm}
>{\raggedleft\arraybackslash}m{1.6cm}
>{\raggedleft\arraybackslash}m{2cm}
>{\raggedleft\arraybackslash}m{1.7cm}
>{\raggedleft\arraybackslash}m{1.9cm}
>{\raggedleft\arraybackslash}m{1.5cm}}
\toprule
Method & & Rationale & SST $\uparrow$ & AgNews $\uparrow$ & IMDB $\uparrow$ & Beer $\downarrow$ & Hotels $\uparrow$\\ \midrule
\multirow{2}{*}{\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}} & \multirow{2}{*}{SFE} & top-$k$ & \multicolumn{1}{r}{.76 \textcolor{black!90}{\scriptsize{(.71/.80)}}} & \multicolumn{1}{r}{.92 \textcolor{black!90}{\scriptsize{(.92/.92)}}} & \multicolumn{1}{r}{.84 \textcolor{black!90}{\scriptsize{(.72/.88)}}} & \multicolumn{1}{r}{.018 \textcolor{black!90}{\scriptsize{(.016/.020)}}} & \multicolumn{1}{r}{.66 \textcolor{black!90}{\scriptsize{(.62/.69)}}}\\
& & contiguous & \multicolumn{1}{r}{.71 \textcolor{black!90}{\scriptsize{(.68/.75)}}} & \multicolumn{1}{r}{.86 \textcolor{black!90}{\scriptsize{(.85/.86)}}} & \multicolumn{1}{r}{.65 \textcolor{black!90}{\scriptsize{(.57/.73)}}} & \multicolumn{1}{r}{.020 \textcolor{black!90}{\scriptsize{(.019/.024)}}} & \multicolumn{1}{r}{.62 \textcolor{black!90}{\scriptsize{(.34/.72)}}}\\ \midrule
\multirow{2}{*}{\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}} & \multirow{2}{*}{SFE w/ Baseline} & top-$k$ & \multicolumn{1}{r}{.78 \textcolor{black!90}{\scriptsize{(.76/.80)}}} & \multicolumn{1}{r}{.92 \textcolor{black!90}{\scriptsize{(.92/.93)}}} & \multicolumn{1}{r}{.82 \textcolor{black!90}{\scriptsize{(.72/.88)}}} & \multicolumn{1}{r}{.019 \textcolor{black!90}{\scriptsize{(.017/.020)}}} & \multicolumn{1}{r}{.56 \textcolor{black!90}{\scriptsize{(.34/.64)}}}\\
& & contiguous & \multicolumn{1}{r}{.70 \textcolor{black!90}{\scriptsize{(.64/.75)}}} & \multicolumn{1}{r}{.86 \textcolor{black!90}{\scriptsize{(.84/.86)}}} & \multicolumn{1}{r}{.76 \textcolor{black!90}{\scriptsize{(.73/.80)}}} & \multicolumn{1}{r}{.021 \textcolor{black!90}{\scriptsize{(.019/.025)}}} & \multicolumn{1}{r}{.55 \textcolor{black!90}{\scriptsize{(.34/.69)}}}\\ \midrule
\multirow{2}{1.45cm}{\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}} & \multirow{2}{1.45cm}{Gumbel} & top-$k$ & \multicolumn{1}{r}{.70 (\textcolor{black!90}{\scriptsize{.67/.72)}}} & \multicolumn{1}{r}{.78 \textcolor{black!90}{\scriptsize{(.73/.84)}}} & \multicolumn{1}{r}{.74 \textcolor{black!90}{\scriptsize{(.71/.78)}}} & \multicolumn{1}{r}{.026 \textcolor{black!90}{\scriptsize{(.018/.041)}}} & \multicolumn{1}{r}{.83 \textcolor{black!90}{\scriptsize{(.73/.92)}}} \\
& & contiguous & \multicolumn{1}{r}{.67 \textcolor{black!90}{\scriptsize{(.67/.68)}}} & \multicolumn{1}{r}{.77 \textcolor{black!90}{\scriptsize{(.74/.81)}}} & \multicolumn{1}{r}{.72 \textcolor{black!90}{\scriptsize{(.72/.73)}}} & \multicolumn{1}{r}{.043 \textcolor{black!90}{\scriptsize{(.040/.048)}}} & \multicolumn{1}{r}{.74 \textcolor{black!90}{\scriptsize{(.65/.84)}}} \\ \midrule
\multirow{1}{1.45cm}{\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}} & \multirow{1}{1.45cm}{HardKuma} & -- & \multicolumn{1}{r}{.80 \textcolor{black!90}{\scriptsize{(.80/.81)}}} & \multicolumn{1}{r}{.90 \textcolor{black!90}{\scriptsize{(.87/.88)}}} & \multicolumn{1}{r}{.87 \textcolor{black!90}{\scriptsize{(.90/.91)}}} & \multicolumn{1}{r}{.019 \textcolor{black!90}{\scriptsize{(.016/.020)}}} & \multicolumn{1}{r}{.90 \textcolor{black!90}{\scriptsize{(.88/.92)}}} \\ \midrule
\multirow{2}{1.45cm}{\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}} & \multirow{2}{2.85cm}{Sparse Attention} & sparsemax & \multicolumn{1}{r}{\textbf{.82} \textcolor{black!90}{\scriptsize{(.81/.83)}}} & \multicolumn{1}{r}{\textbf{.93} \textcolor{black!90}{\scriptsize{(.93/.93)}}} & \multicolumn{1}{r}{.89 \textcolor{black!90}{\scriptsize{(.89/.90)}}} & \multicolumn{1}{r}{.019 \textcolor{black!90}{\scriptsize{(.016/.021)}}} & \multicolumn{1}{r}{.89 \textcolor{black!90}{\scriptsize{(.87/.92)}}} \\
& & fusedmax & \multicolumn{1}{r}{.81 \textcolor{black!90}{\scriptsize{(.81/.82)}}} & \multicolumn{1}{r}{.92 \textcolor{black!90}{\scriptsize{(.91/.92)}}} & \multicolumn{1}{r}{.88 \textcolor{black!90}{\scriptsize{(.87/.89)}}} & \multicolumn{1}{r}{{.018} \textcolor{black!90}{\scriptsize{(.017/.019)}}} & \multicolumn{1}{r}{.85 \textcolor{black!90}{\scriptsize{(.77/.90)}}}\\ \midrule
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}} & SPECTRA (ours) & H:SeqBudget & \multicolumn{1}{r}{.80 \textcolor{black!90}{\scriptsize{(.79/.81)}}} & \multicolumn{1}{r}{.92 \textcolor{black!90}{\scriptsize{(.92/.93)}}} & \multicolumn{1}{r}{\textbf{.90} \textcolor{black!90}{\scriptsize{(.89/.90)}}} & \multicolumn{1}{r}{\textbf{.017} \textcolor{black!90}{\scriptsize{(.016/.019)}}} & \multicolumn{1}{r}{\textbf{.91} \textcolor{black!90}{\scriptsize{(.90/.92)}}}\\ \bottomrule
\end{tabular}
\caption{\label{tab:predictive_performance}
Model predictive performances across datasets, for stochastic (\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}) and deterministic (\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}) methods. We report mean and min/max $F_1$ scores across five random seeds on test sets for all datasets but Beer where we report MSE. We bold the best-performing rationalized model(s) for each corpus.\vspace{-6pt}
}
\end{table*}
\renewcommand{\arraystretch}{.8}
\begin{table*}[t]
\footnotesize
\centering
\begin{tabular}{>{\arraybackslash}m{0.01cm}
>{\arraybackslash}m{2.35cm} >{\arraybackslash}m{1.5cm}
>{\raggedleft\arraybackslash}m{1.6cm}
>{\raggedleft\arraybackslash}m{2cm}
>{\raggedleft\arraybackslash}m{1.7cm}
>{\raggedleft\arraybackslash}m{1.9cm}
>{\raggedleft\arraybackslash}m{1.5cm}}
\toprule
Method & & Rationale & SST & AgNews & IMDB & Beer & Hotels \\ \midrule
\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}} & HardKuma & -- & \multicolumn{1}{r}{.15 \textcolor{black!90}{\scriptsize{(.12/.19)}}} & \multicolumn{1}{r}{.19 \textcolor{black!90}{\scriptsize{(.18/.19)}}} & \multicolumn{1}{r}{{.03} \textcolor{black!90}{\scriptsize{(.02/.03)}}} & \multicolumn{1}{r}{{.08} \textcolor{black!90}{\scriptsize{(.00/.17)}}} & \multicolumn{1}{r}{{.09} \textcolor{black!90}{\scriptsize{(.07/.12)}}}\\ \midrule
\multirow{2}{*}{\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}} &\multirow{2}{2.85cm}{Sparse Attention} & sparsemax & \multicolumn{1}{r}{.17 \textcolor{black!90}{\scriptsize{(.13/.23)}}} & \multicolumn{1}{r}{.13 \textcolor{black!90}{\scriptsize{(.11/.15)}}} & \multicolumn{1}{r}{.02 \textcolor{black!90}{\scriptsize{(.02/.03)}}} & \multicolumn{1}{r}{.11 \textcolor{black!90}{\scriptsize{(.09/.13)}}} & \multicolumn{1}{r}{.03 \textcolor{black!90}{\scriptsize{(.02/.04)}}} \\
& & fusedmax & \multicolumn{1}{r}{.60 \textcolor{black!90}{\scriptsize{(.14/1.0)}}} & \multicolumn{1}{r}{.32 \textcolor{black!90}{\scriptsize{(.10/.66)}}} & \multicolumn{1}{r}{.02 \textcolor{black!90}{\scriptsize{(.01/.02)}}} & \multicolumn{1}{r}{{.26} \textcolor{black!90}{\scriptsize{(.03/.98)}}} & \multicolumn{1}{r}{.04 \textcolor{black!90}{\scriptsize{(.01/.08)}}}\\ \bottomrule
\end{tabular}
\caption{\label{tab:avg_size_rationale}
Average size of the extracted rationales using the HardKuma stochastic rationalizer (\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}) and deterministic (\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}) sparse attention mechanisms. We report mean and min/max average size across five random seeds.\vspace{-6pt}
}
\end{table*}
\paragraph{Predictive Performance.} We report the predictive performances of all models in Table \ref{tab:predictive_performance}. We observe that the deterministic rationalizers that use sparse attention mechanisms generally outperform the stochastic rationalizers while exhibiting lower variability across different random seeds and different datasets. In general and as expected, for the stochastic models, the top-$k$ strategy for rationale extraction outperforms the contiguous strategy. As reported in \citet{jain2020learning}, strategies that impose a contiguous mapping trade coherence for performance on the end-task. Our experiments also show that HardKuma is the stochastic rationalizer least prone to variability across different seeds, faring competitively with the deterministic methods. The strategy proposed in this paper, \textsf{H:SeqBudget}, is on par with the other deterministic methods and generally outperforms the stochastic ones. Moreover, similarly to the other deterministic rationalizers, our method exhibits low variability across different runs.
We show examples of highlights extracted by SPECTRA in \S\ref{app:example_highlights_rationales}.
\subsection{Quality of the Rationales}
\paragraph{Rationale Regularization.} We report in Table \ref{tab:avg_size_rationale} the average size of the extracted rationales (proportion of words
not zeroed out) across datasets for the stochastic HardKuma rationalizer and for each rationalizer that uses sparse attention mechanisms. The latter strategies do not have any mechanism to regularize the sparsity of the extracted rationales, which leads to variability in the rationale extraction. This is especially the case for the fusedmax strategy, as it pushes adjacent tokens to be given the same attention probability, which might lead to rationale degeneration when the attention weights are similar across all tokens. On the other hand, HardKuma employs a Lagrangian relaxation algorithm to target a predefined sparsity level, for which we have found that careful hyperparameter tuning is required across different datasets. While, generally, the average size of the extracted rationales does not exhibit considerable variability, some random seeds led to degeneration (the model extracts empty rationales). Remarkably, our proposed strategy utilizes the $\mathsf{BUDGET}$ factor to set a predefined desired rationale length, regularizing the rationale extraction while still applying a deterministic policy that exhibits low variability across different runs and datasets (Table \ref{tab:predictive_performance}).
\paragraph{Matching with Human Annotations.} We report token-level $F_1$ scores in Table \ref{tab:rationale_matching} to evaluate the quality of the rationales for the datasets for which we had human annotations for the test set. We observe that our proposed strategy and HardKuma outperform all the other methods in matching the human annotations. This was to be expected considering the results shown in Table \ref{tab:predictive_performance} and Table \ref{tab:avg_size_rationale}:
the stochastic models other than HardKuma do not fare competitively with the deterministic models, and their variability across runs is also reflected in the token-level $F_1$ scores; and although the rationalizers that use sparse attention mechanisms are competitive with our proposed strategy, their lack of regularization of the rationale extraction leads to variable-sized rationales, which in turn yields poorer matchings. We also observe that, when degeneration does not occur, HardKuma generally extracts high-quality rationales in terms of matching the human annotations. It is also worth remarking that the sparsemax and top-$k$ strategies are not expected to fare well on this metric, because human annotations for these datasets are at the \textit{sentence level}. Our strategy, however, not only pushes for sparser rationales but also encourages contiguity in the extraction.
\renewcommand{\arraystretch}{.9}
\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{>{\arraybackslash}m{0.01cm}
>{\arraybackslash}m{1.45cm} >{\arraybackslash}m{1.45cm}
>{\raggedleft\arraybackslash}m{1.2cm}
>{\raggedleft\arraybackslash}m{1.2cm}}
\toprule
Method & & Rationale & Beer & Hotels \\ \midrule
\multirow{2}{*}{\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}} & \multirow{2}{*}{SFE} & top-$k$ & \multicolumn{1}{r}{.19 \textcolor{black!90}{\scriptsize{(.13/.30)}}} & \multicolumn{1}{r}{.16 \textcolor{black!90}{\scriptsize{(.12/.30)}}} \\
& & contiguous & \multicolumn{1}{r}{.35 \textcolor{black!90}{\scriptsize{(.18/.42)}}} & \multicolumn{1}{r}{.14 \textcolor{black!90}{\scriptsize{(.12/.15)}}} \\ \midrule
\multirow{2}{*}{\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}} & \multirow{2}{1.45cm}{SFE w/ Baseline} & top-$k$ & \multicolumn{1}{r}{.17 \textcolor{black!90}{\scriptsize{(.14/.19)}}} & \multicolumn{1}{r}{.14 \textcolor{black!90}{\scriptsize{(.13/.18)}}} \\
& & contiguous & \multicolumn{1}{r}{.41 \textcolor{black!90}{\scriptsize{(.37/.42)}}} & \multicolumn{1}{r}{.15 \textcolor{black!90}{\scriptsize{(.14/.15)}}} \\ \midrule
\multirow{2}{1.85cm}{\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}} & \multirow{2}{1.85cm}{Gumbel} & top-$k$ & \multicolumn{1}{r}{.27 \textcolor{black!90}{\scriptsize{(.14/.39)}}} & \multicolumn{1}{r}{.36 \textcolor{black!90}{\scriptsize{(.27/.48)}}} \\
& & contiguous & \multicolumn{1}{r}{.42 \textcolor{black!90}{\scriptsize{(.41/.42)}}} & \multicolumn{1}{r}{.36 \textcolor{black!90}{\scriptsize{(.29/.48)}}} \\ \midrule
\multirow{1}{1.85cm}{\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}} & \multirow{1}{1.85cm}{HardKuma} & -- & \multicolumn{1}{r}{.37 \textcolor{black!90}{\scriptsize{(.00/.90)}}} & \multicolumn{1}{r}{\textbf{.52} \textcolor{black!90}{\scriptsize{(.37/.57)}}} \\ \midrule
\multirow{2}{1.85cm}{\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}} & \multirow{2}{1.45cm}{Sparse Attention} & sparsemax & \multicolumn{1}{r}{.48 \textcolor{black!90}{\scriptsize{(.41/.55)}}} & \multicolumn{1}{r}{.17 \textcolor{black!90}{\scriptsize{(.07/.31)}}} \\
& & fusedmax & \multicolumn{1}{r}{.39 \textcolor{black!90}{\scriptsize{(.29/.53)}}} & \multicolumn{1}{r}{.25 \textcolor{black!90}{\scriptsize{(.09/.31)}}} \\ \midrule
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}} & SPECTRA \newline (ours) & H:SeqBudget & \multicolumn{1}{r}{\textbf{.61} \textcolor{black!90}{\scriptsize{(.56/.68)}}} & \multicolumn{1}{r}{.37 \textcolor{black!90}{\scriptsize{(.34/.40)}}}\\ \bottomrule
\end{tabular}
\caption{
Evaluation of the rationales through matching with human annotations, for stochastic (\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}) and deterministic (\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}) methods. We report mean token-level $F_1$ scores and min/max across five random seeds.\vspace{-6pt}
}
\label{tab:rationale_matching}
\end{table}
\subsection{Extraction of Text Matchings}
\renewcommand{\arraystretch}{.9}
\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{
>{\arraybackslash}m{3.3cm}
>{\raggedleft\arraybackslash}m{1.1cm}
>{\raggedleft\arraybackslash}m{1.1cm} }
\toprule
Matching Structure & SNLI & MNLI \\ \midrule
\textit{\textbf{Not Faithful}} & & \\
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}~OT relaxed 1:1$^\dagger$ & \multicolumn{1}{r}{.82} & \multicolumn{1}{r}{--} \\
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}~OT exact $k=4^\dagger$ & \multicolumn{1}{r}{.81} & \multicolumn{1}{r}{--} \\\midrule
\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}~Gumbel Matching & \multicolumn{1}{r}{.85 \textcolor{black!90}{\scriptsize{(.84/.85)}}} & \multicolumn{1}{r}{.73 \textcolor{black!90}{\scriptsize{(.72/.73)}}} \\
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}~M:XorAtMostOne & \multicolumn{1}{r}{\textbf{.86} \textcolor{black!90}{\scriptsize{(.86/.87)}}} & \multicolumn{1}{r}{\textbf{.76} \textcolor{black!90}{\scriptsize{(.75/.76)}}}\\
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}~M:AtMostOne2 & \multicolumn{1}{r}{\textbf{.86} \textcolor{black!90}{\scriptsize{(.86/.87)}}} & \multicolumn{1}{r}{\textbf{.76} \textcolor{black!90}{\scriptsize{(.75/.76)}}} \\
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}~M:Budget & \multicolumn{1}{r}{.85 \textcolor{black!90}{\scriptsize{(.85/.86)}}} & \multicolumn{1}{r}{.75 \textcolor{black!90}{\scriptsize{(.75/.76)}}} \\ \midrule\midrule
\textit{\textbf{Faithful}} & & \\
\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}~Gumbel Matching & \multicolumn{1}{r}{.85 \textcolor{black!90}{\scriptsize{(.84/.85)}}} & \multicolumn{1}{r}{.73 \textcolor{black!90}{\scriptsize{(.72/.73)}}} \\
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}~M:XorAtMostOne & \multicolumn{1}{r}{.85 \textcolor{black!90}{\scriptsize{(.85/.85)}}} & \multicolumn{1}{r}{.73 \textcolor{black!90}{\scriptsize{(.72/.73)}}} \\
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}~M:AtMostOne2 & \multicolumn{1}{r}{.85 \textcolor{black!90}{\scriptsize{(.85/.85)}}} & \multicolumn{1}{r}{.73 \textcolor{black!90}{\scriptsize{(.73/.73)}}}\\
\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}~M:Budget & \multicolumn{1}{r}{ .82 \textcolor{black!90}{\scriptsize{(.81/.82)}}} & \multicolumn{1}{r}{ .68 \textcolor{black!90}{\scriptsize{(.67/.68)}}}\\\bottomrule
\end{tabular}
\caption{
Model predictive performances across datasets, for stochastic (\scalerel*{\includegraphics{game-die_1f3b2.png}}{\textrm{\textbigcircle}}) and deterministic (\scalerel*{\includegraphics{1f3af_goog.png}}{\textrm{\textbigcircle}}) methods. We report mean and min/max $F_1$ scores across three random seeds on both SNLI and MNLI test sets. We bold the best-performing rationalized models for each corpus. $\dagger$ means results come from \citet{swanson2020rationalizing}.
}
\label{tab:results_predperformance_matchings}
\end{table}
\renewcommand{\arraystretch}{.7}
\begin{table}[h!]
\footnotesize
\centering
\begin{tabular}{
>{\arraybackslash}m{2.9cm}
>{\raggedleft\arraybackslash}m{1.75cm}
>{\raggedleft\arraybackslash}m{1.75cm} }
\toprule
HANS Subcomponent & Vanilla & Augmented \\ \midrule
\textit{\textbf{Entailment}} & & \\
Lexical Overlap & \multicolumn{1}{r}{.9942} & \multicolumn{1}{r}{.9962} \\
Subsequence & \multicolumn{1}{r}{.9960} & \multicolumn{1}{r}{1.0} \\
Constituent & \multicolumn{1}{r}{.9988} & \multicolumn{1}{r}{1.0} \\\midrule \midrule
\textit{\textbf{Non-entailment}} & & \\
Lexical Overlap & \multicolumn{1}{r}{.0052} & \multicolumn{1}{r}{.9998} \\
Subsequence & \multicolumn{1}{r}{.0016} & \multicolumn{1}{r}{1.0} \\
Constituent & \multicolumn{1}{r}{.0122} & \multicolumn{1}{r}{1.0} \\\bottomrule
\end{tabular}
\caption{
Model predictive performances for the vanilla and augmented models evaluated on the HANS evaluation set. We report accuracies for each of the six subcomponents of the evaluation set.\vspace{-5pt}
}
\label{tab:results_HANS}
\end{table}
\paragraph{Predictive Performance.} We report the predictive performances of all models in Table \ref{tab:results_predperformance_matchings}. Both the strategies that use the LP-SparseMAP extraction layer and our proposed stochastic matchings extractor outperform the OT variants for matchings extraction. We observe that, contrary to the text highlights experiments, the stochastic matchings extraction model does not exhibit noticeably higher variability compared to the deterministic models. In general, the faithful models are competitive with the non-faithful models. Since the former are constrained to utilize only the information from the premise that comes from the alignments, these results demonstrate the effectiveness of the alignment extraction. As expected, there is a slight trade-off between how constrained the alignment is and the model's predictive performance. This is most noticeable with the M:Budget strategy, the most constrained of our proposed strategies, in the faithful scenario. We show examples of matchings extracted by SPECTRA in \S\ref{app:example_matchings_rationales}.
\paragraph{Heuristics Analysis with HANS.} We used two different {\textsf{M:AtMostOne2}} models for our analysis: a first one trained on MNLI (\textbf{Vanilla}), and a second one trained on MNLI augmented (\textbf{Augmented}) with 30,000 HANS-like examples ($\approx$ 8\% of MNLI original size), replicating the data augmentation scenario in \cite{mccoy2019right}. We evaluated both models on the HANS evaluation set, which has six subcomponents, each defined by its correct label and the heuristic it addresses. We report the results in Table \ref{tab:results_HANS}. Our models behave similarly to those in \cite{mccoy2019right}: when we augment the training set with HANS-like examples, the model no longer associates the heuristics with entailment. By inspecting the extracted matchings, we noticed that these were similar between the two models. Thus, the effect of the augmented data resides in how the information from the matchings is used after the extraction layer. We show examples of matchings in \S\ref{app:example_matchings_rationales}.
\section{Related Work}
\label{sec:related}
\paragraph{Selective Rationalization.} There is a long line of work on interpreting predictions made by neural networks \citep{lipton2017mythos, doshivelez2017rigorous, gilpin2019explaining, wiegreffe2021teach, zhang2021survey}.
Our paper focuses on selective rationalizers, which have been used for the extraction of text highlights \citep{lei2016rationalizing, bastings2019interpretable, yu2019rethinking, deyoung2019eraser, treviso-martins-2020-explanation, Zhang_2021} and text matchings \citep{swanson2020rationalizing}. Most works rely
on stochastic rationale generation or deterministic attention mechanisms,
but the two approaches have never been extensively compared.
Our work adds that comparison and contributes an easy-to-train, fully differentiable rationalizer that allows for flexible constrained rationale extraction.
Our strategy for rationalization based on sparse structured prediction on factor graphs constitutes a unified framework for deterministic extraction of different structured rationales.
\paragraph{Structured Prediction on Factor Graphs.} \citet{kim2017structured} incorporate structured models in attention mechanisms as a way to model rich structural dependencies, leading to a dense probability distribution over structures. \citet{niculae2018sparsemap} propose SparseMAP, which yields a sparse probability distribution over structures and can be computed using calls to a MAP oracle, making it applicable to problems (e.g. matchings) for which marginal inference is intractable but MAP is not.
However, the requirement of an exact MAP oracle prohibits its application to more expressive structured models, such as loopy graphical models and logic constraints.
This limitation is overcome by LP-SparseMAP \citep{niculae2020lpsparsemap}
via a local polytope relaxation, extending the previous method to sparse differentiable optimization in any factor graph with arbitrarily complex structure. While other tractable and efficient relaxations for matchings exist -- such as entropic regularization, which leads to Sinkhorn's algorithm \citep{cuturi-sinkhorn} -- and have been used for rationalization \citep{swanson2020rationalizing}, we use LP-SparseMAP for rationale extraction in our work. Our approach for rationalization focuses on learning and explaining with latent structure extracted by structured prediction on factor graphs.
\paragraph{Sentence Compression and Summarization.} Work on sentence compression and summarization bears some resemblance to selective rationalization for text highlights extraction. \citet{titov-mcdonald-2008-joint} propose a statistical model which is able to discover corresponding topics in text and extract informative snippets of text by predicting a stochastic mask via Gibbs sampling. \citet{mcdonald-2006-discriminative} proposes a budgeted dynamic program in the same vein as that of the H:SeqBudget strategy for text highlights extraction. \citet{berg-kirkpatrick-etal-2011-jointly} and \citet{almeida-martins-2013-fast} propose models that jointly extract and compress sentences. Our work differs in that our setting is completely unsupervised and we need to differentiate through the extractive layers.
\section{Conclusions}
\label{sec:conclusions}
We have proposed SPECTRA, an easy-to-train, fully differentiable rationalizer that allows for flexible constrained rationale extraction.
We have provided a comparative study with stochastic and deterministic approaches for rationalization, showing that SPECTRA generally outperforms previous rationalizers in text classification and natural language inference tasks. Moreover, it does so while exhibiting less variability than stochastic methods and easing regularization of the rationale extraction when compared to previous deterministic approaches.
Our approach constitutes a unified framework for the deterministic extraction of different structured rationales. We hope that our work spurs future research on rationalization for different structured explanations.
\section*{Acknowledgements}
We are grateful to Vlad Niculae for his valuable help and insight on LP-SparseMAP. We would like to thank Marcos Treviso for helping to start this project. We are grateful to Wilker Aziz, António Farinhas, Ben Peters, Gonçalo Correia, and the reviewers, for their helpful feedback and discussions. This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by the FCT
through contract UIDB/50008/2020, and by the P2020 programs MAIA and Unbabel4EU (LISBOA-01-0247-FEDER-045909 and LISBOA-01-0247-FEDER-042671).
\newpage
\section{Introduction}
\label{sec:1}
The Virtual Element Method (VEM), introduced in \cite{volley,VEM-hitchhikers}, is a recent paradigm for the approximation of partial differential equation problems that shares the same variational background as the Finite Element Method. The original motivation of VEM is the need to construct an accurate {\em conforming} Galerkin scheme with the capability to deal with highly general polygonal/polyhedral meshes, including ``hanging vertexes'' and non-convex shapes. Among the Galerkin schemes, VEM has the peculiarity that the discrete spaces consist of functions which are not known pointwise, but only a limited set of information about them is at disposal. Nevertheless, the available information is sufficient
to construct the stiffness matrix and the right-hand side.
The VEM has been developed for many problems, see for example \cite{Brezzi-Falk-Marini,Ahmed-et-al:2013,variable-primale,Steklov-VEM,Helmholtz-VEM,supermisti,Berrone-VEM,Benedetto-VEM-2,Berrone-SUPG,nonconforming,Manzini:Russo:Sukumar,plates-zhao,vaccahyper,gardini,senatore}.
Concerning the Stokes problem more specifically, Virtual Elements have been developed in \cite{Antonietti-BeiraodaVeiga-Mora-Verani:20XX,Stokes:nonconforme,Stokes:divfree,Gatica-1}.
Moreover, VEM is attracting a growing interest in the engineering community, also in connection with Continuum Mechanics problems. We here cite the recent works \cite{GTP14,BeiraoLovaMora,ABLS_part_I,wriggers,BCP,Andersen-geo,Topology-VEM} and \cite{BeiraodaVeiga-Brezzi-Marini:2013,Brezzi-Marini:2012,ADLP-HR}, for instance.
Finally, some examples of other numerical methods for the Stokes or Navier-Stokes equations that can handle polytopal meshes are \cite{dipietro_lemaire,qiu_shi,dipietro_krell}.
In this paper, which may be considered a natural evolution of our recent divergence-free approach developed in \cite{Stokes:divfree} for the Stokes problem, we apply the VEM to the Navier-Stokes equations in 2D. The non-linear convective term in the Navier-Stokes equations, however, leads to the introduction of suitable projectors. These, in turn, suggest making use of an enhanced discrete velocity space \cite{preprintdarcy}, which is an improvement with respect to that of \cite{Stokes:divfree}.
Instead, the pressure field is approximated by means of standard locally polynomial functions, without any continuity requirement across the elements. Furthermore, we consider two different discretizations of the trilinear form arising from the convective term. The first one is the straightforward VEM version of the continuous trilinear form; however, the introduction of the projectors causes a lack of skew-symmetry, even though the discrete velocity is divergence-free (up to machine precision). This leads us to consider the second choice, which is simply the skew-symmetric part of the trilinear form mentioned above (cf. \cite{giraultbook}, for instance). We remark that we develop an error analysis focusing on this latter choice, but the numerical tests concern both alternatives.
The outcome is a family of Virtual Elements, one for each polynomial order of consistency $k$, with $k\ge 2$. To the best of our knowledge, this is the first paper where the VEM technology is applied to the Navier-Stokes equations.
The main objectives of the present paper are the following.
\begin{itemize}
\item {\em The development of a rigorous error analysis of the proposed methods.}
We highlight that our analysis provides some noteworthy elements of novelty. Indeed, although we follow rather well-established lines for the error analysis of 2D Navier-Stokes Galerkin methods (see for example \cite{giraultbook}), these need to be combined with new techniques that are peculiar to the VEM framework. In particular, the interpolant construction of Theorem \ref{thm:interpolante} involves new arguments which might be useful even in different contexts (e.g. for other VEM spaces with different regularity requirements).
\item {\em A first but thorough assessment of the actual numerical performance of this new approach.} We provide a set of significant numerical tests that highlight the features of our VEM approach. In addition to the important flexibility of dealing with general polygonal meshes, the presented scheme (we tested the case $k=2$) displays the following favourable points.
\begin{enumerate}
\item The error components partly decouple: notably, the velocity error does not depend directly on the discrete pressures, but only indirectly through the approximation of the loading and convection terms. This is a consequence of the fact that our methods provide a discrete velocity which is point-wise divergence-free (the isochoric constraint is {\em not} relaxed).
In some situations, e.g. for hydrostatic fluid problems, the partial decoupling of the errors induces a positive effect on the velocity approximation.
Moreover, for the same reason, the VEM scheme seems to be more robust for small values of the viscosity parameter when compared with standard mixed finite elements.
\item Another advantage of the method is that, again due to its divergence-free nature, the same pair of Virtual spaces can be used directly also for the approximation of the diffusion problem (in mixed form). This allows for a much easier coupling in Stokes-Darcy problems where different models need to be used in different parts of the domain. This observation combines with the fact that, thanks to the use of polygons that allow hanging nodes, the gluing of different meshes in different parts of the domain is also much easier.
\item As in \cite{Stokes:divfree}, the particular choice of degrees of freedom adopted for the velocity space yields a diagonal structure in a large part of the pressure-velocity interaction stiffness matrix. As a consequence, and without the need of any static condensation, many internal-to-element degrees of freedom can be automatically ignored when building the linear system.
\end{enumerate}
\end{itemize}
We finally note that, nowadays, there do exist Galerkin-type finite element methods for the Stokes and Navier-Stokes equations that are pressure-robust (that is, the error on the velocity does not depend on the pressure, not even indirectly through the loading or convection terms). Some recent examples are \cite{benchmark,dipietro_linke}.
However, to the best of our knowledge, all the available schemes work only for standard simplicial/hexahedral meshes.
Although our method is not pressure-robust in the sense above, it is the only conforming divergence-free scheme for arbitrary polygonal meshes, a property which yields important advantages, as outlined in points 1 and 2. Developing a conforming scheme which is both divergence-free and pressure-robust for general polygonal meshes is currently an open problem.
A brief outline of the paper is the following. In Section \ref{sec:2} we recall the 2D Navier-Stokes problem, introducing the classical variational formulation and the necessary notations. Section \ref{sec:3} details the proposed discretization procedure. The approximation spaces and all the quantities that form the discrete problem, are introduced and described.
Section \ref{sec:4} deals with the theoretical analysis, which leads to the optimal error estimates of Theorem \ref{thm:u} and bound \eqref{eq:p-est}.
Finally, Section \ref{sec:5} presents several numerical tests, which highlight the actual performance of our approach, also in comparison with a couple of well-known mixed finite element schemes.
\section{The continuous Navier-Stokes equation}
\label{sec:2}
We consider the steady Navier-Stokes equations on a polygonal domain $\Omega \subseteq \numberset{R}^2$ with homogeneous Dirichlet boundary conditions:
\begin{equation}
\label{eq:ns primale}
\left\{
\begin{aligned}
& \mbox{ find $(\mathbf{u},p)$ such that}& &\\
& - \nu \, \boldsymbol{\Delta} \mathbf{u} + (\boldsymbol{\nabla} \mathbf{u} ) \,\mathbf{u} - \nabla p = \mathbf{f}\qquad & &\text{in $\Omega$,} \\
& {\rm div} \, \mathbf{u} = 0 \qquad & &\text{in $\Omega$,} \\
& \mathbf{u} = 0 \qquad & &\text{on $\Gamma = \partial \Omega$,}
\end{aligned}
\right.
\end{equation}
with $\nu \in {\mathbb R}$, $\nu > 0$, and where $\mathbf{u}, p$ are the velocity and the pressure fields, respectively.
Furthermore, $\boldsymbol{\Delta} $, ${\rm div}$, $\boldsymbol{\nabla}$, and $\nabla$ denote the vector Laplacian,
the divergence, the gradient operator for vector fields and the gradient operator for scalar functions. Finally, $\mathbf{f}$ represents
the external force, while $\nu$ is the viscosity. We remark that different boundary conditions could be treated as well.
Let us consider the spaces
\begin{equation}
\label{eq:spazi continui}
\mathbf{V}:= \left[ H_0^1(\Omega) \right]^2, \qquad Q:= L^2_0(\Omega) = \left\{ q \in L^2(\Omega) \quad \text{s.t.} \quad \int_{\Omega} q \,{\rm d}\Omega = 0 \right\}
\end{equation}
with norms
\begin{equation}
\label{eq:norme continue}
\| \mathbf{v} \|_{\mathbf{V}} := | \mathbf{v}|_{\left[ H^1(\Omega) \right]^2} \quad , \qquad
\|q\|_Q := \| q\|_{L^2(\Omega)}.
\end{equation}
We assume $\mathbf{f} \in[L^{2}(\Omega)]^2$ and consider the bilinear forms
\begin{gather}
\label{eq:forma a}
a(\cdot, \cdot) \colon \mathbf{V} \times \mathbf{V} \to \numberset{R}, \qquad
a (\mathbf{u}, \mathbf{v}) := \int_{\Omega} \, \boldsymbol{\nabla} \mathbf{u} : \boldsymbol{\nabla} \mathbf{v} \,{\rm d} \Omega, \qquad \text{for all $\mathbf{u}, \mathbf{v} \in \mathbf{V}$}
\\
\label{eq:forma b}
b(\cdot, \cdot) \colon \mathbf{V} \times Q \to \numberset{R} \qquad b(\mathbf{v}, q) := \int_{\Omega}q\, {\rm div} \,\mathbf{v} \,{\rm d}\Omega \qquad \text{for all $\mathbf{v} \in \mathbf{V}$, $q \in Q$}
\\
\label{eq:forma c}
c(\cdot; \, \cdot, \cdot) \colon \mathbf{V} \times \mathbf{V} \times \mathbf{V} \to \numberset{R} \qquad c(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) := \int_{\Omega} ( \boldsymbol{\nabla} \mathbf{u} ) \, \mathbf{w} \cdot \mathbf{v} \,{\rm d}\Omega \qquad \text{for all $\mathbf{w}, \mathbf{u}, \mathbf{v} \in \mathbf{V}$.}
\end{gather}
Then a standard variational formulation of Problem \eqref{eq:ns primale} is:
\begin{equation}
\label{eq:ns variazionale}
\left\{
\begin{aligned}
& \text{find $(\mathbf{u}, p) \in \mathbf{V} \times Q$, such that} \\
& \nu \, a(\mathbf{u}, \mathbf{v}) + c(\mathbf{u}; \, \mathbf{u}, \mathbf{v}) + b(\mathbf{v}, p) = (\mathbf{f}, \mathbf{v}) \qquad & \text{for all $\mathbf{v} \in \mathbf{V}$,} \\
& b(\mathbf{u}, q) = 0 \qquad & \text{for all $q \in Q$,}
\end{aligned}
\right.
\end{equation}
where
\[
(\mathbf{f}, \mathbf{v}) := \int_{\Omega} \mathbf{f} \cdot \mathbf{v} \, {\rm d} \Omega .
\]
It is well known that with the choices \eqref{eq:norme continue}, we have (see for instance \cite{giraultbook}):
\begin{itemize}
\item $a(\cdot, \cdot)$, $b(\cdot, \cdot)$ and $c(\cdot; \, \cdot, \cdot)$ are continuous
\[
|a(\mathbf{u}, \mathbf{v})| \leq \|\mathbf{u}\|_{\mathbf{V}}\|\mathbf{v}\|_{\mathbf{V}} \qquad \text{for all $\mathbf{u}, \mathbf{v} \in \mathbf{V}$,}
\]
\[
|b(\mathbf{v}, q)| \leq \|\mathbf{v}\|_{\mathbf{V}} \|q\|_Q \qquad \text{for all $\mathbf{v} \in \mathbf{V}$ and $q \in Q$,}
\]
\[
|c(\mathbf{w}; \, \mathbf{u}, \mathbf{v})| \leq \widehat{C} \, \|\mathbf{w}\|_{\mathbf{V}} \|\mathbf{u}\|_{\mathbf{V}} \|\mathbf{v}\|_{\mathbf{V}}
\qquad \text{for all $\mathbf{w}, \mathbf{u}, \mathbf{v} \in \mathbf{V}$;}
\]
\item $a(\cdot, \cdot)$ is coercive (with coercivity constant $\alpha =1$), i.e.
\[
a(\mathbf{v}, \mathbf{v}) \geq \|\mathbf{v}\|^2_{\mathbf{V}} \qquad \text{for all $\mathbf{v} \in \mathbf{V}$;}
\]
\item the bilinear form $b(\cdot,\cdot) $ and the space $\mathbf{V}$ and $Q$ satisfy the inf-sup condition, i.e.
\begin{equation}
\label{eq:inf-sup}
\exists \, \beta >0 \quad \text{such that} \quad \sup_{\mathbf{v} \in \mathbf{V}, \, \mathbf{v} \neq \mathbf{0}} \frac{b(\mathbf{v}, q)}{ \|\mathbf{v}\|_{\mathbf{V}}} \geq \beta \|q\|_Q \qquad \text{for all $q \in Q$.}
\end{equation}
\end{itemize}
Therefore, if
\begin{equation}
\label{eq:ns condition}
\gamma := \frac{\widehat{C} \, \|\mathbf{f}\|_{H^{-1}}}{\nu^2} < 1
\end{equation}
then Problem \eqref{eq:ns variazionale} has a unique solution $(\mathbf{u}, p) \in \mathbf{V} \times Q$ such that
\begin{equation}
\label{eq:solution estimates}
\| \mathbf{u}\|_{\mathbf{V}} \leq \frac{\| \mathbf{f}\|_{H^{-1}}}{\nu}.
\end{equation}
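The role of the smallness condition \eqref{eq:ns condition} in this well-posedness result can be illustrated on a scalar caricature of the fixed-point argument (purely illustrative, not part of the analysis): for the model equation $\nu a u + c u^2 = f$, the Picard iteration inverts the linear part while freezing the quadratic term, and converges when the analogue of $\gamma$ is small.

```python
def picard_scalar(nu, a, c, f, tol=1e-12, max_iter=200):
    """Picard iteration for the scalar model problem  nu*a*u + c*u**2 = f.

    The linear (Stokes-like) part is inverted at every step while the
    quadratic (convective-like) term is frozen at the previous iterate,
    mirroring the fixed-point argument behind the well-posedness result.
    """
    u = 0.0
    for _ in range(max_iter):
        u_new = (f - c * u * u) / (nu * a)
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    raise RuntimeError("no convergence: the analogue of gamma is too large")
```

For $\nu = a = f = 1$ and $c = 0.1$ (so that the analogue of $\gamma$ equals $0.1 < 1$), the iteration converges rapidly, and the computed $u$ satisfies $|u| \leq |f|/(\nu a)$, mirroring \eqref{eq:solution estimates}.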
Let us introduce the kernel
\begin{equation}
\label{eq:kercontinuo}
\mathbf{Z} := \{ \mathbf{v} \in \mathbf{V} \quad \text{s.t.} \quad b(\mathbf{v}, q) = 0 \quad \text{for all $q \in Q$} \}.
\end{equation}
Then Problem \eqref{eq:ns variazionale} can be formulated in the equivalent kernel form
\begin{equation}
\label{eq:ns variazionale ker}
\left\{
\begin{aligned}
& \text{find $\mathbf{u} \in \mathbf{Z}$, such that} \\
& \nu \, a(\mathbf{u}, \mathbf{v}) + c(\mathbf{u}; \, \mathbf{u}, \mathbf{v}) = (\mathbf{f}, \mathbf{v}) \qquad & \text{for all $\mathbf{v} \in \mathbf{Z}$.}
\end{aligned}
\right.
\end{equation}
Finally, by a direct computation it is easy to see that, if $\mathbf{u} \in \mathbf{Z}$ is fixed, then the bilinear form $c(\mathbf{u}; \,\cdot, \,\cdot) \colon \mathbf{V} \times \mathbf{V} \to \numberset{R}$ is skew-symmetric, i.e.
\[
c(\mathbf{u}; \mathbf{v}, \mathbf{w}) = -c(\mathbf{u}; \mathbf{w}, \mathbf{v}) \qquad \text{for all $\mathbf{v}, \mathbf{w} \in \mathbf{V}$} .
\]
Therefore, as usual, we also introduce the trilinear form $\widetilde{c}(\cdot; \, \cdot, \, \cdot)\colon \mathbf{V} \times \mathbf{V} \times \mathbf{V} \to \numberset{R}$
\begin{equation}
\label{eq:skew}
\widetilde{c}(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) := \frac{1}{2} c(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) - \frac{1}{2} c(\mathbf{w}; \, \mathbf{v}, \mathbf{u}) \qquad \text{for all $\mathbf{w}, \mathbf{u}, \mathbf{v} \in \mathbf{V}$.}
\end{equation}
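In a Galerkin setting a trilinear form reduces, on a basis, to an array $c_{ijk} = c(\boldsymbol{\phi}_i; \boldsymbol{\phi}_j, \boldsymbol{\phi}_k)$, and the skew-symmetrization \eqref{eq:skew} acts on the last two indices. An illustrative check of this algebraic operation (with random values standing in for assembled entries):

```python
import random

def skew_symmetrize(c):
    """Discrete counterpart of the skew-symmetrized form: antisymmetrize the
    trilinear array c[i][j][k] = c(phi_i; phi_j, phi_k) in its last two indices."""
    n = len(c)
    return [[[(c[i][j][k] - c[i][k][j]) / 2 for k in range(n)]
             for j in range(n)] for i in range(n)]

random.seed(0)
n = 4
c = [[[random.random() for _ in range(n)] for _ in range(n)]
     for _ in range(n)]
ct = skew_symmetrize(c)
# skew-symmetry in the last two arguments, for every first argument:
assert all(abs(ct[i][j][k] + ct[i][k][j]) < 1e-14
           for i in range(n) for j in range(n) for k in range(n))
```

In particular $\widetilde{c}(\mathbf{w}; \mathbf{v}, \mathbf{v}) = 0$ for every $\mathbf{v}$, which is the property exploited to recover stability at the discrete level.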
\section{Virtual formulation of the problem}
\label{sec:3}
\subsection{Virtual element space and polynomial projections}
\label{sub:3.1}
We outline the Virtual Element discretization of Problem \eqref{eq:ns variazionale}.
We will make use of various tools from the virtual element technology, that will be described briefly; we refer the interested reader to the papers \cite{Stokes:divfree,preprintdarcy}.
Let $\set{\Omega_h}_h$ be a sequence of decompositions of $\Omega$ into general polygonal elements $E$ with
\[
h_E := {\rm diameter}(E) , \quad
h := \sup_{E \in \Omega_h} h_E .
\]
We suppose that for all $h$, each element $E$ in $\Omega_h$ fulfils the following assumptions:
\begin{description}
\item [$\mathbf{(A1)}$] $E$ is star-shaped with respect to a ball $B_E$ of radius $ \geq\, \rho \, h_E$,
\item [$\mathbf{(A2)}$] the distance between any two vertexes of $E$ is $\ge c \, h_E$,
\end{description}
where $\rho$ and $c$ are positive constants. We remark that the hypotheses above, though not too restrictive in many practical cases,
can be further relaxed, as investigated in~\cite{2016stability}.
Using standard VEM notation, for $k \in \numberset{N}$, let us define the spaces
\begin{itemize}
\item $\numberset{P}_k(E)$ the set of polynomials on $E$ of degree $\leq k$ (with the extended notation $\numberset{P}_{-1}(E)=\emptyset$),
\item $\numberset{B}_k(\partial E) := \{v \in C^0(\partial E) \quad \text{s.t.} \quad v_{|e} \in \numberset{P}_k(e) \quad \forall\mbox{ edge } e \subset \partial E\}$,
\item $\mathcal{G}_{k}(E):= \nabla(\numberset{P}_{k+1}(E)) \subseteq [\numberset{P}_{k}(E)]^2$,
\item $\mathcal{G}_{k}^{\oplus}(E) := \mathbf{x}^{\perp}[\numberset{P}_{k-1}(E)] \subseteq [\numberset{P}_{k}(E)]^2$ with $\mathbf{x}^{\perp}:= (x_2, -x_1)$.
\end{itemize}
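The last two subspaces give the well-known decomposition $[\numberset{P}_{k}(E)]^2 = \mathcal{G}_{k}(E) \oplus \mathcal{G}_{k}^{\oplus}(E)$, since $\dim \mathcal{G}_{k}(E) = \dim \numberset{P}_{k+1}(E) - 1$ (gradients of constants vanish) and $\dim \mathcal{G}_{k}^{\oplus}(E) = \dim \numberset{P}_{k-1}(E)$. A quick dimension-count sanity check (our own script, not part of the presentation):

```python
def dim_P(k):
    """Dimension of the polynomial space P_k in two variables
    (zero for k < 0, following the convention on P_{-1})."""
    return (k + 1) * (k + 2) // 2 if k >= 0 else 0

def dim_G(k):          # G_k = grad(P_{k+1}); gradients of constants vanish
    return dim_P(k + 1) - 1

def dim_G_oplus(k):    # G_k^+ = x^perp * P_{k-1}
    return dim_P(k - 1)

# the two subspaces together span [P_k]^2 for every degree k >= 0:
assert all(dim_G(k) + dim_G_oplus(k) == 2 * dim_P(k) for k in range(10))
```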
For any $n \in \numberset{N}$ and $E \in \Omega_h$ we introduce the following useful polynomial projections:
\begin{itemize}
\item the $\boldsymbol{H^1}$ \textbf{semi-norm projection} ${\Pi}_{n}^{\nabla,E} \colon \mathbf{V} \to [\numberset{P}_n(E)]^2$, defined by
\begin{equation}
\label{eq:Pn_k^E}
\left\{
\begin{aligned}
& \int_E \boldsymbol{\nabla} \,\mathbf{q}_n : \boldsymbol{\nabla} (\mathbf{v}- \, {\Pi}_{n}^{\nabla,E} \mathbf{v}) \, {\rm d} E = 0 \qquad \text{for all $\mathbf{v} \in \mathbf{V}$ and for all $\mathbf{q}_n \in [\numberset{P}_n(E)]^2$,} \\
& \Pi_0^{0,E}(\mathbf{v} - \, {\Pi}_{n}^{\nabla, E} \mathbf{v}) = \mathbf{0} \, ,
\end{aligned}
\right.
\end{equation}
\item the $\boldsymbol{L^2}$\textbf{-projection for scalar functions} $\Pi_n^{0, E} \colon L^2(\Omega) \to \numberset{P}_n(E)$, given by
\begin{equation}
\label{eq:P0_k^E}
\int_E q_n (v - \, {\Pi}_{n}^{0, E} v) \, {\rm d} E = 0 \qquad \text{for all $v \in L^2(E)$ and for all $q_n \in \numberset{P}_n(E)$,}
\end{equation}
with obvious extension for vector functions $\Pi_n^{0, E} \colon [L^2(\Omega)]^2 \to [\numberset{P}_n(E)]^2$, and tensor functions
$\boldsymbol{\Pi}_{n}^{0, E} \colon [L^2(E)]^{2 \times 2} \to [\numberset{P}_{n}(E)]^{2 \times 2}$.
\end{itemize}
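Property \eqref{eq:P0_k^E} characterizes $\Pi_n^{0, E} v$ as the best $L^2(E)$-approximation of $v$ in $\numberset{P}_n(E)$, computed in practice by solving a small mass-matrix system. A dependency-free one-dimensional sketch (our own illustration, with $E = (0,1)$, a monomial basis, and composite midpoint quadrature):

```python
def l2_project(f, n, num_quad=2000):
    """L2(0,1) projection of f onto polynomials of degree <= n.

    Solves M c = b, with M the monomial mass matrix M_ij = int x^(i+j) dx
    and b_i = int f(x) x^i dx, both approximated by midpoint quadrature.
    Gaussian elimination is written out to keep the sketch dependency-free.
    Returns the coefficient list c in the monomial basis.
    """
    h = 1.0 / num_quad
    xs = [(i + 0.5) * h for i in range(num_quad)]
    M = [[sum(x ** (i + j) for x in xs) * h for j in range(n + 1)]
         for i in range(n + 1)]
    b = [sum(f(x) * x ** i for x in xs) * h for i in range(n + 1)]
    for col in range(n + 1):                      # forward elimination
        piv = max(range(col, n + 1), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n + 1):
            fct = M[r][col] / M[col][col]
            for j in range(col, n + 1):
                M[r][j] -= fct * M[col][j]
            b[r] -= fct * b[col]
    c = [0.0] * (n + 1)
    for r in range(n, -1, -1):                    # back substitution
        c[r] = (b[r] - sum(M[r][j] * c[j]
                           for j in range(r + 1, n + 1))) / M[r][r]
    return c
```

Projecting a polynomial of degree $\leq n$ reproduces it (up to quadrature error): this polynomial consistency is the property underlying the optimal VEM error estimates.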
In \cite{Stokes:divfree} we have introduced a new family of Virtual Elements for the Stokes problem on polygonal meshes. In particular, by a proper choice of the Virtual space of velocities, the virtual local spaces are associated with a Stokes-like variational problem on each element. In \cite{preprintdarcy} we have presented an enhanced Virtual space, taking inspiration from \cite{Ahmed-et-al:2013}, to be used in place of the original one in such a way that the $L^2$-projection is exactly computable from the DoFs.
In this section we briefly recall from \cite{Stokes:divfree, preprintdarcy} the notations, the main properties of the Virtual spaces and some details about the construction of the projections.
Let $k \geq 2$ be the polynomial degree of accuracy of the method. We introduce on each element $E \in \Omega_h$ the (original) finite dimensional local virtual space \cite{Stokes:divfree}
\begin{multline}
\label{eq:W_h}
\mathbf{W}_h^E := \biggl\{
\mathbf{v} \in [H^1(E)]^2 \quad \text{s.t.} \quad \mathbf{v}_{|{\partial E}} \in [\numberset{B}_k(\partial E)]^2 \, , \biggr.
\\
\left.
\biggl\{
\begin{aligned}
& - \boldsymbol{\Delta} \mathbf{v} - \nabla s \in \mathcal{G}_{k-2}^{\oplus}(E), \\
& {\rm div} \, \mathbf{v} \in \numberset{P}_{k-1}(E),
\end{aligned}
\biggr. \qquad \text{ for some $s \in L^2(E)$}
\quad \right\}
\end{multline}
where all the operators and equations above are to be interpreted in the distributional sense.
Then we enlarge the previous space:
\begin{multline*}
\mathbf{U}_h^E := \biggl\{
\mathbf{v} \in [H^1(E)]^2 \quad \text{s.t.} \quad \mathbf{v}_{|{\partial E}} \in [\numberset{B}_k(\partial E)]^2 \, , \biggr.
\\
\left.
\biggl\{
\begin{aligned}
& - \boldsymbol{\Delta} \mathbf{v} - \nabla s \in \mathcal{G}_{k}^{\oplus}(E), \\
& {\rm div} \, \mathbf{v} \in \numberset{P}_{k-1}(E),
\end{aligned}
\biggr. \qquad \text{ for some $s \in L^2(E)$}
\quad \right\} .
\end{multline*}
Now we define the Virtual Element space $\mathbf{V}_h^E$ as the restriction of $\mathbf{U}_h^E$ given by
\begin{equation}
\label{eq:V_h^E}
\mathbf{V}_h^E := \left\{ \mathbf{v} \in \mathbf{U}_h^E \quad \text{s.t.} \quad \left(\mathbf{v} - \Pi^{\nabla,E}_k \mathbf{v}, \, \mathbf{g}_k^{\perp} \right)_{[L^2(E)]^2} = 0 \quad \text{for all $\mathbf{g}_k^{\perp} \in \mathcal{G}_{k}^{\oplus}(E)/\mathcal{G}_{k-2}^{\oplus}(E)$} \right\} ,
\end{equation}
where the symbol $\mathcal{G}_{k}^{\oplus}(E)/\mathcal{G}_{k-2}^{\oplus}(E)$ denotes the polynomials in $\mathcal{G}_{k}^{\oplus}(E)$ that are $L^2$-orthogonal to all polynomials of $\mathcal{G}_{k-2}^{\oplus}(E)$ (observing that $\mathcal{G}_{k-2}^{\oplus}(E) \subset \mathcal{G}_{k}^{\oplus}(E)$).
From \cite{supermisti,Stokes:divfree,preprintdarcy}, we recall the following properties of the space $\mathbf{V}_h^E$.
The proof of the following result can be found in \cite{preprintdarcy}.
\begin{proposition}[Dimension and DoFs]
\label{prp:dofs}
Let $\mathbf{V}_h^E$ be the space defined in \eqref{eq:V_h^E}. Then the dimension of $\mathbf{V}_h^E$ is
\begin{equation}
\label{eq:dimensione V_h^E}
\begin{split}
\dim\left( \mathbf{V}_h^E \right) &= \dim\left([\numberset{B}_k(\partial E)]^2\right) + \dim\left(\mathcal{G}_{k-2}^{\oplus}(E)\right) + \left( \dim(\numberset{P}_{k-1}(E)) - 1\right) \\
&= 2n_E k + \frac{(k-1)(k-2)}{2} + \frac{(k+1)k}{2} - 1.
\end{split}
\end{equation}
where $n_E$ is the number of vertexes of $E$. Moreover, the following linear operators $\mathbf{D_V}$, split into four subsets (see Figure \ref{fig:dofsloc}), constitute a set of DoFs for $\mathbf{V}_h^E$:
\begin{itemize}
\item $\mathbf{D_V1}$: the values of $\mathbf{v}$ at the vertices of the polygon $E$,
\item $\mathbf{D_V2}$: the values of $\mathbf{v}$ at $k-1$ distinct points of every edge $e \in \partial E$,
\item $\mathbf{D_V3}$: the moments of $\mathbf{v}$
\[
\int_E \mathbf{v} \cdot \mathbf{g}_{k-2}^{\oplus} \, {\rm d}E \qquad \text{for all $\mathbf{g}_{k-2}^{\oplus} \in \mathcal{G}_{k-2}^{\oplus}(E)$,}
\]
\item $\mathbf{D_V4}$: the moments of ${\rm div} \,\mathbf{v}$
\[
\int_E ({\rm div} \,\mathbf{v}) \, q_{k-1} \, {\rm d}E \qquad \text{for all $q_{k-1} \in \numberset{P}_{k-1}(E) / \numberset{R}$.}
\]
\end{itemize}
\end{proposition}
\begin{figure}[!h]
\center{
\includegraphics[scale=0.20]{localk2} \qquad \qquad
\includegraphics[scale=0.20]{localk3}
\caption{Degrees of freedom for $k=2$, $k=3$. We denote $\mathbf{D_V1}$ with black dots, $\mathbf{D_V2}$ with red squares, $\mathbf{D_V3}$ with green rectangles, $\mathbf{D_V4}$ with blue dots inside the element.}
\label{fig:dofsloc}
}
\end{figure}
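As a consistency check of Proposition \ref{prp:dofs}, the cardinalities of the four subsets $\mathbf{D_V1}$--$\mathbf{D_V4}$ must add up to the dimension \eqref{eq:dimensione V_h^E}. A short verification script (our own bookkeeping, not part of the paper):

```python
def dim_Vh_E(k, nE):
    """Dimension of the local velocity space V_h^E."""
    return 2 * nE * k + (k - 1) * (k - 2) // 2 + (k + 1) * k // 2 - 1

def dof_counts(k, nE):
    """Cardinalities of the four DoF subsets D_V1 .. D_V4."""
    DV1 = 2 * nE                    # vector values at the nE vertexes
    DV2 = 2 * nE * (k - 1)          # vector values at k-1 points per edge
    DV3 = (k - 1) * (k - 2) // 2    # moments against G_{k-2}^+
    DV4 = (k + 1) * k // 2 - 1      # moments of div v against P_{k-1}/R
    return DV1, DV2, DV3, DV4

# unisolvence requires the number of DoFs to equal the space dimension:
assert all(sum(dof_counts(k, nE)) == dim_Vh_E(k, nE)
           for k in range(2, 8) for nE in range(3, 10))
```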
The proof of the following result can be found in \cite{Stokes:divfree} for ${\Pi^{\nabla, E}_k}$ and in \cite{preprintdarcy} for the remaining projectors.
\begin{proposition}[Projections and Computability]
\label{prp:projections}
The DoFs $\mathbf{D_V}$ allow us to compute exactly
\[
{\Pi^{\nabla, E}_k} \colon \mathbf{V}_h^E \to [\numberset{P}_k(E)]^2, \qquad
{\Pi^{0, E}_k} \colon \mathbf{V}_h^E \to [\numberset{P}_k(E)]^2, \qquad
{\boldsymbol{\Pi}^{0, E}_{k-1}} \colon \boldsymbol{\nabla}(\mathbf{V}_h^E) \to [\numberset{P}_{k-1}(E)]^{2 \times 2},
\]
in the sense that, given any $\mathbf{v}_h \in \mathbf{V}_h^E$, we are able to compute the polynomials
${\Pi^{\nabla, E}_k} \mathbf{v}_h$, ${\Pi^{0, E}_k} \mathbf{v}_h$ and ${\boldsymbol{\Pi}^{0, E}_{k-1}}\boldsymbol{\nabla}\mathbf{v}_h$ using only the degree of freedom values $\mathbf{D_V}$ of $\mathbf{v}_h$.
\end{proposition}
\begin{remark}
\label{rm1}
Using the enhanced space $\mathbf{V}_h^E$ and following the same ideas as in \cite{Stokes:divfree,preprintdarcy}, it is possible to improve the results of Proposition \ref{prp:projections} and also compute exactly the following higher-order projections
\[
\Pi^{\nabla, E}_{k+2} \colon \mathbf{V}_h^E \to [\numberset{P}_{k+2}(E)]^2, \qquad
\boldsymbol{\Pi}^{0, E}_{k+1} \colon \boldsymbol{\nabla}(\mathbf{V}_h^E) \to [\numberset{P}_{k+1}(E)]^{2 \times 2}.
\]
Moreover, given any polynomial $q_n$ of arbitrary degree $n$ and any $\mathbf{v} \in \mathbf{V}_h^E$, an integration by parts shows that we can compute the moment
\[
\int_E \nabla q_n \cdot \mathbf{v} \, {\rm d}E.
\]
\end{remark}
Concerning the pressures, we take the standard finite dimensional space
\begin{equation}
\label{eq:Q_h^E}
Q_h^E := \numberset{P}_{k-1}(E)
\end{equation}
having dimension
\begin{equation*}
\dim(Q_h^E) = \dim(\numberset{P}_{k-1}(E)) = \frac{(k+1)k}{2}.
\end{equation*}
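As a quick instance of the above count (added here only as a sanity check), for the two lowest orders one finds
\[
k=2\colon \quad \dim(Q_h^E) = 3 \quad (\text{spanned by } 1,\, x,\, y), \qquad
k=3\colon \quad \dim(Q_h^E) = 6 \quad (\text{adding } x^2,\, xy,\, y^2),
\]
where the monomials are understood as suitably scaled on the element $E$.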
The corresponding degrees of freedom are chosen by defining, for each $q\in Q_h^E$, the following linear operators $\mathbf{D_Q}$:
\begin{itemize}
\item $\mathbf{D_Q}$: the moments up to order $k-1$ of $q$, i.e.
\[
\int_E q \, p_{k-1} \, {\rm d}E \qquad \text{for all $p_{k-1} \in \numberset{P}_{k-1}(E)$.}
\]
\end{itemize}
Finally we define the global virtual element spaces as
\begin{equation}
\label{eq:V_h}
\mathbf{V}_h := \{ \mathbf{v} \in [H^1_0(\Omega)]^2 \quad \text{s.t.} \quad \mathbf{v}_{|E} \in \mathbf{V}_h^E \quad \text{for all $E \in \Omega_h$} \}
\end{equation}
and
\begin{equation}
\label{eq:Q_h}
Q_h := \{ q \in L_0^2(\Omega) \quad \text{s.t.} \quad q_{|E} \in Q_h^E \quad \text{for all $E \in \Omega_h$}\},
\end{equation}
with the obvious associated sets of global degrees of freedom. A simple computation shows that:
\begin{equation*}
\dim(\mathbf{V}_h) = n_P \left( \frac{(k+1)k}{2} -1 + \frac{(k-1)(k-2)}{2} \right)
+ 2(n_V + (k-1) n_e)
\end{equation*}
and
\begin{equation*}
\dim(Q_h) = n_P \frac{(k+1)k}{2} - 1 ,
\end{equation*}
where $n_P$ is the number of elements and $n_e$, $n_V$ are, respectively, the number of internal edges and of vertices in $\Omega_h$.
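As an illustration of the counts above (a simple check, not needed in the sequel), for $k = 2$ the formulas reduce to
\[
\dim(\mathbf{V}_h) = n_P \left( \frac{3 \cdot 2}{2} - 1 + 0 \right) + 2 \left( n_V + n_e \right) = 2 \left( n_P + n_V + n_e \right),
\qquad
\dim(Q_h) = 3 \, n_P - 1 .
\]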
As observed in \cite{Stokes:divfree}, we remark that
\begin{equation}\label{eq:divfree}
{\rm div}\, \mathbf{V}_h\subseteq Q_h .
\end{equation}
\subsection{Discrete bilinear forms and load term approximation}
\label{sub:3.2}
The next step in the construction of our method is to define a discrete version of the bilinear forms $a(\cdot, \cdot)$ and $b(\cdot, \cdot)$ given in \eqref{eq:forma a} and \eqref{eq:forma b} and trilinear form $c(\cdot; \cdot, \cdot)$ in \eqref{eq:forma c}.
Here and in the rest of the paper the symbol $C$ will indicate a generic positive constant, independent of the mesh size (and of $\nu$), which may depend on $\Omega$ and on the polynomial degree $k$; its value may change at each occurrence.
First of all we decompose into local contributions the bilinear forms $a(\cdot, \cdot)$, $b(\cdot, \cdot)$, the trilinear form $c(\cdot; \cdot, \cdot)$ and the norms $\|\cdot\|_{\mathbf{V}}$, $\|\cdot \|_Q$ by defining
\begin{equation*}
a (\mathbf{u}, \mathbf{v}) =: \sum_{E \in \Omega_h} a^E (\mathbf{u}, \mathbf{v}) \qquad \text{for all $\mathbf{u}, \mathbf{v} \in \mathbf{V}$}
\end{equation*}
\begin{equation*}
b (\mathbf{v}, q) =: \sum_{E \in \Omega_h} b^E (\mathbf{v}, q) \qquad \text{for all $\mathbf{v} \in \mathbf{V}$ and $q \in Q$,}
\end{equation*}
\begin{equation*}
c(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) =: \sum_{E \in \Omega_h} c^E(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) \qquad \text{for all $\mathbf{w}, \mathbf{u}, \mathbf{v} \in \mathbf{V}$,}
\end{equation*}
and
\begin{equation*}
\|\mathbf{v}\|_{\mathbf{V}} =: \left(\sum_{E \in \Omega_h} \|\mathbf{v}\|^2_{\mathbf{V}, E}\right)^{1/2} \quad \text{for all $\mathbf{v} \in \mathbf{V}$,} \qquad \|q\|_Q =: \left(\sum_{E \in \Omega_h} \|q\|^2_{Q, E}\right)^{1/2} \quad \text{for all $q \in Q$.}
\end{equation*}
Concerning $b(\cdot, \cdot)$, we simply set
\begin{equation}\label{bhform}
b(\mathbf{v}, q) = \sum_{E \in \Omega_h} b^E(\mathbf{v}, q) = \sum_{E \in \Omega_h} \int_E {\rm div} \, \mathbf{v} \, q \,{\rm d}E \qquad \text{for all $\mathbf{v} \in \mathbf{V}_h$, $q \in Q_h$},
\end{equation}
i.e., as noticed in \cite{Stokes:divfree}, we do not introduce any approximation of the bilinear form. We notice that~\eqref{bhform}
is computable from the degrees of freedom $\mathbf{D_V1}$, $\mathbf{D_V2}$ and $\mathbf{D_V4}$, since $q$ is a polynomial on each element $E \in \Omega_h$.
We now define discrete versions of the forms $a(\cdot, \cdot)$ (cf.~\eqref{eq:forma a}) and $c(\cdot; \,\cdot, \cdot)$ (cf.~\eqref{eq:forma c}), which need to be dealt with in a more careful way.
First of all, we note that for an arbitrary pair $(\mathbf{u},\mathbf{v} )\in \mathbf{V}_h^E \times \mathbf{V}_h^E $, the quantity $a^E(\mathbf{u}, \mathbf{v})$ is not computable from the degrees of freedom. Therefore, following a standard procedure in the VEM framework, we define a computable discrete local bilinear form
\begin{equation}
\label{eq:a_h^E}
a_h^E(\cdot, \cdot) \colon \mathbf{V}_h^E \times \mathbf{V}_h^E \to \numberset{R}
\end{equation}
approximating the continuous form $a^E(\cdot, \cdot)$, and defined by
\begin{equation}
\label{eq:a_h^E def}
a_h^E(\mathbf{u}_h, \mathbf{v}_h) := a^E \left({\Pi^{\nabla, E}_k} \mathbf{u}_h, \, {\Pi^{\nabla, E}_k} \mathbf{v}_h \right) + \mathcal{S}^E \left((I - {\Pi^{\nabla, E}_k}) \mathbf{u}_h, \, (I -{\Pi^{\nabla, E}_k}) \mathbf{v}_h \right)
\end{equation}
for all $\mathbf{u}_h, \mathbf{v}_h \in \mathbf{V}_h^E$, where the (symmetric) stabilizing bilinear form $\mathcal{S}^E \colon \mathbf{V}_h^E \times \mathbf{V}_h^E \to \numberset{R}$ satisfies (see Remark \ref{rm:stabilizzazione})
\begin{equation}
\label{eq:S^E}
\alpha_* a^E(\mathbf{v}, \mathbf{v}) \leq \mathcal{S}^E(\mathbf{v}, \mathbf{v}) \leq \alpha^* a^E(\mathbf{v}, \mathbf{v}) \qquad \text{for all $\mathbf{v} \in \mathbf{V}_h^E$ such that ${\Pi}_{k}^{\nabla ,E} \mathbf{v}= \mathbf{0}$}
\end{equation}
with $\alpha_*$ and $\alpha^*$ positive constants independent of the element $E$.
It is straightforward to check that Definition~\eqref{eq:Pn_k^E} and properties~\eqref{eq:S^E} imply
\begin{itemize}
\item $\mathbf{k}$\textbf{-consistency}: for all $\mathbf{q}_k \in [\numberset{P}_k(E)]^2$ and $\mathbf{v} \in \mathbf{V}_h^E$
\begin{equation}\label{eq:consist}
a_h^E(\mathbf{q}_k, \mathbf{v}) = a^E( \mathbf{q}_k, \mathbf{v});
\end{equation}
\item \textbf{stability}: there exist two positive constants $\alpha_*$ and $\alpha^*$, independent of $h$ and $E$, such that, for all $\mathbf{v} \in \mathbf{V}_h^E$, it holds
\begin{equation}\label{eq:stabk}
\alpha_* a^E(\mathbf{v}, \mathbf{v}) \leq a_h^E(\mathbf{v}, \mathbf{v}) \leq \alpha^* a^E(\mathbf{v}, \mathbf{v}).
\end{equation}
\end{itemize}
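For the reader's convenience, we sketch the standard verification of the consistency property \eqref{eq:consist}; it only uses the bilinearity of $\mathcal{S}^E$, the projection property ${\Pi^{\nabla, E}_k} \mathbf{q}_k = \mathbf{q}_k$, and the orthogonality defining ${\Pi^{\nabla, E}_k}$ in \eqref{eq:Pn_k^E}:
\[
a_h^E(\mathbf{q}_k, \mathbf{v})
= a^E \big(\mathbf{q}_k, \, {\Pi^{\nabla, E}_k} \mathbf{v} \big) + \mathcal{S}^E \big(\mathbf{0}, \, (I - {\Pi^{\nabla, E}_k}) \mathbf{v} \big)
= a^E \big(\mathbf{q}_k, \, {\Pi^{\nabla, E}_k} \mathbf{v} \big)
= a^E(\mathbf{q}_k, \mathbf{v}),
\]
the last equality holding since $a^E \big(\mathbf{q}_k, \, \mathbf{v} - {\Pi^{\nabla, E}_k} \mathbf{v} \big) = 0$ by definition of the projection ${\Pi^{\nabla, E}_k}$.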
\begin{remark}
\label{rm:stabilizzazione}
Condition \eqref{eq:S^E} essentially requires that the stabilizing term $\mathcal{S}^E(\mathbf{v}_h, \mathbf{v}_h)$ scales as $a^E(\mathbf{v}_h, \mathbf{v}_h)$. For instance, following the most standard VEM choice (cf. \cite{volley,VEM-hitchhikers,2016stability}), denoting with $\vec{\mathbf{u}}_h$, $\vec{\mathbf{v}}_h \in \numberset{R}^{N_{DoFs, E}}$
the vectors containing the values of the $N_{DoFs, E}$ degrees of freedom associated to $\mathbf{u}_h, \mathbf{v}_h \in \mathbf{V}_h^E$, we set
\[
\mathcal{S}^E (\mathbf{u}_h, \mathbf{v}_h) = \alpha^E \, \vec{\mathbf{u}}_h^T \vec{\mathbf{v}}_h ,
\]
where $\alpha^E$ is a suitable positive constant. For example, in the numerical tests presented in Section \ref{sec:5}, we have chosen $\alpha^E$ as the mean value of the non-zero eigenvalues of the matrix stemming from the
term $a^E \left({\Pi^{\nabla, E}_k} \mathbf{u}_h,\, {\Pi^{\nabla, E}_k} \mathbf{v}_h \right) $ in \eqref{eq:a_h^E def}.
\end{remark}
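In matrix terms (with notation introduced here only for illustration), denoting by $\mathbf{K}^E_{\Pi}$ the elemental matrix associated with the consistency term and by $\lambda_1, \dots, \lambda_{n_+}$ its non-zero eigenvalues, the above choice can be written as
\[
\alpha^E = \frac{1}{n_+} \sum_{i=1}^{n_+} \lambda_i = \frac{{\rm tr}(\mathbf{K}^E_{\Pi})}{n_+} ,
\]
since $\mathbf{K}^E_{\Pi}$ is symmetric positive semi-definite and its trace equals the sum of its eigenvalues.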
Finally we define the global approximated bilinear form $a_h(\cdot, \cdot) \colon \mathbf{V}_h \times \mathbf{V}_h \to \numberset{R}$ by simply summing the local contributions:
\begin{equation}
\label{eq:a_h}
a_h(\mathbf{u}_h, \mathbf{v}_h) := \sum_{E \in \Omega_h} a_h^E(\mathbf{u}_h, \mathbf{v}_h) \qquad \text{for all $\mathbf{u}_h, \mathbf{v}_h \in \mathbf{V}_h$.}
\end{equation}
Concerning the approximation of the local trilinear form $c^E(\cdot; \, \cdot, \cdot)$, we set
\begin{equation}
\label{eq:c_h^E}
c_h^E(\mathbf{w}_h; \, \mathbf{u}_h, \mathbf{v}_h) := \int_E \left[ \left({\boldsymbol{\Pi}^{0, E}_{k-1}} \, \boldsymbol{\nabla} \mathbf{u}_h \right) \left({\Pi^{0, E}_k} \mathbf{w}_h \right) \right] \cdot {\Pi^{0, E}_k} \mathbf{v}_h \, {\rm d}E
\qquad \text{for all $\mathbf{w}_h, \mathbf{u}_h, \mathbf{v}_h \in \mathbf{V}_h$}
\end{equation}
and note that all quantities in the previous formula are computable, in the sense of Proposition \ref{prp:projections}.
As usual we define the global approximated trilinear form by adding the local contributions:
\begin{equation}
\label{eq:c_h}
c_h(\mathbf{w}_h; \, \mathbf{u}_h, \mathbf{v}_h) := \sum_{E \in \Omega_h} c_h^E(\mathbf{w}_h; \, \mathbf{u}_h, \mathbf{v}_h), \qquad \text{for all $\mathbf{w}_h, \mathbf{u}_h, \mathbf{v}_h \in \mathbf{V}_h$.}
\end{equation}
We first notice that the form $c_h(\cdot; \, \cdot, \cdot)$ extends immediately to the whole $\mathbf{V}$ (simply apply the same definition for any $\mathbf{w}, \mathbf{u}, \mathbf{v} \in \mathbf{V}$). Moreover, we now show that it is continuous on $\mathbf{V}$, uniformly in $h$.
\begin{proposition}
\label{prp:continuity-ch}
Let
\begin{equation}
\label{eq:CC}
\widehat{C}_h := \sup_{\mathbf{w}, \mathbf{u}, \mathbf{v} \in \mathbf{V}} \frac{|c_h(\mathbf{w}; \, \mathbf{u}, \mathbf{v})|}{\|\mathbf{w}\|_{\mathbf{V}} \|\mathbf{u}\|_{\mathbf{V}} \|\mathbf{v}\|_{\mathbf{V}}}.
\end{equation}
Then $\widehat{C}_h$ is bounded uniformly in $h$, i.e. the trilinear form $c_h(\cdot; \, \cdot, \cdot)$ is continuous with a continuity constant independent of $h$.
\end{proposition}
\begin{proof}
By a direct computation it holds
\begin{equation}
\label{eq:continuity-ch1}
\begin{split}
c_h(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) & = \sum_{E \in \Omega_h} c_h^E(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) =
\sum_{E \in \Omega_h} \int_E \left[ \left({\boldsymbol{\Pi}^{0, E}_{k-1}} \, \boldsymbol{\nabla} \mathbf{u} \right) \left({\Pi^{0, E}_k} \mathbf{w} \right) \right] \cdot {\Pi^{0, E}_k} \mathbf{v} \, {\rm d}E \\
& \leq \sum_{i,j=1}^2 \, \sum_{E \in \Omega_h}
\left\|\Pi^{0, E}_{k-1} \, \frac{\partial \mathbf{u}_i}{\partial x_j} \right\|_{0,E}
\left\| {\Pi^{0, E}_k} \mathbf{w}_j\right\|_{L^4(E)}
\left\| {\Pi^{0, E}_k} \mathbf{v}_i\right\|_{L^4(E)}
\end{split}
\end{equation}
where the last inequality follows from the H\"older inequality.
Let us analyse each term on the right-hand side of \eqref{eq:continuity-ch1}. Employing the continuity of the projection $\Pi^{0, E}_{k-1}$ with respect to the $L^2$-norm we easily get
\begin{equation}
\label{eq:continuity-ch2}
\left\|\Pi^{0, E}_{k-1} \, \frac{\partial \mathbf{u}_i}{\partial x_j} \right\|_{0,E} \leq \left\| \frac{\partial \mathbf{u}_i}{\partial x_j} \right\|_{0,E}.
\end{equation}
Concerning the second term (the third one is analogous), we get
\begin{equation}
\label{eq:continuity-ch3}
\begin{aligned}
\left\| {\Pi^{0, E}_k} \mathbf{w}_j\right\|_{L^4(E)} & \leq C h_E^{-\frac{1}{2}} \left\| {\Pi^{0, E}_k} \mathbf{w}_j\right\|_{0,E} &\text{(inverse estimate for polynomials)} \\
& \leq C h_E^{-\frac{1}{2}} \left\| \mathbf{w}_j\right\|_{0,E} &\text{(continuity of ${\Pi^{0, E}_k}$ with respect to $\|\cdot\|_{0,E}$)} \\
& \leq C h_E^{-\frac{1}{2}} \, \|1\|_{L^4(E)} \, \left\| \mathbf{w}_j\right\|_{L^4(E)} &\text{(H\"older inequality)} \\
& \leq C h_E^{-\frac{1}{2}} \, (h_E^2)^{\frac{1}{4}} \, \left\| \mathbf{w}_j\right\|_{L^4(E)} &\text{(definition of $h_E$)} \\
&\leq C \, \left\| \mathbf{w}_j\right\|_{L^4(E)}.
\end{aligned}
\end{equation}
Collecting \eqref{eq:continuity-ch2} and \eqref{eq:continuity-ch3} in \eqref{eq:continuity-ch1} we obtain
\begin{equation}
\label{eq:continuity-ch4}
c_h(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) \leq C \sum_{i,j=1}^2 \, \sum_{E \in \Omega_h} \left\| \frac{\partial \mathbf{u}_i}{\partial x_j} \right\|_{0,E} \, \left\| \mathbf{w}_j\right\|_{L^4(E)} \, \left\| \mathbf{v}_i\right\|_{L^4(E)}.
\end{equation}
Now, applying the H\"older inequality (for sequences), we get
\begin{equation}
\label{eq:continuity-ch5}
\begin{split}
c_h(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) & \leq C \sum_{i,j=1}^2 \, \left( \sum_{E \in \Omega_h} \left\| \frac{\partial \mathbf{u}_i}{\partial x_j} \right\|^2_{0,E} \right)^{\frac{1}{2}} \, \left(\sum_{E \in \Omega_h} \left\| \mathbf{w}_j\right\|^4_{L^4(E)} \right)^{\frac{1}{4}} \, \left( \sum_{E \in \Omega_h}\left\| \mathbf{v}_i\right\|^4_{L^4(E)} \right)^{\frac{1}{4}} \\
& \leq C \sum_{i,j=1}^2 \left\| \frac{\partial \mathbf{u}_i}{\partial x_j} \right\|_{0} \, \left\| \mathbf{w}_j\right\|_{L^4(\Omega)} \, \left\| \mathbf{v}_i\right\|_{L^4(\Omega)}.
\end{split}
\end{equation}
Finally, since $H^1(\Omega) \subset L^4(\Omega)$ by the Sobolev embedding, we obtain
\[
c_h(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) \leq C \, \|\mathbf{w}\|_{\mathbf{V}} \|\mathbf{u}\|_{\mathbf{V}} \|\mathbf{v}\|_{\mathbf{V}},
\]
with $C$ independent of $h$; recalling \eqref{eq:CC}, this shows that $\widehat{C}_h \leq C$ uniformly in $h$.
\end{proof}
We can also define the local discrete skew-symmetric trilinear form $\widetilde{c}_h^E(\cdot; \, \cdot, \cdot)\colon \mathbf{V} \times \mathbf{V} \times \mathbf{V} \to \numberset{R}$ simply setting
\begin{equation}
\label{eq:ctilde_h^E}
\widetilde{c}_h^E(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) := \frac{1}{2}c_h^E(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) - \frac{1}{2}c_h^E(\mathbf{w}; \, \mathbf{v}, \mathbf{u})
\qquad \text{for all $\mathbf{w}, \mathbf{u}, \mathbf{v} \in \mathbf{V}$}
\end{equation}
with obvious global extension
\begin{equation}
\label{eq:ctilde_h}
\widetilde{c}_h(\mathbf{w}; \, \mathbf{u}, \mathbf{v}) := \sum_{E \in \Omega_h} \widetilde{c}_h^E(\mathbf{w}; \, \mathbf{u}, \mathbf{v}), \qquad \text{for all $\mathbf{w}, \mathbf{u}, \mathbf{v} \in \mathbf{V}$,}
\end{equation}
that is (obviously) still continuous and computable.
The last step consists in constructing a computable approximation of the right-hand side $(\mathbf{f}, \, \mathbf{v})$ in \eqref{eq:ns variazionale}. We define the approximated load term $\mathbf{f}_h$ as
\begin{equation}
\label{eq:f_h}
(\mathbf{f}_h)_{|E} := \Pi_{k}^{0,E} \mathbf{f} \qquad \text{for all $E \in \Omega_h$,}
\end{equation}
and consider:
\begin{equation}
\label{eq:right}
(\mathbf{f}_h, \mathbf{v}_h) = \sum_{E \in \Omega_h} \int_E \mathbf{f}_h \cdot \mathbf{v}_h \, {\rm d}E = \sum_{E \in \Omega_h} \int_E \Pi_{k}^{0,E} \mathbf{f} \cdot \mathbf{v}_h \, {\rm d}E = \sum_{E \in \Omega_h} \int_E \mathbf{f} \cdot \Pi_{k}^{0,E} \mathbf{v}_h \, {\rm d}E.
\end{equation}
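We also note, for the reader's convenience, that the last equality in \eqref{eq:right} is a direct consequence of the definition of the $L^2(E)$-projection: on each element
\[
\int_E \Pi_{k}^{0,E} \mathbf{f} \cdot \mathbf{v}_h \, {\rm d}E
= \int_E \Pi_{k}^{0,E} \mathbf{f} \cdot \Pi_{k}^{0,E} \mathbf{v}_h \, {\rm d}E
= \int_E \mathbf{f} \cdot \Pi_{k}^{0,E} \mathbf{v}_h \, {\rm d}E,
\]
where both steps use that the difference between a function and its projection is $L^2(E)$-orthogonal to $[\numberset{P}_k(E)]^2$.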
We observe that \eqref{eq:right} can be exactly computed from $\mathbf{D_V}$ for all $\mathbf{v}_h \in \mathbf{V}_h$ (see Proposition \ref{prp:projections}).
\subsection{The discrete problem}\label{sec:discrete}
\label{sub:3.3}
We are now ready to state the proposed discrete problem. Referring to~\eqref{eq:V_h}, \eqref{eq:Q_h}, \eqref{eq:a_h}, \eqref{eq:ctilde_h} and \eqref{bhform}, we consider the \textbf{virtual element problem}:
\begin{equation}
\label{eq:ns virtual}
\left\{
\begin{aligned}
& \text{find $(\mathbf{u}_h, p_h) \in \mathbf{V}_h \times Q_h$, such that} \\
& \nu \, a_h(\mathbf{u}_h, \mathbf{v}_h) + \widetilde{c}_h(\mathbf{u}_h; \, \mathbf{u}_h, \mathbf{v}_h) + b(\mathbf{v}_h, p_h) = (\mathbf{f}_h, \mathbf{v}_h) \qquad & \text{for all $\mathbf{v}_h \in \mathbf{V}_h$,} \\
& b(\mathbf{u}_h, q_h) = 0 \qquad & \text{for all $q_h \in Q_h$.}
\end{aligned}
\right.
\end{equation}
We point out that the symmetry of $a_h(\cdot, \cdot)$ together with \eqref{eq:stabk} easily implies that $a_h(\cdot, \cdot)$ is continuous and coercive with respect to the $\mathbf{V}$-norm.
Moreover, as a direct consequence of Proposition 4.3 in \cite{Stokes:divfree}, we have the following stability result.
\begin{proposition}
\label{thm2} Given the discrete spaces
$\mathbf{V}_h$ and $Q_h$ defined in~\eqref{eq:V_h} and~\eqref{eq:Q_h}, there exists a positive $\widehat{\beta}$, independent of $h$, such that:
\begin{equation}
\label{eq:inf-sup discreta}
\sup_{\mathbf{v}_h \in \mathbf{V}_h, \, \mathbf{v}_h \neq \mathbf{0}} \frac{b(\mathbf{v}_h, q_h)}{ \|\mathbf{v}_h\|_{\mathbf{V}}} \geq \widehat{\beta} \|q_h\|_Q \qquad \text{for all $q_h \in Q_h$.}
\end{equation}
\end{proposition}
In particular, the inf-sup condition of Proposition~\ref{thm2}, along with property~\eqref{eq:divfree}, implies that:
\begin{equation*}
{\rm div}\, \mathbf{V}_h = Q_h .
\end{equation*}
The well-posedness of virtual problem \eqref{eq:ns virtual} is a consequence of the coercivity property of $a_h(\cdot,\cdot)$, the skew-symmetry of $\widetilde{c}_h(\cdot; \cdot,\cdot)$ and the inf-sup condition \eqref{eq:inf-sup discreta}. We have
\begin{theorem}
Assume that
\begin{equation}
\label{eq:ns virtual condition}
\gamma_h := \frac{\widehat{C}_h \, \|\mathbf{f}_h\|_{H^{-1}}}{\alpha_*^2 \, \nu^2} \le r < 1.
\end{equation}
Then Problem \eqref{eq:ns virtual} has a unique solution $(\mathbf{u}_h, p_h) \in \mathbf{V}_h \times Q_h$, and it holds
\begin{equation}
\label{eq:solution virtual estimates}
\| \mathbf{u}_h\|_{\mathbf{V}} \leq \frac{\| \mathbf{f}_h\|_{H^{-1}}}{\alpha_* \, \nu}.
\end{equation}
\end{theorem}
Moreover, as observed in \cite{Stokes:divfree}, introducing the discrete kernel
\begin{equation*}
\mathbf{Z}_h := \{ \mathbf{v}_h \in \mathbf{V}_h \quad \text{s.t.} \quad b(\mathbf{v}_h, q_h) = 0 \quad \text{for all $q_h \in Q_h$}\},
\end{equation*}
recalling \eqref{eq:divfree} it follows
\begin{equation}
\label{kernincl}
\mathbf{Z}_h \subseteq \mathbf{Z} .
\end{equation}
Problem \eqref{eq:ns virtual} can also be formulated in the equivalent kernel form
\begin{equation}
\label{eq:nsvirtual ker}
\left\{
\begin{aligned}
& \text{find $\mathbf{u}_h \in \mathbf{Z}_h$, such that} \\
& \nu \, a_h(\mathbf{u}_h, \mathbf{v}_h) + \widetilde{c}_h(\mathbf{u}_h; \, \mathbf{u}_h, \mathbf{v}_h) = (\mathbf{f}_h, \mathbf{v}_h) \qquad & \text{for all $\mathbf{v}_h \in \mathbf{Z}_h$.}
\end{aligned}
\right.
\end{equation}
\begin{remark}\label{rem:non-skew}
An alternative choice for the discretization \eqref{eq:ns virtual} is to substitute the skew-symmetric form $\widetilde{c}_h(\cdot;\cdot,\cdot)$ with
${c}_h(\cdot;\cdot,\cdot)$. With that choice, a theoretical analysis can be developed using the guidelines in \cite{Maday_Quarteroni} in connection
with the same tools and ideas of Section \ref{sec:4}. Here we prefer the choice \eqref{eq:ns virtual}, which allows for a more direct stability argument. Nevertheless, in the numerical tests of Section \ref{sec:5} we will investigate both possibilities.
\end{remark}
\begin{remark}\label{rem:Sto-Dar}
An additional interesting consequence of property \eqref{kernincl} is that, following \cite{Stokes:divfree,preprintdarcy}, the proposed virtual elements can accommodate both the Stokes (or Navier-Stokes) and the Darcy problems simultaneously. Indeed, due to property \eqref{kernincl}, the proposed velocity-pressure couple turns out to be stable not only for the Stokes problem, but also for the Darcy problem. This yields an interesting advantage in complex flow problems where both equations are present: the same spaces can be used in the whole computational domain. As a consequence, the implementation of the method and the enforcement of the interface conditions are greatly simplified (see also Section \ref{test6}).
\end{remark}
\section{Theoretical analysis}
\label{sec:4}
\subsection{Interpolation estimates}
\label{sub:4.1}
In this section we prove the following interpolation estimate for the enhanced space $\mathbf{V}_h$. Since the proof is quite involved, we divide it into three steps.
\begin{theorem}
\label{thm:interpolante}
Let $\mathbf{v} \in [H^{s+1}(\Omega)]^2 \cap \mathbf{V}$, for $0<s \le k$. Then there exists $\mathbf{v}_I \in \mathbf{V}_h$ such that
\[
\| \mathbf{v} - \mathbf{v}_I \|_0 + h \, \| \mathbf{v} - \mathbf{v}_I \|_{\mathbf{V}} \leq C \, h^{s+1} \, | \mathbf{v} |_{s+1},
\]
where the constant $C$ depends only on the degree $k$ and the shape regularity constants $\rho,c$ (see assumptions $\mathbf{(A1)}$ and $\mathbf{(A2)}$ of Section \ref{sub:3.1}).
\end{theorem}
\begin{proof}
\textit{Step 1.}
Let $\mathbf{w}_I \in \mathbf{W}_h$ be the interpolant of $\mathbf{v}$ (cf. \eqref{eq:W_h} and Proposition 4.2 in \cite{Stokes:divfree}); then it holds that
\begin{equation}
\label{eq:interpolata old}
\| \mathbf{v} - \mathbf{w}_I \|_0 + h \, \| \mathbf{v} - \mathbf{w}_I \|_{\mathbf{V}} \leq C \, h^{s+1} \, | \mathbf{v} |_{s+1}.
\end{equation}
Now let $\mathbf{v}_I \in \mathbf{V}_h$ be the interpolant of $\mathbf{w}_I$ in the sense of the DoFs $\mathbf{D_V}$, so that
\begin{equation}\label{Dv-eq-I}
\mathbf{D_V}(\mathbf{v}_I)= \mathbf{D_V}(\mathbf{w}_I).
\end{equation}
Let us define $\boldsymbol{\theta} := \mathbf{v}_I - \mathbf{w}_I$; then for every element $E \in \Omega_h$ the following facts hold.
\begin{itemize}
\item Since $\mathbf{v}_I$ and $\mathbf{w}_I$ are polynomials of degree $k$ on $\partial E$, by definition of $\mathbf{D_V1}$ and $\mathbf{D_V2}$, we have
\begin{equation}
\label{eq:inter1}
\boldsymbol{\theta} = \mathbf{0} \qquad \text{on $\partial E$.}
\end{equation}
\item Since ${\rm div} \, \mathbf{v}_I$ and ${\rm div} \, \mathbf{w}_I$ are polynomials of degree $k-1$ in $E$, by definition of $\mathbf{D_V4}$ and homogeneous boundary data \eqref{eq:inter1}, we get
\begin{equation}
\label{eq:inter2}
{\rm div} \, \boldsymbol{\theta} = 0 \qquad \text{in $E$.}
\end{equation}
\item Let $d^E(\cdot, \, \cdot) \colon [H^1_0(E)]^2 \times \mathcal{G}_k^{\oplus}(E) \to \numberset{R}$ be given by
\[
d^E(\mathbf{v}, \, \mathbf{g}_{k}^{\oplus}) =\int_E \mathbf{v} \cdot \mathbf{g}_{k}^{\oplus} \,{\rm d}E \qquad \text{for all $\mathbf{v} \in [H^1_0(E)]^2$ and $\mathbf{g}_{k}^{\oplus} \in \mathcal{G}_{k}^{\oplus}(E)$} .
\]
Then by definition of $\mathbf{D_V3}$, we infer
\begin{equation}\label{added:star1}
d^E(\boldsymbol{\theta}, \, \mathbf{g}_{k-2}^{\oplus}) = 0\qquad \text{for all $\mathbf{g}_{k-2}^{\oplus} \in \mathcal{G}_{k-2}^{\oplus}(E)$}.
\end{equation}
Now we recall that, for any $\mathbf{v}_h \in \mathbf{V}_h$, the quantity ${\Pi^{\nabla, E}_k} \mathbf{v}_h$ depends only on the values of $\mathbf{D_V}(\mathbf{v}_h)$, see Proposition \ref{prp:projections}. Therefore, using \eqref{Dv-eq-I}, we have that ${\Pi^{\nabla, E}_k}\mathbf{v}_I = {\Pi^{\nabla, E}_k}\mathbf{w}_I$. As a consequence, by definition of $\mathbf{V}_h^E$ it holds
\begin{equation}\label{added:star2}
d^E(\boldsymbol{\theta}, \, {\mathbf{g}}^{\perp}) = \int_E \left( {\Pi^{\nabla, E}_k} \mathbf{v}_I - \mathbf{w}_I \right) \cdot {\mathbf{g}}^{\perp} \, {\rm d}E = \int_E \left( {\Pi^{\nabla, E}_k} \mathbf{w}_I - \mathbf{w}_I \right) \cdot {\mathbf{g}}^{\perp} \, {\rm d}E
\end{equation}
for all $\mathbf{g}^{\perp} \in \mathcal{G}_{k}^{\oplus}(E) \setminus \mathcal{G}_{k-2}^{\oplus}(E)$. Thus, by \eqref{added:star1} and \eqref{added:star2},
\begin{equation}
\label{eq:inter3}
d^E(\boldsymbol{\theta}, \, \mathbf{g}_{k}^{\oplus}) = (\boldsymbol{\chi}, \, \mathbf{g}_{k}^{\oplus}) \qquad \text{for all $\mathbf{g}_{k}^{\oplus} \in \mathcal{G}_{k}^{\oplus}(E)$}
\end{equation}
where
\begin{equation}\label{eq:chi}
\mbox{
$\boldsymbol{\chi}$ is the $L^2$-projection of $\left( {\Pi^{\nabla, E}_k} \mathbf{w}_I - \mathbf{w}_I \right)$ onto $\mathcal{G}_{k}^{\oplus}(E) \setminus \mathcal{G}_{k-2}^{\oplus}(E)$.}
\end{equation}
\item By definition of $\mathbf{W}_h^E$ and $\mathbf{V}_h^E$ there exist $\widehat{s} \in L_0^2(E)$ and $\widehat{\mathbf{g}} \in \mathcal{G}_{k}^{\oplus}(E)$ such that
\begin{equation}
\label{eq:inter4}
a^E(\boldsymbol{\theta}, \mathbf{v}) +b^E(\mathbf{v}, \widehat{s}) + d^E(\mathbf{v}, \widehat{\mathbf{g}}) = 0 \qquad \text{for all $\mathbf{v} \in [H^1_0(E)]^2$.}
\end{equation}
\end{itemize}
Collecting \eqref{eq:inter1}, \eqref{eq:inter2}, \eqref{eq:inter3}, \eqref{eq:inter4} it follows that $(\boldsymbol{\theta}, \, \widehat{s}, \, \widehat{\mathbf{g}})$ solves the problem
\begin{equation}
\label{eq:pbinter}
\left\{
\begin{aligned}
& \text{Find $(\boldsymbol{\theta}, \, \widehat{s}, \, \widehat{\mathbf{g}}) \in [H^1_0(E)]^2 \times L^2_0(E) \times \mathcal{G}_{k}^{\oplus}(E)$, such that} \\
& a^E(\boldsymbol{\theta}, \boldsymbol{\psi}) + b^E(\boldsymbol{\psi}, \widehat{s}) + d^E(\boldsymbol{\psi}, \widehat{\mathbf{g}}) = 0 \qquad & \text{for all $\boldsymbol{\psi} \in [H^1_0(E)]^2$,} \\
& b^E(\boldsymbol{\theta}, q) = 0 \qquad & \text{for all $q \in L^2_0(E)$,} \\
& d^E(\boldsymbol{\theta}, \mathbf{h}) = (\boldsymbol{\chi}, \mathbf{h}) \qquad & \text{for all $\mathbf{h} \in \mathcal{G}_{k}^{\oplus}(E)$.}
\end{aligned}
\right.
\end{equation}
\textit{Step 2.}
We now analyse the well-posedness of Problem \eqref{eq:pbinter}. We consider $[H^1_0(E)]^2$ and $L^2(E)$ endowed with the $H^1$ and the $L^2$-norm, respectively, and $\mathcal{G}_{k}^{\oplus}(E)$ endowed with the scaled norm
\[
\|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)} := h_E \, \|\mathbf{h}\|_{0,E} \qquad \text{for all $\mathbf{h} \in \mathcal{G}_{k}^{\oplus}(E)$.}
\]
Then for all $\boldsymbol{\psi} \in [H^1_0(E)]^2$ and $\mathbf{h} \in \mathcal{G}_{k}^{\oplus}(E)$
\begin{equation}
\label{eq:dcon}
d^E(\boldsymbol{\psi}, \mathbf{h}) = \int_E \boldsymbol{\psi} \cdot \mathbf{h} \, {\rm d}E \leq \|{\boldsymbol{\psi}}\|_{0,E} \|\mathbf{h}\|_{0,E} \leq c_{{\rm cont}} \, |\boldsymbol{\psi}|_{1,E} \, h_E \|\mathbf{h}\|_{0,E} ,
\end{equation}
where the last inequality follows by a scaled Poincar\'e inequality.
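For completeness, we recall the form of the scaled Poincar\'e inequality used here (a standard scaling argument): for all $\boldsymbol{\psi} \in [H^1_0(E)]^2$,
\[
\|\boldsymbol{\psi}\|_{0,E} \leq c_{{\rm cont}} \, h_E \, |\boldsymbol{\psi}|_{1,E},
\]
with $c_{{\rm cont}}$ independent of $h_E$.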
Therefore all the involved bilinear forms are continuous. By the theory of problems in mixed form \cite{BoffiBrezziFortin}, due to the coercivity of $a^E(\cdot,\cdot)$ the well-posedness of problem \eqref{eq:pbinter} will follow if we show an inf-sup condition for the form
$$
b^E(\cdot,\cdot) \! + \! d^E(\cdot,\cdot) \ \colon \
[H^1_0(E)]^2 \times \big(L^2_0(E) \times \mathcal{G}_{k}^{\oplus}(E)\big) \to \numberset{R} .
$$
In other words, for all $(q, \mathbf{h}) \in L^2_0(E) \times \mathcal{G}_{k}^{\oplus}(E)$ we have to find $\boldsymbol{\phi} \in [H^1_0(E)]^2$ such that
\begin{equation}
\label{eq:is3inter}
\left\{
\begin{aligned}
& |\boldsymbol{\phi}|_{1,E} \leq b_0\, (\|q\|_{0,E} + \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)} ) \\
& b^E(\boldsymbol{\phi}, q) + d^E(\boldsymbol{\phi}, \mathbf{h}) \geq c_0 \, ( \|q\|_{0,E} + \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)})^2
\end{aligned}
\right.
\end{equation}
for suitable uniform positive constants $b_0$, $c_0$.
It is well known (see \cite{BoffiBrezziFortin}) that for all $q \in L^2_0(E)$ there exists $\boldsymbol{\phi}_1 \in [H^1_0(E)]^2$ such that
\begin{equation}
\label{eq:is1inter}
\left\{
\begin{aligned}
& |\boldsymbol{\phi}_1|_{1,E} \leq b_1 \, \|q\|_{0,E} \\
& b^E(\boldsymbol{\phi}_1, q) \geq c_1 \, \|q\|^2_{0,E}.
\end{aligned}
\right.
\end{equation}
Now let $T_E \subset E$ be an equilateral triangle inscribed in the ball $B_E$ (cf. assumption $\mathbf{(A1)}$). Then for every polynomial $p \in \numberset{P}_k(E)$ it holds $\|p\|_{0,E} \leq C \|p\|_{0, T_E}$ for a suitable uniform constant $C$. Let $\mathbf{h} \in \mathcal{G}_{k}^{\oplus}(E)$ and define
\[
q := {\rm rot} (\mathbf{h}) \qquad \text{and} \qquad \boldsymbol{\phi}_2 := h_E^4 \, \boldsymbol{{\rm curl}}(b q)
\]
where $b \in \numberset{P}_3(T_E)$ denotes the standard cubic bubble in $T_E$ with unit maximum value. Therefore, we get
\begin{equation}
\label{eq:is21inter}
\begin{split}
d^E(\boldsymbol{\phi}_2, \mathbf{h}) & = h_E^4 \, \int_E \boldsymbol{{\rm curl}}(bq) \cdot \mathbf{h} \, {\rm d}E
= h_E^4 \, \int_E b q \, {\rm rot}(\mathbf{h}) \, {\rm d}E
= h_E^4 \, \int_E b \, {\rm rot}(\mathbf{h})^2 \, {\rm d}E \\
& \geq C h_E^4 \, \|{\rm rot}(\mathbf{h})\|_{0,T_E}^2,
\end{split}
\end{equation}
where the last bound follows since $b \geq 0$ and since, on the finite-dimensional space $\numberset{P}_{k-1}(T_E)$, the bubble-weighted norm $q \mapsto \big( \int_{T_E} b \, q^2 \, {\rm d}E \big)^{1/2}$ is equivalent to $\|\cdot\|_{0,T_E}$ (by a scaling argument). Since ${\rm rot} \colon \mathcal{G}_k^{\oplus}(T_E)\to \numberset{P}_{k-1}(T_E)$ is an isomorphism (see \cite{supermisti}), a scaling argument for polynomials on the triangle $T_E$ yields $\|{\rm rot}(\mathbf{h})\|_{0,T_E} \geq C \, h_E^{-1} \|\mathbf{h}\|_{0,T_E}$. Thus using \eqref{eq:is21inter} we find
\begin{equation}
\label{eq:is22inter}
d^E(\boldsymbol{\phi}_2, \mathbf{h}) \geq C \,h_E^4 \, h_E^{-2} \, \|\mathbf{h}\|_{0,T_E}^2 \geq C \, h_E^{2} \, \|\mathbf{h}\|_{0,E}^2 = C \, \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)}^2.
\end{equation}
Moreover, using inverse estimates for the polynomials $b q$ and $\mathbf{h}$, we obtain
\begin{equation}
\label{eq:is23inter}
\begin{split}
|\boldsymbol{\phi}_2|_{1,E} &= h_E^4 \,|\boldsymbol{{\rm curl}}(b q)|_{1,E} \leq C h_E^4 \, \, h_E^{-2} \|b q\|_{0,E} \leq C \, h_E^{2} \|q\|_{0,E} \\
& = C \, h_E^{2} \|{\rm rot}(\mathbf{h})\|_{0,E} \leq C \, h_E \, \|\mathbf{h}\|_{0,E} = C \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)}.
\end{split}
\end{equation}
Therefore by \eqref{eq:is22inter} and \eqref{eq:is23inter} for all $\mathbf{h} \in \mathcal{G}_{k}^{\oplus}(E)$ we find $\boldsymbol{\phi}_2 \in [H^1_0(E)]^2$ such that
\begin{equation}
\label{eq:is2inter}
\left\{
\begin{aligned}
& |\boldsymbol{\phi}_2|_{1,E} \leq b_2 \, \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)} \\
& d^E(\boldsymbol{\phi}_2, \mathbf{h}) \geq c_2 \, \|\mathbf{h}\|^2_{\mathcal{G}_{k}^{\oplus}(E)}.
\end{aligned}
\right.
\end{equation}
Recalling \eqref{eq:is3inter}, let us set $\boldsymbol{\phi} := \boldsymbol{\phi}_1 + \xi \,\boldsymbol{\phi}_2$ (cf. \eqref{eq:is1inter} and \eqref{eq:is2inter}) where $\xi$ is a positive constant.
Then, it is clear that
\begin{equation}
\label{eq:is31inter}
|\boldsymbol{\phi}|_{1,E} \leq |\boldsymbol{\phi}_1|_{1,E} + |\boldsymbol{\phi}_2|_{1,E} \leq \max \{b_1, \, b_2 \} \, (1 + \xi) (\|q\|_{0,E} + \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)} ).
\end{equation}
Moreover, by \eqref{eq:dcon} and since ${\rm div} \, \boldsymbol{{\rm curl}} = 0$, we have
\begin{equation}
\label{eq:is32inter}
\begin{split}
b^E(\boldsymbol{\phi}, q) + d^E(\boldsymbol{\phi}, \mathbf{h}) &= b^E(\boldsymbol{\phi}_1, q) + d^E(\boldsymbol{\phi}_1, \mathbf{h}) + \xi \, b^E(\boldsymbol{\phi}_2, q) + \xi \,d^E(\boldsymbol{\phi}_2, \mathbf{h}) \\
& = b^E(\boldsymbol{\phi}_1, q) + d^E(\boldsymbol{\phi}_1, \mathbf{h}) + \xi \, d^E(\boldsymbol{\phi}_2, \mathbf{h}) \\
& \geq c_1 \, \|q\|^2_{0,E} + c_2 \, \xi \, \|\mathbf{h}\|^2_{\mathcal{G}_{k}^{\oplus}(E)} + d^E(\boldsymbol{\phi}_1, \mathbf{h}) \\
& \geq c_1 \, \|q\|^2_{0,E} + c_2 \, \xi \, \|\mathbf{h}\|^2_{\mathcal{G}_{k}^{\oplus}(E)} - c_{\rm cont} \, |\boldsymbol{\phi}_1|_{1,E} \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)}\\
& \geq c_1 \, \|q\|^2_{0,E} + c_2 \, \xi \, \|\mathbf{h}\|^2_{\mathcal{G}_{k}^{\oplus}(E)} - c_{\rm cont} b_1 \, \|q\|_{0,E} \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)}\\
& \geq \left( c_1 - \frac{\epsilon}{2} c_{\rm cont} b_1 \right) \, \|q\|^2_{0,E} +
\left( \xi \, c_2 - \frac{1}{2 \epsilon} c_{\rm cont} b_1 \right) \, \|\mathbf{h}\|^2_{\mathcal{G}_{k}^{\oplus}(E)}
\end{split}
\end{equation}
for any positive real number $\epsilon$. Finally, setting
\[
\epsilon := \frac{c_1}{ c_{\rm cont} b_1} \qquad \text{and} \qquad \xi := \frac{ c_{\rm cont}^2 \, b_1^2}{c_1 c_2}
\]
by \eqref{eq:is31inter} and \eqref{eq:is32inter} we get \eqref{eq:is3inter}.
\noindent
\textit{Step 3.}
Since problem \eqref{eq:pbinter} is well-posed, the following stability estimate holds
\[
|\boldsymbol{\theta}|_{1, E} + \|\widehat{s}\|_{0,E} + \|\widehat{\mathbf{g}}\|_{\mathcal{G}_{k}^{\oplus}(E)} \leq C \, \|\boldsymbol{\chi}\|_{\left(\mathcal{G}_{k}^{\oplus}(E) \right)^{*}},
\]
where
\[
\|\boldsymbol{\chi}\|_{\left(\mathcal{G}_{k}^{\oplus}(E) \right)^{*}} := \sup_{\mathbf{h} \in \mathcal{G}_{k}^{\oplus}(E), \mathbf{h} \neq \mathbf{0}} \frac{(\boldsymbol{\chi}, \mathbf{h})}{\|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)}} \leq h_E^{-1} \, \|\boldsymbol{\chi}\|_{0,E}.
\]
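The last bound on the dual norm is just the Cauchy--Schwarz inequality combined with the definition of the scaled norm: for any $\mathbf{h} \in \mathcal{G}_{k}^{\oplus}(E)$, $\mathbf{h} \neq \mathbf{0}$,
\[
(\boldsymbol{\chi}, \mathbf{h}) \leq \|\boldsymbol{\chi}\|_{0,E} \, \|\mathbf{h}\|_{0,E} = h_E^{-1} \, \|\boldsymbol{\chi}\|_{0,E} \, \|\mathbf{h}\|_{\mathcal{G}_{k}^{\oplus}(E)}.
\]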
Then, by the definition of $\boldsymbol{\chi}$ (see \eqref{eq:chi}) and by the continuity of the $L^2$-projection, we get
\[
\begin{split}
|\boldsymbol{\theta}|_{1, E} \leq C \, h_E^{-1} \, \|\boldsymbol{\chi}\|_{0,E} \leq C \, h_E^{-1} \, \left \| {\Pi^{\nabla, E}_k} \mathbf{w}_I - \mathbf{w}_I \right\|_{0,E} \leq C \left | {\Pi^{\nabla, E}_k} \mathbf{w}_I - \mathbf{w}_I \right|_{1,E}
\end{split}
\]
where the last inequality is justified since, by definition \eqref{eq:Pn_k^E}, the function $\left ( {\Pi^{\nabla, E}_k} \mathbf{w}_I - \mathbf{w}_I \right)$ has zero mean value. Noting that ${\Pi^{\nabla, E}_k}$ is a projection with respect to the $H^1$ semi-norm and using the triangle inequality, from \eqref{eq:interpolata old} we finally get
\begin{equation}
\label{eq:inter5}
\begin{split}
|\boldsymbol{\theta}|_{1,E} & \leq C \, \left( \left| {\Pi^{\nabla, E}_k} (\mathbf{w}_I - \mathbf{v}) \right|_{1,E} + \left| \mathbf{w}_I - {\Pi^{\nabla, E}_k} \mathbf{v} \right|_{1,E} \right) \\
& \leq C \, \left( 2 \left| \mathbf{w}_I - \mathbf{v} \right|_{1,E} + \left| \mathbf{v} - {\Pi^{\nabla, E}_k} \mathbf{v} \right|_{1,E} \right) \\
& \leq C \, h_E^s \, |\mathbf{v}|_{s+1, E}.
\end{split}
\end{equation}
The thesis now follows from \eqref{eq:inter5} and again \eqref{eq:interpolata old}, by adding all the local contributions.
For what concerns the $L^2$ estimate, for each polygon $E \in \Omega_h$, we have that $\boldsymbol{\theta} = \mathbf{0}$ on $\partial E$ (see \eqref{eq:inter1}). Hence, by the Poincar\'e inequality and \eqref{eq:inter5}, it
holds
\[
\|\boldsymbol{\theta}\|_{0,E} \leq C \, h_E \, |\boldsymbol{\theta}|_{1,E} \leq C \, h_E^{s+1} \, |\mathbf{v}|_{s+1, E},
\]
from which we easily infer the $L^2$ estimate.
\end{proof}
\subsection{Convergence analysis}
\label{sub:4.2}
First of all, let us recall a classical approximation result for $\numberset{P}_k$ polynomials on star-shaped domains, see for instance \cite{brennerscott}.
\begin{lemma}
\label{lm:scott}
Let $E \in \Omega_h$, and let $s,p$ be two real numbers with $0 \le s \le k$ and $1 \le p \le \infty$.
Then for all $\mathbf{u} \in [H^{s+1}(E)]^2$, there exists a polynomial function $\mathbf{u}_{\pi} \in [{\numberset{P}}_k(E)]^2$, such that
\begin{equation}
\label{eq:scott}
\|\mathbf{u} - \mathbf{u}_{\pi}\|_{L^p(E)} + h_E
| \mathbf{u} - \mathbf{u}_{\pi} |_{W^{1,p}(E)} \leq C h_E^{s+1}| \mathbf{u}|_{W^{s+1,p}(E)},
\end{equation}
with $C$ depending only on $k$ and the shape regularity constant $\rho$ in assumption $\mathbf{(A1)}$.
\end{lemma}
Now we prove two technical lemmata.
\begin{lemma}
\label{lemma3}
Let $\mathbf{v} \in H^{s+1}(\Omega) \cap \mathbf{V}$ with $0 \le s \le k$. Then for all $\mathbf{w} \in \mathbf{V}$ it holds
\[
\left|\widetilde{c}(\mathbf{v}; \, \mathbf{v}, \mathbf{w}) - \widetilde{c}_h(\mathbf{v}; \, \mathbf{v}, \mathbf{w})\right| \leq C \, h^s \, \left( \|\mathbf{v}\|_s + \|\mathbf{v}\|_{\mathbf{V}} + \|\mathbf{v}\|_{s+1} \right)\|\mathbf{v}\|_{s+1} \, \|\mathbf{w}\|_{\mathbf{V}}.
\]
\end{lemma}
\begin{proof}
First of all, we set
\begin{equation}
\label{eq:mu}
\mu_1(\mathbf{w}) := \sum_{E \in \Omega_h} \left( c^E(\mathbf{v}; \, \mathbf{v}, \mathbf{w}) - c^E_h(\mathbf{v}; \, \mathbf{v}, \mathbf{w}) \right)
\quad \text{and} \quad
\mu_2(\mathbf{w}) := \sum_{E \in \Omega_h} \left( c^E(\mathbf{v}; \, \mathbf{w}, \mathbf{v}) - c^E_h(\mathbf{v}; \, \mathbf{w}, \mathbf{v}) \right)
\end{equation}
then by definition \eqref{eq:ctilde_h^E} and \eqref{eq:ctilde_h} it holds
\begin{equation}
\label{eq:mu1mu2}
\widetilde{c}(\mathbf{v}; \, \mathbf{v}, \mathbf{w}) - \widetilde{c}_h(\mathbf{v}; \, \mathbf{v}, \mathbf{w}) =
\frac{1}{2} \bigl( \mu_1(\mathbf{w}) + \mu_2(\mathbf{w})\bigr).
\end{equation}
We now analyse the two terms. For the term $\mu_1(\mathbf{w})$, by simple computations, we have
\begin{equation*}
\begin{split}
\mu_1(\mathbf{w})
& = \sum_{E \in \Omega_h} \int_E \left( (\boldsymbol{\nabla} \mathbf{v}) \, \mathbf{v} \cdot \mathbf{w} - \left({\boldsymbol{\Pi}^{0, E}_{k-1}} \, \boldsymbol{\nabla} \mathbf{v} \right) \left({\Pi^{0, E}_k} \mathbf{v} \right) \cdot {\Pi^{0, E}_k} \mathbf{w} \right) \, {\rm d}E \\
& = \sum_{E \in \Omega_h} \sum_{i,j=1}^2 \int_E \left( \frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j \, \mathbf{w}_i - \left( \Pi^{0,E}_{k-1} \, \frac{\partial \mathbf{v}_i}{\partial x_j} \right) \left( {\Pi^{0, E}_k} \mathbf{v}_j\right) {\Pi^{0, E}_k} \mathbf{w}_i \right) \, {\rm d}E\\
\end{split}
\end{equation*}
from which it follows
\begin{equation}
\begin{split}
\label{eq:mu1}
\mu_1(\mathbf{w}) &= \sum_{E \in \Omega_h}
\sum_{i,j=1}^2 \int_E \left( \frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j \left[ \left(I - {\Pi^{0, E}_k} \right) \mathbf{w}_i\right] +
\right. \\
& \qquad \qquad \left. + \frac{\partial \mathbf{v}_i}{\partial x_j} \left[ \left(I - {\Pi^{0, E}_k} \right) \mathbf{v}_j\right] \, {\Pi^{0, E}_k} \, \mathbf{w}_i +
\left[ \left(I - \Pi^{0,E}_{k-1} \right) \frac{\partial \mathbf{v}_i}{\partial x_j} \right] \left( {\Pi^{0, E}_k} \mathbf{v}_j\right) {\Pi^{0, E}_k} \mathbf{w}_i \right) \, {\rm d}E \\
& =: \sum_{E \in \Omega_h}
\sum_{i,j=1}^2 \int_E \left( \alpha(\mathbf{w}) + \beta(\mathbf{w}) + \gamma(\mathbf{w}) \right) {\rm d}E.
\end{split}
\end{equation}
Now, by definition of $L^2$ projection ${\Pi^{0, E}_k}$ and by Lemma \ref{lm:scott}, we have
\begin{equation}
\label{eq:mu1alpha}
\begin{split}
\int_E \alpha(\mathbf{w}) \,{\rm d}E & = \int_E \frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j \left[ \left(I - {\Pi^{0, E}_k} \right) \mathbf{w}_i\right] {\rm d}E \\
& = \int_E \left[ \left(I - \Pi^{0,E}_{k-2} \right)\frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j \right] \left[ \left(I - {\Pi^{0, E}_k} \right) \mathbf{w}_i\right] {\rm d}E \\
& \leq \left \|\left(I - \Pi^{0,E}_{k-2} \right)\frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j \right \|_{0,E} \, \left\| \left(I - {\Pi^{0, E}_k} \right) \mathbf{w}_i\right\|_{0,E} \\
& \leq C \, h_E^s \, \left| \frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j\right|_{s-1,E} |\mathbf{w}_i|_{1,E}.
\end{split}
\end{equation}
Applying the H\"older inequality (for sequences), we get
\begin{equation}
\label{eq:mu1alpha1}
\begin{split}
\sum_{E \in \Omega_h} \sum_{i,j=1}^2 \int_E \alpha(\mathbf{w}) \,{\rm d}E & \leq C \, h^s \, \sum_{E \in \Omega_h} \sum_{i,j=1}^2 \,\left| \frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j\right|_{s-1,E} |\mathbf{w}_i|_{1,E} \\
&\leq C \, h^s \, \sum_{i,j=1}^2 \left( \sum_{E \in \Omega_h} \,\left| \frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j\right|^2_{s-1,E} \right)^{\frac{1}{2}} \left( \sum_{E \in \Omega_h} |\mathbf{w}_i|^2_{1,E} \right)^{\frac{1}{2}}\\
&\leq C \, h^s \, \sum_{i,j=1}^2 \, \left| \frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j\right|_{s-1} \, |\mathbf{w}_i|_{1}
\end{split}
\end{equation}
and by the H\"older inequality and the Sobolev embedding $H^{s}(\Omega) \subset W^{s-1}_4(\Omega)$ we infer
\begin{equation}
\label{eq:mu1alpha2}
\left| \frac{\partial \mathbf{v}_i}{\partial x_j} \, \mathbf{v}_j\right|_{s-1} \leq
\left \| \frac{\partial \mathbf{v}_i}{\partial x_j} \right\|_{W^{s-1}_4} \,
\left \| \mathbf{v}_j \right\|_{W^{s-1}_4}
\leq C \,
\left \| \frac{\partial \mathbf{v}_i}{\partial x_j} \right\|_{s} \,
\left \| \mathbf{v}_j \right\|_{s} .
\end{equation}
By \eqref{eq:mu1alpha1} and \eqref{eq:mu1alpha2} we finally obtain
\begin{equation}
\label{eq:alphafinale}
\sum_{E \in \Omega_h} \sum_{i,j=1}^2 \int_E \alpha(\mathbf{w}) \, {\rm d}E \leq C \, h^s \left \| \mathbf{v} \right\|_{s+1} \,
\left \| \mathbf{v} \right\|_{s} \, \|\mathbf{w}\|_{\mathbf{V}}.
\end{equation}
For what concerns the term $\beta(\mathbf{w})$ in \eqref{eq:mu1}, using the H\"older inequality we have
\begin{equation}
\label{eq:mu1beta}
\begin{split}
\int_E \beta(\mathbf{w}) \,{\rm d}E & = \int_E \frac{\partial \mathbf{v}_i}{\partial x_j} \left[ \left(I - {\Pi^{0, E}_k} \right) \mathbf{v}_j\right] \, {\Pi^{0, E}_k} \, \mathbf{w}_i \, {\rm d}E \\
& \leq
\left \| \frac{\partial \mathbf{v}_i}{\partial x_j} \right\|_{0,E} \,
\left \| \left(I - {\Pi^{0, E}_k} \right) \mathbf{v}_j \right\|_{L^4(E)} \,
\left\| {\Pi^{0, E}_k} \, \mathbf{w}_i \right\|_{L^4(E)}.
\end{split}
\end{equation}
Lemma \ref{lm:scott} yields a polynomial $\mathbf{v}_{j, \pi} \in \numberset{P}_k(E)$ such that
\[
\|\mathbf{v}_j - \mathbf{v}_{j, \pi} \|_{L^4(E)} \leq C \, h_E^s \, |\mathbf{v}_j|_{W^s_4(E)}
\]
and thus, by the continuity of ${\Pi^{0, E}_k}$ with respect to the $L^4$-norm (cf. \eqref{eq:continuity-ch3}),
\begin{equation}
\label{eq:mu1beta2}
\begin{aligned}
\left \| \left(I - {\Pi^{0, E}_k} \right) \mathbf{v}_j \right\|_{L^4(E)} & \leq \|\mathbf{v}_j - \mathbf{v}_{j, \pi} \|_{L^4(E)} + \left\|{\Pi^{0, E}_k} \, (\mathbf{v}_j - \mathbf{v}_{j, \pi}) \right\|_{L^4(E)} \\
& \leq C \|\mathbf{v}_j - \mathbf{v}_{j, \pi} \|_{L^4(E)} \leq C \, h_E^s \, |\mathbf{v}_j|_{W^s_4(E)}.
\end{aligned}
\end{equation}
Using again the continuity of ${\Pi^{0, E}_k}$ with respect to the $L^4$-norm, by \eqref{eq:mu1beta} and \eqref{eq:mu1beta2} we infer
\[
\int_E \beta(\mathbf{w}) \,{\rm d}E \leq
C\, h_E^s \, \left \| \frac{\partial \mathbf{v}_i}{\partial x_j} \right\|_{0,E} \,|\mathbf{v}_j|_{W^s_4(E)} \, \| \mathbf{w}_i \|_{L^4(E)}.
\]
Applying the H\"older inequality and Sobolev embeddings $H^{1}(\Omega) \subset L^4(\Omega)$ and $H^{s+1}(\Omega) \subset W^{s}_4(\Omega)$, we obtain
\begin{equation}
\label{eq:betafinale}
\begin{split}
\sum_{E \in \Omega_h} & \sum_{i,j=1}^2 \int_E \beta(\mathbf{w}) \,{\rm d}E \leq C \, h^s \, \sum_{E \in \Omega_h} \sum_{i,j=1}^2 \, \left \| \frac{\partial \mathbf{v}_i}{\partial x_j} \right\|_{0,E} \,|\mathbf{v}_j|_{W^s_4(E)} \, \| \mathbf{w}_i \|_{L^4(E)}\\
&\leq C \, h^s \, \sum_{i,j=1}^2
\left( \sum_{E \in \Omega_h} \, \left \| \frac{\partial \mathbf{v}_i}{\partial x_j} \right\|^2_{0,E} \right)^{\frac{1}{2}}
\left( \sum_{E \in \Omega_h} |\mathbf{v}_j|^4_{W^s_4(E)} \right)^{\frac{1}{4}}
\left( \sum_{E \in \Omega_h} \|\mathbf{w}_i\|^4_{L^4(E)} \right)^{\frac{1}{4}}
\\
&\leq C \, h^s \, \sum_{i,j=1}^2 \, \left\| \frac{\partial \mathbf{v}_i}{\partial x_j} \right \|_0\, \|\mathbf{v}_j \|_{W^s_4} \, \|\mathbf{w}_i\|_{L^4} \leq C \, h^s \, \left\| \mathbf{v} \right \|_{\mathbf{V}} \, \|\mathbf{v}\|_{s+1} \, \|\mathbf{w}\|_{\mathbf{V}}.
\end{split}
\end{equation}
For what concerns the term $\gamma(\mathbf{w})$ in \eqref{eq:mu1}, using the H\"older inequality and the continuity of ${\Pi^{0, E}_k}$, it holds
\begin{equation}
\label{eq:mu1gamma}
\begin{split}
\int_E \gamma(\mathbf{w}) \,{\rm d}E &= \int_E \left[ \left(I - \Pi^{0,E}_{k-1} \right) \frac{\partial \mathbf{v}_i}{\partial x_j} \right] \left( {\Pi^{0, E}_k} \mathbf{v}_j\right) {\Pi^{0, E}_k} \mathbf{w}_i \, {\rm d}E \\
& \leq
\left \| \left(I - \Pi^{0,E}_{k-1} \right) \frac{\partial \mathbf{v}_i}{\partial x_j} \right\|_{0,E} \,
\| {\Pi^{0, E}_k} \, \mathbf{v}_j\|_{L^4(E)} \,
\| {\Pi^{0, E}_k} \, \mathbf{w}_i \|_{L^4(E)} \\
& \leq
C \, h_E^s \, \left | \frac{\partial \mathbf{v}_i}{\partial x_j} \right|_{s,E} \, \| \mathbf{v}_j\|_{L^4(E)} \,
\| \mathbf{w}_i \|_{L^4(E)} . \\
\end{split}
\end{equation}
Using again the H\"older inequality and the Sobolev embeddings, we get
\begin{equation}
\label{eq:gammafinale}
\sum_{E \in \Omega_h} \sum_{i,j=1}^2 \, \int_E \gamma(\mathbf{w}) \,{\rm d}E \leq C \, h^s \, \|\mathbf{v}\|_{\mathbf{V}} \, \|\mathbf{w}\|_{\mathbf{V}} \, \|\mathbf{v}\|_{s+1}.
\end{equation}
By collecting \eqref{eq:alphafinale}, \eqref{eq:betafinale} and \eqref{eq:gammafinale} in \eqref{eq:mu1} we finally get
\begin{equation}
\label{eq:mu1finale}
\mu_1(\mathbf{w}) \leq C \, h^s \, \left( \|\mathbf{v}\|_{s+1} \|\mathbf{v}\|_{s} +\|\mathbf{v}\|_{s+1} \|\mathbf{v}\|_{\mathbf{V}} \right) \|\mathbf{w}\|_{\mathbf{V}}.
\end{equation}
For the second term $\mu_2(\mathbf{w})$ we only sketch the proof since we use analogous arguments. First by definition, then by adding and subtracting terms, we obtain
\begin{equation}
\begin{split}
\label{eq:mu2}
\mu_2(\mathbf{w}) &= \sum_{E \in \Omega_h} \sum_{i,j=1}^2 \int_E \left (
\left[ \left(I - \Pi^{0,E}_{k-1} \right) \frac{\partial \mathbf{w}_i}{\partial x_j} \right] \mathbf{v}_j\, \mathbf{v}_i + \left( \Pi^{0,E}_{k-1} \,\frac{\partial \mathbf{w}_i}{\partial x_j} \right) \left[ \left( I - {\Pi^{0, E}_k} \right) \mathbf{v}_j \right] \mathbf{v}_i + \right. \\
& \qquad \qquad \left.
+ \left( \Pi^{0,E}_{k-1} \, \frac{\partial \mathbf{w}_i}{\partial x_j} \right) \left( {\Pi^{0, E}_k} \, \mathbf{v}_j\right) \left[ \left( I - {\Pi^{0, E}_k} \right) \mathbf{v}_i \right] \right) \, {\rm d}E \\
& =: \sum_{E \in \Omega_h} \sum_{i,j=1}^2 \int_E \left( \delta(\mathbf{w}) + \epsilon(\mathbf{w}) + \zeta(\mathbf{w})\right) \, {\rm d}E.
\end{split}
\end{equation}
For the term $\delta(\mathbf{w})$ we have
\begin{equation}
\label{eq:mu2delta}
\begin{split}
\int_E \delta(\mathbf{w}) \,{\rm d}E & = \int_E \left[ \left(I - \Pi^{0,E}_{k-1} \right) \frac{\partial \mathbf{w}_i}{\partial x_j} \right] \mathbf{v}_j\, \mathbf{v}_i \, {\rm d}E \\
& = \int_E \left[ \left(I - \Pi^{0,E}_{k-1} \right) \frac{\partial \mathbf{w}_i}{\partial x_j} \right] \left[ \left(I - \Pi^{0,E}_{k-1}\right) \mathbf{v}_j\, \mathbf{v}_i \right] \, {\rm d}E \\
& \leq \left \|\left(I - \Pi^{0,E}_{k-1} \right)\frac{\partial \mathbf{w}_i}{\partial x_j} \right \|_{0,E} \, \left\| \left(I - \Pi^{0,E}_{k-1} \right) \mathbf{v}_j\, \mathbf{v}_i \right\|_{0,E} \\
& \leq C \, h_E^s \, \left\| \frac{\partial \mathbf{w}_i}{\partial x_j} \right\|_{0,E} \,
|\mathbf{v}_j \, \mathbf{v}_i |_{s,E}
\end{split}
\end{equation}
and applying the H\"older inequality (for sequences) we easily get
\[
\sum_{E \in \Omega_h} \sum_{i,j=1}^2 \int_E \delta(\mathbf{w}) \,{\rm d}E \leq
C \, h^s \, \sum_{i,j=1}^2 \, \left\| \frac{\partial \mathbf{w}_i}{\partial x_j} \right\|_{0} \, |\mathbf{v}_j \, \mathbf{v}_i |_{s} .
\]
The H\"older inequality and the Sobolev embedding $H^{s+1}(\Omega) \subset W^{s}_4(\Omega)$ yield
\[
|\mathbf{v}_j \, \mathbf{v}_i |_{s} \leq \|\mathbf{v}_j\|_{W^s_4} \,\|\mathbf{v}_i\|_{W^s_4} \leq C \, \|\mathbf{v}_j\|_{s+1} \,\|\mathbf{v}_i\|_{s+1}
\]
and thus we conclude that
\begin{equation}
\label{eq:deltafinale}
\sum_{E \in \Omega_h} \sum_{i,j=1}^2 \int_E \delta(\mathbf{w}) \, {\rm d}E \leq C \, h^s \, \|\mathbf{v}\|_{s+1}^2 \, \|\mathbf{w}\|_{\mathbf{V}}.
\end{equation}
The terms $\epsilon(\mathbf{w})$ and $\zeta(\mathbf{w})$ can be estimated using the usual argument (H\"older inequality, continuity of ${\Pi^{0, E}_k}$ with respect to the $L^4$-norm and Sobolev embeddings). We conclude that
\begin{equation}
\label{eq:mu2finale}
\mu_2(\mathbf{w}) \leq C \, h^s \, \left( \|\mathbf{v}\|_{s+1}^2 +\|\mathbf{v}\|_{s+1} \|\mathbf{v}\|_{\mathbf{V}} \right) \|\mathbf{w}\|_{\mathbf{V}}.
\end{equation}
We infer the thesis by collecting \eqref{eq:mu1finale} and \eqref{eq:mu2finale} in \eqref{eq:mu1mu2}.
\end{proof}
\begin{lemma}
\label{lemma2}
Let $\widehat{C}_h$ be the constant defined in \eqref{eq:CC}. Then for all $\mathbf{v}, \mathbf{z}, \mathbf{w} \in \mathbf{V}$ it holds
\[
|\widetilde{c}_h(\mathbf{v}; \, \mathbf{v}, \mathbf{w}) - \widetilde{c}_h(\mathbf{z}; \, \mathbf{z}, \mathbf{w})| \leq
\widehat{C}_h \, \left( \|\mathbf{z}\|_{\mathbf{V}} \, \| \mathbf{w}\|_{\mathbf{V}} + \|\mathbf{v} - \mathbf{z} + \mathbf{w}\|_{\mathbf{V}} (\|\mathbf{v}\|_{\mathbf{V}} + \|\mathbf{z}\|_{\mathbf{V}}) \right) \, \|\mathbf{w}\|_{\mathbf{V}}.
\]
\end{lemma}
\begin{proof}
Since $\widetilde{c}_h(\cdot; \, \cdot, \cdot)$ is skew-symmetric, by simple computations we obtain
\[
\begin{split}
\widetilde{c}_h(\mathbf{v}; \, \mathbf{v}, \mathbf{w}) - \widetilde{c}_h(\mathbf{z}; \, \mathbf{z}, \mathbf{w}) & =
\widetilde{c}_h(\mathbf{v} - \mathbf{z}; \, \mathbf{v}, \mathbf{w}) + \widetilde{c}_h(\mathbf{z}; \, \mathbf{v} - \mathbf{z}, \mathbf{w}) \\
& =
-\widetilde{c}_h(\mathbf{w}; \, \mathbf{v}, \mathbf{w}) + \widetilde{c}_h(\mathbf{v} - \mathbf{z} + \mathbf{w}; \, \mathbf{v}, \mathbf{w}) + \widetilde{c}_h(\mathbf{z}; \, \mathbf{v} - \mathbf{z} + \mathbf{w}, \mathbf{w}).
\end{split}
\]
The thesis follows by definition \eqref{eq:CC}.
\end{proof}
Furthermore, we state the following result concerning the load approximation, which can be proved using standard arguments \cite{volley}.
\begin{lemma}
\label{lemma4}
Let $\mathbf{f}_h$ be defined as in \eqref{eq:f_h}, and let us assume $\mathbf{f} \in H^{s+1}(\Omega)$, $-1 \le s \le k$. Then, for all $\mathbf{v}_h \in \mathbf{V}_h$, it holds
\begin{gather*}
\left|( \mathbf{f}_h - \mathbf{f}, \mathbf{v}_h ) \right| \leq C h^{s+2} |\mathbf{f}|_{s+1} |\mathbf{v}_h|_{\mathbf{V}}.
\end{gather*}
\end{lemma}
We now note that, given $\mathbf{v} \in \mathbf{Z}$, the inf-sup condition \eqref{eq:inf-sup discreta} implies (see \cite{BoffiBrezziFortin}):
\[
\inf_{\mathbf{v}_h \in \mathbf{Z}_h} \|\mathbf{v} - \mathbf{v}_h\|_{\mathbf{V}} \leq C
\inf_{\mathbf{w}_h \in \mathbf{V}_h} \|\mathbf{v} - \mathbf{w}_h\|_{\mathbf{V}}
\]
which essentially means that $\mathbf{Z}$ is approximated by $\mathbf{Z}_h$ with the same order of accuracy as the
whole space $\mathbf{V}_h$. In particular, by Theorem \ref{thm:interpolante}, assuming
$\mathbf{v} \in H^{s+1}(\Omega) \cap \mathbf{Z}$, $0 < s \le k$, we infer
\begin{equation}
\label{eq:interpolant kernel}
\inf_{\mathbf{v}_h \in \mathbf{Z}_h} \|\mathbf{v} - \mathbf{v}_h\|_{\mathbf{V}} \leq C \, h^s \, | \mathbf{v} |_{s+1}.
\end{equation}
\begin{theorem}
\label{thm:u}
Under the assumptions \eqref{eq:ns condition} and \eqref{eq:ns virtual condition}, let $\mathbf{u}$ be the solution of Problem \eqref{eq:ns variazionale ker} and $\mathbf{u}_h$ be the solution of virtual Problem \eqref{eq:nsvirtual ker}. Assuming moreover $\mathbf{u}, \mathbf{f} \in [H^{s+1}(\Omega)]^2$, $0 < s \le k$, then
\begin{equation}\label{eq:thm:u}
\| \mathbf{u} - \mathbf{u}_h \|_{\mathbf{V}} \leq \, h^{s} \, \mathcal{F}(\mathbf{u}; \, \nu, \gamma, \gamma_h) + \, h^{s+2} \, \mathcal{H}(\mathbf{f}; \nu, \gamma_h)
\end{equation}
where $\mathcal{F}$ and $\mathcal{H}$ are suitable functions independent of $h$.
\end{theorem}
\begin{proof}
Let $\mathbf{u}_I$ be an approximant of $\mathbf{u}$ in the discrete kernel $\mathbf{Z}_h$ satisfying \eqref{eq:interpolant kernel}, and let us define $\boldsymbol{\delta}_h := \mathbf{u}_h - \mathbf{u}_I$. Now, the stability and consistency properties (cf. \eqref{eq:consist} and \eqref{eq:stabk}) of the bilinear form $a_h(\cdot, \cdot)$, the triangle inequality and \eqref{eq:interpolant kernel} give
\begin{equation}
\label{eq:thm1}
\begin{split}
\alpha_* \, \nu \, \| \boldsymbol{\delta}_h \|^2_{\mathbf{V}} & \leq \nu \, a_h(\boldsymbol{\delta}_h, \, \boldsymbol{\delta}_h) = \nu \, a_h(\mathbf{u}_h, \, \boldsymbol{\delta}_h) - \nu \, a_h(\mathbf{u}_I, \, \boldsymbol{\delta}_h)
\\
& = \nu \, a_h(\mathbf{u}_h, \, \boldsymbol{\delta}_h) - \nu\, a(\mathbf{u}, \, \boldsymbol{\delta}_h) + \nu \sum_{E \in \Omega_h} \left( a_h^E(\mathbf{u}_{\pi} - \mathbf{u}_I, \, \boldsymbol{\delta}_h) + a^E(\mathbf{u} - \mathbf{u}_{\pi}, \, \boldsymbol{\delta}_h) \right) \\
& \leq \nu \, a_h(\mathbf{u}_h, \, \boldsymbol{\delta}_h) - \nu\, a(\mathbf{u}, \, \boldsymbol{\delta}_h) +
C \, \nu \, h^s |\mathbf{u} |_{s+1}\| \boldsymbol{\delta}_h \|_{\mathbf{V}} \\
\end{split}
\end{equation}
where $\mathbf{u}_{\pi}$ is the piecewise polynomial of degree $k$ defined in Lemma \ref{lm:scott}.
Now since $\mathbf{u}$ and $\mathbf{u}_h$ are solutions of Problem \eqref{eq:ns variazionale ker} and Problem \eqref{eq:nsvirtual ker} respectively, from Lemma \ref{lemma4} we obtain
\begin{equation}
\label{eq:thm2}
\begin{split}
\alpha_* \, \nu \, \| \boldsymbol{\delta}_h \|^2_{\mathbf{V}}
& \leq (\mathbf{f}_h - \mathbf{f},\, \boldsymbol{\delta}_h) + \widetilde{c}(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) - \widetilde{c}_h(\mathbf{u}_h; \, \mathbf{u}_h, \boldsymbol{\delta}_h) + C \, \nu \, h^s | \mathbf{u} |_{s+1}\|\boldsymbol{\delta}_h\|_{\mathbf{V}} \\
& \leq C \, h^s (\nu \, |\mathbf{u} |_{s+1} + h^2 \, |\mathbf{f}|_{s+1})\|\boldsymbol{\delta}_h\|_{\mathbf{V}} + \widetilde{c}(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) - \widetilde{c}_h(\mathbf{u}_h; \, \mathbf{u}_h, \boldsymbol{\delta}_h).
\end{split}
\end{equation}
Now we observe that
\begin{equation}
\label{eq:lemma3thm}
\widetilde{c}(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) - \widetilde{c}_h(\mathbf{u}_h; \, \mathbf{u}_h, \boldsymbol{\delta}_h) =
\bigl(\widetilde{c}(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) - \widetilde{c}_h(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) \bigr) +
\bigl(\widetilde{c}_h(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) - \widetilde{c}_h(\mathbf{u}_h; \, \mathbf{u}_h, \boldsymbol{\delta}_h) \bigr).
\end{equation}
The first term can be estimated by Lemma \ref{lemma3}
\[
\widetilde{c}(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) - \widetilde{c}_h(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) \leq C \, h^s \, \left( \|\mathbf{u}\|_s + \|\mathbf{u}\|_{\mathbf{V}} + \|\mathbf{u}\|_{s+1} \right)\|\mathbf{u}\|_{s+1} \, \|\boldsymbol{\delta}_h\|_{\mathbf{V}}.
\]
The second term, recalling that $\boldsymbol{\delta}_h = \mathbf{u}_h - \mathbf{u}_I$, is bounded by Lemma \ref{lemma2}
\begin{equation}
\label{eq:lemma2thm}
\widetilde{c}_h(\mathbf{u}; \, \mathbf{u}, \boldsymbol{\delta}_h) - \widetilde{c}_h(\mathbf{u}_h; \, \mathbf{u}_h, \boldsymbol{\delta}_h) \leq
\widehat{C}_h \, \bigl( \|\mathbf{u}_h\|_{\mathbf{V}} \, \| \boldsymbol{\delta}_h\|_{\mathbf{V}} + \|\mathbf{u} - \mathbf{u}_I\|_{\mathbf{V}} (\|\mathbf{u}\|_{\mathbf{V}} + \| \mathbf{u}_h\|_{\mathbf{V}}) \bigr) \, \|\boldsymbol{\delta}_h\|_{\mathbf{V}}.
\end{equation}
Collecting \eqref{eq:lemma3thm} and \eqref{eq:lemma2thm} in \eqref{eq:thm2}, we get
\begin{multline}
\label{eq:thm3}
\alpha_* \, \nu \, \| \boldsymbol{\delta}_h \|_{\mathbf{V}} \leq C \, h^s (\nu \, |\mathbf{u}|_{s+1} + h^2 \, |\mathbf{f}|_{s+1}) +
C \, h^s \, \left( \|\mathbf{u}\|_s + \|\mathbf{u}\|_{\mathbf{V}} + \|\mathbf{u}\|_{s+1} \right)\|\mathbf{u}\|_{s+1} \\
+ \widehat{C}_h \, \bigl( \|\mathbf{u}_h\|_{\mathbf{V}} \, \| \boldsymbol{\delta}_h\|_{\mathbf{V}} + \|\mathbf{u} - \mathbf{u}_I\|_{\mathbf{V}} (\|\mathbf{u}\|_{\mathbf{V}} + \| \mathbf{u}_h\|_{\mathbf{V}}) \bigr)
\end{multline}
and then by Theorem \ref{thm:interpolante} we infer
\begin{multline}
\label{eq:thm4}
\alpha_* \, \nu \left( 1 - \frac{\widehat{C}_h \, \|\mathbf{u}_h\|_{\mathbf{V}} }{\alpha_* \, \nu}\right) \, \| \boldsymbol{\delta}_h \|_{\mathbf{V}} \leq C \, h^s (\nu \, |\mathbf{u}|_{s+1} + h^2 \, |\mathbf{f}|_{s+1}) \\
+ C \, h^s \, \left( \|\mathbf{u}\|_s + \|\mathbf{u}\|_{\mathbf{V}} + \|\mathbf{u}\|_{s+1} \right)\|\mathbf{u}\|_{s+1} + C \, h^s \, \|\mathbf{u}\|_{s+1} \, \widehat{C}_h \,(\|\mathbf{u}\|_{\mathbf{V}} + \| \mathbf{u}_h\|_{\mathbf{V}}).
\end{multline}
We observe now that from \eqref{eq:solution virtual estimates} and \eqref{eq:ns virtual condition}, it holds
\[
1 - \frac{\widehat{C}_h \, \|\mathbf{u}_h\|_{\mathbf{V}} }{\alpha_* \, \nu} \geq 1 - \frac{\widehat{C}_h \, \|\mathbf{f}_h\|_{H^{-1}} }{(\alpha_* \, \nu)^2} \ge 1 - r > 0.
\]
Therefore
\begin{multline*}
\| \boldsymbol{\delta}_h \|_{\mathbf{V}} \leq
C \, \frac{h^s}{1 - r} \left(|\mathbf{u}|_{s+1} + \frac{h^{2} }{\nu} \|\mathbf{f}\|_{s+1}\right) +
C \, \frac{h^s}{\nu(1 - r)} \, \left( \|\mathbf{u}\|_s + \|\mathbf{u}\|_{\mathbf{V}} + \|\mathbf{u}\|_{s+1} \right)\|\mathbf{u}\|_{s+1} \\ +
C \, h^s \, \|\mathbf{u}\|_{s+1} \, \frac{\widehat{C}_h}{ \nu (1 - \gamma_h)} \,(\|\mathbf{u}\|_{\mathbf{V}} + \| \mathbf{u}_h\|_{\mathbf{V}})
\end{multline*}
and from \eqref{eq:solution estimates}, \eqref{eq:ns condition}, \eqref{eq:solution virtual estimates} and \eqref{eq:ns virtual condition} we finally obtain
\begin{multline*}
\| \boldsymbol{\delta}_h \|_{\mathbf{V}} \leq
C \, \frac{h^s}{1 - r} \left(|\mathbf{u}|_{s+1} + \frac{h^{2} }{\nu} \|\mathbf{f}\|_{s+1}\right) +
C \, \frac{h^s}{\nu(1 - r)} \, \left( \|\mathbf{u}\|_s + \|\mathbf{u}\|_{\mathbf{V}} + \|\mathbf{u}\|_{s+1} \right)\|\mathbf{u}\|_{s+1} \\ +
C \, h^s \, \|\mathbf{u}\|_{s+1} \, \left( \frac{\widehat{C}_h}{ \widehat{C}} \frac{\gamma}{1 - r} + \frac{\gamma_h}{1 - r} \right).
\end{multline*}
The thesis easily follows from the triangle inequality.
\end{proof}
\begin{remark}
We observe that, due to the divergence-free property of the proposed method, the estimate on the velocity errors in Theorem \ref{thm:u} does not depend on the continuous pressure, whereas the velocity errors of classical methods have a pressure contribution.
A numerical investigation of this aspect, also in relation to the presence of a higher order load approximation term in the right hand side of
\eqref{eq:thm:u}, will be shown in the next section.
\end{remark}
\begin{remark}
From the discrete inf-sup condition \eqref{eq:inf-sup discreta} the pressure estimate easily follows by standard arguments. Let $(\mathbf{u}, p) \in \mathbf{V} \times Q$ be the solution of Problem \eqref{eq:ns variazionale} and $(\mathbf{u}_h, p_h) \in \mathbf{V}_h \times Q_h$ be the solution of Problem \eqref{eq:ns virtual}. Then it holds:
\begin{equation}\label{eq:p-est}
\|p - p_h\|_Q \leq C \, h^{s} \, |p|_{s} + C \, h^{s+2} \, |\mathbf{f}|_{s+1} + h^s \, \mathcal{K}(\mathbf{u}; \nu, \gamma, \gamma_h)
\end{equation}
for a suitable function $\mathcal{K}(\cdot; \, \cdot, \cdot, \cdot)$ independent of $h$.
\end{remark}
\begin{remark}
In Theorem \ref{thm:u} we have assumed $\mathbf{u}$ and ${\bf f}$ in $H^{s+1}(\Omega)$. However, it is easy to check that the same analysis can be performed if we only require:
$$
\mathbf{u}, \: {\bf f} \in H^{s+1}(E) \quad \forall E \in \Omega_h.
$$
In such a case, the higher order Sobolev norms on $\mathbf{u}, {\bf f}$ appearing in Theorem \ref{thm:u} (and in the other results of this section) are substituted with the corresponding element-wise broken Sobolev norms.
\end{remark}
\section{Numerical Tests}
\label{sec:5}
In this section we present six sets of numerical experiments to test the practical performance of the method.
All the tests are performed with the second-order VEM, i.e. $k=2$. We also consider suitable second order Finite Elements for comparison.
In almost all cases, both options ${c}_h(\cdot;\cdot,\cdot)$ and ${\widetilde c}_h(\cdot;\cdot,\cdot)$ (see Remark \ref{rem:non-skew}) yield very similar results; in such cases, only the first choice is reported. On the contrary, whenever the results between the two choices are significantly different, both outcomes are shown.
In Test \ref{test1} and Test \ref{test2}, we consider two benchmark problems for the Stokes and Navier-Stokes equations. They share the property of having the velocity solution in the discrete space. However, classical mixed finite element methods lead to significant velocity errors, stemming from the velocity/pressure coupling in the error estimates. This effect is greatly reduced (or even eliminated) by our VEM methods (cf. Theorem \ref{thm:u} and estimate \eqref{eq:p-est}).
In Test \ref{test3} we analyse the stability of the method with respect to the viscosity parameter $\nu$.
In Test \ref{test4} and Test \ref{test5} we study the convergence of the proposed method for the Navier-Stokes and Stokes equations, respectively.
A comparison with the triangular P2-P1 and the quadrilateral Q2-P1 mixed finite element methods, see for example \cite{BoffiBrezziFortin}, is also performed.
Finally in Test \ref{test6} we assess the proposed virtual element method for flows which are governed by the Stokes system on one part of the domain, and by the Darcy's law in the rest of the domain, the solutions in the two domains being coupled by proper interface conditions (see Remark \ref{rem:Sto-Dar}). \\
In order to compute the VEM errors, we consider the computable error quantities:
\begin{gather*}
{\rm error}(\mathbf{u}, H^1) := \left( \sum_{E \in \Omega_h} \left \| \boldsymbol{\nabla} \, \mathbf{u} - \boldsymbol{\Pi}_{k-1}^{0, E} (\boldsymbol{\nabla} \, \mathbf{u}_h) \right \|_{0,E}^2 \right)^{1/2}
\\
{\rm error}(\mathbf{u}, L^2) := \left(\sum_{E \in \Omega_h} \left \| \mathbf{u} - \Pi_{k}^{0, E} \,\mathbf{u}_h \right \|_{0,E}^2 \right)^{1/2}
\\
{\rm error}(\mathbf{u}, L^{\infty}) := \max_{\mathbf{x} \in \, {\rm nodes}} | \mathbf{u}(\mathbf{x}) - \mathbf{u}_h(\mathbf{x})|
\end{gather*}
where in the previous formula ``nodes'' denotes the set of internal edge nodes and internal vertices (cf. $\mathbf{D_V1}$ and $\mathbf{D_V2}$). For the pressures we simply compute
\[
{\rm error}(p, L^2) := \|p - p_h\|_0.
\]
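The convergence rates discussed in the following tests are obtained from two consecutive error/mesh-size pairs in the standard way, $r = \log(e_i/e_{i+1})/\log(h_i/h_{i+1})$. A minimal Python sketch (the helper name is ours; the sample values are the $L^2$ pressure errors of the VEM scheme from Table \ref{tab1}):

```python
import math

def convergence_rate(errors, hs):
    """Empirical convergence rates r_i = log(e_i/e_{i+1}) / log(h_i/h_{i+1})
    between consecutive refinement levels."""
    return [math.log(e0 / e1) / math.log(h0 / h1)
            for (e0, e1), (h0, h1) in zip(zip(errors, errors[1:]),
                                          zip(hs, hs[1:]))]

# L2 pressure errors of the VEM scheme on the meshes Q_h (Table 1)
hs = [1 / 10, 1 / 20, 1 / 40, 1 / 80]
errs = [2.117754e-03, 5.489919e-04, 1.377769e-04, 3.465069e-05]
rates = convergence_rate(errs, hs)
print(rates)  # each rate close to 2, i.e. quadratic pressure convergence
```

Each printed rate is close to $2$, matching the quadratic $L^2$-pressure convergence predicted by the theory for $k=2$.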
In the experiments we consider the computational domains $\Omega_{\rm Q} := [0,1]^2$ and $\Omega_{\rm D} := \{\mathbf{x} \in \numberset{R}^2 \, \text{s.t.} \, |\mathbf{x}| \leq 1 \}$. The square domain $\Omega_{\rm Q}$ is partitioned using the following sequences of polygonal meshes:
\begin{itemize}
\item $\{ \mathcal{Q}_{h} \}_h, \, \{ \mathcal{U}_{h}\}_h$: sequences of distorted quadrilateral meshes with $h=1/10, 1/20, 1/40, 1/80$,
\item $\{ \mathcal{T}_h\}_h$: sequence of triangular meshes with $h=1/5, 1/10, 1/20, 1/40$,
\item $\{ \mathcal{W}_h\}_h$: sequence of WEB-like meshes with $h= 1/5, 1/10, 1/20, 1/40$.
\end{itemize}
An example of the adopted meshes is shown in Figure \ref{meshq}.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{meshtest1}
\includegraphics[scale=0.25]{meshtest3}
\caption{Example of the adopted polygonal meshes: $\mathcal{Q}_{1/20}$, $\mathcal{U}_{1/20}$ (up); $\mathcal{T}_{1/10}$, $\mathcal{W}_{1/10}$ (down).}
\label{meshq}
\end{figure}
The distorted quadrilateral meshes are obtained starting from the uniform square meshes and displacing the internal vertices (with a proportional ``distortion amplitude'' of $0.3$ for $\mathcal{Q}_h$ and $0.5$ for $\mathcal{U}_h$). The non-convex WEB-like meshes are composed of hexagons, generated starting from the triangular meshes $\{ \mathcal{T}_h\}_h$ and randomly displacing the midpoint of each (non-boundary) edge.
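The vertex-displacement procedure for the distorted quadrilateral meshes can be sketched in a few lines of Python (a toy version: the function name, the uniform distribution of the displacement and the fixed seed are our choices, not necessarily those of the actual mesh generator):

```python
import random

def distorted_square_mesh(n, amplitude, seed=0):
    """Uniform n x n vertex grid on [0,1]^2 whose internal vertices are
    randomly displaced by at most amplitude*h in each coordinate
    (h = 1/(n-1) is the mesh size); boundary vertices are kept fixed."""
    rng = random.Random(seed)
    h = 1.0 / (n - 1)
    verts = []
    for i in range(n):
        for j in range(n):
            x, y = j * h, i * h
            if 0 < i < n - 1 and 0 < j < n - 1:  # internal vertex
                x += amplitude * h * rng.uniform(-1, 1)
                y += amplitude * h * rng.uniform(-1, 1)
            verts.append((x, y))
    return verts

# e.g. the analogue of the Q_h family: distortion amplitude 0.3
mesh = distorted_square_mesh(11, 0.3)
```

Connecting the displaced vertices with the original quadrilateral topology yields meshes analogous to $\mathcal{Q}_h$ (amplitude $0.3$) and $\mathcal{U}_h$ (amplitude $0.5$).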
For what concerns the disk $\Omega_{\rm D}$ we consider the sequences of polygonal meshes:
\begin{itemize}
\item $\{ \mathcal{T}_h\}_h$: sequence of triangular meshes with $h=1/5, 1/10, 1/20, 1/40$,
\item $\{ \mathcal{V}_h\}_h$: sequence of CVT Voronoi meshes with $h= 1/5, 1/10, 1/20, 1/40$.
\end{itemize}
Figure \ref{meshd} displays an example of the adopted meshes.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{meshtest2}
\caption{Example of polygonal meshes: $\mathcal{T}_{1/10}$, $\mathcal{V}_{1/10}$.}
\label{meshd}
\end{figure}
For the generation of the Voronoi meshes we use the code Polymesher \cite{TPPM12}.
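The idea behind CVT meshes can be illustrated with a short Python sketch: Lloyd-type iterations drive random seeds towards the centroids of their Voronoi regions, here approximated on a dense uniform sample of the disk (a toy surrogate of the continuous problem, not the algorithm implemented in Polymesher):

```python
import math
import random

def cvt_seeds_disk(n_seeds, n_samples=20000, iters=30, seed=0):
    """Approximate CVT generators on the unit disk by Lloyd-type
    iterations on a dense uniform sample (a discrete k-means surrogate
    of the continuous centroidal Voronoi problem)."""
    rng = random.Random(seed)

    def rand_disk():
        # sqrt of a uniform radius gives points uniformly in the disk
        r, t = math.sqrt(rng.random()), 2 * math.pi * rng.random()
        return (r * math.cos(t), r * math.sin(t))

    samples = [rand_disk() for _ in range(n_samples)]
    seeds = [rand_disk() for _ in range(n_seeds)]
    for _ in range(iters):
        sums = [[0.0, 0.0, 0] for _ in range(n_seeds)]
        for (x, y) in samples:
            # assign the sample to its nearest seed (discrete Voronoi cell)
            k = min(range(n_seeds),
                    key=lambda i: (x - seeds[i][0]) ** 2 + (y - seeds[i][1]) ** 2)
            sums[k][0] += x
            sums[k][1] += y
            sums[k][2] += 1
        # move each seed to the centroid of its cell
        seeds = [(sx / c, sy / c) if c else seeds[i]
                 for i, (sx, sy, c) in enumerate(sums)]
    return seeds
```

The Voronoi diagram of the resulting seeds, clipped to $\Omega_{\rm D}$, gives polygonal meshes analogous to $\{\mathcal{V}_h\}_h$; Polymesher implements this construction far more carefully.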
\begin{remark}
As a comparison, we make use also of the classical Q2-P1 and P2-P1 mixed finite elements, see for instance \cite{BoffiBrezziFortin}. The Q2-P1 (Crouzeix-Raviart) is a quadrilateral element with bi-quadratic velocities and $\numberset{P}_1$ discontinuous pressures. The P2-P1 (Taylor-Hood) is a triangular element with $\numberset{P}_2$ velocities and $\numberset{P}_1$ continuous pressures. Both are inf-sup stable elements, widely used in the literature and yielding a quadratic convergence rate in the natural norms of the problem.
\end{remark}
\begin{test} [\textbf{Hydrostatic fluids}]
\label{test1}
In this test we consider the linear Stokes equation on the domain $\Omega_{\rm Q}$ with external load $\mathbf{f} = \nabla \, p$ that exactly balances the gradient of the pressure, yielding a hydrostatic situation, i.e. $\mathbf{u} = \mathbf{0}$.
We set the viscosity $\nu = 1$ and we consider two possible pressures
\[
p_1(x, y) = x^3 - y^3 \qquad \text{and} \qquad p_2(x, y) = \sin(2 \pi x)\sin(2 \pi y).
\]
It is well known that, for standard mixed elements such as the Q2-P1 element, the error between the exact velocity $\mathbf{u}$ and the discrete velocity $\mathbf{u}_h$ for the incompressible Stokes equations is pressure-dependent, i.e. it has the form
\begin{equation}
\label{eq:errorfem}
\|\mathbf{u} - \mathbf{u}_h\|_{\mathbf{V}} \leq C_1 \, \inf_{\mathbf{v}_h \in \mathbf{V}_h} \|\mathbf{u} - \mathbf{v}_h\|_{\mathbf{V}} + C_2 \, \inf_{q_h \in Q_h} \|p - q_h\|_{Q}
\end{equation}
where $C_1, C_2$ are two positive uniform constants, whereas for the virtual element scheme (see Theorem \ref{thm:u} and \cite{Stokes:divfree}) the error on the velocity does not depend on the pressure, i.e.
\begin{equation}
\label{eq:errorvem}
\|\mathbf{u} - \mathbf{u}_h\|_{\mathbf{V}} \leq C_1 \, \inf_{\mathbf{v}_h \in \mathbf{V}_h} \|\mathbf{u} - \mathbf{v}_h\|_{\mathbf{V}} + C_2 \, h^{k+2} |\mathbf{f}|_{k+1}.
\end{equation}
We observe that for both VEM and Q2-P1, the pressures $p_1$ and $p_2$ do not belong to the discrete pressure space. Therefore we expect that the discrete Q2-P1 velocities are polluted by the pressure approximation.
Table \ref{tab1} shows the results obtained respectively with VEM and Q2-P1 for the case of polynomial pressure $p_1$ and sequence of meshes $\mathcal{Q}_h$.
We observe that the virtual element method yields an exact hydrostatic velocity solution, since $\mathbf{f}$ is a polynomial of degree two, while the Q2-P1 finite element method, in accordance with the a priori estimate \eqref{eq:errorfem}, shows non-negligible errors in the velocity.
\begin{table}[!h]
\centering
\begin{tabular}{ll*{3}{c}}
\toprule
& $h$ & ${\rm error}(\mathbf{u}, H^1)$ & ${\rm error}(\mathbf{u}, L^2)$ & ${\rm error}(p, L^2)$ \\
\midrule
\multirow{4}*{VEM}
&$1/10$ &$7.157458e-16$ &$2.565404e-17$ &$2.117754e-03$ \\
&$1/20$ &$1.524395e-15$ &$2.597817e-17$ &$5.489919e-04$ \\
&$1/40$ &$1.610876e-15$ &$1.589614e-17$ &$1.377769e-04$ \\
&$1/80$ &$9.630624e-15$ &$4.590908e-17$ &$3.465069e-05$ \\
\midrule
\multirow{4}*{Q2-P1}
&$1/10$ &$5.328708e-04$ &$9.142870e-06$ &$8.202921e-03$ \\
&$1/20$ &$1.486154e-04$ &$1.278884e-06$ &$2.623095e-03$ \\
&$1/40$ &$4.105136e-05$ &$1.737273e-07$ &$3.433991e-04$ \\
&$1/80$ &$1.006121e-05$ &$2.164782e-08$ &$8.511695e-05$ \\
\bottomrule
\end{tabular}
\caption{Test \ref{test1}: Errors with VEM and Q2-P1 for polynomial pressure $p_1$ and meshes $\mathcal{Q}_h$.}
\label{tab1}
\end{table}
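As a quick cross-check of Table \ref{tab1}, the empirical convergence rates can be extracted with a few lines of Python. The sketch below is purely illustrative: the error values are copied from the table, and the helper \texttt{rates} is ours.

```python
import math

# Errors copied from Table 1 (meshes Q_h with h = 1/10, 1/20, 1/40, 1/80).
h = [1 / 10, 1 / 20, 1 / 40, 1 / 80]
vem_p_L2 = [2.117754e-03, 5.489919e-04, 1.377769e-04, 3.465069e-05]
fem_u_H1 = [5.328708e-04, 1.486154e-04, 4.105136e-05, 1.006121e-05]

def rates(errs, hs):
    """Empirical rates log(e_i / e_{i+1}) / log(h_i / h_{i+1})."""
    return [math.log(errs[i] / errs[i + 1]) / math.log(hs[i] / hs[i + 1])
            for i in range(len(errs) - 1)]

print(rates(vem_p_L2, h))  # VEM pressure rates, close to 2
print(rates(fem_u_H1, h))  # Q2-P1 velocity rates, also approaching 2
```

The machine-precision VEM velocity errors in the table carry, of course, no meaningful rate.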
On the other hand we note that, due to the load approximation procedure, there is a load-dependent term in the right hand side of \eqref{eq:errorvem}. As a consequence, in the test with trigonometric pressure $p_2$ (where the load ${\bf f}$ is not a polynomial) we expect a slight pollution of the velocity errors also for the VEM scheme, although much smaller than for the FEM case.
In Figure \ref{fig:test1} we plot the errors for the trigonometric pressure $p_2$ and the same sequence of meshes $\mathcal{Q}_h$. In accordance with the a priori estimates \eqref{eq:errorfem}, \eqref{eq:errorvem} and the above observation, we obtain a quadratic convergence rate for the Q2-P1 finite element method, and a fourth order convergence rate for the VEM scheme for the $H^1$-velocity error (quadratic for the $L^2$-pressure errors).
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{test1}
\caption{Test \ref{test1}: Errors with VEM and Q2-P1 for the meshes $\mathcal{Q}_h$.}
\label{fig:test1}
\end{figure}
\end{test}
\begin{test} [\textbf{Vanishing external load}]
\label{test2}
In this test we consider two benchmark Navier-Stokes problems taken from \cite{benchmark} on the disk $\Omega_{\rm D}$, comparing the results obtained with the VEM discretization with those obtained with the standard P2-P1 element for the sequence of meshes $\mathcal{T}_h$. The solutions are chosen in such a way that the pressures balance the nonlinear convective term, yielding a vanishing external load $\mathbf{f} = \mathbf{0}$.
In the first example we take $\nu = 1$ and the exact solution
\[
\mathbf{u}_1(x, y) = \begin{pmatrix}-y \\ x \end{pmatrix} \qquad p_1(x, y) = -\frac{x^2 + y^2}{2} + \frac{1}{4}
\]
We notice that the velocity $\mathbf{u}_1$ belongs to the discrete space for both VEM and P2-P1 schemes.
In Table \ref{tab2} we show the results obtained with the P2-P1 element and the VEM discretization, in which we use respectively the trilinear form $c_h(\cdot; \, \cdot, \cdot)$ of \eqref{eq:c_h}, labelled as ${\rm VEM}_{\rm non-skew}$, and the skew-symmetric form $\widetilde{c}_h(\cdot; \, \cdot, \cdot)$ of \eqref{eq:ctilde_h}, labelled as ${\rm VEM}_{\rm skew}$ (cf. Remark \ref{rem:non-skew}).
\begin{table}[!h]
\centering
\begin{tabular}{ll*{3}{c}}
\toprule
& $h$ & ${\rm error}(\mathbf{u}, H^1)$ & ${\rm error}(\mathbf{u}, L^{\infty})$ & ${\rm error}(p, L^2)$ \\
\midrule
\multirow{4}*{${\rm VEM}_{\rm non-skew}$}
&$ 1/5$ &$3.409332e-13$ &$2.564615e-14$ &$3.379691e-03$ \\
&$1/10$ &$8.055803e-13$ &$3.158584e-14$ &$8.512726e-04$ \\
&$1/20$ &$1.769002e-12$ &$6.561418e-14$ &$2.135981e-04$ \\
&$1/40$ &$4.080531e-12$ &$8.147236e-14$ &$5.352940e-05$ \\
\midrule
\multirow{4}*{${\rm VEM}_{\rm skew}$}
&$ 1/5$ &$5.738500e-05$ &$3.252807e-06$ &$3.379691e-03$ \\
&$1/10$ &$1.510897e-05$ &$5.101225e-07$ &$8.512726e-04$ \\
&$1/20$ &$3.438742e-06$ &$7.243032e-08$ &$2.135981e-04$ \\
&$1/40$ &$6.894319e-07$ &$8.940261e-09$ &$5.352940e-05$ \\
\midrule
\multirow{4}*{P2-P1}
&$ 1/5$ &$4.658371e-04$ &$4.031058e-05$ &$3.416287e-03$ \\
&$1/10$ &$1.470468e-04$ &$6.057622e-06$ &$8.666102e-04$ \\
&$1/20$ &$2.760305e-05$ &$7.740773e-07$ &$2.160190e-04$ \\
&$1/40$ & fail to converge & fail to converge & fail to converge \\
\bottomrule
\end{tabular}
\caption{Test \ref{test2}: Errors with VEM and P2-P1 with solution $(\mathbf{u}_1, p_1)$ and meshes $\mathcal{T}_h$.}
\label{tab2}
\end{table}
We observe that the ${\rm VEM}_{\rm non-skew}$ yields an exact solution $\mathbf{u}_h = \mathbf{u}_1$. Indeed, in this simple case it holds
\[
c_h(\mathbf{u}_1; \, \mathbf{u}_1, \mathbf{v}_h) = c(\mathbf{u}_1; \, \mathbf{u}_1, \mathbf{v}_h) \qquad \text{for all $\mathbf{v}_h \in \mathbf{V}_h$.}
\]
This property is not satisfied by the skew-symmetric trilinear form $\widetilde{c}_h(\cdot; \, \cdot, \cdot)$. The P2-P1 element does not yield the exact velocity solution, since the velocity error of the method is polluted by the approximation of the pressure.
In the second example we set $\nu = 1$ and we consider the exact solution
\[
\mathbf{u}_2(x, y) = 3\, \begin{pmatrix} x^2 - y^2 \\ -2 x y \end{pmatrix} \qquad p_2(x, y) = 9\, \frac{(x^2 + y^2)^2}{2} - \frac{3}{2}.
\]
In Figure \ref{fig:test2tri} we show the results.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{test2_tri}
\caption{Test \ref{test2}: Errors with VEM and P2-P1 with solution $(\mathbf{u}_2, p_2)$ and meshes $\mathcal{T}_h$.}
\label{fig:test2tri}
\end{figure}
We are in a similar situation to the previous example (the velocity $\mathbf{u}_2$ belongs to the discrete spaces, whereas the pressure $p_2$ does not) but with an important difference.
Also in this case we observe that ${\rm VEM}_{\rm non-skew}$ provides a better performance than ${\rm VEM}_{\rm skew}$, but now $\mathbf{u}_h \neq \mathbf{u}_2$. Indeed, in this case it holds
\[
\big| c_h(\mathbf{u}_2; \, \mathbf{u}_2, \mathbf{v}_h) - c(\mathbf{u}_2; \, \mathbf{u}_2, \mathbf{v}_h) \big| \leq C \, h^{k+2} |(\boldsymbol{\nabla}\mathbf{u} )\, \mathbf{u}|_{k+1} \, || \mathbf{v}_h ||_{\mathbf{V}} \qquad \text{for all $\mathbf{v}_h \in \mathbf{V}_h$}
\]
and using similar steps as in the proof of Theorem \ref{thm:u}, for ${\rm VEM}_{\rm non-skew}$ we can derive
\[
\|\mathbf{u} - \mathbf{u}_h \|_{\mathbf{V}} \leq C \, h^{k+2} \, \|(\boldsymbol{\nabla}\mathbf{u}) \, \mathbf{u}\|_{k+1} .
\]
Instead, for ${\rm VEM}_{\rm skew}$ we can only obtain
\[
\|\mathbf{u} - \mathbf{u}_h \|_{\mathbf{V}} \leq C \, h^{k} \, |\mathbf{u} \cdot \mathbf{u}|_k.
\]
Finally, Figure \ref{fig:test2vor} displays the results obtained with ${\rm VEM}_{\rm non-skew}$ and ${\rm VEM}_{\rm skew}$ for the sequence of polygonal meshes $\mathcal{V}_h$ (see Figure \ref{meshd}).
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{test2_vor}
\caption{Test \ref{test2}: Errors with VEM and P2-P1 with solution $(\mathbf{u}_2, p_2)$ and meshes $\mathcal{V}_h$.}
\label{fig:test2vor}
\end{figure}
\end{test}
\begin{test}
\label{test3}
In this example we test the Navier-Stokes equation on the domain $\Omega_{\rm Q}$ with different values of the fluid viscosity $\nu$. We choose the load term $\mathbf{f}$ in such a way that the analytical solution is
\[
\mathbf{u}(x,y) = 0.1 \, \begin{pmatrix}
x^2 (1 - x)^2 \, (2 y - 6 y^2 + 4 y^3)\\
- y^2 (1 - y)^2 \, (2 x - 6 x^2 + 4 x^3)
\end{pmatrix}
\qquad
p(x,y) = x^3 \, y^3 - \frac{1}{16}.
\]
The aim of this test is to check the actual performance of the virtual element method for small viscosity parameters, in comparison with the standard P2-P1 mixed finite element method.
Figure \ref{fig:test3} shows that the solutions of the virtual element method are accurate even for rather
small values of $\nu$. Larger velocity errors appear only for very small viscosity parameters. The reason for this robustness is again that the ``divergence free'' property of VEM yields velocity errors that do not depend directly on the pressure (but only indirectly, through the higher order load approximation term, see Theorem \ref{thm:u}). On the contrary, for the P2-P1 element the pressure component of the error can become the dominant source of error for the velocity field as well. In addition, we note that for $\nu=10^{-4}, 10^{-5}$ the P2-P1 element does not even converge.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{test3_h1}
\caption{Test \ref{test3}: Errors of VEM (dotted lines) and P2-P1 (solid lines), with different values of $\nu$ for the meshes $\mathcal{T}_h$.}
\label{fig:test3}
\end{figure}
\end{test}
\begin{test}
\label{test4}
In this test we solve the Navier-Stokes equation on the square domain $\Omega_{\rm Q}$ with viscosity $\nu = 0.1$ and with
the load term $\mathbf{f}$ chosen such that the analytical solution is
\[
\mathbf{u}(x,y) = \frac{1}{2} \, \begin{pmatrix}
\sin(2 \pi x)^2 \, \sin(2 \pi y) \, \cos(2 \pi y) \\
- \sin(2 \pi y)^2 \, \sin(2 \pi x) \, \cos(2 \pi x)
\end{pmatrix} \qquad
p(x,y) = \pi^2 \, \sin(2 \pi x) \, \cos(2 \pi y).
\]
In Figure \ref{fig:test4} we show the results obtained for the sequence of triangular meshes $\mathcal{T}_h$, also compared with the P2-P1 element.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{test4_tri}
\caption{Test \ref{test4}: Errors with VEM and P2-P1 for the meshes $\mathcal{T}_h$.}
\label{fig:test4}
\end{figure}
We notice that the theoretical predictions of Section \ref{sec:4} are confirmed. Moreover, we observe that the virtual element method exhibits smaller errors than the standard P2-P1 method, at least for this example and with the adopted meshes.
Finally we test the virtual element method with the sequence of polygonal meshes $\mathcal{W}_h$, obtaining that the theoretical results are confirmed as well, see Figure \ref{fig:test4_esa} (note that $N_{dof}$ behaves like $h^{-2}$).
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{test4_esa}
\caption{Test \ref{test4}: Errors with VEM for the meshes $\mathcal{W}_h$.}
\label{fig:test4_esa}
\end{figure}
\end{test}
\begin{test}
\label{test5}
In this experiment we analyse the Stokes equation on the square domain $\Omega_{\rm Q}$, with viscosity $\nu = 1$ and
the load term $\mathbf{f}$ is chosen such that the analytical solution is
\[
\mathbf{u}(x,y) = \frac{1}{2} \, \begin{pmatrix}
\sin(2 \pi x)^2 \, \sin(2 \pi y) \, \cos(2 \pi y) \\
- \sin(2 \pi y)^2 \, \sin(2 \pi x) \, \cos(2 \pi x)
\end{pmatrix} \qquad
p(x,y) = \sin(2 \pi x) \, \cos(2 \pi y).
\]
The aim of this test is the assessment of the VEM robustness with respect to the mesh deformation, performing also a comparison with the Q2-P1 mixed finite element method.
In Figures \ref{fig:test5_storti} and \ref{fig:test5_ultra_storti} we plot the obtained results.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{test5_storti}
\caption{Test \ref{test5}: Errors with VEM and Q2-P1 for the meshes $\mathcal{Q}_h$.}
\label{fig:test5_storti}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{test5_ultra}
\caption{Test \ref{test5}: Errors with VEM and Q2-P1 for the meshes $\mathcal{U}_h$.}
\label{fig:test5_ultra_storti}
\end{figure}
\end{test}
We observe that for the (less deformed) quadrilateral meshes $\mathcal{Q}_h$, both the virtual element method and the Q2-P1 preserve the theoretical order of accuracy, but the Q2-P1 element yields better results.
Instead, for the (more deformed) sequence of meshes $\mathcal{U}_h$, the behaviour is completely different. The virtual element approach maintains the optimal second order accuracy, whereas the Q2-P1 element clearly exhibits sub-optimal convergence rates (the pressure does not even seem to converge). Therefore, we may conclude that the VEM seems to be more robust with respect to large distortions of the mesh.
\begin{test}
\label{test6}
This test highlights that, following \cite{Stokes:divfree,preprintdarcy}, the proposed virtual elements can accommodate both the Stokes (or Navier-Stokes) and the Darcy problems simultaneously (see Remark \ref{rem:Sto-Dar}).
Accordingly, we consider the approximation of a flow in the square $[0, 2]^2$,
consisting of a porous region $\Omega_D := \Omega_{D1} \cup \Omega_{D2}$, where the flow is a Darcy flow, and an open region $\Omega_S = \Omega \setminus \Omega_D$,
where the flow is governed by the linear Stokes system (see Figure \ref{fig:test6_problem} for a depiction of the problem configuration).
This leads to consider the problem: find $(\mathbf{u},p) \in [H^1(\Omega)]^2 \times L^2(\Omega)$ such that
\begin{equation}
\label{eq:coupled}
\left\{
\begin{aligned}
& - 2 \nu \, {\rm div} (\epsilon (\mathbf{u})) - \nabla p = \mathbf{0} \quad & &\text{in $\Omega_S$,} \\
& {\rm div} \, \mathbf{u} = 0 \quad & &\text{in $\Omega_S$,} \\
& \mathbf{u}_1 = \phi, \quad & &\text{on $\{0\} \times [0, 2]$,} \\
& \mathbf{u}_2 = 0 \quad & &\text{on $[0, 1] \times \{0, 2\}$,} \\
\end{aligned}
\right.
\qquad
\left\{
\begin{aligned}
& \nu \, \lambda \mathbf{u} - \nabla p = \mathbf{0} \quad & &\text{in $\Omega_D$,} \\
& {\rm div} \, \mathbf{u} = 0 \quad & &\text{in $\Omega_D$,} \\
& \mathbf{u}_2 = 0 \quad & &\text{on $[1, 2] \times \{0, 2\}$,} \\
\end{aligned}
\right.
\end{equation}
where $\epsilon(\mathbf{u}) := \frac{1}{2}({\boldsymbol{\nabla}} \mathbf{u} + {\boldsymbol{\nabla}} \mathbf{u}^T)$ denotes the symmetric gradient operator. We fix $\nu =1$, $\lambda= 10$ on $\Omega_{D1}$, and $\lambda= 2$ on $\Omega_{D2}$. Furthermore, we take
\[
\phi(0,y) = \max \{0, -10(1-y)(2-y)\}.
\]
At the interface between Stokes and Darcy regions, the system \eqref{eq:coupled} is coupled using the Beavers-Joseph-Saffmann condition (see \cite{beavers1967boundary,saffman1971boundary} for further details).
We observe that in our test problem, we set free boundary conditions on the right boundary edge of the Darcy region.
To test the performance of the virtual element method, we compute the unknown flux quantities (see Figure \ref{fig:test6_problem})
\[
f_{R1} := \int_{\partial \Omega \cap \partial \Omega_{D1}} \mathbf{u} \cdot \mathbf{n} \, {\rm d}s \qquad \text{and} \qquad
f_{R2} := \int_{\partial \Omega \cap \partial \Omega_{D2}} \mathbf{u} \cdot \mathbf{n} \, {\rm d}s
\]
taking into account that $f_L + f_{R1} + f_{R2} = 0$ and that only the inflow edge contributes to
\[
f_{L} := \int_{\partial \Omega \cap \partial \Omega_{S}} \mathbf{u} \cdot \mathbf{n} \, {\rm d}s = -\int_0^2 \phi(0,y) \, {\rm d}y = -\frac{10}{6} \,,
\]
with $\mathbf{n}$ denoting the outward normal vector.
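Since the inflow datum integrates to $\int_0^2 \phi(0,y)\,{\rm d}y = 10/6$, the flux balance forces $f_{R1} + f_{R2}$ to match this value up to the discretization error. The short Python sketch below (ours; midpoint quadrature, with the reference-mesh fluxes copied from Table \ref{tab6}) verifies both facts:

```python
# Flux balance check for Test 6: the inflow datum integrates to 10/6,
# which must equal f_R1 + f_R2 up to the discretization error.
def phi(y):
    # boundary datum phi(0, y) = max{0, -10 (1 - y) (2 - y)}
    return max(0.0, -10.0 * (1.0 - y) * (2.0 - y))

n = 20000                       # midpoint-rule subintervals on [0, 2]
dy = 2.0 / n
inflow = sum(phi((i + 0.5) * dy) for i in range(n)) * dy
print(inflow)                   # ~ 1.6666667 = 10/6

# reference-mesh fluxes (Q_{1/64}) copied from Table 6
f_R1, f_R2 = 5.373938e-01, 1.129272e+00
print(f_R1 + f_R2)              # ~ 1.6666658, matching 10/6
```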
\begin{figure}[!h]
\centering
\includegraphics[scale=0.6]{meshtest6flusso}
\caption{Test \ref{test6}: The domain configuration of Problem \eqref{eq:coupled}.}
\label{fig:test6_problem}
\end{figure}
In this experiment the computational domain $\Omega:= [0,2]^2$ is partitioned using two sequences of polygonal meshes:
\begin{itemize}
\item $\{ \mathcal{Q}_{h} \}_h$: sequence of square meshes with element edge length $h=1/4, 1/8, 1/16, 1/32$ ;
\item $\{ \mathcal{P}_h\}_h$: sequence of meshes obtained by gluing a Voronoi decomposition on the domain $\Omega_S$, a triangular decomposition on $\Omega_{D1}$ and a square decomposition on $\Omega_{D2}$, with edge length $h=1/4, 1/8, 1/16, 1/32$.
\end{itemize}
In addition, we use the square mesh $\mathcal{Q}_{1/64}$ as the basis for the reference solution.
An example of the adopted meshes is shown in Figure \ref{fig:test6_mesh}. In Figure \ref{fig:test6_velocity}
we show the plot of the numerical velocity and pressure. Note that the purpose of the mesh family $\{ \mathcal{P}_h\}_h$ is to show the robustness of the proposed method when, by exploiting the flexibility of polygonal grids, completely independent meshes are glued together.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.8]{test6_mesh}
\vspace{-2.0cm}
\caption{Test \ref{test6}: Example of the adopted polygonal meshes: $\mathcal{Q}_{1/4}$ (left), $\mathcal{P}_{1/4}$ (right).}
\label{fig:test6_mesh}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.28]{flusso}
\includegraphics[scale=0.23]{pressione}
\vspace{-0.5cm}
\caption{Test \ref{test6}: Velocity and pressure, obtained respectively with the meshes $\mathcal{Q}_{1/8}$ and $\mathcal{Q}_{1/32}$.}
\label{fig:test6_velocity}
\end{figure}
In Table \ref{tab6} we show the results obtained by using the sequences of meshes $\mathcal{Q}_h$ and $\mathcal{P}_h$, compared with those obtained with the reference mesh.
We observe that both sequences of meshes exhibit appropriate convergence properties, confirming that the proposed virtual element method
can automatically handle non-conforming polygonal meshes and the coupling between Darcy and Stokes flow problems.
\begin{table}[!h]
\centering
\begin{tabular}{ll*{3}{c}}
\toprule
& $h$ & $f_{R1}$ & $f_{R2}$ & $f_{L} + f_{R1} + f_{R2}$ \\
\midrule
\multirow{1}*{reference mesh}
&$1/64$ &$5.373938e-01$ &$1.129272e+00$ &$-1.332267e-15$ \\
\midrule
\multirow{4}*{$\mathcal{Q}_h$}
&$1/4$ &$5.215469e-01$ &$1.145119e+00$ &$0$ \\
&$1/8$ &$5.284186e-01$ &$1.138248e+00$ &$4.440892e-16$ \\
&$1/16$ &$5.339269e-01$ &$1.132739e+00$ &$-4.44089e-16$ \\
&$1/32$ &$5.367736e-01$ &$1.129893e+00$ &$6.661338e-16$ \\
\midrule
\multirow{4}*{$\mathcal{P}_h$}
&$1/4$ &$5.161923e-01$ &$1.158934e+00$ &$4.440892e-16$ \\
&$1/8$ &$5.254995e-01$ &$1.143962e+00$ &$-4.44089e-16$ \\
&$1/16$ &$5.343364e-01$ &$1.132836e+00$ &$-2.22044e-16$ \\
&$1/32$ &$5.381031e-01$ &$1.128622e+00$ &$0$ \\
\bottomrule
\end{tabular}
\caption{Test \ref{test6}: Fluxes along the boundary for the sequences of meshes $\mathcal{Q}_h$ and $\mathcal{P}_h$.}
\label{tab6}
\end{table}
\end{test}
\section{Acknowledgements}
The authors L. Beir\~ao da Veiga and G. Vacca were partially supported by the European Research Council through
the H2020 Consolidator Grant (grant no. 681162) CAVE, Challenges and Advancements in Virtual Elements. This support is gratefully acknowledged.
\addcontentsline{toc}{section}{\refname}
\bibliographystyle{plain}
\section{Introduction}
Casimir physics deals with the ubiquitous London-van der Waals dispersion forces, arising from the spontaneous polarization of neutral atoms and molecules, in a regime where retardation effects are not negligible. Accordingly, the resulting Casimir forces between macroscopic bodies are truly quantum and relativistic in nature.
In a pioneering work dating back to 1948 \cite{Cas48}, following a suggestion of Bohr, Hendrik Casimir made a groundbreaking theoretical prediction: two parallel, neutral conducting plates would experience a mutually attractive force $F = \hbar c\, {\pi^2 \over 240}\,{\Sigma/a^4}$ ($a$ and $\Sigma$ denoting, respectively, the distance between the plates and their surface area), due to a variation of the electromagnetic quantum vacuum energy induced by the presence of the plates themselves. This astonishing result indicates that a detailed microscopic description of the plates constituents is actually unnecessary for the computation of the previously mentioned dispersion forces, at least to leading order. Indeed, it is sufficient to consider effective models where relativistic quantum fields are influenced by classical boundaries, external potentials, or even curved or topologically non-trivial background geometries.
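To fix the orders of magnitude involved, Casimir's formula corresponds to an attractive pressure $|F|/\Sigma = \hbar c \, \pi^2/(240\, a^4)$; the following sketch (ours, restoring SI units) evaluates it at a micron-scale separation:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant [J s]
c = 2.99792458e8        # speed of light [m / s]

def casimir_pressure(a):
    """Attractive Casimir pressure |F| / Sigma for ideal plates at distance a [m]."""
    return hbar * c * math.pi ** 2 / (240.0 * a ** 4)

print(casimir_pressure(1e-6))  # ~ 1.3e-3 Pa at a = 1 micrometre
```

The steep $a^{-4}$ dependence explains why the effect only becomes appreciable at sub-micron separations.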
Building on this crucial feature, the study of the Casimir effect has nowadays become a well-established and extremely active line of research, both on the theoretical and on the experimental side. Here we content ourselves with mentioning the classical essays \cite{BKMM09,BMM01,DMRR11,KMM09,Milt01,MT97}, also making reference to the vast literature cited therein.
Assuming that quantum fields are confined by perfectly reflecting boundaries is a strong idealization: understandably, no real material is going to behave as a perfect conductor in any frequency range of the electromagnetic field. It comes as no surprise that a price must be paid for this simplification. As first pointed out by Deutsch and Candelas in 1979 \cite{DC79}, renormalized expectation values of local observables, such as the vacuum energy density, generically diverge in a non-integrable way as the boundary is approached. This leads inevitably to the emergence of anomalies in the computation of the associated global observables (see also \cite{BBLPRS15,FS98,KCD80}). Similar issues appear even if the confinement of the quantum field is produced by a smooth external potential diverging at infinity \cite{FP15}. On the contrary, no pathologies are expected to occur when the external potential is regular and vanishes rapidly enough at large distances.
An intermediate regime between smooth confining potentials and hard boundaries can be realized through singular zero-range potentials. Their mathematical description ultimately amounts to prescribing suitable boundary conditions for the quantum field on sets of small co-dimension ($1,2$ or $3$), where the distributional potentials are supposed to be concentrated. At the same time, such singular potentials can often be interpreted as limits (in resolvent sense) of sharply peaked, regular potentials. More technical details on these subjects can be found, e.g., in \cite{AGHH05,AK99,NGMMR20,Po01}. Nowadays, a quite rich literature is available regarding the analysis of Casimir-type settings with external zero-range potentials.
The Casimir effect in presence of surface Dirac delta potentials, interpreted as semi-transparent walls responsible for a partial confinement of the quantum field, was first addressed by Mamaev and Trunov \cite{MT81} and later examined in various configurations by several authors \cite{BBK,BHR92,CMK07,FL08,GJK02,K06,M04,MMM13}. More recently, considerable attention was devoted to the study of renormalized vacuum expectations of global observables (such as the total energy) in presence of generalized zero-range interactions concentrated on sets of co-dimension 1, corresponding to mixtures of $\delta-\delta'$ potentials \cite{AM13,BMS19,BSA16,CFP17,MM15,MSDT20}. Before proceeding, let us also mention that various models with point impurities, modelled via distributional potentials concentrated on sets of co-dimension 3, were analysed in \cite{ACSZ10,ACS16,BM15,BP17,FPSym,Grats18,Grats19,Sca05,SZ09}.
The present work studies the vacuum fluctuations of a canonically quantized, neutral scalar field in $(d+1)$-dimensional Minkowski spacetime (with $d \geqslant 1$) in presence of a flat hyperplane of co-dimension $1$. Both the massive and massless theories are considered. The presence of the hyperplane is described in terms of boundary conditions for the field and its normal derivative. It is worth remarking that all local, homogeneous and isotropic boundary conditions compatible with the unitarity of the quantum field theory are taken into account. Of course, two qualitatively different scenarios are allowed. The first one corresponds to a perfectly reflecting plane, yielding a total confinement of the field on either of the half-spaces that it separates; this setting is naturally portrayed in terms of classical boundary conditions of Dirichlet, Neumann or Robin type. The second one refers to a semitransparent plane, which can be tunnelled through by the quantum field; this situation is described making reference to generalized $\delta$-$\delta'$ potentials concentrated on the plane.
The main object of investigation is the vacuum polarization, namely the renormalized expectation value of the field squared at any spacetime point. This is computed by implementing the $\zeta$-regularization technique in the formulation outlined in \cite{WSB} (see also \cite{FP11,FP16,DF20}), which allows one to derive explicit integral representations in all cases of interest. These representations are then employed to determine the asymptotic behaviour of the vacuum polarization close to the hyperplane and far away from it. In this connection, the primary purpose is to inspect the presence of boundary divergences. For a perfectly reflecting hyperplane, it is found that the vacuum polarization always diverges near the plane (logarithmically for $d = 1$ and with a power law for $d \geqslant 2$, with respect to the distance from the plane); notably, the leading order term in the asymptotic expansion is always independent of the parameters describing specific boundary conditions. Similar divergences also occur for a semitransparent plane; however, in this case the leading order asymptotics depend explicitly on the parameters appearing in the characterization of the boundary conditions. Moreover, the leading order divergent contribution is absent for a specific choice of the parameters, corresponding to a pure Dirac delta potential. Some motivations explaining why this very model plays a somehow distinguished role are presented.
The paper is organized as follows. Section \ref{sec:gen} provides an overview of the local zeta regularization framework described in \cite{WSB}. In Section \ref{sec:perfect} the renormalized vacuum polarization for a scalar field in presence of a perfectly reflecting plane is analysed. The analogous observable in the case of a semitransparent plane is examined in Section \ref{sec:semitr}. In both Sections \ref{sec:perfect} and \ref{sec:semitr} the case of a massive field is first considered, and the corresponding massless theory is subsequently addressed by a limiting procedure. Finally, Appendix \ref{app:heatperf} presents a self-contained derivation of the heat kernel on the half-line for generic Robin boundary conditions at the origin, a tool used in the computations of Section \ref{sec:perfect}.
\section{General theory}\label{sec:gen}
The purpose of this section is to present a brief and self-contained summary of some general techniques extracted from \cite{WSB}, to be systematically employed in the sequel.
\subsection{Quantum field theory and the fundamental operator.}\label{subsec:QFTA}
We work in natural units of measure ($c = 1$, $\hbar = 1$) and identify $(d + 1)$-dimensional Minkowski spacetime with $\mathbb{R}^{d+1}$ using a set of inertial coordinates $(x^{\mu})_{\mu \,=\, 0,1,\,...\,,\,d} \equiv (t,\mathbf{x})$, such that the Minkowski metric has components $(\eta_{\mu\nu}) = \mbox{diag}\{-1,1,...,1\}$.
Making reference to the standard formalism of canonical quantization, we describe a neutral scalar field living on a spatial domain $\Omega \subset \mathbb{R}^{d}$ as an operator-valued distribution $\hat{\phi} : (t,\mathbf{x}) \in \mathbb{R} \times \Omega \mapsto \hat{\phi}(t,\mathbf{x}) \in \mathcal{L}_{sa}(\mathfrak{F})$. Here $\mathfrak{F}$ is the bosonic Fock space constructed on the single-particle Hilbert space $L^2(\Omega)$ of square-integrable functions, and $\mathcal{L}_{sa}(\mathfrak{F})$ is the set of unbounded self-adjoint operators on it. We denote with $|0\rangle \in \mathfrak{F}$ the corresponding vacuum state (not to be confused with the true Minkowskian vacuum) and assume that the dynamics is determined by a generalized Klein-Gordon equation of the form
\begin{equation*}
(\partial_{tt} + \mathcal{A}) \hat{\phi} = 0\,,
\end{equation*}
where $\mathcal{A} : \mbox{dom}(\mathcal{A}) \subset L^2(\Omega) \to L^2(\Omega)$ is a non-negative and self-adjoint operator on the single-particle Hilbert space. The non-negativity of $\mathcal{A}$ is in fact an indispensable requirement for a well-behaved quantum field theory, free of pernicious instabilities.
In typical applications, $\mathcal{A}$ is a Schr\"odinger-type differential operator on the spatial domain $\Omega$, possibly including a static external potential $V: \Omega \to \mathbb{R}$, \textit{i.e.},
\begin{equation*}
\mathcal{A} = - \Delta + V\,.
\end{equation*}
Correspondingly, whenever the spatial domain has a boundary $\partial \Omega$ it is essential to specify suitable conditions on it. We understand these boundary conditions to be encoded in the definition of the operator self-adjointness domain $\mbox{dom}(\mathcal{A})$. It goes without saying that the class of admissible potentials and boundary conditions is restricted by the fundamental hypotheses of self-adjointness and non-negativity for $\mathcal{A}$.
The configurations analysed in the present work regard a scalar field influenced solely by the presence of an either perfectly reflecting or semitransparent hyperplane $\pi$ which, without loss of generality, can be parametrized as
\begin{equation}\label{eq:pi}
\pi = \{\mathbf{x} \equiv (x_1,\dots,x_d) \in \mathbb{R}^{d}\,|\,x_1 = 0\}\,.
\end{equation}
As already mentioned in the Introduction, the coupling between the field and this hyperplane can always be described in terms of suitable boundary conditions for the field and its normal derivative on $\pi$. Accordingly, in all cases we shall characterize the fundamental operator $\mathcal{A}$ as a self-adjoint extension of the closable symmetric operator $(- \Delta + m^2) \!\upharpoonright\! C^{\infty}_c(\mathbb{R}^d \setminus \pi)$ on $L^2(\mathbb{R}^d \setminus \pi) \equiv L^2(\mathbb{R}^d)$ (here $m = const. \geqslant 0$ indicates the mass of the field). We refer to subsection \ref{subsec:fact} for more details.
\subsection{$\zeta$-regularization and renormalization}
As well known, a quantum field theory of the type outlined in the preceding subsection is typically plagued by ultraviolet divergences. A viable way to cure these divergences is the $\zeta$-regularization approach described in \cite{WSB}. Following this approach and assuming for now that $\mathcal{A}$ is strictly positive,\footnote{With this we mean that the spectrum $\sigma(\mathcal{A})$ is contained in $[\varepsilon,+\infty)$ for some $\varepsilon > 0$. In the sequel we will indicate how to relax this condition.} we firstly introduce the \textit{$\zeta$-smeared field operator}
\begin{equation*}
\hat{\phi}^{u} := (\mathcal{A}/\kappa^2)^{-u/4}\, \hat{\phi}\,;
\end{equation*}
here $u \in \mathbb{C}$ is the regularizing parameter and $\kappa > 0$ is a mass scale factor, included for dimensional reasons. Notice that the initial non-regularized theory is formally recovered setting $u = 0$.
Next, we consider the \textit{$\zeta$-regularized vacuum polarization} at any spacetime point $(t,\mathbf{x}) \in \mathbb{R} \times \Omega$, that is the regularized 2-point function at equal times evaluated along the space diagonal $\{\mathbf{x},\mathbf{y} \in \Omega\,|\,\mathbf{y} = \mathbf{x}\}$:
\begin{equation*}
\langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle \equiv \langle 0| \hat{\phi}^{u}(t,\mathbf{x}) \hat{\phi}^{u}(t,\mathbf{y}) |0\rangle \Big|_{\mathbf{y} = \mathbf{x}}\,.
\end{equation*}
This quantity can be expressed in terms of the integral kernel associated to a suitable complex power of the fundamental operator $\mathcal{A}$; more precisely, we have \cite[Eq. (2.26)]{WSB}
\begin{equation}\label{eq:fiuA}
\langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle = {\kappa^u \over 2}\,\mathcal{A}^{-{u+1 \over 2}}(\mathbf{x},\mathbf{y})\Big|_{\mathbf{y} = \mathbf{x}}\,.
\end{equation}
Notice that the expression on the right-hand side of \eqref{eq:fiuA} does not depend on the time coordinate $t \in \mathbb{R}$, as expected for static configurations like the ones we are considering.
On very general grounds it can be shown that the function $(\mathbf{x},\mathbf{y}) \mapsto \mathcal{A}^{-{u+1 \over 2}}(\mathbf{x},\mathbf{y})$ belongs to $C^{j}(\Omega \times \Omega)$ for any $u \in \mathbb{C}$ and $j \in \{1,2,3,\dots\}$ such that $\mbox{Re}\, u > d - 1 + j$. Especially, let us remark that the said function is regular along the diagonal $\{\mathbf{y} = \mathbf{x}\}$ for $\mbox{Re}\, u$ large enough. Furthermore, for any fixed $\mathbf{x},\mathbf{y} \in \Omega$ (even for $\mathbf{y} = \mathbf{x}$), the map $u \mapsto \mathcal{A}^{-{u+1 \over 2}}(\mathbf{x},\mathbf{y})$ is analytic in the complex strip $\{u \in \mathbb{C}\,|\,\mbox{Re}\, u > d-1\}$ and possesses a meromorphic extension to the whole complex plane $\mathbb{C}$ with at most simple pole singularities \cite{DFT,WSB,Minak,Seeley}.
In light of these results, we proceed to define the \textit{renormalized vacuum polarization} at $(t,\mathbf{x}) \in \mathbb{R} \times \Omega$ as
\begin{equation}\label{eq:fi2ren}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren} := RP\Big|_{u = 0} \langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle\,.
\end{equation}
Here and in the following we denote with $RP\big|_{u = 0}$ the regular part of the Laurent expansion near $u = 0$.\footnote{For any complex-valued meromorphic function $f$ defined in a complex neighbourhood of $u = 0$, making reference to its Laurent expansion $f(u) = \sum_{\ell = -\infty}^{+\infty} f_\ell\,u^\ell$ we define the regular part as $(RP\, f)(u) = \sum_{\ell = 0}^{+\infty} f_\ell\,u^\ell$, which yields in particular $RP\big|_{u = 0}\, f = f_0$.} Notably, if no pole arises at $u = 0$, Eq. \eqref{eq:fi2ren} simply amounts to evaluating the analytic continuation at this point and ultraviolet renormalization is attained with no need to subtract divergent quantities; on the contrary, when the meromorphic extension has a pole at $u = 0$, Eq. \eqref{eq:fi2ren} matches a minimal subtraction prescription \cite{BVW,WSB,WaldZ}.
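As an illustrative numerical aside (not part of the derivation), the prescription $RP\big|_{u = 0}$ can be tested on the Euler Gamma function, whose Laurent expansion at the origin reads $\Gamma(u) = {1 \over u} - \gamma_{EM} + \mathcal{O}(u)$, with $\gamma_{EM} \simeq 0.5772$ the Euler-Mascheroni constant; hence $RP\big|_{u = 0}\, \Gamma = -\gamma_{EM}$. A minimal Python sketch subtracts the pole part numerically:

```python
import math

def regular_part_at_zero(f, pole_coeff, u=1e-6):
    """Approximate RP|_{u=0} f for a function with a simple pole c/u at the
    origin, by evaluating f(u) - c/u at a small value of u."""
    return f(u) - pole_coeff / u

# Gamma(u) = 1/u - gamma_EM + O(u), hence RP|_{u=0} Gamma = -gamma_EM
rp = regular_part_at_zero(math.gamma, 1.0)
print(rp)  # close to -0.57721...
```

The same subtraction of the pole part is what the prescription \eqref{eq:fi2ren} amounts to whenever the meromorphic extension is singular at $u = 0$.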
Before we proceed, let us point out that a modification of the above construction is required whenever the fundamental operator $\mathcal{A}$ is non-negative but not strictly positive, namely, when its spectrum contains a right neighbourhood of $0$. In this case an infrared cut-off must be added in advance, and ultimately removed after renormalization of ultraviolet divergences. For example, one can replace $\mathcal{A}$ with $\mathcal{A} + m^2$ ($m > 0$) and compute the limit $m \to 0^{+}$ at last, after analytic continuation at $u = 0$. Concerning the present work, this modification plays a key role when the field is massless ($m = 0$); in this case the renormalized vacuum polarization is determined as the zero mass limit ($m \to 0^{+}$) of the analogous quantity in the massive theory, namely,
\begin{equation}\label{eq:fi2renm0}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)} := \lim_{m \to 0^{+}} \langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massive)}\,.
\end{equation}
\subsection{Factorized configurations}\label{subsec:fact}
Let us now restrict our attention to product configurations with
$$ \Omega = \Omega_1 \times \mathbb{R}^{d-1} \,, \qquad \mathcal{A} = \mathcal{A}_1 \otimes \mathbf{1}_{d-1} + \mathbf{1}_{1} \otimes (-\Delta_{d-1})\,, $$
where $\Omega_1 \subset \mathbb{R}$ is any open interval, $\mathcal{A}_1$ is a positive self-adjoint operator on $L^2(\Omega_1)$, and $-\Delta_{d-1}$ indicates the free Laplacian on $\mathbb{R}^{d-1}$ with $\mbox{dom}(-\Delta_{d-1}) = H^2(\mathbb{R}^{d-1}) \subset L^2(\mathbb{R}^{d-1})$. It is implied that $\mbox{dom}(\mathcal{A}) = \overline{\mbox{dom}(\mathcal{A}_1) \!\otimes\! H^2(\mathbb{R}^{d-1})} \subset L^2(\Omega) \equiv L^2(\Omega_1) \!\otimes\! L^2(\mathbb{R}^{d-1})$.
Under these circumstances, everything is determined upon factorization by the reduced operator $\mathcal{A}_1$ acting on the 1-dimensional spatial domain $\Omega_1$. In particular, let us highlight that for any $\tau> 0$ the heat kernel $e^{-\tau \mathcal{A}}(\mathbf{x},\mathbf{y})$ and the reduced analogue $e^{-\tau \mathcal{A}_1}(x_1,y_1)$ fulfil
\begin{equation*}
e^{-\tau \mathcal{A}}(\mathbf{x},\mathbf{y}) = e^{-\tau \mathcal{A}_1}(x_1,y_1)\; e^{\tau \Delta_{d-1}}(\mathbf{x}_{d-1},\mathbf{y}_{d-1})\,,
\end{equation*}
where we put $\mathbf{x}_{d-1} \equiv (x_2,\dots,x_d) \in \mathbb{R}^{d-1}$ and denoted with $e^{\tau \Delta_{d-1}}(\mathbf{x}_{d-1},\mathbf{y}_{d-1})$ the free heat kernel in $\mathbb{R}^{d-1}$, that is
\begin{equation*}
e^{\tau \Delta_{d-1}}(\mathbf{x}_{d-1},\mathbf{y}_{d-1}) = {1 \over (4\pi \tau)^{d-1 \over 2}}\;e^{-{|\mathbf{x}_{d-1}-\,\mathbf{y}_{d-1}|^2 \over 4\tau}}\,.
\end{equation*}
Taking into account the above considerations, from the basic Mellin-type identity
\begin{equation*}
\mathcal{A}^{-s}(\mathbf{x},\mathbf{y}) = {1 \over \Gamma(s)} \int_{0}^{\infty} d\tau\;\tau^{s-1}\,e^{-\tau \mathcal{A}}(\mathbf{x},\mathbf{y})\,,
\end{equation*}
we infer by direct evaluation
\begin{equation*}
\mathcal{A}^{-s}(\mathbf{x},\mathbf{y})\Big|_{\mathbf{y} = \mathbf{x}} = {1 \over (4\pi)^{d-1 \over 2}\, \Gamma(s)} \int_{0}^{\infty} d\tau\;\tau^{s-{d+1 \over 2}}\,e^{-\tau \mathcal{A}_1}(x_1,y_1)\Big|_{y_1 = x_1}\,.
\end{equation*}
Together with Eq.\,\eqref{eq:fiuA}, the latter relation allows us to derive the following representation formula for the $\zeta$-regularized vacuum polarization, valid for $u \in \mathbb{C}$ with $\mbox{Re}\, u > d - 1$:
\begin{equation}\label{eq:fiuheat}
\langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle
= {\kappa^u \over 2\,(4\pi)^{d-1 \over 2}\, \Gamma({u+1 \over 2})} \int_{0}^{\infty} d\tau\;\tau^{{u - d \over 2}}\,e^{-\tau \mathcal{A}_1}(x_1,y_1)\Big|_{y_1 = x_1}\,.
\end{equation}
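The Mellin-type identity underlying Eq. \eqref{eq:fiuheat} can be tested numerically in the simplest setting, replacing the operator by a positive number $a$, so that $a^{-s} = {1 \over \Gamma(s)} \int_0^\infty d\tau\, \tau^{s-1}\, e^{-a \tau}$. The following sketch (the substitution $\tau = e^x$ and the truncation bounds are ad hoc numerical choices) uses a plain trapezoidal rule:

```python
import math

def mellin_power(a, s, x_min=-40.0, x_max=8.0, n=4000):
    """Compute a^{-s} as (1/Gamma(s)) int_0^inf tau^{s-1} e^{-a tau} dtau,
    after the substitution tau = e^x (trapezoidal rule on a truncated range)."""
    h = (x_max - x_min) / n
    total = 0.0
    for k in range(n + 1):
        x = x_min + k * h
        f = math.exp(s * x - a * math.exp(x))  # tau^s e^{-a tau}, tau = e^x
        total += f if 0 < k < n else 0.5 * f
    return total * h / math.gamma(s)

# Example: a = 2, s = 1.5; the exact value is 2^{-1.5}
approx = mellin_power(2.0, 1.5)
exact = 2.0 ** (-1.5)
print(approx, exact)
```

The trapezoidal rule converges very rapidly here because the transformed integrand decays quickly at both ends of the truncated range.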
Let us now return to the configurations portrayed at the end of subsection \ref{subsec:QFTA}, involving a scalar field restrained by the presence of a hyperplane $\pi$. Whenever the effective interaction between the field and $\pi$ is isotropic and homogeneous along the hyperplane, these configurations exhibit the factorization property discussed above. More precisely, referring to the parametrization \eqref{eq:pi} of $\pi$, under the hypotheses just mentioned it is natural to consider the reduced domain $\Omega_1 = \mathbb{R} \setminus \{0\}$ and to characterize the associated operator $\mathcal{A}_1$ as a self-adjoint extension of the symmetric operator $(-\,\partial_{x_1 x_1}\!+m^2) \upharpoonright C^{\infty}_c(\mathbb{R} \setminus \{0\})$.\linebreak
We recall that the domains of these self-adjoint extensions are indeed restrictions of the maximal domain $H^2(\mathbb{R} \setminus \{0\}) \equiv H^2(\mathbb{R}_{-}) \oplus H^2(\mathbb{R}_{+})$ (where $\mathbb{R}_{+} \equiv (0,+\infty)$ and $\mathbb{R}_{-} \equiv (-\infty,0)$), determined by suitable boundary conditions at the gap point $x_1 = 0$.
As a matter of fact, the models discussed in the upcoming Sections \ref{sec:perfect} and \ref{sec:semitr} encompass all admissible self-adjoint realizations of the reduced operator $-\,\partial_{x_1 x_1}\!+m^2$ on $\mathbb{R}\setminus \{0\}$, respecting the basic requirement of positivity.
Notice however that this scheme does not reproduce the entire class of (positive) self-adjoint realizations of the full operator $-\Delta + m^2$ on $\mathbb{R}^{d} \setminus \pi$, since non-homogeneous and non-local self-adjoint realizations are being omitted.\footnote{Let us mention a pair of examples which are not covered by our analysis. On one hand, local but non-homogeneous boundary conditions appear in the description of $\delta$-potentials supported on $\pi$ with non-constant coupling coefficients, formally corresponding to operators like $-\Delta + m^2 + \alpha(\mathbf{x}_{d-1})\, \delta_{\pi}$ (with $\mathbf{x}_{d-1} \equiv (x_2,\dots,x_d) \in \mathbb{R}^{d-1} \simeq \pi$). On the other hand, homogeneous but non-local boundary conditions are used to characterize operators like $-\Delta + m^2 + \alpha(-\Delta_{d-1})\, \delta_{\pi}$, with $\alpha(-\Delta_{d-1})$ a suitable self-adjoint operator on $L^2(\pi)$ defined by functional calculus \cite{CFP17}.}
In the following sections, after providing a precise definition of the reduced operator $\mathcal{A}_1$ under analysis and an explicit expression for the associated heat kernel $e^{-\tau \mathcal{A}_1}(x_1,y_1)$, we proceed to construct the analytic continuation of the map $u \mapsto \langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle$ to the whole complex plane starting from the representation formula \eqref{eq:fiuheat}. The renormalized vacuum polarization is ultimately computed following the prescriptions \eqref{eq:fi2ren} and \eqref{eq:fi2renm0}.
\section{Perfectly reflecting plane}\label{sec:perfect}
In this section we analyse the admissible scenarios where the hyperplane $\pi$ behaves as a perfectly reflecting surface, providing a total decoupling of the two half-spaces which it separates. To this purpose, taking into account the general arguments presented in the preceding Section \ref{sec:gen} and making reference to \cite[Thm. 3.2.3]{AK99}, we consider the family of reduced operators labelled as follows by the elements $\mathbf{h}^{\pm} \!= (h_0^{\pm}, h_1^{\pm})$ of the real projective space $\mathbf{P}_1$:
\begin{eqnarray}
& \mbox{dom}(\mathcal{A}_1) := \big\{\psi \in H^2(\mathbb{R} \!\setminus\! \{0\})\,\big|\, h_0^{+} \psi'(0^{+}) = h_1^{+} \psi(0^{+}),\, h_0^{-} \psi'(0^{-}) = h_1^{-} \psi(0^{-})\big\}\,, \nonumber\\
& \mathcal{A}_1\, \psi = (-\,\partial_{x_1 x_1}\! + m^2)\, \psi \quad \mbox{in\; $\mathbb{R} \!\setminus\! \{0\}$}\,.\label{eq:Aperf}
\end{eqnarray}
Let us remark that the above definition of $\mbox{dom}(\mathcal{A}_1)$ entails classical boundary conditions of Neumann, Dirichlet or Robin type, chosen independently on the two sides of the gap point $x_1 = 0$ (\textit{viz.}, on the two sides of the hyperplane $\pi$). Especially, Neumann conditions correspond to $h_1^{\pm} = 0$ ($h_0^{\pm} \neq 0$) and Dirichlet ones to $h_0^{\pm} = 0$ ($h_1^{\pm} \neq 0$). In passing, we also mention that the Casimir effect for Robin boundary conditions was previously analysed in \cite{EOS09,RS02,SAD06}.
For convenience of presentation, in place of the projective labels $\mathbf{h}^{\pm} \!= (h_0^{\pm}, h_1^{\pm})$ we introduce the parameters
\begin{equation}\label{eq:bpm}
b_{\pm} := \pm \,(h_1^{\pm}/h_0^{\pm}) \in \mathbb{R} \cup \{+\infty\}\,,
\end{equation}
intending $b_{\pm} = +\infty$ if $h_0^{\pm} = 0$. Accordingly, the boundary conditions in Eq. \eqref{eq:Aperf} become
\begin{equation}\label{eq:bcb}
\mp \psi'(0^{\pm}) + b_{\pm}\, \psi(0^{\pm}) = 0\,,
\end{equation}
with the implication that $\psi(0^{\pm}) = 0$ if $b_{\pm} = +\infty$. Notice that we fixed the overall $\pm$ signs in the definition \eqref{eq:bpm} of $b_{\pm}$ so that both conditions in Eq. \eqref{eq:bcb} comply with the canonical form $(\partial_{n}\psi + b_{\pm} \psi)\big|_{x_1 \,=\, 0^{\pm}}\! = 0$, where $n$ denotes the unit outer normal. Of course, Neumann and Dirichlet boundary conditions are retrieved for $b_{\pm} = 0$ and $b_{\pm} = +\infty$, respectively. Let us also emphasize that, with our units of measure, the parameters $b_{\pm}$ are dimensionally equivalent to a mass.
For any $b_{+},b_{-} \in \mathbb{R} \cup \{+\infty\}$ the spectrum of the reduced operator $\mathcal{A}_1$ comprises a purely absolutely continuous part, which is the same for all admissible boundary conditions, and at most two isolated eigenvalues below the continuous threshold, depending on the signs of $b_{\pm}$. More precisely, we have
\begin{eqnarray*}
& \sigma(\mathcal{A}_1) = \sigma_{ac}(\mathcal{A}_1) \cup \sigma_p(\mathcal{A}_1)\,, \qquad
\sigma_{ac}(\mathcal{A}_1) = [m^2,+\infty)\,, \nonumber \\
& \sigma_{p}(\mathcal{A}_1) = \left\{\!\begin{array}{ll}
\displaystyle{ \varnothing } & \displaystyle{ \mbox{for\, $b_{+},b_{-} \!\in\! [0,+\infty) \cup \{+\infty\}$}, } \vspace{0.1cm}\\
\displaystyle{ \big\{m^2 - b_{+}^2\big\} } & \displaystyle{ \mbox{for\, $b_{+} \!\in\! (-\infty,0)$, $b_{-} \!\in\! [0,+\infty) \cup \{+\infty\}$}, } \vspace{0.1cm}\\
\displaystyle{ \big\{m^2 - b_{-}^2\big\} } & \displaystyle{ \mbox{for\, $b_{+} \!\in\! [0,+\infty) \cup \{+\infty\}$, $b_{-} \!\in\! (-\infty,0)$}, } \vspace{0.1cm}\\
\displaystyle{ \big\{m^2 - b_{+}^2, m^2 - b_{-}^2\big\} } & \displaystyle{ \mbox{for\, $b_{+},b_{-}\! \in\! (-\infty,0)$}. }
\end{array}\right.
\end{eqnarray*}
This makes it evident that the required positivity of $\mathcal{A}_1$ is ensured if and only if
\begin{equation}\label{eq:conbm}
b_{+},b_{-} \in (-m,+\infty) \cup \{+\infty\}\,, \qquad m > 0\,,
\end{equation}
two conditions which we assume to be fulfilled until the end of this section.
The heat kernel associated to the aforementioned reduced operator $\mathcal{A}_1$ is given by (see Appendix \ref{app:heatperf}; $\theta(\cdot)$ is the Heaviside step function)
\begin{align}\label{eq:heatrob}
e^{-\tau \mathcal{A}_1}(x_1,y_1) & = {e^{- m^2 \tau} \over \sqrt{4\pi \tau}} \left[
\theta(x_1)\, \theta(y_1) \left(
e^{- {|x_1 - y_1|^2 \over 4\tau}}\! + e^{- {|x_1 + y_1|^2 \over 4\tau}} \!
- 2b_{+}\! \int_{0}^{\infty}\! dw\; e^{- b_{+} w \,- {(w + x_1 + y_1)^2 \over 4 \tau}}
\right)\right. \\
& \qquad \left. +\, \theta(-x_1)\, \theta(-y_1) \left(
e^{- {|x_1 - y_1|^2 \over 4\tau}}\! + e^{- {|x_1 + y_1|^2 \over 4\tau}}\!
- 2b_{-}\! \int_{0}^{\infty}\! dw\; e^{- b_{-} w \,- {(w - x_1 - y_1)^2 \over 4 \tau}}
\right) \right]. \nonumber
\end{align}
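As a consistency check, the kernel \eqref{eq:heatrob} must satisfy the Robin condition \eqref{eq:bcb} in each variable at the boundary. For $x_1, y_1 > 0$ the $w$-integral can be evaluated in closed form via the standard Gaussian integral $\int_0^\infty dw\; e^{-b w - (w+c)^2/(4\tau)} = \sqrt{\pi \tau}\; e^{b c + b^2 \tau}\, \mathrm{erfc}\big({c + 2 b \tau \over 2 \sqrt{\tau}}\big)$, which makes a numerical test straightforward. In the sketch below (the parameter values are arbitrary) the condition $-\partial_{x_1} e^{-\tau \mathcal{A}_1}(0^{+},y_1) + b_{+}\, e^{-\tau \mathcal{A}_1}(0^{+},y_1) = 0$ is verified by finite differences:

```python
import math

def robin_heat_kernel(x, y, tau, m, b):
    """Heat kernel e^{-tau A_1}(x, y) on the half line x, y > 0 with Robin
    condition -psi'(0+) + b psi(0+) = 0; the w-integral is written via erfc."""
    c = x + y
    integral = math.sqrt(math.pi * tau) * math.exp(b * c + b * b * tau) \
        * math.erfc((c + 2.0 * b * tau) / (2.0 * math.sqrt(tau)))
    bracket = math.exp(-(x - y) ** 2 / (4.0 * tau)) \
        + math.exp(-(x + y) ** 2 / (4.0 * tau)) - 2.0 * b * integral
    return math.exp(-m * m * tau) / math.sqrt(4.0 * math.pi * tau) * bracket

# Check the Robin boundary condition as x -> 0+ by central differences
tau, m, b, y = 0.3, 1.0, 0.7, 0.5
eps, delta = 1e-4, 1e-5
k_val = robin_heat_kernel(eps, y, tau, m, b)
k_der = (robin_heat_kernel(eps + delta, y, tau, m, b)
         - robin_heat_kernel(eps - delta, y, tau, m, b)) / (2.0 * delta)
residual = -k_der + b * k_val
print(residual)  # small compared to k_val, and vanishing as eps -> 0
```

The kernel is also manifestly symmetric in $(x_1,y_1)$, as required for a self-adjoint realization.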
Inserting this expression into Eq. \eqref{eq:fiuheat}, we obtain the following integral representation of the $\zeta$-regularized vacuum polarization:
\begin{align}\label{eq:fiuperf}
& \langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle
= {\kappa^u \over 2\,(4\pi)^{d/2}\, \Gamma({u+1 \over 2})} \int_{0}^{\infty} d\tau\;\tau^{{u - d - 1 \over 2}}\, e^{- m^2 \tau} \;\times\\
& \qquad \times \!\left[
1 + e^{- {(x_1)^2 \over \tau}}\!
- 2b_{+}\,\theta(x_1)\!\int_{0}^{\infty}\!\! dw\; e^{- b_{+} w \,- {(w + 2 x_1)^2 \over 4 \tau}}\!
- 2b_{-}\,\theta(-x_1)\! \int_{0}^{\infty}\!\! dw\; e^{- b_{-} w \,- {(w - 2 x_1)^2 \over 4 \tau}}
\right]\,. \nonumber
\end{align}
In accordance with the general theory outlined in Section \ref{sec:gen}, it can be checked by direct inspection that the above representation \eqref{eq:fiuperf} makes sense for
\begin{equation}\label{eq:Reu0}
u \in \mathbb{C} \quad \mbox{with} \quad \mbox{Re}\, u > d - 1\,,
\end{equation}
a condition needed especially to ensure the convergence of the integral with respect to the variable $\tau$ for $\tau \to 0^{+}$. Besides, the right-hand side of Eq. \eqref{eq:fiuperf} is an analytic function of $u$ inside the semi-infinite complex strip identified by Eq. \eqref{eq:Reu0}.
In order to determine the meromorphic extensions of the map $u \mapsto \langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle $ to the whole complex plane, let us first make a brief digression to mention a pair of identities involving the Euler Gamma function $\Gamma$ and the modified Bessel function of second kind $K_{\nu}$. On one hand, we have \cite[Eq. 5.9.1]{NIST}
\begin{equation}\label{eq:idGamma}
\int_{0}^{\infty}\! d\tau\;\tau^{\nu - 1}\, e^{-m^2 \tau} = m^{-2\nu}\, \Gamma(\nu)\,,
\qquad \mbox{for all\; $m>0$, $\nu \in \mathbb{C}$ with $\mbox{Re}\, \nu > 0$}\,;
\end{equation}
on the other hand, from \cite[Eq. 10.32.10]{NIST} we deduce
\begin{equation}\label{eq:idBesselK}
\int_{0}^{\infty}\! d\tau\;\tau^{\nu - 1}\, e^{-m^2 \tau - {p^2 \over \tau}}
= 2^{\nu + 1}\,p^{2\nu}\, \mathfrak{K}_{-\nu}(2 m p)\,,
\qquad \mbox{for all\; $m,p>0$,\, $\nu \in \mathbb{C}$}\,,
\end{equation}
where, for later convenience, we introduced the functions ($\nu \in \mathbb{C}$)
\begin{equation}\label{eq:KKdef}
\mathfrak{K}_{\nu} : (0,+\infty) \to \mathbb{C}\,, \qquad \mathfrak{K}_{\nu}(w) := w^{\nu}\,K_{\nu}(w)\,.
\end{equation}
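For half-integer orders the modified Bessel function is elementary, $K_{\pm 1/2}(w) = \sqrt{\pi/(2w)}\, e^{-w}$, so the identity \eqref{eq:idBesselK} can be tested numerically without special-function libraries. The sketch below (the values $m = 1$, $p = 0.8$ and the truncation bounds are arbitrary) compares both sides for $\nu = 1/2$, where the left-hand side also admits the closed form $(\sqrt{\pi}/m)\, e^{-2 m p}$:

```python
import math

def lhs_integral(m, p, nu, x_min=-15.0, x_max=15.0, n=6000):
    """int_0^inf tau^{nu-1} exp(-m^2 tau - p^2/tau) dtau via tau = e^x and
    the trapezoidal rule (the integrand decays double-exponentially)."""
    h = (x_max - x_min) / n
    total = 0.0
    for k in range(n + 1):
        x = x_min + k * h
        f = math.exp(nu * x - m * m * math.exp(x) - p * p * math.exp(-x))
        total += f if 0 < k < n else 0.5 * f
    return total * h

def frak_K(nu, w):
    """frak{K}_nu(w) = w^nu K_nu(w), here only for nu = +/- 1/2,
    using the elementary form K_{1/2}(w) = sqrt(pi/(2w)) e^{-w}."""
    return w ** nu * math.sqrt(math.pi / (2.0 * w)) * math.exp(-w)

m, p, nu = 1.0, 0.8, 0.5
lhs = lhs_integral(m, p, nu)
rhs = 2.0 ** (nu + 1.0) * p ** (2.0 * nu) * frak_K(-nu, 2.0 * m * p)
print(lhs, rhs)
```

Note that $\mathfrak{K}_{-\nu}$ appears on the right-hand side, and $K_{-\nu} = K_{\nu}$ has been used inside `frak_K`.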
Let us now return to Eq. \eqref{eq:fiuperf}. Firstly, notice that the integration order therein can be exchanged by Fubini's theorem, for any $u$ as in Eq. \eqref{eq:Reu0}. Then, using the previous identities \eqref{eq:idGamma} and \eqref{eq:idBesselK}, by a few additional manipulations we obtain
\begin{align*}
\langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle
& = {\kappa^u \over 2\,(4\pi)^{d/2}\, \Gamma({u+1 \over 2})} \Bigg[
m^{d - 1 - u}\; \Gamma\left({u - d + 1 \over 2}\right)
+ 2^{{u - d + 3 \over 2}}\,|x_1|^{u - d + 1}\, \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m |x_1|\big)
\\
& \qquad
- 2^{{u - d + 5 \over 2}} b_{+}\,\theta(x_1)\!\int_{0}^{\infty}\!\! dw\; e^{- b_{+} w}\, (w/2 + x_1)^{u - d + 1}\, \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m (w/2 + x_1)\big)
\\
& \qquad
- 2^{{u - d + 5 \over 2}} b_{-}\,\theta(-x_1)\! \int_{0}^{\infty}\!\! dw\; e^{- b_{-} w}\, (w/2 - x_1)^{u - d + 1}\, \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m (w/2 - x_1)\big)
\Bigg]\,, \nonumber
\end{align*}
which, via the change of integration variable $w = 2 |x_1|\, v$, becomes
\begin{align}
\langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle
& = {m^{d - 1}(\kappa/m)^u\, \Gamma({u - d + 1 \over 2}) \over 2^{d+1} \pi^{d/2}\, \Gamma({u+1 \over 2})}
+ {2^{{u - 3d + 1 \over 2}}\, \big(\kappa |x_1|\big)^{\!u} \over \pi^{d/2}\, \Gamma({u+1 \over 2})\, |x_1|^{d - 1}} \, \mathfrak{K}_{{d - 1 - u \over 2}\!}\big(2 m |x_1|\big) \nonumber
\\
& \qquad
- {\theta(x_1)\; b_{+}\,2^{{u - 3d + 5 \over 2}} \big(\kappa |x_1|\big)^{\!u} \over \pi^{d/2}\,\Gamma({u+1 \over 2})\,|x_1|^{d - 2}} \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{+} |x_1|\, v} \over (v + 1)^{d - 1 - u}}\, \mathfrak{K}_{{d - 1 - u \over 2}}\!\big(2 m |x_1| (v + 1)\big)\nonumber
\\
& \qquad
- {\theta(-x_1)\; b_{-}\, 2^{{u - 3d + 5 \over 2}} \big(\kappa |x_1|\big)^{\!u} \over \pi^{d/2}\,\Gamma({u+1 \over 2})\, |x_1|^{d - 2}} \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{-} |x_1|\, v} \over (v + 1)^{d - 1 - u}}\, \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m |x_1| ( v + 1)\big)\,. \label{eq:fiuAC}
\end{align}
Recall that the reciprocal of the Gamma function is analytic on the whole complex plane. On the other hand, the Gamma function appearing in the numerator of the first term is a meromorphic function of $u$, with simple poles where its argument equals a non-positive integer, \textit{i.e.},
\begin{equation}\label{eq:poles}
u = d-1-2\ell\,, \qquad \mbox{with\; $\ell \in \{0,1,2,\dots\}$}\,.
\end{equation}
On the other hand, from basic features of the modified Bessel function $K_{\nu}$
we infer that the function $\mathfrak{K}_{\nu}$ introduced in Eq. \eqref{eq:KKdef} fulfils the following: for any fixed $w > 0$, the map $\nu \mapsto \mathfrak{K}_{\nu}(w)$ is analytic on the whole complex plane \cite[\S 10.25(ii)]{NIST}; for any fixed $\nu \in \mathbb{C}$, the map $w \in (0,+\infty) \mapsto \mathfrak{K}_{\nu}(w)$ is analytic, continuous up to $w = 0$ and decaying with exponential speed for $w \to +\infty$ \cite[\S 10.31 and Eqs. 10.25.2, 10.27.4, 10.40.2]{NIST}.\footnote{Consider the integrals appearing in Eq. \eqref{eq:fiuAC}. Since the integrand functions therein are continuous at $v = 0$, the lower extreme of integration is never problematic. On the other hand, from \cite[Eq. 10.40.2]{NIST} we infer
$$ e^{- 2 b_{\pm} |x_1|\, v}\, (v + 1)^{u - d + 1} \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m |x_1| ( v + 1)\big)
= \sqrt{\pi \over 2}\; e^{- 2 m |x_1|} \big(2 m |x_1|\big)^{{d - 2 - u \over 2}}\;
e^{- 2 (b_{\pm} + m)\, |x_1|\, v}\, v^{{u - d \over 2}} \Big(1 + o(1)\Big)
\quad \mbox{for\; $v \to +\infty$}\,, $$
which shows that the condition $b_{\pm} > - m$ established in Eq. \eqref{eq:conbm} is in fact indispensable to grant the convergence of the said integrals.}
In light of the above considerations, Eq. \eqref{eq:fiuAC} does in fact provide the meromorphic extension of the $\zeta$-regularized vacuum polarization $\langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle$ to the whole complex plane, with isolated simple pole singularities at the points indicated in Eq. \eqref{eq:poles}. We can then proceed to compute the renormalized vacuum polarization, implementing the general prescription \eqref{eq:fi2ren}. In this regard, special attention must be paid to the first term on the right-hand side of Eq. \eqref{eq:fiuAC}, since it presents a pole at $u = 0$ when the space dimension $d$ is odd. More precisely, using some basic properties of the Gamma function \cite[\S 5]{NIST} and indicating with $H_{\ell} := \sum_{j = 1}^{\ell} {1 \over j}$ the $\ell$-th harmonic number for $\ell \in \{0,1,2, \dots\}$ ($H_{0} \equiv 0$ by convention), we deduce
\begin{equation*}
{(\kappa/m)^u\, \Gamma({u - d + 1 \over 2}) \over \Gamma({u+1 \over 2})} = \left\{\!\!\begin{array}{ll}
\displaystyle{{(-1)^{{d \over 2}}\,\sqrt{\pi} \over \Gamma({d + 1 \over 2})} + \mathcal{O}(u)} & \displaystyle{\mbox{for $d$ even}\,,} \vspace{0.1cm}\\
\displaystyle{{(-1)^{{d-1 \over 2}} \over \sqrt{\pi}\; \Gamma({d+1 \over 2})} \left[{2 \over u} + H_{{d-1 \over 2}} + 2\log \left({2\kappa \over m}\right) \right] + \mathcal{O}(u)} & \displaystyle{\mbox{for $d$ odd}\,.}
\end{array}\right.
\end{equation*}
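For odd $d$ the expansion above can be checked numerically; e.g., for $d = 3$ and $\kappa = m$ it predicts $\Gamma({u - 2 \over 2})/\Gamma({u + 1 \over 2}) = -{1 \over \sqrt{\pi}}\big[{2 \over u} + 1 + 2 \log 2\big] + \mathcal{O}(u)$, since $H_1 = 1$ and $\Gamma(2) = 1$. A quick sketch (the small value of $u$ is an arbitrary choice):

```python
import math

def gamma_ratio(u, d):
    """(kappa/m)^u Gamma((u-d+1)/2) / Gamma((u+1)/2), with kappa = m."""
    return math.gamma((u - d + 1.0) / 2.0) / math.gamma((u + 1.0) / 2.0)

d, u = 3, 1e-4
value = gamma_ratio(u, d)
# subtract the predicted simple pole -2/(sqrt(pi) u) for d = 3
finite_part = value + 2.0 / (math.sqrt(math.pi) * u)
predicted = -(1.0 + 2.0 * math.log(2.0)) / math.sqrt(math.pi)
print(finite_part, predicted)
```

The finite part matches the prediction up to corrections of order $u$.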
Noting that all other terms in Eq. \eqref{eq:fiuAC} are regular at $u = 0$, from \eqref{eq:fi2ren} we obtain
\begin{equation}\label{eq:firen}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren} = \langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)} + \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}\,;
\end{equation}
\begin{equation}\label{eq:firenm}
\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)} = \left\{\!\!\begin{array}{ll}
\displaystyle{ {(-1)^{{d \over 2}}\,\pi\;m^{d - 1} \over (4 \pi)^{{d + 1 \over 2}}\; \Gamma({d + 1 \over 2})} } & \displaystyle{\mbox{for $d$ even}\,,} \vspace{0.1cm} \\
\displaystyle{ {(-1)^{{d-1 \over 2}}\; m^{d - 1} \over (4 \pi)^{{d + 1 \over 2}}\; \Gamma({d+1 \over 2})} \left[H_{{d-1 \over 2}} + 2\log \left({2\kappa \over m}\right) \right]} & \displaystyle{\mbox{for $d$ odd}\,;}
\end{array}\right.
\end{equation}
\begin{align}\label{eq:firenpi}
\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
& := {1 \over 2^{{3d - 1 \over 2}} \pi^{{d+1 \over 2}}\, |x_1|^{d - 1}}\, \Bigg[
\mathfrak{K}_{{d - 1 \over 2}\!}\big(2 m |x_1|\big)
\\
& \qquad
- \theta(x_1)\; 4 b_{+} |x_1| \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{+} |x_1|\, v} \over (v + 1)^{ d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}} \big(2 m |x_1| (v + 1)\big)\nonumber
\\
& \qquad
- \theta(-x_1)\; 4 b_{-} |x_1| \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{-} |x_1|\, v} \over (v + 1)^{ d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1| ( v + 1)\big)\Bigg]\,. \nonumber
\end{align}
It is worth remarking that $\langle 0| \hat{\phi}^{2}|0\rangle_{ren}^{(free)}$ is in fact a constant which depends solely on the mass $m$ of the field and, possibly, on the renormalization mass parameter $\kappa$ (if the space dimension $d$ is odd). In particular, it does not depend on the coordinate $x_1$, namely, the distance from the hyperplane $\pi$, nor on the parameters $b_{\pm}$ defining the boundary conditions on $\pi$. For these reasons it is natural to regard $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ as a pure free-theory contribution (which explains the choice of the superscript). In contrast, $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ is a contribution which truly accounts for the presence of the hyperplane and for the boundary conditions on it.
Owing to the above considerations, one might be tempted to discard the free-theory term $\langle 0| \hat{\phi}^{2}|0\rangle_{ren}^{(free)}$ and regard $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ as the only physically relevant contribution to the vacuum polarization. Despite being tenable, this standpoint actually suffers from a drawback. Indeed, let us anticipate that $\langle 0| \hat{\phi}^{2}|0\rangle_{ren}^{(free)}$ plays a key role in the cancellation of some infrared divergences which would otherwise affect the massless theory in space dimension $d = 1$ (see the subsequent paragraph \ref{subsubsec:m0d1}). Therefore, we reject the standpoint sketched above and regard the sum $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}$ defined in Eq. \eqref{eq:firen} as the true physically sensible observable.
\subsection{Neumann and Dirichlet conditions}
We already mentioned in the comments below Eq. \eqref{eq:bcb} that Neumann and Dirichlet boundary conditions correspond to $b_{\pm} = 0$ and $b_{\pm} = +\infty$, respectively. Of course, the free-theory contribution $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ remains unchanged in both cases, so let us focus on the term $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$.
In the case of Neumann conditions, corresponding to $b_{\pm} = 0$, the expressions in the second and third lines of Eq. \eqref{eq:firenpi} vanish identically. Regarding the case of Dirichlet conditions, the limits $b_{\pm} \to +\infty$ can be easily computed as follows, making the change of integration variable $z = 2 b_{\pm} |x_1|\, v$ and using the dominated convergence theorem:
\begin{align*}
& \lim_{b_{\pm} \to +\infty} \Bigg[4 b_{\pm} |x_1| \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{\pm} |x_1|\, v} \over (v + 1)^{ d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1| (v + 1)\big) \Bigg]\\
& = \lim_{b_{\pm} \to +\infty} \Bigg[2 \int_{0}^{\infty}\!\! dz\; {e^{- z} \over \big({z \over 2 b_{\pm} |x_1|} + 1\big)^{ d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}}\!\left(2 m |x_1| \Big({z \over 2 b_{\pm} |x_1|} + 1\Big)\right) \Bigg] \\
& = 2\, \mathfrak{K}_{{d - 1 \over 2}}\!\big(2 m |x_1|\big) \int_{0}^{\infty}\!\! dz\; e^{- z}
= 2\, \mathfrak{K}_{{d - 1 \over 2}}\!\big(2 m |x_1|\big)\,.
\end{align*}
Summarizing, Eq. \eqref{eq:firenpi} reduces to
\begin{equation}\label{eq:firenpiDN}
\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
= \pm\,{\mathfrak{K}_{{d - 1 \over 2}\!}\big(2 m |x_1|\big) \over 2^{{3d - 1 \over 2}} \pi^{{d+1 \over 2}}\, |x_1|^{d - 1}}\,,
\end{equation}
for Neumann (+) and Dirichlet (-) boundary conditions, respectively.
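The interpolation between the Robin formula \eqref{eq:firenpi} and the Dirichlet limit can also be observed numerically. For $d = 2$ one has $\mathfrak{K}_{1/2}(w) = \sqrt{\pi/2}\, e^{-w}$, so all ingredients are elementary; the sketch below (the values $m = 1$, $|x_1| = 0.5$ and the quadrature parameters are ad hoc choices) checks that $4 b |x_1| \int_0^\infty dv\; e^{-2 b |x_1| v}\, \mathfrak{K}_{1/2}\big(2 m |x_1| (v+1)\big)/(v+1) \to 2\, \mathfrak{K}_{1/2}(2 m |x_1|)$ as $b \to +\infty$:

```python
import math

def frak_K_half(w):
    """frak{K}_{1/2}(w) = w^{1/2} K_{1/2}(w) = sqrt(pi/2) e^{-w}."""
    return math.sqrt(math.pi / 2.0) * math.exp(-w)

def robin_integral_term(b, x, m, n=100000):
    """4 b x int_0^inf dv e^{-2 b x v} frak_K_{1/2}(2 m x (v+1)) / (v+1),
    by the trapezoidal rule on a truncated range."""
    v_max = 40.0 / (2.0 * (b + m) * x)  # integrand decays like e^{-2(b+m)x v}
    h = v_max / n
    total = 0.0
    for k in range(n + 1):
        v = k * h
        f = math.exp(-2.0 * b * x * v) * frak_K_half(2.0 * m * x * (v + 1.0)) / (v + 1.0)
        total += f if 0 < k < n else 0.5 * f
    return 4.0 * b * x * total * h

m, x = 1.0, 0.5
dirichlet_limit = 2.0 * frak_K_half(2.0 * m * x)
approx = robin_integral_term(200.0, x, m)
print(approx, dirichlet_limit)  # within roughly 1% for b = 200
```

The residual discrepancy shrinks like $1/b$, consistently with the dominated-convergence argument above.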
\subsection{Asymptotics for $x_1 \to 0^{\pm}$ and $x_1 \to \pm \infty$}\label{subsec:asy}
Hereafter we investigate the behaviour of the renormalized vacuum polarization $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}$ close to the hyperplane $\pi$ and far away from it. For brevity we only present the leading order asymptotics, although a refinement of the arguments outlined below would allow us to derive asymptotic expansions at any order.
Before proceeding, let us stress once more that $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ does not depend on the coordinate $x_1$; thus, it is sufficient to analyse the term $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ (see Eq. \eqref{eq:firenpi}).
\subsubsection{The limit $x_1 \to 0^{\pm}$}\label{par:x10}
Let us first notice that the functions $\mathfrak{K}_{\nu}$ defined in Eq. \eqref{eq:KKdef} have the following asymptotic expansions, which can be easily derived from \cite[Eqs. 10.31.1 and 10.30.2]{NIST} (here and in the sequel $\gamma_{EM} = 0.57721\dots$ indicates the Euler-Mascheroni constant):
\begin{eqnarray}
& \mathfrak{K}_{0}(w) = - \log(w/2) - \gamma_{EM} + \mathcal{O}\big(w^2 \log w\big)\,, \qquad \mbox{for\; $w \to 0^{+}$}\,; \label{eq:K0asy} \\
& \mathfrak{K}_{\nu}(w) = 2^{\nu-1}\,\Gamma(\nu) + \mathcal{O}\big(w^{\min\{2,\,2\nu\}} \log w\big) \,, \qquad \mbox{for\, $w \to 0^{+}$ and\, $\nu > 0$}\,. \label{eq:Knuasy}
\end{eqnarray}
Next, consider the integrals appearing in the second and third lines of Eq. \eqref{eq:firenpi}. For any finite $b_{\pm} \in (-m,+\infty)$ (cf. Eq. \eqref{eq:conbm}), via the change of variable $z = 2m |x_1|\,(v+1)$ we get
\begin{align*}
& 4 b_{\pm} |x_1| \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{\pm} |x_1|\, v} \over (v + 1)^{ d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}} \big(2 m |x_1| (v + 1)\big)
= {2 b_{\pm} \over m}\,\big(2m |x_1|\big)^{d - 1}\, e^{2 b_{\pm} |x_1|} \int_{2 m |x_1|}^{\infty}\! dz\; {e^{- {b_{\pm} \over m}\, z} \over z^{d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}}(z)\,.
\end{align*}
Writing $\int_{2 m |x_1|}^{\infty} = \int_{2 m |x_1|}^{z_0} + \int_{z_0}^{\infty}$ for some $z_0 > 0$ fixed arbitrarily and replacing the integrand inside $\int_{2 m |x_1|}^{z_0}$ with its Taylor expansion at $z = 0$ (recall, especially, Eqs. \eqref{eq:K0asy} and \eqref{eq:Knuasy}), by a few additional computations we deduce, for $|x_1| \to 0^{+}$,
\begin{align*}
& 4 b_{\pm} |x_1| \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{\pm} |x_1|\, v} \over (v + 1)^{ d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}} \big(2 m |x_1| (v + 1)\big)
= \left\{\! \begin{array}{ll}
\displaystyle{ \mathcal{O}(1) } & \displaystyle{\mbox{for\; $d = 1$}\,,} \vspace{0.1cm} \\
\displaystyle{ \mathcal{O}\big( |x_1|\, \log |x_1|\big) } & \displaystyle{\mbox{for\; $d = 2$}\,,} \vspace{0.1cm} \\
\displaystyle{ \mathcal{O}\big(|x_1|\big) } & \displaystyle{\mbox{for\; $d \geqslant 3$}\,.}
\end{array} \right.
\end{align*}
Summing up, the above arguments allow us to infer that, in the limit $x_1 \to 0^{\pm}$,
\begin{align}\label{eq:firenx0}
& \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
= \left\{\! \begin{array}{ll}
\displaystyle{ -\,{1 \over 2 \pi}\,\log\big(m |x_1|\big) + \mathcal{O}(1) } & \displaystyle{\mbox{for\; $d = 1$}\,,} \vspace{0.1cm} \\
\displaystyle{ {1 \over 8 \pi\, |x_1|} + \mathcal{O}\big( \log |x_1|\big) } & \displaystyle{\mbox{for\; $d = 2$}\,,} \vspace{0.1cm} \\
\displaystyle{ {\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d+1 \over 2}}\, |x_1|^{d - 1}} \Big[1 + \mathcal{O}\big(|x_1|\big)\Big] } & \displaystyle{\mbox{for\; $d \geqslant 3$}\,.}
\end{array} \right.
\end{align}
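For Neumann conditions ($b_{\pm} = 0$) the $d \geqslant 3$ leading behaviour in Eq. \eqref{eq:firenx0} can be confirmed directly from Eq. \eqref{eq:firenpiDN}, since for $d = 4$ the relevant Bessel function is elementary: $\mathfrak{K}_{3/2}(w) = \sqrt{\pi/2}\; e^{-w}\,(1 + w)$. A small numerical sketch (the mass and the evaluation points are chosen arbitrarily):

```python
import math

def frak_K_32(w):
    """frak{K}_{3/2}(w) = w^{3/2} K_{3/2}(w) = sqrt(pi/2) e^{-w} (1 + w)."""
    return math.sqrt(math.pi / 2.0) * math.exp(-w) * (1.0 + w)

def plane_term_neumann(x, m, d=4):
    """Plane contribution for Neumann conditions, Eq. (firenpiDN), for d = 4."""
    return frak_K_32(2.0 * m * x) / (2.0 ** ((3 * d - 1) / 2.0)
                                     * math.pi ** ((d + 1) / 2.0) * x ** (d - 1))

def leading_short_distance(x, d=4):
    """Gamma((d-1)/2) / ((4 pi)^{(d+1)/2} x^{d-1}), the x -> 0+ leading term."""
    return math.gamma((d - 1) / 2.0) / ((4.0 * math.pi) ** ((d + 1) / 2.0) * x ** (d - 1))

m = 1.0
for x in (1e-2, 1e-3, 1e-4):
    ratio = plane_term_neumann(x, m) / leading_short_distance(x)
    print(x, ratio)  # ratio -> 1 as x -> 0+
```

In this elementary case the ratio equals $e^{-2 m x}\,(1 + 2 m x) = 1 + \mathcal{O}(x^2)$, in agreement with the mass independence of the leading term.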
It is remarkable that the above leading order expansions do not depend on the parameters $b_{\pm}$ describing the boundary conditions. In particular, the same results remain valid for Neumann conditions, corresponding to $b_{\pm} = 0$. On the contrary, a separate analysis is required for Dirichlet conditions, which are formally recovered for $b_{\pm} \to + \infty$ (a limit which clearly does not commute with $x_1 \to 0^{\pm}$); in this case, starting from Eq. \eqref{eq:firenpiDN} and using again Eqs. \eqref{eq:K0asy} and \eqref{eq:Knuasy}, one can derive asymptotic expansions which coincide with those reported in Eq. \eqref{eq:firenx0}, except for the opposite overall sign.
\subsubsection{The limit $x_1 \to \pm\infty$}
It is well known that local observables of Casimir type for massive fields are typically suppressed with exponential rate in the regime of large distances from the boundaries. In the sequel we provide quantitative estimates for $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$, confirming this general expectation.
To this purpose, let us first point out that the functions $\mathfrak{K}_{\nu}$ fulfil (see Eq. \eqref{eq:KKdef} and \cite[Eq. 10.40.2]{NIST})
\begin{equation}\label{eq:Kinf}
\mathfrak{K}_{\nu}(w) = \sqrt{{\pi \over 2}}\;e^{-w}\,w^{\nu-1/2}\, \Big(1 + \mathcal{O}(1/w) \Big) \,, \qquad \mbox{for\, $w \to +\infty$\, and\, $\nu \geqslant 0$}\,.
\end{equation}
Consider now the integral expressions in Eq. \eqref{eq:firenpi}. Using the above relation and making the change of variable $z = |x_1|\, v$, for $|x_1| \to +\infty$ we deduce
\begin{align*}
& 4 b_{\pm} |x_1| \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{\pm} |x_1|\, v} \over (v + 1)^{ d - 1}}\;
\mathfrak{K}_{{d - 1 \over 2}} \big(2 m |x_1| (v + 1)\big) \\
& = 2 \sqrt{2 \pi}\; b_{\pm} \;e^{- 2 m |x_1|}\,\big(2 m |x_1|\big)^{{d - 2 \over 2}} \int_{0}^{\infty}\!\! dz\; e^{- 2 (b_{\pm} + m)\,z}\, \Big(1 + \mathcal{O}\big(z/|x_1|\big) \Big) \\
& = {\sqrt{2 \pi}\; b_{\pm} \over b_{\pm} + m} \;e^{- 2 m |x_1|}\,\big(2 m |x_1|\big)^{{d - 2 \over 2}} \Big(1 + \mathcal{O}\big(1/|x_1|\big) \Big)\,.
\end{align*}
In view of the above results, from Eq. \eqref{eq:firenpi} we infer
\begin{equation}\label{eq:fireninf}
\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}\!
= {m^{{d-2 \over 2}} \over 2 (4 \pi)^{{d/2}}} \left({m \!-\! b_{\pm} \over m \!+\! b_{\pm}}\right) {e^{- 2 m |x_1|} \over |x_1|^{d/2}}\, \left[1 + \mathcal{O}\left({1 \over |x_1|} \right) \right]
\qquad \mbox{for\, $x_1 \!\to\! \pm \infty$} \,.
\end{equation}
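The exponential suppression in Eq. \eqref{eq:fireninf} can be observed numerically in the elementary case $d = 2$, where $\mathfrak{K}_{1/2}(w) = \sqrt{\pi/2}\, e^{-w}$. The sketch below (the parameter values and the quadrature are ad hoc choices) compares the exact plane contribution of Eq. \eqref{eq:firenpi} with the leading asymptotic form at a moderately large distance:

```python
import math

def plane_term_robin_d2(x, m, b, n=100000):
    """Plane contribution from Eq. (firenpi) for d = 2 and x > 0, using
    frak{K}_{1/2}(w) = sqrt(pi/2) e^{-w}; trapezoidal quadrature in v."""
    kk = math.sqrt(math.pi / 2.0) * math.exp(-2.0 * m * x)
    v_max = 40.0 / (2.0 * (b + m) * x)
    h = v_max / n
    total = 0.0
    for k in range(n + 1):
        v = k * h
        f = math.exp(-2.0 * b * x * v) \
            * math.sqrt(math.pi / 2.0) * math.exp(-2.0 * m * x * (v + 1.0)) / (v + 1.0)
        total += f if 0 < k < n else 0.5 * f
    integral_term = 4.0 * b * x * total * h
    return (kk - integral_term) / (2.0 ** 2.5 * math.pi ** 1.5 * x)

def plane_asymptotic_d2(x, m, b):
    """Leading form (1/(8 pi)) (m - b)/(m + b) e^{-2 m x} / x, Eq. (fireninf)."""
    return (m - b) / (m + b) * math.exp(-2.0 * m * x) / (8.0 * math.pi * x)

m, b, x = 1.0, 0.5, 20.0
exact = plane_term_robin_d2(x, m, b)
approx = plane_asymptotic_d2(x, m, b)
print(exact, approx)  # agreement up to O(1/x) corrections
```

Doubling the distance roughly halves the relative deviation, consistently with the $\mathcal{O}(1/|x_1|)$ correction in Eq. \eqref{eq:fireninf}.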
The case of Dirichlet boundary conditions can be alternatively addressed taking the limit $b_{\pm} \to +\infty$ in Eq. \eqref{eq:fireninf}, or starting from Eq. \eqref{eq:firenpiDN} and using again Eq. \eqref{eq:Kinf}:
\begin{equation*}
\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}\!
= -\,{m^{{d-2 \over 2}} \over 2 (4 \pi)^{{d/2}}}\; {e^{- 2 m |x_1|} \over |x_1|^{d/2}}\, \left[1 + \mathcal{O}\left({1 \over |x_1|} \right) \right]
\qquad \mbox{for $x_1 \!\to\! \pm \infty$} \,.
\end{equation*}
As a final remark, let us highlight that in any case $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}$ approaches the constant free-theory value $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ with exponential speed.
\subsection{Vacuum polarization for a massless field}\label{subsec:m0}
Let us now address the case of a massless field, fulfilling generic boundary conditions of the form \eqref{eq:bcb}. In this context the hypothesis \eqref{eq:conbm} entails
\begin{equation*}
b_{+},b_{-} \in [0,+\infty) \cup \{+\infty\}\,,
\end{equation*}
and under this condition we can implement the general arguments reported in Section \ref{sec:gen}. In particular, let us recall that the renormalized vacuum polarization for a massless field is obtained as the zero-mass limit of the analogous quantity for a massive field, see Eq. \eqref{eq:fi2renm0}. In the sequel we discuss separately the cases with space dimension $d = 1$ and $d \geqslant 2$, for both technical and physical reasons.
\subsubsection{Space dimension $d = 1$}\label{subsubsec:m0d1}
This case deserves a separate analysis, due to the emergence of some delicate infrared features. As a matter of fact, both $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ and $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ diverge in the limit $m \to 0^{+}$; however, their sum $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}$ remains finite, except when the boundary conditions are of Neumann type.
To substantiate the above claims, let us first notice that Eq. \eqref{eq:firenm} yields (for $d = 1$)
\begin{equation}\label{eq:firenm1d}
\langle 0| \hat{\phi}^{2}|0\rangle_{ren}^{(free)} = {1 \over 2 \pi}\, \log \left({2\kappa \over m}\right) ,
\end{equation}
which is patently divergent in the limit $m \to 0^{+}$.
Now consider the term $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$. For $b_{+} = b_{-} = 0$,\footnote{Similar results can be derived also if only one of $b_{+}$ and $b_{-}$ is equal to zero.} namely in the case of Neumann conditions, from Eqs. \eqref{eq:firenpiDN} and \eqref{eq:K0asy} we readily infer (for fixed $x_1 \neq 0$)
\begin{equation*}
\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
= - \,{1 \over 2 \pi}\,\log\big(m |x_1|\big) + \mathcal{O}(1)\,, \qquad \mbox{for\; $m \to 0^{+}$}\,,
\end{equation*}
which, together with Eqs. \eqref{eq:firen} and \eqref{eq:firenm1d}, implies in turn
\begin{equation*}
\lim_{m \to 0^{+}} \langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren} =
\lim_{m \to 0^{+}} \left[{1 \over 2 \pi}\, \log \left({\kappa \over m^2 |x_1|}\right) + \mathcal{O}(1)\right] = + \infty\,.
\end{equation*}
This is nothing but an unavoidable manifestation of the infrared divergences which typically affect massless theories in low space dimension. Taking notice of this fact, in the remainder of this subsection we restrict attention to
\begin{equation*}
b_{+},b_{-} \in (0,+\infty)\,.
\end{equation*}
With this requirement, using Eq. \eqref{eq:K0asy} and some known integral identities for the incomplete Gamma function $\Gamma(a,z)$,\footnote{More precisely, integrating by parts, making the change of integration variable $z = 2 b_{\pm} |x_1|\, (v+1)$ and recalling \cite[Eq. 8.2.2]{NIST} one gets
$$ 2 b_{\pm} |x_1| \int_{0}^{\infty}\!\! dv\; e^{- 2 b_{\pm} |x_1|\, v}\,\log(v+1)
= - \Big(e^{- 2 b_{\pm} |x_1|\, v}\,\log(v\!+\!1)\Big)_{0}^{+ \infty}
\!+ \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{\pm} |x_1|\, v} \over v+1}
= e^{2 b_{\pm} |x_1|} \int_{2 b_{\pm} |x_1|}^{\infty}\!\! dz\; {e^{- z} \over z}
= e^{2 b_{\pm} |x_1|}\, \Gamma\big(0,2 b_{\pm} |x_1|\big)\,. $$
}
from Eq. \eqref{eq:firenpi} we deduce the following for $m \to 0^{+}$:
\begin{align*}
\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
& = {1 \over 2 \pi}\, \Bigg[ - \log\big(m |x_1|\big) - \gamma_{EM}
\nonumber \\
& \qquad\qquad - \theta(x_1)\; 4 b_{+} |x_1| \int_{0}^{\infty}\!\! dv\; e^{- 2 b_{+} |x_1|\, v}\;
\Big(- \log\big(m |x_1|\big) - \gamma_{EM} - \log(v + 1)\Big)
\nonumber \\
& \qquad\qquad - \theta(-x_1)\; 4 b_{-} |x_1| \int_{0}^{\infty}\!\! dv\; e^{- 2 b_{-} |x_1|\, v}\;
\Big(- \log\big(m |x_1|\big) - \gamma_{EM} - \log(v + 1) \Big) \Bigg]
\nonumber \\
& \qquad\qquad + \mathcal{O}\Big(\big(m |x_1|\big)^2 \log \big(m |x_1|\big)\Big)
\\
& = {1 \over 2 \pi}\, \Big[
\log\big(m |x_1|\big) + \gamma_{EM}
+ 2\, \theta(x_1)\; e^{2 b_{+} |x_1|}\, \Gamma\big(0,2 b_{+} |x_1|\big)
\nonumber \\
& \qquad\qquad
+ 2\, \theta(-x_1)\; e^{2 b_{-} |x_1|}\, \Gamma\big(0,2 b_{-} |x_1|\big)\Big]
+ \mathcal{O}\Big(\big(m |x_1|\big)^2 \log \big(m |x_1|\big)\Big)\,.
\nonumber
\end{align*}
From here and from Eqs. \eqref{eq:fi2renm0} and \eqref{eq:firenm1d}, we finally obtain
\begin{align}
& \langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}
= \lim_{m \to 0^{+}} \Big[ \langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)} + \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)} \Big] \nonumber \\
& = {1 \over 2 \pi}\, \Big[
\log \big(2\kappa |x_1|\big) + \gamma_{EM}
+ 2\, \theta(x_1)\; e^{2 b_{+} |x_1|}\, \Gamma\big(0,2 b_{+} |x_1|\big)
+ 2\, \theta(-x_1)\; e^{2 b_{-} |x_1|}\, \Gamma\big(0,2 b_{-} |x_1|\big)\Big]\,.
\label{eq:firend1m0}
\end{align}
The case of Dirichlet boundary conditions is retrieved taking the limit $b_{\pm} \to +\infty$ and noting that the incomplete Gamma function fulfils $\lim_{w \to +\infty} e^{w}\, \Gamma(0,w) = 0$ (see \cite[Eq. 8.11.2]{NIST}), which gives
\begin{equation*}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)} = {1 \over 2 \pi}\, \Big[\log \big(2\kappa |x_1|\big) + \gamma_{EM} \Big]\,.
\end{equation*}
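The incomplete-Gamma identity derived in the footnote above, $2 b_{\pm} |x_1| \int_0^{\infty} dv\; e^{-2 b_{\pm}|x_1|\, v} \log(v+1) = e^{2 b_{\pm}|x_1|}\, \Gamma\big(0, 2 b_{\pm}|x_1|\big)$, lends itself to a quick numerical check; a sketch with our own helper names, writing $a = 2 b_{\pm}|x_1|$:

```python
import math

EULER_GAMMA = 0.5772156649015329

def E1(a, terms=60):
    # Gamma(0, a) via the convergent series
    # E1(a) = -gamma_EM - ln(a) + sum_{k>=1} (-1)^{k+1} a^k / (k * k!)
    s = -EULER_GAMMA - math.log(a)
    term = 1.0
    for k in range(1, terms + 1):
        term *= -a / k              # term = (-a)^k / k!
        s -= term / k
    return s

def lhs(a, n=200000, V=60.0):
    # a * \int_0^\infty e^{-a v} log(1+v) dv, trapezoidal rule on [0, V]
    h = V / n
    total = 0.0
    for i in range(n + 1):
        v = i * h
        f = math.exp(-a * v) * math.log1p(v)
        total += f if 0 < i < n else 0.5 * f
    return a * h * total

a = 1.5
print(lhs(a), math.exp(a) * E1(a))   # the two sides agree
```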
For any $b_{+},b_{-} \in (0,+\infty)$, the asymptotic behaviour of $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}$ for small and large distances from the point $\pi \equiv \{x_1 = 0\}$ can be easily derived from the explicit expression \eqref{eq:firend1m0}, using the known series expansions for the incomplete Gamma function (see \cite[Eqs. 8.7.6 and 8.11.2]{NIST}). More precisely, to leading order we have
\begin{equation*}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)} = \left\{\!\!\begin{array}{ll}
\displaystyle{ -\, {1 \over 2 \pi} \log \big(\kappa |x_1|\big) + \mathcal{O}(1) } & \displaystyle{\mbox{for\, $x_1 \to 0^{\pm}$},} \vspace{0.1cm} \\
\displaystyle{ {1 \over 2 \pi} \log\big(\kappa |x_1|\big) + \mathcal{O}(1) } & \displaystyle{\mbox{for\, $x_1 \to \pm \infty$}\,.}
\end{array}
\right.
\end{equation*}
\subsubsection{Space dimension $d \geqslant 2$}\label{subsubsec:d2perfm0}
In this case it can be easily checked that the free-theory contribution $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ vanishes in the limit $m \to 0^{+}$ (see Eq. \eqref{eq:firenm}). Bearing this in mind, let us focus on the term $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$. Recalling the asymptotic relation \eqref{eq:Knuasy} for $\mathfrak{K}_\nu$, by dominated convergence from Eq. \eqref{eq:firenpi} we infer
\begin{align*}
\lim_{m \to 0^{+}} \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
& = {\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d+1 \over 2}} |x_1|^{d - 1}} \left[
1 - \theta(x_1)\; 4 b_{+} |x_1| \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{+} |x_1|\, v} \over (v + 1)^{ d - 1}} \right.
\nonumber \\
& \hspace{3.5cm} \left.
-\, \theta(-x_1)\; 4 b_{-} |x_1| \int_{0}^{\infty}\!\! dv\; {e^{- 2 b_{-} |x_1|\, v} \over (v + 1)^{d - 1}}\right] . \nonumber
\end{align*}
More explicitly, via the change of variable $z = 2 b_{\pm} |x_1| (v+1)$, the above integrals can be expressed in terms of incomplete Gamma functions $\Gamma(a,z)$ (see \cite[Eq. 8.2.2]{NIST}).
Summing up, we ultimately obtain
\begin{align}
& \langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}\!
= \lim_{m \to 0^{+}} \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)} \nonumber \\
& = {\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d+1 \over 2}} |x_1|^{d - 1}} \left[
1 - 2\,\theta(x_1)\, (2 b_{+} |x_1|)^{d-1} e^{2 b_{+} |x_1|}\, \Gamma\big(2\!-\!d, 2b_{+} |x_1|\big) \right.
\nonumber \\
& \hspace{3.5cm} \left.
-\, 2\,\theta(-x_1)\, (2 b_{-} |x_1|)^{d-1} e^{2 b_{-} |x_1|}\, \Gamma\big(2-d, 2b_{-} |x_1|\big) \right] . \label{eq:firenm0}
\end{align}
The renormalized vacuum polarization for a massless field subject to Neumann or Dirichlet boundary conditions can be deduced from the above result by evaluating the limits $b_{\pm} \to 0^{+}$ or $b_{\pm} \to +\infty$, respectively. To be more precise, taking into account that $\lim_{w \to 0^{+}} w^{1-a} e^{w} \Gamma(a,w) = 0$ and $\lim_{w \to +\infty} w^{1-a} e^{w} \Gamma(a,w) = 1$ (see \cite[Eqs. 8.7.6 and 8.11.2]{NIST}), for Neumann (+) and Dirichlet (-) conditions we get
\begin{equation*}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)} = \pm \,{\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d+1 \over 2}} |x_1|^{d - 1}} \;.
\end{equation*}
The same result can be alternatively derived using \eqref{eq:Knuasy} to compute the limit $m \to 0^{+}$ of Eq. \eqref{eq:firenpiDN}.
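The two limits of $w^{1-a}\, e^{w}\, \Gamma(a,w)$ invoked above can also be observed numerically, computing $\Gamma(a,w)$ directly from its integral definition; an illustrative sketch (names are ours):

```python
import math

def upper_gamma(a, w, n=200000, span=80.0):
    # Gamma(a, w) = \int_w^\infty e^{-z} z^{a-1} dz for w > 0,
    # trapezoidal rule on [w, w + span]; valid also for a <= 0
    h = span / n
    total = 0.0
    for i in range(n + 1):
        z = w + i * h
        f = math.exp(-z) * z ** (a - 1.0)
        total += f if 0 < i < n else 0.5 * f
    return h * total

def scaled(a, w):
    # the combination w^{1-a} e^w Gamma(a, w) appearing in the text
    return w ** (1.0 - a) * math.exp(w) * upper_gamma(a, w)

# a = 2 - d; e.g. a = -1 (d = 3) and a = 0 (d = 2):
# the combination tends to 1 as w -> +infty and to 0 as w -> 0^+
print(scaled(-1.0, 40.0), scaled(0.0, 40.0), scaled(-1.0, 1e-3))
```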
Also in this case, for any $b_{+},b_{-} \in (0,+\infty)$ the behaviour of $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}$ for $x_1 \to 0^{\pm}$ and $x_1 \to \pm \infty$ can be inferred from Eq. \eqref{eq:firenm0} using the corresponding expansions for the incomplete Gamma function (see \cite[Eqs. 8.7.6 and 8.11.2]{NIST}). To leading order, we have
\begin{equation*}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)} = \left\{\!\!\begin{array}{ll}
\displaystyle{ {1 \over 8 \pi\, |x_1|} + \mathcal{O}\big(\log|x_1| \big) } & \displaystyle{\mbox{for\, $d = 2$,\, $x_1 \to 0^{\pm}$},} \vspace{0.1cm} \\
\displaystyle{ {\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d+1 \over 2}} |x_1|^{d - 1}} \Big[1 + \mathcal{O}\big(|x_1| \big) \Big] } & \displaystyle{\mbox{for\, $d \geqslant 3$,\, $x_1 \to 0^{\pm}$},} \vspace{0.1cm} \\
\displaystyle{ -\, {\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d+1 \over 2}} |x_1|^{d - 1}} \Big[1 + \mathcal{O}\big(1/|x_1| \big) \Big] } & \displaystyle{\mbox{for\, $x_1 \to \pm \infty$}\,.}
\end{array}
\right.
\end{equation*}
\section{Semitransparent plane}\label{sec:semitr}
Let us now examine configurations where the hyperplane $\pi$ can be regarded as a semitransparent surface. In this connection, recalling the general arguments of Section \ref{sec:gen} and referring again to \cite[Thm. 3.2.3]{AK99}, we consider the family of reduced operators labelled as follows by the elements of the unitary group $U(2)$:
\begin{eqnarray}
& \mbox{dom}(\mathcal{A}_1) := \left\{\psi \in H^2(\mathbb{R} \!\setminus\! \{0\})\,\left|\,
\left(\! \begin{array}{c} \displaystyle{\psi(0^{+})} \\ \displaystyle{\psi'(0^{+})} \end{array} \!\right) = \omega \left(\! \begin{array}{cc} \alpha & \beta \\ \gamma & \varsigma \end{array} \!\right) \left(\! \begin{array}{c} \displaystyle{\psi(0^{-})} \\ \displaystyle{\psi'(0^{-})} \end{array} \! \right)\right.\right\} \,, \nonumber\\
& \mathcal{A}_1\, \psi = (-\,\partial_{x_1 x_1}\! + m^2)\, \psi \quad \mbox{in\; $\mathbb{R} \!\setminus\! \{0\}$}\,,\label{eq:Asemi}
\end{eqnarray}
where
\begin{equation}
\mbox{$\omega \in \mathbb{C}$\; with\; $|\omega| = 1$} \qquad \mbox{and} \qquad
\mbox{$\alpha,\beta,\gamma,\varsigma \in \mathbb{R}$\; with\; $\alpha \varsigma - \beta \gamma = 1$}\,. \label{eq:omabcd}
\end{equation}
Two distinguished one-parameter subfamilies are respectively obtained for either $\beta = 0$, $\gamma \in \mathbb{R}$, $\omega = \alpha = \varsigma = 1$ or $\beta \in \mathbb{R}$, $\gamma = 0$, $\omega = \alpha = \varsigma = 1$. These formally correspond to reduced operators of the form $\mathcal{A}_1 = -\, \partial_{x_1 x_1} + m^2 + \gamma\,\delta$ or $\mathcal{A}_1 = - \,\partial_{x_1 x_1} + m^2 + \beta\,\delta'$, containing the well-known distributional delta and delta-prime potentials. The other admissible choices of parameters formally correspond to mixtures of delta and delta-prime potentials concentrated at $x_1 = 0$ (see \cite{Se86} and \cite[\S 3.2.4]{AK99}). Let us further remark that for $\beta = \gamma = 0$ and $\omega = \alpha = \varsigma = 1$ the reduced operator $\mathcal{A}_1$ is just the free Laplacian on the line; this case corresponds to a configuration where the quantum field does not interact with the plane $\pi$.
For any choice of the parameters $\omega,\alpha,\beta,\gamma,\varsigma$ compatible with Eq. \eqref{eq:omabcd}, the spectrum of the reduced operator $\mathcal{A}_1$ possesses an invariant purely absolutely continuous part; in addition to this, at most two isolated eigenvalues can appear. To be more precise, from \cite[Eq. (2.13)]{ABD95} we infer\footnote{Notice that for $\beta = 0$ we have $\alpha \varsigma = 1$ (see Eq. \eqref{eq:omabcd}), which grants $\alpha + \varsigma \neq 0$; on the other hand the constants $\Lambda_{\pm}$ are well defined and finite for any $\beta \neq 0$ (see the forthcoming Eq. \eqref{eq:defLam}).}
\begin{eqnarray*}
& \sigma(\mathcal{A}_1) = \sigma_{ac}(\mathcal{A}_1) \cup \sigma_p(\mathcal{A}_1)\,, \qquad
\sigma_{ac}(\mathcal{A}_1) = [m^2,+\infty)\,, \nonumber \\
& \sigma_{p}(\mathcal{A}_1) = \left\{\!\begin{array}{ll}
\displaystyle{\left\{m^2 - {\gamma^2 \over (\alpha+\varsigma)^2}\right\} } & \displaystyle{ \mbox{for\, $\beta = 0$,\, $\gamma/(\alpha+\varsigma) \!<\! 0$}, } \vspace{0.1cm}\\
\displaystyle{ \big\{m^2 - \Lambda_{-}^2\big\} } & \displaystyle{ \mbox{for\, $\beta \neq 0$,\, $\Lambda_{-}\! < 0\leqslant \!\Lambda_{+}$}, } \vspace{0.1cm}\\
\displaystyle{ \big\{m^2 - \Lambda_{-}^2\,,m^2 - \Lambda_{+}^2\big\} } & \displaystyle{ \mbox{for\, $\beta \neq 0$,\, $\Lambda_{-}\! <\!\Lambda_{+} \!< \!0$}, } \vspace{0.1cm}\\
\displaystyle{ \varnothing } & \displaystyle{ \mbox{otherwise}, }
\end{array}\right.
\end{eqnarray*}
where
\begin{equation}\label{eq:defLam}
\Lambda_{\pm} := {\alpha + \varsigma \over 2 \beta} \pm {\sqrt{(\alpha-\varsigma)^2 + 4} \over 2 |\beta|}\;, \qquad \mbox{for\; $\beta \neq 0$}\,.
\end{equation}
From here, by a few elementary considerations we deduce that $\mathcal{A}_1$ is positive if and only if one of the following two alternatives occurs, for $m > 0$:
\begin{equation}\label{eq:conomabcd}
\mbox{$\beta = 0$\; and\; ${\gamma/(\alpha+\varsigma)} > -m$}
\quad\qquad \mbox{or} \quad\qquad
\mbox{$\beta \neq 0$\; and\; $\Lambda_{+} \!>\! \Lambda_{-} \!>\! - m$}\,.
\end{equation}
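As a side remark (our own algebraic reading, easily verified): for $\beta > 0$ the constants $\Lambda_{\pm}$ of Eq. \eqref{eq:defLam} are precisely the two roots of $\beta \Lambda^2 - (\alpha+\varsigma)\Lambda + \gamma = 0$, whence $\Lambda_{+} + \Lambda_{-} = (\alpha+\varsigma)/\beta$ and $\Lambda_{+}\Lambda_{-} = \gamma/\beta$, upon using Eq. \eqref{eq:omabcd}. A minimal numerical sketch:

```python
import math

def lambdas(alpha, beta, gamma_, varsigma):
    # Lambda_pm as in Eq. (defLam); gamma_ avoids clashing with math.gamma
    disc = math.sqrt((alpha - varsigma) ** 2 + 4.0)
    mid = (alpha + varsigma) / (2.0 * beta)
    return mid + disc / (2.0 * abs(beta)), mid - disc / (2.0 * abs(beta))

# sample parameters with alpha*varsigma - beta*gamma = 1 (and beta > 0)
alpha, beta, varsigma = 2.0, 0.5, 1.0
gamma_ = (alpha * varsigma - 1.0) / beta

lp, lm = lambdas(alpha, beta, gamma_, varsigma)
print(lp + lm, (alpha + varsigma) / beta)                  # sum of roots
print(lp * lm, gamma_ / beta)                              # product of roots
print(beta * lp ** 2 - (alpha + varsigma) * lp + gamma_)   # vanishes
```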
We shall henceforth assume the parameters $\omega,\alpha,\beta,\gamma,\varsigma$ to fulfil the latter Eq. \eqref{eq:conomabcd}, in addition to the conditions previously stated in Eq. \eqref{eq:omabcd}.
To proceed, let us recall that the heat kernel associated with the reduced operator $\mathcal{A}_1$ for $m = 0$ was first computed in \cite{ABD95}; taking into account that the addition of a mass term only produces the overall multiplicative factor $e^{-\tau m^2}$, from \cite[Eq. (3.4)]{ABD95} (see also Eqs. (2.12) and (3.2) of the cited reference) we infer
\begin{align}\label{eq:heatdel}
& e^{-\tau \mathcal{A}_1}(x_1,y_1)
= {e^{- m^2 \tau - {|x_1 - y_1|^2 \over 4\tau}} \over \sqrt{4\pi \tau}} \\
& \qquad + {e^{- m^2 \tau} \over \sqrt{4\pi \tau}}\, \left\{\!\!\begin{array}{ll}
\displaystyle{ L(x_1,y_1)\,e^{- {\left(|x_1| + |y_1|\right)^2 \over 4\tau}} } & \\
\displaystyle{ \qquad -\, {\gamma \over \alpha+\varsigma}\, \big(1 + L(x_1,y_1)\big) \int_{0}^{\infty}\!\!dw\;e^{- {\gamma \over \alpha+\varsigma}\,w\, - {\left(w+|x_1|+|y_1|\right)^2 \over 4\tau}} } & \displaystyle{ \mbox{for\; $\beta = 0$},} \vspace{0.1cm}\\
\displaystyle{ \mbox{sgn}(x_1 y_1)\,e^{- {\left(|x_1| + |y_1|\right)^2 \over 4\tau}} } & \\
\displaystyle{ \qquad + \int_{0}^{\infty}\!\!dw \,\left( M_{+}(x_1,y_1)\, e^{-\Lambda_{+} w} - M_{-}(x_1,y_1)\, e^{-\Lambda_{-} w} \right)\, e^{- {\left(w+|x_1|+|y_1|\right)^2 \over 4\tau}} } & \displaystyle{ \mbox{for\; $\beta \neq 0$},}
\end{array}
\right. \nonumber
\end{align}
where we introduced the notations ($\theta(\cdot)$ is the Heaviside step function and $\mbox{sgn}(\cdot)$ is the sign function)
\begin{gather*}
L(x_1,y_1) := {\alpha - \varsigma \over \alpha + \varsigma}\; \mbox{sgn}(x_1)\, \theta(x_1 y_1) - \left(\!1 - {2\,( \mbox{Re}\, \omega + \mbox{sgn}(x_1)\,i\, \mbox{Im}\, \omega) \over \alpha+\varsigma}\!\right) \theta(- x_1 y_1) \,,\\
\begin{aligned}
& \displaystyle{M_{\pm}(x_1,y_1) := {\mbox{sgn}(\beta) \over \sqrt{(\alpha-\varsigma)^2 + 4}} \left[
\theta(x_1 y_1) \Big((\alpha+\varsigma)\Lambda_{\pm} - 2 \gamma - {(\alpha - \varsigma)\Lambda_{\pm} \over 2}\,\mbox{sgn}(x_1)\Big)\right.} \nonumber \\
& \hspace{6.5cm} \displaystyle{\left.
- \,\theta(- x_1 y_1)\, \Lambda_{\pm} \Big( {\alpha + \varsigma \over 2} + \mbox{Re}\, \omega \!+\!\mbox{sgn}(x_1)\,i\, \mbox{Im}\, \omega\Big) \right]\,.}
\end{aligned}
\end{gather*}
Notice in particular that, for any $x_1 \in \mathbb{R} \setminus \{0\}$, we have
\begin{gather}
L(x_1) \equiv L(x_1,x_1) = {\alpha - \varsigma \over \alpha + \varsigma}\; \mbox{sgn}(x_1) \,, \label{eq:LMx1}\\
\displaystyle{M_{\pm}(x_1) \equiv M_{\pm}(x_1,x_1) = {\mbox{sgn}(\beta) \over \sqrt{(\alpha-\varsigma)^2 + 4}} \Big((\alpha+\varsigma)\Lambda_{\pm} - 2 \gamma - {(\alpha - \varsigma)\Lambda_{\pm} \over 2}\,\mbox{sgn}(x_1)\Big)\,.} \nonumber
\end{gather}
Substituting the above expression for $e^{-\tau \mathcal{A}_1}(x_1,y_1) $ into Eq. \eqref{eq:fiuheat}, we obtain the following integral representation for the $\zeta$-regularized vacuum polarization:
\begin{align}\label{eq:fiusemi}
& \langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle
= {\kappa^u \over 2\,(4\pi)^{d/2}\, \Gamma({u+1 \over 2})} \int_{0}^{\infty}\! d\tau\;\tau^{{u - d - 1 \over 2}}\, e^{- m^2 \tau} \;\times\\
& \times \left\{\!\!\begin{array}{ll}
\displaystyle{1 + L(x_1)\,e^{- {(x_1)^2 \over \tau}}\! - {\gamma \over \alpha+\varsigma}\, \big(1 + L(x_1)\big)\! \int_{0}^{\infty}\!\!dw\;e^{- {\gamma \over \alpha+\varsigma}\,w \,- {\left(w+2|x_1|\right)^2 \over 4\tau}} } & \displaystyle{ \mbox{for\; $\beta = 0$},} \vspace{0.1cm}\\
\displaystyle{1 + e^{- {(x_1)^2 \over \tau}}\! + \!\int_{0}^{\infty}\!\!dw \,\left( M_{+}(x_1)\, e^{-\Lambda_{+} w} \!-\! M_{-}(x_1)\, e^{-\Lambda_{-} w} \right)\, e^{- {\left(w+2|x_1|\right)^2 \over 4\tau}} } & \displaystyle{ \mbox{for\; $\beta \neq 0$}.}
\end{array}
\right. \nonumber
\end{align}
Regarding this representation, one can make considerations analogous to those reported below Eq. \eqref{eq:fiuperf}. In particular, it can be checked by direct inspection that the integrals on the right-hand side of \eqref{eq:fiusemi} are convergent and define an analytic function of $u$ in the complex strip $\mbox{Re}\, u > d - 1$, in agreement with the general theory.
Now, let us proceed to determine the analytic continuation of the map $u \mapsto \langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle$. Using once more the identities \eqref{eq:idGamma} and \eqref{eq:idBesselK} introduced in the previous Section \ref{sec:perfect} and making again the change of integration variable $w = 2\,|x_1|\,v$, we obtain
\begin{align}\label{eq:fiusemiAC}
& \langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle
= {m^{d - 1}\, (\kappa/m)^u\, \Gamma\left({u - d + 1 \over 2}\right) \over 2^{d+1}\,\pi^{d/2}\, \Gamma({u+1 \over 2})}
+ {2^{{u - 3d + 1 \over 2}}\,\big(\kappa |x_1|\big)^{\!u} \over \pi^{d/2}\, \Gamma({u+1 \over 2})\,|x_1|^{d - 1}}\; \times \\
& \times \left\{\!\!\begin{array}{ll}
\displaystyle{
L(x_1)\; \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m |x_1|\big) } \vspace{-0.13cm}\\
\displaystyle{
\quad -\,\big(1 + L(x_1)\big)\, {2\,\gamma\, |x_1| \over \alpha+\varsigma} \!\int_{0}^{\infty}\!\!dv\;{e^{- {2\,\gamma\,|x_1| \over \alpha+\varsigma}\,v} \over (v + 1)^{d - 1 - u}} \; \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m |x_1|\, (v+1)\big)\,,}
\quad \displaystyle{ \mbox{for\; $\beta = 0$},}
\vspace{0.3cm}\\
\displaystyle{
\mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m |x_1|\big)
+ \,2\,M_{+}(x_1)\,|x_1| \int_{0}^{\infty}\!\!dv\; {e^{-2 \Lambda_{+} |x_1|\,v} \over (v+1)^{d - 1 - u}}\; \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m |x_1|\, (v+1)\big)
}\vspace{0.03cm}\\
\displaystyle{
\quad - \,2\,M_{-}(x_1)\,|x_1| \int_{0}^{\infty}\!\!dv\; {e^{-2 \Lambda_{-} |x_1|\,v} \over (v+1)^{d - 1 - u}}\; \mathfrak{K}_{{d - 1 - u \over 2}}\big(2 m |x_1|\, (v+1)\big)\,,}
\qquad\quad\; \displaystyle{ \mbox{for\; $\beta \neq 0$},}
\end{array}
\right. \nonumber
\end{align}
where $\mathfrak{K}_{\nu}(w) = w^{\nu} K_{\nu}(w)$ are the functions defined in Eq. \eqref{eq:KKdef}. The same considerations reported below Eq. \eqref{eq:fiuAC} apply to the present context. As a result, Eq. \eqref{eq:fiusemiAC} yields the meromorphic extension of $\langle 0| \big(\hat{\phi}^{u}(t,\mathbf{x})\big)^2 |0\rangle$ to the whole complex plane, with isolated simple pole singularities at
\begin{equation*}
u = d-1-2\ell\,, \qquad \mbox{with\; $\ell \in \{0,1,2,\dots\}$}\,.
\end{equation*}
Special attention must be paid to the fact that the first term on the right-hand side of Eq. \eqref{eq:fiusemiAC} has a pole at $u = 0$ if the space dimension $d$ is odd. On the contrary, all other terms in Eq. \eqref{eq:fiusemiAC} are analytic at $u = 0$. Taking these facts into account, we can proceed to compute the renormalized vacuum polarization using the general prescription \eqref{eq:fi2ren}:
\begin{equation}\label{eq:firensemi}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren} = \langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)} + \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}\,;
\end{equation}
\begin{equation}
\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)} = \left\{\!\!\begin{array}{ll}
\displaystyle{ {(-1)^{{d \over 2}}\,\pi\;m^{d - 1} \over (4 \pi)^{{d + 1 \over 2}}\; \Gamma({d + 1 \over 2})} } & \displaystyle{\mbox{for $d$ even}\,,} \vspace{0.1cm} \\
\displaystyle{ {(-1)^{{d-1 \over 2}}\; m^{d - 1} \over (4 \pi)^{{d + 1 \over 2}}\; \Gamma({d+1 \over 2})} \left[H_{{d-1 \over 2}} + 2\log \left({2\kappa \over m}\right) \right]} & \displaystyle{\mbox{for $d$ odd}\,;}
\end{array}\right.
\end{equation}
\begin{align}\label{eq:firenpisemi}
& \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
= \\
& \left\{\!\!\begin{array}{ll}
\displaystyle{
{1 \over 2^{{3d - 1 \over 2}} \pi^{{d + 1 \over 2}}\,|x_1|^{d - 1}} \Bigg[
L(x_1)\; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\big) }\vspace{-0.13cm} \\
\displaystyle{
\qquad -\,\big(1 + L(x_1)\big)\, {2\,\gamma\, |x_1| \over \alpha+\varsigma} \!\int_{0}^{\infty}\!\!dv\;{e^{- {2\,\gamma\,|x_1| \over \alpha+\varsigma}\,v} \over (v + 1)^{d - 1}} \; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\, (v+1)\big)\Bigg]\,,}
& \displaystyle{ \mbox{for\; $\beta = 0$},}
\vspace{0.05cm}\\
\displaystyle{{1 \over 2^{{3d - 1 \over 2}} \pi^{{d + 1 \over 2}}\,|x_1|^{d - 1}} \Bigg[
\mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\big) }\vspace{-0.05cm}\\
\displaystyle{\qquad + \,2\,M_{+}(x_1)\,|x_1| \int_{0}^{\infty}\!\!dv\; {e^{-2 \Lambda_{+} |x_1|\,v} \over (v+1)^{d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\, (v+1)\big) } \\
\displaystyle{ \qquad - \,2\,M_{-}(x_1)\,|x_1| \int_{0}^{\infty}\!\!dv\; {e^{-2 \Lambda_{-} |x_1|\,v} \over (v+1)^{d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\, (v+1)\big)\Bigg]\,,}
& \displaystyle{ \mbox{for\; $\beta \neq 0$}.}
\end{array}
\right. \nonumber
\end{align}
Before moving on, let us remark that $\langle 0| \hat{\phi}^{2}|0\rangle_{ren}^{(free)}$ is exactly the same free-theory contribution arising in the case of a perfectly reflecting plane (cf. Eq. \eqref{eq:firenm}), while $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ contains all the information related to the semitransparent hyperplane $\pi$. As expected, from Eq. \eqref{eq:firenpisemi} it can be readily deduced that $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)} = 0$ when $\beta = \gamma = 0$ and $\omega = \alpha = \varsigma = 1$, namely when the quantum field is not affected by the presence of the hyperplane $\pi$.
\subsection{Asymptotics for $x_1 \to 0^{\pm}$ and $x_1 \to \pm \infty$}
In order to determine the asymptotic behaviour of the renormalized vacuum polarization $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}$ for small and large distances from the plane $\pi$, we retrace the same arguments already described in the previous subsection \ref{subsec:asy} for the case of a perfectly reflecting plane. Also in the present situation we just provide a leading order analysis, focusing primarily on the non-constant term $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ of Eq. \eqref{eq:firenpisemi}.
\subsubsection{The limit $x_1 \to 0^{\pm}$}\label{subsubsec:x0msemi}
First of all, recall the asymptotic expansions \eqref{eq:K0asy} and \eqref{eq:Knuasy} for the functions $\mathfrak{K}_{\nu}$. Taking these into account, regarding the integral expressions in Eq. \eqref{eq:firenpisemi} we infer the following for $|x_1| \to 0$ (cf. paragraph \ref{par:x10}):
\begin{align*}
& {2\,\gamma\, |x_1| \over \alpha+\varsigma} \!\int_{0}^{\infty}\!\!dv\;{e^{- {2\,\gamma\,|x_1| \over \alpha+\varsigma}\,v} \over (v + 1)^{d - 1}} \; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\, (v+1)\big) \\
& = {\gamma\,\big(2m |x_1|\big)^{d - 1} \over (\alpha+\varsigma)\, m}\, e^{{2\,\gamma\,|x_1| \over \alpha+\varsigma}}\! \int_{2m |x_1|}^{\infty}\!\!dz\;{e^{- {\gamma \over (\alpha+\varsigma) m}\,z} \over z^{d-1}} \, \mathfrak{K}_{{d - 1 \over 2}}(z)
= \left\{\!\! \begin{array}{ll}
\displaystyle{ \mathcal{O}(1) } & \displaystyle{\mbox{for\; $d = 1$}\,,} \vspace{0.1cm} \\
\displaystyle{ \mathcal{O}\big( |x_1|\, \log |x_1|\big) } & \displaystyle{\mbox{for\; $d = 2$}\,,} \vspace{0.1cm} \\
\displaystyle{ \mathcal{O}\big(|x_1|\big) } & \displaystyle{\mbox{for\; $d \geqslant 3$}\,;}
\end{array} \right. \nonumber
\\
& 2\,|x_1| \int_{0}^{\infty}\!\!dv\; {e^{-2 \Lambda_{\pm} |x_1|\,v} \over (v+1)^{d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\, (v+1)\big) \\
& = {\big(2m |x_1|\big)^{d - 1} \over m}\, e^{2 \Lambda_{\pm} |x_1|} \int_{2m |x_1|}^{\infty}\!\!dz\; {e^{- {\Lambda_{\pm} \over m}\,z} \over z^{d-1}}\, \mathfrak{K}_{{d - 1 \over 2}}(z)
= \left\{\!\! \begin{array}{ll}
\displaystyle{ \mathcal{O}(1) } & \displaystyle{\mbox{for\; $d = 1$}\,,} \vspace{0.1cm} \\
\displaystyle{ \mathcal{O}\big( |x_1|\, \log |x_1|\big) } & \displaystyle{\mbox{for\; $d = 2$}\,,} \vspace{0.1cm} \\
\displaystyle{ \mathcal{O}\big(|x_1|\big) } & \displaystyle{\mbox{for\; $d \geqslant 3$}\,.}
\end{array} \right. \nonumber
\end{align*}
Taking the above estimates into account, from Eq. \eqref{eq:firenpisemi} we infer for $x_1 \to 0^{\pm}$
\begin{align}\label{eq:firenx0semi}
& \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
= \\
& \left\{\!\!\begin{array}{ll}
\displaystyle{-\,{\mbox{sgn}(x_1) \over 2 \pi} \left( {\alpha - \varsigma \over \alpha + \varsigma}\right) \log\big(m |x_1|\big) + \mathcal{O}(1)}
& \displaystyle{ \mbox{for\; $d = 1$ \,and\, $\beta = 0$},}
\vspace{0.05cm}\\
\displaystyle{-\,{1 \over 2 \pi}\,\log\big(m |x_1|\big) + \mathcal{O}(1) }
& \displaystyle{ \mbox{for\; $d = 1$ \,and\, $\beta \neq 0$},}
\vspace{0.1cm}\\
\displaystyle{{\mbox{sgn}(x_1) \over 8 \pi\,|x_1|} \left({\alpha - \varsigma \over \alpha + \varsigma}\right) + \mathcal{O}\big( \log |x_1|\big)}
& \displaystyle{ \mbox{for\; $d = 2$ \,and\, $\beta = 0$},}
\vspace{0.1cm}\\
\displaystyle{{1 \over 8 \pi\,|x_1|} + \mathcal{O}\big(\log |x_1|\big)}
& \displaystyle{ \mbox{for\; $d = 2$ \,and\, $\beta \neq 0$},}
\vspace{0.1cm}\\
\displaystyle{
{\mbox{sgn}(x_1) \,\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d + 1 \over 2}}\,|x_1|^{d - 1}} \left({\alpha - \varsigma \over \alpha + \varsigma}\right) \Big[ 1 + \mathcal{O}\big(|x_1|\big) \Big]}
& \displaystyle{ \mbox{for\; $d \geqslant 3$ \,and\, $\beta = 0$},}
\vspace{0.1cm}\\
\displaystyle{{\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d + 1 \over 2}}\,|x_1|^{d - 1}}\, \Big[ 1 + \mathcal{O}\big(|x_1|\big) \Big]}
& \displaystyle{ \mbox{for\; $d \geqslant 3$ \,and\, $\beta \neq 0$}.}
\end{array}
\right. \nonumber
\end{align}
Let us briefly comment on the above results. Comparing Eqs. \eqref{eq:firenx0} and \eqref{eq:firenx0semi}, it appears that $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ presents the same kind of divergence near the plane $\pi$, whether the latter be perfectly reflecting or semitransparent; as a matter of fact, the leading order terms in \eqref{eq:firenx0} and \eqref{eq:firenx0semi} exactly coincide for any $d \geqslant 1$ when $\beta \neq 0$. \\
On the other hand, the expansions in Eq. \eqref{eq:firenx0semi} draw attention to two subfamilies, parametrized by
\begin{equation}\label{eq:nodiv}
\beta = 0\,, \qquad \gamma \in \mathbb{R}\,, \qquad \alpha = \varsigma = \pm 1\,.
\end{equation}
In these cases the leading order contribution vanishes identically (for any $d \geqslant 1$), implying that the divergence of the renormalized vacuum polarization $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ near the hyperplane $\pi$ is softened. While the occurrence of this phenomenon appears to be accidental for $\alpha = \varsigma = - 1$, some intuition can instead be gained for the case $\alpha = \varsigma = + 1$.
We already mentioned that the subfamily with $\beta = 0$, $\gamma \in \mathbb{R}$ and $\alpha = \varsigma = 1$ describes a delta-type potential concentrated on the hyperplane $\pi$.\footnote{\label{foot:delta}More precisely, in this case the functions belonging to the domain of the reduced operator $\mathcal{A}_1$ are continuous at $x_1 = 0$, namely $\psi(0^{+}) = \psi(0^{-}) = \psi(0)$, with discontinuous first derivative fulfilling the jump condition $\psi'(0^{+}) - \psi'(0^{-}) = \gamma\,\psi(0)$.} This is actually the ``less singular'' distributional potential amid the ones associated to the boundary conditions written in Eq. \eqref{eq:Asemi}. There are at least three interdependent ways to understand the latter claim:\\
\emph{i)} Except for the pure delta case, all distributional potentials mentioned below Eq. \eqref{eq:Asemi} comprise at least one derivative of the Dirac delta function (see \cite{Se86}, \cite[\S 3.2.4]{AK99}). It is therefore evident that, as distributions, they are more singular than the Dirac delta function itself.\\
\emph{ii)} In the case of a delta potential, the field is required to be continuous across the plane $\pi$ where the potential is concentrated (see footnote \ref{foot:delta}). In contrast, in all other cases the field exhibits a discontinuity.\\
\emph{iii)} It is well-known that a delta potential concentrated on a surface of co-dimension $1$ (such as the hyperplane $\pi$) can be approximated (in resolvent sense) by regular short-range interactions \cite[\S I.3.2]{AGHH05}. In light of this, delta potentials can be reasonably regarded as a crossing point between smooth background potentials and classical hard-wall boundaries. Given that renormalized Casimir observables exhibit no singularity when the external potentials are smooth, it is not entirely surprising that the boundary behaviour is less singular in the case of delta potentials. The above line of thinking does not apply in the case of non-pure delta potentials, since the approximation of the latter by regular potentials is far more problematic \cite{Se87}.
Let us finally recognize that, whenever the leading order terms in the asymptotic expansions \eqref{eq:firenx0semi} vanish, the study of the sub-leading contributions becomes crucial. A detailed investigation of this subject is deferred to future works.
\subsubsection{The limit $x_1 \to \pm\infty$}
In this paragraph we proceed to examine the behaviour of the renormalized vacuum polarization in the regime of large distances from the hyperplane $\pi$. Recalling once more that the term $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ is constant, we restrict our analysis to the expression $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$.
Firstly, recall the asymptotic expansion \eqref{eq:Kinf} for the functions $\mathfrak{K}_{\nu}$. Then, making the change of variable $z = |x_1|\, v$, we derive the following expansions of the integral expressions in Eq. \eqref{eq:firenpisemi} for $|x_1| \to +\infty$:
\begin{align*}
& {2\,\gamma\, |x_1| \over \alpha+\varsigma} \!\int_{0}^{\infty}\!\!dv\;{e^{- {2\,\gamma\,|x_1| \over \alpha+\varsigma}\,v} \over (v + 1)^{d - 1}} \; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\, (v+1)\big) \\
& = {\sqrt{2\pi}\;\gamma \over \alpha+\varsigma}\; e^{-2 m |x_1|}\,\big(2 m |x_1|\big)^{\!{d - 2 \over 2}} \!\int_{0}^{\infty}\!\!dz\;e^{- 2\big({\gamma \over \alpha+\varsigma} + m\big) z} \; \Big(1 + \mathcal{O}\big(z/|x_1|\big) \Big)\\
& = \sqrt{{\pi \over 2}}\; {{\gamma \over \alpha+\varsigma} \over {\gamma \over \alpha+\varsigma} + m}\; e^{-2 m |x_1|}\,\big(2 m |x_1|\big)^{\!{d - 2 \over 2}}\, \Big(1 + \mathcal{O}\big(1/|x_1|\big) \Big)\,; \nonumber
\\
& 2\,|x_1| \int_{0}^{\infty}\!\!dv\; {e^{-2 \Lambda_{\pm} |x_1|\,v} \over (v+1)^{d - 1}}\; \mathfrak{K}_{{d - 1 \over 2}}\big(2 m |x_1|\, (v+1)\big) \\
& = \sqrt{2\pi}\, e^{- 2 m |x_1|} \big(2 m |x_1|\big)^{\!{d - 2 \over 2}}\! \int_{0}^{\infty}\!\!dz\; e^{-2 (\Lambda_{\pm} + m) z} \; \Big(1 + \mathcal{O}\big(z/|x_1|\big) \Big) \\
& = \sqrt{{\pi \over 2}}\; {1 \over \Lambda_{\pm} + m}\; e^{- 2 m |x_1|} \big(2 m |x_1|\big)^{\!{d - 2 \over 2}}\, \Big(1 + \mathcal{O}\big(1/|x_1|\big) \Big)\,. \nonumber
\end{align*}
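As a sanity check on the leading-order terms above, the integrals can be evaluated numerically. The sketch below is an illustration only: it assumes $\mathfrak{K}_{\nu}(w) = w^{\nu} K_{\nu}(w)$ (a modified Bessel function, consistent with the large-argument behaviour $\sqrt{\pi/2}\, w^{\nu - 1/2}\, e^{-w}$ used in the expansion above) and compares the second integral with its claimed leading term for $d = 3$; the common factor $e^{-2 m |x_1|}$ is removed analytically so that the quadrature handles numbers of order one.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kve  # exponentially scaled K: kve(nu, w) = K_nu(w) * exp(w)

# Assumption (illustrative): frakK_nu(w) = w**nu * K_nu(w), whose large-w
# behaviour sqrt(pi/2) * w**(nu - 1/2) * exp(-w) matches the expansion used
# in the text.
def scaled_integral(x, m, lam, d):
    """2|x1| * int_0^inf dv e^{-2 lam |x1| v} (v+1)^{1-d} frakK_nu(2m|x1|(v+1)),
    with the overall factor exp(-2m|x1|) pulled out analytically."""
    nu = (d - 1) / 2
    def f(v):
        w = 2 * m * x * (v + 1)
        # frakK_nu(w) * e^{2m|x1|} = w**nu * kve(nu, w) * e^{-2m|x1| v}
        return (np.exp(-2 * (lam + m) * x * v) / (v + 1) ** (d - 1)
                * w ** nu * kve(nu, w))
    val, _ = quad(f, 0.0, np.inf)
    return 2 * x * val

def scaled_leading(x, m, lam, d):
    """Claimed leading term, also without the common factor exp(-2m|x1|):
    sqrt(pi/2) / (lam + m) * (2m|x1|)**((d-2)/2)."""
    return np.sqrt(np.pi / 2) / (lam + m) * (2 * m * x) ** ((d - 2) / 2)

m, lam, d = 1.0, 0.5, 3
ratios = [scaled_integral(x, m, lam, d) / scaled_leading(x, m, lam, d)
          for x in (10.0, 40.0)]
print(ratios)
```

The ratio tends to $1$ as $|x_1|$ grows, with the residual deviation shrinking like $1/|x_1|$, as expected from the expansion.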
From here and from Eq. \eqref{eq:firenpisemi}, in the limit $x_1 \to \pm \infty$ we infer
\begin{align}
& \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
= {m^{{d - 2 \over 2}} \over 2 (4\pi)^{d/2}}\;{e^{-2 m |x_1|} \over |x_1|^{d/2}} \; \times \\
& \qquad \times \left\{\!\!\begin{array}{ll}
\displaystyle{ {(\alpha - \varsigma) m\; \mbox{sgn}(x_1) - \gamma \over (\alpha+\varsigma)m + \gamma}\, \left[1 + \mathcal{O}\left({1 \over |x_1|} \right) \right]}
& \displaystyle{ \mbox{for\; $\beta = 0$},}
\vspace{0.05cm}\\
\displaystyle{
{6 \gamma + 2 \beta m^2 + 4(\alpha+\varsigma)m -(\alpha-\varsigma)m\;\mbox{sgn}(x_1) \over 2(\gamma + \beta m^2 + (\alpha+\varsigma)m)}\, \left[1 + \mathcal{O}\left({1 \over |x_1|} \right) \right]}
& \displaystyle{ \mbox{for\; $\beta \neq 0$}.}
\end{array}
\right. \nonumber
\end{align}
The above relations show that also in the case of a semitransparent plane the renormalized expectation $\langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}$ decays exponentially fast far away from the hyperplane $\pi$. In other words, the difference between the full vacuum polarization $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}$ and the constant free-theory term $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ becomes exponentially small.
\subsection{Vacuum polarization for a massless field}\label{subsec:m0semi}
We now examine the renormalized vacuum polarization for a massless field in the presence of a semitransparent hyperplane. Making reference to Eqs. \eqref{eq:Asemi} and \eqref{eq:conomabcd}, for a sensible quantum field theory we must require either
\begin{equation}
\mbox{$\beta = 0$\; and\; ${\gamma/(\alpha+\varsigma)} \geqslant 0$}
\quad\qquad \mbox{or} \quad\qquad
\mbox{$\beta \neq 0$\; and\; $\Lambda_{+} \!>\! \Lambda_{-} \!\geqslant\! 0$}\,.
\end{equation}
In accordance with the general arguments of Section \ref{sec:gen} (see, especially, Eq. \eqref{eq:fi2renm0}), we proceed to determine the renormalized observable of interest by evaluating the zero-mass limit $m \to 0^{+}$ of the analogous quantity in the massive theory.
Similarly to the configuration with a perfectly reflecting surface, the cases with space dimension $d = 1$ and $d \geqslant 2$ need to be analysed separately.
\subsubsection{Space dimension $d = 1$}\label{subsubsec:m0d1semi}
A careful analysis is demanded for this specific model, due to the emergence of the same infrared pathologies already discussed in paragraph \ref{subsubsec:m0d1}. Indeed, let us recall that the renormalized vacuum polarization comprises a free theory contribution which is divergent in the limit $m \to 0^{+}$ (see Eq. \eqref{eq:firenm1d}):
\begin{equation*}
\langle 0| \hat{\phi}^{2}|0\rangle_{ren}^{(free)} = {1 \over 2 \pi}\, \log \left({2\kappa \over m}\right) .
\end{equation*}
On the other hand, by arguments similar to those described in paragraph \ref{subsubsec:m0d1}, from Eq. \eqref{eq:firenpi} we obtain the following asymptotic expansions for $m \to 0^{+}$, involving the incomplete Gamma function $\Gamma(a,z)$:\footnote{Notice also the following basic identity, which can be easily deduced from Eqs. \eqref{eq:defLam} and \eqref{eq:LMx1}:
$$
1 + {M_{+}(x_1) \over \Lambda_{+}} - {M_{-}(x_1) \over \Lambda_{-}} = 3\,.
$$
}
\begin{align}
& \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)}
= \left\{\!\!\begin{array}{ll}
\displaystyle{
{1 \over 2 \pi} \Bigg[
\log\big(m |x_1|\big) - \gamma_{EM}
+ \big(1 + L(x_1)\big)\, e^{{2\,\gamma\, |x_1| \over \alpha+\varsigma}}\,\Gamma\left(0\,,{2\,\gamma\, |x_1| \over \alpha + \varsigma}\right)
\Bigg]} \\
\displaystyle{\qquad +\; \mathcal{O}\Big(\big(m |x_1|\big)^2 \log \big(m |x_1|\big) \Big)}
\hspace{5.5cm} \displaystyle{ \mbox{for\; $\beta = 0$},}
\vspace{0.05cm}\\
\displaystyle{{1 \over 2 \pi} \Bigg[
- 3\, \log\big(m |x_1|\big) + 3\,\gamma_{EM} }\vspace{-0.05cm}\\
\displaystyle{\qquad
-\, {M_{+}(x_1) \over \Lambda_{+}}\; e^{2 \Lambda_{+} |x_1|}\, \Gamma\big(0,2 \Lambda_{+} |x_1|\big)
+ {M_{-}(x_1) \over \Lambda_{-}}\; e^{2 \Lambda_{-} |x_1|}\, \Gamma\big(0,2 \Lambda_{-} |x_1|\big)\Bigg]} \\
\displaystyle{\qquad +\; \mathcal{O}\Big(\big(m |x_1|\big)^2 \log \big(m |x_1|\big) \Big)}
\hspace{5.5cm} \displaystyle{ \mbox{for\; $\beta \neq 0$}.}
\end{array}
\right. \nonumber
\end{align}
For $\beta \neq 0$, the above relations together with Eq. \eqref{eq:firensemi} make evident that
\begin{equation*}
\lim_{m \to 0^{+}}\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}
= \lim_{m \to 0^{+}} \left[{1 \over 2 \pi} \log \left({2\kappa \over m^4 |x_1|^3}\right) + \mathcal{O}(1)\right] = +\infty,
\end{equation*}
indicating that in this case the renormalized expectation $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}$ is irremediably divergent in the infrared for a massless field. Before we proceed, let us point out a connection between this result and the similar conclusions drawn in paragraph \ref{subsubsec:m0d1} for the case of Neumann conditions on the hyperplane $\pi$: Neumann conditions are formally recovered in the present scenario as the limit case where $\beta \to \infty$, with $\gamma = 0$, $\omega = \alpha = \varsigma = 1$.
Let us henceforth assume
\begin{equation*}
\beta = 0\,.
\end{equation*}
Under this condition, from the above results and from Eq. \eqref{eq:fi2renm0} we readily infer
\begin{align}
& \langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}
= \lim_{m \to 0^{+}} \Big[ \langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)} + \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)} \Big] \nonumber \\
& = {1 \over 2 \pi} \left[\log\!\big(2\kappa |x_1|\big) - \gamma_{EM}
+ \left(1 + {\alpha - \varsigma \over \alpha + \varsigma}\; \mbox{sgn}(x_1)\right)\, e^{{2\,\gamma\, |x_1| \over \alpha+\varsigma}}\,\Gamma\left(0\,,{2\,\gamma\, |x_1| \over \alpha + \varsigma}\right)\right]\,.
\end{align}
Using again known properties of the incomplete Gamma function, it is easy to derive the following leading order asymptotic expansions of the above expression:
\begin{equation*}
\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)} = \left\{\!\!\begin{array}{ll}
\displaystyle{ \mp\,{1 \over 2 \pi} \left({\alpha - \varsigma \over \alpha + \varsigma}\right) \log\left({\gamma\, |x_1| \over \alpha + \varsigma}\right) + \mathcal{O}(1) } & \displaystyle{\mbox{for\, $x_1 \to 0^{\pm}$},} \vspace{0.1cm} \\
\displaystyle{ {1 \over 2 \pi}\,\log\big(\kappa |x_1|\big) + \mathcal{O}(1) } & \displaystyle{\mbox{for\, $x_1 \to \pm \infty$}\,.}
\end{array}
\right.
\end{equation*}
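The asymptotic expansions above rest on standard properties of $\Gamma(0,z) = E_{1}(z)$, namely $\Gamma(0,z) = -\gamma_{EM} - \log z + \mathcal{O}(z)$ for $z \to 0^{+}$ and $\Gamma(0,z) \sim e^{-z}/z$ for $z \to +\infty$. These two limits can be verified numerically; the snippet below is a side illustration using SciPy's implementation of $E_1$:

```python
import numpy as np
from scipy.special import exp1  # E_1(z) = Gamma(0, z) for z > 0

gamma_em = np.euler_gamma

# Small-argument behaviour: Gamma(0, z) = -gamma_EM - log(z) + O(z);
# this is the source of the log|x1| terms in the x1 -> 0 expansion.
z_small = 1e-8
small_err = abs(exp1(z_small) - (-gamma_em - np.log(z_small)))

# Large-argument behaviour: Gamma(0, z) ~ exp(-z)/z * (1 + O(1/z));
# hence exp(c|x1|) * Gamma(0, c|x1|) ~ 1/(c|x1|) at large |x1|.
z_large = 50.0
large_ratio = exp1(z_large) / (np.exp(-z_large) / z_large)

print(small_err, large_ratio)
```

The large-argument behaviour is what reduces $e^{2\gamma|x_1|/(\alpha+\varsigma)}\,\Gamma\big(0, 2\gamma|x_1|/(\alpha+\varsigma)\big)$ to a power-law correction at large $|x_1|$, leaving the logarithmic term to dominate.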
It is remarkable that the renormalized vacuum polarization $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}$ remains finite in the limit $x_1 \to 0^{\pm}$ when $\beta = 0$, $\gamma \in \mathbb{R}$ and $\alpha = \varsigma = \pm 1$. Recall that the very same phenomenon occurs in the case of a massive field (see paragraph \ref{subsubsec:x0msemi} and the comments reported therein).
\subsubsection{Space dimension $d \geqslant 2$}
Recall that for $d \geqslant 2$ the free-theory contribution $\langle 0| \hat{\phi}^{2} |0\rangle_{ren}^{(free)}$ vanishes in the limit $m \to 0^{+}$ (see Eq. \eqref{eq:firenm}). Taking this into account, by arguments analogous to those described in paragraph \ref{subsubsec:d2perfm0}, from Eq. \eqref{eq:firenpisemi} we get
\begin{align}
& \langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)} = \lim_{m \to 0^{+}} \langle 0| \hat{\phi}^{2}(x_1) |0\rangle_{ren}^{(plane)} \\
& = \left\{\!\!\begin{array}{ll}
\displaystyle{
{\Gamma({d - 1 \over 2}) \over (4\pi)^{{d + 1 \over 2}}\,|x_1|^{d - 1}} \left[
L(x_1) -\,\big(1 + L(x_1)\big)\,e^{{2\,\gamma\,|x_1| \over \alpha+\varsigma}} \left({2\,\gamma\,|x_1| \over \alpha+\varsigma}\right)^{\!d-1} \Gamma\left(2-d,{2\,\gamma\, |x_1| \over \alpha+\varsigma}\right)\right]} \\
\hspace{8cm} \displaystyle{ \mbox{for\; $\beta = 0$},}
\vspace{0.05cm}\\
\displaystyle{{\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d + 1 \over 2}}\,|x_1|^{d - 1}} \left[
1 + {M_{+}(x_1) \over \Lambda_{+}}\;e^{2 \Lambda_{+} |x_1|}\;\big(2 \Lambda_{+}\,|x_1|\big)^{d-1}\, \Gamma\big(2-d,2 \Lambda_{+} |x_1|\big) \right.} \\
\displaystyle{ \hspace{3cm} \left. - \,{M_{-}(x_1) \over \Lambda_{-}}\;e^{2 \Lambda_{-} |x_1|}\; \big(2 \Lambda_{-}\,|x_1|\big)^{d-1}\, \Gamma\big(2-d,2 \Lambda_{-} |x_1|\big)\right]} \\
\hspace{8cm} \displaystyle{ \mbox{for\; $\beta \neq 0$}.}
\end{array}
\right. \nonumber
\end{align}
Using once more the known expansions of the incomplete Gamma function for small and large values of the argument, we derive the following leading order asymptotics:
\begin{align}
& \langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}
= \left\{\!\!\begin{array}{ll}
\displaystyle{ \pm \left({\alpha - \varsigma \over \alpha + \varsigma}\right) {1 \over 8\pi\,|x_1|} + \mathcal{O}\big(\log|x_1|\big)}
& \displaystyle{ \mbox{for\; $\beta = 0$, $d = 2$ and $x_1 \to 0^{\pm}$},}
\vspace{0.1cm} \\
\displaystyle{ \pm \left({\alpha - \varsigma \over \alpha + \varsigma}\right) {\Gamma({d - 1 \over 2}) \over (4\pi)^{{d + 1 \over 2}}\,|x_1|^{d - 1}}\, \Big[1 + \mathcal{O}\big(|x_1| \big)\Big]}
& \displaystyle{ \mbox{for\; $\beta = 0$, $d \geqslant 3$ and $x_1 \to 0^{\pm}$},}
\vspace{0.1cm}\\
\displaystyle{ -\,{\Gamma({d - 1 \over 2}) \over (4\pi)^{{d + 1 \over 2}}\,|x_1|^{d - 1}}\,\Big[1 + \mathcal{O}\big(1/|x_1| \big) \Big]} &
\displaystyle{ \mbox{for\; $\beta = 0$, $d \geqslant 2$ and $x_1 \to \pm \infty$};}
\vspace{0.1cm}\\
\displaystyle{{1 \over 8\pi\,|x_1|} + \mathcal{O}\big(\log|x_1|\big)}
& \displaystyle{ \mbox{for\; $\beta \neq 0$, $d = 2$ and $x_1 \to 0^{\pm}$},}
\vspace{0.1cm} \\
\displaystyle{{\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d + 1 \over 2}}\,|x_1|^{d - 1}}\, \Big[1 + \mathcal{O}\big(|x_1| \big)\Big]}
& \displaystyle{ \mbox{for\; $\beta \neq 0$, $d \geqslant 3$ and $x_1 \to 0^{\pm}$},}
\vspace{0.1cm} \\
\displaystyle{{3\,\Gamma({d - 1 \over 2}) \over (4 \pi)^{{d + 1 \over 2}}\,|x_1|^{d - 1}}\,\Big[1 + \mathcal{O}\big(1/|x_1| \big) \Big]}
& \displaystyle{ \mbox{for\; $\beta \neq 0$, $d \geqslant 2$ and $x_1 \to \pm \infty$}.}
\end{array}
\right. \nonumber
\end{align}
Notice that also in this case the local divergences of the renormalized polarization $\langle 0| \hat{\phi}^{2}(t,\mathbf{x}) |0\rangle_{ren}^{(massless)}$ near the hyperplane $\pi$ are softened for $\beta = 0$, $\gamma \in \mathbb{R}$ and $\alpha = \varsigma = \pm 1$; again, we refer to the analysis of paragraph \ref{subsubsec:x0msemi}.
\vspace{0.5cm}
\textbf{Acknowledgments.} I am grateful to Livio Pizzocchero for many interesting conversations, some of which inspired the subject of this work. I also wish to thank Claudio Cacciapuoti for precious insights on the representation formulae for the heat kernel.
\vspace{0.2cm}
\textbf{Funding.} This work was partly supported by Progetto Giovani INdAM-GNFM 2020 ``\textit{Emergent Features in Quantum Bosonic Theories and Semiclassical Analysis}'' (Istituto Nazionale di Alta Matematica `Francesco Severi' - Gruppo Nazionale per la Fisica Matematica).
Nearly all that is known about the X-ray spectral properties of low-mass
X-ray binaries (LMXBs) has been derived at energies above 1 keV. The
simple reason for this is that most Galactic LMXBs lie in the Galactic
plane and are therefore heavily absorbed at soft X-ray energies by
Galactic hydrogen. Virtually nothing is known about the X-ray properties
of LMXBs below 0.5 keV. Two Galactic examples of LMXBs which lie
in directions of low hydrogen column densities (Hercules X-1 and MS1603+2600)
show strong X-ray emission at energies below 0.5 keV (Choi et al.\ 1997;
Hakala et al.\ 1998), in addition to the hard
(5--10 keV) emission normally attributed to LMXBs.
Fortunately, the bulge of M31 provides a relatively nearby (690 kpc)
laboratory for examining the soft X-ray properties of LMXBs. Since bulge
populations are known to be old, the X-ray sources found in the bulge
are not likely to be contaminated by high-mass X-ray binaries or
supernova remnants. Supper et al.\ (1997) detected 22 X-ray sources
within $5^{\prime}$ of the center of the bulge with the {\it ROSAT} PSPC,
19 of which are most likely LMXBs with 0.1--2.0 keV X-ray luminosities
of $10^{36}-10^{38}$ erg s$^{-1}$ (the other three are supersoft sources).
Nearly all of these LMXBs exhibit strong very soft emission, much like
Hercules X-1 and MS1603+2600.
Furthermore, the integrated X-ray spectrum of the inner
$5^{\prime}$ of the bulge of M31, of which $\sim$80\% is resolved into
the 22 sources, strongly resembles the X-ray spectra of a class of early-type
galaxies that have very low X-ray--to--optical luminosity ratios
(Irwin \& Sarazin 1998a,b; hereafter IS98a,b).
Trends in the X-ray spectral properties of early-type galaxies were first
observed with {\it Einstein} by Kim, Fabbiano, \& Trinchieri (1992).
Unlike X-ray bright early-type galaxies whose
X-ray emission is dominated by thermal emission from $\sim$0.8 keV gas,
these X-ray faint galaxies exhibit two-component (hard + very soft)
X-ray emission (Fabbiano, Kim, \& Trinchieri 1994; Pellegrini 1994;
Kim et al.\ 1996), with the hard component generally attributed to a
collection of LMXBs. Attempts to attribute the strong soft component to
the integrated emission from M star coronae, RS CVn binary stars, and
supersoft sources were unsuccessful (Pellegrini \& Fabbiano 1994; IS98b),
leaving only a warm (0.2 keV) ISM as a possible alternative.
However, the LMXBs in the bulge of M31 demonstrated that LMXBs can also be
a significant source of very soft X-ray emission, which is a likely
explanation for the excess very soft X-ray emission in X-ray faint
early-type galaxies (IS98a,b), rather than a 0.2 keV ISM. The
X-ray--to--optical luminosity ratio of the bulge of M31 is comparable to
that of X-ray faint early-type galaxies, so LMXBs are luminous and/or
numerous enough to account for the X-ray emission.
This seemingly simple solution is complicated by the fact that there exist
LMXBs outside the disk of the Galaxy and the bulge of M31 that do not
exhibit strong very soft emission. Four of these LMXBs reside in Galactic globular
clusters, and another in the Large
Magellanic Cloud. This would seem to weaken the argument that
LMXBs are the source of the very soft emission in X-ray faint early-type
galaxies. However, these five examples differ from the rest in that they
reside in low metallicity environments. The metallicities of NGC~1851,
NGC~6624, NGC~6652, and NGC~7078 are 5\%, 43\%, 10\%, and 0.7\% solar,
respectively (Djorgovski 1993), and the iron abundance of the LMC is
$\sim$50\% solar (e.g., Hill, Andrievsky, \& Spite 1995).
It is possible that the metallicity of the environment in which an LMXB forms
has an effect on the X-ray properties of the binary.
To test this hypothesis we need a sample of LMXBs that reside in
environments that span a range of metallicities, and that also lie in
directions of reasonably low Galactic hydrogen column densities.
The globular cluster system of M31 is ideal for this. Metallicities
for most of M31's globular clusters have been determined optically, many
of which harbor LMXBs. In addition, the hydrogen column density towards M31
is not excessively high ($7 \times 10^{20}$ cm$^{-2}$). In this {\it Letter}
we present the spectral X-ray analysis of a sample of LMXBs which
reside in M31 globular clusters using archival {\it ROSAT} PSPC data
to search for trends in the X-ray properties of LMXBs as a function of
the metallicity of their environment. We also determine the X-ray spectral
properties of the four Galactic globular cluster LMXBs with X-ray
luminosities $\ga 10^{36}$ erg s$^{-1}$ and LMC X-2.
\section{DATA REDUCTION} \label{sec:data}
Because of the large angular size of M31, the galaxy was imaged by the
{\it ROSAT} PSPC with many different pointings to encompass all the X-ray
emission. For our analysis, we only analyze the long ($>25$ kiloseconds)
observations. For each data set the error in the gain correction applied
by SASS as part of the conversion from detected pulse height to pulse
invariant channel (Snowden et al.\ 1995) was corrected using the FTOOL
pcpicor.
Each M31 X-ray source identified as being coincident with an M31 globular
cluster by Supper et al.\ (1997) was cross-referenced against
Huchra, Brodie, \& Kent (1991) for an
estimate for the metallicity of the globular cluster.
If the source contained fewer than 250 X-ray counts
it was removed from the sample. Sources that fell outside the rib support
structure at $18^{\prime}$ were also removed. Table~\ref{tab:m31spec_fits}
lists the ID number of the X-ray source (taken from Supper et al.\ 1997),
an alternate ID taken from Huchra et al.\ (1991), and total X-ray
counts for each X-ray source in our sample.
For each LMXB, its spectrum was extracted from a circular aperture
centered on the source. The size of the extraction aperture varied with
the off-axis angle of the source to account for the point spread function
of the instrument. A locally-determined background was subtracted from the
spectra. The spectra were binned so that each channel contained at least
25 counts, and all channels below 0.1 keV and above 2.4 keV were ignored.
A similar procedure was used to extract spectra from LMXBs in NGC~1851,
NGC~6624, NGC~6652, NGC~7078 (M15), and LMC X-2.
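The minimum-counts binning step above can be sketched as a simple greedy grouping of consecutive channels. The function below is an illustrative reconstruction of this logic (in the spirit of a GROUP MIN-style rebinning), not the exact tool used in the reduction:

```python
def group_min_counts(counts, min_counts=25):
    """Greedily group consecutive spectral channels so that every bin
    holds at least `min_counts` counts; a trailing underfull group is
    merged into the previous bin. Returns inclusive channel ranges.
    Illustrative reconstruction only, not the actual reduction tool."""
    bins, acc, start = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            bins.append((start, i))
            start, acc = i + 1, 0
    if acc > 0 and bins:
        first, _ = bins[-1]
        bins[-1] = (first, len(counts) - 1)
    return bins

# Example: 20 channels with 10 counts each -> bins of three channels
# (30 counts), with the underfull two-channel tail folded into the last bin.
bins = group_min_counts([10] * 20)
print(bins)
```

Each resulting bin then satisfies the Gaussian-statistics requirement behind the $\chi^2$ fitting used in the next section.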
\section{SPECTRAL FITTING} \label{sec:spectral}
\subsection{M31 Globular Cluster LMXBs} \label{subsec:m31_gc}
The spectra for the twelve M31 globular cluster LMXBs in our sample were fit
individually with a variety of spectral models using XSPEC, and it was found
that absorbed thermal bremsstrahlung (TB), power law (PL), and blackbody models
all fit the data equally well. We chose to let the photoelectric absorption
component (Morrison \& McCammon 1983) be a free parameter, since although the
Galactic hydrogen column density toward the direction of the LMXB is
known, it is not known how much M31 hydrogen the LMXB lies behind.
In most cases, when a blackbody model was used the best-fit column
density was well below (and statistically inconsistent with) the Galactic
value, so we discarded this model. The best-fit parameters for the TB and
PL models along with 90\% confidence levels
for one interesting parameter ($\Delta \chi^2 = 2.71$) are
shown in Table~\ref{tab:m31spec_fits} (all errors in this paper are quoted at
the 90\% level). Also shown are the minimum $\chi^2$ value,
the number of degrees of freedom, and the metallicity of the globular cluster
in which the X-ray source is embedded.
\subsection{Galactic and LMC LMXBs} \label{subsec:gal_gc}
Due to the relative proximity of the Galactic LMXBs plus LMC X-2, the
observations of these LMXBs yield a much greater number of X-ray counts
than the M31 globular cluster LMXBs. Consequently, simple one component
models are not adequate to describe the spectra of these objects as is
the case in the M31 LMXBs, which typically yield 1000 or fewer counts. This
illustrates the complexity of LMXB spectra at soft X-ray energies. In
order to compare fairly the properties between Galactic and M31 LMXBs,
we have analyzed only a small fraction of the PSPC data for the Galactic
LMXBs, such that each observation yields only $\sim$2000 counts. This number
was chosen to yield reasonable errors on the spectral parameters, while
still containing few enough counts to be directly comparable to the brighter
M31 LMXBs.
\begin{table*}[tbp]
\caption[M31 Spectral Fits]{Best-fit thermal bremsstrahlung (TB) and power law (PL) model parameters for the twelve M31 globular cluster LMXBs in the sample.}
\label{tab:m31spec_fits}
\begin{center}
\begin{tabular}{ccccccccccc}
\multicolumn{11}{c}{\sc Spectral Fits of M31 Globular Cluster LMXBs} \cr
\tableline \tableline
& & & &\multicolumn{3}{c}{TB Model} &&
\multicolumn{3}{c}{PL Model} \cr
\cline{5-7} \cline{9-11}
& Alternate & Total & & $N_H$ & $kT_{TB}$ & & & $N_H$ & & \\
Name & Name & Counts & $Z/Z_{\odot}$ & ($10^{20}$ cm$^{-2}$) & (keV) &
$\chi^2$/d.o.f. && ($10^{20}$ cm$^{-2}$) & $\Gamma$ & $\chi^2$/d.o.f. \\
\tableline
217 & 143-198 & 274 & 1.23 & 7.26$^{+15.06}_{-3.20}$ & 0.95$^{+1.64}_{-0.50}$ &
14.9/17 && 10.4$^{+34.1}_{-5.12}$ & 2.63$^{+2.29}_{-0.82}$ &
14.6/17 \\
228 & 153-000 & 820 & 0.83 & 7.81$^{+4.27}_{-1.95}$ & 1.70$^{+1.44}_{-0.60}$ &
30.0/28 && 9.47$^{+10.05}_{-2.73}$ & 2.01$^{+0.66}_{-0.34}$ &
31.7/28 \\
220 & 146-000 & 764 & 0.37 & 7.97$^{+4.08}_{-1.84}$ & 1.46$^{+0.99}_{-0.47}$ &
25.6/29 && 9.66$^{+9.56}_{-2.55}$ & 2.10$^{+0.64}_{-0.34}$ &
27.3/29 \\
73 & 005-52 & 2332 & 0.21 & 19.1$^{+6.33}_{-5.56}$ & 1.47$^{+1.13}_{-0.47}$ &
65.4/69 && 27.0$^{+10.9}_{-7.95}$ & 2.48$^{+0.62}_{-0.46}$ &
64.9/69 \\
282 & 225-280 & 730 & 0.20 & 6.21$^{+3.64}_{-1.74}$ & 3.57$^{+12.4}_{-1.84}$ &
19.4/25 && 7.08$^{+5.14}_{-2.33}$ & 1.64$^{+0.40}_{-0.33}$ &
19.8/25 \\
150 & 082-144 & 987 & 0.14 & 44.3$^{+18.2}_{-8.81}$ & 16.8$^{+\infty}_{-15.0}$ &
37.5/33 && 43.6$^{+24.9}_{-20.6}$ & 1.38$^{+0.99}_{-0.87}$ &
37.6/33 \\
122 & 045-108 & 1263 & 0.11 & 11.8$^{+8.87}_{-4.49}$ & 4.25$^{+76.9}_{-2.76}$ &
60.7/47 && 14.4$^{+14.6}_{-6.87}$ & 1.69$^{+0.82}_{-0.48}$ &
60.8/47 \\
247 & 185-235 & 435 & 0.09 & 5.59$^{+3.43}_{-1.53}$ & $> 4.15$ &
18.7/17 && 5.27$^{+6.09}_{-2.44}$ & 1.10$^{+0.50}_{-0.44}$ &
18.6/17 \\
349 & 386-322 & 1503 & 0.06 & 12.8$^{+8.03}_{-4.69}$ & 3.80$^{+14.2}_{-2.23}$ &
46.4/50 && 16.6$^{+12.7}_{-7.54}$ & 1.79$^{+0.69}_{-0.47}$ &
46.9/50 \\
318 & 375-307 & 5731 & 0.06 & 9.83$^{+1.60}_{-1.17}$ & 9.18$^{+18.0}_{-4.20}$ &
145.6/137 && 10.5$^{+2.25}_{-1.49}$ & 1.42$^{+0.16}_{-0.13}$ &
145.4/137 \\
205 & 135-193 & 1769 & 0.02 & 21.2$^{+7.82}_{-6.01}$ & 7.66$^{+\infty}_{-5.30}$ &
53.9/55 && 22.8$^{+11.1}_{-8.47}$ & 1.50$^{+0.55}_{-0.45}$ &
54.0/55 \\
158 & 086-148 & 485 & 0.02 & 5.50$^{+25.5}_{-1.55}$ & $> 0.81$ &
15.5/17 && 5.16$^{+42.1}_{-2.55}$ & 1.11$^{+2.39}_{-0.39}$ &
15.6/17 \\
\tableline
\end{tabular}
\end{center}
\end{table*}
As a check for consistency, five randomly-selected time intervals (each
of which yielded about 2000 counts) were analyzed from each observation
to search for any time dependence in the spectral properties. This was done
to ensure that the parameters of the models used to fit the spectra were
truly representative of time-averaged properties of the LMXB. Below, we
present the {\it median} best-fit parameters for the five time intervals
for each observation.
{\noindent \it NGC1851:} Based on a visual extinction of $A_V=0.06$ mag
(Djorgovski 1993), we have fixed the Galactic hydrogen column density
at $N_H=1.2 \times 10^{20}$ cm$^{-2}$ for both TB and PL models. We have
assumed $N_H=1.79 \times 10^{21} A_V$ cm$^{-2}$ (Predehl \& Schmitt 1995).
This produced a poor fit for the TB model and a marginal
fit for the PL model. However, Walker (1992) found the color excess to be
$E(B-V)=0.02 \pm 0.02$ for this cluster, so the absorption may be twice
the value quoted above given the error. Letting the absorption be free
led to acceptable fits with best-fit
parameters of $N_H=2.57^{+0.29}_{-0.25} \times 10^{20}$ cm$^{-2}$ and
$kT_{TB} > 19.4$ keV for the TB model, and
$N_H=1.90^{+0.49}_{-0.45} \times 10^{20}$ cm$^{-2}$ and $\Gamma=1.03 \pm{0.14}$
for the PL model. This is somewhat steeper than the value of $\Gamma=0.61$
found by Verbunt et al.\ (1995) with {\it ROSAT} All Sky Survey data.
{\noindent \it NGC6624:} We have fixed the absorption
component at $N_H=1.56 \times 10^{21}$ cm$^{-2}$ ($A_V=0.87$ mag). Good
fits were found with a best-fit parameter of $kT_{TB}=3.77^{+2.72}_{-1.23}$ keV
for the TB model and $\Gamma=1.57 \pm 0.14$ for the PL model.
Again, this is slightly steeper than the value of $\Gamma=1.36$ of
Verbunt et al.\ (1995).
{\noindent \it NGC6652:}
The absorption was fixed at $N_H=5.55 \times 10^{20}$ cm$^{-2}$
($A_V=0.31$ mag). A marginal fit ($\chi^2_{\nu} \sim 1.7$) was found for a
TB model with an unreasonably high lower limit on the temperature of
$kT_{TB} > 100$ keV. However, a good fit was obtained with a PL model
with $\Gamma=0.80 \pm 0.10$. Verbunt et al.\ (1995) found a value of
$\Gamma=0.72$.
{\noindent \it NGC7078:}
The absorption was fixed at $N_H=1.97 \times 10^{20}$ cm$^{-2}$
($A_V=0.11$ mag). A TB model gave a very poor fit to the data. A better fit
was obtained with a PL model with $\Gamma=0.12 \pm 0.10$.
{\noindent \it LMC X-2:} The column density was a free parameter.
The best-fit values were $N_H=1.30^{+0.62}_{-0.44} \times 10^{21}$ cm$^{-2}$
and $kT_{TB}=2.47^{+3.91}_{-1.08}$ keV for the TB model, and
$N_H=1.87^{+0.62}_{-0.44} \times 10^{21}$ cm$^{-2}$ and
$\Gamma=2.06^{+0.59}_{-0.53}$ for the PL model. Estimates made from color
excess maps of the LMC by Schwering \& Israel (1991) give a column density of
$\sim$$10^{21}$ cm$^{-2}$, in rough agreement with the X-ray measurements.
\section{DISCUSSION} \label{sec:discussion}
From Table~\ref{tab:m31spec_fits} the correlation between the best-fit
TB temperature or PL exponent and the metallicity is evident.
Source 217, which has the highest metallicity of the LMXBs in the sample,
also has the lowest measured temperature ($kT=0.95$ keV)
or steepest power law exponent ($\Gamma=2.63$).
Conversely, the lower metallicity LMXBs have higher temperatures
or flatter power law exponents.
In some cases, the best-fit $N_H$ value is considerably higher than the
Galactic value, indicating that the LMXB spectrum is being absorbed by
hydrogen in the disk of M31. In other cases the derived $N_H$ value is
consistent with the Galactic value in that direction. This trend with
metallicity is supported by the four Galactic globular cluster LMXBs and
LMC X-2. The two LMXBs in higher metallicity environments (NGC~6624
and LMC X-2) have softer spectra than the three low metallicity LMXBs.
Previous studies (IS98a,b) have concluded that the
X-ray properties of a subclass of early-type galaxies with very low
X-ray--to--optical luminosity ratios are similar to those of LMXBs
in the Galaxy and the bulges of M31 and NGC~1291. In these studies, two
X-ray ``colors" (ratio of counts in three X-ray bands) were used to
characterize the X-ray emission. For a comparison to those studies, we
compute the same colors from the best fit TB and PL fits described
above. The two X-ray colors, C21 and C32, are defined as
\begin{equation} \label{eq:c21}
{\rm C21} =
\frac{\rm counts~in~0.52-0.90~keV~band}{\rm counts~in~0.11-0.41~keV~band}
\, ,
\end{equation}
and
\begin{equation} \label{eq:c32}
{\rm C32} =
\frac{\rm counts~in~0.91-2.02~keV~band}{\rm counts~in~0.52-0.90~keV~band}
\, .
\end{equation}
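Ignoring the detector response and absorption (for which the colors below are corrected), the qualitative link between the power law index and these colors can be illustrated by integrating a photon spectrum $dN/dE \propto E^{-\Gamma}$ over the three bands defined above. The numbers are schematic, not the published colors:

```python
import math

def band_counts(gamma, e_lo, e_hi):
    """Photon counts in the band [e_lo, e_hi] (keV) for dN/dE ~ E^{-gamma},
    ignoring the PSPC effective area and any absorption (schematic only)."""
    if abs(gamma - 1.0) < 1e-12:
        return math.log(e_hi / e_lo)
    return (e_hi ** (1.0 - gamma) - e_lo ** (1.0 - gamma)) / (1.0 - gamma)

def colors(gamma):
    """Schematic (C21, C32) colors from the band definitions above."""
    soft = band_counts(gamma, 0.11, 0.41)
    med = band_counts(gamma, 0.52, 0.90)
    hard = band_counts(gamma, 0.91, 2.02)
    return med / soft, hard / med

# Flat (hard) vs. steep (soft) power laws, cf. Sources 247 and 217:
c21_hard, c32_hard = colors(1.10)
c21_soft, c32_soft = colors(2.63)
print((c21_hard, c32_hard), (c21_soft, c32_soft))
```

Flatter indices push both colors up, reproducing the sense of the trend with metallicity discussed in the text.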
The absorption-corrected colors derived from the best-fit models for
the twelve LMXBs are shown in Figure~\ref{fig:brem} and
Figure~\ref{fig:power} for the TB and PL models,
respectively. The errors shown are the colors derived from spectral models
using the 90\% upper and lower limits on the temperature or power law
exponent. The M31 globular cluster sample has been
broken into three groups based on metallicity: $Z \le 0.2$ $Z_{\odot}$,
0.2 $Z_{\odot} < Z <$ $Z_{\odot}$, and $Z >$ $Z_{\odot}$.
The colors of the four Galactic LMXBs and LMC X-2 are also shown, although
NGC~6652 and NGC~7078 have been omitted in the TB case since this model
did not produce an acceptable fit to the spectra.
In the TB case, the temperature of some of the low metallicity LMXBs
was unconstrained, so the colors given are those predicted by a
bremsstrahlung temperature of 200 keV (the highest temperature allowed
by XSPEC). Also shown are the colors for the bulge of M31
and many X-ray faint early-type galaxies
(taken from IS98b). The error bars on these quantities
have been omitted for clarity.
The segregation of the colors with metallicity of the globular cluster is
clear, although the errors are rather large in the power law case. As the
metallicity of the cluster decreases the colors increase, indicating a
hardening of the spectra. The lone LMXB in the sample that resides in
a cluster with a metallicity greater than solar has colors very similar to
that of the bulge of M31, which has a metallicity of about twice solar
(Bica, Alloin, \& Schmidt 1990). As mentioned above, the X-ray emission from
the bulge of M31 is dominated by LMXBs. The three moderate metallicity M31
LMXBs as well as NGC~6624 and LMC X-2 occupy a region of C21-C32 space above
and to the right of the high metallicity LMXB and the bulge of M31.
The lowest metallicity LMXBs lie further still to the right and above.
The three low metallicity Galactic globular cluster LMXBs have especially
hard colors in the PL case.
For both the TB and PL case, the colors of the high metallicity M31 LMXB
(Source 217) are consistent with those of the X-ray faint early-type galaxies.
As mentioned above, the LMXBs in the bulge of M31 give X-ray--to--optical
luminosity ratios comparable to those of the X-ray faint galaxies. The colors
of the high metallicity M31
\centerline{\null}
\vskip2.65truein
\special{psfile=figure1.ps hscale=38 vscale=38 hoffset=0 voffset=-75}
\figcaption{
C21 vs.\ C32 plot of the best-fit TB models of the twelve M31 globular cluster
LMXBs, NGC~1851, NGC~6624, and LMC X-2. Also shown are the colors of
the bulge of M31 and many X-ray faint early-type galaxies
(taken from IS98a). Arrows indicate that only lower limits
were found for the temperatures. The trend of X-ray colors with the
metallicity is evident. Note that the high metallicity M31 LMXB has colors
similar to those of the bulge of M31 and the X-ray faint galaxies.
\label{fig:brem}}
\vskip0.1truein
\centerline{\null}
\vskip2.65truein
\special{psfile=figure2.ps hscale=38 vscale=38 hoffset=0 voffset=-75}
\figcaption{
Same notation as in Figure~\protect\ref{fig:brem}, but this time including
the colors of NGC~6652 and NGC~7078 for the PL case. Although the errors
are larger than in the TB case, the trend of X-ray colors with the
metallicity is still evident.
\label{fig:power}}
\vskip0.1truein
\noindent globular cluster LMXB adds further
evidence that LMXBs can produce the spectral characteristics observed in X-ray
faint early-type galaxies. Given the high central metallicities of early-type
galaxies, the LMXBs in these systems should more closely resemble LMXBs
in high metallicity systems such as the bulge of M31 or Source 217, and
not LMXBs in low metallicity systems such as those in NGC~1851, NGC~6652,
and NGC~7078.
Previous studies (e.g., Davis \& White 1996) have found a correlation
between ISM temperature and metal abundance, by assuming that the
X-ray spectra of the X-ray faintest galaxies could be described by a
single component, zero metallicity Raymond-Smith model with
$kT\sim 0.6$ keV. Although this model is not excluded by the {\it ROSAT}
PSPC data, a subsequent {\it ASCA} study of at least one X-ray faint galaxy
excluded this model (Kim et al.\ 1996). If a majority of the X-ray
emission from the X-ray faintest galaxies is stellar in nature, this calls
into question the relation between ISM temperature and abundance, as
well as the relation between ISM temperature and stellar velocity
dispersions.
Finally we note that the small discrepancy in C32 between the bulge of
M31 and the mean of the X-ray faint early-type galaxies may be caused
by the presence of small amounts of warm interstellar gas in the X-ray
faint systems, although the gas is not the dominant X-ray emission
mechanism as is the case in X-ray bright galaxies. There is
evidence that the amount of ISM increases with increasing
$L_X/L_B$. As a comparison, X-ray
bright early-type galaxies whose emission is dominated by $\sim$0.8 keV
gas have X-ray colors in the range (C21,C32$)=(0.5-1, 0.8-2)$, well
separated from the X-ray faint early-type galaxies (IS98b).
\section{A FEW WORDS OF WARNING} \label{sec:warning}
It should be stressed that the low energy spectra of LMXBs are quite
complicated and cannot in general be described by a single TB or PL
model. The good fits obtained here are solely the result of having a
paucity of X-ray counts for the M31 sample. For the well-observed Galactic
LMXB sample, even two component fits did not always produce a good fit.
However, there is no reason to believe that the simple models used here
should lead to misleading results regarding the trend of the X-ray colors
with metallicity.
Another issue not addressed here is the presence of absorbing material
intrinsic to the LMXB. For the five Galactic/LMC LMXBs as well as eight of
the twelve M31 LMXBs, the best-fit absorption value is consistent
with what is expected from intervening material in our own Galaxy.
We have assumed that those showing excess absorption do so because they
lie behind absorbing material in the disk of M31. Although it is possible
that the excess absorption is from material intrinsic to the LMXB, the removal
of these four LMXBs will not alter the conclusions drawn here. Three of the
four LMXBs have a low metallicity and already have hard colors, so using a
lower value for the line-of-sight (non-intrinsic) absorption will only
make their colors harder. Furthermore, inspection of the raw C32 color
{\it uncorrected} for absorption shows the same trend with metallicity.
The trend in C21 is lost, though, since this color is highly dependent
on absorption.
\acknowledgments
This research has made use of data obtained through the High Energy
Astrophysics Science Archive Research Center Online Service,
provided by the NASA/Goddard Space Flight Center.
This work has been supported by NASA grant NAG5-3247.
\section{Introduction}
\label{sec:intro}
In recent years, Convolutional Neural Networks and Vision Transformers have become the state-of-the-art techniques in computer vision \cite{Sutton2000}, with remarkable results on well-known datasets such as ImageNet \cite{5206848} and JFT-300M \cite{Sun_2017_ICCV}. One of the main factors affecting the performance of these algorithms is the availability of a considerable quantity of high-resolution and diverse training instances. However, collecting such datasets requires expertise in the specialized field and is tedious and costly. Moreover, it is sometimes not feasible to collect a satisfactory amount of data, for example in Fine-Grained Classification \cite{Akata_2015_CVPR, DBLP:journals/corr/abs-2106-10587, conde2021exploring}, a sub-field of image recognition \cite{He_2016_CVPR} that aims to differentiate subcategories of a general class, such as breeds of animals or species of flowers. As a result, several methods have been introduced to tackle these problems. \\
\begin{figure*}[ht]
\centering
\includegraphics[clip,width=0.9\linewidth]{images/ss.pdf}
\caption{The complete classification pipeline of our method for GAN-based data augmentation. }
\label{fig:pipe}
\end{figure*}
Data augmentation techniques have performed remarkably well as a solution to the problem of limited data \cite{DBLP:journals/corr/abs-1712-04621, jain2021using}. These algorithms use various approaches to create new instances from existing training data and are divided into two groups: traditional methods \cite{zhao2020differentiable, 10.1007/978-3-030-58583-9_34} and image generation using Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}. Color jittering, flipping, blurring, rotation, and adding a small amount of noise exemplify the former. GANs can produce unseen images with the same statistics as the original training data. A GAN trains a generative model (automatically finding the patterns in the input data and learning regularities to produce new examples without supervision) by coupling two neural networks, a generator and a discriminator (see Figure \textcolor{red}{\ref{fig:gans}}). The generator produces a synthetic example from a random vector. The generated images are then fed to the discriminator along with authentic images, and the discriminator attempts to identify the fake ones. The two models are trained together until the success rate of the discriminator stabilizes at approximately 50\%. \\
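As a concrete illustration of the traditional transforms listed above, the sketch below implements horizontal flipping and additive noise on a toy nested-list image (plain Python for readability; real pipelines operate on image tensors):

```python
import random

def hflip(img):
    """Mirror an image (a list of pixel rows) left-to-right."""
    return [row[::-1] for row in img]

def add_noise(img, sigma=0.05, seed=0):
    """Add a small amount of Gaussian noise to every pixel value."""
    rng = random.Random(seed)
    return [[p + rng.gauss(0.0, sigma) for p in row] for row in img]

img = [[1, 2], [3, 4]]
assert hflip(img) == [[2, 1], [4, 3]]  # columns reversed in each row
noisy = add_noise(img)
# each noisy pixel stays close to the original value
assert all(abs(n - p) < 1.0
           for nr, pr in zip(noisy, img) for n, p in zip(nr, pr))
```

Rotation, blurring, and color jittering follow the same pattern: a deterministic or randomized per-image transform applied on the fly during training.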
As discussed above, GANs appear to be an effective method for data augmentation and have had exceptional success on certain datasets, e.g., 100-shot Obama and Panda \cite{zhao2020differentiable}. Still, they have difficulty generalizing to other datasets. The mentioned datasets share characteristics such as centered objects (faces) and a limited distribution. This is not always the case: most datasets contain objects in various poses and sizes, which leads to poor convergence of GAN models \cite{barnett2018convergence}. As a result, the produced images may suffer from abnormal colors and distribution shifts \cite{9035107}. In other cases, when the size of the training set is insufficient, the loss of the discriminator falls rapidly while the validation loss keeps decreasing, showing that the network merely produces images almost identical to the training set. Consequently, when the generated images are added to the original dataset for computer vision tasks, in particular image classification, the evaluation metrics barely change, which indicates the impracticality of these methods \cite{sundaram2021ganbased}. \\
To overcome the stated drawbacks, we take advantage of several methods. We experiment on the Oxford-IIIT Pet dataset \cite{6248092}, which consists of images of different breeds of dogs and cats. Experimental results demonstrate a noticeable increase in the accuracy of a highly developed model, the Vision Transformer (ViT) \cite{dosovitskiy2021image}, on the task of fine-grained image classification. First, we tried generating new realistic samples to feed the model, but the produced images were impractical and of poor quality. To solve this problem, we capitalize on a new version of StyleGAN \cite{karras2019stylebased}, the highly developed StyleGAN2-ADA \cite{karras2020training}, which proposes a flexible discriminator augmentation mechanism that improves performance in limited-data regimes. As discussed above, GANs perform poorly when training objects are not centered, so we trained a customized version of the famous CNN architecture MobileNetV2 \cite{Sandler_2018_CVPR} to predict facial landmarks \cite{Wu_2018} of cats \cite{weiweizhangandjiansunandxiaooutang2008cat} and dogs \cite{7026060} and cropped instances of the Oxford-IIIT Pet dataset according to the predicted landmarks. We then trained StyleGAN2-ADA on the cropped images and, by combining the produced images with the original data, created a richer and more extensive dataset. Finally, we trained the ViT benchmark on the upgraded dataset with the same hyper-parameters and achieved a better result.
This paper commences by introducing related work in the following section. Next, it explains the proposed approach and presents the experimental setup in sections \textcolor{red}{\ref{sec:method}} and \textcolor{red}{\ref{sec:exper}}, respectively. In section \textcolor{red}{\ref{sec:result}}, the outcomes are reported, and finally, we conclude in section \textcolor{red}{\ref{sec:con}}.
\section{Related Work}
Dataset size is an essential factor in the performance of image classifiers, and the primary issue is overfitting \cite{doi:10.1021/ci0342472, zhao2020differentiable}. When the model overfits the training set, it will not generalize well on the test data. Many methods have been proposed over the years to overcome this problem; the most straightforward include regularization \cite{6796297} and dropout \cite{JMLR:v15:srivastava14a}. The former adds a penalty on the norm of the weights so that the results fluctuate less. Dropout removes a percentage of neurons at random, dropping certain connections in the network. Later techniques such as batch normalization \cite{cooijmans2017recurrent} re-scale and center each layer, making the model more stable and accurate. Transfer learning \cite{10.1007/978-3-030-01424-7_27} is another popular strategy that initializes a model with the weights of the same architecture trained on a different dataset instead of random values. All of the suggested approaches are orthogonal to our work, and it is possible to use them all together.
That being the case, it requires specialization and time to tune the parameters to get the best accuracy. Our method generates new images, to which standard data augmentation can also be applied, increasing the size of the dataset further.
The most advanced augmentation method is using GANs to generate new never-seen-before samples from current data. Since the invention of GANs \cite{goodfellow2014generative}, various GAN-based models have been introduced \cite{8667290}. One issue with these models was the inability to distinguish between different classes of training data and the lack of control over the generated images. Consequently, Conditional Generative Adversarial Networks (cGANs) \cite{mirza2014conditional} were proposed in 2014; they condition image generation on a class label, enabling the generation of images of a specified target class. Sandfort et al. \cite{Sandfort2019} demonstrated the potential of GAN-based data augmentation, confirming that using CycleGAN for augmentation in diverse tasks, e.g., segmentation and classification, can substantially boost performance in these domains.
In this paper, we use the state-of-the-art model StyleGAN2-ADA, introduced by Karras et al. in 2020 \cite{karras2020training}, which, by augmenting the input data, performs better than previous models, especially in low-data regimes. However, a downside of GAN models is that objects need to be centered and have similar poses; we study challenging scenarios where these conditions do not apply and overcome this issue by cropping the images around predicted landmarks of the samples, in this case animals, to boost StyleGAN2-ADA. Furthermore, we show that these changes not only reduce the Frechet Inception Distance \cite{8110709} score of StyleGAN2-ADA but also that the generated images can be used for data augmentation in fine-grained image classification.
\section{Method}
\label{sec:method}
Our procedure uses GANs to expand the dataset by generating new instances of the training set (see examples in Figure \textcolor{red}{\ref{fig:gan}}). We further improve the quality of the generated images by detecting facial landmarks and cropping the images fed to the GAN model, and we compare the result with the original data. Finally, by combining synthetic images with real ones, we train a highly developed classifier model to show the effectiveness of our method. The whole pipeline is illustrated in Figure \textcolor{red}{\ref{fig:pipe}}.
\subsection{Fine-Grained Classification Model}
Vision transformers have recently achieved exceptional results on various datasets. ViT \cite{dosovitskiy2021image} and BiT \cite{kolesnikov2020big}, two popular transformer models, deliver outstanding performance on the Oxford-IIIT Pet dataset. In this work, we train on our augmented dataset plus the original one using a variant of ViT named R26 + ViT-S/32. This hybrid model is a ViT-S/32 on top of a ResNet-26 \cite{AAAI1714806} backbone; the combination allows us to achieve outcomes similar to larger models with less than half the computational fine-tuning cost. R26 + ViT-S/32 is pre-trained on ImageNet-21k and has fewer parameters compared to larger variants of ViT. We chose this model due to limitations in computational power, even though models with more parameters yield better results. We compared the results and our method's capability by training the classifier on each dataset variant, namely, original, augmented, and cropped-augmented.
\begin{figure}[ht]
\centering
\includegraphics[clip,width=0.9\linewidth]{images/landmark.jpg}
\caption{Examples of different cat and dog breeds and their associated landmarks}
\label{fig:land}
\end{figure}
\begin{figure*}[ht]
\subfloat[\label{subfig:a}]{%
  \includegraphics[width=0.13\textwidth]{images/original.jpg}
}
\hfill
\subfloat[\label{subfig:b}]{%
  \includegraphics[width=0.13\textwidth]{images/cropped-100.jpg}
}
\hfill\subfloat[\label{subfig:c}]{%
  \includegraphics[width=0.13\textwidth]{images/uncropped-100.jpg}
}
\hfill\subfloat[\label{subfig:d}]{%
  \includegraphics[width=0.13\textwidth]{images/cropped-50.jpg}
}
\hfill\subfloat[\label{subfig:e}]{%
  \includegraphics[width=0.13\textwidth]{images/uncropped-50.jpg}
}
\hfill\subfloat[\label{subfig:f}]{%
  \includegraphics[width=0.13\textwidth]{images/cropped-10.jpg}
}
\hfill\subfloat[\label{subfig:g}]{%
  \includegraphics[width=0.13\textwidth]{images/uncropped-10.jpg}
}
\caption{Comparison between synthetic and authentic images. We show (a) the original data, (b) and (c) images generated on the whole dataset, cropped and uncropped, respectively, (d) cropped images generated on a 50\% subset, (e) uncropped images generated on a 50\% subset, and finally (f) and (g), cropped and uncropped images resulting from training on only 10\% of the data. These qualitative visualizations demonstrate the effectiveness and interpretability of our method.}
\label{fig:gan}
\end{figure*}
\subsection{Data Augmentation via Image Generation}
By employing the GAN framework to generate new images, it is possible to use them for augmentation to reduce overfitting and increase the accuracy of the classifier. To keep the data balanced, we use a conditional form of StyleGAN2-ADA, which receives an extra input $class = \{1, 2, ..., 37\}$ denoting one of the 37 breeds available in the pets dataset. The conditional model works more efficiently than training each class separately: it consolidates data from different classes while keeping each of them realistic, without blending the appearances of different breeds.
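The class conditioning described above can be pictured as handing the generator a label alongside its latent code. The sketch below (plain Python, with a hypothetical 512-dimensional latent and a one-hot encoding of the 37 breeds; labels are 0-indexed here purely for illustration) shows the idea:

```python
N_CLASSES = 37     # breeds in the Oxford-IIIT Pet dataset
LATENT_DIM = 512   # hypothetical latent size, for illustration only

def one_hot(label, n=N_CLASSES):
    """One-hot embedding of a class label (0-indexed)."""
    v = [0.0] * n
    v[label] = 1.0
    return v

def conditioned_input(z, label):
    """Concatenate the latent vector z with the class embedding,
    so the generator knows which breed to synthesize."""
    return z + one_hot(label)

z = [0.0] * LATENT_DIM
x = conditioned_input(z, label=3)
assert len(x) == LATENT_DIM + N_CLASSES   # 549-dimensional conditioned code
assert x[LATENT_DIM + 3] == 1.0           # the label channel is set
```

StyleGAN2-ADA's actual conditioning maps the label through a learned embedding rather than raw one-hot concatenation, but the principle of pairing the latent with a class signal is the same.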
StyleGAN2-ADA takes advantage of augmentation to produce notable results even with limited data. To experiment with dataset size, we trained both the StyleGAN and ViT models using $\{10\%, 50\%, 100\%\}$ subsets of the pets dataset to show the effect of dataset size on the output.
\subsection{Facial Landmark Detection}
The position and size of the object in the image (in this case, cats and dogs) play a crucial role in robust image generation using GANs. Most publicly available datasets do not meet this requirement; thus, GANs are limited to a small number of datasets, i.e., CelebA \cite{liu2015faceattributes}, AFHQ \cite{choi2020starganv2}, etc. In this paper, we demonstrate how preparing the Oxford-IIIT Pet dataset by cropping the images can improve the performance of GANs and therefore generate more realistic images. Due to inconsistent annotations in the pets dataset, instead of using the provided bounding boxes, we trained a model to predict the animal landmarks, i.e., ears, nose, and eyes. We then cropped the images based on the predicted landmarks. This method generalizes easily to different datasets, regardless of bounding-box restrictions.
We finetuned a MobileNetV2 model \cite{Sandler_2018_CVPR} on the annotated landmark datasets. The model was pre-trained on ImageNet and outputs 14 numbers denoting 7 landmarks on the animal face (see Figure \textcolor{red}{\ref{fig:land}}). All the data is augmented with traditional methods, i.e., mirroring, rotation, and zooming. Furthermore, we used landmark normalization based on the center of the anchor, introduced in \cite{deng2019retinaface}, which slightly boosted the performance.
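One simple way to turn the seven predicted landmarks into a crop is to take their bounding box, enlarge it by a relative margin, and clamp it to the image borders. The sketch below illustrates this; the margin value of 0.25 is an illustrative assumption, not the exact setting used in our experiments:

```python
def crop_box(landmarks, img_w, img_h, margin=0.25):
    """Bounding box around facial landmarks, enlarged by a relative
    margin and clamped to the image borders.  `landmarks` is a list
    of (x, y) points; `margin` is a hypothetical choice."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    left   = max(0, int(min(xs) - margin * w))
    top    = max(0, int(min(ys) - margin * h))
    right  = min(img_w, int(max(xs) + margin * w))
    bottom = min(img_h, int(max(ys) + margin * h))
    return left, top, right, bottom

# seven landmarks (two per ear, one per eye, one nose) on a 200x200 image
pts = [(60, 40), (70, 30), (130, 40), (140, 30),
       (80, 80), (120, 80), (100, 110)]
assert crop_box(pts, 200, 200) == (40, 10, 160, 130)
```

Cropping from landmarks in this way keeps the face roughly centered in every training image, which is the property the GAN benefits from.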
\section{Experiments}
\label{sec:exper}
\subsection{Datasets}
\textbf{Oxford IIIT-Pet Dataset} To demonstrate the capability of our proposed method, we chose the public Oxford-IIIT Pet \cite{6248092} dataset, which includes pictures of animals in different shapes and poses. This diversity suggests a more natural distribution and highlights our method's performance compared to the benchmark results. The dataset contains a total of 7349 images across 37 breeds of cats (12 categories) and dogs (25 categories). Each breed consists of roughly 200 photos in JPG format, resized to 128 $\times$ 128 for training StyleGAN2-ADA and fine-tuning ViT.
We used the official training split from the original paper both for training the GAN model and for fine-tuning the fine-grained image classifier. To demonstrate the effect of the proposed augmentation technique, we performed all experiments on subsets of the dataset, i.e., 10\%, 50\%, and 100\%, on both the cropped and the original data.
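Drawing the 10\%, 50\%, and 100\% subsets can be done per class so that each subset stays balanced across breeds. A minimal sketch (assuming per-class uniform subsampling, which is one plausible way to form such splits):

```python
import random

def stratified_subset(samples_by_class, fraction, seed=0):
    """Draw the same fraction of images from every class so the
    subset stays balanced across breeds."""
    rng = random.Random(seed)
    subset = {}
    for cls, samples in samples_by_class.items():
        k = max(1, int(len(samples) * fraction))  # at least one per class
        subset[cls] = rng.sample(samples, k)
    return subset

# hypothetical file lists: ~200 images per breed, as in the pets dataset
data = {b: [f"{b}_{i}.jpg" for i in range(200)] for b in ["beagle", "bengal"]}
small = stratified_subset(data, 0.10)
assert all(len(v) == 20 for v in small.values())  # 10% of 200 per class
```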
\textbf{Landmark Detection Datasets} We use two separate datasets to finetune a MobileNet model for facial landmark prediction of cats and dogs. By combining the dog face dataset collected by Mougeot et al. \cite{10.1007/978-3-030-29894-4_34} with a variation of the cat face dataset from Zhang et al. \cite{weiweizhangandjiansunandxiaooutang2008cat}, we produced a consolidated dataset for this task. In addition, we unified and improved the landmark format. The first dataset contains 1393 classes of dogs and 8363 images, with at least two images per dog. Each image is of size 224 $\times$ 224 $\times$ 3 and in JPG format. The second dataset consists of 10,000 annotated cat images collected from \textit{flickr.com}. To unify the two datasets, we resized all images from the cat dataset to 224 $\times$ 224. Lastly, we used the official train/validation/test split suggested in the paper.
\subsection{Implementation Details}
\textbf{MobileNetV2 Settings} We used the original MobileNetV2 architecture with some modifications for facial landmark prediction. It accepts 224 $\times$ 224 RGB images as input. Global max-pooling is applied to the pooling layers. We discarded the fully connected layers (FCL) on top; instead, we used two FCLs with $2^7$ nodes and Rectified Linear Unit (ReLU) activation. The model's output is of size 14, denoting the coordinates of 7 animal landmarks (2 per ear, 1 per eye, 1 for the nose).
\textbf{StyleGAN2-ADA Settings} The model was trained six times on different variants of the pets dataset with the same hyperparameters. All input images are 128 $\times$ 128 $\times$ 3, and the conditional form of the model is used with 37 classes, one per breed. The ADA target value, which controls data augmentation intensity, is set to 0.6, and the augmentation methods are set to their defaults. Learning rates for the generator and discriminator are both set to 0.0025.
\textbf{R26 + ViT-S/32 Settings} To keep the results comparable with the published performance of the ViT model, we used the same hyperparameters as in the official paper; however, because of the lower resolution of the generated images, we resized the final dataset to 128 $\times$ 128 for consistency. The model is pre-trained on ImageNet with 85.99\% accuracy, and dropout is excluded. Additionally, the Adam optimizer performs slightly better than Stochastic Gradient Descent (SGD). We split the data and evaluated the model using 3680 images for training and 3669 images for testing.
\subsection{Training Details}
Our model is implemented using PyTorch on an Ubuntu 20.04 server. We use a Tesla P100 GPU to accelerate the training process. We finetuned the MobileNetV2 model for 15 epochs with a batch size of 32. As recommended in the StyleGAN2-ADA paper, training for 5000 kimg yields realistic images and near convergence. We trained all variants from scratch up to 5120 kimg; each run took approximately 60 hours. Finally, the ViT model, pre-trained for 300 epochs, was fine-tuned for 16 more epochs on each variant of the pets dataset.
\begin{table}[t]
\centering
\begin{tabular}{p{0.30\linewidth}m{0.10\linewidth}m{0.10\linewidth}}
\toprule
\multirow{2}{*}{Models} & \multicolumn{2}{c}{RMSE ($\downarrow$)}\\ \cmidrule{2-3}
& Validation & Test \\
\midrule
MobileNetV2 & 3.11 & 3.56 \\ \addlinespace
MobileNetV2 +Normalization & 3.02 & 3.43 \\
\bottomrule
\end{tabular}
\caption{RMSE of the MobileNetV2 landmark predictor with and without landmark normalization, computed on both the validation and the test set.}
\label{table:one}
\end{table}
\pgfplotstableread{graph.dat}{\loadedtable}
\begin{figure*}
\centering
\subfloat{
\begin{tikzpicture}
\begin{axis}[
title = 100\% subset,
xlabel = $\times 10^3 \quad kimg$,
ylabel = FID,
xmin = 0, xmax = 5.04,
ymin = 10, ymax = 75,
xtick distance = 1,
ytick distance = 20,
grid = both,
minor tick num = 1,
major grid style = {lightgray},
minor grid style = {lightgray!25},
width = 0.3\textwidth,
]
\addplot[blue] table [x = {x}, y = {y3}] {\loadedtable};
\addplot[red] table [x ={x}, y = {y4}] {\loadedtable};
\end{axis}
\end{tikzpicture}
}\hfill
\subfloat{
\begin{tikzpicture}
\begin{axis}[
legend columns=-1,
legend entries={cropped,uncropped},
legend to name=named,
title = 50\% subset,
xlabel = $\times 10^3 \quad kimg$,
ylabel = FID,
xmin = 0, xmax = 5.04,
ymin = 15, ymax = 100,
xtick distance = 1,
ytick distance = 20,
grid = both,
minor tick num = 1,
major grid style = {lightgray},
minor grid style = {lightgray!25},
width = 0.3\textwidth,]
\addplot[blue] table [x = {x}, y = {y2}] {\loadedtable};
\addplot[red] table [x ={x}, y = {y5}] {\loadedtable};
\end{axis}
\end{tikzpicture}
} \hfill
\subfloat{
\begin{tikzpicture}
\begin{axis}[
title = 10\% subset,
xlabel = $\times 10^3 \quad kimg$,
ylabel = $FID$,
xmin = 0, xmax = 5.04,
ymin = 40, ymax = 150,
xtick distance = 1,
ytick distance = 25,
grid = both,
minor tick num = 1,
major grid style = {lightgray},
minor grid style = {lightgray!25},
width =0.3\textwidth,
]
\addplot[blue] table [x = {x}, y = {y1}] {\loadedtable};
\addplot[red] table [x ={x}, y = {y6}] {\loadedtable};
\end{axis}
\end{tikzpicture}
} \\
\textcolor{black}{\ref{named}}
\caption{FID over the course of StyleGAN2-ADA training for the cropped and uncropped datasets in data regimes of 10, 50, and 100 percent.}
\label{fig:fid}
\end{figure*}
\begin{table*}[ht]
\centering
\setlength{\tabcolsep}{10pt}
\begin{tabular}{p{0.29\textwidth}*{6}{c}}
\toprule
\multicolumn{1}{l}{\multirow{2}{*}{ Dataset variant}} & \multicolumn{2}{c}{10\% training data} & \multicolumn{2}{c}{50\% training data} & \multicolumn{2}{c}{100\% training data} \\
\cmidrule(r{3.8pt}){2-3} \cmidrule(l){4-5} \cmidrule(l){6-7}
& FID($\downarrow$) & Test Accuracy ($\uparrow$) & FID & Test Accuracy & FID & Test Accuracy \\
\midrule
Original & -- & 64.73 & -- & 88.41 & -- & 94.13\\\addlinespace
Augmented & 71.1 & 63.32 & 36.4 & 88.70 & 20.7 & 94.93\\\addlinespace
Cropped-Augmented (Ours)& \textbf{49.4} & \textbf{68.55} & \textbf{22.3} & \textbf{91.73} & \textbf{14.1} & \textbf{96.28} \\
\bottomrule
\end{tabular}
\caption{FID and classifier accuracy for three dataset conditions (original, augmented, and cropped-augmented) in data regimes of 10, 50, and 100 percent.}
\label{table:two}
\end{table*}
\section{Results}
\label{sec:result}
\subsection{Evaluation Metrics}
We used three independent metrics to evaluate our approaches and tasks. For the fine-grained image classification task, we consider accuracy, since the classes contain roughly equal numbers of samples. Accuracy is defined as follows, where $R_a$ is the number of correctly classified samples and $R$ is the total number of testing images.
$$
Accuracy = \frac{R_a}{R}
$$
To assess the quality of the generated images, the Frechet Inception Distance (FID) is used; a lower FID indicates better quality. To compute FID, a pre-trained InceptionV3 model is loaded and features are extracted from an intermediate layer, taken as the activations of the last pooling layer. A multivariate Gaussian distribution with mean $\mu$ and covariance $\Sigma$ is used to model the feature distribution. The FID between the authentic images $x$ and the generated images $g$ is calculated as follows, where $Tr$ sums the diagonal elements:
$$
FID(x, g) = ||\mu_x - \mu_g||_2^2 + Tr\left(\Sigma_x + \Sigma_g - 2(\Sigma_x \Sigma_g)^{1/2}\right)
$$
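For intuition, in the one-dimensional (or diagonal-covariance) case the matrix square root in the trace term reduces to a scalar square root, and the FID can be evaluated directly. A sketch with illustrative numbers:

```python
import math

def fid_1d(mu_x, var_x, mu_g, var_g):
    """FID between two 1-D Gaussians: the trace term collapses to
    var_x + var_g - 2*sqrt(var_x * var_g)."""
    return (mu_x - mu_g) ** 2 + var_x + var_g - 2 * math.sqrt(var_x * var_g)

# identical distributions have zero distance
assert fid_1d(0.0, 1.0, 0.0, 1.0) == 0.0
# illustrative gap: means 0 vs 1, variances 4 vs 1
assert fid_1d(0.0, 4.0, 1.0, 1.0) == 2.0  # 1 + 4 + 1 - 2*2
```

The full metric works the same way on the 2048-dimensional InceptionV3 feature statistics, with a matrix square root in place of the scalar one.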
To measure facial landmarking accuracy, we use the Root Mean Square Error (RMSE) \cite{Johnston2018}, which reflects the average distance between the values predicted by the model and the actual values in the dataset. We use the following formula to measure the average distance between each of the $N$ predicted landmarks $(x_i^p, y_i^p)$ and the corresponding ground truth $(x_i^t, y_i^t)$ on a per-landmark basis. Poorly predicted landmarks lie far from their annotated positions and raise the RMSE.
$$
RMSE = \frac{1}{N}\sum_{i=1}^{N} \sqrt{(x_i^p - x_i^t)^2 + (y_i^p - y_i^t)^2}
$$
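The per-landmark formula above (a mean Euclidean distance over the $N$ landmarks) can be computed directly; a minimal sketch:

```python
import math

def landmark_rmse(pred, gt):
    """Mean Euclidean distance between N predicted and ground-truth
    landmarks, matching the per-landmark formula above."""
    assert len(pred) == len(gt)
    total = sum(math.hypot(xp - xt, yp - yt)
                for (xp, yp), (xt, yt) in zip(pred, gt))
    return total / len(pred)

pred = [(0.0, 0.0), (3.0, 4.0)]
gt   = [(0.0, 0.0), (0.0, 0.0)]
assert landmark_rmse(pred, gt) == 2.5  # (0 + 5) / 2
```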
\subsection{Performance Analysis}
To evaluate the effectiveness of our landmark predictor, we measured the RMSE of the trained MobileNetV2 model with and without landmark normalization. Table \textcolor{red}{\ref{table:one}} shows that normalization decreases the RMSE on both the validation and the test set, indicating more accurate and higher-quality predicted landmarks. Since we detect only one object per image, in cases where multiple dogs or cats were visible, predicting the landmarks of the wrong instance introduced outliers. We removed these outliers to keep the RMSE representative.
The results in Table \textcolor{red}{\ref{table:two}} show that both centering the objects and varying the size of the dataset have a discernible effect on the quality of the generated outputs, measured by their FID. Cropping the images decreased the FID substantially on all three variants of the dataset. On the 100\% subset of the cropped dataset, the FID reached as low as 14.1, which is 32\% lower than the uncropped version. Limiting the amount of data fed to the GAN model had a detrimental impact on the quality of the generated images, shown in Figure \textcolor{red}{\ref{fig:gan}} for the dataset variants. As Figure \textcolor{red}{\ref{fig:fid}} indicates, the FID is still slightly decreasing and has not converged yet, but due to computational constraints we stopped at approximately 5000 kimg.
As shown in Table \textcolor{red}{\ref{table:two}}, the size of the dataset has a direct impact on the accuracy of the ViT model. Data augmentation has a negligible influence on the results when the dataset is not prepared, and it even reduces accuracy on the 10\% subset due to the low quality and lack of detail of the generated images. Our method generates higher-quality instances and deals with the problem of abstraction in the images. Consequently, it increased accuracy on all three subsets of the dataset and improved the available benchmark by roughly 2\%.
\section{Conclusion}
\label{sec:con}
In this work, we improved fine-grained image classification by generating new samples with GANs. New data samples were obtained from StyleGAN2-ADA. We finetuned a custom MobileNetV2 model to predict animal facial landmarks, then cropped the Oxford-IIIT Pet dataset images accordingly. The images generated from the cropped dataset showed enhanced quality and diversity, and increasing the number and diversity of training images in turn increased the accuracy of the state-of-the-art ViT model.
\bibliographystyle{ieeetr}
\section{Conclusions}
We propose a video-based hand pose estimation model, temporal-aware self-supervised network (TASSN), to learn and infer 3D hand pose and mesh from RGB videos.
By leveraging temporal consistency between forward and reverse measurements, TASSN can be trained through self-supervised learning without explicit 3D annotations.
The experimental results show that TASSN achieves reasonably good results with performance comparable to state-of-the-art models trained with 3D ground truth.
The temporal consistency constraint proposed here offers a convenient and yet effective mechanism for training 3D pose prediction models. Although we illustrate the efficacy of the model without using 3D annotations, it can be used in conjunction with direct supervision with a small number of 3D labeled samples to improve accuracy.
\paragraph{Acknowledgement.} This work was supported in part by the Ministry of Science and Technology (MOST) under grants MOST 107-2628-E-009-007-MY3, MOST 109-2634-F-007-013, and MOST 109-2221-E-009-113-MY3, and by Qualcomm through a Taiwan University Research Collaboration Project.
\label{sec:conclu}
\section{Introduction}
\label{sec:intro}
3D hand estimation is an important research topic in computer vision due to a wide range of potential applications, such as sign language translation~\cite{zafrulla2011american}, robotics~\cite{antoshchuk2018gesture}, movement disorder detection and monitoring, and human-computer interaction (HCI)~\cite{lin2013airtouch, hung2016re,mikeicpr10}.
Depth sensors and RGB cameras are popular devices for collecting hand data.
However, depth sensors are not as widely available as RGB cameras and are much more expensive, which has limited the applicability of hand pose estimation methods developed upon depth images.
Recent research interests have shifted toward estimating 3D hand poses directly from RGB images by utilizing color, texture, and shape information contained in RGB images.
Some methods carried out 3D hand pose estimation from monocular RGB images~\cite{cai2018weakly,iqbal2018hand,zb2017hand}.
More recently, progress has been made on estimating 3D hand shape and mesh from RGB images~\cite{baek2019pushing,boukhayma20193d,ge2019handshapepose,zhang2019end,mm-hand, chen_wacv21_dataset,kong2020sia,kong2019adaptive,zhao2020image,zhao2020topk,zhao2020improved}.
Compared to poses, hand meshes provide richer information required by many immersive VR and AR applications.
Despite the advances, 3D hand pose estimation remains a challenging problem due to the lack of accurate, large-scale 3D pose annotations.
In this work, we develop a new approach to 3D hand pose and mesh estimation by taking the following two observations into account.
First, most existing methods rely on training data with 3D information, but capturing 3D information from 2D images is intrinsically difficult.
Although there are a few datasets providing annotated 3D hand joints, the amount is too small to train a robust hand pose estimator.
Second, most studies focus on hand pose estimation from a single image.
Nevertheless, important applications based on 3D hand poses, such as augmented reality (AR), virtual reality (VR), and sign language recognition, are usually carried out in videos.
According to the two observations, our approach exploits video temporal consistency to address the uncertainty caused by the lack of 3D joint annotations on training data.
Specifically, our approach, called {\em temporal-aware self-supervised network (TASSN)}, can learn and infer 3D hand poses without using annotated 3D training data.
Figure~\ref{fig:coreidea} shows the motivation and core idea of the proposed TASSN.
TASSN explores video information by embedding a temporal structure to extract spatio-temporal features.
We design a novel temporal self-consistency loss, which helps train the hand pose estimator without requiring annotated 3D training data.
In addition to poses, we estimate hand meshes since meshes provide salient evidence for pose inference.
With meshes, we can infer silhouettes to further regularize our model.
The main contributions of this work are given below:
\begin{enumerate}
\item We develop a temporal consistency loss and a reversed temporal measurement (RTM) technique for extracting spatio-temporal features.
To the best of our knowledge, this work makes the first attempt to estimate 3D hand poses and meshes without using 3D annotations.
\item An end-to-end trainable framework, named temporal-aware self-supervised networks (TASSN), is proposed to learn an estimator without using annotated 3D training data.
The learned estimator can jointly infer the 3D hand poses and meshes from video.
\item Our model achieves high accuracy with 3D prediction performance on par with state-of-the-art models trained with 3D ground truth.
\end{enumerate}
\section{Proposed Method}
\label{sec:method}
We aim to train a 3D hand pose estimator from videos without 3D hand joint labels.
To tackle the absence of 3D annotations, we exploit the temporal information in hand motion videos to address the ambiguity caused by the lack of 3D joint ground truth.
Specifically, we present a novel deep neural network, named temporal-aware self-supervised networks (TASSN).
By developing the temporal consistency loss on the estimated hand gestures in a video, TASSN can learn and infer 3D hand poses through self-supervised learning without using any $3$D annotations.
\subsection{Overview}
\label{sec:overview}
Given an RGB hand motion video $\bm{x}$ with $N$ frames, $\bm{x}=\{\bm{I}_{1},...,\bm{I}_N\}$, we aim at estimating $3$D hand poses in this video, where $\bm{I}_t\in\mathbb{R}^{3\times W\times H}$ is the $t$-th frame, and $W$ and $H$ are the frame width and height, respectively.
The $3$D hand pose at frame $t$, $\bm{p}_{t}\in \mathbb{R}^{3 \times K}$, is represented by a set of $K$ $3$D keypoint coordinates of the hand.
Figure~\ref{fig:2} illustrates the network architecture of TASSN.
Leveraging the temporal consistency properties of videos, the hand poses and meshes predicted in the forward and backward inference orders can perform mutual supervision.
Our model can be fine-tuned on any target dataset via this self-supervised learning, with temporal consistency serving as a substitute for hard-to-obtain 3D ground truth.
TASSN alleviates the burden of annotating 3D ground-truth of a dataset without significantly sacrificing model performance.
Recent studies~\cite{ge2019handshapepose,zhang2019end} show that training pose estimators with hand meshes improves the performance because hand meshes can act as intermediate guidance for hand pose prediction.
To this end, we propose a hand pose and mesh estimation (PME) module, which jointly estimates the 2D hand keypoint heatmaps, 3D hand poses and meshes from every two adjacent frames $\bm{I}_i$ and $\bm{I}_{i+1}$.
\subsection{Pose and Mesh Estimation Module}
\label{sec:pem}
The proposed PME module consists of four estimator sub-modules, including flow estimator, 2D keypoint heatmap estimator, 3D hand mesh estimator, and 3D hand pose estimator.
Given two consecutive frames as input, it estimates the 3D hand pose and mesh.
Figure~\ref{fig:pme} shows its network architecture.
\vspace{-0.1in}
{\flushleft {\bf Flow Estimator}}:
To capture temporal clues from a hand gesture video, we adopt FlowNet~\cite{ilg2017flownet} to estimate the optical flow $\bm{o}_{t+1} \in \mathbb{R}^{2\times W\times H}$ between two consecutive frames $\bm{I}_{t}$ and $\bm{I}_{t+1}$.
In forward inference, FlowNet computes $\bm{o}_{t+1}$, the motion from frame $\bm{I}_{t}$ to frame $\bm{I}_{t+1}$.
In backward inference, FlowNet computes the reverse motion.
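For illustration, the role of the estimated flow can be sketched as a backward warp that samples a frame at flow-displaced locations with bilinear interpolation; the NumPy function below is our own minimal sketch, not part of FlowNet:

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp `image` (H, W) by a dense flow field (2, H, W).

    flow[0] and flow[1] hold the x- and y-displacements, so the warped
    pixel at (y, x) is sampled from (y + flow[1], x + flow[0]) with
    bilinear interpolation and zero padding outside the image.
    """
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_x = xs + flow[0]
    src_y = ys + flow[1]
    x0 = np.floor(src_x).astype(int)
    y0 = np.floor(src_y).astype(int)
    wx = src_x - x0
    wy = src_y - y0
    out = np.zeros_like(image, dtype=float)
    # Accumulate the four bilinear taps, skipping out-of-bounds samples.
    for dy in (0, 1):
        for dx in (0, 1):
            xi = x0 + dx
            yi = y0 + dy
            valid = (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
            w = (wx if dx else 1 - wx) * (wy if dy else 1 - wy)
            out[valid] += w[valid] * image[yi[valid], xi[valid]]
    return out
```

A constant integer flow reduces to a pure image shift, which makes the convention easy to check.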
\vspace{-0.1in}
{\flushleft {\bf Heatmap Estimator}}:
Our heatmap estimator computes 2D hand keypoints and generates the features for the 3D hand pose and mesh estimators.
The estimated 2D keypoint heatmaps are denoted by $\bm{H} \in \mathbb{R}^{K\times W \times H}$, where $K$ represents the number of keypoints.
We adopt a two-stack hourglass network~\cite{newell2016stacked} to infer the hand keypoint heatmaps $\bm{H}$ and compute the features $\bm{F}$.
We concatenate $\bm{I}_{t+1}$, $\bm{o}_{t+1}$, and $\bm{H}_t$ as input to the stacked hourglass network, which produces heatmaps $\bm{H}_{t+1}$, as shown in Figure~\ref{fig:pme}.
The estimated $\bm{H}_{t+1}$ includes $K$ heatmaps $\{\bm{H}_{t+1}^k\in \mathbb{R}^{W \times H}\}_{k=1}^K$, where $\bm{H}_{t+1}^k$ expresses the confidence map of the location of the $k$th keypoint.
The ground truth heatmap $\bm{\bar{H}}^k_{t+1}$ is the Gaussian blur of the Dirac-$\delta$ distribution centered at the ground truth location of the $k$th keypoint.
The heatmap loss $\bm{\mathcal{L}}_{h}$ at frame $t$ is defined by
\begin{equation}
\label{eq:heatmap}
\bm{\mathcal{L}}_{h} =\frac{1}{K} \sum_{k = 1}^K||\bm{H}^k_t - \bm{\bar{H}}_t^k||_F^2.
\end{equation}
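For concreteness, the ground-truth heatmap construction and the loss above can be sketched as follows (NumPy; the function names `gaussian_heatmap` and `heatmap_loss` are ours, and the $\sigma$ value is an assumption):

```python
import numpy as np

def gaussian_heatmap(center, shape, sigma=2.0):
    """Ground-truth heatmap: a 2D Gaussian centred at the keypoint
    (the Gaussian blur of a Dirac delta at the keypoint location)."""
    H, W = shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def heatmap_loss(pred, gt):
    """Average squared Frobenius norm over the K keypoint heatmaps."""
    K = pred.shape[0]
    return sum(np.linalg.norm(pred[k] - gt[k], "fro") ** 2
               for k in range(K)) / K
```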
{\flushleft {\bf 3D Hand Mesh Estimator}}:
Our 3D hand mesh estimator is developed based on Chebyshev spectral graph convolution network (GCN)~\cite{ge2019handshapepose}, and it takes hand features $\bm{F}$ as input and infers the 3D hand mesh.
The output hand mesh $\bm{m}_{t}\in \mathbb{R}^{3 \times C}$ is represented by a set of $3$D mesh vertices, where $C$ is the number of vertices in a hand mesh.
To model hand mesh, we use an undirected graph $\bm{G}(\bm{V},\bm{E})$, where $\bm{V}$ and $\bm{E}$ are the vertex and edge sets, respectively.
The edge set $\bm{E}$ can be represented by an adjacency matrix $\bm{A}$, where $\bm{A}_{i,j} = 1$ if edge $e(i,j) \in \bm{E}$, otherwise $\bm{A}_{i,j} = 0$.
The normalized Laplacian matrix of $\bm{G}$ is obtained via $\bm{L} = \bm{I} - \bm{D}^{-\frac{1}{2}}\bm{A}\bm{D}^{-\frac{1}{2}}$, where $\bm{D}$ is the degree matrix and $\bm{I}$ is the identity matrix.
Since $\bm{L}$ is a positive semi-definite matrix~\cite{bruna2013spectral}, it can be decomposed as $\bm{L} = \bm{U\Lambda U}^T$, where $\bm{\Lambda} = diag(\lambda_1, \lambda_2,... , \lambda_C)$, and $C$ is the number of vertices in $\bm{G}$.
We follow the setting in~\cite{defferrard2016convolutional}, and set the convolution kernel to $\bm{\hat{\Lambda}} = diag(\sum_{i=0}^S\alpha_i\lambda_1^i, ... ,\sum_{i=0}^S\alpha_i\lambda_C^i)$, where $\alpha$ is the kernel parameter.
Since $\bm{U\hat{\Lambda} U}^T = \sum_{i=0}^{S}\alpha_i\bm{L}^i$, the convolutional operations in $\bm{G}$ can be calculated by $\bm{F}' = \sum_{i=0}^{S} \alpha_i\bm{L}^i\bm{F\theta}_i$,
where $\bm{F}\in \mathbb{R}^{N\times F_{\text{in}}}$ and $\bm{F}' \in \mathbb{R}^{N\times F_\text{out}}$ indicate the input and output features respectively, $S$ is a preset hyperparameter used to control the receptive field, and $\bm{\theta}_i \in \mathbb{R}^{F_\text{in} \times F_\text{out}}$ is a trainable parameter matrix used to control the number of output channels.
The Chebyshev polynomial is used to reduce the model complexity by approximating the convolution operations, leading to the output features $\bm{F}' = \sum_{i=0}^{S} \alpha_iT_i(\hat{\bm{L}})\bm{F\theta}_i$, where $T_i(\cdot)$ is the $i$-th Chebyshev polynomial and $\hat{\bm{L}} = 2\bm{L} / \lambda_{\max}- \bm{I}$ rescales the eigenvalues of $\bm{L}$ into $[-1, 1]$.
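The Chebyshev layer can be sketched in a few lines of NumPy; following common practice, the scalars $\alpha_i$ are absorbed into the trainable matrices $\bm{\theta}_i$, and the recurrence $T_i(\hat{\bm{L}}) = 2\hat{\bm{L}}\,T_{i-1}(\hat{\bm{L}}) - T_{i-2}(\hat{\bm{L}})$ avoids the eigendecomposition of $\bm{L}$ (function names are ours):

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for an adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    D = np.diag(d_inv_sqrt)
    return np.eye(len(A)) - D @ A @ D

def chebyshev_gcn(F, A, thetas, lam_max=2.0):
    """One Chebyshev spectral graph convolution layer.

    F:      (C, F_in) node features for the C mesh vertices
    thetas: list of S+1 weight matrices, each (F_in, F_out)
    The Chebyshev recurrence T_0 = I, T_1 = L_hat,
    T_i = 2 L_hat T_{i-1} - T_{i-2} replaces the eigendecomposition.
    """
    L = normalized_laplacian(A)
    L_hat = 2.0 * L / lam_max - np.eye(len(A))
    T_prev, T_curr = np.eye(len(A)), L_hat
    out = T_prev @ F @ thetas[0]
    for theta in thetas[1:]:
        out += T_curr @ F @ theta
        T_prev, T_curr = T_curr, 2.0 * L_hat @ T_curr - T_prev
    return out
```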
We adopt the scheme in~\cite{defferrard2016convolutional,ge2019handshapepose} to construct the hand mesh in a coarse-to-fine manner.
We use the multi-level clustering algorithm for coarsening the graph, and then store the graph at each level and the mapping between graph nodes in every two consecutive levels.
In forward inference, the GCN first up-samples the node features according to the stored mappings and graphs and then performs the graph convolutional operations.
\vspace{-0.1in}
{\flushleft {\bf Mesh Silhouette Constraint}}:
Without 3D mesh ground truth, the model tends to collapse onto arbitrary meshes as long as they are temporally consistent.
To avoid this issue, we introduce the mesh loss $\bm{\mathcal{L}}_{m}$, which measures the difference between the silhouette $\bm{s}_{t}$ of the predicted hand mesh and the ground-truth silhouette $\bm{\bar{s}}_t$ at frame $t$. The silhouette loss is defined by
\begin{equation}
\label{eq:silhouette}
\bm{\mathcal{L}}_{m} = ||\bm{s}_t - \bm{\bar{s}}_t||_F^2.
\end{equation}
To obtain $\bm{\bar{s}}_t$, we use GrabCut~\cite{rother2004grabcut} to estimate the hand silhouettes from the training images.
Some silhouettes estimated from training images are shown in Figure~\ref{fig:grabcut}.
The silhouette of our predicted hand mesh $\bm{s}_{t}$ is obtained by using the neural rendering approach in~\cite{kato2018neural}.
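A minimal sketch of the silhouette loss, assuming both silhouettes are given as $(H, W)$ masks with values in $[0, 1]$; the IoU helper is our own sanity check, not part of the training objective:

```python
import numpy as np

def silhouette_loss(pred, gt):
    """Squared Frobenius distance between the rendered mesh silhouette
    and the GrabCut silhouette (both (H, W), values in [0, 1])."""
    diff = pred.astype(float) - gt.astype(float)
    return float((diff ** 2).sum())

def silhouette_iou(pred, gt, thresh=0.5):
    """Intersection-over-union of the binarised masks, a common
    sanity check for silhouette quality."""
    p, g = pred >= thresh, gt >= thresh
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else 1.0
```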
\begin{figure}[t]
\begin{center}
\begin{tabular}{cccccc}
\includegraphics[width=0.065\textwidth]{SK_164.png}
\includegraphics[width=0.065\textwidth]{SK_538.png}
\includegraphics[width=0.065\textwidth]{SK_750.png}
\includegraphics[width=0.065\textwidth]{SK_957.png}
\includegraphics[width=0.065\textwidth]{SK_1186.png}
\includegraphics[width=0.065\textwidth]{SK_1306.png}
\end{tabular}
\begin{tabular}{ccccc}
\includegraphics[width=0.065\textwidth]{sk_164.png}
\includegraphics[width=0.065\textwidth]{sk_538.png}
\includegraphics[width=0.065\textwidth]{sk_750.png}
\includegraphics[width=0.065\textwidth]{sk_957.png}
\includegraphics[width=0.065\textwidth]{sk_1186.png}
\includegraphics[width=0.065\textwidth]{sk_1306.png}
\end{tabular}
\begin{tabular}{ccccc}
\includegraphics[width=0.065\textwidth]{0_webcam_1.jpg}
\includegraphics[width=0.065\textwidth]{0_webcam_2.jpg}
\includegraphics[width=0.065\textwidth]{0_webcam_3.jpg}
\includegraphics[width=0.065\textwidth]{0_webcam_4.jpg}
\includegraphics[width=0.065\textwidth]{30_webcam_2.jpg}
\includegraphics[width=0.065\textwidth]{1_webcam_1.jpg}
\end{tabular}
\begin{tabular}{ccccc}
\includegraphics[width=0.065\textwidth]{0_webcam_1.png}
\includegraphics[width=0.065\textwidth]{0_webcam_2.png}
\includegraphics[width=0.065\textwidth]{0_webcam_3.png}
\includegraphics[width=0.065\textwidth]{0_webcam_4.png}
\includegraphics[width=0.065\textwidth]{30_webcam_2.png}
\includegraphics[width=0.065\textwidth]{1_webcam_1.png}
\end{tabular}
\end{center}
\vspace{-0.1in}
\caption{Examples of our estimated silhouettes.
The first and third rows show the training images in STB and MHP datasets, respectively.
The second and the fourth rows show the estimated silhouettes by our method.
}
\label{fig:grabcut}
\end{figure}
{\flushleft {\bf 3D Hand Pose Estimator}}:
The proposed 3D pose estimator directly infers 3D hand keypoints $\bm{p}_t$ from the predicted hand mesh $\bm{m}_t$.
Taking the mesh as input, we adopt a network of two stacked GCNs, which has a similar structure to that of the 3D hand mesh estimator.
We add a pooling layer to each GCN to extract the pose features from the mesh.
Those pose features are then fed to two fully connected layers to regress the 3D hand pose $\bm{p}_t$.
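The pose head can be sketched as follows; the average pooling over vertices and the layer sizes are our assumptions, since the text only specifies a pooling layer followed by two fully connected layers:

```python
import numpy as np

def pose_from_mesh(vertex_feats, W1, b1, W2, b2, num_keypoints):
    """Regress the 3 x K hand pose from per-vertex mesh features.

    vertex_feats: (C, F) features for the C mesh vertices
    W1, b1, W2, b2: weights of the two fully connected layers
    """
    pooled = vertex_feats.mean(axis=0)           # pool over mesh vertices
    hidden = np.maximum(0.0, pooled @ W1 + b1)   # FC + ReLU
    return (hidden @ W2 + b2).reshape(3, num_keypoints)
```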
\subsection{Temporal Consistency Loss}
Due to the lack of 3D keypoint annotations, conventional supervised learning schemes no longer work in model training.
We propose a temporal consistency loss $\bm{\mathcal{L}}_{c}$ to solve this problem.
Figure~\ref{fig:2} shows the idea of our approach.
Given a video clip with $n$ frames, we feed every two adjacent frames $\{\bm{I}_{i}, \bm{I}_{i+1}\}^{t+n}_{i=t}$ to PME module for hand mesh and pose estimation, \ie, $\{\bm{p}_i$, $\bm{m}_i\}^{t+n}_{i=t}$.
TASSN analyzes the temporal information according to their relative input orders.
Thus, we can reverse the input order from $\{\bm{I}_{i}, \bm{I}_{i+1}\}$ to $\{\bm{I}_{i+1}, \bm{I}_{i}\}$ to infer the pose and mesh in $\bm{I}_{i}$ from $\bm{I}_{i+1}$.
With this reversed temporal measurement (RTM) technique, we can infer the hand pose and mesh from the reversed temporal order.
We denote the estimated pose and mesh in the reversed order as $\{\bm{\tilde{p}}_{i}$, $\bm{\tilde{m}}_{i}\}^{t+n}_{i=t}$.
As shown in Figure~\ref{fig:2}, the predictions of the PME module in forward and backward inference must be consistent with each other, since they correspond to the same hand mesh and pose at each frame.
The temporal consistency loss on hand pose $\bm{\mathcal{L}}_{c}^p$ and mesh $\bm{\mathcal{L}}_{c}^m$ can be computed by
\begin{equation}
\bm{\mathcal{L}}_{c}^p =\frac{1}{n} \sum_{i = t}^{t+n}||\bm{{p}}_{i}\ - \bm{\tilde{p}}_{i}||_F^2,
\end{equation}
\begin{equation}
\label{eq:temporal_mesh}
\bm{\mathcal{L}}_{c}^m =\frac{1}{n} \sum_{i = t}^{t+n}||\bm{{m}}_{i}\ - \bm{\tilde{m}}_{i}||_F^2.
\end{equation}
The temporal consistency loss $\bm{\mathcal{L}}_{c}$ is defined as the summation of
$\bm{\mathcal{L}}_{c}^m$ and $\bm{\mathcal{L}}_{c}^p$, \ie,
\begin{equation}
\label{eq:temporal}
\bm{\mathcal{L}}_{c} = \lambda^m\bm{\mathcal{L}}_{c}^m + \lambda^p\bm{\mathcal{L}}_{c}^p,
\end{equation}
where $\lambda^m$ and $\lambda^p$ are the weights of the corresponding losses.
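A minimal sketch of the temporal consistency loss, given the forward and backward predictions of a clip (array shapes follow the notation above; the function name is ours):

```python
import numpy as np

def temporal_consistency_loss(poses_f, poses_b, meshes_f, meshes_b,
                              lam_p=1.0, lam_m=1.0):
    """L_c = lam_m * L_c^m + lam_p * L_c^p.

    poses_f/poses_b:   (n, 3, K) poses from forward / backward inference
    meshes_f/meshes_b: (n, 3, C) meshes from forward / backward inference
    Each term averages the squared Frobenius distance over the clip.
    """
    n = poses_f.shape[0]
    l_p = sum(np.linalg.norm(poses_f[i] - poses_b[i], "fro") ** 2
              for i in range(n)) / n
    l_m = sum(np.linalg.norm(meshes_f[i] - meshes_b[i], "fro") ** 2
              for i in range(n)) / n
    return lam_m * l_m + lam_p * l_p
```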
\subsection{TASSN Training}
\label{sec:tassn_training}
Suppose we are given an unlabeled hand pose dataset $\bm{X}$ for training, which contains $M$ hand gesture videos, $\bm{X}=\{\bm{x}^{(i)}\}_{i=1}^M$, where video $\bm{x}^{(i)}=\{I_{1},...,I_N\}$ consists of $N$ frames.
We divide each training video into several video clips.
Each training video clip $\bm{v}$ is with $n$ frames, \ie, $\bm{v}= \{I_{t}, I_{t+1},...,I_{t+n}\}$.
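The clip construction can be sketched as follows; the text does not specify whether clips overlap, so non-overlapping consecutive clips are one plausible convention:

```python
def split_into_clips(num_frames, clip_len):
    """Split a video of `num_frames` frames into consecutive,
    non-overlapping clips of `clip_len` frames; a shorter trailing
    remainder is dropped."""
    return [list(range(start, start + clip_len))
            for start in range(0, num_frames - clip_len + 1, clip_len)]
```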
With the losses defined in Eq.~(\ref{eq:heatmap}), Eq.~(\ref{eq:silhouette}), and Eq.~(\ref{eq:temporal}), the objective for training the proposed TASSN is
\begin{equation}
\bm{\mathcal{L}}=\lambda_s\bm{\mathcal{L}}_{m}+\lambda_h\bm{\mathcal{L}}_{h}+\bm{\mathcal{L}}_{c},
\end{equation}
where $\lambda_s$ and $\lambda_h$ denote the weights of the loss $\bm{\mathcal{L}}_{m}$ and the loss $\bm{\mathcal{L}}_{h}$, respectively.
The details of parameter setting are given in the experiments.
\section{Related Work}
\label{sec:related}
\subsection{3D Hand Pose Estimation from Depth Images}
Since depth images contain surface geometry information of hands, they are widely used for hand pose estimation in the literature~\cite{wan2018dense,Yuan_2018_CVPR,deng2017hand3d,Wu18HandPose,ge2018hand,Ge_2018_ECCV,li2018point,Chen2018Generating,chen2018generating_arxiv}.
Most existing work adopts regression to fit the parameters of a deformed hand model~\cite{makris2015model,joseph2016fits,khamis2015learning,wan2018dense}.
Recent work \cite{ge2018hand,Ge_2018_ECCV} extracts depth image features and regresses the joints through PointNet \cite{Qi_2017_CVPR}.
Wu~\etal~\cite{Wu18HandPose} leverage the depth image as the intermediate guidance and develop an end-to-end training framework.
Despite their effectiveness, the aforementioned methods rely heavily on accurate depth maps, and are less practical in daily life since depth sensors are often unavailable due to their high cost.
\subsection{3D Hand Pose Estimation from RGB Images}
Owing to the wide accessibility of RGB cameras, estimating 3D hand poses from monocular images becomes an active research topic~\cite{cai2018weakly,iqbal2018hand,mueller2018ganerated,tekin2019h+,yang2018disentangling,zb2017hand} and significant improvement has been witnessed.
These methods use convolutional neural networks (CNN) to extract features from RGB images.
Zimmermann and Brox \cite{zb2017hand} feed these features to the 3D lift network and camera parameter estimation network for depth regression.
Building on Zimmermann and Brox's work, Iqbal~\etal \cite{iqbal2018hand} add depth maps as intermediate guidance while Cai~\etal~\cite{cai2018weakly} propose a weakly supervised approach to utilize depth maps for regularization.
However, these methods suffer from limited training data since 3D hand annotations are hard to acquire.
Also, they all disregard temporal information.
\subsection{3D Hand Mesh Estimation}
3D hand mesh estimation is an active research topic~\cite{ge2019handshapepose,boukhayma20193d,baek2019pushing,joo2018total,zhang2019end}.
Methods in~\cite{boukhayma20193d,baek2019pushing,zhang2019end} estimate hand meshes by using a pre-defined hand model, named MANO~\cite{romero2017embodied}.
Due to the high degree of freedom of hand gestures, hand meshes lie in a high dimensional space.
The MANO model serves as a kinematic and shape prior of meshes and can help reduce the dimension.
However, since MANO is a linear model, it is not able to capture the nonlinear transformation for hand meshes~\cite{ge2019handshapepose}.
Thus, mesh estimators based on MANO suffer from this issue.
On the other hand, Ge~\etal~\cite{ge2019handshapepose} regress 3D mesh vertices through a graph convolutional network (GCN) with down-sampling and up-sampling.
Their work achieves state-of-the-art performance, but it is trained on a dataset with 3D mesh ground truth, which is even more difficult to label than 3D joint annotations.
This drawback limits its applicability in practice.
\begin{figure*}[t]
\centering
\hspace{-1cm}
\includegraphics[width=0.93\textwidth]{fig2.pdf}
\caption{Overview of the proposed TASSN.
TASSN involves both forward and backward inference to utilize temporal information.
Namely, the hand poses estimated by forward and backward inference should be consistent.
Our hand pose estimator leverages this observation and can be trained by using self-supervised learning without the need of 3D hand joint labels.
Moreover, with the constraints of temporal consistency, either forward or backward inference can gain more accurate hand pose estimation results.
}
\label{fig:2}
\end{figure*}
\subsection{Self-supervised Learning}
Self-supervised learning~\cite{doersch2015unsupervised,pathak2017learning,Debidatta2019temporal} is a training methodology in which training data are automatically labeled by exploiting existing information within the data.
With this training scheme, manual annotations are not required for a given training set.
This scheme is especially beneficial when data labeling is difficult or the data size is exceedingly large.
Self-supervised learning has been applied to hand pose estimation.
Similar to ours, the method in~\cite{Debidatta2019temporal} adopts temporal cycle consistency for self-supervised learning.
However, this method uses soft nearest neighbors to solve the video alignment problem, which is not applicable to 3D pose and mesh estimation.
Simon~\etal~\cite{simon2017hand} adopt multi-view supervisory signals to regress 3D hand joint locations.
While their approach resolves the hand self-occlusion issue using multi-view images, it requires 3D joint annotations in the training stage, which are difficult and expensive to obtain for this task.
Another attempt of using self-supervised learning for hand pose estimation is presented in~\cite{wan2019self}, where an approach leveraging a massive amount of unlabeled depth images is proposed.
However, this approach may be limited due to the high variations of depth maps in diverse poses, scales, and sensing devices.
Instead of leveraging multi-view consistency or depth consistency, the proposed self-supervised scheme relies on temporal consistency, which is inexpensive to get and does not require 3D keypoint annotations.
\section{Experimental Results}
\begin{table}[t]
\centering
\hspace{-1cm}
\caption{$3$D hand pose estimation results on the STB and MHP datasets. $\uparrow$: higher is better; $\downarrow$: lower is better. The measuring unit of EPE is millimeters (mm).}
\begin{tabular}{lcccc}
\hline
&$\text{AUC}_\text{0-50}\uparrow$ &$\text{AUC}_{\text{20-50}}\uparrow$&EPE$\downarrow$\\%& {\scriptsize EPE median (mm)} $\downarrow$\\
\hline
STB Dataset&&&& \\
\hline
TASSN w/o $\bm{\mathcal{L}}_{c}$& 0.541 &0.735 &24.2\\% &20.4 \\
TASSN w/o $\bm{\mathcal{L}}_{c}^m$& 0.754 &0.936 &13.6\\% &9.96\\
TASSN & \bf{0.773} &\bf{0.972} &\bf{11.3}\\% &\bf{9.86}\\
\hline
\hline
MHP Dataset&&&& \\
\hline
TASSN w/o $\bm{\mathcal{L}}_{c}$ &0.492 &0.677 &28.2\\% &22.9 \\
TASSN w/o $\bm{\mathcal{L}}_{c}^m$ &0.665 &0.870 &17.5\\% &13.34\\
TASSN & \bf{0.689}&\bf{0.892}&\bf{16.2}\\%&\bf{12.2}\\
\hline
\end{tabular}
\label{table:3Dresult}
\end{table}
\subsection{Ablation Study of Temporal Consistency Losses}
To study the impact of the proposed temporal consistency constraint, we train and evaluate TASSN under the following three settings: 1) TASSN is trained without using temporal consistency loss $\bm{\mathcal{L}}_c$, i.e., without any temporal consistency constraint; 2) TASSN is trained without using temporal consistency loss of hand mesh $\bm{\mathcal{L}}_{c}^m$, i.e., with temporal 3D pose constraint but not 3D mesh constraint; 3) TASSN is trained with all the proposed loss functions.
Table~\ref{table:3Dresult} shows the evaluation results on two 3D hand pose estimation tasks under the three different settings described above.
The PCK curves corresponding to different settings are shown in Figure~\ref{fig:result_setting}.
\begin{figure*}[]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.42\textwidth]{STB.png}\label{fig:STB}&\quad\quad
\includegraphics[width=0.42\textwidth]{MHP.png}\label{fig:MHP}\\
(a)&\quad(b)\\
\end{tabular}
\caption{Performance in PCK on the (a) STB and (b) MHP datasets. TCL and TMCL denote the losses $\bm{\mathcal{L}}_{c}$ and $\bm{\mathcal{L}}_{c}^m$, respectively.
}
\label{fig:result_setting}
\end{figure*}
\begin{figure*}[]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.42\textwidth]{STB_stoa.png}\label{fig:STOA_STB}&\quad\quad
\includegraphics[width=0.42\textwidth]{MHP_stoa.png}\label{fig:STOA_MHP}\\
(a)&\quad(b)\\
\end{tabular}
\caption{Comparison with the state-of-the-arts. Results in $\text{AUC}_{\text{20-50}}$ on (a) the STB dataset and (b) the MHP dataset.
}
\label{fig:result_stoa}
\end{figure*}
We note the following two observations from the ablation study. First, the temporal consistency constraint is critical for 3D pose estimation accuracy. This is clearly illustrated by comparing the results between settings~$1$ and~$3$.
As shown in Figure~\ref{fig:result_setting}, TASSN trained with the temporal consistency loss $\bm{\mathcal{L}}_{c}$ (red curve, setting 3) outperforms the TASSN trained without using temporal consistency loss (blue curve, setting 1) by a large margin on both the STB and MHP datasets.
The quantitative results in Table~\ref{table:3Dresult} show that, on the STB dataset, $\text{AUC}_\text{0-50}$ and $\text{AUC}_\text{20-50}$ improve by 0.232 and 0.237, respectively, while EPE drops by 12.9mm.
A similar trend is also observed on the MHP dataset.
Second, imposing temporal mesh consistency constraints is beneficial for 3D pose estimation. This is illustrated by comparing the results between settings~$2$ and~$3$.
By using the temporal mesh consistency loss $\bm{\mathcal{L}}_c^m$, $\text{AUC}_\text{0-50}$ and $\text{AUC}_\text{20-50}$ improve by $0.019$ and $0.036$, respectively, while EPE drops by $2.3$mm on the STB dataset (Table~\ref{table:3Dresult}).
Results on the MHP dataset show the same trend: the test AUCs are boosted by including the temporal mesh consistency loss $\bm{\mathcal{L}}_c^m$.
This indicates that the temporal mesh consistency loss, as an intermediate constraint, facilitates the learning of the 3D hand pose estimator.
\label{sec:exper}
\begin{figure*}[t]
\begin{center}
\begin{tabular}{cccccccccc}
\includegraphics[width=0.09\textwidth]{SK_164.png}
\includegraphics[width=0.09\textwidth]{baseline_stb_164.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_stb_164.png}
\includegraphics[width=0.09\textwidth]{my_stb_164.png}
\includegraphics[width=0.09\textwidth]{gt_stb_164.png}
\includegraphics[width=0.09\textwidth]{SK_194.png}
\includegraphics[width=0.09\textwidth]{baseline_stb_194.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_stb_194.png}
\includegraphics[width=0.09\textwidth]{my_stb_194.png}
\includegraphics[width=0.09\textwidth]{gt_stb_194.png}
\end{tabular}
\\
\begin{tabular}{cccccccccc}
\includegraphics[width=0.09\textwidth]{SK_224.png}
\includegraphics[width=0.09\textwidth]{baseline_stb_224.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_stb_224.png}
\includegraphics[width=0.09\textwidth]{my_stb_224.png}
\includegraphics[width=0.09\textwidth]{gt_stb_224.png}
\includegraphics[width=0.09\textwidth]{SK_254.png}
\includegraphics[width=0.09\textwidth]{baseline_stb_254.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_stb_254.png}
\includegraphics[width=0.09\textwidth]{my_stb_254.png}
\includegraphics[width=0.09\textwidth]{gt_stb_254.png}
\end{tabular}
\\
\begin{tabular}{cccccccccc}
\includegraphics[width=0.09\textwidth]{SK_284.png}
\includegraphics[width=0.09\textwidth]{baseline_stb_284.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_stb_284.png}
\includegraphics[width=0.09\textwidth]{my_stb_284.png}
\includegraphics[width=0.09\textwidth]{gt_stb_284.png}
\includegraphics[width=0.09\textwidth]{SK_314.png}
\includegraphics[width=0.09\textwidth]{baseline_stb_314.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_stb_314.png}
\includegraphics[width=0.09\textwidth]{my_stb_314.png}
\includegraphics[width=0.09\textwidth]{gt_stb_314.png}
\end{tabular}
\\
\end{center}
\vspace{-.3cm}
\caption{Comparison among three different settings on the STB dataset. Columns 1 and 6 are RGB images. Columns 2 and 7 are the result by TASSN trained without temporal consistency loss. Columns 3 and 8 are the result by TASSN trained without temporal mesh consistency loss. Columns 4 and 9 are the result by TASSN. Columns 5 and 10 are the ground truth.}
\label{fig:STBcmp}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\begin{tabular}{cccccccccc}
\includegraphics[width=0.09\textwidth]{5_webcam_2.jpg}
\includegraphics[width=0.09\textwidth]{baseline_mhp_5_webcam_2.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_mhp_5_webcam_2.png}
\includegraphics[width=0.09\textwidth]{my_mhp_5_webcam_2.png}
\includegraphics[width=0.09\textwidth]{gt_mhp_5_webcam_2.png}
\includegraphics[width=0.09\textwidth]{10_webcam_2.jpg}
\includegraphics[width=0.09\textwidth]{baseline_mhp_10_webcam_2.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_mhp_10_webcam_2.png}
\includegraphics[width=0.09\textwidth]{my_mhp_10_webcam_2.png}
\includegraphics[width=0.09\textwidth]{gt_mhp_10_webcam_2.png}
\end{tabular}
\\
\begin{tabular}{cccccccccc}
\includegraphics[width=0.09\textwidth]{15_webcam_2.jpg}
\includegraphics[width=0.09\textwidth]{baseline_mhp_15_webcam_2.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_mhp_15_webcam_2.png}
\includegraphics[width=0.09\textwidth]{my_mhp_15_webcam_2.png}
\includegraphics[width=0.09\textwidth]{gt_mhp_15_webcam_2.png}
\includegraphics[width=0.09\textwidth]{20_webcam_2.jpg}
\includegraphics[width=0.09\textwidth]{baseline_mhp_20_webcam_2.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_mhp_20_webcam_2.png}
\includegraphics[width=0.09\textwidth]{my_mhp_20_webcam_2.png}
\includegraphics[width=0.09\textwidth]{gt_mhp_20_webcam_2.png}
\end{tabular}
\\
\begin{tabular}{cccccccccc}
\includegraphics[width=0.09\textwidth]{26_webcam_2.jpg}
\includegraphics[width=0.09\textwidth]{baseline_mhp_26_webcam_2.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_mhp_26_webcam_2.png}
\includegraphics[width=0.09\textwidth]{my_mhp_26_webcam_2.png}
\includegraphics[width=0.09\textwidth]{gt_mhp_26_webcam_2.png}
\includegraphics[width=0.09\textwidth]{27_webcam_2.jpg}
\includegraphics[width=0.09\textwidth]{baseline_mhp_27_webcam_2.png}
\includegraphics[width=0.09\textwidth]{nomeshloss_mhp_27_webcam_2.png}
\includegraphics[width=0.09\textwidth]{my_mhp_27_webcam_2.png}
\includegraphics[width=0.09\textwidth]{gt_mhp_27_webcam_2.png}
\end{tabular}
\\
\end{center}
\vspace{-.3cm}
\caption{Comparison among three different settings on the MHP dataset. Columns 1 and 6 are RGB images. Columns 2 and 7 are the result by TASSN trained without temporal consistency loss. Columns 3 and 8 are the result by TASSN trained without temporal mesh consistency loss. Columns 4 and 9 are the result by TASSN. Columns 5 and 10 are the ground truth.
}
\vspace{-.3cm}
\label{fig:MHPcmp}
\end{figure*}
In addition to the quantitative analysis, Figure~\ref{fig:STBcmp} and Figure~\ref{fig:MHPcmp} display some estimated 3D hand poses for visual comparison among these settings on the STB and MHP datasets, respectively. We can see that TASSN, when trained with temporal consistency loss, can produce 3D hand pose estimations highly similar to the ground truth in diverse poses.
It is worth noting that our GCN model is initialized with model~\cite{ge2019handshapepose} pretrained on the STB dataset. Our results on STB demonstrate that the temporal consistency is critical to enforce the 3D constraints, without which 3D prediction accuracy drops substantially (Table~\ref{table:3Dresult}). Moreover, our method generalizes well on other target datasets, e.g., the MHP dataset, where 3D annotations are not used in either model initialization or training. The pose categories and capturing environments are quite different between the two datasets (Figure~\ref{fig:dataset}). The effectiveness of our method on the MHP dataset can only be attributed to the temporal consistency constraint (Figure~\ref{fig:result_setting}).
\subsection{Comparison with the State-of-the-art Methods}
The state-of-the-art methods on both the STB and MHP datasets are trained with 3D annotations, while our method is not. Therefore, we take these methods as the upper bound of our method, and evaluate the performance gaps between these methods and ours.
For the STB dataset, we select six state-of-the-art methods for comparison.
The selected methods include PSO~\cite{boukhayma20193d}, ICPPSO~\cite{Chen2018Generating}, CHPR~\cite{zhang20163d}, the method by~Iqbal~\etal~\cite{iqbal2018hand}, Cai~\etal~\cite{cai2018weakly} and the approach by Zimmermann and Brox~\cite{zb2017hand}.
For the MHP dataset, we select two state-of-the-art methods for comparison, including the approach by~Cai~\etal~\cite{cai2018weakly} and the method by Chen~\etal~\cite{chen2020dggan}.
Figure~\ref{fig:result_stoa}(a) and Figure~\ref{fig:result_stoa}(b) show the comparison results on STB and MHP datasets, respectively.
As expected, TASSN exhibits a performance gap relative to current state-of-the-art methods on both datasets due to the lack of 3D annotations. However, the performance gaps are relatively small.
On the STB dataset, as shown in Figure~\ref{fig:result_stoa}(a), our method even outperforms some of the methods trained with full 3D annotations.
Altogether, these results illustrate that a 3D pose estimator can be trained without using 3D annotations.
Estimating hand pose and mesh from single frames is challenging due to the ambiguities caused by the missing depth information and the high flexibility of joints. These challenges can be partly mitigated by utilizing information from videos, in which the pose and mesh are highly constrained by adjacent frames. Temporal information offers an alternative way of enforcing constraints on 3D models for pose and mesh estimation.
\section{Experiments Setting}
\label{sec:setting}
\subsection{Datasets for Evaluation}
We evaluate our approach on two hand pose datasets, Stereo Tracking Benchmark Dataset (STB)~\cite{zhang20163d} and Multi-view 3D Hand Pose dataset (MHP)~\cite{gomez2017large}.
These two datasets include real hand video sequences performed by different subjects, and 3D hand keypoint annotations are provided for each sequence.
For the STB dataset, we adopt its SK subset for training and evaluation.
This subset contains $6$ hand videos, each of which has $1,500$ frames.
Following the train-validation split setting used in \cite{ge2019handshapepose}, we use the first hand video as the validation set and the remaining videos for training.
The MHP dataset includes $21$ hand motion videos.
Each video provides hand color images and different kinds of annotations for each sample, including the bounding box and the 2D and 3D locations of the hand keypoints.
The following scheme of data pre-processing is applied to both STB and MHP datasets.
We crop the hand from the original image using the hand center and scale, so that the hand is located at the center of the cropped image and the crop covers the whole hand.
We then resize the cropped image to $256\times256$.
As mentioned in~\cite{cai2018weakly,zb2017hand}, the STB and MHP datasets use the palm center as the center of the hand.
We use the mechanism introduced by~\cite{cai2018weakly} to change the center of the hand from the palm center to the wrist joint.
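The cropping step described above can be sketched as follows. This is an illustrative implementation, not the authors' code: the function name is ours, and the nearest-neighbour resize stands in for whatever interpolation the actual pipeline uses.

```python
import numpy as np

def crop_and_resize(image, center, scale, out_size=256):
    """Crop a square patch centered on the hand and resize it.

    image:  H x W x 3 array; center: (x, y) hand center in pixels;
    scale:  side length (pixels) of the square covering the whole hand.
    """
    half = int(round(scale / 2))
    x, y = int(round(center[0])), int(round(center[1]))
    # Pad with edge values so the crop window never leaves the image.
    pad = half
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    crop = padded[y:y + 2 * half, x:x + 2 * half]
    # Nearest-neighbour resize to out_size x out_size (numpy-only).
    rows = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[rows][:, cols]
```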
\subsection{Metric}
We follow the setting adopted in previous work~\cite{zb2017hand,ge2019handshapepose} and use {\em average End-Point-Error} (EPE) and {\em Area Under the Curve} (AUC) on the {\em Percentage of Correct Keypoints} (PCK) between threshold $20$ millimeter (mm) and $50$mm ($\text{AUC}_\text{20-50}$) as the two metrics.
Besides, we adopt the AUC on PCK between thresholds of $0$mm and $50$mm ($\text{AUC}_\text{0-50}$) as a third metric for evaluating 3D hand pose estimation performance.
The measuring unit of EPE is millimeter (mm).
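As a concrete reading of these metrics, the sketch below computes EPE and the thresholded PCK/AUC from per-keypoint errors. The function names are ours, and the trapezoidal integration over a 100-point threshold grid is an assumption about how the PCK curve is sampled.

```python
import numpy as np

def epe(pred, gt):
    """Average End-Point-Error: mean Euclidean distance per keypoint (mm)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def pck(errors_mm, threshold):
    """Percentage of Correct Keypoints: fraction of errors below threshold."""
    return float(np.mean(errors_mm < threshold))

def auc_pck(errors_mm, t_min=20.0, t_max=50.0, steps=100):
    """Area under the PCK curve, normalised over [t_min, t_max] mm,
    i.e. AUC_20-50 with the defaults, AUC_0-50 with t_min=0."""
    t = np.linspace(t_min, t_max, steps)
    curve = np.array([pck(errors_mm, x) for x in t])
    # trapezoidal rule, normalised so a perfect estimator scores 1
    area = float(np.sum((curve[:-1] + curve[1:]) / 2.0 * np.diff(t)))
    return area / (t_max - t_min)
```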
\subsection{Implementation Details}
\label{sec:traing}
We implement our TASSN by using PyTorch.
In the training phase, we set the batch size to $24$ and the initial learning rate to $10^{-5}$.
We train and evaluate our TASSN by using a machine with four GeForce GTX 1080Ti GPUs.
Since training a network with multiple modules end-to-end from scratch is very difficult, we train our TASSN with a three-stage procedure.
In the first stage, we train the heatmap estimator with the loss~$\bm{\mathcal{L}_h}$.
In the second stage, the GCN hand mesh estimator is initialized by using the pre-trained model provided by~\cite{ge2019handshapepose}.
We jointly fine-tune heatmap and hand mesh estimator with the losses $\bm{\mathcal{L}_h}$ and $\bm{\mathcal{L}_m}$ on the target dataset without 3D supervision.
In the final stage, we conduct end-to-end training of our TASSN: the weights of the heatmap, GCN hand mesh, and 3D pose estimators are jointly fine-tuned.
In this stage, we set $\lambda_s = 0.1$, $\lambda_h = 1$, and $\lambda_c^p = \lambda_c^m = 10$.
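The final-stage objective can be summarised as a weighted sum of the sub-module losses. The $\lambda$ values below are the ones quoted above; how $\bm{\mathcal{L}_m}$ is weighted is not stated in the text, so giving it unit weight here is our assumption.

```python
def final_stage_loss(l_h, l_m, l_s, l_c_p, l_c_m,
                     lam_h=1.0, lam_s=0.1, lam_c=10.0):
    """Weighted sum of sub-module losses for end-to-end fine-tuning.

    lam_h, lam_s, lam_c are the lambda_h, lambda_s, lambda_c^{p,m}
    values quoted in the text; the unit weight on l_m is assumed.
    """
    return lam_h * l_h + l_m + lam_s * l_s + lam_c * (l_c_p + l_c_m)
```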
\section{Introduction}
Simple di- and triatomic molecules and ions are fundamental constituents of interstellar chemistry, which eventually leads to the formation of complex molecules. Many of these species have ground state transitions at submillimeter and THz wavelengths, and are therefore difficult or impossible to observe from the ground, yet they constitute the building blocks of chemistry and are fundamental to its understanding in various environments. Among those environments are spiral arm clouds, located in the plane of the Galactic disk, through which the line-of-sight toward a strong continuum source passes by chance. This setup allows sensitive absorption measurements, and such clouds have been observed by this method against the Sgr B2, W31c, W49, W51 and CasA millimeter continuum sources using molecular species such as CO, HCN, HCO$^{+}$, CS, CN, SO, and c-C$_{3}$H$_{2}$ \citep[e.g.][]{Greaves1994, Tieftrunk1994, Greaves1996, Menten2010, Gerin2010, Ossenkopf2010}. The results demonstrate that spiral arm clouds have low gas density and low excitation temperatures, and represent diffuse and translucent clouds.
One of the best sources for these absorption studies is Sgr B2, located close to the Galactic center, $\sim$ 100 pc from Sgr A$^{\star}$ in projection, and one of the strongest submillimeter sources in the Galaxy \citep[e.g.][]{Pierce-Price2000}. The dense cores Sgr~B2(N) and Sgr~B2(M) within the cloud are at different evolutionary stages, and constitute well-studied massive star forming regions in our Galaxy. The flux ratio of the continuum between Sgr~B2(M) and Sgr~B2(N) is less than unity at 1 mm and rises at shorter wavelengths, so that Sgr~B2(M) dominates above $\sim$ 500 GHz \citep{Goldsmith1990, Lis1991}. Sgr~B2(M) also shows fewer molecular emission lines than Sgr~B2(N) \citep{Nummelin1998}, hence suffers less line confusion, and is therefore better suited for absorption studies. The line-of-sight toward the Sgr~B2(M) continuum passes \new{almost all the way to the center of our Galaxy}, providing a more complete census of the physical and chemical conditions toward the Galactic center clouds and all spiral arm clouds simultaneously. HIFI, the Heterodyne Instrument for the Far-Infrared \citep{deGraauw2010} on board the {\it Herschel} Space Observatory \citep{Pilbratt2010}, is an ideal instrument for making these observations.
\section{Observations}
Full spectral
scans of HIFI bands \new{1a, 1b, and 4b} towards Sgr~B2(M) ($\alpha_{J2000} = 17^h47^m20.35^s$ and $\delta_{J2000} =
-28^{\circ}23'03.0''$) were carried out on
March 1, 2, and 5, 2010, respectively, providing coverage of the frequency ranges 479 through
637 GHz \new{and 1051 through 1121 GHz}.
HIFI Spectral Scans are carried out in Dual Beam Switch (DBS)
mode, where the DBS reference beams lie approximately 3$^{\prime}$
apart. The wide band spectrometer (WBS) is used as a back-end,
providing a spectral resolution of 1.1 MHz over a 4-GHz-wide
Intermediate Frequency (IF) band. A HIFI Spectral Scan consists
of a number of double-sideband (DSB) spectra, tuned at different Local
Oscillator (LO) frequencies, where the spacing between one LO setting
and the next is determined by the ``redundancy'' chosen by the
observer \citep{Comito2002}. The molecular spectrum of Sgr~B2(M) in
\new{band 1a and 4b has been scanned
with a redundancy of 4, that of band 1b with a redundancy of 8,} which means that every
frequency has been observed respectively 4 and 8 times in each
sideband. Multiple observations of the same frequency at different LO
tunings are necessary to separate the lower-sideband (LSB) from the
upper-sideband (USB) spectrum.
The data have been calibrated through the standard pipeline released with
version 2.9 of HIPE \citep{Ott2010}, and subsequently exported to
CLASS\footnote{{\it Continuum and Line Analysis Single-dish Software},
distributed with the GILDAS software, see http://www.iram.fr/IRAMFR/GILDAS.} using the HiClass task within HIPE. Deconvolution of
the DSB data into single-sideband (SSB) format has been performed on
CLASS. All the HIFI data presented here, spectral features \emph{and}
continuum emission, are deconvolved SSB spectra. Although both
horizontal (H) and vertical (V) polarizations have been obtained, we will show
only H-polarization spectra. The intensity scale
is main-beam temperature, and results from applying a beam efficiency
correction of \new{0.69 for band 1a, 0.68 for band 1b, and 0.669 for
band 4b} \citep{Roelfsema2010}.
\section{Spectroscopy of H$_2$O$^+$}\label{sec:spec}
Removal of an electron from oxidane, H$_2$O, also known
as water, yields oxidaniumyl, H$_2$O$^+$. Its bond lengths and
bond angle are slightly larger than \new{those of} H$_2$O, see e.g.
\citet{H2O+_LMR_1986}. Quantum-chemical calculations
\citep{H2O+_ai_1989} yielded a ground state dipole moment of
$\sim$2.4~D, considerably larger than in H$_2$O.
The transitions are of {\it b}-type, meaning
$\Delta K_a \equiv \Delta K_c \equiv 1 \pmod{2}$.
The electronic ground state changes from $^1A_1$ in the neutral
to $^2B_1$ in the cation which leads to a reversal of the
{\it ortho} and {\it para} levels with respect to water.
$K_a + K_c$ is even and odd for {\it ortho}- and
{\it para}-H$_2$O$^+$, respectively.
The {\it para} levels do not show any hyperfine splitting
while the {\it ortho} levels are split into three because
of the $^1$H hyperfine structure. The strong lines have
$\Delta F = \Delta J = \Delta N$, i.e. they do not involve a
spin-flip. At low quantum numbers spin-flipping transitions
have appreciable intensity.
Further details of the spectroscopy of H$_2$O$^+$ are discussed
in the Appendix. Table~\ref{lab-data} provides calculated rest
frequencies for the two rotational transitions discussed
in the present investigation. Fig.~\ref{levels} shows the lowest
energy levels of H$_2$O$^+$ with allowed transitions.
\section{Results}
Determining the opacities and thus column densities of absorption lines is traditionally done using the line-to-continuum ratio. In the present case, this is not straightforward, because the \mbox{\emph{ortho}-H$_2$O$^+$}-line has hyperfine structure with closely spaced components (Fig.~\ref{fig1}), which distorts the simple correspondence of line-to-continuum ratio with column density at a given velocity, and also because the line background of the Sgr~B2(M) core cannot necessarily be neglected. We therefore fitted the lines using the XCLASS\footnote{We made use of the
myXCLASS program (https://www.astro.uni-koeln.de/projects/schilke/XCLASS), which
accesses the CDMS (\citealp{CDMS_1,CDMS_2};
http://www.cdms.de) and JPL (\citealp{JPL-catalog};
http://spec.jpl.nasa.gov) molecular databases.} program, which performs an LTE fit using the molecular data discussed in Sect.~\ref{sec:spec}, with the automated fitting routine provided by MAGIX\footnote{https://www.astro.uni-koeln.de/projects/schilke/MAGIX}. For all velocity components, an excitation temperature of 2.7~K was assumed. \new{For molecules that react strongly with H$_2$ \citep[see the discussion in][]{Black1998, Staeuber2009}, the collisional processes in diffuse gas are unimportant relative to radiative excitation in controlling the excitation temperatures of observed transitions of species with high dipole moments such as \mbox{H$_2$O$^+$}, since inelastic collisions with H, H$_2$ and electrons compete with reactive collisions. The excitation temperature employed here may still not be entirely correct, since particularly at 1115 GHz the general FIR background of the Galaxy contributes a radiation temperature of 4.8~K even in the vicinity of the Sun, and for the spiral arm clouds one expects similar or slightly higher values \citep{Wright1991, Paladini2007}. Our analysis is not affected, however, if the excitation temperature is dominated by this radiation field and stays significantly smaller than the upper level energies of the \textit{para-} and \mbox{\emph{ortho}-H$_2$O$^+$}{} ground state lines, $h\nu/k$ = 29 and 54 K, respectively, which is a well justified assumption.}
The maximum opacities of the \mbox{\emph{ortho}-H$_2$O$^+$}{} line are about 2, so the lines are only moderately opaque.
For \mbox{\emph{para}-H$_2$O$^+$}{}, only the 607~GHz line was used to perform the fit, since this is the strongest and least contaminated para-line, but it can be seen from Fig.~\ref{fig2} that predictions from this \new{reproduce} the other para lines rather well. To predict contamination, we used the fit of all species in Sgr~B2(M) (Qin et al, in prep) as background. This is a preliminary version of the fit, and we cannot exclude the existence of additional contamination by unknown lines. Thus we estimate the error of the fit due to uncertainties of this nature very conservatively to be 20\%, but in the presence of strong unknown lines it could be larger at certain frequencies. This is particularly true for the possible D$_2$O contamination of the \mbox{\emph{para}-H$_2$O$^+$}{}-line at 607 GHz. The relatively small variation of the \textit{ortho}/\textit{para} ratio in the absorption cloud range (see below) argues against contamination by a strong unknown line however. All \mbox{\emph{para}-H$_2$O$^+$}{} components are optically thin. \new{In the following, we make the assumption that the excitation of all upper levels can be approximated as LTE with an excitation temperature of 2.7~K, and that thus the \textrm{ortho-} and \mbox{\emph{para}-H$_2$O$^+$}{} column densities can be measured by observations of the ground state. This assumption is reasonable for the spiral arm clouds, but most likely violated for the clouds associated with the Sgr~B2(M) envelope (see below).}
It appears that the absorption lines of different species toward Sgr~B2(M) cannot be fitted with a unique set of physical components of fixed velocity and velocity width. This probably reflects the different origins of the species in atomic, low density molecular and high density molecular gas with a different velocity structure. A detailed study of the different distributions will have to await the complete data set of the survey. Apart from that, particularly in species which have hyperfine structure and are very abundant, that is in species which absorb at all velocities, the decomposition into basically Gaussian components would not necessarily be unique. The fit rather represents a deconvolution of the hyperfine pattern. We therefore prefer to present the results as depicted in Fig.~\ref{fig3}, as column densities/velocity interval and ratio as a function of velocity, as a sum over the components. The component of the Sgr~B2(M) envelope, which is located at 64 km s$^{-1}$, is most uncertain, because here the \mbox{\emph{para}-H$_2$O$^+$}{} 607 GHz line is most contaminated, and there the assumption of \new{uniformly low} excitation temperature \new{for all levels} is most likely to be violated, since this is warm and dense gas \new{with a strong FIR field which may dominate the excitation}.
Since \mbox{H$_2$O$^+$}{} is expected to originate in mostly atomic gas at the edge of diffuse and translucent clouds, giving abundances relative to H$_2$ or H alone is not meaningful, because the species exists in neither purely atomic nor purely molecular gas. \citet{Menten2010} and \citet{Qin2010} give column densities of H$_2$ of typically a few times $10^{21}$ cm$^{-2}$ and H column densities of typically 10$^{20}$ cm$^{-2}$, so the average \mbox{H$_2$O$^+$}{} abundance relative to the number of H nuclei is a few times $10^{-8}$, but could be much higher locally. The o/p ratio was calculated from the column densities, and has a mean of 4.8 $\pm$ 0.5 for the spiral arm clouds, with little variation.
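The abundance estimate above is simple arithmetic over the quoted column densities. In the sketch below, the H$_2$ and H columns are the typical values cited in the text, while the \mbox{H$_2$O$^+$}{} column of $2\times10^{14}$ cm$^{-2}$ is a hypothetical value chosen only to illustrate that such columns yield an abundance of a few $\times 10^{-8}$.

```python
def abundance_per_h_nucleus(n_x, n_h2, n_h):
    """Abundance of species X relative to the number of H nuclei:
    N(X) / (2 N(H2) + N(H)); each H2 carries two H nuclei."""
    return n_x / (2.0 * n_h2 + n_h)

# hypothetical N(H2O+) = 2e14 cm^-2 with the typical columns from the text
x = abundance_per_h_nucleus(2e14, 3e21, 1e20)  # a few times 1e-8
```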
We calculated the nuclear spin temperature using
\begin{equation}
\frac{N(\mbox{\emph{ortho}-H$_2$O$^+$})}{N(\mbox{\emph{para}-H$_2$O$^+$})} = e^{\Delta E/kT_{\rm nuclear spin}} \frac{Q_o}{Q_p}
\end{equation}
with $\Delta E = 30.1$~K, and $Q_{o,p}$ the partition functions of \mbox{\emph{ortho}-H$_2$O$^+$}{} and \mbox{\emph{para}-H$_2$O$^+$}{}, respectively, given in the Appendix. The $Q_{o,p}$ include the rotational and nuclear spin parts, and are referenced to a ground state energy of zero for both. Toward the spiral arm clouds, we find the mean nuclear spin temperature to be almost constant at 21 $\pm 2$~K. In this temperature range, the \new{ratio of the} partition functions \new{is} almost equal to one, and the temperature is mostly determined by the exponential factor.
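Since the partition-function ratio is close to unity here, the equation above inverts to $T = \Delta E / \ln(\mathrm{o/p})$. The short check below (our own helper, not from the paper) shows that the measured mean ratio of 4.8 corresponds to a spin temperature of about 19~K, consistent with the quoted 21 $\pm$ 2~K.

```python
import math

def nuclear_spin_temperature(op_ratio, delta_e_k=30.1, q_ratio=1.0):
    """Invert N_o/N_p = (Q_o/Q_p) exp(Delta E / k T) for T (kelvin),
    with the quoted approximation Q_o/Q_p ~ 1."""
    return delta_e_k / math.log(op_ratio / q_ratio)

# mean o/p of 4.8 -> about 19 K
```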
In the velocity range from about 0 to 60 km s$^{-1}$, the o/p ratio drops to unity, much below the high-temperature limit of 3. This does not reflect any thermal equilibrium and cannot be explained by any known formation mechanism: at low temperatures, the o/p ratio is extremely high, since all the molecules are in their lowest (\textit{ortho}) state, and at high temperatures the limit 3 is reached, given by the nuclear spin statistics. We can only speculate about the cause of this unexpected result: it could be either a measurement error, or a real effect. If it is a measurement error, either the \mbox{\emph{ortho}-H$_2$O$^+$}{} column density is underestimated, which could be caused by contamination from a strong emission line, or the \mbox{\emph{para}-H$_2$O$^+$}{} column density is overestimated, which could be due to contamination from another absorption line, or the excitation temperatures deviate from the 2.7~K we assumed in such a way to produce this effect. \new{The latter could e.g.\ be produced by a bright FIR field at the location of the clouds.} Taking the measured ratio at face value \new{as the o/p ratio} would imply that a process exists which produces \mbox{\emph{ortho}-H$_2$O$^+$}\ and \mbox{\emph{para}-H$_2$O$^+$}\ in equal amounts.
\begin{figure}
\centering
\includegraphics[bb=-50 100 600 600, width=0.75\columnwidth, angle=-90]{15087fg1.ps}
\caption{\mbox{\emph{ortho}-H$_2$O$^+$}{}, as already shown in \citet{Ossenkopf2010}. The data are shown in black, the fit in red, in blue the hfs pattern is depicted, and in green the predicted contamination by other molecules.}
\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[bb=-50 100 600 600, width=0.75\columnwidth, angle=-90]{15087fg2.ps}
\caption{\mbox{\emph{para}-H$_2$O$^+$}{} lines in Sgr~B2, with the predicted contamination by other molecules in green, as in Fig.~\ref{fig1}. The position of the D$_2$O ground state line is indicated.}
\label{fig2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{15087fg3.eps}
\caption{Column density distribution of \mbox{\emph{ortho}-H$_2$O$^+$}\ and \mbox{\emph{para}-H$_2$O$^+$}\ (upper panel), o/p ratio (central panel) and $T_{\rm nuclear spin}$ distribution (lower panel).}
\label{fig3}
\end{figure}
\section{Discussion}
Since there are no fast radiative transitions between \mbox{\emph{para}-H$_2$O$^+$}{} and \mbox{\emph{ortho}-H$_2$O$^+$}, the derived nuclear spin temperature is thought to be determined by chemical processes, either at formation or afterwards, because \textit{para}--\textit{ortho} transformation can only occur accompanied by a proton exchange of one of the hydrogen nuclei. The only available kinetic temperature estimates for these clouds are from \citet{Tieftrunk1994}, based on NH$_3$, and show values of 35$\pm$5~K in the --100 km s$^{-1}$ component (which is believed to originate in the Galactic center), below 20~K in the velocity range of the spiral arm clouds, while the --10 to 20~km s$^{-1}$ component (also from the Galactic center) has temperatures exceeding 100~K \citep{Gardner1988}, and is \new{most likely} shock heated. There is no similarity to the more or less constant nuclear spin temperature of 21~K we derive for \mbox{H$_2$O$^+$}{} formation in this range, which suggests that \mbox{H$_2$O$^+$}{} and NH$_3$ trace very different gas components: \mbox{H$_2$O$^+$}{} the warm outer photon dominated edge of clouds, NH$_3$ either the shielded and cold interior, or hot shocked gas, which also seems to be devoid of \mbox{H$_2$O$^+$}. This picture of \mbox{H$_2$O$^+$}{} formation at the edges of clouds, actually in regions where the gas is mostly atomic, is supported by \citet{Gerin2010} and \citet{Neufeld2010} based on studies of OH$+$ and \mbox{H$_2$O$^+$}{} with \textit{Herschel}/HIFI. PDR models of diffuse clouds \citep{LePetit2006} predict temperatures of 50 to 100~K for the formation region of \mbox{H$_2$O$^+$}{} in diffuse clouds (A$_V \approx$ 1-3) with densities of about 10$^2$ to 10$^3$ cm$^{-3}$ and radiation fields of 1-3 times ambient.
The relationship between nuclear spin temperature and formation or ambient temperature needs to be discussed in somewhat more detail \citep[see][for a discussion of this ratio for molecular hydrogen]{Flower2006}. H$_2$O$^+$ is a very reactive ion; in particular, it reacts exothermically with H$_2$ to form H$_3$O$^+$, so the reaction \mbox{\emph{para}-H$_2$O$^+$}\ + H$_2 \rightarrow \mbox{\emph{ortho}-H$_2$O$^+$} +$ H$_2$, which would equilibrate the o/p ratio to the kinetic gas temperature, might not be relevant. Whether an equivalent reaction with atomic hydrogen (\mbox{\emph{para}-H$_2$O$^+$}\ + H $\rightarrow \mbox{\emph{ortho}-H$_2$O$^+$} +$ H) can occur is unknown. If it does, it will most likely equilibrate the ratio to the current gas temperature; if not, the observed o/p ratio is the one established at formation.
How this depends on the kinetic temperature at formation is not well understood. \mbox{H$_2$O$^+$}{} is produced by the highly exothermic reaction of OH$^+$ with H$_2$, so the variables determining the \mbox{H$_2$O$^+$}{} o/p ratio are
\begin{enumerate}
\item the o/p ratio of H$_2$,
\item how much and in which way the excess energy of the exothermic reaction is available for o/p conversion of \mbox{H$_2$O$^+$},
\item the temperature of the gas at the time of formation, typically above 50-100 K based on PDR models.
\end{enumerate}
The latter two processes would push the o/p ratio down toward 3:1, the high temperature value, which means toward a spin temperature higher than the 21~K we measure. \new{The influence of the o/p ratio of H$_2$ is hard to assess: if the reaction proceeds by way of a collision complex, then the nuclear spin of H$_2$ will have an effect on the nuclear spin of product \mbox{H$_2$O$^+$}{}, but not if the reaction proceeds through direct atom transfer.} The measured o/p of H$_2$ in diffuse clouds is about unity \citep{Savage1977, Rachford2009}. Clearly, this is an interesting area of molecular physics that needs further study. From our observations, it seems that we see an excess of \mbox{\emph{ortho}-H$_2$O$^+$}{} relative to what one would expect based on the formation temperature and available reaction energy in the spiral arm clouds and, if the measurement of an \mbox{\emph{ortho}-H$_2$O$^+$}/\mbox{\emph{para}-H$_2$O$^+$}\ ratio in the 0 to 60 km s$^{-1}$ region is real, an excess of \mbox{\emph{para}-H$_2$O$^+$}\ there. This velocity range nominally is assigned to the Sagittarius/Scutum arms \citep{Vallee2008}, outside of the Galactic center, where no exotic conditions are expected. However, this velocity range is also bracketed by gas local to Sgr~B2 at 0 to 10 km s$^{-1}$ and around 60 km s$^{-1}$, so it could represent diffuse gas belonging to this complex, which could be the cause of unusual excitation or chemical conditions, e.g.\ due to shocks. In the lower velocity range overlapping with this (--10 to 20 km s$^{-1}$), \citet{Lis2010} also find water with an o/p ratio of 3, indicating high temperatures.
\citet{Lis2010} find an average spin temperature of 27~K for water toward the spiral arm lines of sight. H$_2$O in these clouds is formed in the gas phase, through \new{dissociative recombination} of H$_3$O$^+$, in a region where the gas is at about this temperature. The correspondence between physical temperature and spin temperature may be more easily traced by the more stable water molecule, although in general the contribution of grain surface chemistry to water complicates the issue (see discussion in \citealt{Lis2010}). From this study it is clear that by determining the \textit{ortho}/\textit{para} ratio of \mbox{H$_2$O$^+$}{} (and, by proxy, of other simple hydrides) one can learn a lot about the formation processes, but also that many fundamental physical and chemical processes are still not fully understood. We can look forward to the wealth of data HIFI will bring!
\section*{Supplemental Material}
\beginsupplement
Under ambient pressure, BiNiO$_3$ adopts a highly distorted perovskite (triclinic) crystal structure with space group $P\bar{1}$ (see Supplementary Fig.~\ref{fig:structure}). It has two inequivalent Bi and four Ni sites and is characterized by the cooperative breathing Bi-O distortions of the lattice. The Bi sites are arranged in chains along the $c$ axis, with a checkerboard pattern in the $ab$-plane. The $P\bar{1}$-structured BiNiO$_3$ is an insulator with an energy gap of $\sim 0.68$ eV as estimated from the electrical resistivity \cite{Ishiwata2002}.
\begin{figure*}[h]
\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.66\textwidth]{Fig_S1}
\caption{(Color online)
Crystal structure of the triclinic $P\bar{1}$ (left) and highly-pressurized orthorhombic $Pbnm$ (right) phases of BiNiO$_3$. The oxygen atoms are depicted by small red balls. The figure was prepared with the VESTA program~\cite{vesta}.}
\label{fig:structure}
\end{figure*}
Under pressure above $\sim4$~GPa, BiNiO$_3$ undergoes a structural transformation to the orthorhombic GdFeO$_3$-type ($Pbnm$) crystal structure, which has a single type of Bi and Ni ions. The phase transition is accompanied by a Mott insulator-to-metal transition and is associated with suppression of the breathing distortions of the lattice (all the Ni and Bi sites become equivalent in the $Pbnm$ phase).
\begin{figure*}[h]
\begin{tabular}{cc}
\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.41\textwidth]{bands_ap.eps}
\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.41\textwidth]{bands_hp.eps}
\end{tabular}
\caption{(Color online).
Band structure of BiNiO$_3$ calculated within nonmagnetic DFT for the ambient-pressure $P\bar{1}$ (left panel) and high-pressure $Pbnm$ (right panel) phases
in comparison with the Wannier bands corresponding to the constructed Bi $6s$, Ni $3d$, and O $2p$ Wannier functions (red dashed lines).
The Fermi level is at zero energy.
\label{fig:band_structure}}
\end{figure*}
Here, we employ the DFT+DMFT approach \cite{dftdmft}, implemented with plane-wave pseudopotentials \cite{espresso,Leonov1}, to explore the electronic properties and phase stability of paramagnetic BiNiO$_3$ under pressure.
We start by constructing the effective low-energy Hamiltonian [$\hat{H}^{\mathrm{DFT}}_{\sigma,\alpha\beta}(\bf{k})$], which explicitly contains the Bi $6s$, Ni $3d$, and O $2p$ valence states, using a projection onto Wannier functions~\cite{Wannier1}. For this purpose, we construct a basis set of atomic-centered symmetry-constrained Wannier functions for the partially filled Bi $6s$, Ni $3d$, and O $2p$ orbitals, spanning the full energy range of the $s$-$p$-$d$ band complex, using the scheme of Ref.~\cite{Wannier2}.
We obtain the $s$-$p$-$d$ Hubbard Hamiltonian (in the density-density approximation)
\begin{eqnarray}
\label{eq:hamilt}
\hat{H} = \sum_{\bf{k},\sigma} \hat{H}^{\mathrm{DFT}}_{\sigma,\alpha\beta}({\bf{k}}) + \frac{1}{2} \sum_{i,\sigma\sigma',\alpha\beta} U_{\alpha\beta}^{\sigma\sigma'} \hat{n}_{i,\alpha\sigma} \hat{n}_{i,\beta\sigma'} - \hat{H}_{\mathrm{DC}},
\nonumber
\end{eqnarray}
where $\hat{n}_{i,\alpha\sigma}$ is the occupation number operator for the $i$-th Ni site with spin $\sigma$ and (diagonal) orbital indices $\alpha$.
In Supplementary Fig.~\ref{fig:band_structure} we show our results for the band structure of BiNiO$_3$ calculated within nonmagnetic DFT in comparison with the Wannier Bi $6s$, Ni $3d$, and O $2p$ band structure for the
ambient-pressure $P\bar{1}$ and high-pressure $Pbnm$ phases of BiNiO$_3$. Our results for the leading Wannier hopping integrals between the Bi $6s$ orbital and neighboring ions in the ambient-pressure $P\bar{1}$ and high-pressure $Pbnm$ phases of BiNiO$_3$ are summarized in Table~\ref{tab:model_1}.
All the calculations are performed in the local basis set determined by diagonalization of the corresponding Ni $3d$ occupation matrices.
In order to solve the realistic many-body problem, we employ the continuous-time hybridization-expansion quantum Monte Carlo algorithm \cite{CT-QMC}. The Coulomb interaction is treated in the density-density approximation. The elements of the $U$ matrix are parametrized by the average Coulomb interaction $U$ and Hund's exchange $J$ for the Ni $3d$ shell. For all the structural phases considered here we use the same $U=6$~eV and $J=0.95$~eV values as estimated previously for $R$NiO$_3$ \cite{Park2012,Nowadnick2015}.
The spin-orbit coupling was neglected in these calculations. Moreover, the $U$ and $J$ values are assumed to remain constant upon variation of the lattice volume.
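The text specifies only that the density-density $U$ matrix is parametrized by $U$ and $J$; one common Kanamori-type choice consistent with that statement is sketched below. Whether the actual calculation used this or, e.g., a Slater parametrization is not stated, so this is an illustrative assumption.

```python
import numpy as np

def density_density_u(n_orb=5, u=6.0, j=0.95):
    """Kanamori-type density-density Coulomb matrices (eV):
    same orbital / opposite spin: U; different orbital / opposite spin:
    U - 2J; different orbital / same spin: U - 3J; no same-spin
    same-orbital term (Pauli principle)."""
    u_opp = np.full((n_orb, n_orb), u - 2.0 * j)
    np.fill_diagonal(u_opp, u)
    u_same = np.full((n_orb, n_orb), u - 3.0 * j)
    np.fill_diagonal(u_same, 0.0)
    return u_same, u_opp
```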
We employ the fully localized double-counting correction, evaluated from the self-consistently determined local occupations, to account for the electronic interactions already described by DFT,
$\hat{H}_{DC}=U ( N - \frac{1}{2} ) - J ( N_{\sigma} - \frac{1}{2} )$,
where $N_\sigma$ is the total Ni $3d$ occupation with spin $\sigma$ and $N=N_\uparrow+N_\downarrow$.
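For reference, the double-counting potential above is straightforward to evaluate; the helper below (our notation) computes it per spin channel from the self-consistent occupations.

```python
def fll_dc_potential(n_up, n_dn, u=6.0, j=0.95):
    """Fully-localised-limit double-counting potential (eV) per spin:
    V_dc^sigma = U (N - 1/2) - J (N_sigma - 1/2), with N = N_up + N_dn."""
    n = n_up + n_dn
    return {"up": u * (n - 0.5) - j * (n_up - 0.5),
            "dn": u * (n - 0.5) - j * (n_dn - 0.5)}
```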
Here, we employ a DFT+DMFT scheme that is fully self-consistent in the charge density, in order to take into account the charge redistribution caused by electronic correlations and electron-lattice coupling.
\begin{figure*}[h]
\begin{tabular}{cc}
\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.41\textwidth]{Fig_S2a}
\includegraphics[trim=0cm 0cm 0cm 0cm,width=0.41\textwidth]{Fig_S2b}
\end{tabular}
\caption{(Color online).
Orbitally-resolved spectral functions of paramagnetic BiNiO$_3$ calculated within DFT+DMFT for the ambient-pressure $P\bar{1}$ (left panel) and high-pressure $Pbnm$ (right panel) phases of BiNiO$_3$. Photoemission (PES) and X-ray absorption (XAS) spectra are shown for comparison~\cite{Wadati2005}.
The DFT+DMFT calculations are performed at a temperature ${T = 387}$~K (above $T_\mathrm{N} \sim 300$~K). The Fermi level is at zero energy.
\label{fig:dos_pade}}
\end{figure*}
In Supplementary Fig.~\ref{fig:dos_pade} we show the spectral functions of paramagnetic BiNiO$_3$ calculated by DFT+DMFT in comparison with photoemission (PES) and X-ray absorption (XAS) spectra taken at room temperature~\cite{Wadati2005}.
Our calculations are performed in the paramagnetic state at a temperature ${T = 387}$~K, above the N\'eel temperature ${T_\mathrm{N}\sim 300}$~K.
To calculate the spectral functions, we employ the Pad\'{e} analytical continuation procedure for the self-energy.
In our calculations we adopt the experimental crystal structure data (atomic positions for the orthorhombic phase are taken from the experiment at a pressure of $\sim$7.7~GPa~\cite{Azuma2007}).
The calculated spectral functions are in overall good agreement with the experimental spectra.
In particular, in the insulating triclinic phase, the energy gap lies between the occupied and unoccupied Ni $e_g$ states, strongly mixed with the O $2p$ and the empty Bi2 $6s$ states (the Bi1 $6s$ states are fully occupied).
Our results indicate that all the Ni sites (the insulating $P\bar{1}$ phase has four inequivalent Ni sites) are nearly equivalent.
A sharp peak at about -1.5~eV originates from the occupied Ni $t_{2g}$ states, which form a lower Hubbard band at -9~eV.
The PES spectral weight at about -3 and -5~eV is mainly due to the O $2p$ states, while the hump at -10~eV is predominantly due to the Bi $6s$ states.
In the metallic orthorhombic phase, the peak at the Fermi level and the spectral weight at the bottom of the conduction band are predominantly formed by the Ni $e_g$ and O $2p$ states. The Ni $e_g$ upper Hubbard band appears at $\sim 1.0$~eV.
The peak at about -1.5~eV is due to the occupied Ni $t_{2g}$ states.
In contrast to the insulating phase, all the Bi states are occupied and are located at about -10~eV.
\begin{table*}[t]
\caption{Leading Wannier hopping integrals (in meV) between Bi $6s$ and neighbor ions in the ambient-pressure $P\bar{1}$ (left part) and high-pressure $Pbnm$ (right part) phases of BiNiO$_3$.
\label{tab:model_1}}
\begin{ruledtabular}
\begin{tabular}{cc}
\begin{tabular}{cccc}
Atom \quad & Atom \quad & Distance (a.u.) \quad & Hoppings (meV) \\
\cline{1-4}
Bi $6s$ & O $2p$ & 4.08 & -1304, -1234, -71 \\
Bi $6s$ & O $2p$ & 4.11 & -1410, 1037, 631 \\
Bi $6s$ & O $2p$ & 4.48 & -280, -404, 1086 \\
Bi $6s$ & O $2p$ & 4.62 & 772, 295, 1144 \\
Bi $6s$ & O $2p$ & 4.86 & -674, 72, 839 \\
Bi $6s$ & O $2p$ & 4.90 & 422, 47, -916 \\
Bi $6s$ & O $2p$ & 5.41 & 52, -80, -757 \\
Bi $6s$ & O $2p$ & 5.98 & -385, 78, 204 \\
Bi $6s$ & Ni $e_g$ & 6.03 & 41, 2 \\
Bi $6s$ & Ni $t_{2g}$ & 6.03 & -37, 75, 163 \\
Bi $6s$ & Ni $e_g$ & 6.10 & -7, 58 \\
Bi $6s$ & Ni $t_{2g}$ & 6.10 & 31, -126, -142 \\
Bi $6s$ & Ni $e_g$ & 6.11 & 11, 48 \\
Bi $6s$ & Ni $t_{2g}$ & 6.11 & -2, -274, -163 \\
Bi $6s$ & Ni $e_g$ & 6.16 & 40, 12 \\
Bi $6s$ & Ni $t_{2g}$ & 6.16 & 58, -14, 64 \\
\end{tabular}
&
\begin{tabular}{cccc}
Atom \quad & Atom \quad & Distance (a.u.) \quad & Hoppings (meV) \\
\cline{1-4}
Bi $6s$ & O $2p$ & 4.24 & 0, 1709, 242 \\
Bi $6s$ & O $2p$ & 4.44 & -271, -1536, 95 \\
Bi $6s$ & O $2p$ & 4.44 & -271, -1536, -95 \\
Bi $6s$ & O $2p$ & 4.58 & 0, 622, 1132 \\
Bi $6s$ & O $2p$ & 4.85 & -69, -103, 957 \\
Bi $6s$ & O $2p$ & 4.85 & -69, -103, -957 \\
Bi $6s$ & O $2p$ & 4.95 & 6, -142, 935 \\
Bi $6s$ & O $2p$ & 4.95 & 6, -142, -935 \\
Bi $6s$ & O $2p$ & 5.85 & 0, -298, -151 \\
Bi $6s$ & O $2p$ & 5.85 & 0, -49, -330 \\
Bi $6s$ & Ni $e_g$ & 5.86 & -6, -11 \\
Bi $6s$ & Ni $t_{2g}$ & 5.86 & 40, -12, 227 \\
Bi $6s$ & Ni $e_g$ & 5.86 & 6, 11 \\
Bi $6s$ & Ni $t_{2g}$ & 5.86 & 40, -12, 227 \\
Bi $6s$ & Ni $e_g$ & 6.11 & 0, -40 \\
Bi $6s$ & Ni $t_{2g}$ & 6.11 & 38, 156, -55 \\
\end{tabular}
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{Introduction}
Theoretical studies of plastic analysis using perfect ellipsoids have recently been successful in explaining deformation processes of small bodies. \cite{Holsapple2001} first used limit analysis to calculate the limit spins of cohesionless rubble pile ellipsoids and showed that all elements in a perfect ellipsoid reach the limit state at the same time. \cite{Holsapple2004} reported that the limit spin given by volume-averaged stresses is identical to that given by \cite{Holsapple2001}. Using the limit analysis approach, \cite{Holsapple2006} investigated the tidal disruption condition of cohesionless ellipsoids. \cite{Sharma2009}, on the other hand, introduced an averaging form of the dynamical deformation over the whole volume to discuss structural failure of a rubble pile ellipsoid due to the tidal effect of a massive body. \cite{Sharma2010} extended the theory by \cite{Sharma2009} to binary-ellipsoid systems.
Despite their elegant mathematical formulations, these studies simplified their discussion by ignoring the effect of density distribution. It is interesting to understand how the density distribution affects structural stability. In asteroid environments, the density may become distributed axisymmetrically during the accretion process. The present study considers the effect of this axisymmetric density distribution on the structural stability of asteroids. To investigate this effect, a uniformly rotating asteroid is modeled as a biaxial ellipsoid composed of an internal sphere and an external shell. The density is assumed to be constant in each layer. The technique used here is based on the limit analysis technique by \cite{Holsapple2004}, who obtained the limit spin by using the total volume stress. The current paper is organized as follows. First, the two-layer model is established. Second, the limit analysis technique is applied to the two-layer model. Last, the upper bound condition of structural failure of the present model is compared with that of the uniform density case.
\section{Two-layer model of a rubble pile biaxial ellipsoid}
\subsection{Definition}
\label{sec:def}
A biaxial ellipsoid with dimensions of $2 a$ by $2 b$ by $2 b$, where $a>b$, is supposed to be spinning with a constant spin $\omega$ about the maximum principal axis. This biaxial ellipsoid is composed of an internal sphere with radius $l b$, where $l$ is less than $1$, and an external layer enclosing the internal layer. The volume of the internal layer varies as $l$ changes. These layers are concentric. Let us denote the density of the internal layer by $\rho$, the density of the external layer by $\rho^\prime$, and the averaged density of the whole volume by $\rho^\ast$.
Since the following discussion will use normalized forms, definitions of mathematical notations are given first. All lengths are normalized by $a$, and the size of a two-layer biaxial ellipsoid is characterized by the aspect ratio $\beta=b/a$. Similarly, a normalized position of an arbitrary element is denoted by $(x_1,x_2,x_3)$. The dimensionless spin rate is defined by $\Omega = \omega/\sqrt{\pi \rho^\ast G}$, where $G$ is the gravitational constant. The density is normalized by the averaged density; in the formulation, a scale density relative to the averaged density, denoted as $\epsilon$, will be used. In other words, $\rho/\rho^\ast = 1 + \epsilon$ and $\rho^\prime/\rho^\ast = 1 + \epsilon^\prime$. Since this paper considers the total mass to be constant, there is a relation between $\epsilon$ and $\epsilon^\prime$:
\begin{eqnarray}
\epsilon^\prime = - \frac{\epsilon \beta l^3}{1-\beta l^3}. \label{Eq:ep}
\end{eqnarray}
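As a quick numerical sanity check (not part of the derivation), Eq. (\ref{Eq:ep}) can be verified against the constant-mass condition; the parameter values below are illustrative:

```python
import math

def eps_external(eps, beta, l):
    # Eq. (1): density scale of the external layer for a fixed total mass.
    return -eps * beta * l**3 / (1.0 - beta * l**3)

beta, l, eps = 0.5, 0.8, 0.3
eps_p = eps_external(eps, beta, l)
V_tot = 4.0 / 3.0 * math.pi * beta**2        # normalized total volume
V_in = 4.0 / 3.0 * math.pi * (l * beta)**3   # internal sphere of radius l*beta
V_ex = V_tot - V_in
# Total mass equals the averaged density times the total volume:
assert math.isclose((1 + eps) * V_in + (1 + eps_p) * V_ex, V_tot)
```

As expected, a denser core ($\epsilon>0$) forces a lighter shell ($\epsilon^\prime<0$), and vice versa.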
Potential $U$, body force $\bfm b$, and stress tensor $\bfm T$ are normalized by $\pi \rho^\ast G a^2$, $\pi \rho^\ast G a$, and $\pi \rho^{\ast 2} G a^2$, respectively. For $\bfm b$ and $\bfm T$, the following discussions will use index notations instead of vector notations. With indices $(i,j) = (1,2,3)$, these vector notations are expressed by $b_i$ and $T_{ij}$, respectively.
The outer boundary of the internal layer and that of the external layer are introduced. The outer boundary of the internal layer is given as
\begin{eqnarray}
x_1^2 + x_2^2 + x_3^2 = l^2 \beta^2.
\end{eqnarray}
On the other hand, the outer boundary of the external layer, or the surface of a biaxial body, is given as
\begin{eqnarray}
x_1^2 + \frac{x_2^2 + x_3^2}{\beta^2} = 1.
\end{eqnarray}
The stress state of this problem is given by the equilibrium equation:
\begin{eqnarray}
\frac{\partial T_{ij}}{\partial x_i} + (1 + \epsilon_k) b_j = 0, \label{Eq:Eqn_Eqb}
\end{eqnarray}
where $\epsilon_k = \epsilon$ if an element is in the internal layer and $\epsilon_k = \epsilon^\prime$ if an element is in the external layer.
\subsection{Calculation of body forces}
This study focuses on the effect of a gravitational force and a centrifugal force, so $b_i$ is a function of the density and the spin rate. The acceleration components of a centrifugal force $b_{c,i}$ are given as
\begin{eqnarray}
b_{c,i} = \left\{
\begin{array}{l l}
\Omega^2 x_i, & \text{if $\quad i=1,2$} \\
0, & \text{if $\quad i = 3$}
\end{array} \right.
\end{eqnarray}
On the other hand, the gravity computation requires considerations of density distribution. Here, a combination of a uniform-density ellipsoid and a uniform-density sphere is considered to obtain a gravitational acceleration. The potential can be described as
\begin{eqnarray}
U &=& - \frac{1}{\pi} \int_{V} \frac{1+\epsilon(\bfm r)}{d} dV, \nonumber \\
&=& - \frac{1}{\pi} \int_{V_{ex}} \frac{1+\epsilon^\prime}{d} dV - \frac{1}{\pi} \int_{V_{in}} \frac{1+\epsilon}{d} dV, \nonumber \\
&=& - \frac{1}{\pi} \int_{V} \frac{1+\epsilon^\prime}{d} dV - \frac{1}{\pi} \int_{V_{in}} \frac{\epsilon-\epsilon^\prime}{d} dV, \label{Eq:Potential}
\end{eqnarray}
where $\epsilon (\bfm r)$ is the scale density at an arbitrary element, $V$ is the total volume, $V_{ex}$ is the volume of the external layer, $V_{in}$ is the volume of the internal layer, and $d$ is the distance between two small elements. The third row indicates that computation of a gravitational acceleration can be decoupled into a perfect ellipsoid and a perfect sphere. The first term in the third row in Eq. (\ref{Eq:Potential}), denoted as $U_{el}$, is written as
\begin{eqnarray}
U_{el} = - (1+\epsilon^\prime) (A_0 + \sum_{i=1}^3 A_i x_i^2),
\end{eqnarray}
where
\begin{eqnarray}
A_0 &=& \beta^2 \int_0^\infty \frac{d s}{(s+ \beta^2) \Delta}, \\
A_1 &=& \beta^2 \int_0^\infty \frac{d s}{(s+1) (s+ \beta^2) \Delta}, \label{Eq:Ax} \\
A_2 &=& A_3 = \beta^2 \int_0^\infty \frac{d s}{(s+\beta^2)^2 \Delta}. \label{Eq:Az}
\end{eqnarray}
and $\Delta = \sqrt{s + 1}$. The second term in the third row in Eq. (\ref{Eq:Potential}), denoted as $U_{sp}$, is given as
\begin{eqnarray}
U_{sp} =
\left\{
\begin{array}{l l}
- \frac{4 l^3 \beta^3 (\epsilon-\epsilon^\prime)}{3r} & \quad \text{if $r > l \beta$}, \\
- 2 (\epsilon-\epsilon^\prime) \left( l^2 \beta^2 - \frac{r^2}{3} \right) & \quad \text{if $r \le l \beta$},
\end{array}
\right.
\end{eqnarray}
where $r$ is the distance between a field point and the center of mass.
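For reference, the shape coefficients $A_1$ and $A_2$ can be evaluated numerically from the integrals above. The sketch below is illustrative (a simple change of variables plus the trapezoidal rule, not the method of the paper); it also checks the standard identity $A_1 + A_2 + A_3 = 2$, i.e., $A_1 + 2A_2 = 2$ here, and the sphere limit $A_i \to 2/3$:

```python
import numpy as np

def shape_coeffs(beta, n=400000):
    # Evaluate A_1 and A_2 (= A_3) with Delta = sqrt(s + 1), mapping the
    # infinite integration range onto [0, 1) via s = t / (1 - t).
    t = np.linspace(0.0, 1.0, n, endpoint=False)[1:]
    s = t / (1.0 - t)
    jac = 1.0 / (1.0 - t) ** 2                        # ds/dt
    delta = np.sqrt(s + 1.0)
    f1 = beta**2 / ((s + 1.0) * (s + beta**2) * delta) * jac
    f2 = beta**2 / ((s + beta**2) ** 2 * delta) * jac
    trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
    return trap(f1), trap(f2)

A1, A2 = shape_coeffs(0.5)
assert abs(A1 + 2.0 * A2 - 2.0) < 1e-3     # identity: A_1 + A_2 + A_3 = 2
A1s, A2s = shape_coeffs(0.999)
assert abs(A1s - 2.0 / 3.0) < 5e-3         # sphere limit: A_i -> 2/3
```

For a prolate spheroid, $A_1 < A_2$, i.e., the coefficient along the long axis is the smallest.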
Differentiating those potentials with respect to the position yields the gravitational acceleration:
\begin{eqnarray}
b_{g,i} =
\left\{
\begin{array}{l l}
- 2 A_i (1 + \epsilon^\prime) x_i - \frac{4 l^3 \beta^3 (\epsilon-\epsilon^\prime)}{3} \frac{x_i}{r^3}, & \quad \text{if $r > l \beta$}, \\
- 2 A_i (1 + \epsilon^\prime) x_i - \frac{4 (\epsilon-\epsilon^\prime)}{3} x_i, & \quad \text{if $r \le l \beta$}.
\end{array}
\right.
\end{eqnarray}
The first row indicates the gravitational acceleration in the external layer, while the second row describes that in the internal layer. The total body force, the sum of the centrifugal and gravitational accelerations $b_i = b_{g,i} + b_{c,i}$, is given as follows. The body force in the external layer $b_{ex,i}$ is given as
\begin{eqnarray}
b_{ex,i} = - 2 A_i (1 + \epsilon^\prime) x_i - \frac{4 l^3 \beta^3 (\epsilon - \epsilon^\prime)}{3} \frac{x_i}{r^3} + \Omega_i^2 x_i.
\end{eqnarray}
On the other hand, the body force in the internal layer $b_{in,i}$ is obtained as
\begin{eqnarray}
b_{in,i} =
- 2 A_i (1+\epsilon^\prime) x_i - \frac{4 (\epsilon - \epsilon^\prime)}{3} x_i + \Omega_i^2 x_i.
\end{eqnarray}
Note that $[\Omega_1, \Omega_2, \Omega_3] = [\Omega, \Omega, 0]$.
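The two branches of the body force must agree at the layer boundary $r = l\beta$; a minimal Python check (parameter values illustrative, with $b_i = c(r)\,x_i$ for $i=1,2$) confirms this continuity:

```python
import math

def body_force_coeff(r, beta, l, eps, eps_p, A, Omega2):
    # Radial coefficient c(r) such that b_i = c(r) * x_i (i = 1, 2),
    # combining the external- and internal-layer expressions above.
    if r > l * beta:
        grav = -2 * A * (1 + eps_p) - 4 * l**3 * beta**3 * (eps - eps_p) / (3 * r**3)
    else:
        grav = -2 * A * (1 + eps_p) - 4 * (eps - eps_p) / 3
    return grav + Omega2

beta, l, eps = 0.5, 0.8, 0.3
eps_p = -eps * beta * l**3 / (1 - beta * l**3)   # Eq. (1)
r_b = l * beta                                    # layer boundary
outside = body_force_coeff(r_b * (1 + 1e-9), beta, l, eps, eps_p, A=0.35, Omega2=0.2)
inside = body_force_coeff(r_b, beta, l, eps, eps_p, A=0.35, Omega2=0.2)
assert math.isclose(outside, inside, rel_tol=1e-6)
```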
\section{Upper bound condition of structural failure}
This study assumes that materials are characterized by elastic perfectly-plastic theory, a smooth convex yield envelope, and an associated flow rule. In limit analysis, the upper bound theorem provides the condition under which a target body must fail plastically (see \citealt{Chen1988}). \cite{Holsapple2008A} showed that the upper bound condition is identical to the yield condition of the averaged stresses over an arbitrary volume. The present paper utilizes this technique to determine the upper bound of structural failure of the whole volume. The yield condition is modeled by the Mohr-Coulomb yield criterion, which is given as
\begin{eqnarray}
g (\sigma_1, \sigma_3, \phi) \le 0, \label{Eq:MC}
\end{eqnarray}
where
\begin{eqnarray}
g (\sigma_1, \sigma_3, \phi) = \frac{\sigma_1 - \sigma_3}{2} \sec \phi + \frac{\sigma_1 + \sigma_3}{2} \tan \phi,
\end{eqnarray}
and $\phi$ is the angle of internal friction. If materials are cohesive, a term for cohesive strength should appear on the right-hand side of Eq. (\ref{Eq:MC}). It is necessary to clarify the use of this yield criterion, which is not smooth at the compression and tension meridians, for limit analysis. A biaxial ellipsoid spinning about the maximal principal axis experiences stress states on the meridians in some conditions (see Fig. 3 of \citealt{Holsapple2001}). However, since such conditions may be unrealistic and rare in nature, this paper does not consider them. Since stress states are then always between these meridians and the yield envelope is smooth in this region, the Mohr-Coulomb yield envelope is still applicable to the current problem. Note that the use of the Drucker-Prager yield criterion would remove this assumption. Nevertheless, the technique used here does not change with the choice of yield condition, so this paper uses the Mohr-Coulomb yield criterion.
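A minimal implementation of the Mohr-Coulomb yield function $g$ above (tension-positive convention, $\sigma_1 \ge \sigma_3$; the sample stress states are illustrative):

```python
import math

def mohr_coulomb(s1, s3, phi):
    # Yield function g(sigma_1, sigma_3, phi); g <= 0 means no failure
    # for a cohesionless material.
    return 0.5 * (s1 - s3) / math.cos(phi) + 0.5 * (s1 + s3) * math.tan(phi)

phi = math.radians(30.0)
# Hydrostatic compression is safe for a cohesionless material:
assert mohr_coulomb(-1.0, -1.0, phi) < 0
# Any tension with zero confinement violates the criterion:
assert mohr_coulomb(0.5, 0.0, phi) > 0
```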
There is a standard formula for the total volume stress. Using the general form yields the total volume stress of a two-layer biaxial ellipsoid:
\begin{eqnarray}
\bar T^t_{ij} = \frac{1}{V_t} \int_{V_{ex}} (1 + \epsilon^\prime) x_j b_{ex,i} dV_{ex} + \frac{1}{V_t} \int_{V_{in}} (1 + \epsilon) x_j b_{in,i} dV_{in}. \label{Eq:Total1}
\end{eqnarray}
Since the off-diagonal components of the averaged stress tensor vanish by symmetry, only the diagonal components remain, and Eq. (\ref{Eq:Total1}) can be simply rewritten as
\begin{eqnarray}
\bar T^t_{ii} &=& - \frac{(1 + \epsilon^\prime) [2 A_i (1 + \epsilon^\prime) - \Omega_i^2]}{V} E_{1,i} \nonumber \\
&& - \frac{4 l^3 \beta^3 (1+\epsilon^\prime) (\epsilon-\epsilon^\prime)}{3 V} E_{2,i} \nonumber \\
&& - \frac{(1 + \epsilon) [2 A_i (1+\epsilon^\prime) + 4 (\epsilon-\epsilon^\prime)/3 - \Omega_i^2]}{V} F_i, \label{Eq:totalStress}
\end{eqnarray}
where $V = 4 \pi \beta^2/3$ and $[\Omega_1, \Omega_2, \Omega_3] = [\Omega, \Omega, 0]$. $E_{1,i}$, $E_{2,i}$, and $F_i$ are given as
\begin{eqnarray}
E_{1,i} &=& \int_{V_{ex}} x_i^2 dV_{ex}, \nonumber \\
E_{2,i} &=& \int_{V_{ex}} \frac{x_i^2}{d^3} dV_{ex}, \nonumber \\
F_i &=& \int_{V_{in}} x_i^2 dV_{in}. \nonumber
\end{eqnarray}
Finally, substitution of Eq. (\ref{Eq:totalStress}) into the yield condition $g(\sigma_1, \sigma_3, \phi) = 0$ gives the upper bound condition of structural failure.
\section{Application to small bodies}
This section compares the upper bound condition of the present model with that of the uniform density case.
First, the effect of the scaling parameter on the critical spin $\Omega$ is discussed. Figure \ref{Fig:denDis} shows the change of the critical spin with regard to the scaling parameter $l$. The friction angle is chosen as $30^\circ$. Fig. \ref{Fig:denDisA} gives the case $\beta=0.5$, while Fig. \ref{Fig:denDisB} describes the case $\beta=0.9$, where $\beta$ is the aspect ratio. The normalization defined above enforces the constant-mass condition; in other words, the mass is constant in each plot. The case $\epsilon=0.0$ in both plots is consistent with the uniform density case (e.g., \citealt{Holsapple2001}). It can be found that if $\epsilon>0$ ($<0$), i.e., high (low) density in the internal layer, the critical spin rate increases (decreases) as $l$ becomes larger. Therefore, the body becomes stronger (weaker) against structural failure if $\epsilon>0$ ($<0$). Differences between the cases $\beta=0.5$ and $\beta=0.9$ can also be seen. Compared to the case $\beta=0.5$, the slope of the critical spin for the case $\beta=0.9$ becomes steeper as $l$ increases.
Second, the density distribution case is compared with the uniform density case in terms of the aspect ratio\footnote{The latter case corresponds to Fig. 8 by \cite{Holsapple2001}.}. Here, the asteroid LightCurve Data Base (LCDB) by Warner, Harris, and Pravec (revised on November 10, 2012) is also used. Following its manual, the following discussion only uses objects whose $U$ (quality) code is greater than or equal to 2. In addition, since the spin barrier is split into the gravity regime and the cohesive regime (\citealt{Holsapple2007}), only asteroids in the gravity regime, ranging between 5 km and 300 km, are considered. The LCDB data include the spin periods, sizes, and observational full-range amplitudes. This study uses a standard formula relating the largest observed amplitude to the aspect ratio:
\begin{eqnarray}
A = - 2.5 \log \beta,
\end{eqnarray}
where $A$ is the observational amplitude. Again, the smallest diameter is assumed to be equal to the intermediate diameter here. Also, the averaged density is fixed at 2.5 g/cm$^3$.
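Inverting the amplitude formula gives the aspect ratio directly, $\beta = 10^{-A/2.5}$; a minimal sketch (the sample amplitude is illustrative):

```python
def aspect_from_amplitude(A):
    # Invert A = -2.5 log10(beta) for the aspect ratio beta = b/a.
    return 10.0 ** (-A / 2.5)

assert aspect_from_amplitude(0.0) == 1.0                 # no amplitude: sphere-like
assert abs(aspect_from_amplitude(0.75) - 0.5012) < 1e-3  # A = 0.75 mag -> beta ~ 0.5
```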
Figure \ref{Fig:denDisC} gives the critical spin rate with regard to the aspect ratio. The solid lines show the critical spin for the density distribution case, while the dashed lines give that for the uniform density case. Each line is calculated based on a different friction angle. For the density distribution case, the external and internal layers are characterized by $\epsilon = 0.3$ and $l=0.9$. This condition gives a high-density core and a low-density surface. It is found that the critical spins become higher in the tension regime. On the other hand, interestingly, in the compression regime, the critical spins for the density distribution case are also higher than those for the uniform density case when $\beta$ is small\footnote{In the compression regime, the critical spins are the minimum spins that suspend the bodies.}.
\section{Discussion and Conclusion}
This paper investigated the effect of a two-layer density distribution on structural failure of a uniformly rotating ellipsoid. The prime result shows that the two-layer density distribution causes failure conditions different from those of a uniform-density ellipsoid. A larger (smaller) size and higher (lower) density of the internal layer make the body stronger (weaker) against structural failure. On the other hand, the critical spins with regard to the aspect ratio behave differently. If there is a high-density core, the critical spins in the tension regime increase, which indicates that the body becomes stronger against tension failure. However, in the compression regime, the critical spins for the density distribution case are also larger than those for the uniform density case as $\beta$ decreases. This implies that the body becomes weaker against compression failure. This can be explained as follows: for such elongated bodies, a dense core causes stronger gravitational compression while centrifugal forces do not effectively support the body, so the body becomes sensitive to structural failure.
\acknowledgments
The author wishes to thank Dr. Holsapple for his detailed reviews that improved the clarity and quality of the manuscript.
\bibliographystyle{model2-names}
\section{Introduction}
Graphical structures play an important role in natural language processing (NLP), as they often serve as the central formalism for representing syntax, semantics, and knowledge. For example, most syntactic representations (e.g., dependency relations) are tree-based, while most whole-sentence semantic representation frameworks (e.g., Abstract Meaning Representation (AMR) \cite{banarescu2013abstract}) encode sentence meaning as directed acyclic graphs. A range of NLP applications can be framed as graph-to-sequence learning. For instance, text generation may involve realizing a semantic graph into a surface form \cite{liu-etal-2015-toward}, and syntactic machine translation incorporates source-side syntax information for improving translation quality \cite{bastings-etal-2017-graph}. Fig. \ref{example} gives an example of AMR-to-text generation.
\begin{figure}[t]
\centering
\includegraphics[scale=0.28]{example.pdf}
\caption{An AMR graph (left) for the reference sentence ``The boy wants the girl to believe him." and the corresponding Levi graph (right).}
\label{example}
\end{figure}
While early work used statistical methods or neural models after graph linearization, graph neural networks (GNNs) have been firmly established as the state-of-the-art approach for this task \cite{damonte-cohen-2019-structural,guo2019densely}. GNNs typically compute the representation of each node iteratively based on those of its adjacent nodes. This inherently local propagation nature precludes efficient global communication, which becomes critical at larger graph sizes, where the distance between two nodes can exceed the number of stacked layers. For instance, for two nodes that are $L$ hops apart, at least $L$ layers are needed to capture their dependencies. Furthermore, even if two distant nodes are reachable, the information may be disrupted in the long journey \cite{xu2018graph2seq,guo2019densely}.
To address the above problems, we propose a new model, known as Graph Transformer, which relies entirely on the multi-head attention mechanism \cite{vaswani2017attention} to draw global dependencies.\footnote{We note that the name \textit{Graph Transformer} was used in a recent work \cite{koncel-kedziorski-etal-2019-text}. However, it merely focuses on the relations between directly connected nodes, as other graph neural networks do.} In contrast to GNNs, the Graph Transformer allows direct modeling of dependencies between any two nodes regardless of their distance in the input graph. One undesirable consequence is that it essentially treats any graph as fully connected, greatly diluting the explicit graph structure. To maintain a structure-aware view of the graph, our proposed model introduces explicit relation encoding and incorporates it into the pairwise attention score computation as a dynamic parameter.
Our treatment of explicit relation encoding also brings other advantages compared to GNN-based methods. Previous state-of-the-art GNN-based methods use the Levi graph transformation \cite{beck-etal-2018-graph,guo2019densely}, where each labeled edge in the original graph is replaced by two unlabeled edges. For example, in Fig. \ref{example}, the labeled edge $\texttt{want-01}\stackrel{ARG1}{\longrightarrow}\texttt{believe-01}$ becomes two unlabeled edges $\texttt{want-01}\stackrel{}{\longrightarrow}\texttt{ARG1}$ and $\texttt{ARG1}\stackrel{}{\longrightarrow}\texttt{believe-01}$. Since edge labels are represented as nodes, they end up sharing the same semantic space, which is not ideal as nodes and edges are typically different elements. In addition, the Levi graph transformation at least doubles the number of representation vectors, which introduces more complexity for the decoder-side attention mechanism \cite{bahdanau2015neural} and copy mechanism \cite{gu-etal-2016-incorporating,see-etal-2017-get}. Through explicit and separate relation encoding, our proposed Graph Transformer inherently avoids these problems.
Experiments show that our model is able to achieve better performance for graph-to-sequence learning tasks for natural language processing. For the AMR-to-text generation task, our model surpasses the current state-of-the-art neural methods trained on LDC2015E86 and LDC2017T10 by 1.6 and 2.2 BLEU points, respectively. For the syntax-based neural machine translation task, our model is also consistently better than others, even including ensemble systems, showing the effectiveness of the model on a large training set. In addition, we give an in-depth study of the source of improvement gain and the internal workings of the proposed model.
\section{Related Work}
Early research efforts for graph-to-sequence learning use specialized grammar-based methods. \newcite{flanigan-etal-2016-generation} split input graphs into trees and use a tree-to-string transducer. \newcite{song-etal-2016-amr} recast generation as a traveling salesman problem. \newcite{jones-etal-2012-semantics} leverage hyperedge replacement grammar, and \newcite{song-etal-2017-amr} use a synchronous node replacement grammar. More recent work employs more general approaches, such as phrase-based machine translation models \cite{pourdamghani2016generating} and neural sequence-to-sequence methods \cite{konstas-etal-2017-neural} after linearizing input graphs. Regarding AMR-to-text generation, \newcite{cao-clark-2019-factorising} propose an interesting idea that factorizes text generation through syntax. One limitation of sequence-to-sequence models, however, is that they require serialization of input graphs, which inevitably hinders the capture of graph structure information.
An emerging trend has been directly encoding the graph with different variants of graph neural networks, which all stack multiple layers that restrict the update of node representations to a first-order neighborhood but differ in their information-passing schemes. Some borrow ideas from recurrent neural networks (RNNs), e.g., \newcite{beck-etal-2018-graph} use gated graph neural networks \cite{li2016gated}, while \newcite{song-etal-2018-graph} introduce LSTM-style information aggregation. Others apply convolutional neural networks (CNNs), e.g., \newcite{bastings-etal-2017-graph}, \newcite{damonte-cohen-2019-structural}, and \newcite{guo2019densely} utilize graph convolutional networks \cite{kipf2017semi}. \newcite{koncel-kedziorski-etal-2019-text} update vertex information by attention over adjacent neighbors. Furthermore, \newcite{guo2019densely} allow information exchange across different levels of layers. \newcite{damonte-cohen-2019-structural} systematically compare different encoders and show the advantages of graph encoders over tree and sequential ones. The contrast between our model and theirs is reminiscent of the contrast between the self-attention network (SAN) and CNN/RNN.
For sequence-to-sequence learning, the SAN-based Transformer model \cite{vaswani2017attention} has become the \textit{de facto} approach for its empirical success. However, it remains unclear how to adapt it to graphical data and how it would perform. Our work is partially inspired by the introduction of relative position embeddings \cite{shaw-etal-2018-self,dai-etal-2019-transformer} for sequential data. However, the extension to graphs is nontrivial, since we need to model much more complicated relations instead of mere sequential distance. To the best of our knowledge, the Graph Transformer is the first graph-to-sequence transduction model relying entirely on self-attention to compute representations.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.44]{arch.pdf}
\caption{An overview of our proposed model.}
\label{arch}
\end{figure*}
\section{Background of Self-Attention Network}
The Transformer introduced by \newcite{vaswani2017attention} is a sequence-to-sequence neural architecture originally used for neural machine translation. It employs self-attention networks (SANs) for implementing both the encoder and the decoder. The encoder consists of multiple identical blocks, the core of which is multi-head attention. The multi-head attention consists of $H$ attention heads, each of which learns a distinct attention function. Given a source vector $x\in\mathbb{R}^{d_x}$ and a set of context vectors $\{y_1, y_2, \ldots, y_m\}$ with the same dimension $d_x$, abbreviated as $y_{1:m}$, for each attention head, $x$ and $y_{1:m}$ are transformed into distinct query and key representations. The attention score is computed as the dot-product between them.
\begin{align*}
f(x, y_i) =(W_qx)^TW_k y_i
\end{align*}
where $W_q, W_k \in \mathbb{R}^{d_z\times d_x}$ are trainable projection matrices. The attention scores are scaled and normalized by a softmax function to compute the final attention output $attn$.
\begin{align*}
&a_i = \frac{\exp(f(x, y_i)/ \sqrt{d_z})}{\sum_{j=1}^m \exp(f(x, y_j)/ \sqrt{d_z})}\\
&attn = \sum_{i=1}^m a_i W_v{y_i}
\end{align*}
where $a \in \mathbb{R}^m$ is the attention vector (a distribution over all input $y_{1:m}$), $W_v\in\mathbb{R}^{d_z \times d_x}$ is a trainable projection matrix. Finally, the outputs of all attention heads are concatenated and projected to the original dimension of $x$, followed by feed-forward layers, residual connection, and layer normalization.\footnote{We refer interesting readers to \newcite{vaswani2017attention} for more details.} For brevity, we will denote the whole procedure described above as a single function $\textsc{ATT}(x,y_{1:m})$.
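As a reference, a single attention head of $\textsc{ATT}(x, y_{1:m})$ can be sketched in a few lines of NumPy. Names and dimensions are illustrative; the full model concatenates $H$ such heads and applies the output projection, feed-forward layers, residual connections, and layer normalization as described above:

```python
import numpy as np

def att_head(x, Y, Wq, Wk, Wv):
    # One attention head: scores f(x, y_i) = (Wq x)^T (Wk y_i),
    # scaled by 1/sqrt(d_z), softmax-normalized, then a weighted
    # sum of the value projections Wv y_i.
    dz = Wq.shape[0]
    scores = (Wq @ x) @ (Wk @ Y.T)                  # shape (m,)
    a = np.exp((scores - scores.max()) / np.sqrt(dz))
    a = a / a.sum()                                 # attention distribution
    return (Wv @ Y.T) @ a                           # shape (dz,)

rng = np.random.default_rng(0)
dx, dz, m = 8, 4, 5
Wq, Wk, Wv = (rng.normal(size=(dz, dx)) for _ in range(3))
x, Y = rng.normal(size=dx), rng.normal(size=(m, dx))
out = att_head(x, Y, Wq, Wk, Wv)
assert out.shape == (dz,)
# If all context vectors are identical, the weights are uniform and
# the output is exactly Wv y.
y = rng.normal(size=dx)
assert np.allclose(att_head(x, np.tile(y, (m, 1)), Wq, Wk, Wv), Wv @ y)
```

Subtracting `scores.max()` before the exponential is a standard numerical-stability trick and leaves the softmax unchanged.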
For an input sequence $x_{1:n}$, the SAN-based encoder computes the vector representations iteratively by $x^l_i = \textsc{ATT}(x^{l-1}_i,x^{l-1}_{1:n})$ for $l = 1, \ldots, L$, where $L$ is the total number of blocks and $x^0_{1:n}$ are word embeddings. In this way, a representation is allowed to build a direct relationship with another long-distance representation. To feed in the sequential order information, deterministic or learned position embeddings \cite{vaswani2017attention} are introduced to expose the position information to the model, i.e., $x^0_{i}$ becomes the sum of the corresponding word embedding and the position embedding for position $i$.
The aforementioned treatment of sequential data by SANs bears a close resemblance to graph neural networks if we regard the token sequence as an unlabeled fully-connected graph (each token as a node) and take the multi-head attention mechanism as a specific message-passing scheme. This view of the relationship between SANs and graph neural networks inspires our work.
\section{Graph Transformer}
\subsection{Overview}
For a graph with $n$ nodes, previous graph neural networks compute the node representation $v_i$ as a function of the input node $i$ and all its first-order neighbors $N(i)$. The graph structure is implicitly reflected by the receptive field of each node representation. This local communication design, however, can be inefficient for long-distance information exchange. We introduce a new model, known as Graph Transformer, which provides a radically different paradigm that enables relation-aware global communication.
The overall framework is shown in Fig. \ref{arch}. The most important characteristic of the Graph Transformer is that it has a fully-connected view of arbitrary input graphs. A node is able to directly receive information from, and send information to, any other node, no matter whether they are directly connected or not. These operations are achieved by our proposed extension of the original multi-head attention mechanism, the relation-enhanced global attention mechanism described below. In a nutshell, the relationship between any node pair is depicted by the shortest relation path between them. These pairwise relation paths are fed into a relation encoder to obtain distributed relation encodings. The node vectors are initialized as the sum of the node embedding and absolute position embeddings. Multiple blocks of the global attention network are then stacked to compute the final node representations. At each block, a node vector is updated based on all other node vectors and the corresponding relation encodings. The resulting node vectors at the last block are fed to the sequence decoder for sequence generation.
\subsection{Graph Encoder}
Our graph encoder is responsible for transforming an input graph into a set of corresponding node embeddings. To apply global attention to a graph, the central problem is how to maintain the topological structure of the graph while allowing fully-connected communication. To this end, we propose the relation-enhanced global attention mechanism, an extension of the vanilla multi-head attention. Our idea is to incorporate an explicit relation representation between two nodes into their representation learning. Recall that, in the standard multi-head attention, the attention score between the element $x_i$ and the element $x_j$ is simply the dot-product of the query vector of $x_i$ and the key vector of $x_j$:
\begin{align}
\begin{split}
s_{ij} & = f(x_i, x_j) \\
& =x_i W_q^TW_k x_j
\end{split}
\label{datt}
\end{align}
Suppose we have learned a vector representation for the relationship $r_{ij}$, which we will refer as relation encoding, between the node $i$ and the node $j$. Following the idea of relative position embedding \cite{shaw-etal-2018-self,dai-etal-2019-transformer}, we propose to compute the attention score as follows:
\begin{align}
& [r_{i\to j}; r_{j\to i}] = W_r r_{ij}
\label{r_split}
\end{align}
where we first split the relation encoding $r_{ij}$ into the forward relation encoding $r_{i\to j}$ and the backward relation encoding $r_{j\to i}$. Then we compute the attention score based on both the node representations and their relation representation:
\begin{align}
\begin{split}
s_{ij} &= g(x_i, x_j, r_{ij})\\
& = (x_i + r_{i\to j})W_q^TW_k(x_j + r_{j\to i})\\
&= \underbrace{x_iW_q^TW_kx_j}_{(a)} + \underbrace{x_iW_q^TW_kr_{j\to i}}_{(b)} \\
&+ \underbrace{r_{i\to j}W_q^TW_kx_j}_{(c)} + \underbrace{r_{i\to j}W_q^TW_kr_{j\to i}}_{(d)}
\end{split}
\label{ratt}
\end{align}
Each term in Eq.~(\ref{ratt}) carries an intuitive meaning. Term (a) captures purely content-based addressing, which is the original term in the vanilla attention mechanism. Term (b) represents a source-dependent relation bias. Term (c) governs a target-dependent relation bias. Term (d) encodes a universal relation bias. Our formalization provides a principled way to model element-relation interactions. In comparison, it has broader coverage than \newcite{shaw-etal-2018-self} owing to the additional terms (c) and (d), and than \newcite{dai-etal-2019-transformer} owing to the extra term (c). More importantly, previous methods only model relative position in the context of sequential data, merely adopting the embeddings of the immediate relative positions (e.g., $-1, +1$). To depict the relation between two nodes in a graph, we utilize a shortest-path-based approach as described below.
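The four-term form in Eq.~(\ref{ratt}) is an exact algebraic expansion of the compact form; this can be checked numerically (illustrative dimensions, with vectors treated as rows to match the $x W_q^T$ notation above, and the relation encodings assumed to share dimension $d_x$):

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dz = 8, 6
x_i, x_j = rng.normal(size=dx), rng.normal(size=dx)
r_fwd, r_bwd = rng.normal(size=dx), rng.normal(size=dx)  # r_{i->j}, r_{j->i}
Wq, Wk = rng.normal(size=(dz, dx)), rng.normal(size=(dz, dx))

M = Wq.T @ Wk
compact = (x_i + r_fwd) @ M @ (x_j + r_bwd)
expanded = (x_i @ M @ x_j          # (a) content-based addressing
            + x_i @ M @ r_bwd      # (b) source-dependent relation bias
            + r_fwd @ M @ x_j      # (c) target-dependent relation bias
            + r_fwd @ M @ r_bwd)   # (d) universal relation bias
assert np.isclose(compact, expanded)
```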
\subsubsection{Relation Encoder}
Conceptually, the relation encoding gives the model global guidance about how information should be gathered and distributed, i.e., where to attend. For most graphical structures in NLP, the edge label conveys the direct relationship between adjacent nodes (e.g., the semantic role played between concepts, or the dependency relation between two words). We extend this one-hop relation definition to multi-hop relation reasoning for characterizing the relationship between two arbitrary nodes. For example, in Fig \ref{example}, the shortest path from the concept \texttt{want-01} to \texttt{girl} is `` $\texttt{want-01}\stackrel{ARG1}{\longrightarrow}\texttt{believe-01}\stackrel{ARG0}{\longrightarrow}\texttt{girl}$", which conveys that \texttt{girl} is the object of the \textit{wanted} action. Intuitively, the shortest path between two nodes gives the closest and arguably the most important relationship between them. Therefore, we propose to use the shortest path (relation sequence) between two nodes to characterize their relationship.\footnote{For the case that there are multiple shortest paths, we randomly sample one during training and take the averaged representation during testing.} Following the sequential nature of the relation sequence, we employ recurrent neural networks with Gated Recurrent Units (GRUs) \cite{cho2014learning} to transform a relation sequence into a distributed representation. Formally, we represent the shortest relation path between node $i$ and node $j$ as $sp_{i \to j} = [e(i, k_1), e(k_1, k_2), \ldots, e(k_n, j)]$, where $e(\cdot, \cdot)$ indicates the edge label and $k_{1:n}$ are the relay nodes. We employ bi-directional GRUs for sequence encoding:
\begin{align*}
\overrightarrow{s_t} &= \text{GRU}_f( \overrightarrow{s_{t-1}}, sp_t) \\
\overleftarrow{s_t} &= \text{GRU}_b(\overleftarrow{s_{t+1}}, sp_t)
\end{align*}
The last hidden states of the forward GRU network and the backward GRU networks are concatenated to form the final relation encoding $r_{ij} = [ \overrightarrow{s_n}; \overleftarrow{s_0}]$.
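The shortest-path extraction underlying the relation encoder can be sketched with a plain breadth-first search; the function and label names below are illustrative, not the authors' implementation (ties between equally short paths are here broken by visit order, whereas the paper samples one at random during training).

```python
from collections import deque

def shortest_relation_path(edges, src, dst):
    """Return the edge-label sequence along one shortest path src -> dst.

    `edges` maps a node to a list of (neighbor, label) pairs. BFS finds
    a shortest path in an unweighted graph; the returned list is the
    relation sequence sp_{src -> dst} fed to the GRUs.
    """
    parent = {src: None}            # node -> (previous node, edge label)
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            break
        for nxt, label in edges.get(node, []):
            if nxt not in parent:
                parent[nxt] = (node, label)
                queue.append(nxt)
    if dst not in parent:
        return None                 # no path exists
    labels = []
    node = dst
    while parent[node] is not None:
        prev, label = parent[node]
        labels.append(label)
        node = prev
    return labels[::-1]
```

On the example above, the path from \texttt{want-01} to \texttt{girl} comes out as $[ARG1, ARG0]$, exactly the relation sequence the GRUs consume.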
\subsubsection{Bidirectionality} Though in theory our architecture can deal with arbitrary input graphs, the most widely adopted graphs in real problems are directed acyclic graphs (DAGs). This implies that the node embedding information will be propagated in one pre-specified direction. However, the reverse direction carries equally important information. To facilitate communication in both directions, we add reverse edges to the graph. A reverse edge connects the same two nodes as the original edge but in the opposite direction and with a reversed label. For example, we will draw a virtual edge $\texttt{believe-01}\stackrel{RARG1}{\longrightarrow}\texttt{want-01}$ according to the original edge $\texttt{want-01}\stackrel{ARG1}{\longrightarrow}\texttt{believe-01}$. For convenience, we also introduce self-loop edges for each node. These extra edges have specific labels, hence their own parameters in the network. We also introduce an extra global node into every graph, which has a direct edge to all other nodes with the special label $global$. The final representation $x_{global}$ of the global node serves as a whole-graph representation.
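A minimal sketch of this graph augmentation; the edge-triple format and the label conventions (an 'R' prefix for reversed labels, 'self', 'global') mirror the text, but the code itself is only illustrative.

```python
def augment_graph(nodes, edges):
    """Add reverse, self-loop, and global edges to a labeled graph.

    `edges` is a list of (src, label, dst) triples. Reverse edges get an
    'R'-prefixed label, self-loops the label 'self', and a fresh global
    node connects to every original node with the label 'global'.
    """
    out = list(edges)
    out += [(dst, "R" + label, src) for src, label, dst in edges]   # reverse
    out += [(n, "self", n) for n in nodes]                          # self-loops
    out += [("global", "global", n) for n in nodes]                 # global node
    return nodes + ["global"], out
```

Augmenting the single edge (\texttt{want-01}, ARG1, \texttt{believe-01}) yields the reverse edge (\texttt{believe-01}, RARG1, \texttt{want-01}) from the example above.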
\subsubsection{Absolute Position} Besides pairwise relationship, some absolute positional information can also be beneficial. For example, the root of an AMR graph serves as a rudimentary representation of the overall focus, making the minimum distance from the root node partially reflect the importance of the corresponding concept in the whole-sentence semantics. The sequence order of tokens in a dependency tree also provides complementary information to dependency relations. In order for the model to make use of the absolute positions of nodes, we add the positional embeddings to the input embeddings at the bottom of the encoder stacks. For example, \texttt{want-01} in Fig \ref{example} is the root node of the AMR graph, so its index should be 0. Notice we denote the index of the global node as $0$ as well.
\begin{table*}[t]
\small
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c}
Dataset & \#train & \#dev & \#test & \#edge types & \#node types & avg \#nodes& avg \#edges & avg diameter\\
\hline
LDC2015E86 & 16,833 & 1,368 & 1,371 & 113& 18735 &17.34&17.53&6.98\\
LDC2017T10 & 36,521 & 1,368 & 1,371 & 116& 24693 &14.51&14.62&6.15\\
\hline
English-Czech & 181,112 & 2,656 & 2,999 &46&78017&23.18&22.18&8.36\\
English-German & 226,822 & 2,169 & 2,999 &46&87219&23.29&22.29&8.42
\end{tabular}
\caption{Data statistics of all four datasets. \#train/dev/test indicates the number of instances in each set; avg \#nodes/\#edges/diameter denotes the average number of nodes and edges and the average diameter of a graph.}
\label{data}
\end{table*}
\subsection{Sequence Decoder}
Our sequence decoder follows the spirit of the sequential Transformer decoder. The decoder yields the natural language sequence by computing a sequence of hidden states step by step. One distinct characteristic is that we use the global graph representation $x_{global}$ to initialize the hidden state at each time step. The hidden state $h_t$ at each time step $t$ is then updated by interleaving multiple rounds of attention over the output of the encoder (node embeddings) and attention over previously-generated tokens (token embeddings). Both are implemented by the multi-head attention mechanism. $x_{global}$ is removed when performing the sequence-to-graph attention.
\subsubsection{Copy mechanism}
To address the data sparsity issue in token prediction, we include a copy mechanism \cite{gu-etal-2016-incorporating} in similar spirit to most recent works. Concretely, a single-head attention is computed based on the decoder state $h_t$ and the node representation $x_{1:n}$, where $a_t^i$ denotes the attention weight of the node $v_i$ in the current time step $t$. Our model can either directly copy the type name of a node (node label) or generate from a pre-defined vocabulary $V$. Formally, the prediction probability of a token $y$ is given by:
\begin{align*}
P(y|h_t) = P(gen|h_t) gen(y|h_t) +P(copy|h_t) \sum_{i \in S(y)} a_t^i
\end{align*}
where $S(y)$ is the set of nodes that have the same surface form as $y$. $P(gen|h_t)$ and $P(copy|h_t)$ are computed by a single-layer neural network with softmax activation, and $gen(y|h_t) = \exp(w_y^T h_t)/ \sum_{y'\in V} \exp(w_{y'}^T h_t)$, where $w_y$ (for $y \in V$) denotes the model parameters. The copy mechanism facilitates the generation of dates, numbers, and named entities in both the AMR-to-text generation and machine translation experiments.
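A minimal sketch of the mixing computation above, assuming pre-computed generation logits, copy attention weights $a_t$, and the gate $P(gen|h_t)$; all variable names are ours, not the authors' code.

```python
import numpy as np

def copy_generate_distribution(gen_logits, copy_attn, p_gen, node_to_token):
    """Mix the generation and copy distributions into P(y | h_t).

    gen_logits: scores over the vocabulary; copy_attn: attention weights
    a_t over the n input nodes (non-negative, summing to 1); p_gen is
    P(gen | h_t), so P(copy | h_t) = 1 - p_gen; node_to_token[i] gives
    the vocabulary index of node i's surface form.
    """
    gen = np.exp(gen_logits - gen_logits.max())
    gen /= gen.sum()                          # softmax over the vocabulary
    mixed = p_gen * gen
    for i, a in enumerate(copy_attn):         # scatter-add the copy mass
        mixed[node_to_token[i]] += (1.0 - p_gen) * a
    return mixed
```

The scatter-add realizes the sum over $S(y)$: nodes sharing a surface form accumulate their attention mass onto the same vocabulary entry.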
\begin{table}[t]
\centering
\small
\begin{tabular}{c|c|c}
model component& hyper-parameter& value\\
\hline
\multirow{4}{*}{char-level CNN} & number of filters & 256 \\
& width of filters & 3 \\
& char embedding size & 32 \\
& final hidden size & 128 \\
\hline
\multirow{3}{*}{Embeddings} & node embedding size & 300 \\
&edge embedding size& 200\\
&token embedding size&300\\
\hline
\multirow{3}{*}{Multi-head attention} & number of heads & 8 \\
& hidden state size & 512 \\
& feed-forward hidden size & 1024
\end{tabular}
\caption{Hyper-parameters settings.}
\label{hyper}
\end{table}
\begin{table*}[t]
\small
\centering
\begin{tabular}{c|c|c|c|c|c|c}
\multirow{2}{*}{Model}& \multicolumn{3}{c|}{LDC2015E86} & \multicolumn{3}{c}{LDC2017T10}\\
\cline{2-7}
& \textsc{BLEU} & \textsc{chrF++} & \textsc{Meteor}& \textsc{BLEU} & \textsc{chrF++} & \textsc{Meteor}\\
\hline
\newcite{song-etal-2016-amr}$\dag$ & 22.4 &-&-&-&-&-\\
\newcite{flanigan-etal-2016-generation}$\dag$ &23.0&-&-&-&-&-\\
\newcite{pourdamghani2016generating}$\dag$ & 26.9 &-&-&-&-&-\\
\newcite{song-etal-2017-amr}$\dag$ &25.6&-&-&-&-&-\\
\hline
\newcite{konstas-etal-2017-neural} &22.0&-& -& -&-&-\\
\newcite{cao-clark-2019-factorising}$\ddag$ &23.5 &-&-&26.8&-&-\\
\hline
\newcite{song-etal-2018-graph} & 23.3& -&-&24.9&-&-\\
\newcite{beck-etal-2018-graph}& -& -&-&23.3&50.4&-\\
\newcite{damonte-cohen-2019-structural} &24.4&-&23.6&24.5&-&24.1\\
\newcite{guo2019densely} &25.7&54.5$^*$&31.5$^*$&27.6&57.3&34.0$^*$ \\
\hline
Ours &\textbf{27.4}&\textbf{56.4}&\textbf{32.9}&\textbf{29.8}&\textbf{59.4}&\textbf{35.1}
\end{tabular}
\caption{Main results on AMR-to-text generation. Numbers with $^*$ were obtained through contact with the authors. - denotes that the result is not provided in the corresponding paper.}
\label{main-amr}
\end{table*}
\begin{table*}[t]
\centering
\small
\begin{tabular}{c|c|c|c|c|c}
\multirow{2}{*}{Model}& \multirow{2}{*}{Type} &\multicolumn{2}{c|}{English-German} & \multicolumn{2}{c}{English-Czech}\\
\cline{3-6}
&&\textsc{BLEU} & \textsc{chrF++} & \textsc{BLEU} & \textsc{chrF++} \\
\hline
\newcite{bastings-etal-2017-graph} &Single &16.1&-&9.6&-\\
\newcite{beck-etal-2018-graph} &Single & 16.7& 42.4&9.8&33.3\\
\newcite{guo2019densely} &Single &19.0&44.1&12.1&37.1 \\
\hline
\newcite{beck-etal-2018-graph} &Ensemble & 19.6& 45.1&11.7&35.9\\
\newcite{guo2019densely} &Ensemble &20.5&45.8&13.1& 37.8\\
\hline
Ours &Single&\textbf{21.3}&\textbf{47.9}&\textbf{14.1}&\textbf{41.1}
\end{tabular}
\caption{Main results on syntax-based machine translation.}
\label{main-nmt}
\end{table*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=0.99\linewidth]{graph_size.pdf}
\caption{}
\label{graph_size}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=0.99\linewidth]{graph_diameter.pdf}
\caption{}
\label{graph_diameter}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=0.99\linewidth]{graph_re.pdf}
\caption{}
\label{graph_re}
\end{subfigure}
\caption{\textsc{chrF++} scores with respect to (a) the graph size, (b) the graph diameter, and (c) the number of reentrancies.}
\end{figure*}
\section{Experiments}
\label{sec-exp}
We assess the effectiveness of our models on two typical graph-to-sequence learning tasks, namely AMR-to-text generation and syntax-based machine translation (MT). Following previous work, the results are mainly evaluated by \textsc{BLEU} \cite{papineni2002bleu} and \textsc{chrF++} \cite{popovic2017chrf++}. Specifically, we use case-insensitive scores for AMR and case-sensitive BLEU scores for MT.
\subsection{AMR-to-text Generation}
Our first application is language generation from AMR, a semantic formalism that represents sentences as rooted DAGs \cite{banarescu2013abstract}. For this AMR-to-text generation task, we use two benchmarks, namely the LDC2015E86 dataset and the LDC2017T10 dataset. The first block of Table \ref{data} shows the statistics of the two datasets. Similar to \newcite{konstas-etal-2017-neural}, we apply entity simplification and anonymization in the preprocessing steps and restore them in the postprocessing steps.
The graph encoder uses randomly initialized node embeddings as well as the output from a learnable CNN with character embeddings as input. The sequence decoder uses randomly initialized token embeddings and another char-level CNN. Model hyperparameters are chosen by a small set of experiments on the development set of LDC2017T10. The detailed settings are listed in Table \ref{hyper}. During testing, we use a beam size of $8$ for decoding. To mitigate overfitting, we also apply dropout \cite{srivastava2014dropout} with a drop rate of $0.2$ between different layers. We replace input node tags with a special UNK token at a rate of $0.33$. Parameter optimization is performed with the Adam optimizer \cite{kingma2014adam} with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The learning rate schedule of \newcite{vaswani2017attention} is adopted in our experiments.\footnote{Code available at \url{https://github.com/jcyk/gtos}.} For computation efficiency, we gather all distinct shortest paths in a training/testing batch, and encode them into vector representations by the recurrent relation encoding procedure as described above.\footnote{This strategy reduces the number of relation sequences to encode from $O(mn^2)$ to a stable number when a large batch size $m$ is used.}
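The learning rate schedule of \newcite{vaswani2017attention} referenced above can be written as a one-liner. The warmup length of 4,000 steps is the value from the original Transformer paper and is an assumption here, since the text does not state it; $d_{model}=512$ matches Table \ref{hyper}.

```python
def noam_lr(step, d_model=512, warmup=4000):
    """Inverse-square-root schedule of Vaswani et al. (2017):
    linear warmup for `warmup` steps, then decay as step^{-0.5}.
    warmup=4000 is an assumed default, not stated in the text."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```

The learning rate rises linearly to its peak at `warmup` steps and decays afterwards, which is why no separate decay hyperparameter appears in Table \ref{hyper}.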
We run comparisons on systems without ensembling or additional silver data. Specifically, the comparison methods can be grouped into three categories: (1) feature-based statistical methods \cite{song-etal-2016-amr,pourdamghani2016generating,song-etal-2017-amr,flanigan-etal-2016-generation}; (2) sequence-to-sequence neural models \cite{konstas-etal-2017-neural,cao-clark-2019-factorising}, which use linearized graphs as inputs; (3) recent works using different variants of graph neural networks for encoding graph structures directly \cite{song-etal-2018-graph,beck-etal-2018-graph,damonte-cohen-2019-structural,guo2019densely}. The results are shown in Table \ref{main-amr}. For both datasets, our approach substantially outperforms all previous methods. On the LDC2015E86 dataset, our method achieves a BLEU score of 27.4, outperforming the previous best-performing neural model \cite{guo2019densely} by 1.7 BLEU points. Also, our model becomes the first neural model that surpasses the strong non-neural baseline established by \newcite{pourdamghani2016generating}. It is worth noting that those traditional methods marked with $\dag$ train their language models on the external Gigaword corpus, thus they possess an additional advantage of extra data. On the LDC2017T10 dataset, our model establishes a new record BLEU score of 29.8, improving over the state-of-the-art sequence-to-sequence model \cite{cao-clark-2019-factorising} by 3.0 points and the state-of-the-art GNN-based model \cite{guo2019densely} by 2.2 points. The results are even more remarkable since the model of \newcite{cao-clark-2019-factorising} (marked with $\ddag$) uses constituency syntax from an external parser. Similar trends hold for the additional metrics \textsc{chrF++} and \textsc{Meteor} \cite{denkowski:lavie:meteor-wmt:2014}.
Those results suggest that current graph neural networks cannot make full use of the AMR graph structure, and our Graph Transformer provides a promising alternative.
\subsection{Syntax-based Machine Translation}
Our second evaluation is syntax-based machine translation, where the input is a source-language dependency tree and the output is a plain target-language string. We employ the same data and settings as \newcite{bastings-etal-2017-graph}, using both the English-German and the English-Czech datasets from the WMT16 translation task.\footnote{\url{http://www.statmt.org/wmt16/translation-task.html.}} The English sentences are parsed after tokenization to generate the dependency trees on the source side using SyntaxNet \cite{alberti2017syntaxnet}.\footnote{\url{https://github.com/tensorflow/models/tree/master/syntaxnet}} On the Czech and German sides, texts are tokenized using the Moses tokenizer.\footnote{\url{https://github.com/moses-smt/mosesdecoder.}} Byte-pair encodings \cite{sennrich-etal-2016-neural} with 8,000 merge operations are used to obtain subwords. The second block of Table \ref{data} shows the statistics for both datasets. For the model configuration, we simply re-use the settings obtained in our AMR-to-text experiments.
Table \ref{main-nmt} presents the results in comparison to existing methods. On the English-to-German translation task, our model achieves a BLEU score of 21.3, outperforming all of the previously published single models by a large margin of 2.3 BLEU points. On the English-to-Czech translation task, our model also outperforms the best previously reported single models by an impressive margin of 2 BLEU points. In fact, our single model already outperforms previous state-of-the-art models that use ensembling. The advantages of our method are also verified by the \textsc{chrF++} metric.
An important point about these experiments is that we did not tune the architecture: we simply employed the same model in all experiments, only adjusting the batch size to the different dataset sizes. We speculate that even better results would be obtained by tuning the architecture to individual tasks. Nevertheless, we still obtained improved performance over previous works, underlining the generality of our model.
\section{More Analysis}
The overall scores show a great advantage of the Graph Transformer over existing methods, including the state-of-the-art GNN-based models. However, they do not shed light on how this advantage is achieved. In order to further reveal the source of the performance gain, we perform a series of analyses based on different characteristics of graphs. For these analyses, we use sentence-level \textsc{chrF++} scores, and take their macro average when needed. All experiments are conducted on the test set of LDC2017T10.
\begin{figure}[t]
\centering
\includegraphics[scale=0.41]{head.pdf}
\caption{The average distance for maximum attention for each head.}
\label{head}
\end{figure}
\subsubsection{Graph Size} To assess the model's performance for different sizes of graphs, we group graphs into four classes and show the curves of \textsc{chrF++} scores in Figure \ref{graph_size}. The results are contrasted with the state-of-the-art GNN-based model of \newcite{guo2019densely}, denoted as Guo'19. As seen, the performance of both models decreases as the graph size increases. This is expected, since a larger graph often contains more complex structure, and the interactions between graph elements are more difficult to capture. The gap between ours and Guo'19 widens for relatively larger graphs, while for small graphs both models give similar performance. This result demonstrates that our model is better able to deal with complicated graphs. For extremely large graphs, the performance of both models drops clearly, yet ours is still slightly better.
\subsubsection{Graph Diameter} We then study the impact of graph diameter.\footnote{The diameter of a graph is defined as the length of the longest shortest path between two nodes.} Graphs with large diameters require interactions between nodes that are distant from each other. We conjecture that this causes severe difficulties for GNN-based models because they rely solely on local communication. Figure \ref{graph_diameter} confirms our hypothesis, as the curve of the GNN-based model shows a clear downward slope. In contrast, our model has more stable performance, and the gap between the two curves also illustrates the superiority of our model in capturing long-distance dependencies.
\subsubsection{Number of Reentrancies} We also study the ability to handle reentrancies, where the same node has multiple parent nodes (or, for AMR, the same concept participates in multiple relations). The recent work of \newcite{damonte-cohen-2019-structural} has identified reentrancies as one of the most difficult aspects of AMR structure. We bin the number of reentrancies occurring in a graph into four classes and plot Fig. \ref{graph_re}. It can be observed that the gap between the GNN-based model and the Graph Transformer becomes noticeably wide when there is more than one reentrancy. Beyond that point, our model is consistently better than the GNN-based model, maintaining a margin of over $1$ \textsc{chrF++} point.
\subsubsection{How Far Does Attention Look?} The Graph Transformer shows a strong capacity for processing complex and large graphs. We attribute this success to the global communication design, as it provides opportunities for direct communication over long distances. A natural and interesting question is how well the model makes use of this property. To answer this question, following \newcite{voita-etal-2019-analyzing}, we study the attention distribution of each attention head. Specifically, for each head we record the distance at which its maximum attention weight is assigned, and call it the attention distance. Fig. \ref{head} shows the averaged attention distance after we run our model on the development set of LDC2017T10. We observe that nearly half of the attention heads have an average attention distance larger than $2$. The number of these far-sighted heads generally increases as layers go deeper. Interestingly, the longest-reaching head (layer1-head5) and the shortest-sighted head (layer1-head2) coexist in the very first layer, the former having an average distance over 5.
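The attention-distance measurement can be sketched as follows, assuming access to a head's attention matrix and the pairwise node distances; this is a reconstruction of the described analysis, not the authors' code.

```python
import numpy as np

def mean_attention_distance(attn, dist):
    """Average graph distance at which one head puts its maximum weight.

    attn: (n, n) attention weights of a single head; dist: (n, n)
    pairwise shortest-path lengths between nodes. For every query node
    we look up the distance to the key receiving maximal attention,
    then average over queries.
    """
    targets = attn.argmax(axis=1)   # key with maximal weight, per query
    return float(dist[np.arange(len(attn)), targets].mean())
```

Averaging this quantity over the development set, separately per head, yields the per-head values plotted in Fig. \ref{head}.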
\section{Conclusions}
In this paper, we presented the Graph Transformer, the first graph-to-sequence learning model based entirely on attention. Different from previous recurrent models, which require linearization of the input graph, and previous graph neural network models, which restrict direct message passing to the first-order neighborhood, our model enables global node-to-node communication. With the Graph Transformer, we achieve the new state of the art on two typical graph-to-sequence generation tasks with four benchmark datasets.
\label{sec:intro} The Groenewold-Moyal plane is the algebra ${\cal
A}_\theta({\mathbb{R}}^{d+1})$ of functions on ${\mathbb{R}}^{d+1}$ with the
$\ast$-product $\alpha \ast_\theta \beta$ between functions
$\alpha$ and $\beta$ as the product law, where
\begin{eqnarray}
\alpha \ast_\theta \beta \;(x) &=& \left[ \alpha \exp\left(\frac{i}{2}
\overleftarrow{\partial_\mu} \theta^{\mu \nu}
\overrightarrow{\partial_\nu} \right) \beta \right] (x) ~,
\label{starprod} \\
\theta^{\mu \nu}&=&-\theta^{\nu \mu} \in {\mathbb{R}} ~, \ x=
(x^0,x^1,\ldots,x^d) ~. \nonumber
\end{eqnarray}
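For polynomial functions the exponential series in (\ref{starprod}) terminates, so the $\ast$-product can be checked symbolically. The sketch below implements a truncated version of (\ref{starprod}) in two dimensions with $\theta^{01} = \theta = -\theta^{10}$ and exhibits the hallmark noncommutativity $x \ast_\theta y - y \ast_\theta x = i\theta$; the truncation order and symbol names are our illustrative choices.

```python
import sympy as sp

x, y, theta = sp.symbols('x y theta', real=True)

def star(f, g, order=4):
    """Truncated Groenewold-Moyal star product in two dimensions.

    For polynomial f and g the bidifferential series terminates, so a
    sufficiently high truncation order reproduces the exact product.
    """
    total = sp.S.Zero
    for n in range(order + 1):
        term = sp.S.Zero
        for k in range(n + 1):
            # binomial expansion of the bidifferential operator
            # (d_x (x) d_y - d_y (x) d_x)^n applied to f (x) g
            term += (sp.binomial(n, k) * (-1) ** (n - k)
                     * sp.diff(f, (x, k), (y, n - k))
                     * sp.diff(g, (x, n - k), (y, k)))
        total += (sp.I * theta / 2) ** n / sp.factorial(n) * term
    return sp.expand(total)
```

On linear functions only the first-order term survives, so $x \ast_\theta y = xy + i\theta/2$ and the $\ast$-commutator of the coordinates is $i\theta$.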
The Poincar\'{e} group $\mathcal{P}$ acts on ${\mathbb{R}}^{d+1}$ and hence
on its smooth functions $C^{\infty}({\mathbb{R}}^{d+1})$ regarded just as a
vector space. If $g \in \mathcal{P}$ and $g: x \rightarrow g x$,
then for $\gamma \in C^{\infty}({\mathbb{R}}^{d+1})$
\begin{equation}\label{gaction}
( g \gamma) (x) = \gamma (g^{-1} x) ~.
\end{equation}
However, in general
\begin{equation}
(g \alpha) \ast_\theta (g \beta) \neq g (\alpha \ast_\theta \beta)
\label{gastarb} ~,
\end{equation}
so that this action of $\mathcal{P}$ is not an automorphism of
${\cal A}_\theta({\mathbb{R}}^{d+1})$.
Similar remarks can be made generically about any group which acts on
${\mathbb{R}}^{d+1}$ and in particular about the diffeomorphism group
$\mathcal{D}$. Only a limited group of transformations, such as
translations, gives the equality in (\ref{gastarb}). Nevertheless,
there is a way to avoid this limitation with $\mathcal{D}$. It
involves introducing a new deformed coproduct $\Delta_\theta$ on
$\mathcal{D}$. The revival of this idea in recent times is due to
\cite{Chaichian:2004za,aschieri,Dimitrijevic:2004rf,Oeckl:2000eg}.
But its origins can be traced back to Drinfel'd \cite{drinfeld} in
mathematics. This Drinfel'd twist leads naturally to deformed
$R$-matrices and statistics for quantum groups, as discussed by Majid
\cite{majid}. Subsequently, Fiore and Schupp \cite{fiore1} and Watts
\cite{watts1,watts2} explored the significance of the Drinfel'd twist
and $R$-matrices while Fiore \cite{fioresolo1, fioresolo2} and Fiore
and Schupp \cite{fiore2}, Oeckl \cite{Oeckl:2000eg} and Grosse et al
\cite{gms} studied the importance of $R$-matrices for
statistics. Oeckl \cite{Oeckl:2000eg} and Grosse et al \cite{gms} also
developed quantum field theories using different and apparently
inequivalent approaches, the first on the Moyal plane and the second
on the $q$-deformed fuzzy sphere. Recent work, including ours, has
significant overlap with the earlier literature. We share many
features in particular with \cite{gms,Oeckl:2000eg}.
In \cite{aschieri,Dimitrijevic:2004rf} the authors focused on
$\mathcal{D}$ and developed Riemannian geometry and gravity theories
based on $\Delta_\theta$, while \cite{Chaichian:2004za} focused on the
Poincar\'{e} subgroup $\mathcal{P}$ of $\mathcal{D}$ and explored the
consequences of $\Delta_\theta$ for quantum field theories. Twisted
conformal symmetry was discussed by \cite{matlock}. We explain the
basics of all this work in Section 2.
In Section 3, we discuss the impact of the deformed tensor product on
the Bose and Fermi commutation relations. In fact, they are also
deformed. We give an explicit formula for the new
creation-annihilation operators in terms of the standard ($\theta^{\mu
\nu}=0$) ones. State vectors can still be classified by the
irreducible representations of the permutation group, but the action
of the latter on the Hilbert space is deformed as well.
Previous research on the spin-statistics theorem on ${\cal
A}_\theta({\mathbb{R}}^{d+1})$ is due to Alvarez-Gaum\'{e} and Vazquez-Mozo
\cite{Alvarez-Gaume:2003mb}, but they do not use the deformed
coproduct on $\mathcal{P}$.
In Section 4, we construct the second quantization formalism
corresponding to the deformed commutation relations, introducing also
the corresponding symmetry under permutations of physical states.
In Section 5, we argue that excitations in the quantum Hall effect
should be described by deformed statistics.
Finally, in Section 6, we discuss the possible phenomenological
implications of the deformed commutation relations, considering in
particular the case of systems of fermionic identical particles. We
show that there exist state vectors of the system which violate the
Pauli exclusion principle. There are quite stringent tests on Pauli
violating transitions in nuclear (see for example \cite{sk,borexino}
and references therein) and atomic systems \cite{xrays}, and crystals
\cite{greenberg}, so that the energy scale associated with
$\theta^{\mu \nu}$ (whose dimension is inverse squared energy) can be
severely constrained. This issue will be studied in more detail later.
In another work \cite{bpq}, it is proved that UV-IR mixing is entirely
absent for quantum field theories on ${\cal A}_\theta ({\mathbb
R}^{d+1})$ with the deformed statistics.
\section{The Deformed Coproduct}
\subsection{\it{Tensor Product of Representations}}
Suppose that a group $G$ acts on a complex vector space $V$ by a
representation $\rho$. We denote this action by
\begin{equation}
v \rightarrow \rho(g) v ~,
\label{rhov}
\end{equation}
for $g \in G$ and $v \in V$. Then the group algebra $G^*$ also acts on
$V$. A typical element of $G^*$ is
\begin{equation}
\int dg \,\alpha(g)\, g, \,\,\,\,\, \alpha(g) \in {\mathbb{C}} ~,
\end{equation}
where $dg$ is a measure on $G$. Its action is
\begin{equation}
v \rightarrow \int dg \,\alpha(g) \, \rho (g) \, v ~.
\label{actgstar}
\end{equation}
Both $G$ and $G^*$ act on $V \otimes_{\mathbb{C}} V$, the tensor product of
$V$'s over ${\mathbb{C}}$, as well. These actions are usually taken to be
\begin{equation}
v_1 \otimes v_2 \rightarrow \left[ \rho(g) \otimes \rho(g) \right]
(v_1 \otimes v_2 ) = \rho(g) v_1 \otimes \rho(g) v_2 ~,
\label{acttens}
\end{equation}
and
\begin{equation}
v_1 \otimes v_2 \rightarrow
\int dg \, \alpha(g) \, \rho(g) v_1 \otimes \rho(g) v_2
\end{equation}
respectively, for $v_1, v_2 \in V$.
In Hopf algebra theory, the action of $G$ and $G^*$ on tensor products
is formalized using the coproduct $\Delta$, a homomorphism from $G^*$
to $G^* \otimes G^*$, which on restriction to $G$ gives a homomorphism
from $G$ to $G^* \otimes G^*$. This restriction specifies $\Delta$ on
all of $G^*$ by linearity. Thus if
\begin{eqnarray}
&& \Delta: \, g \rightarrow \Delta(g) ~, \\
&& \Delta(g_1)\Delta(g_2)=\Delta(g_1g_2) ~,
\end{eqnarray}
we have
\begin{equation}
\Delta \left(\int dg \, \alpha(g) \, g \right) = \int dg \, \alpha(g)
\, \Delta(g) ~.
\end{equation}
For the familiar choice $\Delta(g) = g \otimes g$, the action
(\ref{acttens}) can be written as
\begin{equation}
v_1 \otimes v_2 \rightarrow \left[\rho \otimes \rho \right]
\Delta(g) v_1 \otimes v_2 ~.
\label{coprod}
\end{equation}
But any choice of coproduct will do to define an action of $G$ on $V
\otimes V$ using (\ref{coprod}).
Likewise, if $G$ acts on vector spaces $V$ and $W$ by representations
$\rho$ and $\sigma$, respectively, and $\Delta$ is a coproduct on $G$,
$G$ can act on $V \otimes W$ according to
\begin{equation}
v \otimes w \rightarrow \left[\rho \otimes \sigma \right] \Delta(g) v
\otimes w ~,
\end{equation}
for $v \in V$, $w \in W$. This action extends by linearity to an
action of $G^*$.
Not all choices of $\Delta$ are equivalent. In particular, the
irreducible representations (IRR's) which can occur in the reduction
of $\left[ \rho \otimes \sigma \right]$ can depend upon
$\Delta$. Examples of this sort perhaps occur for the Poincar\'{e}
group.
\subsection{\it{The Carrier of Group Action is an Algebra}}
Until now we assumed only that $V,W$ are vector spaces. Suppose next
that $V$ is an algebra ${\cal A}$ (over ${\mathbb{C}}$). In that case, as
discussed by \cite{Chaichian:2004za,aschieri} there is also a
compatibility condition on $\Delta$. It comes about as follows.
As ${\cal A}$ is an algebra, we have a rule for taking products of
elements of ${\cal A}$. That means that there is a multiplication map
\begin{eqnarray}
m: {\cal A} \otimes {\cal A} \rightarrow {\cal A} ~, \\
\alpha \otimes \beta
\rightarrow m (\alpha \otimes \beta) ~, \nonumber \label{multmap}
\end{eqnarray}
for $\alpha,\beta \in {\cal A}$, the product $\alpha \beta$ being $m
(\alpha \otimes \beta)$.
It is now essential that $\Delta$ be compatible with $m$. That means
that if we transform $\alpha \otimes \beta$ by $g$-action and then
apply $m$, it should be equal to the $g$-transform of $m (\alpha
\otimes \beta)$:
\begin{equation}
m \left( (\rho \otimes \rho) \Delta(g) \left(\alpha \otimes \beta
\right)\right)=
\rho(g) m ( \alpha \otimes \beta) ~.
\label{compatib}
\end{equation}
This result is encoded in the commutative diagram
\begin{equation}
\begin{array}{ccc}\label{diag}
\alpha \otimes \beta & \longrightarrow & ( \rho \otimes \rho ) \Delta
(g) \alpha \otimes \beta \\
& & \\ m \,\, \downarrow & & \downarrow \,\, m \\ & & \\ m(\alpha
\otimes
\beta) &
\longrightarrow & \rho(g) m (\alpha \otimes \beta)
\end{array}
\end{equation}
If such a $\Delta$ can be found, $G$ is an automorphism of ${\cal
A}$. In the absence of such a $\Delta$, $G$ does not act on ${\cal
A}$.
\subsection{\it{The Case of the Groenewold-Moyal Plane}}
In the Groenewold-Moyal plane, the multiplication map depends on
$\theta^{\mu \nu}$ and will be denoted by $m_\theta$. It is defined by
\begin{equation}
m_\theta ( \alpha \otimes \beta) = m_0 \left( e^{-\frac{i}{2} (-i
\partial_\mu) \theta^{\mu \nu} \otimes (-i \partial_\nu) } \alpha
\otimes \beta \right) ~,
\label{multmoyal}
\end{equation}
where $m_0$ is the point-wise multiplication
\begin{equation}
m_0 ( \gamma \otimes \delta) (x) := \gamma(x) \delta (x)
\end{equation}
of any two functions $\gamma$ and $\delta$.
We introduce the notation
\begin{equation}
F_\theta = e^{-\frac{i}{2} (-i \partial_\mu) \theta^{\mu \nu}
\otimes (-i \partial_\nu) }
\label{ftheta}
\end{equation}
for the factor appearing in (\ref{multmoyal}) so that
\begin{equation}
m_\theta ( \alpha \otimes \beta) = m_0 \left( F_\theta \alpha \otimes \beta
\right) ~.
\label{multmoyal2}
\end{equation}
Let $g\in \mathcal{D}$ act on ${\mathbb{R}}^{d+1}$ by $x \rightarrow g x$ and
hence on functions by $\alpha \rightarrow \rho(g) \alpha$ where the
representation $\rho$ is canonical:
\begin{equation}
(\rho(g) \alpha) (x) = \alpha ( g^{-1} x) ~.
\label{canonrepr}
\end{equation}
(This action was denoted in Eq.(\ref{gaction}) omitting the symbol
$\rho$.) The important observation is that it can act on ${\cal
A}_\theta({\mathbb{R}}^{d+1}) \otimes {\cal A}_\theta({\mathbb{R}}^{d+1})$ as well
compatibly with $m_\theta$ if a new coproduct $\Delta_\theta$ is used,
where
\begin{equation}
\Delta_\theta (g) =e^{\frac{i}{2} P_\mu {\otimes} \theta^{\mu \nu} P_\nu }
(g {\otimes} g) e^{-\frac{i}{2} P_\mu {\otimes} \theta^{\mu \nu} P_\nu
} = \hat{F}^{-1}_\theta (g \otimes g) \hat{F}_\theta ~,
\label{newcoprod}
\end{equation}
$P_\mu$ being the generators of translations. On functions, that is,
in the representation $\rho$, $P_\mu$ becomes $-i \partial_\mu$, so
that the two factors in (\ref{newcoprod}) can be expressed in terms of
$F_\theta$ and its inverse.
We can check that $\Delta_\theta$ is compatible with $m_\theta$ as follows
\begin{eqnarray}
m_\theta \left( (\rho \otimes \rho) \Delta_\theta(g) ( \alpha \otimes
\beta ) \right) &=&
m_0 \left( F_\theta \, F_\theta^{-1} \left( \rho(g) \otimes
\rho(g) \right) F_\theta \, ( \alpha \otimes \beta ) \right) \nonumber \\
&=& m_0 \left( \left( \rho(g) \otimes \rho(g) \right) F_\theta \, (
\alpha \otimes \beta ) \right) \nonumber \\
&=& \rho(g) \left( \alpha \ast_\theta \beta \right), \quad
\alpha,\beta \in {\cal A}_\theta({\mathbb{R}}^{d+1})
\label{proofcomp}
\end{eqnarray}
as required. The last step uses the fact that $m_0$ intertwines
$\rho(g) \otimes \rho(g)$ with $\rho(g)$, together with
(\ref{multmoyal2}).
The action of the Poincar\'{e} group on tensor products of plane waves
is simple. For the momentum $p=(p_0, p_1,...p_d) \in {\mathbb{R}}^{d+1}$, let
${\bf e}_p \in {\cal A}_\theta({\mathbb{R}}^{d+1})$ where
\begin{equation}
{\bf e}_p(x) = e^{i p {\cdot} x}, \quad p {\cdot} x = p_\mu x^\mu ~.
\label{plane}
\end{equation}
In the case of the Poincar\'{e} group, if $\exp( i P {\cdot} a) $ is a
translation,
\begin{equation}
(\rho \otimes \rho)\Delta_\theta \left( e^{i P {\cdot} a} \right)
{\bf e}_p \otimes {\bf e}_q = e^{i (p+q) {\cdot} a} {\bf e}_p \otimes
{\bf e}_q ~,
\label{planetrans}
\end{equation}
while if $\Lambda$ is a Lorentz transformation
\begin{equation}
(\rho \otimes \rho)\Delta_\theta(\Lambda) {\bf e}_p \otimes {\bf e}_q
= \left[e^{\frac{i}{2}(\Lambda p)_\mu \theta^{\mu \nu}
(\Lambda q)_\nu } e^{-\frac{i}{2} p_\mu \theta^{\mu \nu}
q_\nu } \right] {\bf e}_{\Lambda p} \otimes {\bf e}_{\Lambda q} ~.
\label{planelorentz}
\end{equation}
Thus the coproduct on translations is not affected while the coproduct
for the Lorentz group is changed.
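The covariance that (\ref{newcoprod}) achieves can be checked numerically on plane waves: by (\ref{starprod}), $\mathbf{e}_p \ast_\theta \mathbf{e}_q = e^{-\frac{i}{2} p_\mu \theta^{\mu\nu} q_\nu}\, \mathbf{e}_{p+q}$, and the extra factor in (\ref{planelorentz}) exactly compensates the change of this phase under a Lorentz transformation. The sketch below uses arbitrary numerical inputs of our choosing (a $1{+}1$-dimensional boost and an antisymmetric $\theta$).

```python
import numpy as np

def star_phase(p, q, theta):
    """Coefficient in e_p *_theta e_q = exp(-i/2 p.theta.q) e_{p+q},
    the closed form of the star product on plane waves."""
    return np.exp(-0.5j * p @ theta @ q)

a = 0.3                                             # rapidity of the boost
L = np.array([[np.cosh(a), np.sinh(a)],
              [np.sinh(a), np.cosh(a)]])            # Lorentz boost Lambda
theta = np.array([[0.0, 0.7], [-0.7, 0.0]])         # antisymmetric theta
p, q = np.array([1.0, 0.2]), np.array([-0.4, 0.5])

# The twisted action multiplies e_{Lp} (x) e_{Lq} by this factor:
twist = np.exp(0.5j * (L @ p) @ theta @ (L @ q)) * star_phase(p, q, theta)
# Applying m_theta then gives twist * star_phase(Lp, Lq) e_{L(p+q)},
# whose coefficient should match that of rho(Lambda)(e_p * e_q):
coeff = twist * star_phase(L @ p, L @ q, theta)
```

Here `coeff` equals `star_phase(p, q, theta)`, i.e., the twisted Lorentz action commutes with the star product, which is the compatibility condition (\ref{compatib}) on plane waves.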
\subsection{\it{Action on Fourier Coefficients}}
If $\varphi$ is a scalar field, we can regard it as an element of
${\cal A}_\theta({\mathbb{R}}^{d+1})$. If its Fourier representation is
\begin{equation}
\varphi = \int d \mu(p) \, \tilde{\varphi}(p) \, {\bf e}_p ~,
\label{field}
\end{equation}
where $d \mu(p)$ is a Lorentz-invariant measure, then
\begin{eqnarray}
\rho(\Lambda) \varphi &=& \int d \mu(p) \, \tilde{\varphi}(p) \, {\bf
e}_{\Lambda p} = \int d \mu(p) \, \tilde{\varphi}(\Lambda^{-1} p) \, {\bf
e}_{p} ~, \\
\rho \left( e^{i P {\cdot} a} \right) \varphi &=& \int d \mu(p) \,
e^{i p {\cdot} a}\tilde{\varphi}(p) \, {\bf e}_{p} ~.
\end{eqnarray}
Thus the representation $\tilde{\rho}$ of the Poincar\'{e} group on
$\tilde{\varphi}$ is specified by
\begin{eqnarray}
\left(\tilde{\rho}(\Lambda) \tilde{\varphi} \right) (p) &=&
\tilde{\varphi}(\Lambda^{-1} p) ~, \\
\left(\tilde{\rho}\left( e^{i P {\cdot} a} \right) \tilde{\varphi} \right) (p)
&=& e^{i p {\cdot} a} \tilde{\varphi}(p) ~.
\end{eqnarray}
If $\chi$ is another field of ${\cal A}_\theta({\mathbb{R}}^{d+1})$,
\begin{equation}
\chi = \int d \mu(p) \, \tilde{\chi}(p) \, {\bf e}_p ~,
\label{field2}
\end{equation}
then
\begin{equation}
\varphi \otimes \chi = \int d \mu(p) \, d \mu(q) \, \tilde{\varphi}(p)
\tilde{\chi}(q) \, {\bf e}_{p} \otimes {\bf e}_{q} \label{phitimeschi}
~.
\end{equation}
Using (\ref{planetrans}), we see that the action of translations on
$\tilde{\varphi} \otimes \tilde{\chi}$ is
\begin{equation}
\Delta_\theta \left( e^{i P {\cdot} a} \right) \left( \tilde{\varphi} \otimes
\tilde{\chi} \right) (p,q) = e^{i (p+q) {\cdot} a} \tilde{\varphi}(p)
\tilde{\chi}(q)
\label{reprphichi1} ~.
\end{equation}
Using (\ref{reprphichi1}) we can deduce the action of twisted Lorentz
transformations to be
\begin{equation}
\Delta_\theta(\Lambda) \left( \tilde{\varphi} \otimes \tilde{\chi}
\right) (p,q) = \tilde{F}^{-1}_\theta\left( \Lambda^{-1}p,
\Lambda^{-1}q \right) \tilde{F}_\theta\left( p, q \right)
\tilde{\varphi}(\Lambda^{-1}p) \tilde{\chi}(\Lambda^{-1}q)
\label{reprphichi2} ~.
\end{equation}
Here
\begin{equation}
\tilde{F}_\theta\left( r, s \right) := e^{-\frac{i}{2} r_\mu
\theta^{\mu \nu} s_\nu }
\end{equation}
and we have omitted writing $\rho \otimes \rho$ in front of
$\Delta_\theta$'s.
We remark that had we used (\ref{planelorentz}) to deduce the
transformation law for the Fourier coefficients, we would have got
$\tilde{F}_\theta\left( \Lambda^{-1}p, \Lambda^{-1}q \right)
\tilde{F}^{-1}_\theta\left( p, q \right) \tilde{\varphi}
(\Lambda^{-1}p) \tilde{\chi}(\Lambda^{-1}q)$ for the right-hand side
of (\ref{reprphichi2}). We will use (\ref{reprphichi2}) hereafter as
it can be deduced from the conventional action of $P_\mu$ given by
(\ref{reprphichi1}).
\section{Quantum Fields and Spin-Statistics}
A free relativistic scalar quantum field $\varphi$ of mass $m$ can be
expanded as
\begin{equation}
\varphi = \int \frac{d^d p}{2 p_0} \left( a(p) \,{\bf e}_p + a^\dagger
(p) {\bf e}_{-p} \right)
~, \label{phiexpans}
\end{equation}
where $p_0=\sqrt{\left|\vec{p}\right|^2 +m^2} \geq 0$, and $a(p),
a^\dagger(p)$ are subject to suitable relations to be stated below. If
$c(p), c^\dagger(p)$ are the limits of these operators when
$\theta^{\mu \nu}=0$, these relations are
\begin{eqnarray}
\left[ c(p), c(q) \right] & = & \left[ c^\dagger(p),
c^\dagger(q) \right] = 0 ~, \label{standcomm1} \\
\left[ c(p), c^\dagger(q) \right] & = & 2 p_0 \delta^d(p-q) ~.
\label{standcomm2}
\end{eqnarray}
We now argue that such relations are incompatible with the twisted
Poincar\'{e} symmetry for $\theta^{\mu \nu} \neq 0$. Rather, $a(p)$ and $a^\dagger(p)$ fulfill certain
deformed relations which reduce to (\ref{standcomm1}),
(\ref{standcomm2}) for $\theta^{\mu \nu} = 0$. We may therefore say
that statistics is deformed, though this is not entirely precise, as
we discuss later.
Similar deformations occur for the operator relations of all tensorial
and spinorial quantum fields.
Suppose now that
\begin{equation}
a(p) a(q) = \tilde{G}_\theta(p,q) a(q) a(p) ~,
\label{newcomm1}
\end{equation}
where $\tilde{G}_\theta$ is a ${\mathbb{C}}$-valued function of $p$ and $q$ yet
to be determined. In particular, if $U(\Lambda)$ and $U(\exp (i P
{\cdot} a))$ are the operators implementing the Lorentz
transformations and translations respectively on the quantum Hilbert
space,
\begin{eqnarray}
U(\Lambda) \tilde{G}_\theta(p,q) U(\Lambda)^{-1} &=&
\tilde{G}_\theta(p,q) ~, \\
U(\exp (i P {\cdot} a)) \tilde{G}_\theta(p,q) U(\exp (i P {\cdot}
a))^{-1} &=& \tilde{G}_\theta(p,q) ~.
\end{eqnarray}
The transformations of $a(p) a(q) = (a \otimes a) (p,q)$ and $a(q)
a(p)$ are determined by $\Delta_\theta$. Hence conjugating
(\ref{newcomm1}) by $U(\Lambda)$, we get
\begin{eqnarray}
&& \tilde{F}^{-1}_\theta\left( \Lambda^{-1}p, \Lambda^{-1}q \right)
\tilde{F}_\theta\left( p, q \right) a(\Lambda^{-1}p) a(\Lambda^{-1}q)
= \nonumber \\
&& = \tilde{G}_\theta(p,q) \tilde{F}^{-1}_\theta\left( \Lambda^{-1}q,
\Lambda^{-1}p \right) \tilde{F}_\theta\left( q, p \right)
a(\Lambda^{-1}q) a(\Lambda^{-1}p) ~,
\end{eqnarray}
or, on using $\tilde{F}_\theta\left( r, s \right)=
\tilde{F}^{-1}_\theta\left( s,r \right)$,
\begin{equation}
a(\Lambda^{-1}p) a(\Lambda^{-1}q)= \tilde{G}_\theta(p,q)
\tilde{F}^{-2}_\theta\left( \Lambda^{-1}q, \Lambda^{-1}p \right)
\tilde{F}^{2}_\theta\left( q, p \right) a(\Lambda^{-1}q)
a(\Lambda^{-1}p) ~.
\label{constraint1}
\end{equation}
Using (\ref{newcomm1}) after changing $p$ to $\Lambda^{-1}p$ and $q$
to $\Lambda^{-1}q$, we get
\begin{equation}
\tilde{G}_\theta(\Lambda^{-1} p, \Lambda^{-1} q)
\tilde{F}^{2}_\theta\left(\Lambda^{-1} q, \Lambda^{-1} p \right)
= \tilde{G}_\theta(p,q) \tilde{F}^{2}_\theta\left( q,p \right) ~,
\label{conG}
\end{equation}
whose solution is
\begin{equation}
\tilde{G}_\theta(p,q) = \tilde{\eta}(p,q) \tilde{F}^{-2}_\theta (q,p) ~,
\label{solutiong}
\end{equation}
where $\tilde{\eta}$ is a Lorentz-invariant function of $p$ and
$q$. For $\theta^{\mu \nu}=0$, $\varphi$ is a standard scalar field
and $\tilde{\eta}(p,q)$ takes the constant value $\eta = +1$. So it is
natural to take
\begin{equation}
\tilde{\eta}(p,q) = \eta = + 1, \quad {\rm for\ all} \,\,\theta^{\mu \nu} ~,
\end{equation}
even though $\tilde{\eta}(p,q)$ can depend on $p,q$ and $\theta^{\mu
\nu}$ and approach the value $+1$ only in the limit of vanishing
$\theta^{\mu\nu}$.
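That (\ref{solutiong}) indeed solves (\ref{conG}) is easily checked: substituting it into the left-hand side of (\ref{conG}) gives
\begin{equation}
\tilde{\eta}(\Lambda^{-1} p, \Lambda^{-1} q)
\tilde{F}^{-2}_\theta\left(\Lambda^{-1} q, \Lambda^{-1} p \right)
\tilde{F}^{2}_\theta\left(\Lambda^{-1} q, \Lambda^{-1} p \right) =
\tilde{\eta}(\Lambda^{-1} p, \Lambda^{-1} q) ~,
\end{equation}
which equals the right-hand side $\tilde{\eta}(p,q)$ precisely because $\tilde{\eta}$ is Lorentz-invariant.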
Note that in $1+1$ dimensions, $\tilde{F}_\theta\left(\Lambda p,
\Lambda q \right) = \tilde{F}_\theta\left(p,q \right)$ is itself
Lorentz-invariant (but not invariant under parity). Also, $2+1$
dimensions is special because of the availability of braid
statistics. Thus for anyons, $\tilde{\eta}(p,q)$ can be taken to be a
fixed phase.
Summarizing
\begin{equation}
a(p) a(q) = \eta \tilde{F}^{-2}_\theta\left( q,p \right) a(q) a(p) ~.
\label{newcomm2}
\end{equation}
The creation operator $a^\dagger(q)$ carries momentum $-q$, hence its
deformed relation for scalar fields is
\begin{equation}
a(p) a^\dagger(q) = \tilde{\eta}'(p,q) \tilde{F}^{-2}_\theta\left( -q,p
\right) a^\dagger(q) a(p) + 2 p_0 \delta^d(p-q) ~.
\label{newcomm3}
\end{equation}
It is not necessary that $\tilde{\eta}(p,q) = \tilde{\eta}'(p,q)$, even
though as $\theta^{\mu \nu}$ approaches zero we require that
$\tilde{\eta}'(p,q)$ approaches the constant $\eta'= +1$. Hence, as
before we will set $\tilde{\eta}'(p,q) = \eta '= +1$ for all
$\theta^{\mu \nu}$.
Finally, the adjoint of (\ref{newcomm2}) gives
\begin{equation}
\bar{\eta} a^\dagger(p) a^\dagger(q) = \tilde{F}^{-2}_\theta\left( q,p
\right) a^\dagger(q) a^\dagger(p) ~,
\label{newcomm4}
\end{equation}
where $\bar{\eta}=+1$ for $\eta=+1$.
For spinorial free fields, there are similar deformed relations where
the factors $\tilde{\eta},\tilde{\eta}'$ approach $-1$ as $\theta^{\mu
\nu} \rightarrow0$.
\section{Construction of Deformed Oscillators from Undeformed
Oscillators}
We have presented such a construction elsewhere \cite{bmt} when
considering deformations of target manifolds of fields.
Let $c(p), c^\dagger(p)$ denote the undeformed oscillators, the limits
of $a(p), a^\dagger(p)$ when $\theta^{\mu \nu} \rightarrow 0$, as in
(\ref{standcomm1}), (\ref{standcomm2}). Then
\begin{equation}
a(p) = c(p) e^{\frac{i}{2} p_\mu \theta^{\mu \nu} P_\nu} ~,
\label{aitoc}
\end{equation}
where $P_\nu$ generates translations in the Hilbert space:
\begin{equation}
P_\nu = \int \frac{d^d p}{2 p_0}\;\; p_\nu c^\dagger (p) c(p) ~,
\end{equation}
\begin{equation}
[P_\nu, a(p)] = - p_\nu a(p), \quad [P_\nu, a^\dagger(p)] = p_\nu
a^\dagger(p) ~.
\end{equation}
The adjoint of (\ref{aitoc}) also gives
\begin{equation}
a^\dagger(p) = e^{-\frac{i}{2} p_\mu \theta^{\mu \nu} P_\nu}
c^\dagger(p) ~. \label{aitoc2}
\end{equation}
Before checking that this ansatz for the $a$-oscillators works, let us
first point out that
\begin{eqnarray}
c(p) e^{\frac{i}{2} p_\mu \theta^{\mu \nu} P_\nu} &=& e^{\frac{i}{2}
p_\mu \theta^{\mu \nu} P_\nu} c(p) ~, \\
e^{-\frac{i}{2} p_\mu \theta^{\mu \nu} P_\nu} c^\dagger(p) &=& c^\dagger (p)
e^{-\frac{i}{2} p_\mu \theta^{\mu \nu} P_\nu} ~,
\end{eqnarray}
due to the antisymmetry of $\theta^{\mu \nu}$. Hence the ordering of
factors in (\ref{aitoc}) is immaterial. Note also that
\begin{equation}
P_\nu = \int\frac{d^d p}{2p_0} p_\nu a^\dagger(p) a(p) ~,
\end{equation}
so that the map from the $c$- to the $a$-oscillators is
invertible.
We can check the relation (\ref{newcomm2}) as follows. We have
\begin{equation}
c(p) e^{\frac{i}{2} p_\mu \theta^{\mu \nu} P_\nu} c(q) e^{\frac{i}{2} q_\rho
\theta^{\rho \sigma} P_\sigma} = e^{-\frac{i}{2} p_\mu \theta^{\mu \nu}q_\nu}
c(p) c(q) e^{\frac{i}{2}(p+q)_\mu \theta^{\mu \nu} P_\nu} ~.
\end{equation}
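Interchanging the order of the oscillators gives in the same way
\begin{equation}
c(q) e^{\frac{i}{2} q_\mu \theta^{\mu \nu} P_\nu} c(p) e^{\frac{i}{2} p_\rho
\theta^{\rho \sigma} P_\sigma} = e^{-\frac{i}{2} q_\mu \theta^{\mu \nu}
p_\nu} c(q) c(p) e^{\frac{i}{2}(p+q)_\mu \theta^{\mu \nu} P_\nu} ~.
\end{equation}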
Hence since $[c(p), c(q)]=0$ and $\theta^{\mu \nu} = -\theta^{\nu
\mu}$, we get (\ref{newcomm2}) with $\eta = +1$ for $a(p)$ defined by
(\ref{aitoc}). We can check the remaining commutation relations as
well in the same way.
\subsection{{\it{Deformed Permutation Symmetry}}}
Let ${\cal F}$ be the Fock space of the $c$-oscillators. Since the
$a$-oscillators can be constructed from the $c$'s, ${\cal F}$ is also
a representation space for the $a$-oscillators. In particular, the
Fock vacuum is annihilated by $a(p)$:
\begin{equation}
a(p) |0\rangle =0 ~.
\end{equation}
We work with the representation of the $a$-oscillators on ${\cal F}$.
Multi-particle vector states for $\theta^{\mu \nu} \neq 0$ are
obtained by applying polynomials of $a^\dagger(p)$'s on $|0\rangle$.
The number operator
\begin{equation}
N = \int \frac{d^dp}{2p_0} c^\dagger(p)c(p) ~,
\end{equation}
has the same expression in terms of $a(p)$'s and $a^\dagger(p)$'s,
\begin{equation}
N = \int \frac{d^dp}{2p_0} a^\dagger(p)a(p) ~,
\end{equation}
and has the standard commutators with these oscillators,
\begin{equation}
[N, a^\dagger(p)] = a^\dagger (p), \quad [N, a(p)] = -a(p) ~.
\end{equation}
Thus
\begin{equation}
N \prod_{i=1}^k a^\dagger (p_i)^{n_i}|0\rangle =
\left(\sum_{j=1}^k n_j \right) \left( \prod_{i=1}^k a^\dagger
(p_i)^{n_i} \right) |0 \rangle ~,
\end{equation}
and we can justifiably call
\begin{equation}
\prod_{i=1}^k (a^\dagger (p_i))^{n_i} |0 \rangle ~,
\end{equation}
the $n$-particle state, where $n = \sum_{i=1}^k n_i$.
We now show that there is a totally symmetric representation of the
permutation group on these vectors. The operator representatives of
its group elements depend on $\theta^{\mu \nu}$, but they reduce to
the standard realizations for $\theta^{\mu \nu}=0$.
First consider the free tensor product of two single particle
states. On these, we can define the transposition $\hat{\sigma}$,
\begin{equation}
\hat{\sigma} (v (p) \otimes v (q)) := v(q) \otimes v(p) ~,
\end{equation}
where
\begin{equation}
v(p) = a^\dagger (p) |0 \rangle ~,
\end{equation}
and so
\begin{equation}
\hat{\sigma}^2 = 1\!\mbox{l} ~.
\end{equation}
Here there is no relation between $v(p) \otimes v(q)$ and $v(q)
\otimes v(p)$ for a generic $v$ and all $p,q$.
The twist
\begin{equation}
\hat{F}_\theta = e^{-\frac{i}{2} P_\mu \theta^{\mu \nu}
\otimes P_\nu}
\end{equation}
acts on $v(p) \otimes v(q)$ as
\begin{equation}
\hat{F}_\theta (v (p)\otimes v (q)) = e^{-\frac{i}{2} p_\mu \theta^{\mu \nu}
q_\nu} v (p) \otimes v (q) ~.
\end{equation}
By the antisymmetry of $\theta^{\mu \nu}$,
\begin{equation}
\hat{F}_\theta \hat{\sigma} = \hat{\sigma} \hat{F}_\theta^{-1} ~,
\end{equation}
so that
\begin{equation}
\hat{T} = \hat{F}^{-2}_\theta \hat{\sigma} ~,
\label{Tdef}
\end{equation}
is an involution:
\begin{equation}
\hat{T}^2 = 1\!\mbox{l} ~.
\end{equation}
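Indeed, applying the previous relation twice gives $\hat{\sigma}
\hat{F}^{-2}_\theta = \hat{F}^{2}_\theta \hat{\sigma}$, so that
\begin{equation}
\hat{T}^2 = \hat{F}^{-2}_\theta \hat{\sigma} \hat{F}^{-2}_\theta
\hat{\sigma} = \hat{F}^{-2}_\theta \hat{F}^{2}_\theta \hat{\sigma}^2 =
1\!\mbox{l} ~.
\end{equation}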
Note that the action of neither $\hat{\sigma}$ nor $\hat{F}^{-2}_\theta$
preserves the relation (\ref{newcomm4}), while that of $\hat{T}$ does
for $\bar{\eta}=+1$. That is, if (\ref{newcomm4}) is true with
$\bar{\eta}=+1$, then so is
\begin{equation}
\hat{T} a^\dagger (p) a^\dagger (q) \hat{T}^{-1} = \tilde{F}^{-2}_\theta
(q,p) \hat{T} a^\dagger (q) a^\dagger (p) \hat{T}^{-1} ~.
\end{equation}
{\it This means that $\hat{F}^{-2}_\theta$ and $\hat{\sigma}$ individually map
the subspace ${\cal H}_S$ spanned by the vectors $\{ a^\dagger (p)
a^\dagger (q) |0\rangle \}$ out of ${\cal H}_S$ and into the full free
tensor product of two single particle subspaces, while
$\hat{F}^{-2}_\theta \hat{\sigma}$ maps ${\cal H}_S$ to ${\cal H}_S$}.
Further by (\ref{newcomm4}),
\begin{equation}
\hat{T} a^\dagger (p) a^\dagger (q) |0\rangle = a^\dagger (p)
a^\dagger (q) |0\rangle ~.
\end{equation}
For $\theta^{\mu \nu}=0$, we recover $\hat{T} = \hat{\sigma}$, the
standard representation. We therefore call $a^\dagger (p) a^\dagger
(q)|0\rangle$ the symmetric state. Bose symmetry thus generalizes
to symmetry under $\hat{T}$.
The generalizations of $\hat{T}$ to three-particle states are the two
transpositions
\begin{equation}
\hat{T}_{12} = \hat{T} \otimes {1\!\mbox{l}}, \quad \hat{T}_{23} =
{1\!\mbox{l}} \otimes \hat{T} ~.
\end{equation}
On the $n$-particle states, $\hat{T}$ generalizes to the $(n-1)$
transpositions
\begin{equation}
\hat{T}_{i,i+1} = \underbrace{{1\!\mbox{l}} \otimes ...
\otimes}_{(i-1)\;\; {\rm factors}} \hat{T} \otimes
\underbrace{{1\!\mbox{l}} \otimes {1\!\mbox{l}} ... \otimes
{1\!\mbox{l}}}_{n-(i+1) \;\;{\rm factors}} ~.
\end{equation}
They square to unity:
\begin{equation}
\hat{T}_{i,i+1}^2 = {1\!\mbox{l}} ~. \label{Tsquared}
\end{equation}
In addition, as one can easily verify, they fulfill the relation
\begin{equation}
\hat{T}_{i,i+1} \hat{T}_{i+1,i+2} \hat{T}_{i,i+1} =
\hat{T}_{i+1,i+2} \hat{T}_{i,i+1} \hat{T}_{i+1,i+2} ~.
\label{braid}
\end{equation}
In view of (\ref{Tsquared}) and (\ref{braid}) and a known theorem
\cite{birman}, $\hat{T}_{i,i+1}$ generate the permutation group $S_n$.
The preceding discussion shows that we get the totally symmetric
representation of $S_n$ on the physical state vectors of a scalar
field: the scalar field describes generalized bosons.
\subsection{\it{Projector for Physical States}}
Let $\hat{t}_i, \;(i=1,...,n!)$ be the representatives of the elements
of $S_n$ on ${\cal F}$. The $\hat{t}_i$ can be written in terms of
$\hat{T}_{i,i+1}$. Then, as is well-known \cite{bt},
\begin{equation}
\hat{\cal P} = \frac{1}{n!} \left( \sum_i \hat{t}_i \right) ~,
\end{equation}
is the projector to the symmetric representations of $S_n$ carried by
${\cal F}$. The physical space is
\begin{equation}
\hat{\cal P} {\cal F} ~.
\end{equation}
\subsection{\it{Observables}}
Observables $\hat{K}$ must preserve the space $\hat{\cal P}{\cal F}$:
\begin{equation}
\hat{K}\hat{\cal P}{\cal F} \subseteq \hat{\cal P}{\cal F} ~.
\end{equation}
Hence they must commute with $\hat{\cal P}$,
\begin{equation}
\hat{K} \hat{\cal P} = \hat{\cal P}\hat{K} ~.
\end{equation}
This is assured if they commute with the permutations:
\begin{eqnarray}
\hat{T}_{i,i+1} \hat{\cal P} &=& \hat{\cal P} \hat{T}_{i,i+1} ~, \\
\hat{t}_i \hat{\cal P} &=& \hat{\cal P} \hat{t}_i ~.
\end{eqnarray}
Let us check that the Poincar\'{e} transformations commute with
permutations. (They will, of course, since we arrived at the deformed
representation of permutations by requiring Poincar\'{e} invariance.)
If $U$ is the representation of the Poincar\'{e} group with elements
$g$ on the one-particle quantum states, then its representation in say
two-particle states is $(U \otimes U) \Delta_\theta$. The image of $g$
in this representation is hence
\begin{equation}
U^{(2)} (g) := \hat{F}^{-1}_\theta [U(g) \otimes U(g)]
\hat{F}_\theta ~.
\end{equation}
Now
\begin{eqnarray}
\hat{T} U^{(2)}(g) &=& \hat{F}^{-1}_\theta \hat{\sigma} [U(g)
\otimes U(g)]\hat{F}_\theta = \hat{F}^{-1}_\theta [U(g) \otimes
U(g)] \hat{\sigma}\hat{F}_\theta \nonumber \\
&=& \hat{F}^{-1}_\theta [U(g) \otimes U(g)] \hat{F}^{-1}_\theta
\hat{\sigma} = U^{(2)} (g) \hat{T} ~.
\end{eqnarray}
This proof generalizes to the $n$-particle sectors. Hence Poincar\'{e}
transformations commute with permutations.
\section{Quantum Hall System}
Consider a particle of charge $e$, like the electron, moving in the
$1-2$-plane ${\mathbb R}^2 \subset {\mathbb R}^3$ in a constant
magnetic field $B$ directed in the third direction. The quantum
mechanical degrees of freedom can be described by two sets of mutually
commuting oscillators $a, a^\dagger$ and $b, b^\dagger$ \cite{fubini}:
\begin{equation}
[a, a^\dagger] = [b,b^\dagger]={1\!\mbox{l}} ~.
\end{equation}
All other commutators involving these operators are zero.
The Hamiltonian describing the Landau levels is
\begin{eqnarray}
H &=& \hbar \omega (a^\dagger a +1/2) ~, \\
\omega &=& \frac{e B}{m c} = {\rm cyclotron \;\; frequency} ~.
\end{eqnarray}
Thus $H$ commutes with the $b$-oscillators.
The $a$- and $b$- oscillators separately describe a Groenewold-Moyal
plane since for example
\begin{equation}
\left[\frac{b+b^\dagger}{\sqrt{2}}, \frac{b-b^\dagger}{i \sqrt{2}}
\right] = i{1\!\mbox{l}} ~.
\end{equation}
We can hence identify $(b+b^\dagger)/\sqrt{2}$ with $\hat{x}_1/l$,
$(b-b^\dagger)/(i \sqrt{2})$ with $\hat{x}_2/l$ and $\theta_{\mu \nu}$
with $l^2 \epsilon_{\mu \nu}$ ($\epsilon_{\mu \nu} = -\epsilon_{\nu
\mu}, \epsilon_{01}=+1$) where the scale factor $l$ is the magnetic
length:
\begin{equation}
l = \frac{1}{\sqrt{|e| B}} ~.
\end{equation}
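With this identification,
\begin{equation}
\left[ \hat{x}_1, \hat{x}_2 \right] = i l^2 ~,
\end{equation}
which is the defining relation of ${\cal A}_\theta({\mathbb R}^2)$.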
The $a$-oscillators give the discrete energy levels of the charged
particle while the $b$-oscillators are associated with the coordinates
of the plane ${\mathbb R}^2$. In fact, when only the lowest Landau
levels are excited, it can be readily proved that $\hat{x}_\mu$ are
the projections of the exact spatial coordinates to the subspace
spanned by these levels. They become commuting coordinates when $B
\rightarrow \infty$. In that limit, $\omega \rightarrow \infty$ so
that the approximation of spatial coordinates by $\hat{x}_\mu$ becomes
exact. The operators $\hat{x}_\mu$ are called the ``guiding centre''
coordinates.
When just the lowest Landau level is excited, the Hilbert space is
\begin{equation}
{\cal H}_1 \otimes {\cal H}_\infty ~,
\end{equation}
where ${\cal H}_1$ has the vacuum state $|0\rangle$ of the
$a$-oscillator as basis,
\begin{equation}
a|0 \rangle =0 ~,
\end{equation}
and ${\cal H}_\infty$ is the Fock space of the $b$-oscillators. The
observables are described by the algebra ${\cal A}_\theta ({\mathbb
R}^2)$ $(\theta_{\mu \nu}=l^2 \epsilon_{\mu \nu})$ generated by
$\hat{x}_\mu$.
When $N$ Landau levels are excited, ${\cal H}_1$ becomes the
$(N+1)$-dimensional Hilbert space ${\cal H}_{N+1}$ with basis
\begin{equation}
|0\rangle ~,\ \frac{(a^\dagger)^k}{\sqrt{k!}}|0\rangle, \quad
k=1,..., N \ ~. \label{hallbasis}
\end{equation}
The $(N+1) {\times} (N+1)$ matrix algebra $Mat_{N+1}$ acts on ${\cal
H}_{N+1}$:
\begin{equation}
{\it Mat}_{N+1} {\cal H}_{N+1} \subseteq {\cal H}_{N+1} ~.
\end{equation}
The full Hilbert space is \begin{equation} {\cal H}_{N+1} \otimes {\cal
H}_\infty ~. \end{equation}
The observables are thus described by the noncommutative algebra
$Mat_{N+1} \otimes {\cal A}_\theta({\mathbb R}^2)$.
The algebra ${\cal A}_\theta({\mathbb R}^2)$ admits the action of the
diffeomorphism group ${\cal D}$ provided the coproduct for the latter
is deformed. Although the quantum Hall system is non-relativistic, we
can perhaps impose the dogma that the underlying spacetime algebra
preserves its automorphism group in the process of deformation. If we
do so, the statistics of the excitations described by
(\ref{hallbasis}) are also deformed.
We argue elsewhere \cite{bpq} that at the second-quantized level, such
excitations do not show UV-IR mixing. That is another good reason for
the adoption of the deformed coproduct and statistics.
But the physical implications of this approach remain to be
explored.
\section{Remarks on Phenomenology}
The most striking effects appear to be associated with violations of
Pauli principle, and they can be subjected to stringent experimental
tests. For example, lifetimes for Pauli-forbidden processes like
$^{16}O \rightarrow {}^{16} \tilde{O}$ or $^{12}C \rightarrow {}^{12}
\tilde{C}$, where $^{16} \tilde{O}$ ($^{12} \tilde{C}$) are nuclear
configurations with an extra nucleon in the (filled) $1S_{1/2}$ shell,
are presently found to be longer than $10^{27}$ years (90 \% C.L.)
\cite{sk,borexino}. Here we indicate how such transitions can arise by
studying a very simple example: that of a free (twisted) fermion
field. Spin effects are ignored as they are not important in this
context.
So let $a(p)$ and $a^\dagger(p)$ be twisted fermionic annihilation and
creation operators for momentum $p$. They can be written in the
form (\ref{aitoc}) and (\ref{aitoc2}) where $c(p)$ and its adjoint are
fermionic oscillators for $\theta^{\mu \nu}=0$. A (twisted) single
particle wave packet state $|\alpha \rangle$ is created from the
vacuum by the operator
\begin{equation}
\langle a^\dagger, \alpha \rangle = \int \frac{d^dp}{2p_0}
\alpha(p) a^\dagger (p) ~.
\end{equation}
Thus
\begin{eqnarray}
|\alpha \rangle &=& \langle a^\dagger, \alpha \rangle |0\rangle \\
&=& \langle c^\dagger, \alpha \rangle |0\rangle ~, \\
\langle c^\dagger, \alpha \rangle &=& \int \frac{d^dp}{2p_0}
\alpha(p) c^\dagger (p) ~.
\end{eqnarray}
Hence with
\begin{equation}
\int \frac{d^dp}{2p_0} |\alpha(p)|^2 =1 ~,
\end{equation}
$|\alpha \rangle$ is normalized to unity:
\begin{equation}
\langle \alpha | \alpha \rangle =1 ~.
\end{equation}
We can approximate a vector of sharp momentum $\vec{p}$ to arbitrary
precision by a function $\alpha$ peaked at $\vec{p}$ and normalized to
1. A Gaussian $\alpha$ is sufficient for this purpose.
Consider next the two-particle state vector
\begin{eqnarray}
|\alpha, \alpha \rangle &=& \langle a^\dagger, \alpha \rangle
\langle a^\dagger, \alpha \rangle |0\rangle \\ &=& \int
\frac{d^dp_1}{2p_{10}}\frac{d^dp_2}{2p_{20}} e^{-\frac{i}{2} p_{1 \mu}
\theta^{\mu \nu}p_{2\nu}} \alpha(p_1) \alpha(p_2) c^\dagger (p_1)
c^\dagger (p_2) |0\rangle ~. \label{exotic2}
\end{eqnarray}
This vector is identically zero if $\theta^{\mu \nu}=0$, as required by
the Pauli principle.
But this vector is not zero if $\theta^{\mu \nu} \neq 0$, as shown for
example by its non-vanishing norm $N(\alpha, \alpha)$:
\begin{eqnarray}
N^2(\alpha,\alpha) &=& \langle \alpha, \alpha|\alpha, \alpha \rangle \\
&=& \int \frac{d^dp_1}{2p_{10}}\frac{d^dp_2}{2p_{20}}
(\bar{\alpha}(p_1) \alpha(p_1))(\bar{\alpha}(p_2)
\alpha(p_2))(1-e^{-i p_{1 \mu}\theta^{\mu \nu} p_{2\nu}}) ~.
\end{eqnarray}
$N^2(\alpha,\alpha) \neq 0$ for $\alpha \neq 0$ as can be seen from
the following argument. We have
\begin{equation}
\int \frac{d^d p_1}{2p_{10}}\frac{d^d p_2}{2p_{20}} (\bar{\alpha}(p_1)
\alpha(p_1))(\bar{\alpha}(p_2) \alpha(p_2)) \sin (p_{1 \mu}
\theta^{\mu \nu} p_{2 \nu}) = 0
\end{equation}
since the integrand is odd under the interchange of $p_1
\leftrightarrow p_2$. Hence
\begin{equation}
N^2 (\alpha, \alpha) = \int \frac{d^d p_1}{2p_{10}}\frac{d^d
p_2}{2p_{20}} (\bar{\alpha}(p_1) \alpha(p_1))(\bar{\alpha}(p_2)
\alpha(p_2)) [1-\cos(p_{1 \mu} \theta^{\mu \nu} p_{2 \nu})] ~.
\label{Nalpha}
\end{equation}
This is strictly positive for $\alpha \neq 0$ since $1-\cos(p_{1 \mu}
\theta^{\mu \nu} p_{2 \nu}) \geq0 $ for $\theta^{\mu \nu} \neq 0$ and
vanishes only on a zero-measure set of $p_1 , p_2$. Note from
(\ref{Nalpha}) that $N(\alpha,\alpha)$ is $O(\theta)$.
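Indeed, expanding (\ref{Nalpha}) to lowest order in $\theta^{\mu \nu}$,
\begin{equation}
N^2 (\alpha, \alpha) \simeq \frac{1}{2} \int \frac{d^d p_1}{2p_{10}}\frac{d^d
p_2}{2p_{20}} \, |\alpha(p_1)|^2 |\alpha(p_2)|^2 \, (p_{1 \mu} \theta^{\mu
\nu} p_{2 \nu})^2 ~,
\end{equation}
so that $N(\alpha,\alpha)$ vanishes linearly in $\theta^{\mu \nu}$ as
$\theta^{\mu \nu} \rightarrow 0$.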
We can normalize $|\alpha,\alpha \rangle$:
\begin{eqnarray}
|\alpha, \alpha) &=& |\alpha,\alpha \rangle \frac{1}{N(\alpha,\alpha)} ~, \\
(\alpha,\alpha|\alpha,\alpha) &=& 1 ~.
\end{eqnarray}
This vector, being of unit norm, remains in the Hilbert space even if
$\theta^{\mu\nu} \rightarrow 0$. But the scalar product of $|\alpha,
\alpha)$ with the fermionic Fock space state $c^\dagger (p_1)
c^\dagger (p_2)|0\rangle$ is undefined in the limit $\theta^{\mu \nu}
\rightarrow 0$. Thus
\begin{equation}
\langle 0|c(p_2) c(p_1) |\alpha, \alpha) = -2i \alpha(p_1)\alpha(p_2)
\frac{\sin (p_{1\mu} \theta^{\mu \nu} p_{2\nu}/2)}{N(\alpha,\alpha)} ~.
\end{equation}
Since $N(\alpha,\alpha)$ is $O(\theta)$, the limit of this expression
as $\theta^{\mu \nu} \rightarrow 0$ depends on the manner in which
$\theta^{\mu \nu}$ goes to zero. This means that $|\alpha,\alpha)$ has
different expansions in the Fock space basis depending on the way in
which $\theta^{\mu \nu}$ becomes zero, that is, it approaches different
standard fermionic vectors in the Hilbert space depending on this
limit. We do not know how to interpret this result.
Generalizing, we have the vectors
\begin{equation}
|\underbrace{\alpha, \alpha,..., \alpha}_{N \;{\rm factors}}
\rangle = \langle a^\dagger, \alpha \rangle^N |0\rangle ~,
\end{equation}
which after normalization become $|\alpha, \alpha, \ldots, \alpha)$,
$(\alpha, \ldots, \alpha | \alpha, \ldots, \alpha) =1$. These vectors
span a Hilbert space ${\cal H}_S$ of symmetric vectors when
$\theta^{\mu\nu} \rightarrow 0$.
Now consider for example
\begin{equation}
|\beta, \gamma \rangle = \langle a^\dagger , \beta \rangle \langle
a^\dagger , \gamma\rangle |0 \rangle, \quad \beta \neq \gamma.
\end{equation}
We have
\begin{equation}
\langle \beta, \gamma|\alpha, \alpha) = \int
\frac{d^dp_1}{2p_{10}}\frac{d^dp_2}{2p_{20}}(\bar{\beta}(p_1)\alpha(p_1))
(\bar{\gamma}(p_2)\alpha(p_2))[1-e^{-i p_{1 \mu}\theta^{\mu \nu}
p_{2\nu}}]\frac{1}{N(\alpha, \alpha)}\,.
\end{equation}
This overlap amplitude is not in general zero. Thus transitions are
possible between Pauli-principle allowed state vectors $|\beta, \gamma
\rangle$ and Pauli-principle forbidden state vectors
$|\alpha,\alpha)$.
It is important to note that the mean energy and momentum in these new
states are nothing outrageous. In fact, as one can see from
(\ref{exotic2}), the mean value of $P_\mu$ in
$|\alpha,\alpha,\cdots,\alpha )$ can be made arbitrarily close to $N
p_\mu$ by choosing a Gaussian for $\alpha$ which is suitably peaked at
$p_\mu$.
In conventional Fock space, by the Pauli principle, there is no fermionic
state vector with energy-momentum $N p_\mu \;(N \geq 2)$. This shows
rather clearly that the Pauli principle is violated when $\theta^{\mu \nu}
\neq 0$.
We plan to further discuss the theory and phenomenology of these exotic
states and associated transitions elsewhere.
\section*{Acknowledgments}
We thank B. Qureshi for extensive discussions. The work of A.P.B. and
A.P. was supported by DOE under grant number DE-FG02-85ER40231.
A.P.B. was also supported by NSF under grant number INT 9908763 and by
CONACyT under grant number C0 141639. The visit of G.M. to Syracuse
during Spring 2005 was fully supported by Syracuse University. This
work would have been impossible without this support.
\section{Introduction}
Mergers are fundamental to the cold dark matter paradigm of structure formation. They not only drive mass evolution by merging smaller dark matter haloes into larger ones, but they also change the morphology of galaxies
from late to early-type \citep[e.g.][]{1972ApJ...178..623T,1992ARA&A..30..705B,2003ApJ...597..893N}, and drive gas to the centre of the merger remnant that triggers star formation \citep{1991ApJ...370L..65B,2006MNRAS.372..839N} and AGN activity \citep{2005Natur.433..604D}. Early structural studies of elliptical galaxies by \citet{1992ApJ...399..462B} already hinted at the mass-dependent importance of dissipation during their formation. In a first systematic study of the morphology of merging pairs in a CDM galaxy formation model, \citet{2003ApJ...597L.117K} showed that massive elliptical galaxies are mainly formed from dry mergers of early-type galaxies, while less massive ones show mixed mergers between an elliptical and a spiral galaxy. Only elliptical galaxies well below $L_*$ are predominantly formed by wet mergers from two spiral galaxies. Subsequent work on the role of dry mergers revealed that they can explain the formation of slowly rotating boxy ellipticals
\citep{2005MNRAS.359.1379K,2006ApJ...636L..81N}, that they lie on the fundamental plane \citep{2001ApJ...552L..13C,2003MNRAS.342..501N,2006MNRAS.369.1081B,2006ApJ...641...21R}, follow the $M_{\bullet}-\sigma$-relation \citep{2008arXiv0802.0210J} and that they could possibly explain the formation of a stellar density core in the centre of the remnant due to a binary black hole merger \citep{2001ApJ...563...34M,2004ApJ...613L..33G,2006ApJ...648..976M}. Furthermore, it has been argued that the strong size evolution of massive early-type galaxies \citep{2007MNRAS.382..109T,2007ApJ...671..285T,2008ApJ...677L...5V,2008A&A...482...21C} provides evidence for dry merging \citep{2006ApJ...648L..21K}. In an attempt to model the size-evolution of early-type galaxies, \citet{2006ApJ...648L..21K} showed that the amount of dissipation during mergers can account for the observed size evolution. In their model, dry mergers result in remnants with larger sizes than remnants from gaseous mergers of the same mass. Similar results have been reported from numerical simulations of mergers with varying degrees of gas fractions by \citet{2006ApJ...650..791C}.
The natural question that immediately arises is, what is the reason for dry merging? The early seminal work of \citet{1977ApJ...215..483B}, \citet{1977ApJ...211..638S} and \citet{1977MNRAS.179..541R} predicts the existence of a characteristic mass scale, below which the cooling time $t_{cool}$ of a collapsing gas cloud is shorter than its dynamical time $t_{dyn}$, allowing for efficient collapse on a dynamical time scale and subsequent star formation. In massive dark matter halos with $M_{DM} > 10^{12}$ M$_{\odot}$, one generally finds $t_{cool} \gg t_{dyn}$ and that shock heating of the collapsing gas supports the formation of a hot, static atmosphere at the halo virial temperature \citep[e.g.][]{2003MNRAS.345..349B,2007MNRAS.380..339B}. From an observational point of view, the existence of a bimodality in the properties of the galaxy population, occurring at a mass scale of M$_* > 3 \times 10^{10}$ M$_{\odot}$ \citep{2003MNRAS.341...54K}, lends support to the notion of a transition in the mode of galaxy formation \citep{2006MNRAS.368....2D}. Hence one expects that dry merging will occur when cooling is sufficiently hindered at masses above a transition mass scale and if the reservoir of cold gas in the galaxy is used up by star formation before the merger happens.
The existence of a characteristic shut-off mass scale in galaxy formation seemingly provides a simple way to truncate star formation
within galaxy formation models. Such an ad-hoc prescription has been used in earlier work by \citet{1999MNRAS.303..188K} with the aim of avoiding too massive and too blue galaxies in clusters, and more recently in work by \citet{2006MNRAS.370.1651C}. These latter authors assume a shut-down of star formation in halos of mass $\geq 10^{12}$ M$_{\odot}$ at $z \leq 3$, and show that the colour bimodality and luminosity function can be reproduced accurately in their model. The choice of $10^{12}$ M$_{\odot}$ draws its support from two main arguments laid out in \citet{2006MNRAS.368....2D}.
One is that at this mass scale, stable shocks appear that allow for shock heating of gas \citep{2003MNRAS.345..349B}. The second, more important, argument is that this shock-heated gas is generally so dilute and vulnerable to feedback that it literally stays hot forever and does not cool down to subsequently fuel star formation. While the first argument draws support from various simulations \citep[e.g.][]{2005MNRAS.363....2K}, the second argument is less clear. One main uncertainty concerns the heating source of the hot gas. Several plausible candidates are suggested in the literature, such as AGN feedback \citep{1998A&A...331L...1S}, dynamical friction heating \citep{2004MNRAS.354..169E,2007ApJ...658..710N}, or heating by gravitational potential energy \citep{2008ApJ...680...54K,2008MNRAS.383..119D}. Neither the relative contributions nor the overall magnitudes with which these processes heat the hot gas are theoretically certain or observationally confirmed to date.
The aim of this letter is twofold. Firstly, we predict the dry merger rate and its evolution by adopting a shut-off mass scale; secondly, we use these results to propose an observational strategy for testing the existence of a critical mass scale for the quenching of star formation, one that relies on the continued merging activity within CDM cosmologies and is, to first order, independent of the underlying baryonic physics involved in the quenching process.
\begin{figure}
\includegraphics[width=0.45\textwidth]{f1.eps}
\caption{The fraction of galaxies that were formed by a dry major merger within the last Gyr. Lines show results for galaxy mass ranges $5 \times 10^{10} -10^{11}$ M$_{\odot}$, $10^{11} -5 \times 10^{11}$ M$_{\odot}$, $5 \times 10^{11} -10^{12}$ M$_{\odot}$ and $ > 10^{12}$ M$_{\odot}$. Galaxies with $M_* > M_{*,c}=6.3 \times 10^{10}$ M$_{\odot}$ show similar fractions of dry mergers at $z \le 1$.} \label{fig1}
\end{figure}
\section{The Model}\label{mod}
We use semi-analytical modeling (SAM) of galaxy formation to investigate the effect of the shut-off mass scale for cooling on the galaxy population. The dark matter history is calculated using the merger tree proposed by \citet{som99} with a mass resolution of $2 \times 10^9 M_{\odot}$. The baryonic
physics within these dark matter haloes is calculated following recipes
presented in \citet[][and references therein]{spr01}, including a model for the reionizing background
by \citet{som02}. In our simulation, we assume that elliptical galaxies
form whenever a major merger ($M_1 /M_2 \leq 3.5$ with $M_1 \geq M_2$) takes
place. We assume that during this process, all the cold gas in the
progenitor discs will be consumed in a central starburst, adding to the
spheroid mass, and that all stars in the progenitor discs will
contribute to the spheroid as well. Furthermore, we also add the stars of satellite galaxies involved in minor mergers to the spheroid. The merger time scale for galaxies is calculated using the dynamical friction prescription in \citet{spr01} and we find that the predicted merger rate is in good agreement with observations \citep{2001ApJ...561..517K,jogee}.
For more modeling details, we refer the reader to \citet{2005MNRAS.359.1379K} and \citet[KS]{ks06}. Throughout this paper, we use the following set of cosmological parameters derived from a combination of the 5-year WMAP data with Type Ia supernovae and measurements of baryon acoustic oscillations \citep{2008arXiv0803.0547K}:
$\Omega_0=0.28$, $\Omega_{\Lambda}=0.72$, $\Omega_b/\Omega_0=0.16$,
$\sigma_8=0.8$ and $h=0.7$.
In the following we will modify our fiducial model as laid out in KS by adopting a quenching of cooling in dark matter haloes above a critical mass scale of $M_{DM,crit}\geq 10^{12}$ M$_{\odot}$ at $z \leq 3$, as suggested in \citet{2006MNRAS.368....2D}. Note that we allow the gas that is already in the disc to continue forming stars until it is used up, even after the host halo has crossed $M_{DM,crit}$. In the following, we define as dry mergers
objects for which M$_{gas,tot}/($M$_{*,tot}+$M$_{gas,tot})< 0.1$, with M$_{gas,tot}$ as the total amount of cold gas in both progenitor discs, and M$_{*,tot}$ as the total amount of stars in both progenitors. Whenever we refer to dry mergers in the following, we will mean dry major mergers with $M_1 /M_2 \leq 3.5$ and $M_1 \geq M_2$.
\begin{figure}
\includegraphics[width=0.45\textwidth]{f2b.eps}
\caption{The cumulative number densities of dry mergers as a function of redshift and remnant stellar mass. Solid lines are predictions for the model with the shut-off mass scale and dashed lines for the model without.}\label{fig4}
\end{figure}
\section{Evolution of Massive Galaxies}
One of the main features of a shut-off mass scale $M_{DM,crit}$ is its influence on the evolution of massive galaxies that live in halos above $M_{DM,crit}$.
In a first implementation, \citet{2006MNRAS.370.1651C} showed that they were able to reproduce the high-mass tail of the luminosity function \citep{2003ApJ...592..819B} and hence prevent the common problem of overproducing massive galaxies within SAMs. While the low-mass tail of the luminosity function becomes steeper with redshift \citep{2007ApJ...668L.115K} independent of a shut-off, the high-mass tail of the luminosity function shows a much weaker evolution with time in models with shut-off, due to merging being the sole mode of growth, compared to merging plus star formation in our fiducial model without shut-off. In Fig. \ref{fig1} we show the fraction of massive galaxies that grew by dry major mergers within the last Gyr. In general the contribution from dry mergers decreases at $z>1$, as galaxies of the same mass tend to live in smaller dark matter haloes that fall below $M_{DM,crit}$. At redshifts $z<1$, we find that in the mass range where dry mergers are significant, i.e. $ M_* > M_{*,c} \sim 6.3 \times 10^{10}$ M$_{\odot}$, the fraction of galaxies that were formed by a dry merger within the last Gyr is between $10 \%$ at $z=0$ and $20 \%$ at $z=1$, independent of the galaxy mass.
\section{The Dry Merger Rate}
One critical point is the frequency of dry mergers in the universe. Observationally, there is still a vigorous debate going on as to whether dry mergers play no role \citep{2007ApJS..172..494S}, a mild role \citep{2007ApJ...654..858B}, or an important role \citep{2007ApJ...665..265F} in the growth of the most massive galaxies. The strategies to determine the influence of dry merging remain mostly centered on the evolution of the luminosity function and the colour bimodality of galaxies. In Fig. \ref{fig4}, we show the cumulative co-moving number density of dry major mergers as a function of galaxy mass in units of Mpc$^{-3}$. We calculated this number density by counting all dry major mergers that occurred within the cited redshift intervals. The contribution to dry mergers comes mainly from galaxies around $M_{*,c}$. Galaxies more massive than $M_{*,c}$ do not contribute significantly. The number densities increase by a factor of $\sim 2.5$ from $z=0$ to $z=0.34$ for galaxies more massive than $M_{*,c}$. We also show results for a model without a shut-off mass scale in the same figure. The dry merger rates we find in this model are almost a factor of 5 lower than in the shut-off model.
\begin{figure}
\includegraphics[width=0.45\textwidth]{f3.eps}
\caption{The merger rate as a function of redshift. Solid black and yellow lines show the overall dry and wet merger rates, respectively. The dashed and dotted lines divide the dry merger sample into sub-samples based on the bulge-to-total mass ratios of the merging galaxies.
}\label{fig2}
\end{figure}
To further quantify the evolution of massive galaxies in terms of dry mergers, we show the corresponding merger rates and fractions for galaxies more massive than $M_{*,c}$ in Fig. \ref{fig2} \& \ref{fig3}, respectively. We measure the dry major merger rate in our model by counting all dry mergers that occurred in our simulation volume within the last 1.0 Gyr of galaxies that are more massive than $M_{*,c}$. This gives the merger rate $R$ in units of Gyr$^{-1}$ Mpc$^{-3}$. The merger fraction $f$ is then calculated by simply dividing $R$ by the number density of galaxies more massive than $M_{*,c}$ at $z$.
To make a consistent comparison to the earlier observational work of \citet{2006ApJ...640..241B}, we calculate the fraction of dry mergers at $z=0.5$ by counting all dry mergers that galaxies with $M_* \ge 10^{10} $M$_{\odot}$ experienced in the last 150 Myr and dividing it by the number density of elliptical galaxies with $M_* \ge 10^{10} $M$_{\odot}$. Here and in the following, we define elliptical galaxies as galaxies with bulge-to-total mass ratios of $B/T > 0.6$. For our comparison we only consider dry mergers between early-type galaxies and mixed mergers. It is likely that the latter are contaminating the observed sample of \citet{2006ApJ...640..241B}, and we find that in general more than half of all dry mergers are morphologically mixed mergers. As can be seen from Fig. \ref{fig3}, the model output agrees well with the observations.
Furthermore, we find an almost constant dry merger rate at $z \le 1$ with $ \sim 6 \times 10^{-5}$ Gyr$^{-1}$ Mpc$^{-3}$ which shows a weaker decline to lower redshifts than the wet merger rate.
\begin{figure}
\includegraphics[width=0.45\textwidth]{f4.eps}
\caption{The merger fraction of galaxies for the same selections as in Fig. \ref{fig2}. The filled star and square are the modeled and observed dry merger fraction, respectively, for a sample of early-type galaxies as defined in \citet{2006ApJ...640..241B}. }\label{fig3}
\end{figure}
The dry merger rate in general declines strongly at $z>1$ and is two orders of magnitude smaller than the wet merger rate at $ z \sim 1.5$. We continue by splitting up the sample of dry mergers based on the morphologies of the merging galaxies. Here we define galaxies with bulge-to-total stellar mass ratios greater than 0.6 as ellipticals and all other galaxies as spirals. The relative contribution of different types of dry mergers to the merger fraction and rate is roughly constant throughout time. The main channels of dry mergers are between two elliptical galaxies or an elliptical and a spiral galaxy. Dry mergers between spirals play almost no role and are a factor of 5 less frequent. The model predicts a large fraction of mixed mergers between a spiral and an elliptical galaxy. These are prime targets for the detection of dry mergers by tidal features \citep[e.g.][]{2005AJ....130.2647V,2008ApJ...684.1062F}. In an earlier study, \citet{2007MNRAS.381..389K} (K07) investigated the morphology of dry merger progenitors in their SAM, finding that the majority are between two late-type galaxies, in contrast to our results. There are two main reasons for this discrepancy between our models. While K07 use N-body simulations to follow the merging history of their model galaxies, we apply the dynamical friction estimate to calculate the time it takes galaxies to merge once their haloes merge. It has been argued, e.g. by K07, that this time scale is shorter than the actual merging time scale and hence would overproduce mergers. It is interesting to note that merger rate estimates based on the dynamical friction time scale in various SAMs or halo occupation models are in good agreement with the observations of the merger rate by \citet{jogee}. In contrast, merger rates from SAMs based on N-body simulations following sub-haloes are systematically lower than the observations (Hopkins et al., in prep.).
Main problems in the sub-halo scheme for merging are the overly effective stripping of dark matter from the sub-haloes due to missing baryons, and hence too long merging time scales, as well as the calculation of the merging time scale of the satellite galaxy once its hosting sub-halo has fallen below the halo resolution limit (Hopkins et al., in prep.). At this point it is still open which of the two schemes gives the more physically robust results.
The second main reason for the differences between K07 and our results is the star formation efficiency. While we use a constant efficiency based on the local Schmidt-Kennicutt relation \citep{1998ApJ...498..541K}, they use an efficiency that scales as $M_{DM}^{0.73}$ at constant gas mass. As a consequence, star formation in massive galaxies, which predominantly live in the most massive haloes, will be more efficient, leaving them devoid of gas but with massive stellar discs in their model. As seen in Fig. 10 of \citet{2005ApJ...631...21K}, their colour-magnitude relation shows an excess of very luminous blue galaxies, most likely associated with late-type galaxies, that subsequently take part in mergers. It should be noted, however, that simulations of dry late-type mergers in general do not reproduce the kinematics and surface brightness profiles of the most massive elliptical galaxies \citep[e.g.][]{2006MNRAS.369..625N}.
\begin{figure}
\includegraphics[width=0.45\textwidth]{f5.eps}
\caption{Fraction of 1:1 to 1:2 (solid line) and 1:1 to 1:3 (dot-dashed line) mergers as a function of remnant stellar mass within the redshift interval $ 0 < z < 0.06$. Error bars are Poissonian. Inset graph: number density of all mergers (solid line) and of wet mergers only as a function of remnant mass within the same redshift interval. The black vertical dashed line indicates $M_{*,c}$.}\label{fig5}
\end{figure}
\section{A New Way to Detect Dry Mergers}
In the last section, we argued that it is not straightforward to measure the dry merger rate from the evolution of the luminosity function. Here we propose a novel approach that is readily accessible to observations. In Fig.
\ref{fig5} we show the ratio of 1:1 mergers to 1:2 mergers and 1:3 mergers as a function of remnant galaxy mass. Here we count the number of mergers, dry and wet, with different mass ratios that occurred within the redshift interval $ 0 < z < 0.06$. Galaxies with masses $M_{*} < M_{*,c}$ do not show any strong variation in their relative merger rates. Only on going to more massive galaxies does the relative contribution from different mass ratios start to change. The variation in the merger rate is most pronounced at a mass scale of $M_{*,c}$, where equal mass mergers dominate the overall merger rate. Galaxies participating in these mergers generally live in halos with masses above $M_{DM,crit}$. The sudden increase in equal mass mergers is a direct consequence of the shut-off mass scale $M_{DM,crit}$. Galaxies that have just reached $M_{*,c}$ do not grow by star formation anymore: their main channel of growth is mergers. In effect, galaxies grow until they reach $M_{*,c}$ and then stall until they merge with another galaxy of similar mass. Once galaxies have passed $M_{*,c}$, the relative fractions of equal and unequal mass mergers approach the values below $M_{*,c}$. Another signature of the shut-off mass scale is imprinted in the overall number of dry and wet mergers as a function of galaxy mass (see inset graph of Fig. \ref{fig5}). At $M_{*,c}$ the number density of dry mergers is enhanced, because galaxies grow until they reach $M_{*,c}$ and then wait to merge. The contribution of wet mergers to the number density of all mergers drops very steeply at masses larger than $M_{*,c}$ and allows us to clearly separate the dry merger activity region. We find a peak in the number density of mergers around $M_{*,c}$, which is around the same scale reported in a study of pair counts by
\citet{2008arXiv0806.0018P}.
\section{Conclusion}
In this Letter, we predicted properties of dry mergers in a model that assumes a critical shut-off mass scale for cooling of gas. The impact on the galaxy population and the merger rates can be summarized as follows. The high-mass end of the luminosity function is dominated by continued dry mergers.
At any redshift $z \le 1$, $10 \% - 20 \%$ of massive galaxies have experienced a dry merger within the last Gyr. We find a dry merger rate of $ \sim 6 \times 10^{-5}$ Gyr$^{-1}$ Mpc$^{-3}$, and that the number density of dry major mergers is significantly increased with respect to a model without a shut-off mass scale. The relative fraction of equal mass mergers is enhanced with respect to unequal mass mergers at $M_{*,c}=6.3 \times 10^{10}$ M$_{\odot}$, which marks the transition of galaxies from being predominantly formed in gaseous mergers or through star formation in discs to dry mergers. In a model where the transition between star forming and non-star forming galaxies is regulated by a
physical process that does not result in a sharp shut-off mass scale, the relative rates of equal to unequal mass mergers do not show a significant change with mass (e.g. K07).
Around the same mass scale, the merger rate is enhanced with respect to the general trend of a decreasing merger rate with mass, consistent with recent observations by \citet{2008arXiv0806.0018P}. All of these features can be explained by considering that galaxies grow through star formation in discs only until their host halos reach $M_{DM,crit}$ and their supply of fuel in the form of cold gas stops. At this mass scale, efficient shock heating kicks in,
as well as the gas becoming prone to efficient heating from various feedback sources \citep{2006MNRAS.368....2D}. Galaxies on average have masses of $M_{*,c}$ when this is the case, and can only grow to become more massive by mergers.
As a result, the relation between central galaxy stellar mass and host dark matter halo mass will become shallower, so that unequal mass dark halo mergers result in similar mass galaxy mergers (see also Hopkins et al., in prep.).
The results presented here can be used to test the existence of a shut-off mass scale (see e.g. \citealt{2009ApJ...695..900Y} for a recent study based on a galaxy group catalogue, arguing that this mass scale must be larger than $10^{12.5}$ M$_{\odot}$). If indeed various physical processes conspire to generate a characteristic mass scale, it should leave its fingerprint in the equal mass merger rate. Using systematic surveys of galaxies to count pair statistics, one can measure the relative fraction of equal to unequal mass mergers and look for a change as a function of mass. This approach is rather insensitive to the difficulties of observing changes in luminosity functions, morphologies of galaxies, or signs of interactions, and therefore should be able to provide robust results even at larger redshifts.
\\
We would like to thank the referee for his valuable comments, as well as Shardha Jogee, Gary Mamon, Michael Brown and Avishai Dekel for helpful comments that improved the manuscript.
\section{Introduction}
Recent cosmological data suggest that $26.8$ percent of the energy content of the Universe is in the form of dark matter \citep{Planck2015}, but no compelling understanding of its nature has been possible
so far. Galaxy-structure studies provide an efficient test of various dark matter candidates, as dark matter plays a key role on this scale. The understanding of the kinematics of galaxy clusters \citep[][]{Zwicky1937} and galactic rotation curves \citep[e.g.,][]{Rubin1978,Rubin1985,Mathewson1992,Prugniel1998} all require dark matter. When we compare different dark matter models with galactic rotation curves, it is crucial to estimate the mass of the baryonic (luminous) component accurately.
The baryonic mass density of the galaxy can be calculated from the luminosity distribution, assuming a certain mass-to-light ($M/L$) ratio. There are five basic techniques employed to estimate this ratio: a) using tabulated relations between color and $M/L$ \citep[e.g.,][]{Bell2001}, b) modeling broadband photometry \citep[e.g.,][]{Sawicki1998}, c) modeling moderate-resolution spectra \citep[e.g.,][]{Giallongo1998}, d) the analysis of CMDs in nearby galaxies with resolved stellar populations \citep[e.g.][]{Dalcanton2012}, and e) dynamical modeling via the Jeans equation in early-type galaxies \citep[e.g.,][]{Cappellari2013} or in
intermediate- and late-type disks \citep[e.g.,][]{Bershady2010,Martinsson2013}. In this paper we employ the first method.
The accuracy of the color--mass-to-light relations \citep[CMLR, e.g.,][]{Bruzual2003,McGaugh2014} highly depends on the assumed initial mass function (IMF) of the employed stellar population synthesis model, the variations in the star formation histories of galaxies, the distribution of stellar metallicities, and the contribution of stars in bright but short-lived phases of evolution (e.g., TP-AGB stars). According to \citet{McGaugh2014}, the semi-empirical stellar population synthesis model of \citet{Bell2003} provides the most consistent stellar masses across different photometric bands. They modified the corresponding CMLR to achieve the best match between the stellar masses calculated in different photometric bands.
When the Newtonian law of gravity is employed to deduce the rotational curve of the luminous component, the model curve is qualitatively different from the curve emerging from spectroscopic measurements (Doppler shift of the spectral lines). Either gravity is not well understood and needs refinement on galactic scales, or there is an invisible contribution to the mass of the galaxy, which interacts only gravitationally. In this paper we compare the compatibility with galactic rotation curves of three frequently used dark matter models and of the pure baryonic model.
The Navarro-Frenk-White (hereafter NFW) dark matter model is motivated by cold dark matter simulations. \citet{NFW1997} used high-resolution N-body simulations to study the equilibrium density profiles of dark matter, and found that their halos have the same shape regardless of the halo mass, initial density fluctuations, and cosmological parameters. The NFW model has a divergent central density, and it is cuspy, the density scaling as $r^{-1}$ ($r$ being the radial distance).
\citet{Einasto1965} proposed the density $\rho \sim \exp (-A r^{\alpha})$ for a spherical stellar system, able to model both steep and shallow profiles. The Einasto model is formally similar to Sersic's law, but it is fitted to the spatial density, whereas the latter describes the projected surface density. \citet{Merritt2006} pointed out that Sersic's law is also an excellent description of $N$-body dark matter halos (see references therein).
The pseudo-isothermal sphere (hereafter PSE) halo has no cosmological motivation, but it often fits the rotational curves better than NFW \citep[][]{deBlok2002,Kuzio2008}, as the PSE profile exhibits finite density at the center of the halo \citep[e.g.,][]{Chemin2011}.
For each density profile, the rotational velocity of the halo can be fitted to the spectroscopically measured curves. We carried out such fits both for the pure baryonic model and for each of the three frequently used dark matter profiles (NFW, Einasto, and PSE). We have chosen $15$ high surface brightness (HSB) and $15$ low surface brightness
(LSB) galaxies for this purpose.
In Section \ref{galaxy_rotation_curves} we generate the spatial luminosity density of the baryonic component from the projected surface brightness profile of the galaxies. We also summarize the contributions of the baryonic and the dark matter components to the rotational velocity. In Section \ref{section_conf} we present the model fit results regarding the spatial luminosity density of the baryonic components. In Section \ref{bestfitrot} we compare the rotational velocity models with the spectroscopic rotational curves. In Section \ref{stat_ranking} we investigate the relevance of the dark matter models. In Section \ref{summary} we summarize the results.
The $\Lambda$CDM cosmological model is adopted throughout the paper, with the Hubble constant $H_0 = 67.8~\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and (baryonic+dark) matter density $\Omega_m = 0.308$ \citep{Planck2015}.
\section{Galactic rotation curves}
\label{galaxy_rotation_curves}
\subsection{Contribution of baryonic matter}
The baryonic rotational curves are derived based on the distribution of the luminous matter, which is deduced from the surface brightness of the galaxy.
The surface brightness $S$ is the radiative flux $F$ per solid angle $\Delta \Omega$ of the image, such that $S \approx F /\Delta \Omega$; it is a function of the redshift and independent of the distance $D$ of the emitting surface in a Friedmann universe. The observed surface brightness $S_\mathrm{obs}$ in units of $L_\odot/kpc^2$ can alternatively be expressed as the quantity $\mu$ in units of $mag/arcsec^2$:
\begin{equation}
S_\mathrm{obs}(R)=4.255\times 10^{14}\times 10^{(0.4(\mathcal{M}_{\odot}-\mu(R)))},
\label{eq:mutrafo}
\end{equation}
where $R$ is the distance measured from the center of the galaxy in the galaxy plane, and $\mathcal{M}_{\odot}$ is the absolute brightness of the Sun in units of $mag$. We translated $\mu(R)$ into $ S_\mathrm{obs}(R)$ using Eq. (\ref{eq:mutrafo}), which is valid in the local Universe ($z\ll1$), with the $(1+z)^3$ factor suppressed.
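As an illustrative aside (not part of the original analysis), Eq. (\ref{eq:mutrafo}) translates directly into code; the default solar absolute magnitude below is only an assumed placeholder and must be replaced by the value for the photometric band actually used:

```python
def surface_brightness(mu, m_sun=4.83):
    """Eq. (1): convert surface brightness mu [mag/arcsec^2]
    to S_obs [L_sun/kpc^2].  m_sun is the solar absolute magnitude
    in the chosen band; 4.83 (V band) is an assumed default."""
    return 4.255e14 * 10.0 ** (0.4 * (m_sun - mu))
```

As expected from the magnitude scale, a surface brightness 5 mag fainter corresponds to a luminosity density lower by a factor of 100.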
We followed \cite{Tempel2006} to derive the surface brightness, assuming that the spatial luminosity density distribution of each visible component is given by
\begin{equation}
l(a)=l(0)\exp\left[ -\left( \frac{a}{ka_0}\right)^{{1/N}} \right].
\end{equation}
Here $l(0)=hL/(4\pi q a_0^3)$ is the central density, where $a_0$ characterizes the harmonic mean radius of the respective component, and $k$ and $h$ are scaling parameters. Furthermore, $a=\sqrt{R^2+z^2/q^2}$, where $q$ is the axis ratio, and $R$ and $z$ are cylindrical coordinates.
From the measurements the projection of $l(a)$ onto the plane of the sky perpendicular to the line of sight is derived:
\begin{equation}
S(R)=2 \sum_i^n q_i \int_R^\infty \frac{l_i(a) a}{\sqrt{a^2-R^2}}da.
\label{eq:sr}
\end{equation}
Here $S(R)$ arises as a sum for $n$ visible components, and we assumed constant axis ratios $q_i$. Equation (\ref{eq:sr}) was fit to the surface brightness of the galaxies $\mu(R)$, in order to reveal the spatial luminosity density $l(a)$.
We decomposed the baryonic model into two components, a bulge and a disk. Therefore the mass density is
\begin{equation}
\rho(a)=\sigma l_b(a)+\tau l_d(a),
\end{equation}
where $l_b(a)$ and $l_d(a)$ are the spatial luminosity density of the bulge and of the disk, respectively, and $\sigma$ and $\tau$ are the corresponding mass-to-light ($M/L$) ratios (both are given in solar units).
It follows from the Poisson equation that for spheroidal shape matter, the rotational velocity in the galactic plane induced by the $i$th baryonic component is given by \citep{Tamm2005}
\begin{equation}
V_i^2(R)=4 \pi q_i G \int_0^R \frac{\rho_i(a) a^2}{(R^2-e_i^2 a^2)^{1/2}} da,
\end{equation}
where $G$ is the gravitational constant, $e_i=(1-q_i^2)^{1/2}$ is the eccentricity of the $i$th component, and $\rho_i(a)$ is its mass density. Then a summation of $V_i^2(R)$ over all visible components gives the square of the rotational velocity of the baryonic model.
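To make the structure of the $V_i^2(R)$ integral above concrete, here is a minimal numerical sketch in $G=1$ units; the midpoint rule and its resolution are illustrative choices, not the quadrature used in the paper:

```python
import math

def v2_component(R, rho, q, n=4000):
    """Squared rotation speed in the galactic plane induced by one
    spheroidal component with axis ratio q and spatial mass density
    rho(a), in G = 1 units.  Midpoint rule with n sub-intervals."""
    e2 = 1.0 - q * q                      # squared eccentricity e_i^2
    da = R / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * da                # midpoint of the sub-interval
        total += rho(a) * a * a / math.sqrt(R * R - e2 * a * a) * da
    return 4.0 * math.pi * q * total
```

For $q=1$ the eccentricity vanishes and the expression reduces to the familiar spherical result $V^2 = GM(R)/R$, which provides a convenient sanity check.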
\subsection{Contribution of the dark matter}
For a spherically symmetric dark matter halo, the rotational velocity square is
\begin{equation}
V^2_\mathrm{DM}(r)=\frac{G M_\mathrm{DM}(r)}{r},
\end{equation}
with the spherical radial coordinate $r$, and cumulative mass within a sphere of $r$ radius
\begin{equation}
M_\mathrm{DM}(r)=4 \pi \int_0^{r} \rho_\mathrm{DM}(r') r^{'2} dr'.
\end{equation}
The NFW dark matter density profile is \citep{NFW1997}
\begin{equation}
\rho_\mathrm{NFW}(r)=\frac{\rho_s}{\left( \frac{r}{r_s}\right) \left( 1 + \frac{r}{r_s} \right)^2},
\end{equation}
where $\rho_s$ and $r_s$ are the characteristic density and scale distance. The contribution of this dark matter to the rotational velocity squared at radial distance $r$ is
\begin{equation}
V^2_\mathrm{NFW}(r)=4\pi G \rho_s\frac{r_s^3}{r} \left[ \ln \left(1+\frac{r}{r_s}\right)-\frac{r}{r_s} \left(\frac{1}{1+\frac{r}{r_s}}\right) \right].
\end{equation}
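The NFW velocity formula above is closed-form and straightforward to evaluate; the sketch below (in $G=1$ units unless a physical $G$ is supplied) is an illustration, not code from the paper:

```python
import math

def v2_nfw(r, rho_s, r_s, G=1.0):
    """Squared circular velocity of an NFW halo with characteristic
    density rho_s and scale radius r_s."""
    x = r / r_s
    return 4.0 * math.pi * G * rho_s * r_s ** 3 / r * (math.log1p(x) - x / (1.0 + x))
```

Multiplying by $r$ recovers $G M_\mathrm{NFW}(r)$, so the expression can be cross-checked against a direct numerical integration of the density profile.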
The Einasto dark matter profile is described by \citep[e.g.,][]{Merritt2006}
\begin{equation}
\rho_\mathrm{E}(r)=\rho_e \exp \left\lbrace - d_n \left[ \left(\frac{r}{r_e}\right)^{1/n} -1 \right] \right\rbrace,
\end{equation}
where $n$ is a positive parameter. The term $d_n$ is a function of $n$ with the property that $\rho_e$ is the density at $r_e$ defining a half-mass radius. An empirical relation between $d_n$ and $n$ is \citep[][]{Merritt2006}
\begin{equation}
d_n\approx 3n-\frac{1}{3}+\frac{0.0079}{n}.
\end{equation}
The total dark matter mass is
\begin{equation}
M_\mathrm{E,tot}=4 \pi \rho_\mathrm{E,0} h^3 n \Gamma(3n),
\end{equation}
with the central density $\rho_\mathrm{E,0}=\rho_e e^\mathrm{d_n}$, complete Gamma function $\Gamma(3n)$, and $h=r_e/d_n^n$ \citep{Retana2012}.
This dark matter contributes to the rotational velocity squared as
\begin{equation}
V^2_\mathrm{E}(r)=\frac{G M_{\mathrm{E,tot}}}{r} \left[ 1-\frac{\Gamma(3n,s^{1/n})}{\Gamma(3n)} \right],
\end{equation}
with the upper incomplete Gamma function $\Gamma(3n,s^{1/n})$, and $s=d_n^n r/r_e$.
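Since incomplete Gamma functions are not always at hand, the Einasto velocity formula above can equivalently be obtained by integrating the density profile directly; the following sketch ($G=1$ units, midpoint rule, illustrative resolution) takes that route:

```python
import math

def rho_einasto(r, rho_e, r_e, n):
    """Einasto density with the empirical d_n(n) relation."""
    d_n = 3.0 * n - 1.0 / 3.0 + 0.0079 / n
    return rho_e * math.exp(-d_n * ((r / r_e) ** (1.0 / n) - 1.0))

def v2_einasto(r, rho_e, r_e, n, steps=20000, G=1.0):
    """Squared circular velocity via direct integration of the
    enclosed mass, sidestepping the incomplete Gamma function."""
    dr = r / steps
    mass = 0.0
    for i in range(steps):
        x = (i + 0.5) * dr
        mass += rho_einasto(x, rho_e, r_e, n) * x * x * dr
    return 4.0 * math.pi * G * mass / r
```

At large radii, $r V^2_\mathrm{E}(r)/G$ converges to the analytic total mass $M_\mathrm{E,tot}=4\pi \rho_\mathrm{E,0} h^3 n \Gamma(3n)$, which serves as a consistency check.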
The PSE density profile is given by \citep[e.g.,][]{Jimenez2003}
\begin{equation}
\rho_\mathrm{P}(r)=\rho_\mathrm{P,0} \left[ 1+\left(\frac{r}{r_c}\right)^2\right]^{-1},
\end{equation}
where $\rho_\mathrm{P,0}$ is the central density, and $r_c$ scales the size of the core.
For this model the contribution to the square of the rotational velocity reads
\begin{equation}
V^2_\mathrm{P}(r)=4 \pi G \rho_\mathrm{P,0} r_c^2 \left[1- \frac{r_c}{r} \arctan \left( \frac{r}{r_c}\right)\right].
\end{equation}
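The expression for $V^2_\mathrm{P}$ above is again closed-form; a minimal sketch in $G=1$ units by default (an illustration, not code from the paper):

```python
import math

def v2_pse(r, rho0, r_c, G=1.0):
    """Squared circular velocity of the pseudo-isothermal sphere
    with central density rho0 and core radius r_c."""
    return 4.0 * math.pi * G * rho0 * r_c ** 2 * (1.0 - (r_c / r) * math.atan(r / r_c))
```

For $r \gg r_c$ the velocity approaches the constant $\sqrt{4\pi G \rho_\mathrm{P,0}}\, r_c$, i.e. the flat rotation curve that motivates the isothermal form.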
Taking into account both visible and dark matter, the rotational velocity squared in the galactic plane becomes:
\begin{equation}
V^2(R)=V^2_\mathrm{b}(R)+V^2_\mathrm{d}(R)+V_\mathrm{DM}^2 (R,z=0).
\end{equation}
This was fit to the spectroscopic rotational curves for each of the dark matter models (NFW, Einasto, and PSE).
\begin{table*}
\centering
\caption{Best-fit parameters describing the luminosity density distribution of the baryonic matter of $15$ HSB spiral and $15$ LSB galaxies. The surface brightness photometry of the galaxies
that were fitted with the models is taken from $^{1}$\citet{Palunas2000}, $^2$\citet{vanderHulst1993}, $^{3}$\citet{deBlok1996}, $^{4}$\citet{deBlok1995}, $^5$\citet{Kim2007}, and $^{6}$\citet{deBlok2008}. The superscript asterisk indicates galaxies for which a bulge component alone fully describes the surface brightness density.}
\label{table:gx_phot}
\resizebox{\textwidth}{!}{\begin{tabular}{lcccccccccc}
\hline
\hline
ID & $z$ & $l(0)_b$ & $ka_{0,b}$ & $N_b$ & $q_b$ & $l(0)_{d}$ & $ka_{0,d}$ & $N_d$ & $q_d$\\
& & $10^9 L_\odot /kpc^3$ & $kpc$ & & & $10^7 L_\odot /kpc^3$ & $kpc$ & &\\
\hline
ESO215G39$^1$ & $0.014$ & $0.349 \pm 0.003$ & $0.404 \pm 0.013$ & $0.956 \pm 0.042$ & $0.742\pm0.040$ & $3.113 \pm 0.007$ & $6.082 \pm 0.002$ & $0.432 \pm 0.003$ & $0.098\pm0.010$\\
ESO322G76$^1$ & $0.015$ & $1.3 \pm 0.003$ & $0.720 \pm 0.011$ & $0.780 \pm 0.022$ & $0.810\pm0.018$ & $35.62 \pm 0.07$ & $3.5 \pm 0.002$ & $0.87 \pm 0.003$ & $0.100\pm0.002$\\
ESO322G77$^1$ & $0.008$ & $7.3 \pm 0.006$ & $0.140 \pm 0.001$ & $1.300 \pm 0.002$ & $0.640\pm0.006$ & $42 \pm 0.02$ & $2.800 \pm 0.002$ & $0.760 \pm 0.011$ & $0.140\pm0.003$\\
ESO322G82$^1$ & $0.015$ & $4.3 \pm 0.001$ & $0.21 \pm 0.005$ & $1.66 \pm 0.02$ & $0.66\pm0.080$ & $12.6 \pm 0.01$ & $8.58 \pm 0.056$ & $0.659 \pm 0.006$ & $0.08\pm0.034$\\
ESO323G25$^1$ & $0.014$ & $2.309 \pm 0.005$ & $0.458 \pm 0.005$ & $0.535 \pm 0.006$ & $0.462\pm0.008$ & $56.58 \pm 0.04$ & $2.467 \pm 0.002$ & $1.055 \pm 0.008$ & $0.154\pm0.002$\\
ESO374G02$^1$ & $0.009$ & $58.4 \pm 0.001$ & $0.07 \pm 0.004$ & $1.92 \pm 0.04$ & $0.66\pm0.04$ & $158.0 \pm 0.01$ & $1.48 \pm 0.04$ & $1.40 \pm 0.018$ & $0.08\pm0.02$\\
ESO375G12$^1$ & $0.010$ & $55.4 \pm 0.001$ & $0.08 \pm 0.003$ & $1.80 \pm 0.03$ & $0.67\pm0.05$ & $88.3 \pm 0.01$ & $2.70 \pm 0.029$ & $1.33 \pm 0.008$ & $0.08\pm0.01$\\
ESO376G02$^1$ & $0.014$ & $28.3 \pm 0.01$ & $0.095 \pm 0.002$ & $1.64 \pm 0.02$ & $0.67\pm0.41$ & $51.0 \pm 0.01$ & $3.20 \pm 0.087$ & $1.01 \pm 0.02$ & $0.08\pm0.17$\\
ESO383G02$^1$ & $0.021$ & $56.056 \pm 0.05$ & $0.130 \pm 0.002$ & $1.20 \pm 0.001$ & $0.43\pm0.04$ & $16 \pm 0.05$ & $ 2.6\pm 0.003$ & $1.200 \pm 0.003$ & $0.200\pm0.005$\\
ESO383G88$^1$ & $0.014$ & $3.89 \pm 0.05$ & $0.100 \pm 0.012$ & $1.61 \pm 0.089$ & $0.68\pm0.09$ & $17.2 \pm 0.00$ & $ 5.61\pm 0.03$ & $0.776 \pm 0.005$ & $0.08\pm0.04$\\
ESO445G19$^1$ & $0.016$ & $6.048 \pm 0.004$ & $0.129 \pm 0.003$ & $1.519 \pm 0.029$ & $0.741\pm0.054 $ & $12.40 \pm 0.08$ & $5.631 \pm 0.008$ & $0.744 \pm 0.003$ & $0.148\pm0.002$\\
ESO446G01$^1$ & $0.023$ & $2.2 \pm 0.001$ & $0.740 \pm 0.007$ & $1.100 \pm 0.017$ & $0.42\pm0.020$ & $8.000 \pm 0.006$ & $4.4 \pm 0.001$ & $1.1 \pm 0.007$ & $0.19\pm0.008$\\
ESO502G02$^1$ & $0.013$ & $23.5 \pm 0.003$ & $0.09 \pm 0.001$ & $1.60 \pm 0.01$ & $0.80\pm0.043$ & $106.8 \pm 0.01$ & $2.16 \pm 0.023$ & $1.06 \pm 0.01$ & $0.08\pm0.010$\\
ESO509G80$^1$ & $0.022$ & $0.972 \pm 0.001$ & $0.666 \pm 0.022$ & $0.963 \pm 0.031$ & $0.986\pm0.023$ & $2.036 \pm 0.001$ & $11.304 \pm 0.001$ & $0.564 \pm 0.005$ & $0.265\pm0.005$\\
ESO569G17$^1$ & $0.013$ & $3.815 \pm 0.003$ & $0.385 \pm 0.023$ & $0.590 \pm 0.06$ & $0.713\pm0.063$ & $174 \pm 2$ & $1.643 \pm 0.067$ & $0.902 \pm 0.018$ & $0.146\pm0.011$\\
\hline
ID & $z$ & $l(0)_b$ & $ka_{0,b}$ & $N_b$ & $q_b$ & $l(0)_{d}$ & $ka_{0,d}$ & $N_d$ & $q_d$\\
& & $10^7 L_\odot /kpc^3$ & $kpc$ & & & $10^6 L_\odot /kpc^3$ & $kpc$ & &\\
\hline
F561-1$^2$ & $0.016$ & $2.235\pm0.002$ & $0.877\pm0.098$ & $1.045\pm0.088$ & $0.894\pm0.085$ & $1.731\pm0.001$ & $9.482\pm0.036$ & $0.138\pm0.048$ & $0.292\pm0.016$\\
F563-1$^{3 \star}$ & $0.012$ & $61.08\pm0.20$ & $0.174\pm0.015$ & $2.128\pm0.019$ & $0.855\pm0.019$ & - & - & - & -\\
F568-3$^4$ & $0.019$ & $1.561\pm0.007$ & $2.290\pm0.014$ & $0.649\pm0.018$ & $0.936\pm0.028$ & $6.368\pm0.05$ & $11.087\pm0.02$ & $0.251\pm0.002$ & $0.100\pm0.002$\\
F579-V1$^3$ & $0.021$ & $1.639\pm0.002$ & $1.283\pm0.026$ & $0.574\pm0.054$ & $0.888\pm0.051$ & $7.342\pm0.01$ & $6.741\pm0.004$ & $0.601\pm0.005$ & $0.262\pm0.007$\\
F583-1$^{3 \star}$ & $0.008$ & $6.059\pm0.008$ & $0.390\pm0.004$ & $1.629\pm0.007$ & $0.625\pm0.006$ & - & - & - & -\\
F730-V1$^5$ & $0.036$ & $4.351\pm0.01$ & $1.120\pm0.025$ & $1.217\pm0.031$ & $0.816\pm0.018$ & $5.434\pm0.02$ & $9.404\pm0.004$ & $0.43\pm0.076$ & $0.11\pm0.014$\\
UGC128$^{4}$ & $0.015$ & $9.360\pm0.005$ & $0.356\pm0.014$ & $1.595\pm0.026$ & $0.869\pm0.034$ & $2.983\pm0.051$ & $11.362\pm0.005$ & $0.690\pm0.08$ & $0.190\pm0.026$\\
UGC1230$^2$ & $0.012$ & $1.9\pm0.05$ & $0.818\pm0.028$ & $1.002\pm0.027$ & $0.60\pm0.018$ & $9.7\pm0.004$ & $4.3\pm0.005$ & $1.0\pm0.045$ & $0.11\pm0.011$\\
UGC5750$^{4}$ & $0.014$ & $12\pm0.05$ & $0.270\pm0.004$ & $1.5\pm0.021$ & $1.0\pm0.024$ & $6.6\pm0.003$ & $8.8\pm0.006$ & $0.430\pm0.051$ & $0.13\pm0.009$\\
UGC6614$^5$ & $0.021$ & $167.8\pm0.5$ & $0.270\pm0.005$ & $1.47\pm0.016$ & $0.65\pm0.05$ & $5.7\pm0.03$ & $15.8\pm0.008$ & $0.752\pm0.069$ & $0.08\pm0.02$\\
UGC10310$^5$ & $0.002$ & $9.4\pm0.01$ & $0.44\pm0.014$ & $0.88\pm0.044$ & $0.71\pm0.01$ & $17.5\pm0.001$ & $1.4\pm0.12$ & $1.1\pm0.06$ & $0.12\pm0.01$\\
UGC11454$^5$ & $0.022$ & $127.5\pm5$ & $0.206\pm0.011$ & $1.4\pm0.062$ & $0.80\pm0.02$ & $64.4\pm0.001$ & $3.47\pm0.099$ & $1.09\pm0.031$ & $0.09\pm0.01$\\
UGC11616$^5$ & $0.017$ & $83.1\pm0.01$ & $0.545\pm0.043$ & $1.23\pm0.096$ & $0.66\pm0.04$ & $43.8\pm0.002$ & $5.5\pm0.31$ & $0.844\pm0.069$ & $0.08\pm0.02$\\
UGC11748$^5$ & $0.018$ & $114.4\pm5$ & $0.980\pm0.010$ & $0.847\pm0.014$ & $0.80\pm0.04$ & $286.1\pm0.5$ & $3.1\pm0.17$ & $1.19\pm0.046$ & $0.08\pm0.03$\\
UGC11819$^5$ & $0.014$ & $88.01\pm0.1$ & $0.197\pm0.002$ & $1.081\pm0.013$ & $0.991\pm0.013$ & $130\pm1$ & $5.789\pm0.002$ & $0.753\pm0.015$ & $0.113\pm0.001$ \\
\hline
\end{tabular}}
\end{table*}
\section{Best-fit surface brightness profile of the galaxies}
\label{section_conf}
We calculated the $S_\mathrm{obs}(R)$ of the galaxies from $\mu(R)$, given in the literature. The ESO HSB galaxies were imaged in I band \citep{Palunas2000}. Eight of the $15$ LSB galaxies were detected in R band \citep[F561-1, UGC1230,][]{vanderHulst1993}, \citep[F563-1, F579-V1, F583-1,][]{deBlok1996}, and \citep[F568-3, UGC128, UGC5750,][]{deBlok1995}, and $7$ in V band \citep[F730-V1, UGC10310, UGC11454, UGC11616, UGC11748, UGC11819, and UGC6614,][]{Kim2007}. The absolute magnitude of the Sun ($M_{\odot}$) was substituted into Eq. (\ref{eq:mutrafo}) according to the photometric filter of the observations, assuming the values of \citet{Binney1998} (Table 2.1): $M_{I,\odot}=4.08^\mathbf{m}$, $M_{R,\odot}=4.42^\mathbf{m}$, and $M_{V,\odot}=4.83^\mathbf{m}$.
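As an illustrative cross-check (the exact form of Eq. (\ref{eq:mutrafo}) is given earlier in the paper; the expression below is the standard conversion assumed here), the surface brightness $\mu$ in mag$\,$arcsec$^{-2}$ can be turned into a surface luminosity density in $L_\odot\,\mathrm{pc}^{-2}$ as follows:

```python
# Hedged sketch (assumed standard form, not quoted from the paper):
# mu = M_sun + 21.572 - 2.5*log10(S / (L_sun pc^-2)), inverted for S.
def surface_luminosity(mu, M_sun):
    """Surface luminosity density in L_sun / pc^2 from mu in mag/arcsec^2."""
    return 10.0 ** (0.4 * (M_sun + 21.572 - mu))

# I band, with the solar absolute magnitude quoted above (4.08 mag)
# and an illustrative surface brightness of 20 mag/arcsec^2:
S_I = surface_luminosity(20.0, 4.08)
```

By construction, $\mu = M_\odot + 21.572$ corresponds to exactly $1\,L_\odot\,\mathrm{pc}^{-2}$.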
Then, using Eq. (\ref{eq:sr}) and varying the parameters $q$, $l(0)$, $ka_0$, and $N$, the spatial luminosity densities were fitted to the surface brightness of the galaxies with nonlinear least-squares methods. We set the initial value of the axial ratio of the components to $q=0.7$ for the bulge and $q=0.1$ for the disk, the most frequently used values for nearby galaxies \citep{Tamm2005}. We set a lower limit of $0.4$ for the bulge and an upper limit of $0.3$ for the disk based on SDSS results \citep{Padilla2008,Rodriguez2013}. Using the photometry of the $15$ HSB and $15$ LSB galaxies, we decomposed the surface brightness profile into bulge and disk components. The fit results are given in Table \ref{table:gx_phot} and shown in Figures \ref{fig:bright_fit_plots1} and \ref{fig:bright_fit_plots2} in the Appendix. In Table \ref{table:gx_phot} we also list the redshift $z$ of the galaxies from the NASA/IPAC Extragalactic Database, confirming that all galaxies belong to the local Universe. For the LSB galaxies F563-1 and F583-1, the surface brightness distribution indicates only a bulge component. Knowing the spatial distribution of the luminous matter $l(a)$, we can construct the mass model, and consequently the rotational velocity curve of the baryonic component. In the following subsections we present the fitting results with the baryonic matter and three different dark matter density profiles.
\section{Best-fit galaxy rotational curves}
\label{bestfitrot}
We proceeded in the same way for the HSB and LSB galaxies. First we fitted the surface brightness distributions with the model described in Section \ref{galaxy_rotation_curves}, inferring the luminosity distribution of the baryonic component. When fitting the dark matter models, we included the baryonic component with fixed $M/L$ ratios, as described in the next subsection. We applied the nonlinear least-squares method with $1/\mathrm{error}^2$ weights, minimizing the residual sum of squares ($\chi^2$) between the data and the model.
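The weighted minimization can be sketched as follows. The one-parameter flat rotation curve of amplitude $v_0$ below is a hypothetical stand-in for the actual baryonic + dark matter velocity models; it serves only to show the $1/\mathrm{error}^2$-weighted $\chi^2$ being minimized:

```python
import numpy as np

# Minimal sketch of the weighted least-squares step: the quantity
# chi^2 = sum((v_obs - v_model)^2 / error^2) is minimized. The flat
# rotation curve of amplitude v0 is purely illustrative, not one of
# the velocity models fitted in the text.
def chi2(v0, v_obs, err):
    return np.sum((v_obs - v0) ** 2 / err ** 2)

v_obs = np.array([148., 152., 150., 149., 151., 150., 147., 153., 150., 150.])
err = np.full(10, 5.0)   # illustrative uniform 5 km/s uncertainties

grid = np.linspace(100.0, 200.0, 1001)                       # trial amplitudes
best = grid[np.argmin([chi2(v, v_obs, err) for v in grid])]  # weighted best fit
```

For the actual models, the same weighted $\chi^2$ is minimized with a standard nonlinear least-squares routine over several parameters simultaneously.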
\subsection{Estimated mass-to-light ratio of the bulge and the disk}
\label{estimated_ml}
\begin{table*}
\centering
\caption{Coefficients of the color--mass-to-light ratio relations employed in this paper.}
\label{table:cmlrs}
\begin{tabular}{lccccccc}
\hline
Ref & Color index & $\alpha_V$ & $\beta_V$ & $\alpha_R$ & $\beta_R$ & $\alpha_I$ & $\beta_I$\\
\hline
\cite{Bell2003}& $B-V$ & -0.628 & 1.305 & -0.520 & 1.094 & -0.399 & 0.824\\
& $B-R$ & -0.633& 0.816 & -0.523 & 0.683 & -0.405 & 0.518 \\
\hline
\cite{McGaugh2014}& $B-V$ & -0.628 & 1.305 & - & - & -0.275 & 0.615 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Color indices of the chosen galaxies (\citet{deVaucouleurs1992}$^1$, \citet{Lauberts1989}$^2$, \citet{McGaugh1997}$^3$, \citet{SDSSDR6}$^4$, \citet{Stark2009}$^5$, \citet{Kim2007}$^6$), their gas mass fraction ($f_g$), total luminosity in the $i$ photometric band ($L_i$), stellar ($M_\star$) and gas masses ($M_\mathrm{gas}$), and the estimated $M/L$ ratios for the bulge ($\Upsilon_b$) and for the disk ($\Upsilon_d$).}
\label{table:mtolight}
\begin{tabular}{lccccccccccc}
\hline
\hline
ID & $B-V$ & $B-R$ & Ref. & $f_g$ & $L_{i}$ & $M_{\star}$ & $M_\mathrm{gas}$ & $\Upsilon_b$ & $\Upsilon_d$\\
& ($^{m}$) & ($^{m}$) & & & $\times10^8 L_\odot$ & $\times10^8 M_\odot$ & $\times10^8 M_\odot$ & ($M_{\odot}/L_\odot$) & ($M_{\odot}/L_\odot$)\\
\hline
ESO215G39 & 0.54 &- & 1 & 0.38 & 26.3 & 29.9 & 18.5 & 1.14 & 1.84\\
ESO322G76 &- & 0.13 & 2 & 0.26 & 296 & 136 & 48.4 & 0.46 & 0.62\\
ESO322G77 &- & 0.72 & 2 & 0.52 & 9.11 & 8.48 & 9.34 & 0.93 & 1.96\\
ESO322G82 &- & 0.99 & 2 & 0.24 & 10.2 & 13.1 & 4.24 & 1.28 & 1.69\\
ESO323G25 &- & 1.10 & 2 & 0.28 & 120 & 176 & 70 & 1.47 & 2.05\\
ESO374G02 &- & 0.64 & 2 & 0.27 & 9.01 & 7.58 & 2.77 & 0.84 & 1.15\\
ESO375G12 & 0.61 &- & 1 & 0.20 & 13.4 & 16.8 & 4.31 & 1.25 & 1.58\\
ESO376G02 & 0.40 &- & 1 & 0.37 & 7.31 & 6.82 & 3.98 & 0.93 & 1.48\\
ESO383G02 &- & 0.84 & 2 & 0.31 & 96.1 & 103 & 46.7 & 1.08 & 1.56\\
ESO383G88 &- & 1.08 & 2 & 0.38 & 5.32 & 7.61 & 4.73 & 1.43 & 2.32\\
ESO445G19 &- & 0.71 & 2 & 0.38 & 5.4 & 4.94 & 2.98 & 0.92 & 1.47\\
ESO446G01 &- & 0.57 & 2 & 0.26 & 231 & 180 & 63.2 & 0.78 & 1.05\\
ESO502G02 &- & 1.24 & 2 & 0.35 & 6.21 & 10.7 & 5.71 & 1.72 & 2.64\\
ESO509G80 &- & 1.04 & 2 & 0.29 & 266 & 363 & 152 & 1.37 & 1.94\\
ESO569G17 &- & 0.53 & 2 & 0.35 & 391 & 291 & 159 & 0.74 & 1.15\\
F561-1 & 0.41 &- & 3 & 0.46 & 159 & 135.00 & 115 & 0.85 & 1.57\\
F563-1 & 0.65 &- & 3 &- & 2.89 & 4.48 &- & 1.55 & -\\
F568-3 & 0.55 &- & 3 & 0.55 & 320 & 386 & 472 & 1.21 & 2.68\\
F579-V1 & 0.76 &- & 4 & 0.34 & 74.40 & 151 & 77.9 & 2.03 & 3.08\\
F583-1 & 0.39 &- & 5 &- & 0.72 & 0.58 &- & 0.81 & -\\
F730-V1 & 0.54 &- & 6 & 0.57 & 11.60 & 13.9 & 18.5 & 1.20 & 2.80\\
UGC128 & 0.60 &- & 6 & 0.72 & 1.43 & 1.96 & 5.04 & 1.37 & 4.89\\
UGC1230 & 0.52 &- & 3 & 0.80 & 11.3 & 12.7 & 50.7 & 1.12 & 5.60\\
UGC5750 & 0.53 &- & 4 & 0.67 & 1.08 & 1.25 & 2.54 & 1.16 & 3.51\\
UGC6614 & 0.72 &- & 3 & 0.45 & 2.53 & 5.18 & 4.23 & 2.05 & 3.73\\
UGC10310 & 0.42 &- & 1 & 0.74 & 3.3 & 2.75 & 7.9 & 0.83 & 3.22\\
UGC11454 & 0.47 &- & 6 & 0.67 & 5.61 & 5.46 & 11 & 1.1 & 0.82\\
UGC11616 & 0.36 &- & 6 & 0.83 & 21.7 & 15 & 71.4 & 0.69 & 3.99\\
UGC11748 & 0.38 &- & 6 & 0.80 & 78.5 & 58.3 & 226 & 0.74 & 3.62\\
UGC11819 & 0.60 &- & 6 & 0.29 & 65.2 & 93.4 & 37.9 & 1.43 & 2.01\\
\hline
\end{tabular}
\end{table*}
To estimate the $M/L$ ratios, we employed color--mass-to-light ratio relations (CMLRs). The relation between the stellar $M/L$ ratio and the color index is $\log \Upsilon_\star=\alpha_i+\beta_i(m_i-m_j)$, where $\Upsilon_\star$ is the stellar $M/L$ ratio, and $m_i-m_j$ is the color index calculated from the $i$ and $j$ photometric bands.
\citet{McGaugh2014} combined Spitzer $3.6\,\mu m$ infrared observations of a sample of disk galaxies with optical luminosities to test four different population synthesis prescriptions for computing stellar mass. In their analysis the bulge and disk were not distinguished. They found that the semi-empirical stellar population synthesis model of \citet{Bell2003} is self-consistent (self-consistency requiring that different photometric bands yield the same stellar mass), and that the revised CMLR based on the model of \citet{Bell2003} has the least scatter. This model is also one of the two population synthesis models that required the smallest corrections to the Spitzer data. Therefore we calculated $\Upsilon_\star$ with the revised CMLR of \citet{McGaugh2014}, which relies on the color index $B-V$. We note that this calibration involves a degree of circularity, as it assumes that the scatter in the baryonic Tully--Fisher relation should be minimized for all galaxies. For the HSB galaxies with known color index $B-R$, we used the original CMLR of \citet{Bell2003}. We summarize the coefficients of these CMLRs in Table \ref{table:cmlrs}.
The color indices were collected from the literature as indicated in Table \ref{table:mtolight}. When transforming the SDSS colors to the $B-V$ color index, we applied the relation $B-V=0.98(g-r)+0.22$ given by \citet{Jester2005}. Where the color indices were not corrected for extinction \citep{SDSSDR6,Kim2007}, we corrected them using Landolt standard fields.
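As a sketch, the CMLR evaluation and the SDSS color transformation above combine as follows; the coefficients are the $B-V$, $V$-band entries of Table \ref{table:cmlrs}, and the input $g-r$ color is hypothetical:

```python
# log10(Upsilon_star) = alpha + beta * (color index); the defaults are
# the B-V coefficients for the V band as tabulated in the text
# (alpha_V = -0.628, beta_V = 1.305).
def stellar_ml(color, alpha=-0.628, beta=1.305):
    return 10.0 ** (alpha + beta * color)

def sdss_to_BV(g_r):
    """Jester et al. (2005): B - V = 0.98 (g - r) + 0.22."""
    return 0.98 * g_r + 0.22

ups = stellar_ml(sdss_to_BV(0.5))   # illustrative g-r = 0.5 color
```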
We estimate the mass of the gaseous disk, $M_\mathrm{gas}$, by employing the gas mass fraction
\begin{flalign}
f_g=\frac{M_\mathrm{gas}}{M_\mathrm{gas}+M_{\star}},
\end{flalign}
where $M_{\star}$ is the stellar mass of the galaxy. There are empirical relations between the gas mass fraction, the brightness, and the color index of galaxies. \citet{McGaugh1997} found that, in general, $f_g$ computed from $B$-band and $I$-band data of spiral galaxies correspond closely. More precisely, for the $I$-band HSB galaxies the relation $f_g=0.12(M_B+23)$ holds between the gas mass fraction and the absolute magnitude of the galaxy in $B$ band \citep{McGaugh1997}. The absolute $B$ magnitudes were derived from the galaxy distances collected from \citet{Palunas2000} and the apparent $B$ magnitudes collected from the NASA/IPAC Extragalactic Database \citep{Lauberts1989}. \citet{McGaugh1997} also gave a relation between $f_g$ and the $B-V$ color index, $f_g=-1.4[(B-V)-0.95]$. We used this equation to calculate $f_g$ for the LSB galaxies with available $B-V$ indices: F730-V1, UGC10310, UGC11454, UGC11616, UGC11748, and UGC11819. For the other galaxies, $f_g$ is available directly from the literature: F561-1, F568-3, UGC128, UGC1230, UGC6614 \citep{McGaugh1997}, F579-V1, and UGC5750 \citep{Schombert2014}.
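Inverting the definition of $f_g$ gives the gas mass directly; together with the two empirical relations quoted above, this bookkeeping can be sketched as:

```python
# Inverting f_g = M_gas / (M_gas + M_star) gives
# M_gas = f_g / (1 - f_g) * M_star.
def gas_mass(f_g, M_star):
    return f_g / (1.0 - f_g) * M_star

# Empirical gas fractions quoted from McGaugh (1997):
def fg_hsb(M_B):          # HSB galaxies, from the absolute B magnitude
    return 0.12 * (M_B + 23.0)

def fg_lsb(B_V):          # LSB galaxies, from the B-V color index
    return -1.4 * (B_V - 0.95)
```

For example, a galaxy with $f_g=0.5$ has equal gas and stellar masses.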
The stellar mass-to-light ratio $\Upsilon_\star$ was taken as the bulge $M/L$, $\Upsilon_b$. Then the stellar mass, encompassed by the bulge and the disk, is $M_\star=\Upsilon_\star L$, where $L=L_b+L_d$ is the total luminosity of the galaxy, calculated from the best-fit central luminosity density of the bulge ($L_b$) and the disk ($L_d$). The corresponding scaling parameters, $k=\Gamma(2N)/\Gamma(3N)$ and $h=\Gamma ^2(3N)/[N\Gamma^3(2N)]$ \citep{Tamm2012} were calculated based on the best-fit $N$ and $ka_0$ of the bulge and disk components. The gas mass $M_\mathrm{gas}$ was derived based on $f_g$ and $M_\star$. Then the $M/L$ ratio of the disk is $\Upsilon_{d}=(M_\star+M_\mathrm{gas})/L$.
We summarize the photometric information, the $M/L$ ratios, and the total luminosities and masses in Table \ref{table:mtolight}. These $M/L$ ratios were held constant when fitting the baryonic + dark matter models to the rotation curve data.
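The scaling parameters quoted above from \citet{Tamm2012} involve only the gamma function of the best-fit structural parameter $N$; a minimal sketch of their evaluation:

```python
from math import gamma

# k = Gamma(2N) / Gamma(3N) and h = Gamma(3N)^2 / (N Gamma(2N)^3),
# evaluated from the best-fit structural parameter N of each component.
def luminosity_scalings(N):
    k = gamma(2.0 * N) / gamma(3.0 * N)
    h = gamma(3.0 * N) ** 2 / (N * gamma(2.0 * N) ** 3)
    return k, h

k, h = luminosity_scalings(1.0)   # Gamma(2)=1, Gamma(3)=2 -> k=0.5, h=4.0
```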
\subsection{HSB galaxies}
We collected the rotational velocity data of the HSB galaxy sample from \citet{Palunas2000}, who presented maximum disk models for a sample of $74$ field and cluster spiral galaxies located in the vicinity of the Hydra-Centaurus cluster. For each galaxy they had $I$-band CCD images and two-dimensional (2D) H$\alpha$ velocity maps, from which the surface brightness distribution and rotational velocity curve of each galaxy were produced. From this sample we selected $15$ galaxies that show neither bars nor rings, which would contradict the assumption of axisymmetry. We summarize the best-fit parameters in Table \ref{table:hsb_vrot}. We present the best-fit rotational velocity models and the galaxy velocity curves in Fig. \ref{fig:hsb_vrot}. We note that the error bars on the individual data points are quite large compared to the scatter in the mean data values, suggesting that they are slightly overestimated.
\subsection{LSB galaxies}
We have explored a database of LSB galaxies taken from the literature as follows: the smoothed hybrid H$\alpha$-HI rotational velocity curves of the LSB galaxies F561-1, F563-1, F568-3, F583-1, F730-V1, UGC5750, UGC10310, UGC11454, UGC11616, UGC11748, UGC11819, and UGC6614 from \citet{deBlok2001b}, the HI rotational velocity curve of F579-V1 from \citet{deBlok1996}, and the HI rotational velocity curves of UGC128 and UGC1230 from \citet{vanderHulst1993}. \citet{deBlok2001b} calculated the errors on the smoothed rotational curves as follows. The error bars consist of two components: (1) observational errors due to the measurement uncertainties in the individual raw data points, and (2) the differences between the approaching and receding sides and noncircular motions. For the final error estimate, these two uncertainties were added in quadrature. The original data for the approaching and receding sides are available in \citet{McGaugh2001}\footnote{http://astroweb.case.edu/ssm/data/RCHalpha.0701.dat}, showing only slight asymmetries. In the other four cases, no errors were published with the data. As the radial distribution of the velocities shows a quite regular pattern, we assumed $10$ percent errors for them. We have determined the best-fit parameters for each of the baryonic + NFW, baryonic + Einasto, and baryonic + PSE models; they are listed in Table \ref{table:lsb_vrot}. We present the rotational velocity curves of the best-fit models and the galaxy velocity curves in Fig. \ref{fig:lsb_vrot}.
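The quadrature combination of the two error components described above is simply:

```python
from math import hypot

# Final error bar = sqrt(sigma_obs^2 + sigma_asym^2): the observational
# error and the asymmetry/noncircular-motion term added in quadrature,
# following the de Blok et al. (2001) prescription quoted in the text.
def combined_error(sigma_obs, sigma_asym):
    return hypot(sigma_obs, sigma_asym)
```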
\begin{figure*}
\centering
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso215g39_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso322g76_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso322g77_kozos.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso322g82_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso323g25_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso374g02_kozos.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso375g12_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso376g02_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso383g02_kozos.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso383g88_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso445g19_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso446g01_kozos.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso502g02_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso509g80_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso569g17_kozos.eps}\newline
\caption{Best-fit rotational curves for the HSB galaxy sample. The dots with error bars denote archive rotational velocity curves derived from spectroscopic data. The fitted models are pure baryonic (black short-dashed curve), baryonic + NFW (red continuous curve), baryonic + Einasto (purple dashed curve), and baryonic + PSE (green continuous curve).}
\label{fig:hsb_vrot}
\end{figure*}
\begin{table*}
\caption{Parameters describing the best-fit pure baryonic, baryonic + NFW, baryonic + Einasto, and baryonic + PSE models of $15$ HSB galaxies. The rotational velocity data of the galaxies that were fit with the models are taken from \citet{Palunas2000}. When the best fit is within the $1\sigma$ confidence level, the $\chi^2$-values are indicated in boldface.}
\label{table:hsb_vrot}
\resizebox{\textwidth}{!}{\begin{tabular}{lccccccccccccccc}
\hline
\hline
& \multicolumn{3}{|c}{Baryonic} & \multicolumn{3}{|c}{NFW} & \multicolumn{4}{|c}{Einasto} & \multicolumn{3}{|c|}{PSE} & & \\
\hline
ID & $\Upsilon_b$ & $\Upsilon_d$ & $\chi^2_\mathrm{B}$ & $\rho_s$ & $r_s$ & $\chi^2_\mathrm{NFW}$ & $\rho_e$ & $r_e$ & $n$ & $\chi^2_\mathrm{E}$ & $\rho_{0}$ & $r_c$ & $\chi^2_\mathrm{P}$ & $1\sigma_\mathrm{NFW,P}$& $1\sigma_\mathrm{E}$ \\
& & & & ($M_\odot \mathrm{kpc}^{-3}$) & ($\mathrm{kpc}$) & & ($M_\odot \mathrm{kpc}^{-3}$) & ($\mathrm{kpc}$) & & & ($M_\odot \mathrm{kpc}^{-3}$) & ($\mathrm{kpc}$) & & &\\
\hline
ESO215G39 & 1.14 & 1.84 & 922.9 & 1.80E+07 & 10.74 & \textbf{5.76} & 8.04E+05 & 18.85 & 2.9 & \textbf{7.03} & 2.80E+08 & 1.26 & \textbf{6.59} & 36.3 & 35.24\\
ESO322G76 & 0.46 & 0.62 & 1064.8 & 3.74E+07 & 7.27 & \textbf{15.6} & 1.20E+04 & 120 & 6.1 & \textbf{13.62} & 5.80E+08 & 0.9 & \textbf{16.29} & 56.3 & 55.25\\
ESO322G77 & 0.93 & 1.96 & 179.15 & 1.89E+08 & 2.90 & \textbf{2.51} & 2.70E+04 & 76 & 6.5 & \textbf{3.09} & 2.15E+09 & 0.42 & \textbf{3.23} & 14.84 & 13.74\\
ESO322G82 & 1.28 & 1.69 & 663.215 & 1.37E+08 & 3.19 & 76.1 & 7.16E+05 & 15.97 & 4.01 & 88.1 & 2.01E+09 & 0.38 & 77.9 & 39.48 & 38.42\\
ESO323G25 & 1.47 & 2.05 & 324.71 & 2.80E+08 & 2.2 & \textbf{31.28} & 3.60E+03 & 120 & 9.6 & \textbf{29.01} & 7.40E+09 & 0.19 & \textbf{26.01} & 68.83 & 67.79\\
ESO374G02 & 0.84 & 1.15 & 1928.88 & 9.19E+07 & 5.35 & \textbf{37.4} & 7.83E+04 & 55.7 & 5.6 & \textbf{40.9} & 7.67E+08 & 0.90 & \textbf{37.6} & 88.58 & 87.54\\
ESO375G12 & 1.25 & 1.58 & 1052 & 2.02E+08 & 3.32 & \textbf{8.4} & 1.66E+06 & 13.98 & 3.58 & \textbf{14.0} & 1.54E+10 & 0.17 & \textbf{7.95} & 53.15 & 52.11\\
ESO376G02 & 0.93 & 1.48 & 2164.92 & 4.31E+07 & 6.09 & 200.2 & 5.33E+06 & 8.28 & 1.74 & 162.1 & 3.43E+08 & 1.09 & 158.8 & 63.61 & 62.57\\
ESO383G02 & 1.08 & 1.56 & 564.68 & 8.48E+07 & 4.5 & \textbf{5.93} & 1.62E+06 & 13.36 & 2.92 & \textbf{5.84} & 1.41E+09 & 0.54 & \textbf{6.69} & 49.64 & 41.59\\
ESO383G88 & 1.43 & 2.32 & 616.92 & 7.54E+07 & 2.94 & 57.2 & 6.01E+06 & 5.32 & 1.5 & 67.2 & 9.06E+08 & 0.40 & \textbf{53.0} & 53.15 & 52.11\\
ESO445G19 & 0.92 & 1.47 & 363.27 & 2.35E+07 & 8.23 & \textbf{3.56} & 4.60E+04 & 60 & 4.9 & \textbf{0.33} & 2.78E+08 & 1.16 & \textbf{4.26} & 40.53 & 39.48\\
ESO446G01 & 0.78 & 1.05 & 735.53 & 3.56E+07 & 8.38 & \textbf{11.88} & 8.70E+03 & 150 & 6.5 & \textbf{10.53} & 4.72E+08 & 1.11 & \textbf{10.31} & 46.85 & 45.80\\
ESO502G02 & 1.72 & 2.64 & 33.0 & 1.26E+08 &0.42 & \textbf{32.9} & 4.70E+07 & 1.2 & 0.08 & \textbf{31.8} & 8.43E+06 & 1.49 & \textbf{30.1} & 43.7 & 42.64\\
ESO509G80 & 1.37 & 1.94 & 1124.27 & 1.83E+07 & 14.41 & \textbf{8.30} & 1.30E+06 & 22 & 2.4 & \textbf{7.66} & 1.85E+08 & 2.24 & \textbf{7.42} & 39.48 & 38.42\\
ESO569G17 & 0.74 & 1.15 & 126.62 & 1.10E+07 & 13.03 & \textbf{4.64} & 1.30E+06 & 11 & 2.9 & \textbf{7.82} & 2.29E+08 & 1.21 & \textbf{5.06} & 21.36 & 20.28\\
\hline
\end{tabular}}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_f561_1_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_f563_1_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_f568_3_kozos.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_f579_v1_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_f583_1_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_f730_v1_kozos.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc128_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc1230_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc5750_kozos.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc6614_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc10310_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc11454_kozos.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc11616_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc11748_kozos.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc11819_kozos.eps}\newline
\caption{Best-fit rotational curves for the LSB galaxy sample. The dots with error bars denote archive rotational velocity curves derived from spectroscopic data. The fitted models are pure baryonic (black short-dashed curve), baryonic + NFW (red continuous curve), baryonic + Einasto (purple dashed curve), and baryonic + PSE (green continuous curve).}
\label{fig:lsb_vrot}
\end{figure*}
\begin{table*}
\caption{Parameters describing the best-fit pure baryonic, baryonic + NFW, baryonic + Einasto, and baryonic + PSE models of $15$ LSB galaxies. The rotational velocity data of the galaxies are taken from: $^{1}$\citet{deBlok2001b}, $^{2}$\citet{deBlok1996}, $^3$\citet{vanderHulst1993}. The superscript $^\star$ marks galaxies for which a bulge component fully describes the surface brightness density. Whenever the best fit is within the $1\sigma$ confidence level, the $\chi^2$-values are indicated in boldface.}
\label{table:lsb_vrot}
\resizebox{\textwidth}{!}{\begin{tabular}{lccccccccccccccc}
\hline
\hline
& \multicolumn{3}{|c}{Baryonic} & \multicolumn{3}{|c}{NFW} & \multicolumn{4}{|c}{Einasto} & \multicolumn{3}{|c|}{PSE} & & \\
\hline
ID & $\Upsilon_b$ & $\Upsilon_d$ & $\chi^2_\mathrm{B}$ & $\rho_s$ & $r_s$ & $\chi^2_\mathrm{NFW}$ & $\rho_e$ & $r_e$ & $n$ & $\chi^2_\mathrm{E}$ & $\rho_{0}$ & $r_c$ & $\chi^2_\mathrm{P}$ & $1\sigma_\mathrm{NFW,P}$& $1\sigma_\mathrm{E}$\\
& & & & ($M_\odot \mathrm{kpc}^{-3}$) & ($\mathrm{kpc}$) & & ($M_\odot \mathrm{kpc}^{-3}$) & ($\mathrm{kpc}$) & & & ($M_\odot \mathrm{kpc}^{-3}$) & ($\mathrm{kpc}$) & & &\\
\hline
F561-1$^1$ & 0.85 & 1.57 & 249.32 & 1.50E+06 & 9.04 & 7.06 & 3.53E+06 & 3.57 & 0.39 & \textbf{1.13} & 1.60E+07 & 1.40 & \textbf{4.10} & 4.72 & 3.53\\
F563-1$^{1 \star}$ & 1.55 & - & 57.49 & 9.90E+04 & 160 & \textbf{1.39} & 1.80E+06 & 11.54 & 0.14 & \textbf{0.51} & 3.36E+06 & 9.73 & \textbf{0.84} & 8.18 & 7.04\\
F568-3$^1$ & 1.21 & 2.68 & 103.32 & 2.98E+05 & 80.24 & 14.03 & 1.00E+07 & 4.70 & 0.13 & \textbf{1.71} & 1.94E+07 & 2.92 & \textbf{6.52} & 9.3 & 8.18\\
F579-V1$^2$ & 2.03 & 3.08 & 278.90 & 7.24E+07 & 3.13 & \textbf{2.88} & 1.87E+06 & 8.13 & 2.90 & \textbf{2.22} & 8.05E+08 & 0.46 & \textbf{1.75} & 12.64 & 11.54\\
F583-1$^{2 \star}$ & 0.81 & - & 683.03 & 8.67E+05 & 38.57 & \textbf{10.82} & 2.03E+06 & 9.22 & 0.99 & \textbf{0.43} & 2.60E+07 & 2.76 & \textbf{0.89} & 15.94 & 14.84\\
F730-V1$^1$ & 1.20 & 2.80 & 423.92 & 1.30E+07 & 9.83 & \textbf{5.48} & 3.76E+06 & 8.03 & 1.67 & \textbf{0.59} & 1.98E+08 & 1.21 & \textbf{1.00} & 5.89 & 4.72\\
UGC128$^3$ & 1.37 & 4.89 & 123.49 & 3.70E+04 & 420 & \textbf{8.15} & 1.88E+05 & 37.49 & 1.48 & \textbf{1.58} & 6.95E+06 & 6.74 & \textbf{0.95} & 10.42 & 9.3\\
UGC1230$^3$ & 1.12 & 5.60 & 146 & 4.20E+04 & 290 & \textbf{4.49} & 1.61E+05 & 34.17 & 1.38 & \textbf{0.70} & 4.53E+06 & 7.19 & \textbf{0.69} & 9.3 & 8.18\\
UGC5750$^1$ & 1.16 & 3.51 & 15.39 & 1.20E+03 & 2000 & \textbf{7.79} & 4.70E+05 & 19 & 0.15 & \textbf{3.98} & 5.39E+05 & 40.97 & \textbf{4.02}& 9.3 & 8.18\\
UGC6614$^1$ & 2.05 & 3.73 & 2229.43 & 6.4E+06 & 24.0 & 81.3 & 1.14E+05 & 59.89 & 3.09 & 83.42 & 9.36E+07 & 2.50 & 75.06 & 12.64 & 11.54\\
UGC10310$^1$ & 0.83 & 3.22 & 173 & 4.1E+04 & 210 & 62.02 & 1.7E+06 & 38 & 0.99 & 108.48 & 1.3E+07 & 1.8E+04 & 35.53 & 15.94 & 14.84\\
UGC11454$^1$ & 1.1 & 0.82 & 13330 & 8.6E+06 & 13 & 125.16 & 1.64E+06 & 12.49 & 1.91 & 124.43 & 1.3E+08 & 1.6 & 98.37 & 26.73 & 25.66 \\
UGC11616$^1$ & 0.69 & 3.99 & 846 & 5.3E+06 & 9.7 & 80.86 & 3.1E+06 & 6.7 & 0.89 & 86.31 & 6.1E+07 & 1.4 & 68.54 & 24.59 & 23.51\\
UGC11748$^1$ & 0.74 & 3.62 & 57534 & 1.17E+08 & 6.47 & 2775 & 3.05E+07 & 6.57 & 1.33 & 2752 & 1.01E+09 & 1.12 & 2627 & 34.18 & 33.12\\
UGC11819$^1$ & 1.43 & 2.01 & 19.23 & 3.18E+06 & 7.39 & \textbf{3.54} & 4.41E+06 & 3.82 & 0.35 & \textbf{2.05} & 2.34E+07 & 1.43 & \textbf{3.09} & 13.74 & 12.64\\
\hline
\end{tabular}}
\end{table*}
\section{Relevance of dark matter models}
\label{stat_ranking}
\subsection{Comparison of dark matter models}
Since the dark matter models are not submodels of each other, comparing them by exact statistical tests, such as the likelihood ratio test, is not conclusive. Instead we applied the Akaike information criterion $\textrm{AIC}=2N+\chi^2$ \citep{Akaike1974} that is usually employed in the literature \citep[e.g.,][]{Chemin2011}.
The AIC is a measure of the fit of a given model to the statistical data based on both the residual sum of squares $\chi^2$ and the number of parameters $N$. A lower AIC value represents a better model performance. In Table \ref{table:gx_vrot_stat} the columns $\textrm{AIC}_{\textrm{NFW}}$, $\textrm{AIC}_{\textrm{E}}$, and $\textrm{AIC}_{\textrm{P}}$ are the calculated $\textrm{AIC}$ values for the baryonic + NFW, the baryonic + Einasto, and the baryonic + PSE model, respectively, for those galaxies for which at least one model fits the dataset within the $1\sigma$ confidence level. The lowest values are marked in boldface.
A model performs worse than the best-fit model when the difference $\Delta$ between the corresponding $\textrm{AIC}$ values is larger. These differences are presented in the last three columns of Table \ref{table:gx_vrot_stat}. We adopt the following thresholds: $\Delta\leq 2$ indicates approximately equal performance, $4\leq\Delta\leq 7$ a measurable difference between the fits of the two models, while $\Delta>10$ clearly favors one model over the other. We stress, however, that the imposition of these limits is somewhat arbitrary. For related considerations see Chapter $2$ of \citet{Burnham2003}.
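The ranking procedure can be sketched as follows; the $\chi^2$ values are taken from the ESO215G39 row of Table \ref{table:hsb_vrot}, counting the halo parameters listed there ($N=2$ for NFW and PSE, $N=3$ for Einasto):

```python
# AIC = 2N + chi^2 (Akaike 1974); lower is better.
def aic(n_params, chi2):
    return 2 * n_params + chi2

# chi^2 values for ESO215G39 from the HSB table: NFW 5.76, Einasto 7.03,
# PSE 6.59.
models = {"NFW": aic(2, 5.76), "Einasto": aic(3, 7.03), "PSE": aic(2, 6.59)}
best = min(models.values())
delta = {name: a - best for name, a in models.items()}
# Delta <= 2: comparable performance; 4 <= Delta <= 7: measurable
# difference; Delta > 10: one model clearly favored.
```

For this galaxy the NFW model has the lowest AIC, with the PSE model within $\Delta\leq 2$ of it.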
For $7$ galaxies ($2$ HSB and $5$ LSB) neither the pure baryonic nor any of the dark matter models fits the dataset within the $1\sigma$ confidence level. To assess the performances of the best-fit models for the other $23$ galaxies, the $\Delta$-values are listed in the last three columns of Table \ref{table:gx_vrot_stat}, and their interpretation is summarized in Table \ref{table:gx_vrot_disruletable}. The best-fit PSE model is favored in $\Sigma_{++}+\Sigma_{+}=23$ cases ($14$ HSB and $9$ LSB) and is not ruled out in any case. The best-fit NFW model is favored in $\Sigma_{++}+\Sigma_{+}=13$ cases ($10$ HSB and $3$ LSB), and is ruled out in $\Sigma_{--}=2$ cases ($1$ HSB and $1$ LSB). The best-fit Einasto model is favored in $\Sigma_{++}+\Sigma_{+}=10$ cases ($3$ HSB and $7$ LSB), and is ruled out in $\Sigma_{--}=1$ case ($1$ HSB).
\subsection{Pure baryonic model with fitted M/L ratios}
The rotational curves cannot be explained by pure baryonic matter with the $M/L$ ratios derived in Section \ref{estimated_ml}. However, the estimate of $M/L$ depends on the adopted stellar population model and IMF. Previously, we used the self-consistent model of \citet{Bell2003}.
Now we compare the pure baryonic model with fitted $M/L$ to the models that contain baryonic matter with fixed $M/L$ and dark matter. For this purpose we employ lower and upper limits on the $M/L$ ratios based on the nine different models of \citet{Bell2001} and the corresponding CMLRs, tabulated for different color indices, and for cosmological redshifts $z=0.008$ and $z=0.02$. We used the CMLRs evaluated at $z=0.02$, as this redshift applies to our galaxy sample. In all cases the CMLR based on the Bruzual \& Charlot population synthesis model with a modified Salpeter IMF gave the lower limits, and the CMLR based on the PEGASE model with an $x=-1.85$ IMF gave the upper limits on the $M/L$ ratios. These $M/L$ ratios (separately for the bulge and the disk) are summarized in Table \ref{table:comp_ml_ratios}, in columns 5 to 8. The best-fit values of the $M/L$ ratios are presented in columns $9$ and $10$ of the same table. In Table \ref{table:comp_ml_ratios} we also present the $M/L$ ratios of the baryonic+dark matter models (in columns $3$ and $4$) in order to compare the best-fit $M/L$ ratios of the pure baryonic model to them.
Columns $9$ and $10$ of Table \ref{table:comp_ml_ratios} show the best-fit $M/L$ ratios for the bulge and the disk, respectively. In Fig. \ref{fig:hsblsb_vrotfitml} we present the pure baryonic model fits that match the rotational curve data within the $1\sigma$ confidence level. We found such good fits for ten HSB and two LSB galaxies. Although the fit is within $1\sigma$ for the HSB galaxies ESO323G25, ESO374G02, ESO445G19, and ESO446G01, the pure baryonic model still does not reproduce the plateau as well as the best-fit dark matter models, which are also indicated in Fig. \ref{fig:hsblsb_vrotfitml}. This appears most clearly for ESO446G01.
Columns $2$ and $6$ of Table \ref{table:gx_vrot_stat} show AIC$_B$ and the $\Delta$-value (i.e., the difference of AIC$_B$ from the best-fit AIC), respectively. The second column of Table \ref{table:gx_vrot_disruletable} indicates the relevance (based on the AIC) of the pure baryonic model, which is favored in $\Sigma_{++}+\Sigma_{+}=2$ cases ($1$ HSB and $1$ LSB) and ruled out in $\Sigma_{--}=13$ cases ($7$ HSB and $6$ LSB).
\begin{table*}
\centering
\caption{Best-fit (BF) dark matter models in column 2, where N/E/P denote the Navarro--Frenk--White/Einasto/pseudo-isothermal sphere models, and the estimated $M/L$ ratios of the baryonic+dark matter models ($\Upsilon_\mathrm{b}$ and $\Upsilon_\mathrm{d}$ for the bulge and disk, respectively) in columns $3$ and $4$. Columns $5$ to $8$ show the lower ($\Upsilon_\mathrm{b,min}$, $\Upsilon_\mathrm{d,min}$) and upper limits ($\Upsilon_\mathrm{b,max}$, $\Upsilon_\mathrm{d,max}$) of the $M/L$ ratios for the bulge and disk, respectively, derived from the CMLR relations given in \citet{Bell2001}. The best-fit values of the bulge and disk $M/L$ ratios ($\Upsilon_\mathrm{b,bf}$, $\Upsilon_\mathrm{d,bf}$), obtained from fitting the pure baryonic model to the rotational curves, are given in columns $9$ and $10$, respectively. The $\chi^2$ and the $1\sigma$ confidence intervals are given in columns $11$ and $12$. When the fit is within the $1\sigma$ confidence level, the $\chi^2$-values are indicated in boldface.}
\label{table:comp_ml_ratios}
\begin{tabular}{lccc|cccccccc}
\hline
\hline
ID & \multicolumn{3}{c|}{Dark matter} & \multicolumn{8}{c}{Baryonic}\\
\hline
& BF & $\Upsilon_\mathrm{b}$ & $\Upsilon_\mathrm{d}$ & $\Upsilon_\mathrm{b,min}$ & $\Upsilon_\mathrm{b,max}$ & $\Upsilon_\mathrm{d,min}$& $\Upsilon_\mathrm{d,max}$& $\Upsilon_\mathrm{b,bf}$ & $\Upsilon_\mathrm{d,bf}$ & $\chi^2_\mathrm{B}$ & $1\sigma$ \\
\hline
ESO215G39 & E & 1.14 & 1.84 & 0.88 & 3.35 & 1.43 & 5.42 & 3.35 & 5.42 & 203.39 & 36.3\\
ESO322G76 & E & 0.46 & 0.62 & 0.32 & 1.43 & 0.43 & 1.95 & 1.43 & 1.95 & 57.9 & 56.3\\
ESO322G77 & E & 0.93 & 1.96 & 0.66 & 2.61 & 1.39 & 5.49 & 2.6 & 3.7 & \textbf{4.72} & 14.84\\
ESO322G82 & B & 1.28 & 1.69 & 0.92 & 3.43 & 1.22 & 4.54 & 2.6 & 2.4 & 75.47 & 39.48\\
ESO323G25 & P & 1.47 & 2.05 & 1.06 & 3.85 & 1.48 & 5.38 & 3.7 & 2.8 & \textbf{52.8} & 68.83\\
ESO374G02 & N & 0.84 & 1.15 & 0.59 & 2.4 & 0.81 & 3.28 & 1.1 & 2.9 & \textbf{49.84} & 88.58\\
ESO375G12 & P & 1.25 & 1.58 & 0.99 & 3.7 & 1.25 & 4.65 & 2.1 & 2.2 & \textbf{12.75} & 53.15\\
ESO376G02 & P & 0.93 & 1.48 & 0.69 & 2.74 & 1.1 & 4.34 & 1.1 & 2.9 & 190.42 & 63.61\\
ESO383G02 & N & 1.08 & 1.56 & 0.77 & 2.96 & 1.12 & 4.3 & 2.6 & 3.5 & \textbf{7.06} & 49.64\\
ESO383G88 & P & 1.43 & 2.32 & 1.03 & 3.77 & 1.68 & 6.11 & 3.7 & 3 & 87.33 & 53.15\\
ESO445G19 & E & 0.92 & 1.47 & 0.65 & 2.58 & 1.04 & 4.13 & 2.4 & 2.7 & \textbf{5.63} & 40.53\\
ESO446G01 & P & 0.78 & 1.05 & 0.55 & 2.25 & 0.74 & 3.04 & 1.6 & 3 & \textbf{25.77} & 46.85\\
ESO502G02 & P & 1.72 & 2.64 & 1.25 & 4.4 & 1.92 & 6.76 & 1.55 & 2.78 & \textbf{32.14} & 43.7\\
ESO509G80 & P & 1.37 & 1.94 & 0.98 & 3.62 & 1.4 & 5.14 & 3.2 & 4.9 & \textbf{19.16} & 39.48\\
ESO569G17 & N & 0.74 & 1.15 & 0.52 & 2.16 & 0.81 & 3.34 & 0.64 & 1.8 & \textbf{8.84} & 21.36\\
\hline
F561-1 & E & 0.85 & 1.57 & 0.77 & 3.19 & 1.43 & 5.91 & 1.1 & 3.2 & 75.53 & 4.72\\
F563-1 & P & 1.55 & - & 1.25 & 4.94 & - & - & 2.7 & - & 10.77 & 8.18\\
F568-3 & E & 1.21 & 2.68 & 1.02 & 4.12 & 2.27 & 9.15 & 2.8 & 5.3 & \textbf{8.93} & 9.3\\
F579-V1 & P & 2.03 & 3.08 & 1.54 & 5.99 & 2.34 & 9.08 & 5.99 & 9.07 & 39.87 & 12.64\\
F583-1 & P & 0.81 & - & 0.74 & 3.08 & - & - & 3.08 & - & 291.53 & 15.94\\
F730-V1 & P & 1.2 & 2.8 & 0.31 & 4.27 & 0.71 & 9.96 & 4.27 & 7.4 & 22.3 & 5.89\\
UGC128 & P & 1.37 & 4.89 & 1.13 & 4.51 & 4.03 & 16.1 & 1.13 & 11.64 & 17.6 & 10.42\\
UGC1230 & P & 1.12 & 5.6 & 0.96 & 3.9 & 4.8 & 19.49 & 0.96 & 11.46 & 62.57 & 9.3\\
UGC5750 & P & 1.16 & 3.51 & 0.99 & 4 & 2.99 & 12.12 & 0.99 & 3.59 & 14 & 9.3\\
UGC6614 & P & 2.05 & 3.73 & 0.31 & 6.33 & 0.56 & 11.5 & 5.31 & 11.5 & 182.6 & 12.64\\
UGC10310 & P & 0.83 & 3.22 & 0.3 & 3.26 & 1.18 & 12.64 & 0.3 & 1.4 & 57.69 & 15.94\\
UGC11454 & P & 1.1 & 0.82 & 0.31 & 8.85 & 0.35 & 9.94 & 1.1 & 0.82 & 142.59 & 26.73\\
UGC11616 & P & 0.69 & 3.99 & 0.31 & 5.54 & 0.52 & 9.33 & 0.88 & 0.63 & 90.15 & 24.59\\
UGC11748 & P & 0.74 & 3.62 & 0.3 & 3 & 1.48 & 14.63 & 3 & 14.63 & 12934 & 34.18\\
UGC11819 & P & 1.43 & 2.01 & 0.31 & 4.86 & 0.43 & 6.84 & 1.63 & 2.22 & \textbf{4.52} & 13.74\\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Akaike information criterion of the pure baryonic model with the $M/L$ ratio fit (AIC$_\mathrm{B}$), and of the baryonic model with estimated $M/L$ ratio combined with the NFW (AIC$_\mathrm{NFW}$), the Einasto (AIC$_\mathrm{E}$), and the PSE dark matter models (AIC$_\mathrm{P}$). The numbers of fitted parameters are $2$, $2$, $3$, and $2$, respectively. We indicate the smallest AIC number for each galaxy in boldface, and give the $\Delta$-values.}
\label{table:gx_vrot_stat}
\begin{tabular}{lcccccccc}
\hline
\hline
ID & AIC$_\mathrm{B}$ & AIC$_\mathrm{NFW}$ & AIC$_\mathrm{E}$ & AIC$_\mathrm{P}$ & $\Delta_\mathrm{B}$ & $\Delta_\mathrm{NFW}$ & $\Delta_\mathrm{E}$ & $\Delta_\mathrm{P}$ \\
\hline
ESO215G39 & 207.39 & $\mathbf{9.76}$ & 13.03 & 10.59 & 197.63 & 0 & 3.27 & 0.83\\
ESO322G76 & 61.90 &$\mathbf{19.60}$ & 19.62 & 20.29 & 42.3 & 0 & 0.02 & 0.69\\
ESO322G77 & 8.72 & $\mathbf{6.51}$ & 9.09 & 7.23 & 2.21 & 0 & 2.58 & 0.72\\
ESO323G25 & 56.80 & 35.28 & 35.01 & $\mathbf{30.01}$ & 26.79 & 5.27 & 5 & 0\\
ESO374G02 & 53.84 & $\mathbf{41.4}$ & 46.9 & 41.6 & 12.44 & 0 & 5.5 & 0.2\\
ESO375G12 & 16.75 & 12.4 & 20 & $\mathbf{11.95}$ & 4.8 & 0.45 & 8.05 & 0.0\\
ESO383G02 & 11.06 & $\mathbf{9.63}$ & 11.84 & 10.69 & 1.13 & 0 & 1.91 & 0.76\\
ESO383G88 & 91.33 & 61.2 & 73.2 & $\mathbf{57}$ & 34.33 & 4.2 & 16.2 & 0\\
ESO445G19 & 9.63 & 7.56 & $\mathbf{6.33}$ & 8.26 & 3.3 & 1.23 & 0 & 1.93\\
ESO446G01 & 29.77 & 15.88 & 16.53 & $\mathbf{14.31}$ & 15.46 & 1.57 & 2.22 & 0\\
ESO502G02 & 36.14 & 36.9 & 37.8 & $\mathbf{34.1}$ & 2.04 & 2.8 & 3.7 & 0.0\\
ESO509G80 & 23.16 & 12.30 & 13.66 & $\mathbf{11.42}$ & 11.74 & 0.88 & 2.24 & 0\\
ESO569G17 & 12.84 & $\mathbf{8.64}$ & 13.82 & 9.06 & 4.2 & 0 & 5.18 & 0.42\\
\hline
F561-1 & 79.53 & 11.06 & $\mathbf{7.13}$ & 8.1 & 72.4 & 3.93 & 0 & 0.97\\
F563-1 & 14.77 & 5.39 & 6.51 & $\mathbf{4.84}$ & 9.93 & 0.55 & 1.67 & 0\\
F568-3 & 12.93 & 18.03 & $\mathbf{7.71}$ & 10.52 & 5.22 & 10.32 & 0 & 2.81\\
F579-V1 & 43.87 & 6.88 & 8.22 & $\mathbf{5.75}$ & 38.12 & 1.13 & 2.47 & 0\\
F583-1 & 295.53 & 14.82 & 6.43 & $\mathbf{4.89}$ & 290.64 & 9.93 & 1.54 & 0\\
F730-V1 & 26.30 & 9.48 & 6.59 & $\mathbf{5}$ & 21.3 & 4.48 & 1.59 & 0\\
UGC128 & 21.60 & 12.15 & 7.58 & $\mathbf{4.95}$ & 16.65 & 7.2 & 2.63 & 0\\
UGC1230 & 66.57 & 8.49 & 6.7 & $\mathbf{4.69}$ & 61.88 & 3.8 & 2.01 & 0\\
UGC5750 & 18 & 11.79 & 9.98 & $\mathbf{8.02}$ & 9.98 & 3.77 & 1.96 & 0\\
UGC11819 & 8.52 & 7.54 & 8.05 & $\mathbf{7.09}$ & 1.43 & 0.45 & 0.96 & 0\\
\hline
\end{tabular}
\end{table*}
\begin{table*}[h!]
\centering
\caption{Model performances based on the $\Delta$-values. Models marked with a plus ($0 < \Delta \leq 2$) perform comparably well, and double pluses represent the best fit ($\Delta=0$). For models marked $0$ ($2 < \Delta < 4$) the analysis is inconclusive; models marked with a minus ($4\leq \Delta \leq 10$) are disfavored, while models marked with a double minus ($10<\Delta$) are clearly ruled out. The number of cases in which the given model is favored ($\Sigma_{++}+\Sigma_{+}$) or ruled out ($\Sigma_{--}+\Sigma_{-}$) is also given.}
\label{table:gx_vrot_disruletable}
\begin{tabular}{lcccc|lcccc}
\hline
\multicolumn{5}{c|}{HSB galaxies} & \multicolumn{5}{c}{LSB galaxies}\\
\hline
ID & $\Delta_\mathrm{B}$ & $\Delta_\mathrm{NFW}$ & $\Delta_\mathrm{E}$ & $\Delta_\mathrm{P}$ & ID & $\Delta_\mathrm{B}$ & $\Delta_\mathrm{NFW}$ & $\Delta_\mathrm{E}$ & $\Delta_\mathrm{P}$\\
\hline
ESO215G39 & $--$ & $++$ & $0$ & $+$ & F561-1 & $--$ & $0$ & $++$ & $+$ \\
ESO322G76 & $--$ & $++$ & $+$ & $+$ & F563-1 & $-$ & $+$ & $+$ & $++$ \\
ESO322G77 & $0$ & $++$ & $0$ & $+$ & F568-3 & $-$ & $--$ & $++$ & $0$ \\
ESO323G25 & $--$ & $-$ & $-$ & $++$ & F579-V1 & $--$ & $+$ & $0$ & $++$ \\
ESO374G02 & $--$ & $++$ & $-$ & $+$ & F583-1 & $--$ & $-$ & $+$ & $++$ \\
ESO375G12 & $-$ & $+$ & $-$ & $++$ & F730-V1 & $--$ & $-$ & $+$ & $++$ \\
ESO383G02 & $+$ & $++$ & $+$ & $+$ & UGC128 & $--$ & $-$ & $0$ & $++$ \\
ESO383G88 & $--$ & $--$ & $--$ & $++$ & UGC1230 & $--$ & $0$ & $0$ & $++$ \\
ESO445G19 & $0$ & $+$ & $++$ & $+$ & UGC5750 & $-$ & $0$ & $+$ & $++$ \\
ESO446G01 & $--$ & $+$ & $0$ & $++$ &UGC11819 & $+$ & $+$ & $+$ & $++$ \\
ESO502G02 & $0$ & $0$ & $0$ & $++$ & \\
ESO509G80 & $--$ & $+$ & $0$ & $++$ & \\
ESO569G17 & $-$ & $++$ & $-$ & $+$ & \\
\hline
\hline
$\Sigma_{++}$ & 0 & 6 & 1 & 6 & $\Sigma_{++}$& 0 & 0 & 2 & 8\\
$\Sigma_{+}$ & 1 & 4 & 2 & 6 & $\Sigma_{+}$ & 1 & 3 & 5 & 1\\
$\Sigma_{-}$ & 2 & 1 & 4 & 0 & $\Sigma_{-}$ & 3 & 3 & 0 & 0\\
$\Sigma_{--}$ & 7 & 1 & 1 & 0 & $\Sigma_{--}$& 6 & 1 & 0 & 0\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso322g77_hat.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso323g25_hat.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso374g02_hat.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso375g12_hat.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso383g02_hat.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso445g19_hat.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso446g01_hat.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso502g02_hat.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso509g80_hat.eps}\newline
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_eso569g17_hat.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_f568_3_hat.eps}
\includegraphics[width=170pt,height=120pt]{vrotfit_fit_ugc11819_hat.eps}\newline
\caption{Best-fit rotational curves for the HSB and LSB galaxies for which the $\chi^2$ values with the fit $M/L$ ratios are within the $1\sigma$ confidence level of the fitting. The best-fit pure baryonic models are indicated by solid red curves, and the best-fit models composed of baryonic matter with fixed $M/L$ and dark matter are indicated by dashed green curves.}
\label{fig:hsblsb_vrotfitml}
\end{figure*}
\section{Summary and concluding remarks}
\label{summary}
In this paper we have assembled a database consisting of $15$ HSB and $15$ LSB galaxies that are representative of the various possible galaxy morphologies of both types. In particular, the HSB galaxy set contained spiral galaxies with various brightness profiles, while the LSB galaxy set contained both disk and dwarf galaxies. For the selected galaxies, both surface brightness density data and spectroscopic rotation curve data were available in the literature.
We explored this dataset for a comparative testing of frequently applied and well-established dark matter models (NFW, Einasto, and PSE). We investigated the compatibility of the pure baryonic model and of the baryonic plus various dark matter models with observations on the galaxy database. The mass distribution of the baryonic component of the galaxies was derived from the spatial luminosity distribution by estimating the $M/L$ ratios through color--mass-to-light relations and gas mass fractions. For our analysis we constrained the axial ratio of the galaxies based on SDSS results to $0.4<q_b<1$ and $0<q_d<0.3$.
We calculated the Akaike information criterion to characterize the goodness of the best-fit models. In the case of the pure baryonic model, the $M/L$ ratios were varied between reasonable limits when fitting the rotation curves, while in the case of the baryonic + dark matter combined models, the baryonic component was inferred using $M/L$ ratios derived from the CMLR of \citet{Bell2003} and \citet{McGaugh2014}. For 7 galaxies (2 HSB, and 5 LSB), none of the models fits the dataset within the $1\sigma$ confidence level. According to the Akaike information criterion, the pseudo-isothermal sphere model emerges as the most favored in $14$ of the remaining 23 galaxies, followed by the Navarro-Frenk-White ($6$ cases)
and the Einasto ($3$ cases) dark matter models.
The pure baryonic model with a fitted $M/L$ ratio did not provide the best model performance in any of the cases, and it was ruled out for 7 HSB and 6 LSB galaxies based on the AIC. On the other hand, the pure baryonic model fit the dataset within the $1\sigma$ confidence level for 10 HSB and 2 LSB galaxies. Of these 12 galaxies, the pure baryonic model provided a poor fit compared to the best-fit DM model in 5 cases, giving $\Delta>10$, which clearly disfavors the baryonic model.
By cross-correlating the results of the fits obtained by the two methods, we found that the following seven galaxies could not be described by any of the considered models: ESO322G82, ESO376G02, UGC6614, UGC10310, UGC11454, UGC11616, and UGC11748. The proper modeling of these galaxies requires more sophisticated descriptions. Employing massive datasets of 2D velocity fields from integral field unit surveys, such as SAMI \citep{Allen2015} and MaNGA in SDSS-IV \citep{Bundy2015}, will help to further develop the method we presented and to test it for non-axisymmetry on high-quality velocity fields with well-defined errors.
The remaining 23 galaxies from the dataset, which require a dark matter component to be modeled, might be useful in the comparative testing of spherically symmetric dark matter substitutes emerging in either alternative gravity theories or from the study of non-standard dark matter candidates, like Bose-Einstein condensates.
\begin{acknowledgements}
The authors are grateful to the anonymous referee for the valuable suggestions provided throughout the refereeing process, which have contributed to a significant improvement of the paper. EK, ZK, and L\'{A}G acknowledge the support of the Hungarian National Research, Development and Innovation Office (NKFI) in the form of the grant 123996. ZK and L\'{A}G were further supported by COST Action CA15117 ``Cosmology and Astrophysics Network for Theoretical Advances and Training Actions'' (CANTATA).
\end{acknowledgements}
\subsection{A $q$-Linear Representation}
\begin{figure}
\centering
\begin{tikzpicture}[auto,
initial text=, initial distance=5ex,
>=latex,
accepting text=,
every state/.style={minimum size=3.2em}]
\node[state, initial below, accepting] (I) at (0.000000, 0.000000) {$\mathcal{I}$};
\node[state] (e0) at (180:5) {$0$};
\node[state, accepting] (e1) at (155:5) {$1$};
\node[state, accepting] (e2) at (130:5) {$2$};
\node[state, accepting] (e3) at (105:5) {$3$};
\draw[dotted, thick] (95:5) arc (95:35:5);
\node[state, accepting] (eq2) at (25:5) {$q-2$};
\node[state, accepting] (eq1) at (0:5) {$q-1$};
\path[->] (I) edge node[rotate=0, anchor=south] {$0$} (e0);
\path[->] (I) edge node[rotate=-25, anchor=south] {$1$} (e1);
\path[->] (I) edge node[rotate=-50, anchor=south] {$2$} (e2);
\path[->] (I) edge node[rotate=-75, anchor=south] {$3$} (e3);
\path[->] (I) edge node[rotate=25, anchor=south] {$q-2$} (eq2);
\path[->] (I) edge node[rotate=0, anchor=south] {$q-1$} (eq1);
\path[->] (e0) edge[bend left] node[rotate=77.5, anchor=south] {$1$} (e1);
\path[->] (e1) edge[bend left] node[rotate=77.5, anchor=north] {$0$} (e0);
\path[->] (e1) edge[bend left] node[rotate=52.5, anchor=south] {$2$} (e2);
\path[->] (e2) edge[bend left] node[rotate=52.5, anchor=north] {$1$} (e1);
\path[->] (e2) edge[bend left] node[rotate=27.5, anchor=south] {$3$} (e3);
\path[->] (e3) edge[bend left] node[rotate=27.5, anchor=north] {$2$} (e2);
\path[->] (eq2) edge[bend left] node[rotate=-77.5, anchor=south] {$q-1$} (eq1);
\path[->] (eq1) edge[bend left] node[rotate=-77.5, anchor=north] {$q-2$} (eq2);
\end{tikzpicture}
\caption{Automaton~$\mathcal{A}$ recognizing esthetic numbers.}
\label{fig:esthetic-automaton}
\end{figure}
The language consisting of the $q$-ary digit expansions (seen as words
of digits) which are $q$-esthetic
is a regular language, because it is recognized by the
automaton~$\mathcal{A}$ in
Figure~\ref{fig:esthetic-automaton}. Therefore, the indicator sequence
of this language, i.e., the sequence whose $n$th entry is $1$ if $n$ is $q$-esthetic
and $0$ otherwise, is a $q$-automatic sequence and therefore also
$q$-regular. We call this sequence~$x(n)$.
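Before setting up the automaton machinery, the defining condition can be tested directly on the digit expansion; a small sketch (the convention for $n=0$ is our choice):

```python
def is_esthetic(n, q):
    """True if the q-ary expansion of n has all adjacent digits
    differing by exactly 1; single-digit n >= 1 are esthetic."""
    if n == 0:
        return False  # convention: the expansion "0" ends in the
                      # non-accepting state 0 of the automaton
    digits = []
    while n > 0:
        digits.append(n % q)
        n //= q
    return all(abs(a - b) == 1 for a, b in zip(digits, digits[1:]))
```

For example, $6=(12)_4$ is $4$-esthetic while $5=(11)_4$ is not; for $q=2$ the esthetic numbers are exactly the Lichtenberg numbers $1, 2, 5, 10, 21, 42, \dots$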
Let $A_0$, \dots, $A_{q-1}$ be the transition matrices of the
automaton~$\mathcal{A}$, i.e., $A_r$ is the adjacency matrix of the
directed graph induced by a transition with digit~$r$.
To make this more explicit, we have
the following $(q+1)$-dimensional square
matrices: Each row and column corresponds to the states~$0$, $1$,
\dots, $q-1$, $\mathcal{I}$. In matrix~$A_r$, the only non-zero entries
are in column~$r\in\set{0,1,\dots,q-1}$, namely $1$ in the rows~$r-1$ and $r+1$ (if
available) and in row~$\mathcal{I}$ as there are transitions from
these states to state~$r$ in the automaton~$\mathcal{A}$.
Let us make this more concrete by considering $q=4$. We obtain the matrices
\begin{align*}
A_0 &=
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0
\end{pmatrix},
&
A_1 &=
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0
\end{pmatrix},
\\
A_2 &=
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 0
\end{pmatrix},
&
A_3 &=
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0
\end{pmatrix}.
\end{align*}
We are almost at a $q$-linear representation of our sequence; we still
need vectors on both sides of the matrix products. We have
\begin{equation*}
x(n) = e_{q+1}\, A_{r_0} \cdots A_{r_{\ell-1}} v(0)
\end{equation*}
for $r_{\ell-1} \dots r_0$ being the $q$-ary expansion of~$n$ and vectors
$e_{q+1}=\begin{pmatrix}0& \dotsc& 0&1\end{pmatrix}$ and
$v(0)=\begin{pmatrix}0&1& \dotsc& 1\end{pmatrix}^\top$.
As $A_0 v(0)=0\neq v(0)$, this is not a linear representation of a regular
sequence. Thus we cannot use Theorem~\ref{theorem:simple}, but need to use
Theorem~\ref{theorem:contribution-of-eigenspace}. However, the difference is
slight: we simply cannot omit the contributions of the constant vector $Kv(0)$.
It will turn out, though, that the joint spectral radius is $1$, so this
contribution will be absorbed by the error term anyway.
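As a sanity check, the matrices $A_r$ can be built from the description above and the product formula compared against the direct digit test; a sketch under the stated indexing (states $0,\dots,q-1$ followed by $\mathcal{I}$ at index $q$):

```python
def transition_matrices(q):
    """A_r: (q+1)x(q+1) adjacency matrices of the esthetic automaton.
    In A_r, column r has 1s in rows r-1, r+1 (when these are valid
    states) and in the I-row with index q."""
    mats = []
    for r in range(q):
        A = [[0] * (q + 1) for _ in range(q + 1)]
        for i in (r - 1, r + 1):
            if 0 <= i < q:
                A[i][r] = 1
        A[q][r] = 1
        mats.append(A)
    return mats

def x(n, q):
    """Indicator x(n) via the linear representation
    e_{q+1} A_{r_0} ... A_{r_{l-1}} v(0), digits read LSD first."""
    mats = transition_matrices(q)
    row = [0] * q + [1]                      # e_{q+1}
    while n > 0:
        A = mats[n % q]
        row = [sum(row[i] * A[i][j] for i in range(q + 1))
               for j in range(q + 1)]
        n //= q
    v0 = [0] + [1] * q                       # v(0): all states but 0 accept
    return sum(a * b for a, b in zip(row, v0))
```

For $q=4$ and $n=9=(21)_4$ this gives $x(9)=1$, as expected.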
To see that the above holds, we have two different interpretations:
The first is that the row vector
\begin{equation*}
w(n) = e_{q+1}\, A_{r_0} \cdots A_{r_{\ell-1}}
\end{equation*}
is the unit vector corresponding to the most significant digit
of the $q$-ary expansion of~$n$ or, in view of the
automaton~$\mathcal{A}$, corresponding to the final state.
Note that we read the digit expansion from the least significant digit
to the most significant one
(although it would be possible the other way round as well).
We have $w(0)=e_{q+1}$
which corresponds to the empty word and
being in the initial state~$\mathcal{I}$ in the automaton.
The vector~$v(0)$ corresponds to the fact that
all states of~$\mathcal{A}$ except~$0$ are accepting.
The other interpretation is: The $r$th component of the column vector
\begin{equation*}
v(n) = A_{r_0} \cdots A_{r_{\ell-1}} v(0)
\end{equation*}
has the following two meanings:
\begin{itemize}
\item In the automaton~$\mathcal{A}$, we start in state $r$ and then
read the digit expansion of $n$. The $r$th component is then the indicator
function whether we remain esthetic, i.e., end in an accepting
state.
\item To a word ending with $r$ we append the digit expansion of
$n$. The $r$th component is then the indicator function whether the result
is an esthetic word.
\end{itemize}
At first glance, our problem here seems to be a special case of the
transducers studied in Section~\ref{sec:transducer}. However, the
automaton~$\mathcal{A}$ is not complete. Adding a sink to obtain a
formally complete automaton, though, adds an eigenvalue $q$ and thus
a much larger dominant asymptotic term, which would then be multiplied
by~$0$. Therefore, the results
of~\cite{Heuberger-Kropf-Prodinger:2015:output} do not apply to this
case here.
\subsection{Full Asymptotics}
We now formulate our main result for the number of esthetic numbers
smaller than a given integer~$N$. We denote this count by
\begin{equation*}
X(N) = \sum_{0 \le n < N} x(n)
\end{equation*}
and have the following corollary.
\begin{corollary}
\label{corollary:esthetic:asy}
Fix an integer~$q\geq2$.
Then the number~$X(N)$ of $q$-esthetic numbers smaller than $N$ is
\begin{multline}\label{eq:esthetic:asy-main}
X(N) = \sum_{j\in\set{1,2,\dots,\ceil{\frac{q-2}{3}}}}
N^{\log_q (2\cos(j\pi/(q+1)))} \Phi_{j}(2\fractional{\log_{q^2} N}) \\
+ \Oh[\big]{(\log N)^{\iverson{q \equiv -1 \tpmod 3}}}
\end{multline}
with $2$-periodic continuous functions~$\Phi_{j}$.
Moreover, we can effectively compute the Fourier coefficients of
each~$\Phi_{j}$ (as explained in Part~\ref{part:numerical}).
If $q$ is even, then the functions $\Phi_{j}$ are actually $1$-periodic.
If $q$ is odd, then the functions $\Phi_j$ for even $j$ vanish.
\end{corollary}
If $q=2$, the corollary merely gives $X(N)=\Oh{\log N}$.
However, for each length, the only word of digits satisfying the
esthetic number condition has alternating digits $0$ and $1$,
starting with~$1$ at its most significant digit. The
corresponding numbers~$n$ form the so-called
Lichtenberg sequence~\oeis{A000975}.
Returning to a general~$q$: for the asymptotics,
the main quantities influencing the growth are the
eigenvalues of the matrix~$C = A_0+\dots+A_{q-1}$. Continuing our
example $q=4$ above, this matrix is
\begin{equation*}
C = A_0 + A_1 + A_2 + A_3 =
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0
\end{pmatrix},
\end{equation*}
and its eigenvalues are
$\pm 2\cos(\frac{\pi}{5})=\pm \frac12\bigl(\sqrt{5} + 1\bigr) = \pm1.618\dots$,
$\pm 2\cos(\frac{2\pi}{5})=\pm \frac12\bigl(\sqrt{5} - 1\bigr) = \pm0.618\dots$
and $0$, all with algebraic and geometric multiplicity $1$. Consequently, the main term grows like
$N^{\log_4(\sqrt{5} + 1) - \frac12}=N^{0.347\dots}$; see
Figure~\ref{fig:fluct-esthetic}.
The first few Fourier coefficients are shown in Table~\ref{table:esthetic:fourier}.
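These eigenvalues can be confirmed numerically: the last column of $C$ vanishes, so apart from an extra eigenvalue $0$ the spectrum is that of the path graph on the states $0,\dots,q-1$, with the classical eigenvectors $v_j=\sin(jk\pi/(q+1))$. A sketch (the extension of the eigenvector to the $\mathcal{I}$-component is our own computation):

```python
import math

def eigen_matrix(q):
    """C = A_0 + ... + A_{q-1}: path-graph adjacency on states
    0..q-1, plus an all-ones row for the initial state I; the last
    column is zero, contributing the extra eigenvalue 0."""
    C = [[0] * (q + 1) for _ in range(q + 1)]
    for i in range(q - 1):
        C[i][i + 1] = C[i + 1][i] = 1
    for j in range(q):
        C[q][j] = 1
    return C

def check_eigenpair(q, k, tol=1e-12):
    """Verify C v = lam v for lam = 2 cos(k pi/(q+1)) with the
    path-graph eigenvector v_j = sin(j k pi/(q+1)), extended by
    v_I = (sum_j v_j)/lam (valid whenever lam != 0)."""
    C = eigen_matrix(q)
    theta = k * math.pi / (q + 1)
    lam = 2 * math.cos(theta)
    v = [math.sin(j * theta) for j in range(1, q + 1)]
    v.append(sum(v) / lam)
    Cv = [sum(C[i][j] * v[j] for j in range(q + 1)) for i in range(q + 1)]
    return max(abs(Cv[i] - lam * v[i]) for i in range(q + 1)) < tol
```

For $q=4$ this reproduces $2\cos(\pi/5)=\frac12(\sqrt5+1)$ and the growth exponent $\log_4(2\cos(\pi/5))=0.347\dots$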
\begin{figure}
\centering
\includegraphics{esthetic.pdf}
\caption{Fluctuation in the main term of the asymptotic expansion of $X(N)$
for $q=4$.
The figure shows $\f{\Phi_1}{u}$ (red) approximated by
its trigonometric polynomial of degree~$1999$ as well as
$X(4^u) / N^{u(\log_4(\sqrt{5} + 1) - \frac12)}$ (blue).}
\label{fig:fluct-esthetic}
\end{figure}
\begin{table}
\centering
\begin{equation*}\footnotesize
\begin{array}{r|l}
\multicolumn{1}{c|}{\ell} &
\multicolumn{1}{c}{\varphi_{1\ell}} \\
\hline
0&\phantom{-}4.886821584515\\
1&\phantom{-}0.036565359077 - 0.012421753685i\\
2&\phantom{-}0.0131103199420 - 0.017152133508i\\
3&-0.0023895069366 - 0.0506880727105i\\
4&-0.017328669452 + 0.025036392542i\\
5&\phantom{-}0.011186380630 - 0.0066357472861i\\
6&\phantom{-}0.0086354015002 + 0.018593736873i\\
7&-0.014899262928 + 0.0297436287202i\\
8&-0.003867454968 + 0.0064534688733i\\
9&\phantom{-}0.0033747695643 + 0.006159612843i\\
10&-0.002149675882 + 0.006474570022i
\end{array}
\end{equation*}
\caption{Fourier coefficients of~$\Phi_1$ for $q=4$
(Corollary~\ref{corollary:esthetic:asy}). All stated digits are
correct; see also Part~\ref{part:numerical}.}
\label{table:esthetic:fourier}
\end{table}
\subsection{Eigenvectors}
\input{esthetic-eigenvectors}
\subsection{Proof of the Asymptotic Result}
\begin{proof}[Proof of Corollary~\ref{corollary:esthetic:asy}]
We work out the conditions and parameters for using
Theorem~\ref{theorem:simple}.
\proofparagraph{Joint Spectral Radius}
As each of the matrices $A_0$, \dots, $A_{q-1}$ has maximum
absolute row sum norm~$1$, the joint spectral radius of
these matrices is bounded by~$1$.
Let $r\in\set{1,\dots,q-1}$. Then any product with alternating
factors $A_{r-1}$ and $A_r$, i.e., a finite product
$A_{r-1}A_rA_{r-1}\cdots$, has absolute row sum norm at least~$1$ as
the word $(r-1)r(r-1)\dots$ is $q$-esthetic. Therefore the joint
spectral radius of $A_{r-1}$ and $A_r$ is at
least~$1$. Consequently, the joint spectral radius of $A_0$, \dots,
$A_{q-1}$ equals~$1$.
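Both halves of this norm argument are easy to check numerically; a sketch (rebuilding the transition matrices under the indexing convention used above, states $0,\dots,q-1$ followed by $\mathcal{I}$):

```python
def transition_matrices(q):
    """A_r: (q+1)x(q+1) matrices; in A_r, column r has 1s in rows
    r-1, r+1 (when these are valid states) and in the I-row q."""
    mats = []
    for r in range(q):
        A = [[0] * (q + 1) for _ in range(q + 1)]
        for i in (r - 1, r + 1):
            if 0 <= i < q:
                A[i][r] = 1
        A[q][r] = 1
        mats.append(A)
    return mats

def inf_norm(A):
    """Maximum absolute row sum norm."""
    return max(sum(abs(v) for v in row) for row in A)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

Each $\norm{A_r}_\infty$ equals $1$, so every product has norm at most $1$; the alternating products $A_{r-1}A_rA_{r-1}\cdots$ attain this bound, consistent with a joint spectral radius of exactly $1$.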
\proofparagraph{Asymptotics}
We apply our Theorem~\ref{theorem:simple}.
We have $\lambda_j=-\lambda_{q+1-j}$, so we combine our approach
with Proposition~\ref{proposition:symmetric-eigenvalues}. Moreover,
we have $\lambda_j>1$ iff $\frac{j}{q+1}<\frac{1}{3}$ iff
$j\leq\ceil{\frac{q-2}{3}}$.
This results
in~\eqref{eq:esthetic:asy-main}.
We now assume that $q$ is even. In this case, we still have to show
that the functions $\Phi_j$ are actually $1$-periodic. We now need to
use Theorem~\ref{theorem:contribution-of-eigenspace}. Let $w_1$, $w_2$,
\ldots, $w_{q-1}$, $w_q$ be the rows of $T$ where the order is chosen in such
a way that
\begin{equation*}
J=\diag\Bigl(2\cos\Bigl(\frac{\pi}{q+1}\Bigr), \ldots,
2\cos\Bigl(\frac{q\pi}{q+1}\Bigr), 0\Bigr).
\end{equation*}
We write $e_{q+1}=\sum_{k=1}^q c_k w_k$ for suitable $c_k\in\mathbb{R}$. Setting
$c\coloneqq \begin{pmatrix}c_1&c_2&\cdots&c_q\end{pmatrix}$, this means that
$e_{q+1}=cT$, or equivalently, $c=e_{q+1} T^{-1}$. The columns of $T^{-1}$ are
the right eigenvectors of $C$ described in
Proposition~\ref{proposition:esthetic-eigenvectors}. Then
Proposition~\ref{proposition:esthetic-eigenvectors}~(\ref{enu:esthetic-eigenvectors:neq0}) implies that $c_k=0$ for
even $k$ with $1\le k\le q$. This means that all fluctuations corresponding
to eigenvalues $2\cos(k\pi/(q+1))$ for even $k$ with $1\le k\le q$ are
multiplied by $0$ and do not contribute to the result.
As $\abs{\cos(\frac{q+1-k}{q+1}\pi)}=\abs{\cos(\frac{k}{q+1}\pi)}$, but
$q+1-k$ and $k$ have different parities, there is no need to use
Proposition~\ref{proposition:symmetric-eigenvalues} and all fluctuations are
$1$-periodic.
The same argument can be used for the case of odd $q$, but in this case,
$q+1-k$ and $k$ have the same parity. So
Proposition~\ref{proposition:symmetric-eigenvalues}
is used for odd $k$, and fluctuations to both eigenvalues $2\cos(k\pi/(q+1))$
and $2\cos((q+1-k)\pi/(q+1))$ vanish for even~$k$.
\proofparagraph{Fourier Coefficients}
We can compute the Fourier coefficients according to
Theorem~\ref{theorem:simple} and
Proposition~\ref{proposition:symmetric-eigenvalues};
see also Part~\ref{part:numerical}.
\end{proof}
\part{Computational Aspects}\label{part:numerical}
The basic idea
for computing the Fourier coefficients is to use the functional equation
in Theorem~\ref{theorem:Dirichlet-series}.
This part describes in detail how this is done.
We basically follow an approach found in Grabner and Hwang~\cite{Grabner-Hwang:2005:digit}
and Grabner and Heuberger~\cite{Grabner-Heuberger:2006:Number-Optimal}, but
provide error bounds.
An actual implementation is also available;
SageMath~\cite{SageMath:2018:8.3} code can be found at
\url{https://gitlab.com/dakrenn/regular-sequence-fluctuations}\,.
We use the Arb library~\cite{Johansson:2017:arb} (more precisely, its
SageMath bindings) for ball arithmetic
which keeps track of rounding errors such that we can be sure about the precision and accuracy of our results.
We use the results of this part to compute Fourier coefficients for
our examples, in particular for esthetic numbers
(Section~\ref{sec:esthetic-numbers}) and Pascal's rhombus
(Section~\ref{sec:pascal}).
\section{Strategy for Computing the Fourier Coefficients}
\label{section:strategy-for-computing}
The computation of the Fourier coefficients relies on the evaluation
of Dirichlet series at certain points~$s=s_0$. It turns out to be
numerically preferable to split up the sum as
\begin{equation*}
\mathcal{F}_{1}(s_0) = \sum_{1 \le n < n_0} n^{-s_0} f(n) + \mathcal{F}_{n_0}(s_0)
\end{equation*}
for some suitable~$n_0$ (see Section~\ref{section:choice-parameters}),
compute the sum of the first $n_0-1$ summands directly and evaluate
$\mathcal{F}_{n_0}(s_0)$ as it is described in the following.
For actually computing the Fourier coefficients, we use a formulation in
terms of a residue; for instance,
see~\eqref{eq:Fourier-coefficient:simple-as-residue} where this is
formulated explicitly in the set-up of Theorem~\ref{theorem:simple}.
As said, we will make use of the functional
equation~\eqref{eq:analytic-continuation} for the matrix-valued
Dirichlet series~$\mathcal{F}_{n_0}(s)$ with its right-hand side, the
matrix-valued Dirichlet series~$\mathcal{G}_{n_0}(s)$.
Let us make this explicit for a simple eigenvalue $\lambda\neq 1$ of~$C$ and
a corresponding eigenvector~$w$. Then
$w (I - q^{-s} C) = w (1 - q^{-s}\lambda)$
and~\eqref{eq:analytic-continuation} can be rewritten as
\begin{equation*}
w\, \mathcal{F}_{1}(s) = \frac{1}{1 - q^{-s}\lambda} w\, \mathcal{G}_{1}(s).
\end{equation*}
Thus, $w\, \mathcal{F}_{1}(s)$ has simple poles at~$s=\log_q\lambda+\chi_\ell$
for all $\ell\in\mathbb{Z}$, where $\chi_\ell=\frac{2\ell\pi i}{\log q}$.
By~\eqref{eq:Fourier:F-s-principal-part} and~\eqref{eq:Fourier:fluctuation-as-Fourier-series} of Theorem~\ref{theorem:use-Mellin--Perron}
(with $\gamma=\log_q\lambda$ and $m=1$), the $\ell$th
Fourier coefficient is given by the residue
\begin{equation*}
\Res[\Big]{\frac{w\, \mathcal{F}_{1}(s)}{s}}{s=\log_q \lambda+\chi_\ell}
= w\, \mathcal{G}_{1}(\log_q\lambda + \chi_\ell) \frac{1}{(\log q)(\log_q\lambda + \chi_\ell)}.
\end{equation*}
Note that $\log q$ is the
derivative of $1 - q^{-s}\lambda$ with respect to~$s$
evaluated at the pole~$s=\log_q\lambda$.
By~\eqref{eq:Dirichlet-recursion}, $\mathcal{G}_{n_0}(\log_q\lambda+\chi_\ell)$ is expressed in
terms of an infinite sum containing $\mathcal{F}_{n_0}(\log_q\lambda+\chi_\ell+k)$ for $k\ge1$.
We truncate this sum and bound the error; this is the aim of
Section~\ref{section:bounding-error} and in particular
Lemma~\ref{lemma:approximation-error}.
We can iterate the above idea for the
shifted Dirichlet series~$\mathcal{F}_{n_0}(\log_q\lambda+\chi_\ell+k)$
which leads to a recursive evaluation scheme.
Note that once we have computed $\mathcal{G}_{n_0}(\log_q\lambda +\chi_\ell+k)$,
we get $\mathcal{F}_{n_0}(\log_q \lambda +\chi_\ell+k)$ by solving a
system of linear equations.
\section{Details on the Numerical Computation}
\label{section:computation-details}
\subsection{Bounding the Error}
\label{section:bounding-error}
We need to estimate the approximation error which arises if the infinite sum
over $k\ge 1$ in~\eqref{eq:Dirichlet-recursion} is replaced by a finite sum.
It is clear that for large $\Re s$ and $n_0$, the value $\mathcal{F}_{n_0}(s)$ will
approximately be of the size of its first summand~$n_0^{-s} f(n_0)$. In view
of $\norm{f(n_0)}=\Oh{\rho^{\log_q n_0}}$, this will be rather small. We give a
precise estimate in the following lemma.
\begin{lemma}\label{lemma:Dirichlet-upper-bound}
Let $n_0> 1$ and let $M\coloneqq \max_{0\le r<q} \norm{A_r}$.
For $\Re s>\log_q M + 1$, we have
\begin{equation*}
\sum_{n\ge n_0}\frac{\norm{f(n)}}{n^{\Re s}}\le \frac{M}{(\Re
s-\log_q M -1)(n_0-1)^{\Re s-\log_q M -1}}.
\end{equation*}
\end{lemma}
\begin{proof}
By definition of $M$, we have $\norm{f(n)}\le M^{1+\log_q n}=M n^{\log_q
M}$. Therefore, we have
\begin{align*}
\sum_{n\ge n_0}\frac{\norm{f(n)}}{n^{\Re s}}&\le M\sum_{n\ge n_0}\frac1{n^{\Re s - \log_q M}}\le
M\int_{n_0-1}^\infty \frac{\mathrm{d} n}{n^{\Re s - \log_q M}}\\&=\frac{M}{(\Re
s-\log_q M -1)(n_0-1)^{\Re s-\log_q M -1}}
\end{align*}
where we interpret the sum as a lower Riemann sum of the integral.
\end{proof}
We now give a bound for the approximation error in~\eqref{eq:Dirichlet-recursion}.
\begin{lemma}\label{lemma:approximation-error}
Let $n_0>1$ and $M$ as in Lemma~\ref{lemma:Dirichlet-upper-bound}. Let $K\ge
1$ and $s\in\mathbb{C}$ be such that $\Re s+K>\max(\log_q M + 1, 0)$.
Then
\begin{multline*}
\norm[\bigg]{\mathcal{G}_{n_0}(s) - \sum_{n_0\le n<qn_0} n^{-s}f(n) - q^{-s}\sum_{0\le
r<q}A_r\sum_{1\le k<K}\binom{-s}{k}\Bigl(\frac{r}{q}\Bigr)^k
\mathcal{F}_{n_0}(s+k)} \\
\le q^{-\Re s}\abs[\Big]{\binom{-s}{K}} \frac{M}{(\Re
s+K-\log_q M -1)(n_0-1)^{\Re s+K-\log_q M -1}}\sum_{0\le
r<q}\norm{A_r}\Bigl(\frac{r}{q}\Bigr)^K.
\end{multline*}
\end{lemma}
\begin{proof}
We set
\begin{equation*}
D\coloneqq \mathcal{G}_{n_0}(s) - \sum_{n_0\le n<qn_0} n^{-s}f(n) - q^{-s}\sum_{0\le
r<q}A_r\sum_{1\le k<K}\binom{-s}{k}\Bigl(\frac{r}{q}\Bigr)^k
\mathcal{F}_{n_0}(s+k)
\end{equation*}
and need to estimate $\norm{D}$.
By definition of $\mathcal{G}_{n_0}(s)$, we have
\begin{align*}
\mathcal{G}_{n_0}(s) &= (1-q^{-s}C)\mathcal{F}_{n_0}(s)\\
&=\sum_{n_0\le n<qn_0} n^{-s}f(n) + \mathcal{F}_{qn_0}(s) -
q^{-s}C\mathcal{F}_{n_0}(s)\\
&=\sum_{n_0\le n<qn_0} n^{-s}f(n) +\sum_{0\le r<q}\sum_{n\ge n_0}\frac{A_r
f(n)}{(qn+r)^s}- q^{-s}C\mathcal{F}_{n_0}(s)\\
&=\sum_{n_0\le n<qn_0} n^{-s}f(n) +q^{-s}\sum_{0\le r<q}A_r \sum_{n\ge n_0}\frac{f(n)}{n^s}\Bigl(\Bigl(1+\frac{r}{qn}\Bigr)^{-s}- 1\Bigr).
\end{align*}
Thus we have
\begin{equation*}
D = q^{-s}\sum_{0\le r<q}A_r \sum_{n\ge
n_0}\frac{f(n)}{n^s}\biggl(\Bigl(1+\frac{r}{qn}\Bigr)^{-s}- \sum_{0\le k<K}\binom{-s}{k}\Bigl(\frac{r}{qn}\Bigr)^k\biggr).
\end{equation*}
For $0\le x<1$, Taylor's theorem (or induction on $K\ge 1$ using integration
by parts) implies that
\begin{equation*}
(1+x)^{-s}-\sum_{0\le k<K}\binom{-s}{k}x^k = K\int_{0}^x
\binom{-s}{K}(1+t)^{-s-K}(x-t)^{K-1}\,\mathrm{d} t.
\end{equation*}
For $0\le t\le x<1$, we can bound $\abs{(1+t)^{-s-K}}$ from above by $1$ since we have assumed that $\Re s + K>0$. Thus
\begin{equation*}
\abs[\bigg]{(1+x)^{-s}-\sum_{0\le k<K}\binom{-s}{k}x^k} \le K\abs[\Big]{\binom{-s}{K}} \int_{0}^x
(x-t)^{K-1}\,\mathrm{d} t = \abs[\Big]{\binom{-s}{K}}x^K.
\end{equation*}
Thus we obtain the bound
\begin{equation*}
\norm{D} \le q^{-\Re s}\abs[\Big]{\binom{-s}{K}}\sum_{0\le
r<q}\norm{A_r}\Bigl(\frac{r}{q}\Bigr)^K\sum_{n\ge
n_0}\frac{\norm{f(n)}}{n^{\Re s+K}}.
\end{equation*}
Bounding the remaining Dirichlet series by Lemma~\ref{lemma:Dirichlet-upper-bound} yields the result.
\end{proof}
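The Taylor-remainder inequality $\abs{(1+x)^{-s}-\sum_{0\le k<K}\binom{-s}{k}x^k}\le\abs{\binom{-s}{K}}x^K$ established in the proof above can be illustrated numerically. In the sketch below, the values of $s$, $K$ and $x$ are arbitrary choices satisfying $\Re s+K>0$ and $0\le x<1$; the generalized binomial coefficient is computed from its defining product.

```python
def gbinom(a, k):
    """Generalized binomial coefficient binom(a, k) for complex a."""
    r = complex(1)
    for j in range(k):
        r *= (a - j) / (j + 1)
    return r

# Illustrative values with Re(s) + K > 0 and 0 <= x < 1.
s, K, x = 2 + 3j, 4, 0.3

# Remainder of the binomial series after K terms versus the stated bound.
lhs = abs((1 + x) ** (-s) - sum(gbinom(-s, k) * x**k for k in range(K)))
rhs = abs(gbinom(-s, K)) * x**K
assert lhs <= rhs
```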
\subsection{Choices of Parameters}
\label{section:choice-parameters}
As mentioned at the beginning of this part, we choose the Arb
library~\cite{Johansson:2017:arb} for reliable numerical ball
arithmetic.
In our examples (esthetic numbers in
Section~\ref{sec:esthetic-numbers} and Pascal's rhombus in
Section~\ref{sec:pascal}),
we choose $n_0=1024$ and recursively compute
$\mathcal{F}_{n_0}(\log_q\lambda + \chi_\ell+k)$ for $k\ge 1$
by~\eqref{eq:Dirichlet-recursion}. In each step, we keep adding summands for
$k\ge 1$ until the bound of the approximation error in
Lemma~\ref{lemma:approximation-error} is smaller than
the smallest increment which can still be represented with the chosen number of
bits. For plotting the graphs, we simply took machine precision; for the larger number
of significant digits in Table~\ref{table:pascal-rhombus:fourier}, we used 128
bits precision.
\section{Non-vanishing Coefficients}
\label{section:non-vanishing}
Using reliable numerical arithmetic for the computations (see above) yields small
balls in which the true values of the Fourier coefficients
lie.
If such a ball does not contain zero, we know that the Fourier
coefficient does not vanish. If the ball contains zero, however, we
cannot decide whether the Fourier coefficient vanishes. We can only
repeat the computation with higher precision and hope that this will
lead to a decision that the coefficient does not vanish,
or we can try to find a direct argument why the
Fourier coefficient does indeed vanish, for instance using the final
statement of
Theorem~\ref{theorem:contribution-of-eigenspace}~(\ref{item:large-eigenvalue}).
Vanishing Fourier coefficients appear in our introductory
Example~\ref{example:binary-sum-of-digits}: In its continuation
(Example~\ref{example:binary-sum-of-digits:cont}) an alternative
approach is used to compute these coefficients explicitly
symbolically. In this way a decision for them being zero is possible.
The same is true for the example of transducers in Section~\ref{sec:transducer}.
It should also be noted that in the analysis of esthetic numbers
(example in Section~\ref{sec:esthetic-numbers}) we could have modelled
the problem by a complete transducer (by just introducing a sink) and then
applied the results of Section~\ref{sec:transducer}. This would have led to an
asymptotic expansion where the fluctuations of the main term (corresponding to the eigenvalue $q$) would in fact have vanished, but an argument would have been needed.
So we chose a different approach in Section~\ref{sec:esthetic-numbers} to avoid
this problem. There the eigenvalue~$q$ no longer occurs. This
implies that the fluctuations for the eigenvalue~$q$ of the transducer approach
vanish. Note also that half of the remaining fluctuations still
turn out to vanish:
this is shown in the proof of
Corollary~\ref{corollary:esthetic:asy}.
\subsection{Recurrence Relations and $2$-Regular Sequences}
\label{sec:recurrences}
Let $X(N)$, $Y(N)$ and $Z(N)$ be the number of ones in the first $N$ rows
(starting with row index~$1$)
of $\mathfrak{X}$, $\mathfrak{Y}$ and $\mathfrak{Z}$, respectively.
Goldwasser, Klostermeyer, Mays and
Trapp~\cite[(12)--(14)]{Goldwasser-Klostermeyer-Mays-Trapp:1999:Pascal-rhombus}
get the recurrence relations
\begin{align*}
X(N) &= X(\floor{\tfrac N2}) + Y(\ceil{\tfrac N2}) + Z(\floor{\tfrac N2}), \\
Y(N) &= X(\ceil{\tfrac N2}) + X(\floor{\tfrac N2}-1) + Z(\floor{\tfrac N2}) + Z(\ceil{\tfrac N2}-1), \\
Z(N) &= 2 X(\floor{\tfrac N2}) + 2 Y(\ceil{\tfrac N2})
\end{align*}
for $N\ge2$, and $X(0)=Y(0)=Z(0)=0$, $X(1)=1$, $Y(1)=1$ and $Z(1)=2$
(cf.~\cite[Figures~2 and~3]{Goldwasser-Klostermeyer-Mays-Trapp:1999:Pascal-rhombus}).
Distinguishing between even and odd indices gives
\begin{align*}
X(2N) &= X(N) + Y(N) + Z(N), \\
X(2N+1) &= X(N) + Y(N+1) + Z(N), \\
Y(2N) &= X(N) + X(N-1) + Z(N) + Z(N-1), \\
Y(2N+1) &= X(N+1) + X(N-1) + 2Z(N), \\
Z(2N) &= 2X(N) + 2Y(N), \\
Z(2N+1) &= 2X(N) + 2Y(N+1)
\end{align*}
for all $N\ge1$.
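Both recurrence systems are straightforward to cross-check by machine. The following Python sketch implements the floor/ceil recurrences with memoization and verifies that the even/odd split above agrees with them on an initial range.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def X(N):
    if N <= 1:
        return N  # X(0) = 0, X(1) = 1
    return X(N // 2) + Y((N + 1) // 2) + Z(N // 2)

@lru_cache(maxsize=None)
def Y(N):
    if N <= 1:
        return N  # Y(0) = 0, Y(1) = 1
    return X((N + 1) // 2) + X(N // 2 - 1) + Z(N // 2) + Z((N + 1) // 2 - 1)

@lru_cache(maxsize=None)
def Z(N):
    if N <= 1:
        return 2 * N  # Z(0) = 0, Z(1) = 2
    return 2 * X(N // 2) + 2 * Y((N + 1) // 2)

# The even/odd split must agree with the floor/ceil recurrences.
for n in range(1, 200):
    assert X(2 * n) == X(n) + Y(n) + Z(n)
    assert X(2 * n + 1) == X(n) + Y(n + 1) + Z(n)
    assert Y(2 * n) == X(n) + X(n - 1) + Z(n) + Z(n - 1)
    assert Y(2 * n + 1) == X(n + 1) + X(n - 1) + 2 * Z(n)
    assert Z(2 * n) == 2 * X(n) + 2 * Y(n)
    assert Z(2 * n + 1) == 2 * X(n) + 2 * Y(n + 1)
```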
Now we build the backward differences
$x(n) = X(n) - X(n-1)$, $y(n) = Y(n) - Y(n-1)$ and $z(n) = Z(n) - Z(n-1)$.
These $x(n)$, $y(n)$ and $z(n)$ are the number
of ones in the $n$th row of $\mathfrak{X}$, $\mathfrak{Y}$ and
$\mathfrak{Z}$, respectively, and clearly
\begin{equation*}
X(N) = \sum_{1\leq n \leq N} x(n),
\qquad
Y(N) = \sum_{1\leq n \leq N} y(n),
\qquad
Z(N) = \sum_{1\leq n \leq N} z(n)
\end{equation*}
holds. We obtain
\begin{subequations}
\label{eq:rec-pascal-rhombus:main}
\begin{align}
x(2n)&=x(n)+z(n), &
x(2n+1)&=y(n+1), \label{eq:rec-x}\\
y(2n)&= x(n-1)+z(n), &
y(2n+1)&=x(n+1) +z(n), \label{eq:rec-y}\\
z(2n)&= 2x(n), &
z(2n+1)&=2y(n+1) \label{eq:rec-z}
\end{align}
\end{subequations}
for $n\ge1$, and $x(0)=y(0)=z(0)=0$, $x(1)=1$, $y(1)=1$ and $z(1)=2$.
Let us write our coefficients as the
vector
\begin{equation}\label{eq:pascal:vec-v}
v(n) = \bigl(x(n), x(n+1), y(n+1), z(n), z(n+1)\bigr)^\top.
\end{equation}
It turns out that the components included in $v(n)$ are
sufficient for a self-contained linear representation of~$v(n)$.
In particular, it is not necessary to include~$y(n)$.
By using the recurrences~\eqref{eq:rec-pascal-rhombus:main}, we find that
\begin{equation*}
v(2n) = A_0 v(n)
\qquad\text{and}\qquad
v(2n+1) = A_1 v(n)
\end{equation*}
for all\footnote{ Note that $v(0) = A_0 v(0)$ and $v(1) = A_1 v(0)$ are indeed
true.}
$n\ge0$ with the matrices
\begin{equation*}
A_0 =
\begin{pmatrix}
1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 \\
2 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0
\end{pmatrix}
\qquad\text{and}\qquad
A_1 =
\begin{pmatrix}
0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 \\
0 & 0 & 2 & 0 & 0 \\
0 & 2 & 0 & 0 & 0
\end{pmatrix},
\end{equation*}
and with $v(0) = (0,1,1,0,2)^\top$.
Therefore, the sequences $x(n)$, $y(n)$ and $z(n)$ are $2$-regular.
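The matrices $A_0$ and $A_1$ can be verified against the scalar recurrences~\eqref{eq:rec-pascal-rhombus:main}. The following Python sketch computes $x(n)$, $y(n)$, $z(n)$ from those recurrences and checks $v(2n)=A_0v(n)$ and $v(2n+1)=A_1v(n)$ for an initial range of~$n$.

```python
from functools import lru_cache

# Matrices A0, A1 and v(0) as in the text.
A0 = [[1,0,0,1,0],[0,0,1,0,0],[0,1,0,1,0],[2,0,0,0,0],[0,0,2,0,0]]
A1 = [[0,0,1,0,0],[0,1,0,0,1],[1,0,0,0,1],[0,0,2,0,0],[0,2,0,0,0]]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

@lru_cache(maxsize=None)
def x(n):
    if n <= 1:
        return n            # x(0) = 0, x(1) = 1
    m, r = divmod(n, 2)
    return x(m) + z(m) if r == 0 else y(m + 1)

@lru_cache(maxsize=None)
def y(n):
    if n <= 1:
        return n            # y(0) = 0, y(1) = 1
    m, r = divmod(n, 2)
    return x(m - 1) + z(m) if r == 0 else x(m + 1) + z(m)

@lru_cache(maxsize=None)
def z(n):
    if n <= 1:
        return 2 * n        # z(0) = 0, z(1) = 2
    m, r = divmod(n, 2)
    return 2 * x(m) if r == 0 else 2 * y(m + 1)

def v(n):
    return [x(n), x(n + 1), y(n + 1), z(n), z(n + 1)]

assert v(0) == [0, 1, 1, 0, 2]
for n in range(100):
    assert v(2 * n) == matvec(A0, v(n))
    assert v(2 * n + 1) == matvec(A1, v(n))
```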
\subsection{Full Asymptotics}
\label{sec:asymptotics}
\begin{corollary}\label{corollary:pascal-rhombus:main}
We have
\begin{equation}\label{eq:pascal-rhombus:main-asy}
X(N) = \sum_{1\leq n \leq N} x(n)
= N^\gamma \f{\Phi}{\fractional{\log_2 N}} + \Oh{N \log_2 N}
\end{equation}
with $\gamma = \log_2 \bigl(3+\sqrt{17}\,\bigr)-1 = 1.83250638358045\ldots$ and
a $1$-periodic function $\Phi$ which is Hölder continuous with
any exponent smaller than $\gamma-1$.
Moreover, we can effectively compute the Fourier coefficients
of~$\Phi$ (as explained in Part~\ref{part:numerical}).
\end{corollary}
We get analogous results for the sequences~$Y(N)$ and $Z(N)$ (each with
its own periodic function~$\Phi$, but the same exponent $\gamma$).
The fluctuation~$\Phi$ of $X(N)$ is visualized in Figure~\ref{fig:fluct-a} and its
first few Fourier coefficients are shown in Table~\ref{table:pascal-rhombus:fourier}.
\begin{figure}
\centering
\includegraphics{pascal_rhombus_plot.pdf}
\caption{Fluctuation in the main term of the asymptotic expansion of $X(N)$.
The figure shows $\f{\Phi}{u}$ (red) approximated by
its trigonometric polynomial of degree~$1999$ as well as
$X(2^u) / 2^{u\gamma}$ (blue).}
\label{fig:fluct-a}
\end{figure}
\begin{table}
\centering
\begin{equation*}\footnotesize
\begin{array}{r|l}
\multicolumn{1}{c|}{\ell} &
\multicolumn{1}{c}{\varphi_\ell} \\
\hline
0 & \phantom{-}0.6911615112341912755021246 \\
1 & -0.01079216311240407872950510 - 0.0023421761940286789685827i \\
2 & \phantom{-}0.00279378637350495172116712 - 0.00066736128659728911347756i \\
3 & -0.00020078258323645842522640 - 0.0031973663977645462669373i \\
4 & \phantom{-}0.00024944678921746747281338 - 0.0005912995467076061497650i \\
5 & -0.0003886698612765803447578 + 0.00006723866319930148568431i \\
6 & -0.0006223575988893574655258 + 0.00043217220614939859781542i \\
7 & \phantom{-}0.00023034317364181383130476 - 0.00058663168772856091427688i \\
8 & \phantom{-}0.0005339060804798716172593 - 0.0002119380802590974909465i \\
9 & \phantom{-}0.0000678898389770175928529 - 0.00038307823285486235280185i \\
10 & -0.00019981745997355255061991 - 0.00031394569060142799808175i \\
\end{array}
\end{equation*}
\caption{Fourier coefficients of~$\Phi$
(Corollary~\ref{corollary:pascal-rhombus:main}). All stated digits are
correct; see also Part~\ref{part:numerical}.}
\label{table:pascal-rhombus:fourier}
\end{table}
\subsection{Proof of the Asymptotic Result}
At this point, we only prove~\eqref{eq:pascal-rhombus:main-asy} of
Corollary~\ref{corollary:pascal-rhombus:main}. We deal with the
Fourier coefficients in Section~\ref{sec:fourier}. As in the
introductory example of the binary sum-of-digits functions
(Example~\ref{example:binary-sum-of-digits}), we could get Fourier
coefficients by Theorem~\ref{theorem:simple} and the $2$-linear
representation of Section~\ref{sec:recurrences} directly. However,
the information in the vector~$v(n)$ (see \eqref{eq:pascal:vec-v})
is redundant with respect to the asymptotic main term as it contains
$x(n)$ and $z(n)$ as well as $x(n+1)$ and $z(n+1)$; both pairs are
asymptotically equal in the sense of~\eqref{eq:pascal-rhombus:main-asy}.
Therefore, we aim for an only $3$-dimensional functional system of equations
for our Dirichlet series of $x(n)$, $y(n)$ and $z(n)$
(instead of a $5$-dimensional system).
\begin{proof}[Proof of~\eqref{eq:pascal-rhombus:main-asy}]
We use Theorem~\ref{theorem:simple}.
\proofparagraph{Joint Spectral Radius}
First we compute the joint spectral radius $\rho$ of
$A_0$ and $A_1$. Both matrices have a maximum absolute
row sum equal to $2$, thus $\rho\leq 2$, and both
matrices have~$2$ as an eigenvalue. Therefore we obtain
$\rho=2$. Moreover, the finiteness property of the linear
representation is satisfied by considering only products with exactly one
matrix factor $A_0$ or $A_1$.
Thus, we have $R=\rho=2$.
\proofparagraph{Eigenvalues}
Next, we compute the spectrum~$\sigma(C)$ of $C=A_0+A_1$. The
matrix $C$ has the eigenvalues~$\lambda_1=\bigl(3+\sqrt{17}\,\bigr)/2=3.5615528128088\ldots$,
$\lambda_2=2$, $\lambda_3=-2$, $\lambda_4=-1$ and
$\lambda_5=\bigl(3-\sqrt{17}\,\bigr)/2=-0.5615528128088\ldots$ (each with multiplicity one).
Note that $\lambda_1$ and $\lambda_5$ are the zeros of the
polynomial~$U^2-3U-2$.
\proofparagraph{Asymptotics}
By using Theorem~\ref{theorem:simple}, we obtain an
asymptotic formula for $X(N-1)$. Shifting from $N-1$ to $N$ does not
change this asymptotic formula, as this shift is absorbed by the
error term $\Oh{N \log_2 N}$.
\end{proof}
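The quantities appearing in this proof are easy to check by machine. The following Python sketch verifies the row-sum bound used for the joint spectral radius and confirms that the claimed eigenvalues are zeros of $\det(C-\lambda I)$, using a naive Laplace-expansion determinant (adequate for a $5\times5$ matrix).

```python
import math

A0 = [[1,0,0,1,0],[0,0,1,0,0],[0,1,0,1,0],[2,0,0,0,0],[0,0,2,0,0]]
A1 = [[0,0,1,0,0],[0,1,0,0,1],[1,0,0,0,1],[0,0,2,0,0],[0,2,0,0,0]]
C = [[a + b for a, b in zip(r0, r1)] for r0, r1 in zip(A0, A1)]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

# Maximum absolute row sums of A0 and A1 (an upper bound for the
# joint spectral radius) are both 2.
assert max(sum(map(abs, row)) for row in A0) == 2
assert max(sum(map(abs, row)) for row in A1) == 2

# The claimed eigenvalues of C = A0 + A1 are zeros of det(C - lambda*I).
eigenvalues = [(3 + math.sqrt(17)) / 2, (3 - math.sqrt(17)) / 2,
               2.0, -2.0, -1.0]
for lam in eigenvalues:
    ClamI = [[C[i][j] - (lam if i == j else 0.0) for j in range(5)]
             for i in range(5)]
    assert abs(det(ClamI)) < 1e-9
```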
\subsection{Dirichlet Series and Meromorphic Continuation}
\label{sec:meromorphic}
In the lemma below,
we provide the functional equation~\eqref{eq:pascal:functional-equation}
as a system of three equations. This is in contrast
to the generic functional equation
provided by Theorem~\ref{theorem:Dirichlet-series} which is a system of five
equations.
Let $n_0\ge2$ be an integer and define
\begin{align*}
\f{\mathcal{X}_{n_0}}{s} &= \sum_{n\geq n_0} \frac{x(n)}{n^s}, &
\f{\mathcal{Y}_{n_0}}{s} &= \sum_{n\geq n_0} \frac{y(n)}{n^s}, &
\f{\mathcal{Z}_{n_0}}{s} &= \sum_{n\geq n_0} \frac{z(n)}{n^s}.
\end{align*}
\begin{lemma}\label{lemma:meromorphic}
Set
\begin{equation*}
M = I -
\begin{pmatrix}
2^{-s} & 2^{-s} & 2^{-s} \\
2^{1-s} & 0 & 2^{1-s} \\
2^{1-s} & 2^{1-s} & 0 \\
\end{pmatrix}.
\end{equation*}
Then
\begin{equation}\label{eq:pascal:functional-equation}
M
\begin{pmatrix}
\f{\mathcal{X}_{n_0}}{s} \\ \f{\mathcal{Y}_{n_0}}{s} \\ \f{\mathcal{Z}_{n_0}}{s}
\end{pmatrix}
=
\begin{pmatrix}
\f{\mathcal{J}_{n_0}}{s} \\ \f{\mathcal{K}_{n_0}}{s} \\ \f{\mathcal{L}_{n_0}}{s}
\end{pmatrix}\!,
\end{equation}
where
\begin{align*}
\f{\mathcal{J}_{n_0}}{s} &= 2^{-s} \f{\Sigma}{s, -\tfrac12, \mathcal{Y}_{n_0}}
+ \mathcal{I}_{\mathcal{J}_{n_0}}(s), \\
&\;\mathcal{I}_{\mathcal{J}_{n_0}}(s) = - \frac{y(n_0)}{(2n_0-1)^s}
+ \sum_{n_0\leq n<2n_0} \frac{x(n)}{n^s}, \\
\f{\mathcal{K}_{n_0}}{s} &=
2^{-s} \f{\Sigma}{s, 1, \mathcal{X}_{n_0}} + 2^{-s} \f{\Sigma}{s, -\tfrac12, \mathcal{X}_{n_0}}
+ 2^{-s} \f{\Sigma}{s, \tfrac12, \mathcal{Z}_{n_0}}
+ \mathcal{I}_{\mathcal{K}_{n_0}}(s), \\
&\;\mathcal{I}_{\mathcal{K}_{n_0}}(s) = \frac{x(n_0-1)}{(2n_0)^s} - \frac{x(n_0)}{(2n_0-1)^s}
+ \sum_{n_0\leq n<2n_0} \frac{y(n)}{n^s}, \\
\f{\mathcal{L}_{n_0}}{s} &= 2^{1-s} \f{\Sigma}{s, -\tfrac12, \mathcal{Y}_{n_0}}
+ \mathcal{I}_{\mathcal{L}_{n_0}}(s), \\
&\;\mathcal{I}_{\mathcal{L}_{n_0}}(s) = - \frac{2 y(n_0)}{(2n_0-1)^s}
+ \sum_{n_0\leq n<2n_0} \frac{z(n)}{n^s},
\end{align*}
with the notion of $\Sigma$ as in Lemma~\ref{lemma:shifted-Dirichlet},
provides meromorphic continuations
of the Dirichlet series~$\f{\mathcal{X}_{n_0}}{s}$, $\f{\mathcal{Y}_{n_0}}{s}$,
and $\f{\mathcal{Z}_{n_0}}{s}$ for $\Re s > \gamma_0=1$ with the only possible
poles at $\gamma + \chi_\ell$ for $\ell\in\mathbb{Z}$,
all of which are simple poles.
\end{lemma}
\begin{proof}
We split the proof into several steps.
\proofparagraph{Functional Equation}
\ifdetails
From \eqref{eq:rec-x} we obtain
\begin{equation*}
\f{\mathcal{X}_{n_0}}{s} = \sum_{n_0\leq n<2n_0} \frac{x(n)}{n^s}
+ \sum_{n\geq n_0} \frac{x(n)}{(2n)^s}
+ \sum_{n\geq n_0} \frac{z(n)}{(2n)^s}
+ \sum_{n\geq n_0} \frac{y(n+1)}{(2n+1)^s}.
\end{equation*}
The second and third summands become $2^{-s} \f{\mathcal{X}_{n_0}}{s}$ and $2^{-s} \f{\mathcal{Z}_{n_0}}{s}$,
respectively, and we are left to rewrite the fourth summand. By
using Lemma~\ref{lemma:shifted-Dirichlet} with $\beta=-1/2$ we
get
\begin{align*}
\sum_{n\geq n_0} \frac{y(n+1)}{(2n+1)^s}
&= 2^{-s} \sum_{n\geq n_0} \frac{y(n)}{(n-\frac12)^s}
- \frac{y(n_0)}{(2n_0-1)^s} \\
&= 2^{-s} \f{\mathcal{Y}_{n_0}}{s}
+ 2^{-s} \f{\Sigma}{s, -\tfrac12, \mathcal{Y}_{n_0}} - \frac{y(n_0)}{(2n_0-1)^s}.
\end{align*}
The first row of \eqref{eq:pascal:functional-equation} now follows.
Similarly, from~\eqref{eq:rec-y}\else From~\eqref{eq:rec-y}\fi{} we obtain
\begin{equation}\label{eq:func:Ys}
\begin{split}
\f{\mathcal{Y}_{n_0}}{s} &= \sum_{n_0\leq n<2n_0} \frac{y(n)}{n^s}
+ \sum_{n\geq n_0} \frac{x(n-1)}{(2n)^s}
+ \sum_{n\geq n_0} \frac{z(n)}{(2n)^s} \\
&\phantom{=}\hphantom{0}
+ \sum_{n\geq n_0} \frac{x(n+1)}{(2n+1)^s}
+ \sum_{n\geq n_0} \frac{z(n)}{(2n+1)^s} \\
&= \sum_{n_0\leq n<2n_0} \frac{y(n)}{n^s}
+ 2^{-s} \sum_{n\geq n_0} \frac{x(n)}{(n+1)^s} + \frac{x(n_0-1)}{(2n_0)^s}
+ 2^{-s} \sum_{n\geq n_0} \frac{z(n)}{n^s} \\
&\phantom{=}\hphantom{0}
+ 2^{-s} \sum_{n\geq n_0} \frac{x(n)}{(n-\frac12)^s} - \frac{x(n_0)}{(2n_0-1)^s}
+ 2^{-s} \sum_{n\geq n_0} \frac{z(n)}{(n+\frac12)^s}\\
&= \sum_{n_0\leq n<2n_0} \frac{y(n)}{n^s}
+ 2^{-s} (\f{\mathcal{X}_{n_0}}{s} + \Sigma(s, 1, \mathcal{X}_{n_0})) + \frac{x(n_0-1)}{(2n_0)^s}
+ 2^{-s} \mathcal{Z}_{n_0}(s) \\
&\phantom{=}\hphantom{0}
+ 2^{-s} \bigl(\f{\mathcal{X}_{n_0}}{s} + \Sigma(s, -\tfrac{1}{2}, \mathcal{X}_{n_0})\bigr) - \frac{x(n_0)}{(2n_0-1)^s} \\
&\phantom{=}\hphantom{0}
+ 2^{-s} \bigl(\f{\mathcal{Z}_{n_0}}{s} + \Sigma(s, \tfrac{1}{2}, \mathcal{Z}_{n_0})\bigr).
\end{split}
\end{equation}
The second row of \eqref{eq:pascal:functional-equation}
follows.
\ifdetails
Similarly, \eqref{eq:rec-z} yields
\begin{align*}
\f{\mathcal{Z}_{n_0}}{s} &= \sum_{n_0\leq n<2n_0} \frac{z(n)}{n^s}
+ 2 \sum_{n\geq n_0} \frac{x(n)}{(2n)^s}
+ 2 \sum_{n\geq n_0} \frac{y(n+1)}{(2n+1)^s} \\
&= \sum_{n_0\leq n<2n_0} \frac{z(n)}{n^s}
+ 2^{1-s} \sum_{n\geq n_0} \frac{x(n)}{n^s}
+ 2^{1-s} \sum_{n\geq n_0} \frac{y(n)}{(n-\tfrac12)^s} - \frac{2 y(n_0)}{(2n_0-1)^s},
\end{align*}
and the third row of \eqref{eq:pascal:functional-equation} follows.
\else
Similarly,~\eqref{eq:rec-x} and~\eqref{eq:rec-z} yield the first and third
rows of~\eqref{eq:pascal:functional-equation}, respectively.
\fi
\proofparagraph{Determinant and Zeros}
The determinant of $M$ is
\begin{equation*}
\f{\Delta}{s} = \det M
= 2^{-3s} \bigl(2^{2s} - 3\cdot 2^s - 2\bigr) \bigl(2^s + 2\bigr).
\end{equation*}
It is an entire function.
All zeros of $\Delta$ are simple zeros.
In particular, solving $\f{\Delta}{s} = 0$ gives $2^s = 3/2 \pm \sqrt{17}/2$ (the two zeros of $U^2-3U-2$) and $2^s = -2$.
A solution $\f{\Delta}{s_0} = 0$
implies that $s_0 + 2\pi i \ell/\log 2$ with $\ell\in\mathbb{Z}$ satisfies
the same equation as well.
Moreover, set $\gamma=\log_2 \bigl(3+\sqrt{17}\,\bigr) - 1 = 1.8325063835804\dots$.
Then the only zeros with $\Re s > \gamma_0=1$ are at
$\gamma + \chi_\ell$ with $\chi_\ell = 2\pi i \ell / \log 2$ for $\ell\in\mathbb{Z}$.
It is no surprise that the $\gamma$ of this lemma and the $\gamma$
in the proof of Corollary~\ref{corollary:pascal-rhombus:main} which comes from the
$2$-linear representation of Section~\ref{sec:recurrences} coincide.
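The factorisation of $\f{\Delta}{s}$ and the location of the dominant zero can be confirmed numerically. The sketch below uses plain floating-point arithmetic (not the ball arithmetic of Part~\ref{part:numerical}) and a few arbitrary sample points.

```python
import math

def Delta_direct(s):
    """det M evaluated directly from the 3x3 matrix, with w = 2^(-s)."""
    w = 2.0 ** (-s)
    M = [[1 - w,  -w,     -w],
         [-2 * w,  1,     -2 * w],
         [-2 * w, -2 * w,  1]]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def Delta_factored(s):
    """The factorised form 2^(-3s) (2^(2s) - 3*2^s - 2)(2^s + 2)."""
    return 2.0 ** (-3 * s) * (2.0 ** (2 * s) - 3 * 2.0 ** s - 2) * (2.0 ** s + 2)

# The two expressions agree at a few (arbitrary) sample points.
for s in [0.5, 1.5 + 0.7j, 2.0 - 1.3j]:
    assert abs(Delta_direct(s) - Delta_factored(s)) < 1e-12

# gamma = log_2(3 + sqrt(17)) - 1 is a zero of Delta, since 2^gamma
# solves U^2 - 3U - 2 = 0.
gamma = math.log2(3 + math.sqrt(17)) - 1
assert abs(gamma - 1.83250638358045) < 1e-11
assert abs(Delta_factored(gamma)) < 1e-12
```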
\proofparagraph{Meromorphic Continuation}
Let
$\mathcal{D}_{n_0}\in\set{\mathcal{X}_{n_0},\mathcal{Y}_{n_0},\mathcal{Z}_{n_0}}$.
The Dirichlet series~$\f{\mathcal{D}_{n_0}}{s}$ is
analytic for $\Re s > 2 = \log_2 \rho + 1$ with $\rho=2$ being the
joint spectral radius by Theorem~\ref{theorem:Dirichlet-series}.
We use the
functional equation~\eqref{eq:pascal:functional-equation} which
provides the continuation, as we write $\f{\mathcal{D}_{n_0}}{s}$ in terms of
$\f{\mathcal{J}_{n_0}}{s}$, $\f{\mathcal{K}_{n_0}}{s}$ and $\f{\mathcal{L}_{n_0}}{s}$.
By Lemma~\ref{lemma:shifted-Dirichlet},
these three functions are analytic for $\Re s > 1$.
The zeros (all are simple zeros)
of the denominator~$\f{\Delta}{s}$ are the only possibilities
for the poles of $\f{\mathcal{D}_{n_0}}{s}$ for $\Re s > 1$.
\end{proof}
\subsection{Fourier Coefficients}
\label{sec:fourier}
We are now ready to prove the rest of Corollary~\ref{corollary:pascal-rhombus:main}.
\begin{proof}[Proof of Corollary~\ref{corollary:pascal-rhombus:main}]
We verify that we can apply Theorem~\ref{theorem:use-Mellin--Perron}.
The steps of this proof in Section~\ref{sec:asymptotics} provided us
already with an asymptotic
expansion~\eqref{eq:pascal-rhombus:main-asy}. Lemma~\ref{lemma:meromorphic}
gives us the meromorphic function for $\Re s>\gamma_0=1$ which comes from
the Dirichlet series
$\bigl(\f{\mathcal{X}_{n_0}}{s}, \f{\mathcal{Y}_{n_0}}{s}, \f{\mathcal{Z}_{n_0}}{s}\bigr)^\top$\!.
It can only have poles (all simple) at $s=\gamma + \chi_\ell$ for $\ell\in\mathbb{Z}$ and
satisfies the assumptions in
Theorem~\ref{theorem:use-Mellin--Perron} by
Theorem~\ref{theorem:Dirichlet-series} and
Remark~\ref{remark:Dirichlet-series:bound}.
Therefore a computation of the Fourier coefficients via computing
residues (see \eqref{eq:Fourier-coefficient:simple-as-residue}) is possible by
Theorem~\ref{theorem:use-Mellin--Perron}, and this residue may be
computed from~\eqref{eq:pascal:functional-equation} via Cramer's
rule.
\end{proof}
We refer to Part~\ref{part:numerical} for details on the actual
computation of the Fourier coefficients.
\section*{Contents}
\egroup
\vspace*{2ex}
\begin{multicols}{2}
{\vspace*{-12ex}\footnotesize\tableofcontents}
\end{multicols}
\newpage
\part{Introduction}\label{part:overview}
\section{Synopsis: The Objects of Interest and the Result}
In this paper, we study the asymptotic behaviour of the summatory function of a
$q$-regular sequence $x(n)$. At this
point, we give a short overview of the notion of $q$-regular sequences%
\footnote{
In the standard literature
\cite{Allouche-Shallit:1992:regular-sequences, Allouche-Shallit:2003:autom}
these sequences
are called $k$-regular sequences (instead of $q$-regular sequences).}
and our main result.
One characterisation of a $q$-regular sequence is as follows: The sequence
$x(n)$ is said to be $q$-regular if there are square matrices $A_0$, \ldots, $A_{q-1}$
and a vector-valued sequence $v(n)$ such that
\begin{equation*}
v(qn+r)=A_r v(n)\qquad \text{for $0\le r<q$ and $n\ge 0$}
\end{equation*}
and such that $x(n)$ is the first component of $v(n)$.
Regular sequences are
intimately related to the $q$-ary expansion of their arguments.
They have been introduced by Allouche and Shallit
\cite{Allouche-Shallit:1992:regular-sequences}; see also
\cite[Chapter~16]{Allouche-Shallit:2003:autom}. Many special
cases have been investigated in the literature; this is also due to their
relation to divide-and-conquer algorithms. Moreover, every $q$-automatic
sequence---those sequences are defined by finite automata---is $q$-regular as well. See also the
book~\cite{Allouche-Shallit:2003:autom} for many examples.
Our main result is, roughly speaking, that the summatory function of a
$q$-regular sequence $x(n)$ has the asymptotic form
\begin{equation}\label{eq:synopsis-shape-main}
\sum_{n<N}x(n) = \sum_{j=1}^J N^{\log_q \lambda_j} \frac{(\log
N)^{k_j}}{k_j!} \Phi_{k_j}(\fractional{\log_q N}) + O(N^{\log_q R})
\end{equation}
as $N\to\infty$ for a suitable positive integer~$J$,
suitable constants~$\lambda_j\in \mathbb{C}$,
suitable non-negative integers~$k_j$,
a suitable~$R$ and $1$-periodic continuous functions~$\Phi_{k_j}$.
The $\lambda_j$ will turn out to be eigenvalues of
$C \coloneqq A_0+\cdots+A_{q-1}$, the $k_j$ be related to the multiplicities of these
eigenvalues and the constant $R$ will be a bound for
the joint spectral radius of the matrices $A_0$, \ldots, $A_{q-1}$.
While~\eqref{eq:synopsis-shape-main} gives the shape of the asymptotic
form, gathering as much information as possible on the periodic
fluctuations~$\Phi_{k_j}$ is required to have a full picture. To this aim,
we will give a description of the Fourier coefficients of the $\Phi_{k_j}$
which allows us to compute them algorithmically and therefore to describe
these periodic fluctuations with high precision.
In particular, this allows us to detect non-vanishing fluctuations. Code%
\footnote{The code accompanying this article can be found at
\url{https://gitlab.com/dakrenn/regular-sequence-fluctuations}\,.
It is meant to be used with the open source mathematics software SageMath~\cite{SageMath:2018:8.3}.}
is provided to compute the Fourier coefficients.
We close this introductory section by noting that the normalized sum
$\frac{1}{N} \sum_{n<N}x(n)$ gives the expectation of a
random element of the sequence~$x(n)$ with respect to the uniform distribution
on the non-negative integers smaller than a certain~$N$.
\section{How to Read this Paper}
This is a long (and perhaps sometimes technical) paper and not all readers
might find the time to read it from the very beginning to the very end. We
therefore outline reading strategies for various interests.
For the reader who wants to \emph{apply our results to a particular problem}:
Read Section~\ref{section:introduction:regular-sequences}
on the definition of $q$-regular sequences and Section~\ref{section:introduction:main-result}
containing the main result in a condensed version which should cover
most applications. These two sections also have a simple,
illustrative and well-known running example.
If it turns out that the refined versions of the results are needed, follow
the upcoming paragraph below.
For the reader who still wants to \emph{apply our results to a particular
problem} but finds the \emph{condensed version insufficient},
turn to the overview of the results
(Section~\ref{section:results:overview}) and then continue
with Section~\ref{section:results} where the notations and results
are stated in full generality.
Formulating them will need quite a number of
definitions provided in Section~\ref{sec:definitions-notations}. In order to
cut straight to the results themselves, we will refrain from motivations and
comments on these definitions and
postpone those comments to Section~\ref{sec:motivation-definitions}.
For the reader who wants to \emph{determine the asymptotics of a regular sequence}
instead of determining the asymptotics of the summatory function of the
regular sequence, advice is given in
Section~\ref{section:asy-regular-sequences:non-summatory}.
For the reader who wants to read more about \emph{showcase applications} of our
method yielding \emph{new asymptotic results}, additionally to Section~\ref{section:user-friendly-main-and-example} read
Section~\ref{sec:overview-examples} where an overview of the examples in
this paper is given and then Part~\ref{part:examples}
where these examples are discussed in
detail. For many more examples to which the methods can be applied, read the
original papers~\cite{Allouche-Shallit:1992:regular-sequences,
Allouche-Shallit:2003:regular-sequences-2} and the book by Allouche and
Shallit~\cite{Allouche-Shallit:2003:autom}
which contain many examples of $q$-regular sequences.
For the reader who wants to \emph{compute the Fourier
coefficients} for a particular application, use the provided code. Read
Part~\ref{part:numerical} for more details, in particular, see
Section~\ref{section:non-vanishing} for some comments on how to decide
whether fluctuations are constant or even vanish.
Moreover, for the reader who is interested in
the background on the \emph{algorithmic aspects} and details of the
implementation of the actual computation, we also refer to
Part~\ref{part:numerical}; this part will also be useful for the reader
who wants to review the code written for SageMath.
For the reader who is interested in the \emph{history of the problem}, we refer to
Section~\ref{introduction:relation-to-previous-work}.
For the reader who wants to see a \emph{heuristic argument why everything works out},
there is Section~\ref{sec:heuristic} where it is shown that once one does not care about
convergence issues, the Mellin--Perron summation formula of order zero explains
the result.
For the reader who wants to understand the \emph{idea of the proof}, there is
Section~\ref{section:high-level-overview-proof}
with a high-level overview of the proof, showing how the above-mentioned
convergence issues with the
For the reader who wants to \emph{overcome convergence problems with the Mellin--Perron
summation formula} in other contexts involving periodic fluctuations, we note that
the pseudo-Tauberian argument (Proposition~\ref{proposition:pseudo-Tauber})
is completely independent of our application to
$q$-regular sequences; the only prerequisites are knowledge of the existence
of the fluctuation and sufficient knowledge of the analyticity and growth
of the Dirichlet generating function. As a consequence,
Theorem~\ref{theorem:use-Mellin--Perron} has been
formulated as an independent result and provisions have been made for several
applications of the pseudo-Tauberian argument.
Finally, for the reader who wants to \emph{fully understand the proof}: We have no other advice
than reading
the whole introduction,
the whole Section~\ref{section:results} on results and
the whole Part~\ref{part:proofs} on the proofs starting
with a very short
Section~\ref{additional-notation} where a few notations used throughout the proofs
are fixed.
\section{User-friendly Main Result and a First Example Application}
\label{section:user-friendly-main-and-example}
\subsection{\texorpdfstring{$q$}{q}-Regular Sequences}\label{section:introduction:regular-sequences}
We start by giving a definition of $q$-regular sequences; see Allouche and
Shallit~\cite{Allouche-Shallit:1992:regular-sequences}. Let $q\ge 2$ be a fixed
integer and $x$ be a sequence on $\mathbb{Z}_{\ge 0}$.\footnote{We use a functional
notation for sequences,
i.e., a sequence $x$ on $\mathbb{Z}_{\ge 0}$ is seen as a function $x\colon\mathbb{Z}_{\ge 0} \to \mathbb{C}$.}
Then $x$ is said to
be \emph{$(\mathbb{C}, q)$-regular} (briefly: \emph{$q$-regular} or simply \emph{regular}) if the $\mathbb{C}$-vector space
generated by its \emph{$q$-kernel}
\begin{equation*}
\setm[\big]{x \circ (n \mapsto q^j n+r)}%
{\text{integers $j\ge 0$, $0\le r<q^j$}}
\end{equation*}
has finite dimension.
In other words, $x$ is $q$-regular if
there are an integer $D$ and sequences $x_1$, \dots, $x_D$
such that for every $j\ge 0$ and $0\le r<q^j$
there exist complex numbers $c_1$, \ldots, $c_D$ with
\begin{equation*}
x(q^j n+r) = c_1 x_1(n) + \dotsb + c_D x_D(n)\qquad{\text{for all $n\ge 0$.}}
\end{equation*}
By Allouche and Shallit~\cite[Theorem~2.2]{Allouche-Shallit:1992:regular-sequences},
the sequence~$x$ is $q$-regular if and only if there exists a vector-valued
sequence~$v$ whose first component coincides with
$x$ and there exist square matrices $A_0$, \ldots, $A_{q-1}\in\mathbb{C}^{d\times d}$ such that
\begin{equation}\label{eq:linear-representation}
v(qn+r) = A_r v(n)\qquad\text{for $0\le r<q$ and $n\ge 0$.}
\end{equation}
This is called a \emph{$q$-linear representation} of the sequence~$x$.
The best-known example for a $2$-regular function is the binary sum-of-digits
function.
\begin{example}\label{example:binary-sum-of-digits}
For $n\ge 0$, let $x(n)=s(n)$ be the binary sum-of-digits of $n$. We clearly
have
\begin{equation}\label{eq:recursion-binary-sum-of-digits}
\begin{aligned}
x(2n)&=x(n),\\
x(2n+1)&=x(n)+1
\end{aligned}
\end{equation}
for $n\ge 0$.
Indeed, we have
\begin{equation*}
x(2^j n+ r) = x(n) + x(r)\cdot 1
\end{equation*}
for integers $j\ge 0$, $0\le r <2^j$ and $n\ge 0$; i.e., the complex vector space
generated by the $2$-kernel is generated by $x$ and the
constant sequence $n \mapsto 1$.
Alternatively, we set $v=(x, n \mapsto 1)^\top$ and have
\begin{align*}
v(2n)&=
\begin{pmatrix}
x(n)\\1
\end{pmatrix}=
\begin{pmatrix}
1&0\\
0&1
\end{pmatrix}v(n),\\
v(2n+1)&=
\begin{pmatrix}
x(n)+1\\
1
\end{pmatrix}=
\begin{pmatrix}
1 & 1\\
0 & 1
\end{pmatrix}v(n)
\end{align*}
for $n\ge 0$. Thus \eqref{eq:linear-representation} holds with
\begin{equation*}
A_0 =
\begin{pmatrix}
1&0\\
0&1
\end{pmatrix},\qquad
A_1 =
\begin{pmatrix}
1&1\\
0&1
\end{pmatrix}.
\end{equation*}
\end{example}
At this point, we note that a linear
representation~\eqref{eq:linear-representation} immediately leads to an
explicit expression for $x(n)$ by induction.
\begin{remark}\label{remark:regular-sequence-as-a-matrix-product}
Let $r_{\ell-1}\ldots r_0$ be the $q$-ary digit
expansion\footnote{
Whenever we write that $r_{\ell-1}\ldots r_0$ is the $q$-ary digit
expansion of $n$, we
mean that $r_j\in\set{0,\ldots, q-1}$ for $0\le j<\ell$, $r_{\ell-1}\neq 0$ and
$n=\sum_{0 \le j < \ell} r_j q^j$. In particular, the $q$-ary expansion of
zero is the empty word.}
of $n$. Then
\begin{equation*}
x(n) = e_1 A_{r_0}\dotsm A_{r_{\ell-1}}v(0)
\end{equation*}
where $e_1=\begin{pmatrix}1& 0& \dotsc& 0\end{pmatrix}$.
\end{remark}
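For illustration, the matrix-product formula of Remark~\ref{remark:regular-sequence-as-a-matrix-product} can be evaluated directly for the linear representation of Example~\ref{example:binary-sum-of-digits}. The short Python sketch below applies the digit matrices to $v(0)$ (most significant digit first) and confirms that the first component reproduces the binary sum-of-digits function.

```python
A0 = [[1, 0], [0, 1]]
A1 = [[1, 1], [0, 1]]
v0 = [0, 1]   # v(0) = (x(0), 1)^T with x(0) = s(0) = 0

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def x(n):
    """x(n) = e1 A_{r_0} ... A_{r_{l-1}} v(0).

    Since v(qn + r) = A_r v(n), the matrix for the most significant digit
    acts on v(0) first; the binary expansion of 0 is the empty word.
    """
    v = v0
    if n > 0:
        for r in map(int, bin(n)[2:]):   # digits r_{l-1}, ..., r_0
            v = matvec(A1 if r else A0, v)
    return v[0]

# The first component reproduces the binary sum-of-digits function.
for n in range(1024):
    assert x(n) == bin(n).count("1")
```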
\subsection{Condensed Main Result}\label{section:introduction:main-result}
We are interested in the asymptotic behaviour of the summatory function
$X(N)=\sum_{0\le n<N}x(n)$.
At this point, we give a simplified version of our results. We choose any
vector norm $\norm{\,\cdot\,}$ on $\mathbb{C}^d$ and its induced matrix norm. We set $C\coloneqq
\sum_{0 \le r < q} A_r$. We choose $R>0$ such that $\norm{A_{r_1}\dotsm
A_{r_\ell}}=\Oh{R^\ell}$ holds for all $\ell\ge 0$ and
$r_1$, \dots, $r_{\ell} \in \set{0,\dots,q-1}$.
In other words, $R$ is an upper bound for the joint spectral
radius of $A_0$, \ldots, $A_{q-1}$.
The spectrum of $C$, i.e., the set of eigenvalues of $C$, is denoted by
$\sigma(C)$. For $\lambda\in\mathbb{C}$, let $m(\lambda)$ denote the size of the
largest Jordan block of $C$ associated with $\lambda$; in particular,
$m(\lambda)=0$ if $\lambda\notin\sigma(C)$.
Finally, we consider the scalar-valued Dirichlet series~$\mathcal{X}$
and the vector-valued Dirichlet series~$\mathcal{V}$ defined
by\footnote{
Note that the summatory function $X(N)$ contains the summand $x(0)$, whereas the Dirichlet series cannot contain it.
We include $x(0)$ in $X(N)$ because this choice leads to more consistent results.}
\begin{equation*}
\mathcal{X}(s) = \sum_{n\ge 1} n^{-s}x(n) \qquad\text{and}\qquad
\mathcal{V}(s) = \sum_{n\ge 1} n^{-s}v(n)
\end{equation*}
where $v(n)$ is the vector-valued sequence defined in \eqref{eq:linear-representation}.
Of course, $\mathcal{X}(s)$ is the first component of $\mathcal{V}(s)$.
The principal value of the complex logarithm is denoted by $\log$. The
fractional part of a real number $z$ is denoted by $\fractional{z}\coloneqq z-\floor{z}$.
\begin{theorem}[User-friendly All-In-One Theorem]\label{theorem:simple}
With the notations above, we have
\begin{multline}\label{eq:formula-X-n}
X(N) = \sum_{\substack{\lambda\in\sigma(C)\\\abs{\lambda}>R}}N^{\log_q\lambda}
\sum_{0\le k<m(\lambda)}\frac{(\log N)^k}{k!}
\Phi_{\lambda k}(\fractional{\log_q N}) \\
+ \Oh[\big]{N^{\log_q R}(\log N)^{\max\setm{m(\lambda)}{\abs{\lambda}=R}}}
\end{multline}
for suitable $1$-periodic continuous functions $\Phi_{\lambda k}$. If there
are no eigenvalues $\lambda\in\sigma(C)$ with $\abs{\lambda}\le R$, the
$O$-term can be omitted.
For $\abs{\lambda}>R$ and $0\le k<m(\lambda)$, the function $\Phi_{\lambda k}$ is Hölder continuous with any exponent
smaller than $\log_q(\abs{\lambda}/R)$.
The Dirichlet series $\mathcal{V}(s)$ converges absolutely and uniformly on compact
subsets of the half plane $\Re
s>\log_q R +1$ and can be continued to a meromorphic function on the half plane $\Re
s>\log_q R$.
It satisfies the functional equation
\begin{equation}\label{eq:functional-equation-V}
\bigl(I-q^{-s}C\bigr)\mathcal{V}(s)= \sum_{1 \le n < q} n^{-s}v(n) +
q^{-s}\sum_{0 \le r < q} A_r \sum_{k\ge
1}\binom{-s}{k}\Bigl(\frac{r}{q}\Bigr)^k \mathcal{V}(s+k)
\end{equation}
for $\Re s>\log_q R$. The right-hand side of~\eqref{eq:functional-equation-V} converges absolutely and uniformly on
compact subsets of $\Re s>\log_q R$. In particular, $\mathcal{V}(s)$ can only have
poles where $q^s\in\sigma(C)$.
For $\lambda\in\sigma(C)$ with
$\abs{\lambda}>R$, the Fourier series
\begin{equation*}
\Phi_{\lambda k}(u) = \sum_{\ell\in \mathbb{Z}}\varphi_{\lambda k\ell}\exp(2\ell\pi i u)
\end{equation*}
converges pointwise for $u\in\mathbb{R}$ where the Fourier coefficients
$\varphi_{\lambda k\ell}$ are defined by the singular expansion\footnote{We
use the notion of singular expansion as defined by Flajolet, Gourdon and
Dumas~\cite[Definition~2]{Flajolet-Gourdon-Dumas:1995:mellin}: it is the
formal sum of the principal parts of a meromorphic function over all
poles in the domain given.}
\begin{equation}\label{eq:Fourier-coefficient:simple}
\frac{x(0)+\mathcal{X}(s)}{s} \asymp
\sum_{\substack{\lambda\in\sigma(C)\\\abs{\lambda}>R}}\sum_{\ell\in\mathbb{Z}}\sum_{0\le k<m(\lambda)} \frac{\varphi_{\lambda k\ell}}{\bigl(s-\log_q \lambda-\frac{2\ell\pi i}{\log q}\bigr)^{k+1}}
\end{equation}
for $\Re s>\log_q R$.
\end{theorem}
This theorem is proved in Section~\ref{section:proof-theorem-simple}.
We note:
\begin{itemize}
\item We write $\Phi_{\lambda k}(\fractional{\log_q N})$ to optically
emphasise the $1$-periodicity; technically, we have $\Phi_{\lambda
k}(\fractional{\log_q N})=\Phi_{\lambda k}(\log_q N)$.
\item The
arguments in the proof could be used to meromorphically continue the Dirichlet
series to the complex plane, but we do not need this result for our purposes.
See~\cite{Allouche-Mendes-Peyriere:2000:autom-diric} for the corresponding argument for automatic sequences.
\item
Sometimes, it will be convenient to write~\eqref{eq:Fourier-coefficient:simple}
in the equivalent explicit formulation
\begin{equation}\label{eq:Fourier-coefficient:simple-as-residue}
\varphi_{\lambda k \ell}=\Res[\bigg]{\frac{x(0)+\mathcal{X}(s)}{s}\Bigl(s-\log_q
\lambda-\frac{2\ell\pi i}{\log q}\Bigr)^{k}}{s=\log_q \lambda+\frac{2\ell\pi i}{\log q}}.
\end{equation}
In particular, this can be used to algorithmically compute
the~$\varphi_{\lambda k \ell}$.
\item Computing the Fourier coefficients~$\varphi_{\lambda k \ell}$
via the explicit formulation~\eqref{eq:Fourier-coefficient:simple-as-residue}
by reliable numerical arithmetic (see Part~\ref{part:numerical} for details)
enables us to detect the non-vanishing of a fluctuation; see also the
example below and in Section~\ref{sec:transducer}
(on sequences defined by transducers) for examples where the fluctuation of
the leading term is in fact constant. There, additional arguments are required to actually prove this fact;
see Section~\ref{section:non-vanishing} for more details.
\end{itemize}
We come back to the binary sum of digits.
\begin{example}[Continuation of Example~\ref{example:binary-sum-of-digits}]
\label{example:binary-sum-of-digits:cont}
We
have $C=A_0+A_1=\bigl(
\begin{smallmatrix}
2&1\\0&2
\end{smallmatrix}\bigr)
$. As $A_0$ is the identity matrix, any product $A_{r_1}\dotsm A_{r_\ell}$ has
the shape $A_1^k=\bigl(
\begin{smallmatrix}
1&k\\0&1
\end{smallmatrix}\bigr)
$ where $k$ is the number of factors $A_1$ in the product. This implies that
$R$ with $\norm{A_{r_1}\dotsm A_{r_\ell}}=\Oh{R^\ell}$ may be chosen to be any number greater than $1$. As $C$ is a Jordan block
itself, we simply read off that the only eigenvalue of $C$
is $\lambda=2$ with $m(2)=2$.
Thus Theorem~\ref{theorem:simple} yields
\begin{equation*}
X(N) = N(\log N) \f{\Phi_{21}}{\fractional{\log_2 N}}
+ N \f{\Phi_{20}}{\fractional{\log_2 N}}
\end{equation*}
for suitable $1$-periodic continuous functions $\Phi_{21}$ and $\Phi_{20}$.
In principle, we can now use the functional equation
\eqref{eq:functional-equation-V} to obtain the Dirichlet series~$\mathcal{X}$.
Due to the fact that one component of $v$ is the constant
sequence~$1$, for which everything is known, it is more efficient to use an ad-hoc
calculation for $\mathcal{X}$ by splitting the sum according to the parity of the index
and using the recurrence relation~\eqref{eq:recursion-binary-sum-of-digits} for $x(n)$. We obtain
\begin{align*}
\mathcal{X}(s)&=\sum_{n\ge 1}\frac{x(2n)}{(2n)^s} + \sum_{n\ge
0}\frac{x(2n+1)}{(2n+1)^s}\\
&=2^{-s}\sum_{n\ge 1}\frac{x(n)}{n^s} + \sum_{n\ge 0}\frac{x(n)}{(2n+1)^s} +
\sum_{n\ge 0}\frac{1}{(2n+1)^s}\\
&=2^{-s}\mathcal{X}(s) + \frac{x(0)}{1^s} + \sum_{n\ge 1}\frac{x(n)}{(2n)^s} +
\sum_{n\ge 1} x(n)\Bigl(\frac{1}{(2n+1)^s} - \frac{1}{(2n)^s}\Bigr) \\
&\hspace{4.985em}+
2^{-s}\sum_{n\ge 0}\frac1{\bigl(n+\frac12\bigr)^s}\\
&= 2^{1-s}\mathcal{X}(s) + 2^{-s}\f[\big]{\zeta}{s, \tfrac12} + \sum_{n\ge 1} x(n)\Bigl(\frac{1}{(2n+1)^s} - \frac{1}{(2n)^s}\Bigr),
\end{align*}
where the Hurwitz zeta function $\f{\zeta}{s, \alpha}\coloneqq\sum_{n+\alpha>0}(n+\alpha)^{-s}$ has been used. We get
\begin{equation}\label{eq:sum-of-digits-functional-equation}
\bigl(1-2^{1-s}\bigr)\mathcal{X}(s)=2^{-s} \f[\big]{\zeta}{s, \tfrac12} + \sum_{n\ge 1} x(n)\Bigl(\frac{1}{(2n+1)^s} - \frac{1}{(2n)^s}\Bigr).
\end{equation}
As the sum of digits is bounded by the length of the expansion, we have
$x(n)=\Oh{\log n}$. By combining this estimate with
\begin{equation*}
(2n+1)^{-s}-(2n)^{-s}
= \int_{2n}^{2n+1} \Bigl(\frac{\mathrm{d}}{\mathrm{d} t}t^{-s}\Bigr)\,\mathrm{d} t
= \int_{2n}^{2n+1}(-s)t^{-s-1}\,\mathrm{d} t
= \Oh[\big]{\abs{s}n^{-\Re s-1}},
\end{equation*}
we see that the sum in \eqref{eq:sum-of-digits-functional-equation}
converges absolutely for $\Re s>0$ and is therefore analytic for $\Re s>0$.
Therefore, the right-hand side of
\eqref{eq:sum-of-digits-functional-equation} is a meromorphic function for $\Re
s>0$ whose only pole is a simple pole at $s=1$, originating from
$\f[\big]{\zeta}{s, \tfrac12}$.
Thus, $\mathcal{X}(s)$ is a meromorphic function for $\Re s>0$ with a double
pole at $s=1$ and simple poles at $1+\frac{2\ell \pi i}{\log 2}$ for
$\ell\in\mathbb{Z}\setminus\set{0}$.
This gives us
\begin{equation}\label{eq:fluctuation-binary-sum-of-digit}
\begin{aligned}
\Phi_{21}(u) = \varphi_{210}
&= \Res[\Big]{\frac{\mathcal{X}(s)(s-1)}{s}}{s=1} \\
&= \Res[\Big]{\frac{2^{-s}(s-1)}{1-2^{1-s}}
\f[\big]{\zeta}{s, \tfrac12}}{s=1}
= \frac1{2(\log 2)}
\end{aligned}
\end{equation}
by \eqref{eq:Fourier-coefficient:simple-as-residue} and
\eqref{eq:sum-of-digits-functional-equation}.
We conclude that
\begin{equation*}
X(N)=\frac12 N \log_2 N + N \f{\Phi_{20}}{\fractional{\log_2 N}}.
\end{equation*}
We will explain in Part~\ref{part:numerical} how to compute
rigorous numerical values for the Fourier coefficients, in our case
those of the fluctuation~$\Phi_{20}$ which can be deduced from
\eqref{eq:sum-of-digits-functional-equation}.
In this particular case of the binary sum-of-digits, simpler and even explicit
expressions for the Fourier coefficients have been stated and derived by other
authors: They
can be obtained in our set-up by rewriting the residues of $\mathcal{X}(s)$ in terms of shifted
residues of $\sum_{n\ge 1}\bigl(x(n)-x(n-1)\bigr)n^{-s}$ and by computing the latter
explicitly; see \cite[Proof of
Corollary~2.5]{Heuberger-Kropf-Prodinger:2015:output}. This yields the
well-known result by Delange~\cite{Delange:1975:chiffres}.
It will also turn out that \eqref{eq:fluctuation-binary-sum-of-digit} being a constant function is an
immediate consequence of the fact that $
\begin{pmatrix}
0& 1
\end{pmatrix}
$ is a left eigenvector of both $A_0$ and $A_1$ associated with the eigenvalue
$1$;
see Theorem~\ref{theorem:contribution-of-eigenspace}.
\end{example}
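As a quick numerical sanity check of the example above (a Python/mpmath sketch, not the reliable arithmetic of Part~\ref{part:numerical}), one can approximate the residue in \eqref{eq:fluctuation-binary-sum-of-digit} by evaluating the integrand close to the pole at $s=1$, and verify that $X(N)/N-\frac12\log_2 N$ remains bounded, as predicted by the asymptotic formula.

```python
from mpmath import mp, mpf, zeta, log

mp.dps = 30

# Integrand of the residue in the example:
# g(s) = 2^{-s} (s-1) / (1 - 2^{1-s}) * zeta(s, 1/2);
# it still has a simple pole at s = 1, so the residue is
# approximated by (s-1) * g(s) for s close to 1.
def g(s):
    return 2**(-s) * (s - 1) / (1 - 2**(1 - s)) * zeta(s, mpf(1) / 2)

eps = mpf(10)**-12
phi_210 = eps * g(1 + eps)          # numerical stand-in for the residue

# X(N)/N - (1/2) log_2 N should stay bounded: it is the fluctuation
# Phi_20 evaluated at the fractional part of log_2 N.
X, deviations = 0, []
for N in range(1, 4097):
    X += bin(N - 1).count("1")       # adds x(N-1), so X equals X(N)
    if N >= 2:
        deviations.append(abs(X / N - 0.5 * float(log(N, 2))))
```

Here `phi_210` agrees with $\frac1{2\log 2}\approx 0.72135$ to many digits; the deviations are the values of the bounded fluctuation $\Phi_{20}$.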
\subsection{Asymptotics of Regular Sequences}
\label{section:asy-regular-sequences:non-summatory}
This article is written with a focus on the sequence of partial sums of a
regular sequence. In this section, however, we explain how to use all
material for the regular sequence itself.
Let $x(N)$ be a $q$-regular sequence. We may rewrite it as
a telescoping sum
\begin{equation}\label{eq:sum-of-differences}
x(N) = x(0) + \sum_{n<N} \bigl( x(n+1) - x(n) \bigr).
\end{equation}
By~\cite[Theorems~2.5
and~2.6]{Allouche-Shallit:1992:regular-sequences}, the sequence of
differences $x(n+1) - x(n)$ is again $q$-regular. Conversely,
it is also well-known that the summatory function
of a $q$-regular sequence is itself $q$-regular. (This is an immediate
consequence of \cite[Theorem~3.1]{Allouche-Shallit:1992:regular-sequences}.)
Therefore, we
might also start to analyse a regular sequence by considering it to be the
summatory function of its sequence of differences as
in~\eqref{eq:sum-of-differences}. In this way, we can apply all of
the machinery developed in this article.
We end this short section with some remarks on why focusing on
the sequence of partial sums can be rewarding. When
modelling a quantity by a regular sequence, its asymptotic behaviour
is often not smooth, but the asymptotic behaviour of its summatory
function is. Moreover, we will see throughout this work that from a
technical perspective, considering partial sums is appropriate. Therefore,
we adopt this point of view of summatory functions of $q$-regular sequences
throughout this paper.
\section{Overview of the Full Results and Proofs}
\subsection{Overview of the Results}
\label{section:results:overview}
We have already seen the main results collected in a user-friendly
simplified version as Theorem~\ref{theorem:simple} which was written
down in a self-contained way in
Section~\ref{section:introduction:main-result}.
In Theorem~\ref{theorem:contribution-of-eigenspace} the assumptions
are refined. In particular, this theorem uses the joint spectral
radius~$R$ of the matrices in a linear representation of the sequence
(instead of a suitable bound for this quantity in
Theorem~\ref{theorem:simple}). Theorem~\ref{theorem:contribution-of-eigenspace}
states the contribution of each eigenvalue of the sum~$C$ of matrices
of the linear representation---split into the three cases of smaller,
equal and larger in absolute value than~$R$, respectively. This is formulated in
terms of generalised eigenvectors. As a consequence of this precise
breakdown of contributions, Theorem~\ref{theorem:main}, which collects
the different cases into one result, provides a condition on when the
error term vanishes.
Theorem~\ref{theorem:Dirichlet-series} brings up the full formulation
of the functional equation of the Dirichlet series associated to our
regular sequence. This is accompanied by a meromorphic continuation as
well as bounds on the growth of the Dirichlet series along vertical
lines (i.e., points with fixed real part). The analytic properties
provided by Theorem~\ref{theorem:Dirichlet-series} will be used to
verify the assumptions of Theorem~\ref{theorem:use-Mellin--Perron}.
Theorem~\ref{theorem:use-Mellin--Perron} is in fact stated and proved
very generally: It is not limited to Dirichlet
series coming from matrix products and regular sequences, but
it works for general Dirichlet series
provided that periodicity and continuity properties of the result
are known \emph{a priori}. This theorem
handles the Mellin--Perron summation and the theoretical foundations
for the computation of the Fourier coefficients of the appearing
fluctuations.
We want to point out that Theorem~\ref{theorem:use-Mellin--Perron}
can be viewed as a ``successful'' version of the
Mellin--Perron summation formula of order zero. In fact, the theorem
states sufficient conditions to provide the analytic justification
for the zeroth order formula.
Note that there is another result shown in this article, namely a
pseudo-Tauberian theorem for summing up periodic functions. This is
formulated as Proposition~\ref{proposition:pseudo-Tauber}, and all the
details around this topic are collected in
Section~\ref{sec:pseudo-tauber}. This pseudo-Tauberian argument is an
essential step in proving Theorem~\ref{theorem:use-Mellin--Perron}.
\subsection{Heuristic Approach: Mellin--Perron Summation}\label{sec:heuristic}
The purpose of this section is to explain why the formula
\eqref{eq:Fourier-coefficient:simple} for the Fourier coefficients is
expected. The approach here is heuristic and non-rigorous because we do not
have the required growth estimates. See also \cite{Drmota-Grabner:2010}.
By the Mellin--Perron summation formula of order $0$ (see, for example,
\cite[Theorem~2.1]{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin}),
we have
\begin{equation*}
\sum_{1\le n<N}x(n) + \frac{x(N)}{2} = \frac1{2\pi i}\int_{\max\set{\log_q R + 2,1}
-i\infty}^{\max\set{\log_q R + 2,1} +i\infty} \mathcal{X}(s)\frac{N^s\,\mathrm{d} s}{s}.
\end{equation*}
By Remark~\ref{remark:regular-sequence-as-a-matrix-product} and the definition
of $R$, we have
$x(N)=\Oh{R^{\log_q N}}=\Oh{N^{\log_q R}}$. Adding the summand $x(0)$ to match our definition of
$X(N)$ amounts to adding $\Oh{1}$.
Shifting the line of integration to the left---we have \emph{no analytic justification}
that this is allowed---and using the location of the poles of $\mathcal{X}$ claimed in
Theorem~\ref{theorem:simple} yield
\begin{multline*}
X(N) = \sum_{\substack{\lambda\in\sigma(C)\\\abs{\lambda}>R}}\sum_{\ell\in\mathbb{Z}}
\Res[\Big]{\frac{\mathcal{X}(s)N^s}{s}}%
{s=\log_q \lambda + \frac{2\ell\pi i}{\log q}} \\
+ \frac1{2\pi i}\int_{\log_q R+\varepsilon
-i\infty}^{\log_q R+\varepsilon +i\infty} \mathcal{X}(s)\frac{N^s\,\mathrm{d} s}{s} + \Oh{N^{\log_q R} + 1}
\end{multline*}
for some $\varepsilon>0$.
Expanding $N^s$ as
\begin{equation*}
N^s = \sum_{k\ge 0} \frac{(\log N)^k}{k!} N^{\log_q \lambda + \frac{2\ell\pi
i}{\log q}} \Bigl(s-\log_q \lambda
-\frac{2\ell\pi i}{\log q}\Bigr)^k
\end{equation*}
and assuming that the remainder integral converges absolutely yield
\begin{multline*}
X(N) = \sum_{\substack{\lambda\in\sigma(C)\\\abs{\lambda}>R}} N^{\log_q
\lambda}\sum_{\ell\in\mathbb{Z}}\sum_{0\le k<m_{\lambda\ell}}
\frac{(\log N)^k}{k!} \varphi_{\lambda k\ell}\exp\bigl(2\ell\pi i \log_q
N\bigr)\\
+ \Oh{N^{\log_q R+\varepsilon}+1}
\end{multline*}
where $m_{\lambda \ell}$ denotes the order of the pole of $\mathcal{X}(s)/s$ at
$\log_q\lambda + \frac{2\ell\pi i}{\log q}$ and $\varphi_{\lambda k \ell}$ is as
in \eqref{eq:Fourier-coefficient:simple}. (For $\lambda=1$ and $k=0$, the
contribution of $x(0)/s$ in \eqref{eq:Fourier-coefficient:simple} is absorbed
by the error term $\Oh{1}$ here.)
Summarising, this heuristic approach explains most of the formul\ae{} in
Theorem~\ref{theorem:simple}. Some details (exact error term and order of the
poles) are not explained by this approach.
A result ``repairing'' the zeroth order Mellin--Perron formula is known as
Landau's theorem; see \cite[\S~9]{Berthe-Lhote-Vallee:2016:probab}. It is not
applicable to our situation due to multiple poles along vertical lines which
then yield the periodic fluctuations. Instead, we present
Theorem~\ref{theorem:use-Mellin--Perron} which provides the required
justification (not by estimating the relevant quantities, but by reducing the
problem to higher order Mellin--Perron summation). The essential assumption is
that the summatory function can be decomposed into fluctuations multiplied by
some growth factors such as in \eqref{eq:formula-X-n}.
\subsection{High Level Overview of the Proof}
\label{section:high-level-overview-proof}
As we want to use Mellin--Perron summation in some form, we derive
properties of the Dirichlet series associated to the regular sequence. In
particular, we derive a functional equation which allows us to compute the
Dirichlet series and its residues with arbitrary precision (Theorem~\ref{theorem:Dirichlet-series}).
We cannot directly use Mellin--Perron summation of order zero
for computing the Fourier coefficients of the fluctuations of interest.
As demonstrated in Section~\ref{sec:heuristic}, however, our theorems
coincide with the results which Mellin--Perron summation of order zero
would give if the required growth estimates could be
provided. Unfortunately, we are unable to prove these required growth
estimates. Therefore, we have to circumvent the problem by applying a
generalisation of the pseudo-Tauberian argument by
Flajolet, Grabner, Kirschenhofer, Prodinger and Tichy~\cite{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin}.
In order to use this argument, we have to know that the asymptotic formula has
the shape~\eqref{eq:formula-X-n}. Note that a successful application
(not \emph{directly} possible!)
of Mellin--Perron summation of order zero would give this directly.
Therefore, we first prove~\eqref{eq:formula-X-n}
and the existence of the
fluctuations (Theorems~\ref{theorem:contribution-of-eigenspace} and~\ref{theorem:main}).
To do so, we decompose the problem into contributions of
the eigenspaces of the matrix $C=A_0+\cdots+A_{q-1}$. The regular sequence is
then expressed as a matrix product. Next, we construct the
fluctuations by elementary means: We replace finite sums occurring in the
summatory functions by infinite sums involving digits using the factorisation
as a matrix product.
Then the pseudo-Tauberian argument states that the summatory function of the
fluctuation is again a fluctuation and there is a
relation between the Fourier coefficients of these fluctuations. The Fourier
coefficients of the summatory function of the fluctuation, however, can be
computed by Mellin--Perron summation of order one, so the Fourier coefficients
of the original fluctuation can be recovered; see Theorem~\ref{theorem:use-Mellin--Perron}.
\subsection{Relation to Previous Work}
\label{introduction:relation-to-previous-work}
The asymptotics of the summatory function of specific examples of regular
sequences have been studied in \cite{Grabner-Heuberger:2006:Number-Optimal},
\cite{Grabner-Heuberger-Prodinger:2005:counting-optimal-joint} and
\cite{Dumas-Lipmaa-Wallen:2007:asymp}. There, various methods have been
used to show that the fluctuations exist; then the original
pseudo-Tauberian argument by Flajolet, Grabner, Kirschenhofer,
Prodinger and Tichy~\cite{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin}
is used to compute the Fourier coefficients of the fluctuations.
The first version of the pseudo-Tauberian argument in Theorem~\ref{theorem:use-Mellin--Perron}
was provided in \cite{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin}:
There, no logarithmic factors were allowed, only values $\gamma$ with $\Re \gamma>0$
were admissible, and the result contained an error term of $o(1)$, whereas we give a
more precise error estimate in order to allow repeated application.
Dumas~\cite{Dumas:2013:joint, Dumas:2014:asymp} proved the first part
of Theorem~\ref{theorem:simple} using dilation equations. We re-prove it here
in a self-contained way
because we need more explicit results than obtained by Dumas
(e.g., we need explicit expressions for the
fluctuations) to explicitly get the precise
structure depending on the eigenspaces
(Theorem~\ref{theorem:contribution-of-eigenspace}).
Notice that the order of factors in Dumas' paper is inconsistent between
his versions of~\eqref{eq:linear-representation} and
Remark~\ref{remark:regular-sequence-as-a-matrix-product}.
A functional equation for the Dirichlet series of an automatic sequence
has been proved by Allouche, Mendès France and
Peyrière~\cite{Allouche-Mendes-Peyriere:2000:autom-diric}.
In Section~\ref{sec:transducer} we study transducers. The
sequences there are defined as the output sum of transducer automata in the sense of
\cite{Heuberger-Kropf-Prodinger:2015:output}. They are a special case of regular
sequences and are a generalisation of many previously studied concepts.
In that case, much more is known (variance, limiting distribution, higher
dimensional input); see \cite{Heuberger-Kropf-Prodinger:2015:output} for
references and results. A more detailed comparison can be found in Section~\ref{sec:transducer}.
Divide and conquer recurrences (see
\cite{Drmota-Szpankowski:2013:divide-and-conquer} and
\cite{Hwang-Janson-Tsai:2017:divide-conquer-half})
can also be seen as special cases of regular sequences.
The present article gives a unified approach which covers all cases of
regular sequences. As long as the conditions on the joint spectral radius are
met, the main asymptotic terms are not absorbed by the error terms. Otherwise,
the regular sequence is so irregular that the summatory function is not smooth
enough to allow a result of this shape.
\section{Overview of the Examples}
\label{sec:overview-examples}
We take a closer look at three particular examples.
In this section, we provide an overview of these examples;
all details can be found in Part~\ref{part:examples}.
At first glance it seems that
these examples are straightforward applications of the results. However,
we have to reformulate the relevant questions in terms of a
$q$-regular sequence and will then provide shortcuts for the
computation of the Fourier series. We put special effort into the
details, which yields additional insights such as dependencies on certain
residue classes; see Section~\ref{sec:residue-classes}. Moreover, the
study of these examples also encourages us to investigate symmetries
in the eigenvalues; see
Section~\ref{sec:symmetric-eigenvalues-overview} for an overview and
Section~\ref{sec:symmetric} for general considerations.
We start with transducer automata. Transducers have been chosen in
order to compare the results here with the previously available
results~\cite{Heuberger-Kropf-Prodinger:2015:output}.
In some sense, the results complement each other: While the
results in~\cite{Heuberger-Kropf-Prodinger:2015:output} also contain information on the variance and the limiting
distribution, our approach here yields more terms of the asymptotic
expansion of the mean, at least in the general case. Moreover, transducers
provide a whole class of examples rather than a single instance.
We then continue with esthetic
numbers. These numbers are an example of an automatic sequence and can
therefore be treated by a transducer. However,
it turns out that the generic results
(the results here and in~\cite{Heuberger-Kropf-Prodinger:2015:output})
degenerate: They are too weak to give a meaningful main term. Therefore, a
different approach is needed for esthetic numbers.
No precise asymptotic results were known previously.
The example on Pascal's Rhombus is a choice of a regular sequence
where all components of the vector sequence have some combinatorial
meaning. Again, no precise asymptotic results were known previously.
Section~\ref{sec:overview:further-examples} contains further
examples. Note that there are the two additional
Sections~\ref{sec:residue-classes}
and~\ref{sec:symmetric-eigenvalues-overview} pointing out phenomena
appearing in the analysis of our examples.
\subsection{Transducers}
\label{sec:overview-transducers}
The sum~$\mathcal{T}(n)$ of the output labels of a complete deterministic finite transducer~$\mathcal{T}$
when reading the $q$-ary expansion of an integer~$n$ has been investigated
in~\cite{Heuberger-Kropf-Prodinger:2015:output}. As this can be seen as a
$q$-regular sequence, we reconsider the problem in the light of our
general results in this article;
see Section~\ref{sec:transducer}. For the summatory function, the main
terms corresponding to the eigenvalue $q$ can be extracted by both results; if
there are further eigenvalues larger than the joint spectral radius, our
Corollary~\ref{corollary:transducer-main} allows us to describe more asymptotic terms which are absorbed by
the error term in~\cite{Heuberger-Kropf-Prodinger:2015:output}. Note, however,
that our approach here does not give any readily available information on the
variance (this could somehow be repaired for specific examples because regular
sequences are known to form a ring) nor on the limiting distribution.
\subsection{Esthetic Numbers}
\label{sec:overview-esthetic}
In this article, we also contribute a
precise asymptotic analysis of $q$-esthetic numbers; see~De~Koninck
and Doyon~\cite{Koninck-Doyon:2009:esthetic-numbers}. These are
numbers whose $q$-ary digit expansion satisfies the condition that
neighbouring digits differ by exactly one. The sequence of such numbers
turns out to be $q$-automatic, thus $q$-regular, and can also be
seen as an output sum of a transducer; see the first author's joint
work with Kropf and
Prodinger~\cite{Heuberger-Kropf-Prodinger:2015:output} or
Section~\ref{sec:transducer}. However, the
asymptotics obtained by using the main result of
\cite{Heuberger-Kropf-Prodinger:2015:output} is degenerate
in the sense that the provided main term and second order term both
equal zero; only an error term remains. On the other hand, using a more direct approach via our
main theorem brings up the actual main term and the fluctuation in
this main term. We also explicitly compute the Fourier coefficients.
The full theorem is formulated in
Section~\ref{sec:esthetic-numbers}.
Prior to this precise analysis,
the authors of~\cite{Koninck-Doyon:2009:esthetic-numbers} only performed an analysis
of esthetic numbers by digit-length (and not by the size of the numbers themselves).
The approach used in the analysis of $q$-esthetic numbers can easily
be adapted to numbers defined by other conditions on the word of
digits of their $q$-ary expansion.
\subsection{Dependence on Residue Classes}
\label{sec:residue-classes}
The analysis of $q$-esthetic numbers also brings another aspect to
light, namely a quite interesting dependence of the
behaviour on residue classes of~$q$ with respect to different moduli:
\begin{itemize}
\item The dimensions in the matrix approach of
\cite{Koninck-Doyon:2009:esthetic-numbers} need to be increased for
certain residue classes of~$q$ modulo~$4$ in order to get a
formulation as a $q$-automatic and $q$-regular sequence,
respectively.
\item The main result in~\cite{Koninck-Doyon:2009:esthetic-numbers}
already depends on the parity of $q$ (i.e., on $q$
modulo~$2$). This reflects our Corollary~\ref{corollary:esthetic:asy}
by having $2$-periodic
fluctuations (in contrast to $1$-periodic fluctuations in the main
Theorem~\ref{theorem:simple}).
\item Surprisingly, the error term in the resulting formula of
Corollary~\ref{corollary:esthetic:asy} depends on the residue class of $q$ modulo~$3$. This can be seen in the spectrum of the matrix~$C=\sum_{0 \le r < q} A_r$:
There is an appearance of an eigenvalue~$1$ in certain cases.
\item As an interesting side-note: In the
spectrum of~$C$, the algebraic multiplicity of the
eigenvalue~$0$ again depends only on $q$ modulo~$2$.
\end{itemize}
\subsection{Symmetrically Arranged Eigenvalues}
\label{sec:symmetric-eigenvalues-overview}
Fluctuations with longer periods (like in
the second of the four bullet points above) come from a particular
configuration in the spectrum of~$C$. Whenever eigenvalues are arranged as
vertices of a regular polygon, then their influence can be collected;
this results in periodic fluctuations with period larger than~$1$.
We elaborate on the influence of such eigenvalues in
Section~\ref{sec:symmetric}.
This is then used in the particular cases of esthetic numbers and in
conjunction with the output sum of transducers. More specifically, in the latter example
this yields the second order term in
Corollary~\ref{corollary:transducer-main}; see
also~\cite{Heuberger-Kropf-Prodinger:2015:output}.
\subsection{Pascal's Rhombus}
\label{sec:overview-pascal}
Beside esthetic numbers, we perform an asymptotic analysis of the
number of ones in the rows of Pascal's rhombus. The rhombus is in some
sense a variant of Pascal's triangle---its recurrence is similar to that
of the triangle. It turns out that the
number of ones in the rows of Pascal's rhombus can be modelled by a
$2$-regular sequence.
The authors
of~\cite{Goldwasser-Klostermeyer-Mays-Trapp:1999:Pascal-rhombus}
investigate this number of ones, but only for blocks whose number of rows
is a power of~$2$. In the precise analysis in
Section~\ref{sec:pascal} we not only obtain the asymptotic formula, we
also explicitly compute the Fourier coefficients.
\subsection{Further Examples}
\label{sec:overview:further-examples}
There are many further examples of specific $q$-regular sequences which await
precise asymptotic analysis, for example the Stern--Brocot sequence~\oeis{A002487}, the
denominators of Farey tree fractions~\oeis{A007306}, and the
number of unbordered factors of length $n$ of the Thue--Morse
sequence (see \cite{Goc-Mousavi-Shallit:2013}).
The Stern--Brocot sequence is a typical example: It is defined by $x(0)=0$,
$x(1)=1$ and
\begin{equation}\label{eq:stern-brocot:rec}
\begin{aligned}
x(2n)&=x(n),\\
x(2n+1)&=x(n)+x(n+1),
\end{aligned}
\end{equation}
i.e., the right-hand sides are linear combinations of shifted versions of the
original sequence.
Note that recurrence relations like~\eqref{eq:stern-brocot:rec} are
not proper linear representations of regular sequences in the sense of~\eqref{eq:linear-representation}. The good news,
however, is that in general, such a sequence is $q$-regular. The
following remark formulates this more explicitly.
\begin{remark}
Let $x(n)$ be a sequence such that
there are fixed integers $\ell\le 0\le u$ and constants $c_{rk}$ for $0\le r<q$
and $\ell\le k\le u$ such that
\begin{equation*}
x(qn+r) = \sum_{\ell \le k\le u} c_{rk}x(n+k)
\end{equation*}
holds for $0\le r<q$ and $n\ge 0$. Then the sequence $x(n)$ is $q$-regular with
$q$-linear representation for $v(n)=\bigl(x(n+\ell'), \ldots, x(n), \ldots,
x(n+u')\bigr)^\top$ where
\begin{equation*}
\ell'=\floor[\Big]{\frac{q\ell}{q-1}},\qquad
u'=\ceil[\Big]{\frac{qu}{q-1}}.
\end{equation*}
Note that if $\ell'<0$, then a simple permutation of the components
of~$v(n)$ brings~$x(n)$ to its first component (so that the above is indeed
a proper linear representation as defined in
Section~\ref{section:introduction:regular-sequences}).
\end{remark}
By using this remark on~\eqref{eq:stern-brocot:rec}, we set
$v(n)=\bigl(x(n), x(n+1), x(n+2)\bigr)^\top$ and obtain the $2$-linear
representation
\begin{equation*}
v(2n)=
\begin{pmatrix}
1&0&0\\
1&1&0\\
0&1&0
\end{pmatrix}v(n),\qquad
v(2n+1)=
\begin{pmatrix}
1&1&0\\
0&1&0\\
0&1&1
\end{pmatrix}v(n)
\end{equation*}
for $n\ge 0$ for the Stern--Brocot sequence.
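As a quick sanity check (a Python/NumPy sketch, not part of the formal development), one can verify that these two matrices reproduce the recurrence~\eqref{eq:stern-brocot:rec}, i.e., that $v(2n)=A_0v(n)$ and $v(2n+1)=A_1v(n)$ hold for the vector $v(n)=\bigl(x(n), x(n+1), x(n+2)\bigr)^\top$ with $v(0)=(0,1,1)^\top$.

```python
import numpy as np

# 2-linear representation of the Stern--Brocot sequence with
# v(n) = (x(n), x(n+1), x(n+2))^T and v(0) = (0, 1, 1)^T.
A0 = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0]])
A1 = np.array([[1, 1, 0], [0, 1, 0], [0, 1, 1]])

# Generate x(n) directly from x(2n) = x(n), x(2n+1) = x(n) + x(n+1).
M = 500
x = [0] * (2 * M + 4)
x[1] = 1
for n in range(1, M + 2):
    x[2 * n] = x[n]
    x[2 * n + 1] = x[n] + x[n + 1]

# Check v(2n) = A0 v(n) and v(2n+1) = A1 v(n) for all n < M.
ok = all(
    (A0 @ x[n:n + 3] == x[2 * n:2 * n + 3]).all()
    and (A1 @ x[n:n + 3] == x[2 * n + 1:2 * n + 4]).all()
    for n in range(M)
)
assert ok
```

The list `x` starts $0, 1, 1, 2, 1, 3, 2, 3, \ldots$, matching \oeis{A002487}.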
\section{Full Results}\label{section:results}
In this section, we fully formulate our results. As pointed out in
Remark~\ref{remark:regular-sequence-as-a-matrix-product}, regular
sequences can essentially be seen as matrix products. Therefore, we
will study these matrix products instead of regular sequences.
Theorem~\ref{theorem:simple} can then be proved as a simple corollary of the
results for matrix products; see Section~\ref{section:proof-theorem-simple}.
\subsection{Problem Statement}
Let $q\ge 2$, $d\ge 1$ be fixed integers and $A_0$, \ldots,
$A_{q-1}\in\mathbb{C}^{d\times d}$.
We investigate the sequence~$f$ of $d\times d$ matrices such that
\begin{equation}\label{eq:regular-matrix-sequence}
f(qn+r)=A_r f(n) \quad\text{ for $0\le r<q$, $0\le n$ with $qn+r\neq 0$}
\end{equation}
and $f(0)=I$.
Let $n$ be an integer with $q$-ary expansion
$r_{\ell-1}\ldots r_0$. Then it is easily seen that \eqref{eq:regular-matrix-sequence} implies that
\begin{equation}\label{eq:f-as-product}
f(n)=A_{r_0}\ldots A_{r_{\ell-1}}.
\end{equation}
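For concreteness, the digit product~\eqref{eq:f-as-product} can be compared with the recursive definition~\eqref{eq:regular-matrix-sequence}; the sketch below (NumPy assumed) uses the Stern--Brocot matrices as a stand-in for general $A_0$, \ldots, $A_{q-1}$. Note the order of the factors: the least significant digit contributes the leftmost matrix.

```python
import numpy as np

q = 2
A = [np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0]]),
     np.array([[1, 1, 0], [0, 1, 0], [0, 1, 1]])]

def f_rec(n):
    # f(0) = I and f(qn + r) = A_r f(n) for qn + r != 0
    if n == 0:
        return np.eye(3, dtype=int)
    return A[n % q] @ f_rec(n // q)

def f_digits(n):
    # f(n) = A_{r_0} ... A_{r_{l-1}} for the q-ary expansion r_{l-1} ... r_0 of n
    prod = np.eye(3, dtype=int)
    while n > 0:
        prod = prod @ A[n % q]
        n //= q
    return prod

for n in range(200):
    assert np.array_equal(f_rec(n), f_digits(n))
```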
We are interested in the asymptotic behaviour of $F(N)\coloneqq\sum_{0\le n<N} f(n)$.
\subsection{Definitions and Notations}\label{sec:definitions-notations}
In this section, we give all definitions and notations which are required in
order to state the results. For the sake of conciseness, we do not give any
motivations for our definitions here; those are deferred to Section~\ref{sec:motivation-definitions}.
The following notations are essential:
\begin{itemize}
\item
Let $\norm{\,\cdot\,}$ denote a fixed norm on $\mathbb{C}^d$ and its induced matrix
norm on $\mathbb{C}^{d\times d}$.
\item We set $B_r \coloneqq \sum_{0\le r'<r} A_{r'}$ for $0\le
r<q$ and $C\coloneqq\sum_{0\le r<q} A_r$.
\item
The joint spectral radius of $A_0$, \ldots, $A_{q-1}$ is denoted by
\begin{equation*}
\rho\coloneqq\inf_{\ell}\sup
\setm[\big]{ \norm{A_{r_1}\ldots A_{r_\ell}}^{1/\ell}}{r_1, \ldots, r_\ell\in\set{0, \ldots, q-1}}.
\end{equation*}
If the set of matrices $A_0$, \dots, $A_{q-1}$ has the \emph{finiteness property},
i.e., there is an $\ell>0$ such that
\begin{equation*}
\rho = \sup
\setm[\big]{\norm{A_{r_1}\ldots A_{r_\ell}}^{1/\ell}}{r_1, \ldots, r_\ell\in\set{0, \ldots, q-1}},
\end{equation*}
then we set $R=\rho$. Otherwise, we choose $R>\rho$ in such a way that there is
no eigenvalue $\lambda$ of $C$ with $\rho<\abs{\lambda}\le R$.
\item
The spectrum of $C$, i.e., the set of eigenvalues of $C$, is denoted by
$\sigma(C)$.
\item For a positive integer $n_0$, let $\mathcal{F}_{n_0}$ be the
matrix-valued Dirichlet series defined by
\begin{equation*}
\mathcal{F}_{n_0}(s) \coloneqq \sum_{n\ge n_0} n^{-s}f(n)
\end{equation*}
for a complex variable $s$.
\item Set $\chi_k\coloneqq \frac{2\pi i k}{\log q}$ for $k\in\mathbb{Z}$.
\end{itemize}
In the formulation of Theorem~\ref{theorem:contribution-of-eigenspace} and
Theorem~\ref{theorem:main}, the following constants are needed
additionally:
\begin{itemize}
\item
Choose a regular matrix $T$ such that $T C T^{-1}\eqqcolon J$ is in Jordan form.
\item Let $D$ be
the diagonal matrix whose $j$th diagonal element is $1$ if the $j$th diagonal
element of $J$ is not equal to $1$; otherwise the $j$th diagonal element of $D$
is $0$.
\item
Set $C'\coloneqq T^{-1}DJT$.
\item
Set $K\coloneqq T^{-1}DT(I-C')^{-1}(I-A_0)$.
\item
For a $\lambda\in\mathbb{C}$, let $m(\lambda)$ be the size of the largest
Jordan block associated with $\lambda$. In particular, $m(\lambda)=0$ if $\lambda\not\in\sigma(C)$.
\item For $m\ge 0$, set
\begin{equation*}
\vartheta_m \coloneqq \frac1{m!}T^{-1}(I-D)T(C-I)^{m-1}(I-A_0);
\end{equation*}
here, $\vartheta_0$ remains undefined if $1\in\sigma(C)$.\footnote{
If $1\in\sigma(C)$, then the matrix $C-I$ is singular. In that case, $\vartheta_0$ will never be used.}
\item Define $\vartheta \coloneqq \vartheta_{m(1)}$.
\end{itemize}
All implicit $O$-constants depend on $q$, $d$, the matrices $A_0$, \ldots, $A_{q-1}$ (and therefore on $\rho$),
as well as on $R$.
\subsection{Decomposition into Periodic Fluctuations}
Instead of considering $F(N)$, it is certainly enough to consider $wF(N)$ for
all generalised left eigenvectors $w$ of $C$, e.g., the rows of $T$. The
result for $F(N)$ then follows by taking appropriate linear combinations.
\begin{theorem}\label{theorem:contribution-of-eigenspace}
Let $w$ be a generalised left eigenvector of rank $m$ of $C$ corresponding to the eigenvalue $\lambda$.
\begin{enumerate}
\item\label{item:small-eigenvalue} If $\abs{\lambda}<R$, then
\begin{equation*}
wF(N)=wK + (\log_q N)^m w\vartheta_m + \Oh{N^{\log_q R}}.
\end{equation*}
\item\label{item:R-eigenvalue} If $\abs{\lambda}=R$, then
\begin{equation*}
wF(N)=wK + (\log_q N)^m w\vartheta_m + \Oh{N^{\log_q R} (\log N)^{m}}.
\end{equation*}
\item\label{item:large-eigenvalue} If $\abs{\lambda}>R$, then there are $1$-periodic continuous functions
$\Phi_k\colon \mathbb{R}\to\mathbb{C}^d$, $0\le k<m$, such that
\begin{equation*}
wF(N)=wK + (\log_q N)^mw\vartheta_m + N^{\log_q\lambda} \sum_{0\le k<m}(\log_q N)^k\Phi_k(\fractional{\log_q N})
\end{equation*}
for $N\ge q^{m-1}$. The function $\Phi_k$ is Hölder continuous with any
exponent smaller than $\log_q(\abs{\lambda}/R)$.
If, additionally, the left eigenvector $w(C-\lambda I)^{m-1}$ of $C$ happens to be a left eigenvector to each matrix
$A_0$, \ldots, $A_{q-1}$ associated with the eigenvalue~$1$, then
\begin{equation*}
\Phi_{m-1}(u)=\frac1{q^{m-1}(m-1)!}w(C-q I)^{m-1}
\end{equation*}
is constant.
\end{enumerate}
Here, $wK=0$ for $\lambda=1$ and $w\vartheta_m=0$ for $\lambda\neq 1$.
\end{theorem}
This theorem is proved in Section~\ref{section:proof-contribution-of-eigenspace}.
Note that, in general, the three summands in the theorem have different growths:
a constant, a logarithmic term and a term whose growth essentially depends
on the joint spectral radius and on the eigenvalues of modulus larger than the
joint spectral radius, respectively. The vector $w$ is not directly visible in front of
the third summand; instead, the vectors of its Jordan chain are part of the functions~$\Phi_k$.
Expressing the rows of the identity matrix as linear combinations of generalised left
eigenvectors and summing up the contributions of
Theorem~\ref{theorem:contribution-of-eigenspace} essentially yields the following corollary.
\begin{theorem}\label{theorem:main}
With the notations above, we have
\begin{multline*}
F(N) = \sum_{\substack{\lambda\in\sigma(C)\\\abs{\lambda}>\rho}} N^{\log_q
\lambda}\sum_{0\le k<m(\lambda)}(\log_q N)^k\Phi_{\lambda
k}(\fractional{\log_q N}) + (\log_q N)^{m(1)} \vartheta + K\\
+ \Oh[\big]{N^{\log_q R}(\log N)^{\max\setm{m(\lambda)}{\abs{\lambda}=R}}}
\end{multline*}
for suitable $1$-periodic continuous functions $\Phi_{\lambda k}$.
If $1$ is not an eigenvalue of $C$, then $\vartheta=0$. If
there are no eigenvalues $\lambda\in\sigma(C)$ with $\abs{\lambda}\le \rho$,
then the $O$-term can be omitted.
For $\abs{\lambda}>R$, the function $\Phi_{\lambda k}$ is Hölder continuous with any exponent
smaller than $\log_q(\abs{\lambda}/R)$.
\end{theorem}
This theorem is proved in Section~\ref{section:proof:corollary-main}.
\begin{remark}
We want to point out that the condition $\abs{\lambda}>R$ is
inherent in the problem: Single summands $f(n)$ might be as large as
$n^{\log_q R}$ and must therefore be absorbed by the error term in
any smooth asymptotic formula for the summatory function.
\end{remark}
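For the Stern--Brocot example, $C$ is block lower triangular with $\sigma(C)=\set{3,1}$, so the dominant term of $F(N)$ is $N^{\log_2 3}$ times a $1$-periodic fluctuation. Sampling at $N=2^k$ fixes the argument of the fluctuation, and a short computation with the recursion (a numerical sketch, not part of the formal development) confirms the resulting closed form $F(2^k)=(3^k-1)/2$, which follows from $F(2N)=3F(N)+x(N)$ and $x(2^k)=1$ by induction.

```python
# Summatory function of the Stern--Brocot sequence at powers of two:
# splitting into even and odd indices gives F(2N) = 3 F(N) + x(N), and
# x(2^k) = 1, which yields F(2^k) = (3^k - 1)/2 by induction.
def stern(n):
    if n < 2:
        return n
    if n % 2 == 0:
        return stern(n // 2)
    return stern(n // 2) + stern(n // 2 + 1)

def F(N):
    return sum(stern(n) for n in range(N))

for k in range(1, 12):
    assert F(2 ** k) == (3 ** k - 1) // 2
```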
\subsection{Dirichlet Series}
This section gives the required result on the Dirichlet series~$\mathcal{F}_{n_0}$. For
theoretical purposes, it is enough to study $\mathcal{F}\coloneqq\mathcal{F}_1$; for numerical purposes,
however, convergence improves for larger values of $n_0$. This is because for
large $n_0$ and large $\Re s$, the value of $\mathcal{F}_{n_0}(s)$ is roughly $n_0^{-s} f(n_0)$; see also Part~\ref{part:numerical}.
\begin{theorem}\label{theorem:Dirichlet-series}Let $n_0$ be a positive
integer. Then the Dirichlet series $\mathcal{F}_{n_0}(s)$
converges absolutely and uniformly on compact subsets of the half plane $\Re s > \log_q \rho + 1$ and is therefore analytic there.
We have
\begin{equation}\label{eq:analytic-continuation}
\bigl(I-q^{-s}C\bigr)\mathcal{F}_{n_0}(s) = \mathcal{G}_{n_0}(s)
\end{equation}
for $\Re s>\log_q \rho +1$ with
\begin{equation}\label{eq:Dirichlet-recursion}
\mathcal{G}_{n_0}(s) = \sum_{n_0 \le n < qn_0} n^{-s}f(n)
+ q^{-s} \sum_{0 \le r < q} A_r
\sum_{k\ge 1} \binom{-s}{k}\Bigl(\frac{r}{q}\Bigr)^k \mathcal{F}_{n_0}(s+k).
\end{equation}
The series in \eqref{eq:Dirichlet-recursion} converge
absolutely and uniformly on compact sets for $\Re s>\log_q \rho$. Thus \eqref{eq:analytic-continuation} gives a meromorphic
continuation of $\mathcal{F}_{n_0}(s)$ to the half plane $\Re s>\log_q \rho$ with
possible poles at $s=\log_q \lambda + \chi_\ell$ for each
$\lambda\in \sigma(C)$ with $\abs{\lambda}>\rho$ and $\ell\in\mathbb{Z}$
whose pole order is at most $m(\lambda)$.
Let $\delta>0$. For real $z$, we set
\begin{equation*}
\mu_\delta(z)= \max\set{ 1 - (z-\log_q \rho -\delta), 0},
\end{equation*}
i.e., the linear function on the interval
$[\log_q\rho+\delta, \log_q\rho+\delta+1]$
with~$\mu_\delta(\log_q\rho+\delta)=1$ and~$\mu_\delta(\log_q\rho+\delta+1)=0$.
Then
\begin{equation}\label{eq:order-F}
\mathcal{F}_{n_0}(s) = \Oh[\big]{\abs{\Im s}^{\mu_\delta(\Re s)}}
\end{equation}
holds uniformly for $\log_q \rho+\delta\le \Re s$ and $\abs{q^s-\lambda} \ge \delta$
for all eigenvalues $\lambda\in\sigma(C)$. Here, the implicit $O$-constant
also depends on $\delta$.
\end{theorem}
Note that by the introductory remark on $\mathcal{F}_{n_0}(s)$, the infinite
sum over $k$ in~\eqref{eq:Dirichlet-recursion} can be well approximated
by a finite sum. Detailed error bounds are discussed in
Part~\ref{part:numerical}. Therefore, the theorem allows us to transfer information
on $\mathcal{F}_{n_0}(s)$ from large~$\Re s$, where convergence is unproblematic,
to values of $s$ where the Dirichlet series $\mathcal{F}_{n_0}$ itself converges badly.
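As an illustration of how \eqref{eq:analytic-continuation} and \eqref{eq:Dirichlet-recursion} can be used numerically, the following sketch checks the functional equation for the Stern--Brocot matrices with $q=2$ and $n_0=2$ at a point of absolute convergence. The truncation bounds $N$ and $K$ are chosen ad hoc here; they are not the rigorous error bounds of Part~\ref{part:numerical}.

```python
import numpy as np

q, n0, N, K = 2, 2, 4000, 60
A = [np.array([[1., 0, 0], [1, 1, 0], [0, 1, 0]]),
     np.array([[1., 1, 0], [0, 1, 0], [0, 1, 1]])]
C = A[0] + A[1]

def f(n):
    # f(n) = A_{r_0} ... A_{r_{l-1}} for the binary expansion r_{l-1} ... r_0 of n
    prod = np.eye(3)
    while n > 0:
        prod = prod @ A[n % q]
        n //= q
    return prod

fs = np.stack([f(n) for n in range(n0, N)])
ns = np.arange(n0, N, dtype=float)

def F(s):
    # truncated Dirichlet series F_{n0}(s)
    return np.tensordot(ns ** (-s), fs, axes=1)

def G(s):
    total = sum(n ** (-s) * f(n) for n in range(n0, q * n0))
    b = 1.0  # b = binom(-s, k), updated iteratively
    for k in range(1, K + 1):
        b *= (-s - (k - 1)) / k
        Fk = F(s + k)
        for r in range(1, q):  # the summand for r = 0 vanishes
            total = total + q ** (-s) * b * (r / q) ** k * (A[r] @ Fk)
    return total

s = 6.0
lhs = (np.eye(3) - q ** (-s) * C) @ F(s)
assert np.allclose(lhs, G(s), atol=1e-8)
```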
\begin{remark}\label{remark:Dirichlet-series:bound}
By the identity theorem for analytic functions, the meromorphic
continuation of $\mathcal{F}_{n_0}$ is unique on the domain given in the
theorem. Therefore, the bound~\eqref{eq:order-F} does not depend on
the particular expression for the meromorphic continuation given
in~\eqref{eq:analytic-continuation}
and~\eqref{eq:Dirichlet-recursion}.
\end{remark}
Theorem~\ref{theorem:Dirichlet-series} is proved in
Section~\ref{section:proof:Dirichlet-series}. In the proof we
translate the linear representation
of $f$ into a system of equations involving $\mathcal{F}_{n_0}(s)$
and shifted versions like $\sum_{n\ge n_0}f(n)(n+\beta)^{-s}$.
We will have to bound the
difference between the shifted and unshifted versions of the Dirichlet series.
These bounds are provided by the following lemma.
It will turn out
to be useful to state it as a result in this section rather than to
bury it in the proof sections.
\begin{lemma}\label{lemma:shifted-Dirichlet}
Let $\mathcal{D}(s) = \sum_{n \ge n_0} d(n)/n^s$ be a Dirichlet series with
coefficients $d(n)=\Oh{n^{\log_q R'}}$ for all $R'>\rho$.
Let $\beta\in\mathbb{C}$ with $\abs{\beta}<n_0$ and $\delta>0$. Set
\begin{equation*}
\f{\Sigma}{s, \beta, \mathcal{D}} \coloneqq
\sum_{n\ge n_0} \frac{d(n)}{(n+\beta)^s} - \mathcal{D}(s).
\end{equation*}
Then
\begin{equation*}
\f{\Sigma}{s, \beta, \mathcal{D}} = \sum_{k\ge 1}
\binom{-s}{k} \beta^k \mathcal{D}(s+k),
\end{equation*}
where the series converges
absolutely and uniformly on compact sets for $\Re s>\log_q \rho$,
thus $\f{\Sigma}{s, \beta, \mathcal{D}}$ is analytic there.
Moreover, with $\mu_\delta$ as in Theorem~\ref{theorem:Dirichlet-series},
\begin{equation*}
\f{\Sigma}{s, \beta, \mathcal{D}}=\Oh[\big]{\abs{\Im s}^{\mu_\delta(\Re s)}}
\end{equation*}
as $\abs{\Im s}\to\infty$
holds uniformly for $\log_q \rho + \delta\le \Re s\le \log_q \rho +\delta+1$.
\end{lemma}
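In the scalar case $d(n)=1$ and $n_0=1$, the lemma reduces to a classical identity relating $\sum_{n\ge 1}(n+\beta)^{-s}$ to shifted values of the Riemann zeta function. The following quick numerical sketch of this special case uses ad hoc truncations of all series involved.

```python
# Scalar instance of the lemma: d(n) = 1, n0 = 1, beta = 1/2, so that
# sum_{n>=1} (n + beta)^{-s} - zeta(s) = sum_{k>=1} binom(-s, k) beta^k zeta(s + k).
# All series are truncated; the tolerance is ad hoc.
N, K, s, beta = 20000, 60, 3.0, 0.5

def D(s):
    return sum(n ** (-s) for n in range(1, N))

lhs = sum((n + beta) ** (-s) - n ** (-s) for n in range(1, N))

rhs, b = 0.0, 1.0
for k in range(1, K + 1):
    b *= (-s - (k - 1)) / k   # b = binom(-s, k)
    rhs += b * beta ** k * D(s + k)

assert abs(lhs - rhs) < 1e-9
```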
\subsection{Fourier Coefficients}
As discussed in Section~\ref{sec:heuristic}, we would like to apply the zeroth
order Mellin--Perron summation formula but need analytic justification. In the
following theorem we prove that whenever it is known that the result is a
periodic fluctuation, the use of zeroth order Mellin--Perron summation can be
justified. In contrast to the remaining parts of the paper, this theorem does \emph{not}
assume that $f(n)$ is a matrix product.
\begin{theorem}\label{theorem:use-Mellin--Perron}
Let $f$ be a sequence on $\mathbb{Z}_{>0}$, let $\gamma_0\in\mathbb{R}\setminus \mathbb{Z}_{\le 0}$ and $\gamma\in\mathbb{C}$ with $\Re \gamma>
\gamma_0$, and let $\delta$ and $q$
be real numbers with $q>1$, $0<\delta \le \pi/(\log q)$
and $\delta < \Re \gamma-\gamma_0$,
and let $m$ be a positive integer. Moreover, let $\Phi_j$ be
Hölder continuous (with exponent $\alpha$ with
$\Re\gamma-\gamma_0<\alpha\le 1$) $1$-periodic functions for $0\le j<m$ such that
\begin{equation}\label{eq:F-N-periodic}
F(N)\coloneqq \sum_{1\le n< N} f(n) = \sum_{\substack{j+k=m-1\\0\le j<m}}N^\gamma \frac{(\log N)^k}{k!}
\Phi_j(\fractional{\log_q N}) + \Oh{N^{\gamma_0}}
\end{equation}
for integers $N\to\infty$.
For the Dirichlet series $\mathcal{F}(s)\coloneqq \sum_{n\ge 1}n^{-s}f(n)$
assume that
\begin{itemize}
\item there is some real number $\sigma_{\mathrm{abs}}\ge \Re \gamma$ such that $\mathcal{F}(s)$ converges absolutely for $\Re s>\sigma_{\mathrm{abs}}$;
\item the function $\mathcal{F}(s)/s$
can be continued to a meromorphic function for $\Re s > \gamma_0-\delta$
such that poles can only occur at $\gamma+\chi_\ell$ for $\ell\in\mathbb{Z}$ and such that these poles have order at
most $m$ and a possible pole at $0$; the local expansions are written as
\begin{equation}\label{eq:Fourier:F-s-principal-part}
\frac{\mathcal{F}(s)}{s}=\frac1{(s-\gamma-\chi_\ell)^m}\sum_{j\ge 0}\varphi_{j\ell}(s-\gamma-\chi_\ell)^j
\end{equation}
with suitable constants $\varphi_{j\ell}$ for $j$, $\ell\in\mathbb{Z}$;
\item there is some real number~$\eta>0$ such that
for $\gamma_0 \le \Re s \le \sigma_{\mathrm{abs}}$ and
$\abs{s-\gamma-\chi_\ell}\ge \delta$ for all $\ell\in\mathbb{Z}$, we have
\begin{equation}\label{eq:Dirichlet-order}
\mathcal{F}(s) = \Oh[\big]{\abs{\Im s}^{\eta}}
\end{equation}
for $\abs{\Im s}\to\infty$.
\end{itemize}
All implicit $O$-constants may depend on $f$, $q$, $m$, $\gamma$, $\gamma_0$,
$\alpha$, $\delta$, $\sigma_{\mathrm{abs}}$ and $\eta$.
Then
\begin{equation}\label{eq:Fourier:fluctuation-as-Fourier-series}
\Phi_j(u) = \sum_{\ell\in \mathbb{Z}}\varphi_{j\ell}\exp(2\ell\pi i u)
\end{equation}
for $u\in\mathbb{R}$ and $0\le j<m$.
If $\gamma_0<0$ and $\gamma\notin \frac{2\pi i}{\log q}\mathbb{Z}$, then $\mathcal{F}(0)=0$.
\end{theorem}
This theorem is proved in Section~\ref{section:proof:use-Mellin--Perron}.
The theorem is more general than necessary for $q$-regular sequences because
Theorem~\ref{theorem:Dirichlet-series} shows that we could use some $0<\eta<1$.
However, it might be applicable in other cases, so we prefer to state it in
this more general form.
\subsection{Fluctuations of Symmetrically Arranged Eigenvalues}
\label{sec:symmetric}
In our main results, the occurring fluctuations are always
$1$-periodic functions. However, if eigenvalues of the sum of matrices
of the linear representation are
arranged in a symmetric way, then we can combine summands and get
fluctuations with longer periods. This is in particular true if all
vertices of a regular polygon (with center~$0$) are eigenvalues.
\begin{proposition}\label{proposition:symmetric-eigenvalues}
Let $\lambda\in\mathbb{C}$, and let $k\ge0$ and $p>0$ be integers. Denote by $U_p$ the
set of $p$th roots of unity. Suppose for each $\zeta\in U_p$
we have a continuous $1$-periodic function
\begin{equation*}
\Phi_{(\zeta\lambda)}(u)
= \sum_{\ell\in\mathbb{Z}}\varphi_{(\zeta\lambda)\ell}\exp(2\ell\pi i u)
\end{equation*}
whose Fourier coefficients are
\begin{equation*}
\varphi_{(\zeta \lambda)\ell}
=\Res[\bigg]{\mathcal{D}(s)
\Bigl(s - \log_q (\zeta\lambda) - \frac{2\ell\pi i}{\log q}\Bigr)^k}%
{s=\log_q (\zeta\lambda) + \frac{2\ell\pi i}{\log q}}
\end{equation*}
for a suitable function $\mathcal{D}$.
Then
\begin{equation}\label{eq:proposition:symmetric-eigenvalues:sum-fluct}
\sum_{\zeta\in U_p} N^{\log_q (\zeta\lambda)} (\log_q N)^k
\Phi_{(\zeta\lambda)}(\fractional{\log_q N})
= N^{\log_q \lambda} (\log_q N)^k \Phi(p\fractional{\log_{q^p} N})
\end{equation}
with a continuous $p$-periodic function
\begin{equation*}
\Phi(u)
= \sum_{\ell\in\mathbb{Z}}\varphi_{\ell}\fexp[\Big]{\frac{2\ell\pi i}{p} u}
\end{equation*}
whose Fourier coefficients are
\begin{equation*}
\varphi_{\ell}
=\Res[\bigg]{\mathcal{D}(s)
\Bigl(s - \log_q \lambda - \frac{2\ell\pi i}{p\log q}\Bigr)^k}%
{s=\log_q \lambda + \frac{2\ell\pi i}{p\log q}}.
\end{equation*}
\end{proposition}
Note that we again write $\Phi(p\fractional{\log_{q^p} N})$ to
visually emphasise the $p$-periodicity. Moreover, the factor
$(\log_q N)^k$ in~\eqref{eq:proposition:symmetric-eigenvalues:sum-fluct}
could be cancelled against the fluctuation; however, it is kept to
highlight the similarity to the main results (e.g.\@
Theorem~\ref{theorem:simple}).
The proof of Proposition~\ref{proposition:symmetric-eigenvalues}
can be found in
Section~\ref{sec:proof-symmetric-eigenvalues}.
The above proposition will be used for proving
Corollary~\ref{corollary:transducer-main} which deals with transducer
automata; there, the second order term exhibits a fluctuation with
possible period larger than~$1$. We will also use the proposition for
the analysis of esthetic numbers in
Section~\ref{sec:esthetic-numbers}.
\begin{remark}
We can view Proposition~\ref{proposition:symmetric-eigenvalues} from
a different perspective:
A $q$-regular sequence is $q^p$-regular as well
(by~\cite[Theorem~2.9]{Allouche-Shallit:1992:regular-sequences}).
Then, all eigenvalues $\zeta\lambda$ of the original sequence
become eigenvalues $\lambda^p$ whose algebraic multiplicity is the sum
of the individual multiplicities but the sizes of the corresponding
Jordan blocks do not change.
Moreover, the joint spectral radius is also taken to the $p$th power.
We can apply, for example,
Theorem~\ref{theorem:simple} in this $q^p$-setting and again obtain
$1$-periodic fluctuations.
Note that for actually computing the Fourier coefficients,
the approach presented in the proposition seems to be more suitable.
\end{remark}
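The change of base is easy to make explicit: writing $4n+r=2(2n+a)+b$ with $r=2a+b$ shows that the Stern--Brocot representation, viewed as a $4$-linear representation, has matrices $A_bA_a$. The following sketch (NumPy assumed, purely illustrative) verifies this.

```python
import numpy as np

A = [np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0]]),
     np.array([[1, 1, 0], [0, 1, 0], [0, 1, 1]])]

def stern(n):
    if n < 2:
        return n
    if n % 2 == 0:
        return stern(n // 2)
    return stern(n // 2) + stern(n // 2 + 1)

def v(n):
    return np.array([stern(n), stern(n + 1), stern(n + 2)])

# 4n + r = 2(2n + a) + b with r = 2a + b, hence v(4n + r) = A_b A_a v(n):
for n in range(50):
    for r in range(4):
        a, b = r // 2, r % 2
        assert np.array_equal(v(4 * n + r), A[b] @ A[a] @ v(n))
```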
\section{Remarks on the Definitions}\label{sec:motivation-definitions}
In this section, we give some motivation for and comments on the definitions listed in
Section~\ref{sec:definitions-notations}.
\subsection{\texorpdfstring{$q$}{q}-Regular Sequences vs.\ Matrix Products}\label{section:q-regular-matrix-product}
We note one significant difference between the study of $q$-regular sequences
as in \eqref{eq:linear-representation} and the study of matrix
products~\eqref{eq:f-as-product}.
The recurrence \eqref{eq:linear-representation} is supposed to hold for
$qn+r=0$, too, i.e., $v(0)=A_0v(0)$. This implies that $v(0)$ is either the zero
vector (which is not interesting at all) or that $v(0)$ is a right eigenvector of
$A_0$ associated with the eigenvalue~$1$.
We do not want to impose this condition in the study of the matrix
product~\eqref{eq:f-as-product}. Therefore, we exclude the case $qn+r=0$ in
\eqref{eq:regular-matrix-sequence}.
This comes at the price of the terms $K$,
$\vartheta_m$, $\vartheta$ in Theorem~\ref{theorem:contribution-of-eigenspace} which
vanish if multiplied by a right eigenvector to the eigenvalue $1$ of $A_0$ from the
right. This is the reason why Theorem~\ref{theorem:simple} has simpler
expressions than those encountered in Theorem~\ref{theorem:contribution-of-eigenspace}.
\subsection{Joint Spectral Radius}
Let
\begin{equation*}
\rho_\ell\coloneqq \sup
\setm[\big]{\norm{A_{r_1}\ldots A_{r_\ell}}^{1/\ell}}{r_1, \ldots, r_\ell\in\set{0, \ldots, q-1}}.
\end{equation*}
Then the
submultiplicativity of the norm and Fekete's subadditivity lemma~\cite{Fekete:1923:ueber-verteil} imply that
$\lim_{\ell\to\infty}\rho_\ell=\inf_{\ell>0}\rho_{\ell}=\rho$;
cf.~\cite{Rota-Strang:1960}. In view of equivalence of norms, this shows that
the joint spectral radius does not depend on the chosen norm. For our purposes,
the important point is that the choice of $R$ ensures that there is an
$\ell_0>0$ such that $\rho_{\ell_0}\le R$, i.e., $\norm{A_{r_1}\ldots
A_{r_{\ell_0}}}\le R^{\ell_0}$ for all $r_j\in\set{0,\ldots, q-1}$. For any $\ell>0$, we use division with remainder to write
$\ell=s\ell_0+r$ with $0\le r<\ell_0$, and by submultiplicativity of the norm, we get $\norm{A_{r_1}\ldots
A_{r_\ell}}\le R^{s\ell_0} \rho_{r}^r$ and thus
\begin{equation}\label{eq:bound-prod}
\norm{A_{r_1}\ldots
A_{r_\ell}}=\Oh{R^{\ell}}
\end{equation}
for all $r_j\in\set{0,\ldots,q-1}$ and $\ell\to\infty$. We will only use
\eqref{eq:bound-prod} and no further properties of the joint spectral radius.
Note that~\eqref{eq:f-as-product} and \eqref{eq:bound-prod} imply that
\[f(n)=\Oh{R^{\log_q n}}=\Oh{n^{\log_q R}}\]
for $n\to\infty$.
As mentioned in Section~\ref{sec:definitions-notations}, we say that the set of matrices $A_0$, \dots, $A_{q-1}$
has the \emph{finiteness property} if there is an $\ell>0$ with
$\rho_\ell=\rho$; see~\cite{Jungers:2009:joint-spectral-radius,
Lagarias-Wang:1995:finiteness-conjecture-jsr}.
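The quantities $\rho_\ell$ are easy to compute for small $\ell$. The sketch below (NumPy assumed, illustrative only) does this for the Stern--Brocot matrices with the row-sum norm. The asserted bounds are safe: $\rho_\ell\ge\rho\ge\varphi$ for the golden ratio $\varphi$, since $A_0A_1$ has spectral radius $\varphi^2$, and $\rho_\ell\le 2$ by submultiplicativity, since $\norm{A_0}_\infty=\norm{A_1}_\infty=2$.

```python
import numpy as np
from itertools import product

A0 = np.array([[1., 0, 0], [1, 1, 0], [0, 1, 0]])
A1 = np.array([[1., 1, 0], [0, 1, 0], [0, 1, 1]])

def rho_l(l):
    # rho_l = max over all products of length l of ||.||_inf^(1/l)
    best = 0.0
    for word in product([A0, A1], repeat=l):
        P = np.eye(3)
        for M in word:
            P = P @ M
        best = max(best, np.linalg.norm(P, np.inf) ** (1.0 / l))
    return best

phi = (1 + 5 ** 0.5) / 2    # spectral radius of A0 @ A1 is phi^2
r8 = rho_l(8)
assert phi - 1e-9 <= r8 <= 2 + 1e-9
```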
\subsection{Constants for Theorem~\ref{theorem:contribution-of-eigenspace}}\label{section:constants-for-theorem}
In contrast to usual conventions, we write matrix representations of
endomorphisms as multiplications $x\mapsto xM$ where $x$ is a (row) vector in
$\mathbb{C}^d$ and $M$ is a matrix. Note that we usually denote this endomorphism
by the corresponding calligraphic letter, for example, the endomorphism represented
by the matrix~$M$ is denoted by $\mathcal{M}$.
Consider the endomorphism $\mathcal{C}$ which maps a row vector $x\in\mathbb{C}^d$ to $xC$
and its generalised eigenspaces $W_\lambda$ for $\lambda\in\mathbb{C}$. (These are the
generalised left eigenspaces of $C$. If $\lambda\notin\sigma(C)$, then
$W_\lambda=\set{0}$.) Then it is well-known that $\mathcal{C}\rvert_{W_\lambda}$ is an
endomorphism of $W_\lambda$ and that
$\mathbb{C}^d=\bigoplus_{\lambda\in\sigma(C)}W_\lambda$. Let $\mathcal{T}$ be the basis formed by
the rows of $T$. Then the matrix representation of $\mathcal{C}$
with respect to~$\mathcal{T}$ is $J$.
Let now $\mathcal{D}$ be the endomorphism of $\mathbb{C}^d$ which acts as identity on
$W_\lambda$ for $\lambda\neq 1$ and as zero on $W_1$. Its matrix representation
with respect to the basis $\mathcal{T}$ is $D$; its matrix representation with
respect to the standard basis is $T^{-1}DT$.
Finally, let $\mathcal{C}'$ be the endomorphism $\mathcal{C}'=\mathcal{C} \circ \mathcal{D}$. As
$\mathcal{C}$ and $\mathcal{D}$ decompose along
$\mathbb{C}^d=\bigoplus_{\lambda\in\sigma(C)}W_\lambda$ and $\mathcal{D}$ commutes with every
other endomorphism on $W_\lambda$ for all $\lambda$, we clearly also have
$\mathcal{C}'=\mathcal{D}\circ\mathcal{C}$. Thus the matrix representation of $\mathcal{C}'$ with
respect to $\mathcal{T}$ is $DJ=JD$; its matrix representation with respect to the
standard basis is $T^{-1}DJT=C'$.
Now consider a generalised left eigenvector $w$ of $C$. If it is associated with the eigenvalue $1$, then $w
T^{-1}DT=\mathcal{D}(w)=0$, $wK=0$ and $wC'=\mathcal{C}'(w)=0$. Otherwise, that is, if $w$ is associated with an eigenvalue not equal to $1$, we have $wT^{-1}DT=\mathcal{D}(w)=w$,
$wC'=\mathcal{C}'(w)=\mathcal{C}(w)=wC$,
$w{C'}^j={\mathcal{C}'}^j(w)=\mathcal{C}^j(w)=wC^j$ for $j\ge 0$ and $w\vartheta_m=0$. Also note that
$1$ is not an eigenvalue of $C'$, thus $I-C'$ is indeed regular.
If $1$ is not an eigenvalue of $C$, then everything is simpler:
$D$ is the identity matrix, $C'=C$, $K=(I-C)^{-1}(I-A_0)$ and $\vartheta=0$.
\part{Examples}\label{part:examples}
In this part we investigate three examples in-depth. For an overview,
we refer to Section~\ref{sec:overview-examples} where some of the
appearing phenomena are discussed as well. Further
examples are also mentioned there.
\section{Sequences Defined by Transducer Automata}
\label{sec:transducer}
We discuss the asymptotic analysis related to transducers; see also
Section~\ref{sec:overview-transducers} for an overview.
\input{transducer}
\section{Esthetic Numbers}
\label{sec:esthetic-numbers}
We discuss the asymptotic analysis of esthetic numbers; see also
Section~\ref{sec:overview-esthetic} for an overview.
\input{esthetic-numbers}
\section{Pascal's Rhombus}
\label{sec:pascal}
We discuss the asymptotic analysis of odd entries in Pascal's rhombus; see also
Section~\ref{sec:overview-pascal} for an overview.
\input{pascal-rhombus}
\part{Proofs}\label{part:proofs}
Before reading this part on the collected proofs, it is recommended to
recall the definitions and notations of
Section~\ref{sec:definitions-notations}. Some additional notations which are only used in the proofs are
introduced in the following section.
\section{Additional Notations}\label{additional-notation}
We use Iverson's convention $\iverson{\mathit{expr}}=1$ if
$\mathit{expr}$ is true and $0$ otherwise, which was popularised
by Graham, Knuth and Patashnik~\cite{Graham-Knuth-Patashnik:1994}.
We use the notation
$z^{\underline{\ell}}\coloneqq z(z-1)\dotsm (z-\ell+1)$ for falling factorials.
We use $\binom{n}{k_1, \dotsc, k_r}$ for multinomial coefficients. We sometimes
write a binomial coefficient $\binom{n}{a}$ as $\bibinom{n}{a}{b}$ with $a+b=n$ when we want
to emphasise the symmetry and analogy to a multinomial coefficient.
\section{Decomposition into Periodic Fluctuations: Proof of Theorem~\ref{theorem:contribution-of-eigenspace}}
\label{section:proof-contribution-of-eigenspace}
We first give an overview of the proof.
\begin{proof}[Overview of the Proof of Theorem~\ref{theorem:contribution-of-eigenspace}]
The first step will be
to express the summatory function $F$ in terms of the matrices $C$, $B_r$ and
$A_r$. Essentially, this corresponds to the fact that the summatory function of
a $q$-regular function is again $q$-regular. This expression of $F$ will
consist of two terms: the first is a sum over $0\le j<\log_q N$
involving a $j$th power of $C$ and matrices $B_r$ and $A_r$ depending on the
$\ell-j$ most significant digits of $N$. The second term is again a sum, but
does not depend on the digits of $N$; it only encodes the fact that $f(0)=A_0
f(0)$ may not hold. The fact that we are interested in $wF(N)$ for the
generalised left eigenvector~$w$ corresponding to the eigenvalue~$\lambda$
allows us to express $wC^j$ in terms of $w\lambda^j$ (plus some other terms if $w$
is not an eigenvector).
The second term can be disposed of by elementary observations using a geometric
series. We reverse the order of summation in the first summand and extend it to
an infinite sum. The infinite sum is written in terms of periodic fluctuations;
the difference between the infinite sum and the finite sum is absorbed by the
error term.
In order not to have to deal with ambiguities due to non-unique
$q$-ary expansions of real numbers, we define the fluctuations on an infinite
product space instead of the unit interval.
\end{proof}
\subsection{Upper Bound for Eigenvalues of~\texorpdfstring{$C$}{C}}
We start with an upper bound for the eigenvalues of $C$ in terms of the joint
spectral radius.
\begin{lemma}\label{lemma:eigenvalue-spectral-radius-bound}
Let $\lambda\in\sigma(C)$. Then $\abs{\lambda}\le q\rho$.
\end{lemma}
\begin{proof}
For $\ell\to\infty$, we have
\begin{equation*}
\abs{\lambda}^\ell
\le \max \setm{\abs{\lambda}}{\lambda \in \sigma(C)}^\ell
= \Oh[\big]{\norm{C^\ell}}
\end{equation*}
and
\begin{equation*}
\norm{C^\ell}
\le \sum_{0\le r_1, \ldots, r_\ell<q}
\norm{A_{r_1}\dotsm A_{r_\ell}}
= \Oh{q^\ell R^\ell}
\end{equation*}
by \eqref{eq:bound-prod}. Taking $\ell$th roots and the limit $\ell\to\infty$
yields $\abs{\lambda}\le qR$. This last inequality does not depend on
our particular (cf.\@ Section~\ref{sec:definitions-notations}) choice
of $R>\rho$, so the inequality is valid for all $R>\rho$, and
we get the result.
\end{proof}
\subsection{Explicit Expression for the Summatory Function}
In this section, we give an explicit formula for $F(N)=\sum_{0\le n<N} f(n)$ in
terms of the matrices $A_r$, $B_r$ and $C$.
\begin{lemma}\label{lemma:explicit-summatory}
Let $N$ be an integer with $q$-ary expansion
$r_{\ell-1}\ldots r_0$. Then
\begin{equation*}
F(N)=\sum_{0\le j<\ell} C^j B_{r_j} A_{r_{j+1}}\dotsm
A_{r_{\ell-1}} + \sum_{0\le j<\ell} C^j (I-A_0).
\end{equation*}
\end{lemma}
\begin{proof}
We claim that
\begin{equation}\label{eq:sum-recursion}
F(qN+r)=C F(N) + B_r f(N) + (I-A_0)\iverson{qN+r > 0}
\end{equation}
holds for non-negative integers $N$ and $r$ with $0\le r<q$.
We now prove \eqref{eq:sum-recursion}: Using
\eqref{eq:regular-matrix-sequence} and $f(0)=I$ yields
\begin{align*}
F(qN+r)
&= f(0)\, \iverson{qN+r > 0}
+ \sum_{\substack{0<qn+r'<qN+r\\0\le n\\ 0\le r'<q}} f(qn+r')\\
&= f(0)\, \iverson{qN+r > 0}
+ \sum_{\substack{0<qn+r'<qN+r\\0\le n\\ 0\le r'<q}} A_{r'}f(n)\\
&= \bigl(f(0)-A_{0}f(0)\bigr) \iverson{qN+r > 0}
+ \sum_{\substack{0\le qn+r'<qN+r\\0\le n\\ 0\le r'<q}} A_{r'}f(n)\\
&= (I-A_0) \iverson{qN+r > 0}
+ \sum_{0\le n<N}\sum_{0\le r'<q} A_{r'}f(n)
+ \sum_{0\le r'<r} A_{r'}f(N)\\
&= (I-A_0) \iverson{qN+r > 0}
+ CF(N)+B_{r}f(N).
\end{align*}
This concludes the proof of \eqref{eq:sum-recursion}.
Iterating \eqref{eq:sum-recursion} and using~\eqref{eq:f-as-product} yields the assertion of the lemma;
cf.~\cite[Lemma~3.6]{Heuberger-Kropf-Prodinger:2015:output}.
\end{proof}
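Both the recursion~\eqref{eq:sum-recursion} and the explicit formula of Lemma~\ref{lemma:explicit-summatory} are easily checked numerically; the following sketch (NumPy assumed, again with the Stern--Brocot matrices as the example) does exactly that.

```python
import numpy as np

q = 2
A = [np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0]]),
     np.array([[1, 1, 0], [0, 1, 0], [0, 1, 1]])]
B = [sum(A[:r], np.zeros((3, 3), dtype=int)) for r in range(q)]  # B_0 = 0, B_1 = A_0
C = A[0] + A[1]
I = np.eye(3, dtype=int)

def f(n):
    prod = I
    while n > 0:
        prod = prod @ A[n % q]
        n //= q
    return prod

def F(N):
    return sum((f(n) for n in range(N)), np.zeros((3, 3), dtype=int))

# the recursion F(qN + r) = C F(N) + B_r f(N) + (I - A_0) [qN + r > 0]
for N in range(40):
    for r in range(q):
        if q * N + r == 0:
            continue
        assert np.array_equal(F(q * N + r),
                              C @ F(N) + B[r] @ f(N) + (I - A[0]))

# the explicit formula of the lemma
def F_explicit(N):
    digits = []
    while N > 0:
        digits.append(N % q)   # digits[j] = r_j, least significant first
        N //= q
    l = len(digits)
    total = np.zeros((3, 3), dtype=int)
    for j in range(l):
        tail = I
        for i in range(j + 1, l):
            tail = tail @ A[digits[i]]
        Cj = np.linalg.matrix_power(C, j)
        total = total + Cj @ (B[digits[j]] @ tail + I - A[0])
    return total

for N in range(1, 200):
    assert np.array_equal(F(N), F_explicit(N))
```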
\subsection{Proof of Theorem~\ref{theorem:contribution-of-eigenspace}}
\begin{proof}[Proof of Theorem~\ref{theorem:contribution-of-eigenspace}]
For readability, this proof is split into several
steps.
\proofparagraph{Setting}
Before starting the actual proof, we introduce the setting
using an infinite product space
which will be used
to define the fluctuations $\Phi_k$. We also introduce the maps
linking the infinite product space to the unit interval.
We will first introduce
functions $\Psi_k$ defined on the infinite product space
\begin{equation*}
\Omega\coloneqq
\setm[\big]{\mathbf{x}=(x_0, x_1, \ldots)}{
x_j\in\set{0, \ldots,q-1} \text{ for $j\ge 0$}, x_0\neq 0}.
\end{equation*}
We equip it with the metric such that two elements~$\mathbf{x}\neq\mathbf{x}'$ with
a common prefix of length~$j$ and $x_j\neq x'_j$ have distance~$q^{-j}$.
We consider the map $\mathsf{lval}\colon \Omega\to [0, 1]$ with
\begin{equation*}
\mathsf{lval}(\mathbf{x}) \coloneqq \log_q\sum_{j\ge 0}x_jq^{-j};
\end{equation*}
see Figure~\ref{fig:commutative-diagram}. Using the fact that the
zeroth component of every element of $\Omega$ is non-zero, we easily check that $\mathsf{lval}$ is
Lipschitz continuous; i.e.,
\begin{equation}\label{eq:lval:Lipschitz}
\abs[\big]{\mathsf{lval}(\mathbf{x})-\mathsf{lval}(\mathbf{x}')} = \Oh{q^{-j}}
\end{equation}
for $\mathbf{x}\neq\mathbf{x}'$ with a common prefix of length~$j$.
\begin{figure}[htbp]
\centering
\begin{tikzcd}
\Omega \arrow{rr}{\Psi}\arrow[shift left=0.5ex]{rd}{\mathsf{lval}}&&\mathbb{C}^d\\
&{[0, 1]}\arrow{ru}{\Phi}\arrow[dotted,shift left=0.5ex]{lu}{\mathsf{reprq}}
\end{tikzcd}
\caption{Maps in the proof of Theorem~\ref{theorem:contribution-of-eigenspace}.}
\label{fig:commutative-diagram}
\end{figure}
For $y\in[0, 1)$, let $\mathsf{reprq}(y)$ be the unique $\mathbf{x}\in\Omega$ with
$\mathsf{lval}(\mathbf{x})=y$ such that $\mathbf{x}$ does not end in an infinite string of digits~$q-1$, i.e.,
$\mathsf{reprq}(y)$ represents a $q$-ary expansion of $q^y$. This means that
$\mathsf{lval}\circ\mathsf{reprq}$ is the identity on $[0, 1)$.
From the definition of the metric on $\Omega$,
recall that a function
$\Psi\colon \Omega\to\mathbb{C}^d$ is continuous if and only if for each
$\varepsilon>0$, there is a $j$ such that
$\norm{\Psi(\mathbf{x}')-\Psi(\mathbf{x})}<\varepsilon$ holds for all $\mathbf{x}$ and
$\mathbf{x}'$ that have a common prefix of length $j$.
Further recall from the universal property of quotients that if such a continuous function $\Psi$ satisfies
$\Psi(\mathbf{x})=\Psi(\mathbf{x}')$ whenever $\mathsf{lval}(\mathbf{x})=\mathsf{lval}(\mathbf{x}')$, then there is a
unique continuous function $\Phi\colon [0, 1]\to\mathbb{C}^d$ such that $\Phi\circ\mathsf{lval}=\Psi$.
This will be used in the ``Descent''-step of the proof.
\proofparagraph{Notation}
We will deal with the two sums in Lemma~\ref{lemma:explicit-summatory}
separately. We will first introduce notations corresponding to this split
and to the eigenvector structure.
Let $N$ have the $q$-ary expansion
$r_{\ell-1}\ldots r_0$ and set
\begin{equation*}
F_1(N) \coloneqq \sum_{0\le j<\ell} C^j B_{r_j} A_{r_{j+1}}\ldots
A_{r_{\ell-1}}, \qquad F_2(N) \coloneqq \sum_{0\le j<\ell} C^j(I-A_0)
\end{equation*}
so that $F(N)=F_1(N)+F_2(N)$ by Lemma~\ref{lemma:explicit-summatory}.
We consider the Jordan chain $w=v_{0}'$, \ldots, $v_{m-1}'$ generated by $w$,
i.e., $v_k'=w(C-\lambda I)^k$ for $0\le k<m$ and $v_{m-1}'$ is a left eigenvector
of $C$. Thus we have $wC^j=\sum_{0\le
k<m}\binom{j}{k}\lambda^{j-k}v_k'$ for all $j\ge 0$.
If $\lambda\neq 0$, choose vectors $v_0$, \ldots, $v_{m-1} \in \mathbb{C}^d$ such that
\begin{equation}\label{eq:C-sum-eigenvectors}
wC^j=\lambda^j\sum_{0\le k<m}j^kv_k
\end{equation}
holds for all $j\ge 0$. These vectors are
suitable linear combinations of the vectors $v_0'$, \ldots,
$v_{m-1}'$. We note that we have
\begin{equation}\label{eq:v_m-1-expression}
v_{m-1}=\frac1{\lambda^{m-1}(m-1)!}v_{m-1}'.
\end{equation}
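This can be seen by comparing coefficients of $j^{m-1}$: in
$wC^j=\sum_{0\le k<m}\binom{j}{k}\lambda^{j-k}v_k'$, only the summand $k=m-1$
contributes to the power $j^{m-1}$, namely
\begin{equation*}
\binom{j}{m-1}\lambda^{j-(m-1)}v_{m-1}'
=\lambda^{j}\frac{j^{m-1}}{\lambda^{m-1}(m-1)!}\,v_{m-1}'+\Oh[\big]{j^{m-2}\abs{\lambda}^{j}},
\end{equation*}
and comparing with the term $\lambda^jj^{m-1}v_{m-1}$
in~\eqref{eq:C-sum-eigenvectors} yields~\eqref{eq:v_m-1-expression}.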
\proofparagraph{Second Summand}
We will now rewrite $wF_2(N)$ by evaluating the geometric sum
and rewriting it in terms of a fluctuation.
We claim that
\begin{multline}\label{eq:constant-term}
wF_2(N) = wK + N^{\log_q \lambda}\sum_{0\le k<m} (\log_q
N)^k\Phi^{(2)}_{k}(\fractional{\log_q N}) \\
+ (\log_q N)^mw\vartheta_m
+ \iverson{\lambda = 0} \Oh{N^{\log_q R}}
\end{multline}
for suitable continuously differentiable functions $\Phi^{(2)}_{ k}$ on $\mathbb{R}$,
$0\le k<m$. If $R=0$, then $\Oh{N^{\log_q R}}$ shall mean that the error
vanishes for almost all $N$.
Consider first the case that $\lambda \neq 1$.
Because of $wC^j=w{C'}^j$ and $wT^{-1}DT=w$ (see Section~\ref{section:constants-for-theorem}) we have
\begin{align*}
wF_2(N)&=\sum_{0\le j<\ell}w{C'}^j \bigl(I-A_0\bigr)\\
&=w\bigl(I-{C'}^\ell\bigr)\bigl(I-C'\bigr)^{-1}\bigl(I-A_0\bigr)
= wK - wC^\ell \bigl(I-C'\bigr)^{-1}\bigl(I-A_0\bigr).
\end{align*}
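The second step uses the finite geometric series for matrices,
\begin{equation*}
\sum_{0\le j<\ell}M^j=\bigl(I-M^\ell\bigr)\bigl(I-M\bigr)^{-1},
\end{equation*}
valid whenever $I-M$ is invertible, here applied with $M=C'$.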
If $\lambda=0$, then $wC^\ell=0$ for almost all $\ell$. We may set
$\Phi^{(2)}_k=0$ for $0\le k<m$ and \eqref{eq:constant-term} is shown.
Otherwise, as
we have $\ell-1=\floor{\log_q N}=\log_q N - \fractional{\log_q N}$ and
by~\eqref{eq:C-sum-eigenvectors}, we can
rewrite $wC^\ell$ as
\begin{equation*}
wC^\ell=\lambda^{\ell}\sum_{0\le k'<m}\ell^{k'}
v_{k'}=\lambda^{1+\log_q N-\fractional{\log_q N}}\sum_{0\le k'<m}(\log_q
N+1-\fractional{\log_q N})^{k'} v_{k'}.
\end{equation*}
Let
\begin{equation*}
G_2(L, \nu)\coloneqq-\lambda^{1-\nu}\sum_{0\le k'<m}(L+1-\nu)^{k'} v_{k'}(I-C')^{-1}(I-A_0)
\end{equation*}
for reals $L$ and $\nu$,
i.e.,
\begin{equation*}
wF_2(N)=wK + \lambda^{\log_q N} G_2(\log_q N, \fractional{\log_q N}).
\end{equation*}
By the binomial theorem, we have
\begin{equation*}
G_2(L, \nu)=-\lambda^{1-\nu}\sum_{0\le k<m}L^k\sum_{\substack{0\le r \\ k+r<m}}\bibinom{k+r}{k}{r}(1-\nu)^r v_{k+r}(I-C')^{-1}(I-A_0).
\end{equation*}
This leads to a representation
$G_2(L, \nu)=\sum_{0\le k<m}L^k\Phi^{(2)}_{ k}(\nu)$ for continuously differentiable functions
\begin{equation*}
\Phi_k^{(2)}(\nu)=-\lambda^{1-\nu}\sum_{\substack{0\le r <m-k}}\bibinom{k+r}{k}{r}(1-\nu)^r v_{k+r}(I-C')^{-1}(I-A_0)
\end{equation*}
for $0\le k<m$. As the functions~$\Phi^{(2)}_{k}$ are continuously differentiable,
they are Lipschitz
continuous on compact subsets of $\mathbb{R}$. We note that in the case $k=m-1$, the
only occurring summand is for $r=0$, which implies that
\begin{equation}\label{eq:fluctuation-2-m-1}
\Phi_{m-1}^{(2)}(\nu) = -\lambda^{1-\nu}v_{m-1}(I-C')^{-1}(I-A_0).
\end{equation}
Rewriting $\lambda^{\log_q N}$ as
$N^{\log_q \lambda}$ and recalling that $w\vartheta_m=0$ yields \eqref{eq:constant-term} for $\lambda\neq 1$.
We now turn to the case $\lambda=1$. We use $wC^j=\sum_{0\le
k<m}\binom{j}{k}v_k'$ for $j\ge 0$ as above.
Thus
\begin{align*}
wF_2(N) &= \sum_{0\le j<\ell}\sum_{0\le k<m}\binom{j}{k}v'_{k}(I-A_0)\\
&= \sum_{0\le k<m}v'_k(I-A_0)\sum_{0\le j<\ell}\binom{j}{k}\\
&= \sum_{0\le k<m}v'_k(I-A_0) \binom{\ell}{k+1},
\end{align*}
where the identity \cite[(5.10)]{Graham-Knuth-Patashnik:1994} (``summation on the upper index'')
has been used in the last step.
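Concretely, summation on the upper index states that
\begin{equation*}
\sum_{0\le j<\ell}\binom{j}{k}=\binom{\ell}{k+1};
\end{equation*}
for instance, $k=1$ and $\ell=4$ give $0+1+2+3=6=\binom{4}{2}$.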
Thus $wF_2(N)$ is a polynomial in $\ell$ of degree $m$. By writing
$\ell=1+\log_qN-\fractional{\log_q N}$, we can again rewrite this as a
polynomial in $\log_q N$ whose coefficients depend on $\fractional{\log_q N}$.
The coefficient of $(\log_q N)^m$ comes from
$v_{m-1}'(I-A_0)\binom{\ell}{m}$, therefore, this coefficient is
\begin{equation*}
\frac1{m!}v_{m-1}'(I-A_0)=\frac1{m!}w(C-I)^{m-1}(I-A_0)=w\vartheta_m.
\end{equation*}
The additional factor $T^{-1}(I-D)T$ in $\vartheta_m$ has been introduced in order
to annihilate generalised eigenvectors to other eigenvalues. By construction
of $K$, we have $wK=0$. Thus we have shown \eqref{eq:constant-term} for
$\lambda=1$, too.
\proofparagraph{Lifting the Second Summand}
For later use---at this point, this may seem to be quite artificial---we
set $\Psi^{(2)}_{k}=\Phi^{(2)}_{k}\circ \mathsf{lval}$. As $\Phi^{(2)}_{k}$ is continuously differentiable, it
is Lipschitz continuous on $[0, 1]$. As $\mathsf{lval}$ is also Lipschitz continuous,
so is $\Psi_k^{(2)}$.
\proofparagraph{First Summand}
We now turn to $wF_1(N)$. To explain our plan, assume that $w$ is in fact an
eigenvector. Then $wF_1(N)=\sum_{0\le j<\ell}\lambda^j
wB_{r_j}A_{r_{j+1}}\ldots A_{r_{\ell-1}}$. For $\abs{\lambda}\le R$, it will
be rather easy to see that the result holds. Otherwise,
we will factor out $\lambda^\ell$ and
write the sum as $wF_1(N)=\lambda^{\ell} \sum_{0\le
j<\ell}\lambda^{-(\ell-j)}wB_{r_j}A_{r_{j+1}}\ldots A_{r_{\ell-1}}$.
We will then reverse the order of summation and extend the sum to an infinite
sum, which will be represented by periodic fluctuations. The difference
between the finite and the infinite sums will be absorbed by the error term.
The periodic fluctuations will be defined on the infinite product space $\Omega$.
We now return to the general case of a generalised eigenvector $w$ and the
actual proof.
If $\lambda=0$, we certainly have $\abs{\lambda}\le R$ and we are in one of
the first two cases of this theorem. Furthermore, we have
$wC^j=0$ for $j\ge m$, thus
\begin{equation*}
wF_1(N)=\Oh[\bigg]{\sum_{0\le j<m} R^{\ell-j}}
= \Oh{R^{\ell}}
= \Oh{N^{\log_q R}}
\end{equation*}
by using~\eqref{eq:bound-prod}.
Together with~\eqref{eq:constant-term}, the result follows.
From now on, we may assume that $\lambda\neq 0$.
By using~\eqref{eq:C-sum-eigenvectors}, we have
\begin{equation}\label{eq:w-F-1-n}
wF_1(N)=\sum_{0\le j<\ell} \lambda^j\biggl(\sum_{0\le k<m}j^kv_k \biggr)B_{r_j} A_{r_{j+1}}\ldots
A_{r_{\ell-1}}.
\end{equation}
We first consider the case that $\abs{\lambda}<R$ (corresponding to
Theorem~\ref{theorem:contribution-of-eigenspace}, \itemref{item:small-eigenvalue}). We get
\begin{align*}
wF_1(N) &= \Oh[\bigg]{\sum_{0\le j<\ell}\abs{\lambda}^j j^{m-1}
R^{\ell-j}} \\
&= \Oh[\bigg]{R^{\ell} \sum_{0\le j<\ell}
j^{m-1}\Bigl(\frac{\abs{\lambda}}{R}\Bigr)^j}
=\Oh{R^{\ell}}
=\Oh{N^{\log_q R}},
\end{align*}
where \eqref{eq:bound-prod} was used.
Together with \eqref{eq:constant-term}, the result follows.
Next, we consider the case where $\abs{\lambda}=R$
(Theorem~\ref{theorem:contribution-of-eigenspace}, \itemref{item:R-eigenvalue}).
In that case, we get
\begin{equation*}
wF_1(N)=\Oh[\bigg]{\sum_{0\le j<\ell}\abs{\lambda}^j j^{m-1}
R^{\ell-j}}
= \Oh[\bigg]{R^\ell\sum_{0\le j<\ell}j^{m-1}}
=\Oh{R^\ell \ell^m}.
\end{equation*}
Again, the result follows.
From now on, we may assume that $\abs{\lambda}>R$. We set $Q\coloneqq
\abs{\lambda}/R$ and note that $1<Q\le q$ by assumption and Lemma~\ref{lemma:eigenvalue-spectral-radius-bound}.
We claim that there are continuous functions $\Psi^{(1)}_{k}$ on $\Omega$ for $0\le k<m$
such that
\begin{equation}\label{eq:first-term}
wF_1(N) = N^{\log_q \lambda}\sum_{0\le k<m} (\log_q N)^k
\f[\big]{\Psi^{(1)}_{k}}{\mathsf{reprq}(\fractional{\log_q N})}
\end{equation}
and such that
\begin{equation}\label{eq:quasi-Hoelder}
\norm[\big]{\Psi^{(1)}_{ k}(\mathbf{x})-\Psi^{(1)}_{ k}(\mathbf{x}')}=\Oh{j^{m-1} Q^{-j}}
\end{equation}
when the first $j$ entries of $\mathbf{x}$ and $\mathbf{x}'\in\Omega$ coincide.
Write $N=q^{\ell-1+\fractional{\log_q N}}$ and let $\mathbf{x}=\mathsf{reprq}(\fractional{\log_q N})$,
i.e., $\mathbf{x}$ is the $q$-ary expansion of $q^{\fractional{\log_q N}}=N/q^{\ell-1}\in[1, q)$
ending on infinitely many zeros. This means that $x_j=r_{\ell-1-j}$
for $0\le j<\ell$ and $x_j=0$ for $j\ge \ell$.
Reversing the order of summation in \eqref{eq:w-F-1-n} yields
\begin{align*}
wF_1(N)=\lambda^{\ell-1}\sum_{0\le j<\ell}\lambda^{-j}\biggl(\sum_{0\le k<m}(\ell-1-j)^kv_k \biggr)B_{x_j} A_{x_{j-1}}\ldots
A_{x_0}.
\end{align*}
For $j\ge \ell$, we have $x_j=0$ and therefore $B_{x_j}=0$. Thus we may
extend the sum to run over all $j\ge 0$, i.e.,
\begin{equation*}
wF_1(N)=\lambda^{\ell-1}\sum_{j\ge 0}\lambda^{-j}\biggl(\sum_{0\le k<m}(\ell-1-j)^kv_k \biggr)B_{x_j} A_{x_{j-1}}\ldots
A_{x_0}.
\end{equation*}
We insert $\ell-1=\log_q N - \fractional{\log_q N}$ and obtain
\begin{equation*}
wF_1(N)=\lambda^{\log_q
N}
\f[\big]{G_1}{\log_q N, \mathsf{reprq}(\fractional{\log_q N})}
\end{equation*}
where
\begin{align*}
G_1(L, \mathbf{x})&=\lambda^{-\mathsf{lval}(\mathbf{x})}\sum_{j\ge 0}\lambda^{-j}\biggl(\sum_{0\le
k<m}(L-\mathsf{lval}(\mathbf{x}) - j)^kv_k \biggr)B_{x_j} A_{x_{j-1}}\ldots
A_{x_0}\\
&=\lambda^{-\mathsf{lval}(\mathbf{x})}\sum_{j\ge 0}\lambda^{-j}\biggl(\sum_{\substack{0\le a,\ 0\le r,\
0\le s\\a+r+s<m}}L^a
(-j)^r \trinom{a+r+s}{a}{r}{s}\\&\hspace*{11.225em}\times\bigl(-\mathsf{lval}(\mathbf{x})\bigr)^{s}v_{a+r+s} \biggr)B_{x_j} A_{x_{j-1}}\ldots
A_{x_0}
\end{align*}
for $L\in\mathbb{R}$ and $\mathbf{x}\in\Omega$. Note that in contrast to $G_2$, the second argument of $G_1$ is an element of
$\Omega$ instead of $\mathbb{R}$.
Collecting $G_1(L, \mathbf{x})$ by powers of $L$, we get
\begin{equation*}
G_1(L, \mathbf{x}) = \sum_{0\le k<m} L^k \Psi^{(1)}_{ k}(\mathbf{x})
\end{equation*}
where
\begin{equation*}
\Psi^{(1)}_k(\mathbf{x}) = \sum_{j\ge 0}\lambda^{-j}\sum_{0\le r<m-k}j^r
\f[\big]{\psi_{kr}}{\mathsf{lval}(\mathbf{x})} B_{x_j}A_{x_{j-1}}\ldots A_{x_0}
\end{equation*}
for functions
\begin{equation*}
\psi_{kr}(\nu)=\lambda^{-\nu}
(-1)^r\sum_{0\le s<m-k-r} \trinom{k+r+s}{k}{r}{s}(-\nu)^{s}v_{k+r+s}
\end{equation*}
which are continuously differentiable and therefore Lipschitz
continuous on the unit interval.
This shows \eqref{eq:first-term}.
For $k=m-1$, only summands with $r=s=0$ occur, thus
\begin{equation}\label{eq:fluctuation-1-m-1}
\Psi_{m-1}^{(1)}(\mathbf{x})=\sum_{j\ge 0}\lambda^{-j-\mathsf{lval}(\mathbf{x})}v_{m-1}B_{x_j}A_{x_{j-1}}\ldots A_{x_0}.
\end{equation}
Note that $\Psi^{(1)}_{ k}(\mathbf{x})$ is majorised by
\begin{equation*}
\Oh[\bigg]{\sum_{j\ge 0} \abs{\lambda}^{-j} j^{m-1} R^{j}}
\end{equation*}
according to \eqref{eq:bound-prod}.
We now prove \eqref{eq:quasi-Hoelder}. So let $\mathbf{x}$ and $\mathbf{x}'$ have a
common prefix of length $i$. Consider the summand of $\Psi^{(1)}_k(\mathbf{x})$ with index $j$.
First consider the case that $j<i$. For all $r$, we have
\begin{equation*}
\norm[\big]{\f[\big]{\psi_{kr}}{\mathsf{lval}(\mathbf{x})}-\f[\big]{\psi_{kr}}{\mathsf{lval}(\mathbf{x}')}}
= \Oh{q^{-i}}
\end{equation*}
due to Lipschitz continuity of $\psi_{kr}\circ \mathsf{lval}$.
As the matrix product~$A_{x_{j-1}} \ldots A_{x_0}$
is the same for $\mathbf{x}$ and $\mathbf{x}'$, the
difference with respect to this summand is bounded by
\begin{equation*}
\Oh[\big]{\abs{\lambda}^{-j}j^{m-1}q^{-i}R^{j}}
= \Oh{q^{-i}j^{m-1}Q^{-j}}.
\end{equation*}
Thus the total
contribution of all summands with $j<i$ is $\Oh{q^{-i}}$.
Any summand with $j \ge i$ is bounded by
$\Oh[\big]{\abs{\lambda}^{-j}j^{m-1}R^{j}} = \Oh{j^{m-1}Q^{-j}}$,
which leads to a total contribution of $\Oh{i^{m-1}Q^{-i}}$.
Adding the two bounds leads to a bound of $\Oh{i^{m-1}Q^{-i}}$, as
requested.
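For completeness, the tail bound for the summands with $j\ge i$ is obtained as
follows: for $i\ge 1$,
\begin{equation*}
\sum_{j\ge i}j^{m-1}Q^{-j}
=Q^{-i}\sum_{t\ge 0}(i+t)^{m-1}Q^{-t}
\le i^{m-1}Q^{-i}\sum_{t\ge 0}(1+t)^{m-1}Q^{-t}
=\Oh[\big]{i^{m-1}Q^{-i}},
\end{equation*}
where $(i+t)^{m-1}\le i^{m-1}(1+t)^{m-1}$ holds for $i\ge 1$ and the last sum
converges because $Q>1$.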
\proofparagraph{Descent}
As we have defined the periodic fluctuations $\Psi^{(1)}_k$ on the infinite
product space $\Omega$, we now need to prove that the periodic fluctuation
descends to a periodic fluctuation on the unit interval. To do so,
we will verify that the
values of the fluctuation coincide whenever
sequences in the infinite product space
correspond to the same real number in the interval.
By setting $\Psi_k(\mathbf{x})=\Psi^{(1)}_{k}(\mathbf{x})+\Psi^{(2)}_{k}(\mathbf{x})$, we obtain
\begin{equation}\label{eq:w-F-n}
wF(N)=wK + N^{\log_q\lambda} \sum_{0\le k<m}(\log_q
N)^k\f[\big]{\Psi_k}{\mathsf{reprq}(\fractional{\log_q N})} +(\log_q N)^mw\vartheta_m
\end{equation}
and
\begin{equation}\label{eq:Psi-continuity}
\norm{\Psi_{k}(\mathbf{x})-\Psi_{k}(\mathbf{x}')}=\Oh{j^{m-1}Q^{-j}}
\end{equation}
whenever $\mathbf{x}$ and $\mathbf{x}'\in\Omega$ have a common prefix of length $j$.
It remains to show that $\Psi_k(\mathbf{x})=\Psi_k(\mathbf{x}')$ holds whenever
$\mathsf{lval}(\mathbf{x})=\mathsf{lval}(\mathbf{x}')$ or $\mathsf{lval}(\mathbf{x})=0$ and $\mathsf{lval}(\mathbf{x}')=1$.
Choose $\mathbf{x}$ and $\mathbf{x}'$ such that one of the above
two conditions on $\mathsf{lval}$ holds and such that $x_j=0$ for $j\ge j_0$ and
$x'_j=q-1$ for $j\ge j_0$. Be aware that now the prefixes of
$\mathbf{x}$ and $\mathbf{x}'$ of length $j_0$ do not coincide except for the trivial
case $j_0=0$.
Fix some $j\ge j_0$ and set $\mathbf{x}''$ to be
the prefix of $\mathbf{x}'$ of length $j$, followed by infinitely many zeros.
Note that we have $q^{\mathsf{lval}(\mathbf{x}'')}=q^{\mathsf{lval}(\mathbf{x}')}-q^{-(j-1)}$. Set
$n=q^{j-1+\mathsf{lval}(\mathbf{x}'')}$. By construction, we have
$n+1=q^{j-1+\mathsf{lval}(\mathbf{x})+\iverson{\mathsf{lval}(\mathbf{x})=0}}$. This implies
$\mathsf{reprq}(\fractional{\log_q n})=\mathbf{x}''$ and
$\mathsf{reprq}(\fractional{\log_q(n+1)})=\mathbf{x}$. Taking the difference of
\eqref{eq:w-F-n} for $n+1$ and $n$ yields
\begin{multline*}
wf(n)=(n+1)^{\log_q \lambda} \sum_{0\le k<m}\bigl(\log_q (n+1)\bigr)^k
\Psi_k(\mathbf{x}) - n^{\log_q \lambda} \sum_{0\le k<m} (\log_q n)^k
\Psi_k(\mathbf{x}'') \\+\big((\log_q(n+1))^m-(\log_q n)^m\big)w\vartheta_m.
\end{multline*}
We estimate $n+1$ as $n(1+\Oh{1/n})$ and get
\begin{equation}\label{eq:Tenenbaum-2}
wf(n)=n^{\log_q \lambda }\sum_{0\le k<m} (\log_q n)^k
\bigl(\Psi_k(\mathbf{x})-\Psi_k(\mathbf{x}'')\bigr)
+ \Oh[\big]{n^{\log_q \abs{\lambda} -1}(\log n)^{m-1}}.
\end{equation}
We have $wf(n)=\Oh{R^j}=\Oh{R^{\log_q n}}=\Oh{n^{\log_q R}}$ by~\eqref{eq:f-as-product} and~\eqref{eq:bound-prod}. By \eqref{eq:Psi-continuity},
\begin{equation*}
\norm[\big]{\Psi_k(\mathbf{x}'')-\Psi_k(\mathbf{x}')}
= \Oh[\big]{(\log n)^{m-1}n^{-\log_q Q}}
\end{equation*}
which is used below to replace $\mathbf{x}''$ by $\mathbf{x}'$.
Inserting these estimates in \eqref{eq:Tenenbaum-2} and dividing by
$n^{\log_q \lambda}$ yields
\begin{equation}\label{eq:Tenenbaum-3}
\sum_{0\le k<m}(\log_q n)^k\bigl(\Psi_k(\mathbf{x}')-\Psi_k(\mathbf{x})\bigr)
= \Oh[\big]{n^{-\log_q Q} (\log n)^{2m-2}}.
\end{equation}
Note that $\Psi_k(\mathbf{x}')-\Psi_k(\mathbf{x})$ does not depend on $j$. Now we let
$j$ (and therefore $n$) tend to infinity. We see that
\eqref{eq:Tenenbaum-3} can only remain true if
$\Psi_k(\mathbf{x}')=\Psi_k(\mathbf{x})$ for $0\le k<m$, which we had set out to show.
Therefore, $\Psi_k$ descends to a continuous function $\Phi_k$ on $[0, 1]$ with
$\Phi_k(0)=\Phi_k(1)$; thus $\Phi_k$ can be extended to a $1$-periodic continuous
function.
\proofparagraph{Hölder Continuity}
We will now prove Hölder continuity. As the fluctuations have been defined
on the infinite product space $\Omega$, we essentially have to prove Hölder
continuity there. The difficulty is that points in the unit interval which
are close to each other may have drastically different $q$-ary
expansions and thus correspond to drastically different points in the infinite
product space $\Omega$. To circumvent this problem, the interval between
the two points will be split into two parts.
We
first claim that for $0\le y<y'''<1$,
we have
\begin{equation}\label{eq:Hoelder-1}
\norm[\big]{\Phi_k(y''')-\Phi_k(y)}
= \Oh[\big]{(\log(q^{y'''}-q^{y}))^{m-1}(q^{y'''}-q^y)^{\log_q Q}}
\end{equation}
as $y'''\to y$.
To prove this, let $\mathbf{x}\coloneqq \mathsf{reprq}(y)$ and $\mathbf{x}'''\coloneqq
\mathsf{reprq}(y''')$.
Let $\ell$ be the length of the longest common prefix of $\mathbf{x}$ and
$\mathbf{x}'''$ and choose $j\ge 0$ such that $q^{-j}\le q^{y'''}-q^y< q^{-j+1}$.
We define $\mathbf{x}'$ and $\mathbf{x}''\in\Omega$ such that
\begin{alignat*}{4}
\mathbf{x}&=(x_0,x_1,\ldots, x_{\ell-1}, x_{\ell},{}&& x_{\ell+1},{}&& x_{\ell+2},{}&& \ldots),\\
\mathbf{x}'&=(x_0, x_1, \ldots, x_{\ell-1}, x_{\ell},{}&& q-1,{}&& q-1,{}&& \ldots),\\
\mathbf{x}''&=(x_0, x_1, \ldots, x_{\ell-1}, x_{\ell}+1,{}&& 0,{}&& 0,{}&& \ldots),\\
\mathbf{x}'''&=(x_0, x_1, \ldots, x_{\ell-1}, x'''_{\ell},{}&& x'''_{\ell+1},{}&&
x'''_{\ell+2},{}&& \ldots)
\end{alignat*}
and set $y'\coloneqq\mathsf{lval}(\mathbf{x}')$ and $y''\coloneqq\mathsf{lval}(\mathbf{x}'')$.
As $\mathsf{lval}(\mathbf{x})=y<y'''=\mathsf{lval}(\mathbf{x}''')$, we have $x'''_\ell>x_\ell$. We
conclude that $y\leq y'=y''\leq y'''$. Therefore,
\begin{equation*}
q^{y'}-q^{y} \le q^{y'''}-q^{y}< q^{-j+1},
\end{equation*}
so in view of the fact that each entry of
$\mathbf{x}'$ is greater than or equal to the corresponding entry of $\mathbf{x}$,
the expansions $\mathbf{x}$ and $\mathbf{x}'$ must have a common prefix of length $j$.
Similarly, the expansions $\mathbf{x}''$ and $\mathbf{x}'''$ must have a common prefix
of length~$j$. Thus \eqref{eq:Psi-continuity} implies that
\begin{align*}
\norm[\big]{\Phi_k(y''')-\Phi_k(y)}
&\le \norm[\big]{\Phi_k(y''')-\Phi_k(y'')}+
\norm[\big]{\Phi_k(y')-\Phi_k(y)}\\
&= \norm[\big]{\Psi_k(\mathbf{x}''')-\Psi_k(\mathbf{x}'')}+
\norm[\big]{\Psi_k(\mathbf{x}')-\Psi_k(\mathbf{x})}
= \Oh{j^{m-1}Q^{-j}}.
\end{align*}
Noting that $-j = \log_q(q^{y'''}-q^y) + \Oh{1}$ leads to~\eqref{eq:Hoelder-1}.
In order to prove Hölder continuity with exponent $\alpha<\log_q Q$, we first
note that Lipschitz continuity of $y\mapsto q^y$ on the interval $[0, 1]$
shows that \eqref{eq:Hoelder-1} implies
\begin{equation*}
\norm[\big]{\Phi_k(y''')-\Phi_k(y)}
= \Oh[\big]{(y'''-y)^{\alpha}}.
\end{equation*}
This can then easily be extended to arbitrary reals $y<y'''$
by periodicity of $\Phi_k$ because it is sufficient to consider small
$y'''-y$ and the interval may be subdivided at an integer between $y$ and $y'''$.
\proofparagraph{Constant Dominant Fluctuation}
To prove the final assertion on constant fluctuations, we inspect
the explicit expression for the fluctuations using the
additional assumption.
Under the additional
assumption that the vector~$w(C-I)^{m-1}=v_{m-1}'$ is a left eigenvector to all
matrices $A_0$, \ldots, $A_{q-1}$ associated with the eigenvalue $1$, the same holds for $v_{m-1}$
by~\eqref{eq:v_m-1-expression}. Then $v_{m-1}$ is also a left eigenvector of
$C$ associated with the eigenvalue $q$. In particular, $\lambda=q\neq 1$.
We can compute $\Phi_{m-1}^{(2)}(\nu)$ using
\eqref{eq:fluctuation-2-m-1}. As $v_{m-1}\in W_{q}$, we have $v_{m-1}C=v_{m-1}C'$
by definition of $C'$ (see Section~\ref{section:constants-for-theorem}) which implies that
$v_{m-1}(I-C')^{-1}=\frac1{1-q}v_{m-1}$. As $v_{m-1}(I-A_0)=0$ by
assumption, we conclude that $\Phi_{m-1}^{(2)}(\nu)=0$ in this case.
We use \eqref{eq:fluctuation-1-m-1} to compute $\Psi_{m-1}^{(1)}(\mathbf{x})$.
By assumption, $v_{m-1}B_{x_j}=x_j v_{m-1}$ which implies that
\begin{equation*}
\Psi_{m-1}^{(1)}(\mathbf{x})
= q^{-\mathsf{lval}(\mathbf{x})} \biggl(\sum_{j\ge 0}q^{-j}x_j\biggr) v_{m-1}
=q^{-\mathsf{lval}(\mathbf{x})}q^{\mathsf{lval}(\mathbf{x})}v_{m-1}=v_{m-1}
\end{equation*}
by definition of $\mathsf{lval}$.
Together with \eqref{eq:v_m-1-expression}, we obtain the assertion.
\end{proof}
\subsection{Proof of Theorem~\ref{theorem:main}}\label{section:proof:corollary-main}
\begin{proof}[Proof of Theorem~\ref{theorem:main}]
We denote the rows of $T$ by $w_1$, \ldots, $w_d$ and the columns of $T^{-1}$
by $t_1$, \ldots, $t_d$. Thus $\sum_{1 \le j \le d} t_jw_j=I$ and $w_j$ is a
generalised left eigenvector of $C$ of some rank $m_j$ corresponding to some
eigenvalue
$\lambda_j\in\sigma(C)$. Theorem~\ref{theorem:contribution-of-eigenspace} and
the fact that there are no eigenvalues of $C$ of absolute value between
$\rho$ and $R$
then immediately imply that
\begin{align*}
F(N) &= \sum_{1 \le j \le d} t_j w_j F(N) \\
&= K + \sum_{1 \le j \le d} (\log_q N)^{m_j} t_jw_j \vartheta_{m_j}\\
&\phantom{= K}\; + \sum_{\substack{1\le j\le d\\\abs{\lambda_j}>\rho }} N^{\log_q \lambda_j}
\sum_{0\le k<m_j}(\log_q N)^k t_j\Psi_{jk}(\fractional{\log_q N}) \\
&\phantom{= K}\; + \iverson{\exists \lambda\in\sigma(C)\colon \abs{\lambda}\le\rho}
\Oh[\big]{N^{\log_q R}(\log N)^{\max\set{0}\cup \setm{m_j}{\abs{\lambda_j}=R}}}
\end{align*}
for some $1$-periodic Hölder continuous functions $\Psi_{jk}$ with exponent
less than $\log_q\abs{\lambda_j}/R$. The first summand $K$ as well as the
error term already coincide with the result stated in the theorem.
From Section~\ref{section:constants-for-theorem} we recall that $w_j\vartheta_{m_j}=0$
for $\lambda_j\neq 1$.
We set
\begin{equation*}
\Phi_{\lambda k}(u)\coloneqq \sum_{\substack{1\le j\le
d\\\lambda_j=\lambda\\k<m_j}}
\bigl(t_j\Psi_{jk}(u) +\iverson{\lambda=1}\iverson{m_j=k}t_jw_j\vartheta_{m_j}\bigr)
\end{equation*}
for $\lambda\in\sigma(C)$ with $\abs{\lambda}>\rho$ and $0\le k<m(\lambda)$.
Then we still have to account for
\begin{equation}\label{eq:phi-m-1-sum}
(\log_q N)^{m(1)}\sum_{\substack{1\le j\le d\\\lambda_j=1\\m_j=m(1)}}t_jw_j\vartheta_{m(1)}.
\end{equation}
The factor $(C-I)^{m(1)-1}$ in the definition of $\vartheta_{m(1)}$ implies that
$w_j\vartheta_{m(1)}$ vanishes unless $\lambda_j=1$ and $m_j=m(1)$. Therefore, the
sum in \eqref{eq:phi-m-1-sum} equals $\vartheta$.
\end{proof}
\section{Meromorphic Continuation of the Dirichlet Series: Proof of
Theorem~\ref{theorem:Dirichlet-series}}
\label{section:proof:Dirichlet-series}
For future use, we state an estimate for the binomial
coefficient. Unsurprisingly, it is a consequence of
a suitable version of Stirling's formula.
Alternatively, it can be seen as the most basic case of Flajolet and
Odlyzko's singularity
analysis~\cite[Proposition~1]{Flajolet-Odlyzko:1990:singul}, where
uniformity in $s$ is easily checked.
\begin{lemma}\label{lemma:binomial-coefficient-asymptotics}
Let $k\in\mathbb{Z}$, $k\ge 0$. Then
\begin{equation}\label{eq:binomial-coefficient-estimate}
\abs[\bigg]{\binom{-s}{k}}\sim \frac{1}{\abs{\Gamma(s)}}k^{\Re s-1}
\end{equation}
uniformly for $s$ in a compact subset of $\mathbb{C}$ and $k\to\infty$.
\end{lemma}
\begin{proof}
By \cite[(5.14)]{Graham-Knuth-Patashnik:1994} (``negating the upper index''), we rewrite the binomial coefficient as
\begin{equation*}
\binom{-s}{k}=(-1)^{k}\binom{s+k-1}{k}=\frac{(-1)^k}{\Gamma(s)}\frac{\Gamma(s+k)}{\Gamma(k+1)}.
\end{equation*}
Thus~\eqref{eq:binomial-coefficient-estimate} follows by \DLMF{5.11}{12}
(which is an easy consequence of Stirling's formula for the Gamma function).
\end{proof}
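As a quick sanity check of Lemma~\ref{lemma:binomial-coefficient-asymptotics},
take $s=2$: then $\binom{-2}{k}=(-1)^k(k+1)$, so
\begin{equation*}
\abs[\bigg]{\binom{-2}{k}}=k+1\sim k=\frac{1}{\abs{\Gamma(2)}}k^{2-1},
\end{equation*}
in accordance with~\eqref{eq:binomial-coefficient-estimate}.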
\begin{proof}[Proof of Lemma~\ref{lemma:shifted-Dirichlet}]
We have
\begin{equation}\label{eq:Dirichlet-shifted:Sigma-as-diff}
\f{\Sigma}{s, \beta, \mathcal{D}}
= \sum_{n\ge n_0} \bigl((n+\beta)^{-s}-n^{-s}\bigr) d(n)
\end{equation}
for $\Re s>\log_q R'+ 1$.
We note that
\begin{equation*}
(n+\beta)^{-s} - n^{-s}
= n^{-s}\Bigl(\Bigl(1+\frac{\beta}{n}\Bigr)^{-s} - 1 \Bigr)
= \Oh[\big]{\abs{s}n^{-\Re s-1}}.
\end{equation*}
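The middle estimate can be verified directly:
\begin{equation*}
\Bigl(1+\frac{\beta}{n}\Bigr)^{-s}-1
=-s\int_0^{\beta/n}(1+t)^{-s-1}\, \mathrm{d} t
=\Oh[\Big]{\abs{s}\frac{\beta}{n}},
\end{equation*}
where the implied constant is uniform as long as $\beta$ is bounded and
$\Re s$ is bounded from below.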
Therefore,
\begin{equation*}
\f{\Sigma}{s, \beta, \mathcal{D}} = \Oh[\bigg]{\abs{s}\sum_{n\ge n_0}n^{\log_q R'-\Re s-1}},
\end{equation*}
and the series
converges for $\Re s>\log_q R'$. As this holds for all $R'>\rho$, we obtain
$\f{\Sigma}{s, \beta, \mathcal{D}}=\Oh{\abs{\Im s}}$ as $\abs{\Im s}\to\infty$
uniformly for $\log_q \rho + \delta \le \Re s \le \log_q \rho+\delta+1$.
In the language of \cite[\S~III.3]{Hardy-Riesz:1915},
$\f{\Sigma}{s, \beta, \mathcal{D}}$ has order at most $1$ for $\log_q \rho + \delta \le \Re s \le \log_q
\rho+\delta+1$. As $\log_q \rho+\delta+1$ is larger
than the abscissa of absolute convergence of $\f{\Sigma}{s, \beta, \mathcal{D}}$, it is clear that
$\f{\Sigma}{s, \beta, \mathcal{D}}=\Oh{1}$ for $\Re s=\log_q \rho+\delta+1$, i.e., $\f{\Sigma}{s, \beta, \mathcal{D}}$ has order at most $0$ for
$\Re s=\log_q \rho+\delta+1$. By Lindelöf's theorem
(see \cite[Theorem~14]{Hardy-Riesz:1915}), we conclude that
$\f{\Sigma}{s, \beta, \mathcal{D}}=\Oh[\big]{\abs{\Im s}^{\mu_\delta(\Re s)}}$ for $\log_q \rho + \delta\le \Re s\le
\log_q \rho +\delta+1$.
For $\Re s > \log_q R' + 1$, we may
rewrite~\eqref{eq:Dirichlet-shifted:Sigma-as-diff}
using the binomial series as
\begin{align}\label{eq:shifted-Dirichlet:diff:inner-sum}
\f{\Sigma}{s, \beta, \mathcal{D}} &=
\sum_{n\ge n_0}{n^{-s}}\sum_{k\ge 1}\binom{-s}{k}\frac{\beta^k}{n^k} d(n)\notag\\
&= \sum_{k\ge 1}
\binom{-s}{k} \beta^k \sum_{n\ge n_0} n^{-(s+k)} d(n).
\end{align}
Switching the order of summation was legitimate because
\begin{align*}
\norm[\bigg]{\sum_{n\ge n_0} n^{-(s+k)} d(n)}
&\le \sum_{n\ge n_0} n^{-(\Re s+k)} \norm{d(n)}\\
&= \sum_{n\ge n_0} \Oh[\big]{n^{\log_q R'-\Re s -k}}
= \Oh[\big]{n_0^{\log_q R'-\Re s-k+1}}
\end{align*}
for $\Re s+k>\log_q R'+1$ and Lemma~\ref{lemma:binomial-coefficient-asymptotics} imply absolute and
uniform convergence for $s$ in a compact set.
Noting that the previous arguments hold again for all $R'>\rho$ and that
the inner sum in \eqref{eq:shifted-Dirichlet:diff:inner-sum}
is $\mathcal{D}(s+k)$ completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:Dirichlet-series}]
As $f(n)=\Oh{R^{\log_q n}}=\Oh{n^{\log_q R}}$ by \eqref{eq:f-as-product} and
\eqref{eq:bound-prod}, the Dirichlet series
$\mathcal{F}_{n_0}(s) = \sum_{n \ge n_0} n^{-s} f(n)$
(see Section~\ref{sec:definitions-notations})
converges absolutely and uniformly on compact sets for $\Re s>\log_q R+1$.
As this holds for all $R>\rho$, i.e., does not depend on
our particular (cf.\@ Section~\ref{sec:definitions-notations}) choice
of $R>\rho$, this convergence result holds
for $\Re s>\log_q \rho+1$.
We use~\eqref{eq:regular-matrix-sequence} and
Lemma~\ref{lemma:shifted-Dirichlet} (including its notation)
to rewrite $\mathcal{F}_{n_0}$ as
\begin{align*}
\mathcal{F}_{n_0}(s) &= \sum_{n_0 \le n < qn_0}n^{-s}f(n) + \sum_{0 \le r < q} \sum_{n\ge n_0} (qn+r)^{-s} f(qn+r)\\
&= \sum_{n_0 \le n < qn_0} n^{-s}f(n) + q^{-s} \sum_{0 \le r < q} A_r
\sum_{n\ge n_0} \Bigl(n+\frac{r}{q}\Bigr)^{-s} f(n)\\
&= \sum_{n_0 \le n < qn_0} n^{-s}f(n) + q^{-s}C\mathcal{F}_{n_0}(s) + \mathcal{H}_{n_0}(s)
\end{align*}
with
\begin{equation*}
\mathcal{H}_{n_0}(s)\coloneqq q^{-s} \sum_{0 \le r < q} A_r \f[\big]{\Sigma}{s, \tfrac{r}{q}, \mathcal{F}_{n_0}}
\end{equation*}
for $\Re s>\log_q R+ 1$.
Thus
\begin{equation}\label{eq:functional-equation-H}
\bigl(I-q^{-s}C\bigr)\mathcal{F}_{n_0}(s) = \sum_{n_0 \le n < qn_0}n^{-s}f(n)+\mathcal{H}_{n_0}(s)
\end{equation}
for $\Re s>\log_q R+ 1$.
By Lemma~\ref{lemma:shifted-Dirichlet} we have
$\mathcal{H}_{n_0}(s)=\Oh[\big]{\abs{\Im s}^{\mu_\delta(\Re s)}}$
for $\log_q \rho + \delta\le \Re s\le
\log_q \rho +\delta+1$.
Rewriting the expression for $\mathcal{H}_{n_0}(s)$ using the binomial series
(see Lemma~\ref{lemma:shifted-Dirichlet} again) yields
\begin{equation*}
\mathcal{H}_{n_0}(s)
= q^{-s}\sum_{0 \le r < q} A_r \sum_{k\ge
1}\binom{-s}{k}\Bigl(\frac{r}{q}\Bigr)^k \mathcal{F}_{n_0}(s+k).
\end{equation*}
Combining this with~\eqref{eq:functional-equation-H}
yields the expression~\eqref{eq:Dirichlet-recursion} for $\mathcal{G}_{n_0}$.
Solving \eqref{eq:analytic-continuation} for $\mathcal{F}_{n_0}$ yields the
meromorphic continuation of $\mathcal{F}_{n_0}(s)$ to $\Re s>\log_q R$ (and thus to $\Re
s>\log_q \rho$) with possible poles where
$q^s$ is an eigenvalue of $C$. As long as $q^s$ keeps a fixed positive
distance $\delta$ from the eigenvalues, the bound for $\mathcal{G}_{n_0}$
(coming from the bound for $\mathcal{H}_{n_0}$) carries over
to a bound for $\mathcal{F}_{n_0}$, i.e., \eqref{eq:order-F}.
To estimate the order of the poles, let $w$ be a generalised left eigenvector
of rank $m$ of $C$ corresponding to an eigenvalue $\lambda$ with $\abs{\lambda}>R$. We claim that
$w\mathcal{F}_{n_0}(s)$ has a pole of order at most $m$ at $s=\log_q \lambda+\chi_k$ for $k\in\mathbb{Z}$ and no other
poles for $\Re s>\log_q R$. We prove this by induction on $m$.
Set $v\coloneqq w(C-\lambda I)$. By definition, $v=0$ or $v$ is a generalised
eigenvector of rank $m-1$ of $C$. By induction hypothesis, $v\mathcal{F}_{n_0}(s)$ has a
pole of order at most $m-1$ at $s=\log_q \lambda+\chi_k$ for $k\in\mathbb{Z}$ and no other
poles for $\Re s>\log_q R$.
Multiplying \eqref{eq:analytic-continuation} by $w$,
inserting the definition of~$v$ and reordering the summands yields
\begin{equation*}
\bigl(1 - q^{-s}\lambda\bigr)w\mathcal{F}_{n_0}(s) = q^{-s}v \mathcal{F}_{n_0}(s) + w\mathcal{G}_{n_0}(s).
\end{equation*}
The right-hand side has a pole of order at most $m-1$ at $\log_q \lambda+\chi_k$ for
$k\in\mathbb{Z}$ and $1-q^{-s}\lambda$ has a simple zero at the same places. This
proves the claim.
\end{proof}
\section{Fourier Coefficients: Proof of
Theorem~\ref{theorem:use-Mellin--Perron}}
\label{section:proof:use-Mellin--Perron}
In contrast to the rest of this paper, this section does not directly relate to
a regular sequence but gives a general method to derive Fourier coefficients of
fluctuations.
\subsection{Pseudo-Tauberian Theorem}
\label{sec:pseudo-tauber}
In this section, we generalise the pseudo-Tau\-be\-rian argument by Flajolet, Grabner,
Kirschenhofer, Prodinger and
Tichy~\cite[Proposition~6.4]{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin}.
The basic idea is that for a $1$-periodic Hölder-continuous function
$\Phi$ and $\gamma\in\mathbb{C}$, there is a $1$-periodic continuously
differentiable function $\Psi$ such that
\begin{equation*}
\sum_{1\le n<N} n^{\gamma} \Phi(\log_q n)
= N^{\gamma+1} \Psi(\log_q N) + \oh{N^{\Re \gamma+1}},
\end{equation*}
and there is a straightforward relation between the Fourier
coefficients of $\Phi$ and the Fourier coefficients of $\Psi$. This relation
exactly corresponds to the additional factor $s+1$ when transitioning
from the zeroth order Mellin--Perron formula to the first order
Mellin--Perron formula.
In contrast
to~\cite[Proposition~6.4]{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin},
we allow for an additional logarithmic factor, have weaker growth
conditions on the Dirichlet series and quantify the error. We also
extend the result to all complex $\gamma$. The generalisation from
$q=2$ there to our real~$q>1$ is trivial.
\begin{proposition}\label{proposition:pseudo-Tauber}
Let $\gamma\in\mathbb{C}$ and $q>1$ be a real number, $m$ be a
positive integer, $\Phi_0$, \ldots, $\Phi_{m-1}$ be $1$-periodic Hölder continuous
functions with exponent $\alpha>0$, and $0<\beta<\alpha$. Then there exist continuously differentiable functions
$\Psi_{-1}$, $\Psi_{0}$, \ldots, $\Psi_{m-1}$, periodic with period $1$, and a constant $c$ such that
\begin{multline}
\sum_{1\le n< N}n^\gamma \sum_{\substack{j+k=m-1\\0\le j<m}}\frac{(\log n)^{k}}{k!}\Phi_j(\log_q n)\\
=c + N^{\gamma+1}\sum_{\substack{j+k=m-1\\-1\le j<m}} \frac{(\log N)^{k}}{k!}\Psi_j(\log_q N)
+ \Oh[\big]{N^{\Re \gamma+1-\beta}}
\label{eq:pseudo-Tauber-relation}
\end{multline}
for integers $N\to\infty$.
Denote the Fourier coefficients of $\Phi_j$ and $\Psi_j$ by $\varphi_{j\ell}\coloneqq
\int_0^1\Phi_j(u)\exp(-2\ell\pi i u)\, \mathrm{d} u$ and $\psi_{j\ell}\coloneqq
\int_0^1\Psi_j(u)\exp(-2\ell\pi i u)\, \mathrm{d} u$, respectively.
Then the corresponding generating functions fulfil
\begin{equation}\label{eq:pseudo-Tauber-Fourier}
\sum_{0\le j<m}\varphi_{j\ell}Z^j = \Bigl(\gamma+1+\frac{2\ell \pi i}{\log q} + Z\Bigr)\sum_{-1\le j<m}\psi_{j\ell}Z^j
+\Oh{Z^m}
\end{equation}
for $\ell\in \mathbb{Z}$ and $Z\to 0$.
If $q^{\gamma+1}\neq 1$, then $\Psi_{-1}$ vanishes.
\end{proposition}
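To illustrate the proposition in its simplest case, let $m=1$ and
$q^{\gamma+1}\neq 1$, so that $\Psi_{-1}$ vanishes. Then
\eqref{eq:pseudo-Tauber-relation} reduces to
\begin{equation*}
\sum_{1\le n<N}n^{\gamma}\Phi_0(\log_q n)
= c + N^{\gamma+1}\Psi_0(\log_q N) + \Oh[\big]{N^{\Re \gamma+1-\beta}},
\end{equation*}
and comparing constant coefficients in \eqref{eq:pseudo-Tauber-Fourier} gives
\begin{equation*}
\psi_{0\ell} = \frac{\varphi_{0\ell}}{\gamma+1+\frac{2\ell \pi i}{\log q}}
\end{equation*}
for all $\ell\in\mathbb{Z}$; note that the denominator does not vanish because
$q^{\gamma+1}\neq 1$. This recovers the relation
of~\cite[Proposition~6.4]{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin}.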
\begin{remark}
Note that the constant $c$ is absorbed by the error term if
$\Re\gamma+1>\alpha$, in particular if $\Re\gamma>0$.
Therefore, this constant does not occur in the
article~\cite{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin}.
\end{remark}
\begin{remark}
\label{remark:recurrence-fluctuation}
The factor $\gamma+1+\frac{2\ell \pi i}{\log q} + Z$ in
\eqref{eq:pseudo-Tauber-Fourier} will turn out to
correspond exactly to the additional factor $s+1$ in the first order
Mellin--Perron summation formula with the substitution
$s=\gamma+\frac{2\ell\pi i}{\log q}+ Z$ such that the local expansion around
the pole in $s=\gamma+\frac{2\ell\pi i}{\log q}$ of the Dirichlet generating
function is conveniently written as a Laurent series in $Z$. See the proof of
Theorem~\ref{theorem:use-Mellin--Perron} for details.
\end{remark}
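For concreteness, the first order Mellin--Perron summation formula (the case
$A=1$
of~\cite[Theorem~2.1]{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin})
reads
\begin{equation*}
\sum_{1\le n<N}(N-n)f(n) = \frac1{2\pi
i}\int_{\sigma-i\infty}^{\sigma+i\infty} \frac{\mathcal{F}(s) N^{s+1}}{s(s+1)}\,\mathrm{d} s,
\end{equation*}
where $\mathcal{F}$ denotes the Dirichlet generating function of $f$ and $\sigma$
exceeds its abscissa of absolute convergence; the factor $s+1$ in the
denominator is the additional factor referred to in
Remark~\ref{remark:recurrence-fluctuation}.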
Before actually proving Proposition~\ref{proposition:pseudo-Tauber},
we give an outline.
\begin{proof}[Overview of the Proof of Proposition~\ref{proposition:pseudo-Tauber}]
We start with the left-hand side
of~\eqref{eq:pseudo-Tauber-relation} and split the range of summation
according to $\floor{\log_q n}$, thereby, in terms of our periodic functions,
splitting after each period. We then use the periodicity of the~$\Phi_j$
and collect terms. This results in
Riemann sums which converge to the corresponding
integrals. Therefore, we can approximate these sums by the integrals.
More rewriting constructs and reveals the functions~$\Psi_j$ (of the
right-hand side of~\eqref{eq:pseudo-Tauber-relation}): These functions
are basically defined via the above-mentioned integral.
We then show
that these functions are indeed periodic and that their Fourier coefficients
relate to the Fourier coefficients of the~$\Phi_j$.
The latter is done by a direct computation of the integrals
defining these coefficients.
For this proof, we use an approach via exponential generating functions.
This reduces the overhead for
dealing with the logarithmic factors $(\log n)^k$
in~\eqref{eq:pseudo-Tauber-relation} such that we can essentially
focus on the case~$m=1$.
The resulting formula~\eqref{eq:pseudo-Tauber-relation}
follows by extracting a suitable coefficient of this power series.
There is another benefit of the generating function approach:
This formulation allows us to easily translate the relation between
the Fourier coefficients here to the additional factors occurring
when transitioning to higher order Mellin--Perron summation
formul\ae{},
in particular the factor $s+1$ in the first order Mellin--Perron summation.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{proposition:pseudo-Tauber}]
We split the proof into six parts.
\proofparagraph{Notations}
We start by defining quantities that are used through the whole proof.
Without loss of generality, we assume that $q^{\Re \gamma+1}\neq q^{\alpha}$:
otherwise, we slightly decrease $\alpha$ keeping the inequality
$\beta<\alpha$ intact.
We use the abbreviations $\Lambda\coloneqq \floor{\log_q N}$,
$\nu\coloneqq \fractional{\log_q N}$, i.e.,
$N=q^{\Lambda+\nu}$. We use the generating functions
\begin{align*}
\f{\Phi}{u, Z}&\coloneqq \sum_{0\le j<m}\Phi_j(u)Z^j,\\
L(N, Z)&\coloneqq \sum_{1\le n<N}n^{\gamma+Z} \f{\Phi}{\log_q n, Z}=\sum_{1\le
n<N}n^\gamma \fexp[\big]{(\log n) Z}\f{\Phi}{\log_q n, Z},\\
Q(Z)&\coloneqq q^{\gamma+1+Z}
\end{align*}
for $0\le u\le 1$ and $0<\abs{Z}<2r$ where $r>0$ is chosen such that
$r<(\alpha-\beta)/2$ and such that
$Q(Z)\neq 1$ and $\abs{Q(Z)}\neq q^{\alpha}$ for these $Z$.
(The condition $Z\neq 0$ is only needed for the case $q^{1+\gamma}=1$.)
We will stick to the above choice of~$r$ and restrictions for~$Z$ throughout
the proof.
It is easily seen that the left-hand side
of~\eqref{eq:pseudo-Tauber-relation} equals $[Z^{m-1}]L(N, Z)$, where
$[Z^{m-1}]$ denotes extraction of the coefficient of $Z^{m-1}$.
\proofparagraph{Approximation of the Sum by an Integral}
We will now rewrite $L(N, Z)$ so that its shape is that of a
Riemann sum, therefore enabling us to approximate it by an integral.
Splitting the range of summation with respect to powers of $q$ yields
\begin{align*}
L(N, Z) = \phantom{+\;}&
\sum_{0\le p<\Lambda}\sum_{q^p\le n<q^{p+1}}n^{\gamma+Z}
\f{\Phi}{\log_q n, Z} \\
+\; &
\sum_{q^\Lambda\le n< q^{\Lambda+\nu}}n^{\gamma+Z}\f{\Phi}{\log_q n, Z}.
\end{align*}
We write $n=q^px$ (or $n=q^\Lambda x$ for the second sum), use the
periodicity of $\Phi$ in $u$ and get
\begin{align*}
L(N, Z) = \phantom{+\;}&
\sum_{0\le p<\Lambda}Q(Z)^p\sum_{\substack{x\in q^{-p}\mathbb{Z}\\ 1\le x < q}}
x^{\gamma+Z}\f{\Phi}{\log_q x, Z}\frac{1}{q^p} \\
+\; &
Q(Z)^\Lambda \sum_{\substack{x\in q^{-\Lambda}\mathbb{Z}\\ 1\le x < q^{\nu}}}
x^{\gamma+Z}\f{\Phi}{\log_q x, Z}\frac{1}{q^\Lambda}.
\end{align*}
The inner sums are Riemann sums converging
to the corresponding integrals for $p\to\infty$.
We set
\begin{equation*}
I(u, Z)\coloneqq\int_{1}^{q^u}x^{\gamma+Z} \f{\Phi}{\log_q x, Z}\,\mathrm{d} x.
\end{equation*}
It will be convenient to change variables $x=q^w$ in $I(u, Z)$ to get
\begin{equation}\label{eq:Pseudo-Tauber:I-definition}
I(u, Z)=(\log q)\int_{0}^{u}Q(Z)^w \f{\Phi}{w, Z}\,\mathrm{d} w.
\end{equation}
We define the error~$\varepsilon_p(u, Z)$ by
\begin{equation*}
\sum_{\substack{x\in q^{-p}\mathbb{Z}\\
1\le x < q^u}}x^{\gamma+Z} \f{\Phi}{\log_q x, Z}\frac1{q^p}=I(u, Z) +
\varepsilon_{p}(u, Z).
\end{equation*}
As the sum and the integral are both analytic in $Z$, their difference
$\varepsilon_p(u, Z)$ is analytic in $Z$, too.
We bound~$\varepsilon_{p}(u, Z)$ by the difference of upper and lower
Darboux sums (step size~$q^{-p}$)
corresponding to the integral~$I(u, Z)$:
On each interval of length $q^{-p}$, the maximum and minimum of a
Hölder continuous function can differ by at most $\Oh{q^{-\alpha p}}$. As
the integration interval as well as the range for $u$ and $Z$ are finite, this translates to the bound
$\varepsilon_p(u, Z)=\Oh{q^{-\alpha p}}$ as $p\to\infty$
uniformly in $0\le u\le 1$ and $\abs{Z}<2r$. This results in
\begin{multline*}
L(N, Z)=
I(1, Z)\sum_{0\le p<\Lambda}Q(Z)^p
+ \sum_{0\le p<\Lambda}Q(Z)^p \varepsilon_{p}(1, Z)
\\
+ I(\nu, Z)\,Q(Z)^{\Lambda} + Q(Z)^\Lambda \varepsilon_{\Lambda}(\nu, Z).
\end{multline*}
If $\abs{Q(Z)}/q^\alpha=q^{\Re\gamma+1 + \Re Z -\alpha}<1$, i.e.,
$\Re \gamma+\Re Z<\alpha-1$,
the second sum involving the integration error
converges absolutely and uniformly in $Z$
for $\Lambda\to\infty$ to some analytic function
$c'(Z)$; therefore, we can
replace the second sum by
$c'(Z)+\Oh[\big]{q^{(\Re \gamma+1+2r-\alpha)\Lambda}}=c'(Z)+\Oh[\big]{N^{\Re\gamma+1+2r-\alpha}}$
in this case.
If $\Re \gamma + \Re Z>\alpha-1$, then the second sum is
$\Oh[\big]{q^{(\Re \gamma+2r+1-\alpha)\Lambda}}=\Oh[\big]{N^{\Re\gamma+1+2r-\alpha}}$.
By our choice of $r$, the case $\Re \gamma+\Re Z=\alpha-1$ cannot occur.
So in any case, since $2r<\alpha-\beta$, we may write the second sum as
$c'(Z)+\Oh[\big]{N^{\Re \gamma+1-\beta}}$.
The last summand involving $\varepsilon_{\Lambda}(\nu, Z)$ is absorbed by
the error term of the second summand.
Note that the error term is uniform in $Z$ and, by its construction,
analytic in~$Z$.
Thus we end up with
\begin{equation}\label{eq:Pseudo-Tauber:L-decomposition}
L(N, Z)= c'(Z) + S(N, Z) + \Oh[\big]{N^{\Re \gamma+1-\beta}}
\end{equation}
where
\begin{equation}\label{eq:pseudo-Tauber-S-definition}
S(N, Z)\coloneqq I(1, Z)\sum_{0\le
p<\Lambda}Q(Z)^p+I(\nu, Z)Q(Z)^\Lambda.
\end{equation}
It remains to rewrite $S(N, Z)$ in the form required by
\eqref{eq:pseudo-Tauber-relation}. We emphasise that we will compute $S(N, Z)$
exactly, i.e., no more asymptotics for $N\to\infty$ will play any rôle.
\proofparagraph{Construction of $\Psi$}
We will now rewrite the expression $S(N, Z)$ such that the generating
function~$\Psi$ (i.e., the fluctuations of the right-hand side
of~\eqref{eq:pseudo-Tauber-relation}) appears. After this, we
will gather properties of~$\Psi$ including
properties of its Fourier coefficients.
We rewrite~\eqref{eq:pseudo-Tauber-S-definition} as
\begin{align*}
S(N, Z)&=
I(1, Z)\frac{1-\f{Q}{Z}^\Lambda}{1-\f{Q}{Z}}
+ I(\nu, Z) \f{Q}{Z}^\Lambda.
\end{align*}
We replace
$\Lambda$ by $\log_q N - \nu$ and use
\begin{align*}
\f{Q}{Z}^\Lambda
&= \f{Q}{Z}^{\log_q N}\f{Q}{Z}^{-\nu}
= N^{\gamma+1+Z} \f{Q}{Z}^{-\nu}
\end{align*}
to get
\begin{equation}\label{eq:Pseudo-Tauber:S-decomposition}
S(N, Z)= \frac{I(1, Z)}{1-\f{Q}{Z}}
+ N^{\gamma+1+Z} \Psi(\nu, Z)
\end{equation}
with
\begin{equation}\label{eq:Pseudo-Tauber:Psi-definition}
\Psi(u, Z)\coloneqq \f{Q}{Z}^{-u}
\Bigl(I(u, Z)-\frac{I(1, Z)}{1-\f{Q}{Z}}\Bigr).
\end{equation}
\proofparagraph{Periodic Extension of $\Psi$}
A priori, it is not clear that the function~$\Psi(u, Z)$ defined above can
be extended to a periodic function (and therefore Fourier coefficients
can be computed later on). The aim now is to show that it is
possible to do so.
It is obvious that $\f{\Psi}{u, Z}$ is continuously differentiable in $u\in[0,
1]$.
We have
\begin{equation*}
\f{\Psi}{1, Z}=\frac{I(1, Z)}{\f{Q}{Z}}
\Bigl(1-\frac{1}{1-\f{Q}{Z}}\Bigr)
=-\frac{I(1, Z)}{1-\f{Q}{Z}}=\f{\Psi}{0, Z}
\end{equation*}
because $I(0, Z)=0$ by \eqref{eq:Pseudo-Tauber:I-definition}.
The derivative of $\f{\Psi}{u, Z}$ with respect to $u$ is
\begin{align*}
\frac{\partial \f{\Psi}{u,Z}}{\partial u}
&= -\bigl(\log\f{Q}{Z}\bigr) \f{\Psi}{u, Z}
+ (\log q) \f{Q}{Z}^{-u} \f{Q}{Z}^u \f{\Phi}{u, Z}\\
&= -\bigl(\log\f{Q}{Z}\bigr) \f{\Psi}{u, Z} + (\log q) \f{\Phi}{u, Z},
\end{align*}
which implies that
\begin{equation*}
\frac{\partial \f{\Psi}{u,Z}}{\partial u}\Bigr|_{u=1}=\frac{\partial \f{\Psi}{u,Z}}{\partial u}\Bigr|_{u=0}.
\end{equation*}
We can therefore extend $\f{\Psi}{u, Z}$ to a $1$-periodic continuously
differentiable function in $u$ on $\mathbb{R}$.
\proofparagraph{Fourier Coefficients of $\Psi$}
Knowing that~$\Psi$ is a periodic function, we can now head for
its Fourier coefficients and relate them to those of~$\Phi$.
By using equations~\eqref{eq:Pseudo-Tauber:Psi-definition} and
\eqref{eq:Pseudo-Tauber:I-definition}, $Q(Z)=q^{\gamma+1+Z}$, and
$\exp(-2\ell\pi iu)=q^{-\chi_\ell u}$ with $\chi_\ell=\frac{2\pi i\ell}{\log q}$, we now express the Fourier coefficients of $\f{\Psi}{u, Z}$ in terms of those of
$\f{\Phi}{u, Z}$ by
\begin{multline*}
\int_{0}^1 \f{\Psi}{u, Z} \exp(-2\ell\pi i u) \,\mathrm{d} u\\
\begin{aligned}
&=
(\log q)\int_{0\le w\le u\le 1}
\f{Q}{Z}^{w-u} \f{\Phi}{w, Z} q^{-\chi_\ell u} \,\mathrm{d} w\,\mathrm{d} u \\
&\phantom{=}\;
-\frac{I(1, Z)}{1-\f{Q}{Z}} \int_0^1
q^{-(\gamma+1+Z+\chi_\ell)u} \,\mathrm{d} u\\
&=
(\log q)\int_{0\le w\le 1} \f{Q}{Z}^w \f{\Phi}{w, Z}
\int_{w\le u\le 1} q^{-(\gamma+1+Z+\chi_\ell)u} \,\mathrm{d} u\,\mathrm{d} w \\
&\phantom{=}\;
-\frac{I(1, Z)}{(1-\f{Q}{Z})(\log q)(\gamma+1+Z+\chi_\ell)}
\Bigl(1-\frac{1}{\f{Q}{Z}}\Bigr)\\
&=
\frac{1}{\gamma+1+Z+\chi_\ell}
\int_0^1 \f{Q}{Z}^w \f{\Phi}{w, Z}
\Bigl(q^{-(\gamma+1+Z+\chi_\ell)w}-\frac1{\f{Q}{Z}}\Bigr)
\,\mathrm{d} w \\
&\phantom{=}\;
+ \frac{I(1, Z)}{\f{Q}{Z}(\log q)(\gamma+1+Z+\chi_\ell)}\\
&=
\frac{1}{\gamma+1+\chi_\ell+Z}
\int_0^1 \f{\Phi}{w, Z} \fexp{-2\ell\pi i w} \,\mathrm{d} w\\
&\phantom{=}\;
-\frac{1}{\f{Q}{Z} (\gamma+1+\chi_\ell+Z)}
\int_0^1 \f{Q}{Z}^w \f{\Phi}{w, Z} \,\mathrm{d} w\\
&\phantom{=}\;
+ \frac{I(1, Z)}{\f{Q}{Z}(\log q)(\gamma+1+Z+\chi_\ell)}.
\end{aligned}
\end{multline*}
The second and third summands cancel, and we get
\begin{equation}\label{eq:Pseudo-Tauber:Fourier-Coefficients-GF}
\Bigl(\gamma+1+\chi_\ell + Z\Bigr)
\int_{0}^1 \f{\Psi}{u, Z}\exp(-2\ell\pi i u)\,\mathrm{d} u =
\int_0^1\f{\Phi}{w, Z}
\exp(-2\ell\pi i w)\,\mathrm{d} w.
\end{equation}
\proofparagraph{Extracting Coefficients}
So far, we have proven everything in terms of generating functions.
We now extract the coefficients of these power series which will
give us the result claimed in Proposition~\ref{proposition:pseudo-Tauber}.
By~\eqref{eq:Pseudo-Tauber:Psi-definition}, $\f{\Psi}{u, Z}$ is analytic in $Z$
for $0<\abs{Z}<2r$. If $q^{\gamma+1}\neq 1$, then it is analytic in $Z=0$, too. If
$q^{\gamma+1}=1$, then~\eqref{eq:Pseudo-Tauber:Psi-definition} implies that $\f{\Psi}{u, Z}$
might have a simple pole in $Z=0$.
Note that all other possible poles have been excluded by our choice of $r$.
For $j\ge -1$, we write
\begin{equation*}
\Psi_j(u)\coloneqq [Z^j]\f{\Psi}{u, Z}
\end{equation*}
and use Cauchy's formula to obtain
\begin{equation*}
\Psi_j(u) = \frac1{2\pi i}\oint_{\abs{Z}=r}\frac{\f{\Psi}{u, Z}}{Z^{j+1}}\,\mathrm{d} Z.
\end{equation*}
This and the properties of $\f{\Psi}{u, Z}$ established above
imply that $\Psi_j$ is a $1$-periodic continuously differentiable function.
Inserting \eqref{eq:Pseudo-Tauber:S-decomposition}
in~\eqref{eq:Pseudo-Tauber:L-decomposition} and extracting the coefficient of
$Z^{m-1}$ using Cauchy's theorem and the analyticity of the error in $Z$ yields~\eqref{eq:pseudo-Tauber-relation}
with $c=[Z^{m-1}]\bigl(c'(Z) + \frac{I(1, Z)}{1-\f{Q}{Z}}\bigr)$.
Rewriting
\eqref{eq:Pseudo-Tauber:Fourier-Coefficients-GF} in terms of $\Psi_j$ and $\Phi_j$ leads to~\eqref{eq:pseudo-Tauber-Fourier}.
Note that we have to add $\Oh{Z^m}$ in~\eqref{eq:pseudo-Tauber-Fourier} to
compensate for the fact that we do not include $\psi_{j\ell}$ for $j\ge m$.
\end{proof}
We prove a uniqueness result.
\begin{lemma}\label{lemma:uniqueness-fluctuations}
Let $m$ be a positive integer, $q>1$ be a real number, $\gamma\in\mathbb{C}$ such
that $\gamma\notin \frac{2\pi i}{\log q}\mathbb{Z}$, $c\in\mathbb{C}$, and $\Psi_0$, \ldots, $\Psi_{m-1}$ and $\Xi_0$,
\ldots, $\Xi_{m-1}$ be $1$-periodic continuous functions such that
\begin{equation}\label{eq:Fourier:function-comparison}
\sum_{0\le k<m}(\log_qN)^k\Psi_k(\log_q N) = \sum_{0\le k<m}(\log_q N)^k
\Xi_k(\log_q N) + c N^{-\gamma} + \oh{1}
\end{equation}
for integers $N\to\infty$. Then $\Psi_k=\Xi_k$ for $0\le k<m$.
\end{lemma}
\begin{proof}If $\Re \gamma <0$ and $c\neq 0$, then
\eqref{eq:Fourier:function-comparison} is impossible as the growth of the
right-hand side of the equation is larger than that on the left-hand side.
So we can exclude this
case from further consideration.
We proceed indirectly and choose $k$ maximally such that $\Xi_k\neq\Psi_k$.
Dividing \eqref{eq:Fourier:function-comparison} by $(\log_q N)^k$ yields
\begin{equation}\label{eq:comparison}
(\Xi_k-\Psi_k)(\log_q N) = cN^{-\gamma}\iverson{k=0} + \oh{1}
\end{equation}
for $N\to\infty$. Let
$0< u<1$ and set $N_j=\floor{q^{j+u}}$. We
clearly have $\lim_{j\to\infty} N_j=\infty$. Then
\begin{equation*}
j+u + \log_q(1-q^{-j-u}) = \log_q(q^{j+u}-1)\le \log_q N_j \le j+u.
\end{equation*}
We define $\nu_j\coloneqq \log_q N_j-j-u$ and see that $\nu_j=\Oh{q^{-j}}$ for
$j\to\infty$, i.e., $\lim_{j\to\infty} \nu_j = 0$.
This implies that $\lim_{j\to\infty}\fractional{\log_q N_j}=u$ and therefore
\begin{equation*}
\lim_{j\to\infty}(\Xi_k-\Psi_k)(\log_q N_j)=\lim_{j\to\infty}(\Xi_k-\Psi_k)(\fractional{\log_q N_j})=\Xi_k(u)-\Psi_k(u).
\end{equation*}
Setting $N=N_j$ in \eqref{eq:comparison} and letting $j\to \infty$ shows that
\begin{equation}\label{eq:comparison-limit}
\Xi_k(u)-\Psi_k(u) = \lim_{j\to\infty}cN_j^{-\gamma}\iverson{k=0}.
\end{equation}
If $k\neq 0$ or $\Re \gamma>0$, we immediately conclude that
$\Xi_k(u)-\Psi_k(u)=0$. If $\Re \gamma<0$ we have
$c=0$, which again implies that $\Xi_k(u)-\Psi_k(u)=0$.
Now we assume that $\Re \gamma=0$ and $k=0$. We set
$\beta\coloneqq -\frac{\log q}{2\pi i}\gamma$, which implies that
$N^{-\gamma}=\exp(2\pi i \beta\log_q N)$. We choose sequences
$(r_\ell)_{\ell\ge 1}$ and $(s_\ell)_{\ell\ge 1}$ such that
$\lim_{\ell\to\infty }s_\ell=\infty$ and $\lim_{\ell\to\infty}\abs{s_\ell
\beta - r_\ell}=0$: For rational $\beta=r/s$, we simply take $r_\ell=\ell
r$ and $s_\ell=\ell s$, and for irrational $\beta$, we consider the sequence of
convergents $(r_\ell/s_\ell)_{\ell\ge 1}$ of the continued fraction of
$\beta$ and the required properties follow from the theory of continued
fractions; see for example \cite[Theorems~155
and~164]{Hardy-Wright:1975}. By using $\log_q N_j = j+u+\nu_j$, we get
\begin{align*}
\lim_{\ell\to\infty}N_{s_\ell}^{-\gamma} &= \lim_{\ell\to\infty}\fexp[\big]{2\pi i
(r_\ell + \beta u + (s_\ell \beta - r_\ell) + \beta \nu_{s_\ell})}=\exp(2\pi i \beta u),\\
\lim_{\ell\to\infty}N_{s_\ell+1}^{-\gamma} &= \lim_{\ell\to\infty}\fexp[\big]{2\pi i
(r_\ell + \beta + \beta u + (s_\ell \beta - r_\ell)+\beta \nu_{s_\ell+1})}=\fexp[\big]{2\pi i \beta (1+u)}.
\end{align*}
These two limits are distinct as $\beta\notin\mathbb{Z}$ by assumption.
Thus $\lim_{j\to\infty}N_j^{-\gamma}$ does not exist. Therefore,
\eqref{eq:comparison-limit} implies that $c=0$ and therefore $\Xi_k(u)-\Psi_k(u)=0$.
We proved that $\Xi_k(u)=\Psi_k(u)$ for $u\notin\mathbb{Z}$. By continuity, this
also follows for all $u \in \mathbb{R}$; contradiction.
\end{proof}
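\begin{remark}
The assumption $\gamma\notin\frac{2\pi i}{\log q}\mathbb{Z}$ cannot be dropped:
for $\gamma=\frac{2\ell_0\pi i}{\log q}$ with $\ell_0\in\mathbb{Z}$, we have
$N^{-\gamma}=\fexp{-2\ell_0\pi i\log_q N}$, which is itself a $1$-periodic
continuous fluctuation in $\log_q N$. Therefore, choosing
$\Psi_0(u)=\Xi_0(u)+c\fexp{-2\ell_0\pi i u}$ with $c\neq 0$
satisfies~\eqref{eq:Fourier:function-comparison} although
$\Psi_0\neq\Xi_0$, so uniqueness fails.
\end{remark}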
\subsection{Proof of Theorem~\ref{theorem:use-Mellin--Perron}}
We again start with an outline of the proof.
\begin{proof}[Overview of the Proof of Theorem~\ref{theorem:use-Mellin--Perron}]
The idea is to compute the repeated summatory function of $F$
twice: On the one hand, we use the pseudo-Tauberian Proposition~\ref{proposition:pseudo-Tauber} to rewrite the
right-hand side of \eqref{eq:F-N-periodic} in terms of periodic
functions~$\Psi_{aj}$. On the other hand, we compute it using a higher
order Mellin--Perron summation
formula, relating it to the singularities of $\mathcal{F}$.
More specifically, the expansions at the singularities of $\mathcal{F}$ give
the Fourier coefficients of $\Psi_{aj}$. The Fourier coefficients of the functions
$\Psi_{aj}$ are related to those of the functions $\Phi_j$ via~\eqref{eq:pseudo-Tauber-Fourier}.
\end{proof}
We now turn to the actual proof.
\begin{proof}[Proof of Theorem~\ref{theorem:use-Mellin--Perron}]
\proofparagraph{Initial observations and notations}
As $\Phi_j$ is Hölder continuous, its Fourier series converges by Dini's
criterion; see, for example, \cite[p.~52]{Zygmund:2002:trigon}.
For any sequence $g$ on $\mathbb{Z}_{>0}$, we set $(\mathcal{S} g)(N)\coloneqq \sum_{1\le n< N}g(n)$.
We set $A=1 + \max\set{\floor{\eta}, 0}$. In particular, $A$ is a positive
integer with $A>\eta$.
\proofparagraph{Asymptotic Summation}
We first compute the $A$th repeated summatory function~$\mathcal{S}^A F$
of~$F$ (i.e., the $(A+1)$th repeated summatory function $\mathcal{S}^{A+1} f$ of
the function~$f$) by applying Proposition~\ref{proposition:pseudo-Tauber} $A$ times.
This results in an asymptotic expansion involving new periodic fluctuations
while keeping track of the relation between the Fourier coefficients of
the original fluctuations and those of the new fluctuations.
A simple induction based on~\eqref{eq:F-N-periodic} and using
Proposition~\ref{proposition:pseudo-Tauber}
shows that
there exist
$1$-periodic continuous functions $\Psi_{aj}$ for $a\ge 0$ and $-1\le j<m$
and some constants $c_{ab}$ for $0\le b<a$ such that
\begin{equation}\label{eq:S-a+1-f-asymptotic}
(\mathcal{S}^{a+1} f)(N) = \sum_{0\le b<a}c_{ab}N^b +
N^{\gamma+a}\sum_{\substack{j+k=m-1\\-1\le j<m}} \frac{(\log N)^k}{k!}
\Psi_{aj}(\fractional{\log_q N}) + \Oh{N^{\gamma_0+a}}
\end{equation}
for integers $N\to\infty$. In fact, $\Psi_{0j}=\Phi_j$ for
$0\le j<m$. For $a\ge 1$ and $-1\le j<m$, $\Psi_{aj}$ is continuously differentiable.
Note that the case $q^{\gamma+a+1}=1$ occurs for at most one $0\le a<A$,
which implies that the number of non-vanishing fluctuations increases at most
once in the applications of Proposition~\ref{proposition:pseudo-Tauber}.
Also note that the assumption $\alpha>\Re \gamma-\gamma_0$ implies that the
error terms arising in the application of
Proposition~\ref{proposition:pseudo-Tauber} are absorbed by the error term
stemming from~\eqref{eq:F-N-periodic}.
We denote the corresponding Fourier coefficients by
\begin{equation*}
\psi_{aj\ell}\coloneqq \int_{0}^1 \Psi_{aj}(u)\exp(-2\ell\pi i u)\,\mathrm{d} u
\end{equation*}
for $0\le a\le A$, $-1\le j<m$, $\ell\in\mathbb{Z}$. By~\eqref{eq:pseudo-Tauber-Fourier}
the generating functions of the Fourier coefficients fulfil
\begin{equation*}
\sum_{-1\le j<m}\psi_{aj\ell}Z^j = (\gamma+a+1+\chi_\ell + Z)\sum_{-1\le j<m}\psi_{(a+1)j\ell}Z^j
+\Oh{Z^m}
\end{equation*}
for $0\le a<A$,
$\ell\in\mathbb{Z}$ and $Z\to 0$. Iterating this recurrence
yields
\begin{equation}\label{eq:Fourier:Fourier-coefficient-recursion-full}
\sum_{0\le j<m}\psi_{0j\ell}Z^j = \biggl(\prod_{1 \le a \le A} (\gamma+a+\chi_\ell + Z)\biggr)\sum_{-1\le j<m}\psi_{Aj\ell}Z^j
+\Oh{Z^m}
\end{equation}
for $\ell\in\mathbb{Z}$ and $Z\to 0$.
\proofparagraph{Explicit Summation}
We now compute $\mathcal{S}^{A+1} f$ explicitly with the aim of decomposing it into
one part which can be computed by the $A$th order Mellin--Perron summation
formula and another part which is smaller and can be absorbed by an error term.
Explicitly, we have
\begin{equation*}
(\mathcal{S}^{a+1}f)(N) = \sum_{1\le n_1<n_2<\cdots<n_{a+1}<N}f(n_1)
= \sum_{1\le n<N}f(n)\sum_{n<n_2<\cdots<n_{a+1}<N}1
\end{equation*}
for $0\le a \le A$. Note that we formally write the outer sum over the range
$1\le n<N$ although the inner sum is empty (i.e., equals~$0$) for $n\ge N-a$; this will be useful
later on. The inner sum counts the number of selections of $a$ elements out
of $\set{n+1,\ldots, N-1}$, thus we have
\begin{equation}\label{eq:Fourier:explicit-summation}
(\mathcal{S}^{a+1}f)(N) = \sum_{1\le n< N}\binom{N-n-1}{a}f(n)=\sum_{1\le n< N}\frac1{a!}(N-n-1)^{\underline{a}}f(n)
\end{equation}
for $0\le a\le A$ and falling factorials
$z^{\underline{a}}\coloneqq z(z-1)\dotsm (z-a+1)$.
The polynomials $\frac1{a!}(U-1)^{\underline a}$, $0\le a\le A$, are clearly a basis of
the space of polynomials in $U$ of degree at most $A$. Thus, there exist
rational numbers $b_0$, \ldots, $b_A$ such that
\begin{equation*}
\frac{U^A}{A!}=\sum_{0 \le a \le A} \frac{b_a}{a!} (U-1)^{\underline{a}}.
\end{equation*}
Comparing the coefficients of $U^A$ shows that $b_A=1$. Substitution of
$U$ by $N-n$, multiplication by $f(n)$ and summation over $1\le n<N$
yield
\begin{equation*}
\frac1{A!}\sum_{1\le n<N}(N-n)^A f(n) = \sum_{0 \le a \le A} b_a (\mathcal{S}^{a+1}f)(N)
\end{equation*}
by~\eqref{eq:Fourier:explicit-summation}. When inserting the asymptotic
expressions from \eqref{eq:S-a+1-f-asymptotic}, the summands involving
fluctuations for $0\le a<
A$ are absorbed by the error term~$\Oh{N^{\gamma_0+A}}$
of the summand for $a=A$ because $\Re\gamma - \gamma_0 < 1$. Thus there are
some constants $c_b$ for $0\le b<A$ such that
\begin{multline}\label{eq:Mellin-Perron-sum}
\frac1{A!}\sum_{1\le n<N}(N-n)^A f(n) = \sum_{0\le b<A}c_{b}N^b \\+
N^{\gamma+A}\sum_{\substack{j+k=m-1\\-1\le j<m}} \frac{(\log N)^k}{k!}
\Psi_{Aj}(\fractional{\log_q N}) + \Oh{N^{\gamma_0+A}}
\end{multline}
for integers $N\to\infty$.
If $\gamma+A=b+\chi_{\ell'}$ for some $0\le b<A$ and $\ell'\in\mathbb{Z}$, then we
assume without loss of generality that $c_{b}=0$: Otherwise, we replace
$\Psi_{A(m-1)}(u)$ by $\Psi_{A(m-1)}(u) + c_{b}\exp(-2\ell'\pi i u)$ and
$c_{b}$ by $0$. Both~\eqref{eq:Mellin-Perron-sum} and
\eqref{eq:Fourier:Fourier-coefficient-recursion-full} remain intact: The
former trivially, the latter because the factor for $a=A-b$
in~\eqref{eq:Fourier:Fourier-coefficient-recursion-full} equals
$\gamma+A-b-\chi_{\ell'} + Z=Z$, which compensates for
the fact that the Fourier coefficient $\psi_{A(m-1)(-\ell')}$ is modified.
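As an illustration of the basis representation above, for $A=1$ we have
$U=1+(U-1)$, i.e., $b_0=b_1=1$, and \eqref{eq:Fourier:explicit-summation}
indeed yields
\begin{equation*}
\sum_{1\le n<N}(N-n)f(n)
= \sum_{1\le n<N}f(n) + \sum_{1\le n<N}(N-n-1)f(n)
= (\mathcal{S} f)(N) + (\mathcal{S}^2 f)(N).
\end{equation*}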
\proofparagraph{Mellin--Perron summation}
We use the $A$th order Mellin--Perron summation formula to write the main contribution
of $\mathcal{S}^{A+1} f$ as determined above in terms of new periodic fluctuations $\Xi_j$ whose
Fourier coefficients are expressed in terms of residues of a suitably
modified version of the Dirichlet generating function $\mathcal{F}$.
Without loss of generality, we assume that $\sigma_{\mathrm{abs}}>0$: The growth
condition~\eqref{eq:Dirichlet-order} trivially holds with $\eta=0$ on the
right of the abscissa of absolute convergence of the Dirichlet series.
By the $A$th order Mellin--Perron summation
formula (see \cite[Theorem~2.1]{Flajolet-Grabner-Kirschenhofer-Prodinger:1994:mellin}), we have
\begin{equation*}
\frac1{A!}\sum_{1\le n<N}(N-n)^A f(n) = \frac1{2\pi
i}\int_{\sigma_{\mathrm{abs}}+1-i\infty}^{\sigma_{\mathrm{abs}}+1+i\infty} \frac{\mathcal{F}(s) N^{s+A}}{s(s+1)\dotsm(s+A)}\,\mathrm{d} s
\end{equation*}
with the arbitrary choice $\sigma_{\mathrm{abs}}+1>\sigma_{\mathrm{abs}}$ for the real part of
the line of integration.
The growth condition~\eqref{eq:Dirichlet-order} allows us to shift the
line of integration to the left such that
\begin{align*}
\frac1{A!}\sum_{1\le n<N}&(N-n)^A f(n) \\ &=
\sum_{\ell\in\mathbb{Z}}
\Res[\Big]{\frac{\mathcal{F}(s)N^{s+A}}{s(s+1)\dotsm (s+A)}}%
{s=\gamma+\chi_\ell}\\
&\phantom{=}\hspace*{0.65em}+ \sum_{0\le a\le \min\{-\gamma_0, A\}}(-1)^a\frac{\mathcal{F}(-a)}{a!(A-a)!}N^{A-a}\iverson[\Big]{\gamma\notin -a+\frac{2\pi i}{\log q}\mathbb{Z}}\\
&\phantom{=}\hspace*{0.65em}+ \frac1{2\pi
i}\int_{\gamma_0-i\infty}^{\gamma_0+i\infty} \frac{\mathcal{F}(s) N^{s+A}}{s(s+1)\dotsm (s+A)}\,\mathrm{d} s.
\end{align*}
The summand for~$a$ in the second term corresponds to a possible pole at $s=-a$ which is not taken care of in the first sum; note that $\mathcal{F}(s)$ is analytic at $s=-a$
in this case
by assumption because of~$\gamma_0<-a$.
We now compute the residue at $s=\gamma+\chi_\ell$. We use
\begin{equation*}
N^{s+A} = N^{\gamma+A+\chi_\ell}\sum_{k\ge 0}\frac{(\log N)^k}{k!} (s-\gamma-\chi_\ell)^k
\end{equation*}
to split up the residue as
\begin{equation*}
\Res[\Big]{\frac{\mathcal{F}(s)N^{s+A}}{s(s+1)\dotsm(s+A)}}{s=\gamma+\chi_\ell} =
N^{\gamma+A+\chi_\ell}\sum_{\substack{k+j=m-1\\-1\le j<m}}\frac{(\log N)^k}{k!} \xi_{j\ell}
\end{equation*}
with
\begin{equation}\label{eq:Fourier:xi-as-residue}
\xi_{j\ell} =
\Res[\Big]{\frac{\mathcal{F}(s)(s-\gamma-\chi_\ell)^{m-1-j}}{s(s+1)\dotsm(s+A)}}{s=\gamma+\chi_\ell}
\end{equation}
for $j\ge -1$.
Note that we allow $j=-1$ for the case of $\gamma\in -a+\frac{2\pi i}{\log q}\mathbb{Z}$
for some $1\le a\le A$ when
$\mathcal{F}(s)/\bigl(s\dotsm (s+A)\bigr)$ might have a pole of order $m+1$ at
$s=-a$.
Using the growth condition~\eqref{eq:Dirichlet-order} and the
choice of~$A$ yields
\begin{equation}\label{eq:Fourier:growth-frac}
\frac{\mathcal{F}(s)}{s(s+1)\dotsm(s+A)}
= \Oh[\big]{\abs{\Im s}^{-1-A+\eta}} = \oh[\big]{\abs{\Im s}^{-1}}
\end{equation}
for $\abs{\Im s}\to\infty$ and $s$ which are at least a distance~$\delta$
away from the poles~$\gamma+\chi_\ell$.
By writing the residue in~\eqref{eq:Fourier:xi-as-residue}
in terms of an integral over a rectangle around
$s=\gamma+\chi_\ell$ (distance again at least~$\delta$ away from $\gamma+\chi_\ell$),
we see that \eqref{eq:Fourier:growth-frac} implies
\begin{equation}\label{eq:Fourier:psi-growth}
\xi_{j\ell} = \Oh[\big]{\abs{\ell}^{-1-A+\eta}} = \oh[\big]{\abs{\ell}^{-1}}
\end{equation}
for $\abs{\ell}\to\infty$. Moreover,
by~\eqref{eq:Fourier:growth-frac}, we see that
\begin{equation*}
\frac1{2\pi i} \int_{\gamma_0-i\infty}^{\gamma_0+i\infty}
\frac{\mathcal{F}(s) N^{s+A}}{s(s+1)\dotsm(s+A)}\,\mathrm{d} s
= \Oh{N^{\gamma_0+A}}.
\end{equation*}
Thus we proved that
\begin{multline}\label{eq:calculate-Fourier-first}
\frac1{A!}\sum_{1\le n<N}(N-n)^A f(n) = N^{\gamma+A}\sum_{\substack{k+j=m-1\\-1\le
j<m}} \frac{(\log N)^k}{k!}
\Xi_j(\log_q N) \\
+ \sum_{0\le a\le\min\{-\gamma_0,A\}}(-1)^a\frac{\mathcal{F}(-a)}{a!(A-a)!}N^{A-a}\iverson[\Big]{\gamma\notin -a+\frac{2\pi i}{\log q}\mathbb{Z}}+ \Oh{N^{\gamma_0+A}}
\end{multline}
for
\begin{equation}\label{eq:Psi-tilde-k-definition}
\Xi_j(u) =\sum_{\ell\in\mathbb{Z}}\xi_{j\ell} \exp(2\ell\pi i u)
\end{equation}
where the $\xi_{j\ell}$ are given in \eqref{eq:Fourier:xi-as-residue}.
By \eqref{eq:Fourier:psi-growth}, the Fourier series
\eqref{eq:Psi-tilde-k-definition} converges uniformly and absolutely. This
implies that $\Xi_j$ is a $1$-periodic continuous function.
\proofparagraph{Fourier Coefficients}
We will now compare the two asymptotic expressions for $\mathcal{S}^{A+1} f$ obtained so far
to see that the fluctuations coincide. We know explicit expressions for the
Fourier coefficients of the $\Xi_j$ in terms of residues, and we know how
the Fourier coefficients of the fluctuations of the repeated summatory
function are related to the Fourier coefficients of the fluctuations of $F$.
Therefore, we are able to compute the latter.
By
\eqref{eq:Mellin-Perron-sum}, \eqref{eq:calculate-Fourier-first},
elementary asymptotic considerations for the terms $N^b$ with $b>\Re \gamma+A$,
Lemma~\ref{lemma:uniqueness-fluctuations} and the fact that $c_{b}=0$ if
$b\in \gamma+A+\frac{2\pi i}{\log q}\mathbb{Z}$ for some $0\le b<A$, we see
that $\Xi_j=\Psi_{Aj}$ for $-1\le j<m$. This immediately implies that $\mathcal{F}(0)=0$ if
$\gamma_0<0$ and $\gamma\notin\frac{2\pi i}{\log q}\mathbb{Z}$.
To compute the Fourier coefficients $\psi_{Aj\ell}=\xi_{j\ell}$, we set
$Z\coloneqq s-\gamma-\chi_\ell$ to rewrite~\eqref{eq:Fourier:xi-as-residue}
using \eqref{eq:Fourier:F-s-principal-part} as
\begin{equation*}
\psi_{Aj\ell}=[Z^{-1}]
\frac{\sum_{b\ge 0}\varphi_{b\ell}Z^{b-j-1}}{\prod_{1 \le a \le A} (\gamma+a+\chi_\ell+Z)}
=[Z^{j}]
\frac{\sum_{b\ge 0}\varphi_{b\ell} Z^{b}}{\prod_{1 \le a \le A} (\gamma+a+\chi_\ell+Z)}
\end{equation*}
for $-1\le j<m$ and $\ell\in\mathbb{Z}$.
This is equivalent to
\begin{equation*}
\sum_{-1\le j<m}\psi_{Aj\ell}Z^j=\frac{\sum_{j\ge 0}\varphi_{j\ell}
Z^{j}}{\prod_{1 \le a \le A} (\gamma+a+\chi_\ell+Z)} + \Oh{Z^{m}}
\end{equation*}
for $\ell\in\mathbb{Z}$ and $Z\to 0$. Clearing the denominator and
using~\eqref{eq:Fourier:Fourier-coefficient-recursion-full} as announced in Remark~\ref{remark:recurrence-fluctuation} lead to
\begin{equation*}
\sum_{0\le j< m}\psi_{0j\ell}
Z^{j}=\sum_{j\ge 0}\varphi_{j\ell}
Z^{j} + \Oh{Z^{m}}
\end{equation*}
for $\ell\in\mathbb{Z}$ and $Z\to 0$. Comparing coefficients shows that
$\psi_{0j\ell}=\varphi_{j\ell}$ for $0\le j<m$ and $\ell\in\mathbb{Z}$.
This proves~\eqref{eq:Fourier:fluctuation-as-Fourier-series}.
\end{proof}
\section{Proof of Theorem~\ref{theorem:simple}}\label{section:proof-theorem-simple}
\begin{proof}[Proof of Theorem~\ref{theorem:simple}]
By Remark~\ref{remark:regular-sequence-as-a-matrix-product}, we have
$x(n)=e_1 f(n)v(0)$. If $v(0)=0$, there is nothing to show.
Otherwise, as observed in Section~\ref{section:q-regular-matrix-product},
$v(0)$ is a right eigenvector of $A_0$ associated to the eigenvalue $1$.
As a consequence, $Kv(0)$, $\vartheta_m v(0)$ and $\vartheta v(0)$ all vanish.
Therefore, \eqref{eq:formula-X-n} follows from Theorem~\ref{theorem:main}
by multiplication by $e_1$ and $v(0)$ from left and right, respectively. Note
that the notation is somewhat different: Instead of powers $(\log_q N)^k$ in
Theorem~\ref{theorem:main} we write $(\log N)^k/k!$ here.
The functional equation \eqref{eq:functional-equation-V} follows from
Theorem~\ref{theorem:Dirichlet-series} for $n_0=1$ by multiplication from right
by $v(0)$.
For computing the Fourier coefficients, we denote the rows of $T$ by $w_1$,
\ldots, $w_d$. Thus $w_a$ is a generalised left eigenvector of $C$ of some
order $m_a$ associated to some eigenvalue $\lambda_a$ of $C$. We can write
$e_1=\sum_{1 \le a \le d} c_a w_a$ for some suitable constants $c_1$, \ldots, $c_d$.
For $1\le a\le d$, we consider the sequence~$h_a$ on $\mathbb{Z}_{>0}$
with
\begin{equation*}
h_a(n)=w_a\bigl(v(n)+v(0)\iverson{n=1}\bigr).
\end{equation*}
The reason for incorporating
$v(0)$ into the value for $n=1$ is that the corresponding Dirichlet series $\mathcal{H}^{(a)}(s)\coloneqq \sum_{n\ge
1}n^{-s}h_a(n)$ only takes values at $n\ge 1$ into account. By definition, we
have $\mathcal{H}^{(a)}(s)=w_av(0) + w_a\mathcal{V}(s)$. Taking the linear combination yields
$\sum_{1 \le a \le d} c_a\mathcal{H}^{(a)}(s)=x(0) + \mathcal{X}(s)$. We choose
$\gamma_0> \log_q R$ such that there are no eigenvalues
$\lambda\in\sigma(C)$ with $\log_q R<\log_q\lambda\le \gamma_0$ and such
that $\gamma_0\notin \mathbb{Z}_{\le 0}$.
By
Theorem~\ref{theorem:contribution-of-eigenspace}, we have
\begin{equation}\label{eq:simple:sum-lambda_a}
\sum_{1\le n<N}h_a(n) = N^{\log_q \lambda_a}\sum_{0\le k<m_a}\frac{(\log N)^k}{k!}
\Psi_{ak}(\fractional{\log_q N}) + \Oh{N^{\gamma_0}}
\end{equation}
for $N\to\infty$ for suitable 1-periodic Hölder continuous functions $\Psi_{ak}$
(which vanish if $\abs{\lambda_a}\le R$). By
Theorem~\ref{theorem:Dirichlet-series}, the Dirichlet
series $\mathcal{H}^{(a)}(s)$ is meromorphic for $\Re s>\gamma_0$ with possible
poles at $s=\log_q \lambda_a + \chi_\ell$ for $\ell\in\mathbb{Z}$.
The sequence $h_a$ satisfies
the prerequisites of Theorem~\ref{theorem:use-Mellin--Perron}, either with
$\gamma=\log_q \lambda_a$ if $\Re \log_q \lambda_a>\gamma_0$ or with
arbitrary real $\gamma>\gamma_0$ and $\Phi_j=0$ for all $j$
if $\Re \log_q \lambda_a \le \gamma_0$. The theorem then
implies that
\begin{equation}\label{claim-H-a=0}
\mathcal{H}^{(a)}(0) = 0
\end{equation}
if $\gamma_0<0$ and $\lambda_a\neq 1$.
If $\abs{\lambda_a}>R$,
Theorem~\ref{theorem:use-Mellin--Perron} also yields
\begin{equation*}
\Psi_{ak}(u)=\sum_{\ell\in\mathbb{Z}}\psi_{ak\ell}\exp(2\pi i\ell u)
\end{equation*}
where the $\psi_{ak\ell}$ are given by the singular expansion
\begin{equation}\label{eq:proof-theorem-simple-local-expansion}
\frac{\mathcal{H}^{(a)}(s)}{s}\asymp\sum_{\ell\in\mathbb{Z}}\sum_{0\le k<m_a}\frac{\psi_{ak\ell}}{(s-\log_q\lambda_a-\chi_\ell)^{k+1}}
\end{equation}
for $\Re s>\gamma_0$. Note that~\eqref{claim-H-a=0} ensures that there is no
additional pole at $s=0$ when $\gamma_0<0$ and $\lambda_a\neq 1$. Also note
that in comparison to Theorem~\ref{theorem:use-Mellin--Perron}, $\Phi_{m_a-1-k}$
there corresponds to $\Psi_{ak}$ here.
We now have to relate the results obtained for the sequences $h_a$ with the
results claimed for the original sequence $f$.
For $\lambda\in\sigma(C)$ with $\abs{\lambda}>R$, we have
\begin{equation*}
\Phi_{\lambda k}(u)=\sum_{\substack{1\le a\le d\\\lambda_a=\lambda}}c_a\Psi_{ak}(u).
\end{equation*}
We denote the Fourier coefficients of $\Phi_{\lambda k}$ by $\varphi_{\lambda
k\ell}$ for $\ell\in\mathbb{Z}$ and will show that these Fourier coefficients
actually fulfil~\eqref{eq:Fourier-coefficient:simple}. Taking linear
combinations of~\eqref{eq:proof-theorem-simple-local-expansion} shows that
\begin{equation*}
\sum_{\substack{1\le a\le d\\\lambda_a=\lambda}}\frac{c_a\mathcal{H}^{(a)}(s)}{s}
\asymp \sum_{\ell\in\mathbb{Z}}\sum_{0\le k<m(\lambda)}\frac{\varphi_{\lambda k\ell}}{(s-\log_q\lambda-\chi_\ell)^{k+1}}
\label{eq:residue-with-condition}
\end{equation*}
for $\Re s>\gamma_0$.
Summing over all $\lambda\in\sigma(C)$ yields~\eqref{eq:Fourier-coefficient:simple}
because summands $\lambda$ with $\abs{\lambda}\le R$ are analytic for $\Re
s>\gamma_0$ and do therefore not contribute to the right-hand side.
\end{proof}
It might seem to be somewhat artificial that
Theorem~\ref{theorem:use-Mellin--Perron} is used to prove that
$\mathcal{H}^{(j)}(0)=0$ in some of the cases above. In fact, this can also be shown
directly using the linear representation; we formulate and prove this
in the following remark.
\begin{remark}
With the notations of the proof of Theorem~\ref{theorem:simple},
$\mathcal{H}^{(j)}(0)=0$ if $\lambda_j\neq 1$ and $R<1$ can also be shown using the
functional equation~\eqref{eq:functional-equation-V}.
\end{remark}
\begin{proof}
We prove this by induction on $m_j$. By definition of $T$, we have
$w_j(C-\lambda_j I)=\iverson{m_j>1}w_{j+1}$. (We have $m_d=1$; thus $w_{d+1}$
does not actually occur.)
If $m_j>1$, then $\mathcal{H}^{(j+1)}(0)=0$ by induction hypothesis.
We add $(I-q^{-s})\,v(0)$ to \eqref{eq:functional-equation-V} and get
\begin{align*}
\bigl(I-q^{-s}C\bigr)\bigl(v(0)+\mathcal{V}(s)\bigr) = \bigl(I-q^{-s}C\bigr)v(0)
&+ \sum_{1 \le n < q} n^{-s}v(n) \\
&+ q^{-s}\sum_{0 \le r < q} A_r
\sum_{k\ge 1}\binom{-s}{k}\Bigl(\frac r q\Bigr)^k \mathcal{V}(s+k).
\end{align*}
Multiplication by $w_j$ from the left yields
\begin{align*}
\bigl(1-q^{-s}\lambda_j\bigr)\mathcal{H}^{(j)}(s) &= \iverson{m_j>1}\,q^{-s}\mathcal{H}^{(j+1)}(s) \\
&\phantom{{}={}}+ w_j \bigl(I - q^{-s}C\bigr)v(0)
+ w_j\sum_{1 \le n < q} n^{-s}v(n) \\
&\phantom{{}={}}+ w_jq^{-s}\sum_{0 \le r < q} A_r
\sum_{k\ge 1}\binom{-s}{k}\Bigl(\frac r q\Bigr)^k \mathcal{V}(s+k).
\end{align*}
As $R<1$ and $\lambda_j\neq 1$, the Dirichlet series $\mathcal{H}^{(j)}(s)$ is
analytic at $s=0$ by Theorem~\ref{theorem:Dirichlet-series}. It is therefore
legitimate to set $s=0$ in the above equation. We use the induction hypothesis that
$\mathcal{H}^{(j+1)}(0)=0$ as well as the fact that $v(n)=A_nv(0)$
(note that $v(0)$ is a right eigenvector of $A_0$ to the eigenvalue~$1$;
see Section~\ref{section:q-regular-matrix-product})
for $0\le n<q$ to get
\begin{equation*}
(1-\lambda_j)\mathcal{H}^{(j)}(0)=w_j\sum_{0 \le n < q} A_n v(0) -w_jCv(0) = 0
\end{equation*}
because all binomial coefficients $\binom{0}{k}$ vanish.
\end{proof}
\section{Proof of Proposition~\ref{proposition:symmetric-eigenvalues}}
\label{sec:proof-symmetric-eigenvalues}
\begin{proof}[Proof of Proposition~\ref{proposition:symmetric-eigenvalues}]
We set
\begin{equation*}
j_0\coloneqq \floor[\bigg]{-\frac{p\bigl(\pi+\arg(\lambda)\bigr)}{2\pi}}+1
\end{equation*}
chosen so that
\begin{equation*}
-\pi<\arg(\lambda) + \frac{2j\pi}{p}\le \pi
\end{equation*}
holds for $j_0\le j<j_0+p$.
This implies that for $j_0\le j<j_0+p$, the $p$th root of unity~$\zeta_j\coloneqq \exp(2j\pi i/p)$
runs through the elements of $U_p$ such
that $\log_q(\lambda \zeta_j)=\log_q(\lambda) + 2j\pi i/(p\log q)$.
Then
\begin{align*}
N^{\log_q(\zeta_j\lambda)}
&= N^{\log_q \lambda} \exp\Bigl(\frac{2j\pi i}{p}\log_q N\Bigr)\\
&= N^{\log_q \lambda} \exp(2j\pi i\log_{q^p} N)
= N^{\log_q \lambda} \exp(2j\pi i\fractional{\log_{q^p} N}).
\end{align*}
We set
\begin{equation*}
\Phi(u)\coloneqq \sum_{j_0\le j<j_0+p} \exp\Bigl(\frac{2j\pi i}{p}u\Bigr)\Phi_{(\zeta_j\lambda)}(u),
\end{equation*}
thus $\Phi$ is a $p$-periodic function.
For the Fourier series expansion, we get
\begin{multline*}
\Phi(u)=\sum_{\ell\in\mathbb{Z}} \sum_{j_0\le j<j_0+p}
\Res[\bigg]{\mathcal{D}(s)
\Bigl(s - \log_q \lambda - \frac{2(\ell+\frac{j}{p})\pi i}{\log q}\Bigr)^k}%
{s=\log_q \lambda + \frac{2(\ell+\frac{j}{p})\pi i}{\log q}} \\
\times \f[\Big]{\exp}{2\pi i \Bigl(\ell+\frac{j}{p}\Bigr)u}.
\end{multline*}
Replacing $\ell p+j$ by $\ell$ leads to the Fourier series claimed in the
proposition.
\end{proof}
\input{estimates}
\subsection{Transducer and Automata}
Let us start with two paragraphs recalling some notions around transducer
automata. A \emph{transducer automaton} has a finite set of
\emph{states} together with \emph{transitions} (directed edges)
between these states. Each transition has an \emph{input label} and an
\emph{output label} out of the \emph{input alphabet} and the
\emph{output alphabet}, respectively.
A transducer is said to be \emph{deterministic} and \emph{complete}
if for every state and every letter of the input alphabet, there is exactly one
transition starting in this state with this input label.
A deterministic and complete transducer processes a word (over the
input alphabet) in the following way:
\begin{itemize}
\item It starts at its unique initial state.
\item Then the transducer reads the word letter by letter and for each
letter
\begin{itemize}
\item takes the transition with the matching input label,
\item writes the output label of this transition, and
\item proceeds to the next state (the endpoint of the
transition).
\end{itemize}
\item Each state has a \emph{final output label} that is written when
we \emph{halt} in this final state; we call a transducer with this
property a \emph{subsequential transducer}.
\end{itemize}
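As an illustration of the processing steps above, a deterministic complete subsequential transducer can be modelled in a few lines (a minimal Python sketch; the two-state transition table and its labels are hypothetical, not taken from the text):

```python
def run_transducer(word, transitions, final_output, initial_state=0):
    """Process `word` letter by letter and return the written output labels.

    `transitions[state][letter]` is the pair (next_state, output_label) of
    the unique transition with this input label; `final_output[state]` is
    the final output label written when halting in `state`.
    """
    state = initial_state
    outputs = []
    for letter in word:
        state, out = transitions[state][letter]  # unique matching transition
        outputs.append(out)
    outputs.append(final_output[state])  # final output label of the halting state
    return outputs

# Hypothetical two-state transducer over the input alphabet {0, 1}:
transitions = {
    0: {0: (0, 0), 1: (1, 1)},
    1: {0: (0, 1), 1: (1, 0)},
}
final_output = {0: 0, 1: 1}
print(run_transducer([1, 0, 1], transitions, final_output))
```

Determinism and completeness are reflected in the fact that `transitions[state][letter]` is defined and single-valued for every state and every letter.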
We refer to \cite[Chapter~1]{Berthe-Rigo:2010:combin} for a more
detailed introduction to transducers and automata.
Now we are ready to start with the set-up for our example.
\subsection{Sums of Output Labels}
Let $q\ge 2$ be a positive integer. We consider a complete deterministic
subsequential transducer $\mathcal{T}$ with input alphabet $\set{0, \ldots, q-1}$
and output alphabet $\mathbb{C}$; see \cite{Heuberger-Kropf-Prodinger:2015:output}.
For a non-negative integer $n$, let $\mathcal{T}(n)$ be the sum of the output labels
(including the final output label) encountered when the transducer reads the
$q$-ary expansion of $n$. Therefore, letters of the input alphabet will from now on be called digits.
This concept has been thoroughly studied in
\cite{Heuberger-Kropf-Prodinger:2015:output}: There, $\mathcal{T}(n)$ is considered
as a random variable defined on the probability space $\set{0, \ldots, N-1}$
equipped with uniform distribution. The expectation in this model corresponds
(up to a factor of~$N$) to our summatory function $\sum_{0\le n<N}\mathcal{T}(n)$.
We remark that in \cite{Heuberger-Kropf-Prodinger:2015:output}, the variance
and limiting distribution of the random variable $\mathcal{T}(n)$ have also been
investigated. Most of the results there are also valid for higher dimensional input.
The purpose of this section is to show that $\mathcal{T}(n)$ is a $q$-regular
sequence and to see that the corresponding results
in~\cite{Heuberger-Kropf-Prodinger:2015:output} also follow from our
more general framework here. We note that the
binary sum of digits considered in Example~\ref{example:binary-sum-of-digits}
is the special case of $q=2$ and the transducer consisting of a single state
which implements the identity map. For additional special cases of this
concept, see \cite{Heuberger-Kropf-Prodinger:2015:output}. Note that our result
here for the summatory function contains (fluctuating) terms for all
eigenvalues $\lambda$ of the adjacency matrix of the underlying digraph with
$\abs{\lambda}>1$ whereas in \cite{Heuberger-Kropf-Prodinger:2015:output} only
contributions of those eigenvalues $\lambda$ with $\abs{\lambda}=q$ are
available; all other contributions are absorbed by the error term there.
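As a sanity check of the special case mentioned above, the following sketch computes $\mathcal{T}(n)$ for a single-state transducer implementing the identity map with $q=2$, where the output sum reduces to the binary sum of digits (Python; the digits are consumed from the least significant one, which is irrelevant here because there is only one state):

```python
def transducer_output_sum(n, q, delta, final_output, initial_state=0):
    """Sum of the output labels written while reading the q-ary digits of n.

    `delta[state][digit]` = (next_state, output_label). Digits are consumed
    from the least significant one; for a single-state transducer the
    reading direction does not matter.
    """
    state, total = initial_state, 0
    while n > 0:
        n, digit = divmod(n, q)
        state, out = delta[state][digit]
        total += out
    return total + final_output[state]  # include the final output label

# Single-state identity transducer on binary digits (q = 2):
delta = {0: {0: (0, 0), 1: (0, 1)}}
final_output = {0: 0}
assert all(transducer_output_sum(n, 2, delta, final_output)
           == bin(n).count("1") for n in range(200))
```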
\subsection{Some Perron--Frobenius Theory}
We will need the following consequence of Perron--Frobenius theory.
By a \emph{component} of a digraph we always mean a strongly
connected component. We call a component \emph{final} if there
are no arcs leaving the component. The \emph{period} of a component
is the greatest common divisor of its cycle lengths. The \emph{final period} of
a digraph is the least common multiple of the periods of its final components.
\begin{lemma}\label{lemma:Perron--Frobenius-again}
Let $D$ be a directed graph where each
vertex has outdegree $q$. Let $M$ be its adjacency matrix and $p$ be its
final period.
Then $M$ has spectral radius $q$, $q$ is an
eigenvalue of $M$ and for all eigenvalues $\lambda$ of $M$ of modulus $q$, the
algebraic and geometric multiplicities coincide and $\lambda = q\zeta$ for
some $p$th root of unity $\zeta$.
\end{lemma}
This lemma follows from setting $t=0$ in
\cite[Lemma~2.3]{Heuberger-Kropf-Prodinger:2015:output}. As
\cite[Lemma~2.3]{Heuberger-Kropf-Prodinger:2015:output} proves more than we
need here and depends on the notions of that article, we extract the relevant
parts of \cite{Heuberger-Kropf-Prodinger:2015:output} to provide a
self-contained (apart from the Perron--Frobenius theorem) proof of
Lemma~\ref{lemma:Perron--Frobenius-again}.
\input{perron_frobenius}
\subsection{Analysis of Output Sums of Transducers}
We consider the states of $\mathcal{T}$ to be numbered by $\set{1, \ldots, d}$ for some
positive integer $d\ge 1$ such that the initial state is state~$1$. We set
$\mathcal{T}_j(n)$ to be the sum of the output labels (including the final output
label) encountered when the transducer reads the $q$-ary expansion of $n$ when
starting in state~$j$. By construction, we have $\mathcal{T}(n)=\mathcal{T}_1(n)$ and
$\mathcal{T}_j(0)$ is the final output label of state~$j$. We set
$y(n)=\bigl(\mathcal{T}_1(n), \ldots, \mathcal{T}_d(n)\bigr)$.
For $0\le r<q$, we define the $d\times d$-dimensional $\set{0, 1}$-matrix $P_r$ in such a
way that there is a one in row~$j$, column~$k$ if and only if there is a
transition from state~$j$ to state~$k$ with input label $r$. The vector $o_r$
is defined by setting its $j$th coordinate to be the output label of the transition
from state~$j$ with input label $r$.
For $n_0\ge 1$, we set
\begin{equation*}
\mathcal{X}(s)=\sum_{n\ge 1}n^{-s}\mathcal{T}(n),\qquad
\mathcal{Y}_{n_0}(s)=\sum_{n\ge n_0}n^{-s}y(n),\qquad
\zeta_{n_0}(s, \alpha)=\sum_{n\ge n_0}(n+\alpha)^{-s}.
\end{equation*}
The last Dirichlet series is a truncated version of the Hurwitz zeta function.
\begin{corollary}
\label{corollary:transducer-main}
Let $\mathcal{T}$ be a transducer as described at the beginning of
this section. Let $M$ be the adjacency matrix and $p$ be the final period of
the underlying digraph. For $\lambda\in\mathbb{C}$ let $m(\lambda)$
be the size of the largest Jordan block associated with the eigenvalue
$\lambda$ of $M$.
Then the sequence
$n\mapsto\mathcal{T}(n)$ is a $q$-regular sequence and
\begin{equation}\label{eq:transducer:summatory-as-fluctuation}
\begin{aligned}
\sum_{0\le n<N}\mathcal{T}(n) = e_\mathcal{T} N\log_q N &+ N\Phi(\log_q N)\\
&+ \sum_{\substack{\lambda\in\sigma(M)\\
1<\abs{\lambda}<q
}} N^{\log_q \lambda} \sum_{0\le k<m(\lambda)}(\log_q N)^k\Phi_{\lambda
k}(\log_q N)\\
&+ \Oh[\big]{(\log N)^{\max\setm{m(\lambda)}{\abs{\lambda}=1}}}
\end{aligned}
\end{equation}
for some continuous $p$-periodic function $\Phi$, some continuous
$1$-periodic functions~$\Phi_{\lambda k}$ for $\lambda\in\sigma(M)$ with $1<\abs{\lambda}<q$ and $0\le
k<m(\lambda)$ and some constant
$e_\mathcal{T}$.
Furthermore,
\begin{equation*}
\Phi(u)=\sum_{\ell\in\mathbb{Z}}\varphi_\ell\exp\Bigl(\frac{2\ell\pi i}{p}u\Bigr)
\end{equation*}
with
\begin{equation*}
\varphi_\ell = \Res[\Big]{\frac{\mathcal{X}(s)}{s}}{s=1+\frac{2\ell\pi
i}{p\log q}}
\end{equation*}
for $\ell\in\mathbb{Z}$.
The Fourier series expansion of $\Phi_{\lambda k}$ for $\lambda\in\sigma(M)$
with $1<\abs{\lambda}<q$ is given in Theorem~\ref{theorem:simple}.
The Dirichlet series $\mathcal{Y}_{n_0}$ satisfies the functional equation
\begin{equation}\label{eq:transducer-functional-equation}
\begin{aligned}
\bigl(I-q^{-s}M\bigr)\mathcal{Y}_{n_0}(s) &= \sum_{n_0\le n<qn_0} n^{-s}y(n)
+ q^{-s}\sum_{0\le r<q}\zeta_{n_0}\bigl(s, \tfrac{r}{q}\bigr)o_r\\
&\phantom{={}}+ q^{-s}\sum_{0\le r<q}P_r\sum_{k\ge 1}\binom{-s}{k}\Bigl(\frac rq\Bigr)^k\mathcal{Y}_{n_0}(s+k).
\end{aligned}
\end{equation}
\end{corollary}
Note that the functional equation~\eqref{eq:transducer-functional-equation}
is preferable to the functional equation given in
Theorem~\ref{theorem:Dirichlet-series} for the generic case
of a regular sequence: The generic functional equation
suggests a double pole at $s=1+\chi_\ell$ for all $\ell\in\mathbb{Z}$
whereas the occurrence of the Hurwitz zeta function
in~\eqref{eq:transducer-functional-equation} shows that
there is a double pole at $s=1$ but only single poles at $s=1+\chi_\ell$
for all $\ell\in\mathbb{Z}\setminus\{0\}$. Numerically, the occurrence
of the Hurwitz zeta function is also advantageous because it
makes it possible to decouple the problem.
\subsection{Proof of Corollary~\ref{corollary:transducer-main}}
\begin{proof}[Proof of Corollary~\ref{corollary:transducer-main}]
The proof is split into several steps.
\proofparagraph{Recursive Description}
We set
$v(n)=\bigl(\mathcal{T}_1(n), \ldots, \mathcal{T}_d(n), 1\bigr)^\top$\!.
For $1\le j\le d$ and $0\le r<q$, we define $t(j, r)$ and $o(j, r)$ to be the
target state and output label of the unique transition from
state $j$ with input label $r$, respectively. Therefore,
\begin{equation}\label{eq:transducer-to-matrix-product}
\mathcal{T}_j(qn+r) = \mathcal{T}_{t(j, r)}(n) + o(j, r)
\end{equation}
for $1\le j\le d$, $n\ge 0$, $0\le r<q$ with $qn+r>0$.
For $0\le r<q$, define $A_r=(a_{rjk})_{1\le j,\, k\le d+1}$ by
\begin{equation*}
a_{rjk} =
\begin{cases}
\iverson{t(j, r) = k}& \text{if $j$, $k\le d$,}\\
o(j, r)& \text{if $j\le d$, $k=d+1$,}\\
\iverson{k=d+1}& \text{if $j=d+1$.}
\end{cases}
\end{equation*}
Then \eqref{eq:transducer-to-matrix-product} is equivalent to
\begin{equation*}
v(qn+r) = A_r v(n)
\end{equation*}
for $n\ge 0$, $0\le r<q$ with $qn+r>0$. Defining $f(n)$ as in
\eqref{eq:regular-matrix-sequence} for these $A_r$, we see that
$v(n)=f(n)v(0)$.
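This matrix-product representation can be checked numerically. The sketch below (pure Python) unrolls $v(n)=A_r\,v(\lfloor n/q\rfloor)$ along the $q$-ary digits of $n$; the concrete $2\times2$ matrices are those of the binary sum-of-digits special case and serve only as an illustrative stand-in for the general $A_r$:

```python
def mat_vec(A, v):
    """Multiply a matrix (given as a list of rows) with a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def v_of(n, q, A, v0):
    """Evaluate v(n) via the recursion v(qm + r) = A_r v(m), v(0) = v0."""
    if n == 0:
        return v0
    m, r = divmod(n, q)
    return mat_vec(A[r], v_of(m, q, A, v0))

# Binary sum-of-digits instance: v(n) = (s_2(n), 1)^T, where s_2 is the
# binary sum of digits. Note that v0 is a right eigenvector of A_0 with
# eigenvalue 1, consistent with the requirement in the text.
A = [
    [[1, 0], [0, 1]],  # A_0: digit 0 leaves the digit sum unchanged
    [[1, 1], [0, 1]],  # A_1: digit 1 increases the digit sum by one
]
v0 = [0, 1]
assert all(v_of(n, 2, A, v0) == [bin(n).count("1"), 1] for n in range(200))
```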
\proofparagraph{$q$-Regular Sequence}
If we insist on a proper formulation as a regular sequence, we rewrite
\eqref{eq:transducer-to-matrix-product} to
\begin{equation}\label{eq:transducer-to-regular-sequence}
\mathcal{T}_j(qn+r)= \mathcal{T}_{t(j,r)}(n) + o(j, r) +
\iverson{r=0}\iverson{n=0}\bigl(\mathcal{T}_j(0)-\mathcal{T}_{t(j,0)}(0)-o(j, 0)\bigr)
\end{equation}
for $1\le j\le d$, $n\ge 0$, $0\le r<q$. Setting $\widetilde{v}(n)=\bigl(\mathcal{T}_1(n), \ldots,
\mathcal{T}_d(n), 1, \iverson{n=0}\bigr)$ and
$\widetilde{A}_r=(\widetilde{a}_{rjk})_{1\le j,\, k\le d+2}$ with
\begin{equation*}
\widetilde{a}_{rjk} =
\begin{cases}
\iverson{t(j, r) = k}& \text{if $j$, $k\le d$,}\\
o(j, r)& \text{if $j\le d$, $k=d+1$,}\\
\iverson{r=0}\bigl(\mathcal{T}_j(0)-\mathcal{T}_{t(j,0)}(0)-o(j, 0)\bigr)& \text{if $j\le d$, $k=d+2$,}\\
\iverson{k=d+1}& \text{if $j=d+1$,}\\
\iverson{k=d+2}\iverson{r=0}& \text{if $j=d+2$,}
\end{cases}
\end{equation*}
the system~\eqref{eq:transducer-to-regular-sequence} is equivalent to
\begin{equation*}
\widetilde{v}(qn+r) = \widetilde{A}_r \widetilde{v}(n)
\end{equation*}
for $n\ge 0$, $0\le r<q$.
\proofparagraph{Eigenvalue~$1$}
By construction, the matrices $A_r$ have the shape
\begin{equation*}
A_r = \left(
\begin{array}{c|c}
P_r&o_r\\\hline
0&1
\end{array}
\right).
\end{equation*}
It is
clear that $(0, \ldots, 0, 1)$ is a left eigenvector of $A_r$ associated with
the eigenvalue~$1$.
\proofparagraph{Joint Spectral Radius}
We claim that $A_0, \ldots, A_{q-1}$ have joint spectral radius $1$. Let
$\inftynorm{\,\cdot\,}$ denote the maximum norm of complex vectors as well as the induced
matrix norm, i.e., the maximum row sum norm. Let $j_1$, \ldots,
$j_\ell\in\set{0,\ldots, q-1}$. It is easily shown by induction on $\ell$
that
\begin{equation*}
A_{j_1}\dotsm A_{j_\ell}=\left(
\begin{array}{c|c}
P&b_P\\\hline
0&1
\end{array}
\right)
\end{equation*}
for some $P\in\mathbb{C}^{d\times d}$ and $b_P\in\mathbb{C}^d$ with $\inftynorm{P}\le 1$ and $\inftynorm{b_P}\le \ell \max_{0\le
r<q}\inftynorm{o_r}$.
Thus, we obtain
\begin{equation*}
\inftynorm{A_{j_1}\dotsm A_{j_\ell}}\le 1+\ell\max_{0\le
r<q}\inftynorm{o_r},
\end{equation*}
so the norms of products of length~$\ell$ grow at most polynomially in
$\ell$ and the joint spectral radius is at most~$1$.
As $1$ is an eigenvalue of each matrix~$A_r$ for $0\le r<q$,
the joint spectral radius is also at least~$1$, which proves the claim.
\proofparagraph{Eigenvectors and Asymptotics}
We now consider $C=\sum_{0\le r<q}A_r$. It has the shape
\begin{equation*}
C = \left(
\begin{array}{c|c}
M&b_M\\\hline
0&q
\end{array}
\right)
\end{equation*}
where $b_M$ is some complex vector.
Let $w_1$, \ldots, $w_\ell$ be a linearly independent system of left
eigenvectors of $M$ associated with the eigenvalue $q$.
If $w_j b_M=0$ for $1\le j\le \ell$, then $(w_1, 0)$,
\ldots, $(w_\ell, 0), (0, 1)$ is a linearly independent system of left
eigenvectors of $C$ associated with the eigenvalue $q$. In that case
and because of Lemma~\ref{lemma:Perron--Frobenius-again},
algebraic and geometric multiplicities of $q$ as an eigenvalue of $C$ are
both equal to $\ell+1$.
Otherwise, assume without loss of generality that $w_1 b_M=1$. Then
\begin{equation*}
\bigl(w_2 - (w_2 b_M)w_1, 0\bigr),\,
\ldots,\,
\bigl(w_\ell - (w_\ell b_M)w_1, 0\bigr),\,
\bigl(0, 1\bigr)
\end{equation*}
is a linearly independent
system of left eigenvectors of $C$ associated with the eigenvalue
$q$. Additionally, $(w_1, 0)$ is a generalised left eigenvector of rank $2$
of $C$ associated with the eigenvalue $q$ with $(w_1, 0)(C-qI)=(0, 1)$. As
noted above, the vector
$(0, 1)$ is a left eigenvector to each matrix $A_0$, \ldots, $A_{q-1}$.
Similarly, it is easily seen that any left eigenvector of $M$ associated with
some eigenvalue $\lambda\neq q$ can be extended uniquely to a left
eigenvector of $C$ associated with the same eigenvalue. The same is true for
chains of generalised left eigenvectors associated with $\lambda\neq q$.
Therefore, in both of the above cases, Theorem~\ref{theorem:contribution-of-eigenspace}
yields
\begin{equation*}
\begin{aligned}
\sum_{0\le n<N}\mathcal{T}(n) = e_\mathcal{T} N\log_q N &+ \sum_{\zeta \in U_p} N^{\log_q
(q\zeta)}\Phi_{(q\zeta)}(\fractional{\log_q N}) \\
&+ \sum_{\substack{\lambda\in\sigma(M)\\
1<\abs{\lambda}<q
}} N^{\log_q \lambda} \sum_{0\le k<m(\lambda)}(\log_q N)^k\Phi_{\lambda
k}(\log_q N)\\
&+ \Oh[\big]{(\log N)^{\max\setm{m(\lambda)}{\abs{\lambda}=1}}}
\end{aligned}
\end{equation*}
for some constant $e_\mathcal{T}$ (which vanishes in the first case) and some
$1$-periodic continuous functions $\Phi_{(q\zeta)}$ and $\Phi_{\lambda k}$ where $\zeta$ runs through
the $p$th roots of unity~$U_p$ and $\lambda$ through the eigenvalues of $M$ with
$1<\abs{\lambda}<q$ and $0\le k<m(\lambda)$.
Proposition~\ref{proposition:symmetric-eigenvalues}
leads to \eqref{eq:transducer:summatory-as-fluctuation}.
\proofparagraph{Fourier Coefficients}
By Theorem~\ref{theorem:simple}, we have
\begin{equation*}
\Phi_{(q\zeta)}(u)=\sum_{\ell\in\mathbb{Z}}\varphi_{(q\zeta)\ell}\exp(2\ell\pi i u)
\end{equation*}
with
\begin{equation*}
\varphi_{(q\zeta)\ell}=\Res[\Big]{\frac{\mathcal{T}(0)+\mathcal{X}(s)}{s}}{s=1+\log_q \zeta + \frac{2\ell\pi
i}{\log q}}
\end{equation*}
for a $p$th root of unity $\zeta \in U_p$ and $\ell\in\mathbb{Z}$.
Therefore, and because $\mathcal{T}(0)$ does not contribute
to the residue, Proposition~\ref{proposition:symmetric-eigenvalues}
leads to the Fourier series given in the corollary.
\proofparagraph{Functional Equation}
By \eqref{eq:transducer-to-matrix-product}, we have
\begin{align*}
\mathcal{Y}_{n_0}(s) &= \sum_{n_0\le n<qn_0} n^{-s}y(n) + \sum_{n\ge
n_0}\sum_{0\le r<q}(qn+r)^{-s}y(qn+r)\\
&= \sum_{n_0\le n<qn_0} n^{-s}y(n) + \sum_{n\ge
n_0}\sum_{0\le r<q}(qn+r)^{-s}\bigl(P_r y(n) + o_r\bigr)\\
&= \sum_{n_0\le n<qn_0} n^{-s}y(n) + q^{-s}\sum_{0\le r<q}P_r
\sum_{n\ge
n_0}\Bigl(n+\frac{r}{q}\Bigr)^{-s}y(n) \\
&\hspace*{8.95em}
+ q^{-s}\sum_{0\le r<q}\zeta_{n_0}\bigl(s, \tfrac{r}{q}\bigr)o_r.
\end{align*}
Using Lemma~\ref{lemma:shifted-Dirichlet}
yields the result.
\end{proof}
\section{Introduction}
So far, most exoplanets have been detected by indirect methods, based on the measurement of the effect of the companion on its host star, either in its spectrum thanks to the Doppler effect or in its photometric curve during transits. The currently known exoplanet population is therefore biased, since these techniques are mostly sensitive to massive companions on relatively short-period, close-in orbits. In that context, direct imaging offers a good complement to probe larger separations around stars. In addition, the direct detection of photons emitted or reflected by the planet allows for photometric and spectroscopic studies, which is crucial to get insight into its atmospheric composition. It also enables precise astrometry, which over time provides the orbital characteristics of the planets and insight into the dynamical environment and history. However, direct imaging is a challenging technique, as it requires reaching a very high contrast, typically ranging from $10^{-4}$ for hot giant planets to $10^{-10}$ for Earth-like planets, and a high angular resolution ($\sim0\farcs1$). Coronagraphy, which aims to reject the glaring light of the central star to enhance the signal from the faint companion, combined with extreme adaptive optics systems and advanced image processing, is a requirement to reach such performance.
Among all possible coronagraph designs, the vortex coronagraph was proposed a decade ago by \cite{Mawet05a} and \cite{Foo05}. It consists of a focal plane phase mask inducing a phase ramp around the optical axis. When passing through the phase mask, the light of an on-axis star is diffracted and redistributed outside the geometrical pupil of the telescope in a downstream pupil plane. A diaphragm, referred to as a Lyot stop, is then used to block the light of the central star. The light of an off-axis companion is not, or only partially, affected by the vortex phase pattern and can propagate towards the detector.
One possible implementation of vortex phase masks is based on the manufacturing of concentric rings creating a subwavelength rotational grating, in a design referred to as the annular groove phase mask \citep[AGPM,][]{Mawet05a}. For a given linear polarization of the incoming light, the phase mask acts as a rotating phase retarder, thus inducing the desired phase ramp. Our team has previously demonstrated a fabrication process for diamond AGPMs working in the L (3.4--4.1~$\mu$m), M (4.4--5.0~$\mu$m) and N (10--13~$\mu$m) bands \citep{Forsberg13}. The diamond AGPMs were manufactured using nanoimprint lithography (NIL) and inductively coupled plasma reactive ion etching (ICP-RIE) in high density plasmas using highly oxidizing chemistries \citep{Karlsson03, Hwang04, Gu2004}. The fabricated AGPMs designed for the L band were characterized on the YACADIRE coronagraphic test bench at Observatoire de Paris \citep{Delacroix13}, showing starlight rejection up to 500:1. Our phase masks now equip infrared coronagraphs on several 10-m class telescopes (VLT/NACO, \citealt{Mawet13}; VLT/VISIR, \citealt{Del12b}; LBT/LMIRCam, \citealt{Defrere14}; Keck/NIRC2, Serabyn et al., submitted to \aj).
Here we report on a new generation of AGPMs optimized for the highest possible coronagraphic performance in the L and M bands. The design of the subwavelength grating based on rigorous coupled wave analysis \citep[RCWA,][]{Mawet05b} is described in Sect.~\ref{sec:design}. A short description of our improved fabrication process for high aspect ratio diamond gratings, and of means to assess the grating parameters during and after etching, is then given in Sect.~\ref{sec:fabrication}. Section~\ref{sec:performance} focuses on the performance assessment of the phase masks using the YACADIRE coronagraphic test bench at the Observatoire de Paris. Since the depth of the grating is a decisive parameter, we propose in Sect.~\ref{sec:tuning} two possible methods to finely tune the grating depth with further etching and thereby reach the best possible coronagraphic performance. In Sect.~\ref{sec:newperf}, this process is demonstrated on a few AGPMs, which have been successfully re-etched and show significantly improved coronagraphic performance after tuning.
\begin{figure}
\centering
\includegraphics[width=3.6cm]{fig1.png}
\caption{Schematic picture of a cross section of the AGPM, showing the sidewall angle ($\alpha$), the grating depth ($h$) the line width ($w_t$) and the grating period ($\Lambda$).}
\label{fig:agpm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm]{fig2.png}
\caption{RCWA simulations showing the rejection ratio for a grating period of 1.42~$\mu$m on a 3.4-4.1~$\mu$m broadband filter. \textit{Top.} Fixed grating depth of 5.5~$\mu$m as a function of the line width $w_t$ and sidewall angle $\alpha$. \textit{Middle.} Fixed sidewall angle of $2\fdg45$ as a function of the grating depth $h$ and line width $w_t$. \textit{Bottom.} Fixed line width of 0.7~$\mu$m, as a function of the sidewall angle $\alpha$ and grating depth $h$.}
\label{fig:rcwa}
\end{figure}
\section{Design and simulation} \label{sec:design}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig3.png}
\caption{Rejection ratio as a function of wavelength for different grating depths between 5.0 and 6.0~$\mu$m (from blue to red), with the lines separated by steps of 0.02~$\mu$m. The sidewall angle is set to $2\fdg45$ and the line width $w_t$ is 0.70~$\mu$m.}
\label{fig:rejection_profile}%
\end{figure*}
The subwavelength grating composing the AGPM \citep{Mawet05b} is defined through the following parameters (see Fig.~\ref{fig:agpm}): the grating period $\Lambda$, the line width at the top of the grating $w_t$, the depth of the grating $h$, and the angle of the sidewall $\alpha$. The grating period $\Lambda$ is fixed to fulfill the subwavelength criterion: $\Lambda < \lambda/n$ (when the ambient medium is air), where $\lambda$ is the illuminating wavelength and $n$ is the refractive index of the substrate ($n = 2.38$ for diamond in the thermal infrared regime). Inserting these values yields $\Lambda < 1.428$~$\mu$m for the short-wave end of the L band ($\lambda=3.4$~$\mu$m). Here, we set $\Lambda$ to 1.42~$\mu$m for all our L-band components. All our AGPMs feature a two-dimensional subwavelength grating on their back side, acting as an anti-reflection treatment. This grating keeps internal reflections at the interface between diamond and air below 2\%, which effectively reduces double-pass ghost signals in our AGPMs to less than 0.1\% over the whole L band \citep{Delacroix13}.
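The subwavelength bound quoted above is a one-line computation; the following trivial Python check only uses numbers given in the text:

```python
n_diamond = 2.38       # refractive index of diamond in the thermal infrared
lam = 3.4              # short-wave end of the L band [micron]
lambda_max = lam / n_diamond   # subwavelength criterion: grating period < lambda / n
print(f"grating period must stay below {lambda_max:.4f} micron")
assert 1.42 < lambda_max       # the chosen period of 1.42 micron satisfies the bound
```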
The sidewall angle $\alpha$ is determined by the etch process and needs to be low in order to reach high aspect ratios (grating depth divided by the width of the grooves). The process used to make most AGPMs presented in this paper produces a sidewall angle close to $2\fdg45$. With $\Lambda$ and $\alpha$ known, we used RCWA simulations to find the optimal values of $w_t$ and $h$ for high coronagraphic performance \citep{Delacroix12}. The starlight rejection efficiency is quantified by the \emph{rejection ratio} $R$, defined as the ratio of the total intensity of the non-attenuated point spread function (PSF) to the total intensity of the PSF attenuated by the coronagraph. In Fig.~\ref{fig:rcwa}, the rejection ratio is plotted, for the fixed period $\Lambda$ and with one of the remaining parameters held constant, as a function of the other two. From this figure, it becomes evident that, to a large extent, an error in the line width can be compensated by changing the etch depth (and vice versa) to improve the rejection ratio. The simulations also show that a variation of $0\fdg1$ in $\alpha$ can lead to a significant loss of performance. Moreover, the RCWA simulations presented in Fig.~\ref{fig:rcwa} reveal that a very small change in $w_t$ (by $\sim10$~nm) or $h$ (by $\sim100$~nm) can lead to a dramatically lowered optical performance of the AGPM. Figure~\ref{fig:rejection_profile} illustrates the fundamental limitations to the rejection ratio of AGPM-based coronagraphs on broadband filters. While the mean rejection ratio on a broadband L filter (3.4--4.1~$\mu$m) could reach up to 2500:1, an AGPM covering both L and M bands (3.4--5.0~$\mu$m) cannot reach a rejection ratio larger than 200:1 simultaneously in both bands. Because of the uncertainties in the etching process, we consider a rejection ratio of about 500:1 to be the maximum value we can reach on a broadband L filter using a single etching process, based on our previous fabrication attempts \citep{Delacroix13}.
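The rejection ratio $R$ as defined above can be computed directly from two intensity maps; a minimal Python sketch (the small "images" below are placeholder values, not simulated or measured data):

```python
def rejection_ratio(psf_off, psf_on):
    """R = total intensity of the non-attenuated PSF divided by the
    total intensity of the PSF attenuated by the coronagraph."""
    return sum(map(sum, psf_off)) / sum(map(sum, psf_on))

# Placeholder intensity maps: totals of 1000 (no coronagraph) and 2 (with it),
# corresponding to a rejection ratio of about 500:1.
psf_off = [[600.0, 200.0], [150.0, 50.0]]
psf_on = [[1.2, 0.4], [0.3, 0.1]]
print(rejection_ratio(psf_off, psf_on))
```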
\section{Fabrication and grating characterization} \label{sec:fabrication}
\begin{table*}
\caption{Grating parameters, expected and measured broadband rejection ratios at L band (3.5--4.0 $\mu$m) and M band (4.4--5.0 $\mu$m) for our AGPMs after fabrication.}
\label{tab:agpm_params}
\centering
\begin{tabular}{l c c c c c c c}
\hline\hline
& $w_t$ & $h$ & $\alpha$ & Expected $R$ & Measured $R$ & Expected $R$ & Measured $R$ \\
\multicolumn{1}{c}{Name} & [$\mu$m] & [$\mu$m] & [degrees] & (L band) & (L band) & (M band) & (M band) \\
\hline
AGPM-L5 & $0.630\pm 0.015$ & $4.90\pm 0.20$ & $3.20\pm 0.20$ & 30--2400 & 550 & 10--130 & 80 \\
AGPM-L6\tablefootmark{a} & $0.625\pm0.015$ & $4.57\pm0.20$ & $3.20\pm 0.20$ & 20--1600 & 150 & 10--50 & 30 \\
AGPM-L7\tablefootmark{a} & $0.625\pm0.015$ & $4.82\pm0.20$ & $3.45\pm 0.20$ & 20--1400 & 550 & 10--110 & N/A \\
AGPM-L8 & $0.645\pm0.015$ & $4.86\pm 0.20$ & $3.45\pm0.20$ & 30--270 & 50 & 10--80 & N/A \\
AGPM-L9\tablefootmark{b} & $0.750\pm0.010$ & $5.10\pm0.05$ & $2.45\pm0.10$ & 20--110 & 30 & 10--40 & 20 \\
AGPM-L10\tablefootmark{b} & $0.630\pm0.010$ & $5.20\pm0.05$ & $2.10\pm0.10$ & 10--40 & 20 & 70--450 & 90 \\
AGPM-L11\tablefootmark{b} & $0.650\pm0.010$ & $4.92\pm0.05$ & $2.22\pm 0.10$ & 40--670 & 70 & 110--520 & 240 \\
AGPM-L12\tablefootmark{b} & $0.650\pm0.010$ & $4.92\pm0.05$ & $2.22\pm 0.10$ & 40--670 & 70 & 110--520 & 120 \\
AGPM-L13\tablefootmark{b} & $0.590\pm0.010$ & $4.67\pm0.05$ & $2.45\pm0.10$ & 30--250 & 110 & 70--250 & 150 \\
AGPM-L14\tablefootmark{c} & $0.615\pm0.010$ & $4.67\pm0.05$ & $2.45\pm0.10$ & 70--1860 & 370 & 50--160 & 90 \\
AGPM-L15 & $0.630\pm0.010$ & $4.67\pm0.05$ & $2.45\pm0.10$ & 130--2300 & 630 & 40--120 & 70 \\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{Installed in the Keck/NIRC2 camera.}
\tablefoottext{b}{Chosen for grating tuning demonstration.}
\tablefoottext{c}{Installed in the LBT/LMIRCam camera.}}
\end{table*}
Polycrystalline diamond substrates of optical quality (Diamond Materials GmbH and Element Six Ltd.), with a diameter of 10~mm and a thickness of 300~$\mu$m, were used. We have recently demonstrated an improved fabrication process for high aspect ratio diamond gratings \citep{Vargas16}. Most of the AGPMs presented in this work were manufactured using this process, which involves nano-replication and ICP-RIE of Al, Si and diamond. Previously, we used NIL in the nano-replication step \citep{Forsberg13,Delacroix13}, but we noticed that this process gave rise to a large reduction in line width, and that variations in line width were common, especially around the center of the AGPM. In our new process, we use solvent-assisted micro molding (SAMIM, \citealt{Kim97, Vargas16}), which yields very good fidelity in the replicated patterns, with almost no difference in line width compared to the master AGPM pattern. Moreover, our improved fabrication process uses pure oxygen chemistry during the ICP-RIE of diamond, yielding a lower sidewall angle \citep{Vargas16}, which is beneficial for fabricating high-performing AGPMs.
Eleven AGPMs were successfully fabricated (see Table~\ref{tab:agpm_params}). They were numbered from AGPM-L5 to L15 \citep[AGPM-L1 to L4 were presented in][]{Delacroix13}. We point out that AGPM-L5 to L8 were fabricated using our first-generation fabrication process, based on NIL in the nano-replication step \citep{Forsberg13,Delacroix13}, except that C$_4$F$_8$ was added as an etch gas in the Si etch step, which gave less shrinkage in line width \citep{Vargas16}. When using the first-generation fabrication process, we had to repeat the nano-replication and thin-film etching steps several times (for a given substrate) before obtaining an AGPM with a correct line width (acceptable values: 590~nm~$\leq w_t \leq$~750~nm). As a result, the surface of the diamond substrate was degraded, and we had to discard several diamond substrates for this reason. Using our improved process completely removes these problems.
Evaluating the parameters of the etched gratings is not an easy task. Indeed, metrological methods such as atomic force microscopy (AFM) cannot reach down into the trenches, and the features are too small for optical interferometers. Furthermore, a precise value of the sidewall angle can only be measured by cracking the AGPM perpendicularly to the grating, to resolve a cross section of the profile with scanning electron microscopy (SEM). The geometry of the AGPMs' high aspect ratio gratings was therefore analyzed on cross-section micrographs using SEM. However, since all of the AGPMs may eventually be installed at telescopes, none was cracked except AGPM-L10. For each batch of diamond AGPMs (i.e., AGPMs etched together and therefore having almost the same sidewall angle), a twin sample was cracked instead. The twin is a test sample that follows the batch through the complete process, and was measured after each critical step. The grating parameters $w_t$ and $\alpha$ are indeed known to vary during the fabrication process; hence it is critical to follow the process using a twin sample, enabling recalculations of the optimal depth $h$.
The twin sample was cracked after the first Al etching step to check whether the pattern had been transferred successfully, and after the second Al etching to verify that the mask layers had smooth sidewalls and to measure the line width before diamond etching. The optimal etch depth was then recalculated by RCWA simulations, using the measured line width and sidewall angle. A third cracking was performed just before reaching the optimal etch depth (based on etch time, using the mean value of the diamond etch rate), to avoid etching the gratings too deep. Previous etch runs showed that the diamond etch rate can vary by up to 5\% \citep{Vargas16}. For the AGPMs demonstrated in \cite{Delacroix13} and for AGPM-L5 to AGPM-L8, we wrongly assumed that the etch rate was always the same for our diamond etch recipe, thus giving a larger error in the final etch depth (and in $w_t$ and $\alpha$) compared to using a twin sample. Again, $w_t$, $\alpha$ (and $h$) were measured, and a final RCWA calculation was made to decide on the optimal etch depth $h$. In this final step, the grating generally just needed to be etched 100--400~nm deeper to reach the optimal depth. The twin was then cracked a final time; $w_t$, $\alpha$ and $h$ were determined for the twin, and the parameter values for the AGPMs fabricated in parallel were assumed to be the same. The measured grating parameters are reported in Table~\ref{tab:agpm_params}.
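The benefit of this staged approach can be quantified with a back-of-the-envelope estimate: a $\pm5$\% run-to-run etch-rate variation translates into a depth error proportional to the depth etched in a single timed run. A simple sketch (the depths below are chosen to match the values quoted above):

```python
def depth_error(etched_depth_nm, rate_variation=0.05):
    """Worst-case depth error when etching a given depth in one timed
    run, for a +/- rate_variation run-to-run etch-rate variation."""
    return etched_depth_nm * rate_variation

# Etching the full ~4.9 um grating in one timed run, versus a final
# 100-400 nm correction step guided by twin-sample measurements.
print(depth_error(4900))   # ~245 nm error for a single-shot etch
print(depth_error(300))    # ~15 nm error for a 300 nm correction step
```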
\section{Coronagraphic performance evaluation} \label{sec:performance}
The AGPMs were optically tested on the YACADIRE testbench at LESIA, Observatoire de Paris \citep{Boccaletti08}. This bench was previously used to characterize our first generation of AGPMs using a broadband L filter \citep{Delacroix13}. We refer to these two papers for a detailed description of the bench. On YACADIRE, the entrance pupil is defined by a circular aperture. The AGPM is placed at the focal plane, where the beam is converging at $f/40$, resulting in a diffraction pattern of full width at half maximum FWHM~$\simeq 150$~$\mu$m at L band. The diameter of the Lyot stop is undersized to 80\% of the original pupil size.
\begin{figure}
\centering
\includegraphics[width=8.8cm]{fig4.png}
\caption{Typical experimental results of AGPM optical performance characterization. \textit{Left}. Radial profile of the image with an AGPM translated by 1 mm (off-axis PSF), and with three different centered AGPMs, showing low (AGPM-L9), median (AGPM-L13) or high (AGPM-L15) performance after initial etching. The vertical dashed line shows the limit of the disk on which the flux is integrated to compute the rejection ratio $R$. \textit{Right}. Corresponding images shown with a logarithmic scale.}
\label{fig:agpm_psf}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig5.png}
\caption{Schematic of the tuning process. The three left-most sketches (with a blue background) show the process steps for deeper gratings; a) Al deposition, b) Al etching and c) diamond etching. The two right-most sketches (red background) show the process steps for shallower gratings; d) resist filling and e) diamond etching.}
\label{fig:tuning_process}
\end{figure*}
While the theoretical rejection ratio computed in RCWA simulations corresponds to the ratio of the total intensity in the two PSFs, this metric is not practical in the case of our experimental data, for two main reasons. First, the large thermal background encountered on the non-cryogenic YACADIRE bench reduces the exploitable part of the coronagraphic PSF to an angular separation of about $4\lambda/D$ (see Fig.~\ref{fig:agpm_psf}). Beyond this separation, the signal becomes dominated by background noise and by residuals associated with the background subtraction process. Second, the YACADIRE bench is not free from optical aberrations. Indeed, when the AGPM intrinsic rejection ratio exceeds 100:1, one notices a significant deformation of the coronagraphic intensity profile compared to the non-coronagraphic profile, and in particular a bump appearing at about $1\lambda/D$ (see Fig.~\ref{fig:agpm_psf}). This behavior is consistent with low-order aberrations dominating the coronagraphic performance. The vortex effect associated with the AGPM only affects the coherent part of the input beam, and reveals these low-order aberrations that were unnoticeable in non-coronagraphic images. In order to assess the true performance of the AGPM, we propose to compute the experimental rejection ratio $R$ by integrating the flux over an aperture of size equal to the resolution element $\lambda/D$, which encircles 80\% of the total energy in the non-coronagraphic PSF, and where the contribution of the coherent core is most prominent. This definition of the rejection ratio is the same as the one used in \citet{Delacroix13}, and would be strictly equivalent to the definition used in the RCWA simulations of Sect.~\ref{sec:design} if the optical system were perfect. Due to noise and optical aberrations, the measured rejection ratios will generally be somewhat underestimated, especially for the highest rejection ratios.
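In practice, this aperture-based estimate reduces to a flux ratio within a circular aperture of radius $\lambda/D$ centered on the PSF core. A minimal sketch (function names, pixel coordinates and the aperture radius are illustrative, not our reduction pipeline):

```python
import numpy as np

def aperture_flux(img, cx, cy, radius_px):
    """Total flux inside a circular aperture (radius in pixels)."""
    y, x = np.indices(img.shape)
    return img[(x - cx)**2 + (y - cy)**2 <= radius_px**2].sum()

def rejection_ratio(psf_off, psf_on, center, lam_over_d_px):
    # Flux ratio within one resolution element (lambda/D), where the
    # coherent PSF core dominates over background and aberration residuals.
    cx, cy = center
    return (aperture_flux(psf_off, cx, cy, lam_over_d_px) /
            aperture_flux(psf_on, cx, cy, lam_over_d_px))
```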
Our rejection ratio measurements were performed using various spectral filters (see Table~\ref{tab:filters}): broadband L or M filters were placed directly in the cryostat to reduce background emission, while narrow-band filters were used at room temperature. The coronagraphic performance measurements performed in the two broadband filters are reported in Table~\ref{tab:agpm_params}. As expected, a large fraction of the fabricated AGPMs do not reach a rejection ratio of 100, which we consider a bare minimum for on-sky coronagraphic applications \citep{Mawet10}.
\begin{table}
\caption{Central wavelength and width of the filters used for the AGPM coronagraphic performance evaluation.}
\label{tab:filters}
\centering
\begin{tabular}{c c c }
\hline\hline
Filter & $\lambda_0$ [$\mu$m] & $\Delta \lambda$ [$\mu$m] \\
\hline
Broad L & 3.750 & 0.50 \\
Narrow L-short & 3.475 & 0.10 \\
Narrow L-mid & 3.800 & 0.18 \\
Narrow L-long & 4.040 & 0.16 \\
Broad M & 4.700 & 0.60 \\
\hline
\end{tabular}
\end{table}
\section{Grating tuning processes} \label{sec:tuning}
Using the results from the performance measurements together with the RCWA calculations, and using the grating parameters measured on the twin sample as a first guess in the RCWA modeling, it becomes possible to determine more precisely the parameters of the grating, and to compute by how much the grating parameters need to be tuned to improve the AGPM coronagraphic performance. While the line width $w_t$ and sidewall angle $\alpha$ are difficult to change in a controlled manner after the AGPM has been fabricated, the grating depth $h$ can be tuned. Here we demonstrate two techniques for either making the AGPM grating deeper or shallower.
\begin{figure}
\centering
\includegraphics[width=8.8cm]{fig6.png}
\caption{Etch profile for the cracked AGPM-L10. \textit{Left.} Profile with deeper grooves, where some remaining Al can be seen on the top of the grating (and upper part of the sidewalls of the grating). \textit{Center.} Grating before tuning. \textit{Right.} Profile with shallower grooves, where the top has become faceted.}
\label{fig:tuning_sem}
\end{figure}
To make the AGPM deeper, we used a technique that we recently developed for increasing the depth of an already fabricated high aspect ratio diamond grating \citep{Vargas16}. In short, a layer of Al is deposited on top of the diamond AGPM (Fig.~\ref{fig:tuning_process}a) and, due to shadowing effects, the top of the grating is covered with a thicker layer than the bottom of the grating. The thin Al layer at the bottom is etched away using ICP-RIE (Fig.~\ref{fig:tuning_process}b), leaving the bottom of the groove without Al while the top is still covered with Al. Finally, the AGPM is briefly diamond etched using ICP-RIE (Fig.~\ref{fig:tuning_process}c).
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig7.png}
\caption{Comparison of rejection ratio between L-band and M-band optical measurements and the RCWA model. \textit{Left}. Measured rejection ratio in five different filters for AGPM-L9 and its remastered versions. AGPM-L9r and AGPM-L9r2 were etched 400~nm and 780~nm deeper than the original AGPM-L9, respectively. \textit{Right}. Possible parameter solutions (marked with crosses) for each optimization, overlaid on the predicted coronagraphic performance from RCWA simulations (white lines), with $\alpha=2\fdg45$ and a wavelength range of 3.5--4.0~$\mu$m.}
\label{fig:L5perf}
\end{figure*}
To make the AGPM shallower, a new technique was developed, in which the grating is filled with photoresist before etching the diamond as above. The diamond surface is hydrophilic due to the previous diamond plasma etching in oxygen chemistry and, in addition, the grating structure makes the surface even more hydrophilic \citep{Karlsson10}; resist dropped on the AGPM will thus immediately fill up the grating. The process is as follows: the AGPM was placed on a spinner, and Shipley S1813 photoresist was dropped on the surface until it was completely covered. Excess resist was removed by spinning the AGPM substrate at 6000~rpm for 30~s, which leaves about 1~$\mu$m of resist on top of the grating (Fig.~\ref{fig:tuning_process}d). To completely bake out the solvent, the AGPM was placed on a hot plate at $115~\degr$C for 20~minutes. The AGPM was then briefly diamond etched using ICP-RIE. This process quickly removes the resist on top of the grating (40--60~s), while the resist inside the grating grooves remains much longer and thus protects the grooves and sidewalls of the diamond structure (Fig.~\ref{fig:tuning_process}e). In other words, the top of the diamond grating is almost directly attacked by the oxygen plasma, while the resist in the grooves protects these areas of the diamond grating. Although the etch selectivity between diamond and resist is very low (i.e., resist is etched much faster than diamond), the top diamond area of the grating can be etched by several hundred nanometers before the resist in the grooves is etched away. If an even shallower grating is needed, the process can be repeated. However, if the grating is etched for too long, faceting of the top of the grating might start to reduce the optical performance (see Fig.~\ref{fig:tuning_sem}, right).
AGPM-L10 was used as a test sample to validate our processes to etch deeper and shallower gratings. It was cracked in two halves and characterized (Fig.~\ref{fig:tuning_sem}).
The half that was etched deeper was sputter-coated with a 400~nm thick Al layer, followed by Al plasma etching using Cl$_2$ and BCl$_3$ (gas flows of 15~sccm and 50~sccm, respectively) at 5~mTorr, with an ICP power of 600~W and a bias power of 30~W, for 25~s. The diamond substrate was then plasma etched in an oxygen plasma at 5~mTorr, with an ICP power of 850~W and a bias power of 220~W, for 150~s, resulting in 400~nm deeper grooves. The final grating grooves have a slightly higher sidewall angle, which must be taken into consideration when using RCWA simulations to find the optimal depth. The other half of AGPM-L10, chosen as a test sample to reduce the grating depth, was filled with photoresist as described above. The process ended with the same oxygen plasma recipe (and etch time) as when etching the grating deeper, resulting in 400~nm shallower grooves.
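The 400~nm removed in 150~s corresponds to a diamond etch rate of roughly 2.7~nm/s. As a simple consistency check, the same rate reproduces, to within the stated $\sim$5\% rate variation, the depth reductions later obtained with this recipe on AGPM-L11 to L13 (Sect.~\ref{sec:newperf}):

```python
rate = 400 / 150   # nm/s, from the 400 nm removed in 150 s on AGPM-L10
for name, etch_s in [("AGPM-L11", 120), ("AGPM-L12", 150), ("AGPM-L13", 110)]:
    # Predicted depth reduction; the measured values were 320, 420 and 290 nm.
    print(name, round(rate * etch_s), "nm")
```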
\section{Performance after re-etching} \label{sec:newperf}
\begin{table}
\caption{Rejection ratios in the broadband L filter for the optimized AGPMs.}
\label{tab:agpm_perf}
\centering
\begin{tabular}{l c c c c}
\hline\hline
& Tuning & $R$ before & $\Delta h$ & $R$ after \\
\multicolumn{1}{c}{Name} & process & tuning & [$\mu$m] & tuning \\
\hline
AGPM-L9r & Al deposition & 30 & $+0.40$ & 100 \\
AGPM-L9r2 & Al deposition & 100 & $+0.38$ & 400 \\
AGPM-L11r & Resist filling & 70 & $-0.32$ & 910 \\
AGPM-L12r & Resist filling & 70 & $-0.42$ & 470 \\
AGPM-L13r & Resist filling & 110 & $-0.29$ & 190 \\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=11.3cm]{fig8a.png} \hspace*{2mm}
\includegraphics[width=6.4cm]{fig8b.png}
\caption{Performance of AGPM-L11 before and after the grating tuning process. \textit{Left}. PSF radial profiles measured on the YACADIRE bench in the broadband L filter, together with an illustration of the image shape on the right-hand side. \textit{Right}. Illustration of the experimental coronagraphic performance measured in the five broad- and narrow-band filters, overlaid on the predicted performance based on our best-fit RCWA model of the grating.}
\label{fig:L7perf}
\end{figure*}
The AGPMs showing a rejection ratio around or below 100 (see Table~\ref{tab:agpm_params}) were chosen to test and validate our new tuning processes. Here, we describe the tuning of AGPM-L9 as an example. The experimental rejection ratios measured after the initial etching are shown in Fig.~\ref{fig:L5perf} (left) for the broadband L and the three narrow-band filters. These rejection ratios were fit using our RCWA model, giving rise to a family of possible solutions for the line width $w_t$ and grating depth $h$ (thin white lines in Fig.~\ref{fig:L5perf}, right), assuming a sidewall angle $\alpha$ of $2\fdg45$. Thanks to the SEM measurements performed on the twin sample (see Sect.~\ref{sec:fabrication}), we can further constrain the grating parameters, which must be located within the rectangle formed by the four white crosses in Fig.~\ref{fig:L5perf} (right), taking into account the SEM measurement uncertainties of $\pm 10$~nm in $w_t$ and $\pm 50$~nm in $h$. The white lines in the bottom right corner are unrealistic RCWA solutions, as the twin sample confirms that we have not etched that deep. AGPM-L9 was then processed in the same way as AGPM-L10, with the aim of making the grating 400~nm deeper. The process resulted in a ``new'' phase mask referred to as AGPM-L9r, and the subsequent performance evaluation on our coronagraphic test bench showed an increase of the rejection ratio to about 100 in the broadband L filter, as illustrated in Fig.~\ref{fig:L5perf} (left). Based on further RCWA simulations, it was evident that the tuned AGPM was still too shallow, by about 400~nm. A second etch iteration was performed (etch time 200~s) to increase the grating depth by that amount, resulting in the final AGPM-L9r2. The final performance evaluation shows an improvement of the rejection ratio to 400 in the broadband L filter (3.5--4.0~$\mu$m), see Fig.~\ref{fig:L5perf} and Table~\ref{tab:agpm_perf}.
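The parameter retrieval described above amounts to a constrained fit: among the $(w_t, h)$ pairs allowed by the SEM box, find the one whose predicted rejection ratios best match the measurements in all filters. The sketch below illustrates this with a simple grid search; the \texttt{model} callable is only a placeholder for the actual RCWA computation, which is not reproduced here:

```python
import numpy as np

def fit_grating(measured, model, wt_grid, h_grid, sem_box):
    """Grid-search the (w_t, h) pair minimizing the log-space misfit
    between measured rejection ratios and a forward model, restricted
    to the box of values allowed by the SEM measurements.

    measured : dict mapping filter name -> measured rejection ratio
    model    : callable (wt, h, filt) -> predicted rejection ratio
               (placeholder for the RCWA computation)
    sem_box  : ((wt_min, wt_max), (h_min, h_max))
    """
    (wt_lo, wt_hi), (h_lo, h_hi) = sem_box
    best, best_cost = None, float("inf")
    for wt in wt_grid:
        if not wt_lo <= wt <= wt_hi:
            continue
        for h in h_grid:
            if not h_lo <= h <= h_hi:
                continue
            cost = sum((np.log10(model(wt, h, f)) - np.log10(r))**2
                       for f, r in measured.items())
            if cost < best_cost:
                best, best_cost = (wt, h), cost
    return best
```

In our actual analysis, the misfit is evaluated against the RCWA predictions for the broadband and narrow-band filters of Table~\ref{tab:filters}, and the SEM box comes from the twin-sample measurements.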
Other successful grating tuning examples include AGPM-L11, AGPM-L12, and AGPM-L13. These AGPMs were originally etched deeper than the previous samples, in an attempt to demonstrate our capability to produce AGPMs delivering good performance simultaneously across the L and M bands (M-band operations requiring deeper gratings), as shown in Fig.~\ref{fig:rejection_profile}. AGPM-L13 turned out to be the closest approach to a science-grade LM-band AGPM, delivering rejection ratios higher than 100 in both broadband filters. After performance evaluation on our coronagraphic test bench, it was decided to use these three AGPMs to demonstrate the grating depth reduction process. The three AGPMs were thus made shallower using the recipe tested on AGPM-L10. In order to optimize their depth, AGPM-L11 was etched for 120~s, AGPM-L12 for 150~s and AGPM-L13 for 110~s, corresponding to a decrease in etch depth of 320~nm, 420~nm and 290~nm, respectively. As expected, the three tuned AGPMs all showed a better rejection ratio at L band (at the expense of degraded performance at M band), with AGPM-L11 setting a record broadband rejection ratio of 910 (see Table~\ref{tab:agpm_perf}). The coronagraphic PSF of AGPM-L11 in the broadband L filter before and after tuning is shown as an illustration in Fig.~\ref{fig:L7perf}, together with a graphical illustration of its performance in all broad- and narrow-band filters compared to the best-fit RCWA model. A rejection ratio of up to 1100 is measured in the short-wave narrow-band filter. At this level of performance, we expect the optical quality of the wavefront delivered by the YACADIRE test bench to become a limitation to the measured performance. This is further suggested by the ``donut'' shape of the PSF shown as an inset in Fig.~\ref{fig:L7perf}, which is the expected behavior of the vortex phase mask in the presence of low-order aberrations.
The actual performance of this AGPM could thus be even better than what is shown here. Based on the measured peak rejection of 900 in broadband L, we would expect a raw contrast of about $10^{-5}$ at an angular separation of $2\lambda/D$ for a perfect wavefront on a circular aperture, rather than the $6\times10^{-5}$ shown in Fig.~\ref{fig:L7perf}. We also note that the noise floor of YACADIRE in broadband L corresponds to a raw contrast of about $2\times10^{-5}$, due to the limited amount of photons making it through the single-mode fiber.
In summary, both methods for optimizing the grating depth (shallower or deeper) were successfully performed. All tuned AGPMs (L9r2, L11r, L12r, and L13r) show better rejection performance. The results for all the tuned AGPMs are summarized in Table~\ref{tab:agpm_perf}, where the suffix ``r'' denotes remastered AGPMs. We note that these sub-micron scale high aspect ratio gratings are never perfect; the angle of the sidewall is not completely constant, and there are so-called trenching effects, meaning that the floor of the grating is not at the exact same level everywhere (i.e., the etch depth is not uniform, see Fig.~\ref{fig:tuning_sem}). A completely accurate RCWA simulation of the AGPM is therefore not possible, but based on the experimental characterization of the AGPM, we can nevertheless optimize the depth of the grating in an efficient way. As long as the grating parameters are reasonably within specification ($\pm10$\%, which holds for our described manufacturing process), it is always possible to hit an optimal etch depth giving a rejection ratio of 500 or more, which makes the AGPM suitable for installation in current and future ground-based infrared high-contrast imagers.
\section{Conclusions and outlook}
Over the last few years, we have produced several AGPMs designed for the thermal infrared regime by etching concentric subwavelength gratings into synthetic diamond substrates. Some of them are now installed in world-leading ground-based observatories, such as the Very Large Telescope, the Large Binocular Telescope and the Keck Observatory. Over the years, we have however discovered that it is very difficult to fabricate AGPMs with good optical performance in a one-iteration process. Errors in the grating parameters will always exist when fabricating nanometer-sized, high aspect ratio structures over a relatively large (centimeter-scale) area, resulting in degraded optical performance.
In this paper, we have successfully demonstrated that we can tune the AGPM grating depth, that is, make it deeper or shallower, even after completing the initial etching process. For that purpose, we have combined the information from SEM micrographs, RCWA modeling, and coronagraphic performance characterization to determine the grating parameters (line width, groove depth, and sidewall angle) with a sufficient accuracy to precisely determine by how much the grating depth needs to be modified to reach the highest possible coronagraphic performance. Two different processes have been presented and validated to reduce or increase the grating depth, enabling the production of L-band AGPMs with broadband rejection ratios up to about 1000:1. Such performance would allow raw contrasts down to $10^{-5}$ to be reached at two resolution elements from the optical axis for a perfect input wave front on a circular, unobstructed aperture. This will ensure that the intrinsic performance of the AGPM does not significantly affect the on-sky coronagraphic performance of current and upcoming thermal infrared high-contrast imagers, such as the Mid-infrared E-ELT Imager and Spectrograph \citep[METIS,][]{Brandl14}, where wave front aberrations and diffraction from the non-circular input pupil will set the limit on the achievable raw contrast.
Future work will focus on two main aspects. First, we are in the process of building a new coronagraphic bench \citep[VODCA,][]{Jolivet14}, which should allow the characterization of vortex phase masks in the thermal infrared with a higher dynamic range and better optical quality than currently possible on the YACADIRE bench. Second, we are trying to reduce the grating period down to a sub-micron size to enable operations at K- and H-bands, with promising results already obtained at K-band. For such small grating periods, the errors in the fabrication process will become even more critical compared with L-band AGPMs. Tuning the grating of these AGPMs will certainly be a must to achieve very high rejection performance.
\begin{acknowledgements}
The authors are grateful to J\'er\^ome Parisot (LESIA, Observatoire de Paris), who manages the YACADIRE test bench, for his availability and help during every AGPM test campaign. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (ERC Grant Agreement n. 337569), the French Community of Belgium through an ARC grant for Concerted Research Action, and the Swedish Research Council (VR) through project grant 621-2014-5959.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
So far, most exoplanets have been detected by indirect methods, based on measuring the effect of the companion on its host star, either in its spectrum thanks to the Doppler effect or in its photometric curve during transits. The currently known exoplanet population is therefore biased, since these techniques are mostly sensitive to massive companions on relatively short-period, close-in orbits. In that context, direct imaging offers a good complement to probe larger separations around stars. In addition, the direct detection of photons emitted or reflected by the planet allows for photometric and spectroscopic studies, which is crucial to gain insight into its atmospheric composition. It also enables precise astrometry, which over time provides the orbital characteristics of the planets, and insight into their dynamical environment and history. However, direct imaging is a challenging technique, as it requires reaching a very high contrast, typically ranging from $10^{-4}$ for hot giant planets to $10^{-10}$ for Earth-like planets, at a high angular resolution ($\sim0\farcs1$). Coronagraphy, which aims to reject the glaring light of the central star to enhance the signal from the faint companion, combined with extreme adaptive optics systems and advanced image processing, is a requirement to reach such performance.
Among all possible coronagraph designs, the vortex coronagraph was proposed a decade ago by \cite{Mawet05a} and \cite{Foo05}. It consists of a focal plane phase mask inducing a phase ramp around the optical axis. When passing through the phase mask, the light of an on-axis star is diffracted and redistributed outside the geometrical pupil of the telescope in a downstream pupil plane. A diaphragm, referred to as Lyot stop, is then used to block the light of the central star. The light of an off-axis companion is not, or only partially, affected by the vortex phase pattern and can propagate towards the detector.
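This redistribution of on-axis starlight outside the geometric pupil can be illustrated with a simple scalar Fourier-optics simulation (a numerical sketch only, with arbitrary grid parameters; it ignores polarization and the subwavelength-grating implementation of the phase ramp):

```python
import numpy as np

N, R = 512, 50                      # grid size and pupil radius (pixels)
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (x**2 + y**2 <= R**2).astype(float)

def to_focal(field):                # entrance pupil -> focal plane
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))

def to_pupil(field):                # focal plane -> downstream pupil plane
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))

vortex = np.exp(2j * np.arctan2(y, x))   # charge-2 phase ramp exp(i*2*theta)
lyot = x**2 + y**2 <= (0.8 * R)**2       # Lyot stop undersized to 80%

def energy_through_stop(focal_mask):
    lyot_plane = to_pupil(to_focal(pupil) * focal_mask)
    return (np.abs(lyot_plane)**2)[lyot].sum()

# Fraction of on-axis starlight surviving the Lyot stop, relative to the
# no-mask case; ideally zero, small but non-zero here because the discrete
# grid cannot represent the central phase singularity exactly.
leak = energy_through_stop(vortex) / energy_through_stop(np.ones_like(vortex))
print(leak)
```

An off-axis source, whose PSF core misses the phase singularity, is largely unaffected and propagates through the stop.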
One possible implementation of vortex phase masks is based on the manufacturing of concentric rings creating a subwavelength rotational grating, in a design referred to as the annular groove phase mask \citep[AGPM,][]{Mawet05a}. For a given linear polarization of the incoming light, the phase mask acts as a rotating phase retarder, thus inducing the desired phase ramp. Our team has previously demonstrated a fabrication process for diamond AGPMs working in the L (3.4--4.1~$\mu$m), M (4.4--5.0~$\mu$m) and N (10--13~$\mu$m) bands \citep{Forsberg13}. The diamond AGPMs were manufactured using nanoimprint lithography (NIL) and inductively coupled plasma reactive ion etching (ICP-RIE) in high density plasmas using highly oxidizing chemistries \citep{Karlsson03, Hwang04, Gu2004}. The fabricated AGPMs designed for the L band were characterized on the YACADIRE coronagraphic test bench at Observatoire de Paris \citep{Delacroix13}, showing starlight rejection up to 500:1. Our phase masks now equip infrared coronagraphs on several 10-m class telescopes (VLT/NACO, \citealt{Mawet13}; VLT/VISIR, \citealt{Del12b}; LBT/LMIRCam, \citealt{Defrere14}; Keck/NIRC2, Serabyn et al., submitted to \aj).
Here we report on a new generation of AGPMs optimized for the highest possible coronagraphic performance in the L and M bands. The design of the subwavelength grating based on rigorous coupled wave analysis \citep[RCWA,][]{Mawet05b} is described in Sect.~\ref{sec:design}. A short description of our improved fabrication process for high aspect ratio diamond gratings, and of means to assess the grating parameters during and after etching is then given in Sect.~\ref{sec:fabrication}. Section~\ref{sec:performance} focuses on the performance assessment of the phase masks using the YACADIRE coronagraphic test bench at the Observatoire de Paris. Considering that the depth of the grating is a determining parameter, we propose in Sect.~\ref{sec:tuning} two possible methods to finely tune the grating depth with further etching and thereby reach the best possible coronagraphic performance. In Sect.~\ref{sec:newperf}, this process is demonstrated on a few AGPMs, which have been successfully re-etched and show significantly improved coronagraphic performance after tuning.
\begin{figure}
\centering
\includegraphics[width=3.6cm]{fig1.png}
\caption{Schematic picture of a cross section of the AGPM, showing the sidewall angle ($\alpha$), the grating depth ($h$) the line width ($w_t$) and the grating period ($\Lambda$).}
\label{fig:agpm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm]{fig2.png}
\caption{RCWA simulations showing the rejection ratio for a grating period of 1.42~$\mu$m in a 3.4--4.1~$\mu$m broadband filter. \textit{Top.} Fixed grating depth of 5.5~$\mu$m, as a function of the line width $w_t$ and sidewall angle $\alpha$. \textit{Middle.} Fixed sidewall angle of $2\fdg45$, as a function of the grating depth $h$ and line width $w_t$. \textit{Bottom.} Fixed line width of 0.7~$\mu$m, as a function of the sidewall angle $\alpha$ and grating depth $h$.}
\label{fig:rcwa}
\end{figure}
\section{Design and simulation} \label{sec:design}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig3.png}
\caption{Rejection ratio as a function of wavelength for different grating depths between 5.0 and 6.0~$\mu$m (from blue to red), with the lines separated by steps of 0.02~$\mu$m. The sidewall angle is set to $2\fdg45$ and the line width $w_t$ is 0.70~$\mu$m.}
\label{fig:rejection_profile}%
\end{figure*}
The subwavelength grating composing the AGPM \citep{Mawet05b} is defined through the following parameters (see Fig.~\ref{fig:agpm}): the grating period $\Lambda$, the line width at the top of the grating $w_t$, the depth of the grating $h$, and the angle of the sidewall $\alpha$. The grating period $\Lambda$ is fixed to fulfill the subwavelength criterion $\Lambda < \lambda/n$ (when the ambient medium is air), where $\lambda$ is the illuminating wavelength and $n$ is the refractive index of the substrate ($n = 2.38$ for diamond in the thermal infrared regime). Inserting these values yields $\Lambda < 1.428$~$\mu$m for the short-wave end of the L band ($\lambda=3.4$~$\mu$m). Here, we set $\Lambda$ to 1.42~$\mu$m for all our L-band components. All our AGPMs feature a two-dimensional subwavelength grating on their back side, acting as an anti-reflection treatment. This grating keeps internal reflections at the interface between diamond and air below 2\%, which effectively reduces double-pass ghost signals in our AGPMs to less than 0.1\% over the whole L band \citep{Delacroix13}.
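The numbers entering the subwavelength criterion are easily checked (a trivial sketch using the values quoted above):

```python
n_diamond = 2.38       # refractive index of diamond in the thermal infrared
lam_min = 3.4          # short-wave end of the L band [um]
Lambda = 1.42          # adopted grating period [um]

limit = lam_min / n_diamond   # subwavelength criterion: Lambda < lambda/n
print(round(limit, 3))        # ~1.429 um
assert Lambda < limit         # the adopted period is (just) subwavelength
```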
The sidewall angle $\alpha$ is determined by the etch process and needs to be low in order to reach high aspect ratios (grating depth divided by the width of the grooves). The process used to make most AGPMs presented in this paper produces a sidewall angle close to $2\fdg45$. With $\Lambda$ and $\alpha$ known, we used RCWA simulations to find the optimal values of $w_t$ and $h$ for high coronagraphic performance \citep{Delacroix12}. The starlight rejection efficiency is quantified by the \emph{rejection ratio} $R$, defined as the ratio of the total intensity of the non-attenuated point spread function (PSF) to the total intensity of the PSF attenuated by the coronagraph. In Fig.~\ref{fig:rcwa}, the rejection ratio is plotted for the fixed period, holding one of the remaining parameters constant, as a function of the other two. From this figure, it becomes evident that, to a large extent, an error in the line width can be compensated for by changing the etch depth (and vice versa) to improve the rejection ratio. The simulations also show that a variation of $0\fdg1$ in $\alpha$ can lead to a significant loss of performance. Moreover, the RCWA simulations presented in Fig.~\ref{fig:rcwa} reveal that a very small change in $w_t$ (by $\sim10$~nm) or $h$ (by $\sim100$~nm) can lead to a dramatically lowered optical performance of the AGPM. Figure~\ref{fig:rejection_profile} illustrates the fundamental limitations to the rejection ratio of AGPM-based coronagraphs on broadband filters. While the mean rejection ratio on a broadband L filter (3.4--4.1~$\mu$m) could reach up to 2500:1, an AGPM covering both L and M bands (3.4--5.0~$\mu$m) cannot reach a rejection ratio larger than 200:1 simultaneously in both bands. Because of the uncertainties in the etching process, we consider a rejection ratio of about 500:1 to be the maximum value we can reach on a broadband L filter using a single etching process, based on our previous fabrication attempts \citep{Delacroix13}.
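The rejection ratio $R$ defined above is simply a flux ratio between the two PSFs; a minimal numerical sketch (the Gaussian PSF below is purely illustrative, not an RCWA result) is:

```python
import numpy as np

def rejection_ratio(psf_off, psf_on):
    """Ratio of the total intensity of the non-attenuated (off-axis) PSF
    to that of the PSF attenuated by the coronagraph (on-axis)."""
    return psf_off.sum() / psf_on.sum()

# Toy example: a coronagraphic PSF uniformly 500x fainter than the off-axis one
x = np.linspace(-5, 5, 201)
xx, yy = np.meshgrid(x, x)
psf_off = np.exp(-(xx**2 + yy**2))   # toy off-axis PSF
psf_on = psf_off / 500.0             # ideally attenuated, R = 500
print(f"R = {rejection_ratio(psf_off, psf_on):.0f}")  # -> R = 500
```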
\section{Fabrication and grating characterization} \label{sec:fabrication}
\begin{table*}
\caption{Grating parameters, expected and measured broadband rejection ratios at L band (3.5--4.0 $\mu$m) and M band (4.4--5.0 $\mu$m) for our AGPMs after fabrication.}
\label{tab:agpm_params}
\centering
\begin{tabular}{l c c c c c c c}
\hline\hline
& $w_t$ & $h$ & $\alpha$ & Expected $R$ & Measured $R$ & Expected $R$ & Measured $R$ \\
\multicolumn{1}{c}{Name} & [$\mu$m] & [$\mu$m] & [degrees] & (L band) & (L band) & (M band) & (M band) \\
\hline
AGPM-L5 & $0.630\pm 0.015$ & $4.90\pm 0.20$ & $3.20\pm 0.20$ & 30--2400 & 550 & 10--130 & 80 \\
AGPM-L6\tablefootmark{a} & $0.625\pm0.015$ & $4.57\pm0.20$ & $3.20\pm 0.20$ & 20--1600 & 150 & 10--50 & 30 \\
AGPM-L7\tablefootmark{a} & $0.625\pm0.015$ & $4.82\pm0.20$ & $3.45\pm 0.20$ & 20--1400 & 550 & 10--110 & N/A \\
AGPM-L8 & $0.645\pm0.015$ & $4.86\pm 0.20$ & $3.45\pm0.20$ & 30--270 & 50 & 10--80 & N/A \\
AGPM-L9\tablefootmark{b} & $0.750\pm0.010$ & $5.10\pm0.05$ & $2.45\pm0.10$ & 20--110 & 30 & 10--40 & 20 \\
AGPM-L10\tablefootmark{b} & $0.630\pm0.010$ & $5.20\pm0.05$ & $2.10\pm0.10$ & 10--40 & 20 & 70--450 & 90 \\
AGPM-L11\tablefootmark{b} & $0.650\pm0.010$ & $4.92\pm0.05$ & $2.22\pm 0.10$ & 40--670 & 70 & 110--520 & 240 \\
AGPM-L12\tablefootmark{b} & $0.650\pm0.010$ & $4.92\pm0.05$ & $2.22\pm 0.10$ & 40--670 & 70 & 110--520 & 120 \\
AGPM-L13\tablefootmark{b} & $0.590\pm0.010$ & $4.67\pm0.05$ & $2.45\pm0.10$ & 30--250 & 110 & 70--250 & 150 \\
AGPM-L14\tablefootmark{c} & $0.615\pm0.010$ & $4.67\pm0.05$ & $2.45\pm0.10$ & 70--1860 & 370 & 50--160 & 90 \\
AGPM-L15 & $0.630\pm0.010$ & $4.67\pm0.05$ & $2.45\pm0.10$ & 130--2300 & 630 & 40--120 & 70 \\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{Installed in the Keck/NIRC2 camera.}
\tablefoottext{b}{Chosen for grating tuning demonstration.}
\tablefoottext{c}{Installed in the LBT/LMIRCam camera.}}
\end{table*}
Polycrystalline diamond substrates of optical quality (Diamond Materials GmbH and Element Six Ltd.) with a diameter of 10~mm and a thickness of 300~$\mu$m were used. We have recently demonstrated an improved fabrication process for high aspect ratio diamond gratings \citep{Vargas16}. Most of the AGPMs presented in this work have been manufactured using this process, which involves nano-replication and ICP-RIE of Al, Si and diamond. Previously we used nanoimprint lithography (NIL) in the nano-replication step \citep{Forsberg13,Delacroix13}, but we noticed that this process gave rise to a large reduction in line width, and that variations in line width were common, especially around the center of the AGPM. In our new process, we use solvent assisted micro molding (SAMIM, \citealt{Kim97, Vargas16}), which gives very good fidelity in the replicated patterns with nearly no difference in line widths compared to the master AGPM pattern. Moreover, our improved fabrication process uses pure oxygen chemistry during the ICP-RIE of diamond, yielding a lower sidewall angle \citep{Vargas16}, which is beneficial for fabricating high-performing AGPMs.
Eleven AGPMs were successfully fabricated (see Table~\ref{tab:agpm_params}). They were numbered from AGPM-L5 to L15 \citep[AGPM-L1 to L4 were presented in][]{Delacroix13}. We would like to point out that AGPM-L5 to L8 were fabricated using our first-generation fabrication process, based on NIL in the nano-replication step \citep{Forsberg13,Delacroix13}, except that C$_4$F$_8$ was added as an etch gas in the Si etch step, giving less shrinkage in line width \citep{Vargas16}. With the first-generation process, we had to repeat the nano-replication and thin film etching steps several times (for a given substrate) before getting an AGPM with a correct line width (acceptable values: 590~nm~$\leq w_t \leq$~750 nm). As a result, the surface of the diamond substrate was degraded, and for this reason we had to discard several diamond substrates. Our improved process completely removes these problems.
Evaluating the parameters of the etched gratings is not an easy task. Indeed, metrological methods such as atomic force microscopy (AFM) cannot reach down into the trenches, and the features are too small for optical interferometers. Furthermore, a precise value of the sidewall angle can only be measured by cracking the AGPM perpendicularly to the grating to resolve a cross section of the profile in scanning electron microscopy (SEM). Therefore, the geometry of the AGPMs' high aspect ratio gratings was analyzed by cross section micrographs using SEM. However, since all of the AGPMs may eventually be installed in telescopes, none was cracked except AGPM-L10. For each batch of diamond AGPMs (i.e., AGPMs etched together and therefore having almost the same sidewall angle), a twin sample was cracked instead. The twin is a test sample that follows the batch through the complete process and is measured after each critical step. The grating parameters $w_t$ and $\alpha$ are indeed known to vary during the fabrication process; hence it is critical to follow the process using a twin sample, enabling recalculations of the optimal depth $h$.
The twin sample was cracked after the first Al etching step to check if the pattern was transferred successfully, and after the second Al etching to see if the mask layers had smooth sidewalls and to measure the line width before etching. The optimal etch depth was then recalculated by RCWA simulations, using the measured line width and sidewall angle. A third cracking was performed just before reaching the optimal etch depth (based on etch time, using the mean value of the diamond etch rate), to avoid etching the gratings too deep. Previous etch runs showed that the diamond etch rate can vary by up to 5\% \citep{Vargas16}. For the AGPMs demonstrated in \cite{Delacroix13} and AGPM-L5 to AGPM-L8, we wrongly assumed that the etch rate was always the same for our diamond etch recipe, which gave a larger error in the final etch depth (and in $w_t$ and $\alpha$) compared to using a twin sample. Again, $w_t$, $\alpha$ (and $h$) were measured and a final RCWA calculation was made to decide on the optimal etch depth $h$. In the final step, the grating generally just needed to be etched 100--400~nm deeper to reach the optimal depth. The twin was cracked a final time; $w_t$, $\alpha$ and $h$ were determined for the twin, and the parameter values for the AGPM fabricated in parallel were assumed to be the same. The measured grating parameters are reported in Table~\ref{tab:agpm_params}.
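A back-of-envelope depth budget illustrates why timing the etch alone is insufficient; the etch rate and etch time below are assumed round numbers for illustration, not measured values from this work:

```python
# With a ~5% run-to-run variation in the diamond etch rate, controlling the
# depth by etch time alone leaves an uncertainty larger than the ~100 nm
# depth tolerance indicated by the RCWA simulations, hence the twin-sample
# cracking strategy.
nominal_rate = 2.7        # assumed diamond etch rate [nm/s]
etch_time = 1800          # assumed etch time for a ~4.9 um grating [s]

depth = nominal_rate * etch_time            # 4860 nm
depth_error = 0.05 * depth                  # +/- 243 nm from a 5% rate spread
print(f"depth ~= {depth:.0f} nm, timing uncertainty ~= +/-{depth_error:.0f} nm")
```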
\section{Coronagraphic performance evaluation} \label{sec:performance}
The AGPMs were optically tested on the YACADIRE testbench at LESIA, Observatoire de Paris \citep{Boccaletti08}. This bench was previously used to characterize our first generation of AGPMs using a broadband L filter \citep{Delacroix13}. We refer to these two papers for a detailed description of the bench. On YACADIRE, the entrance pupil is defined by a circular aperture. The AGPM is placed at the focal plane, where the beam is converging at $f/40$, resulting in a diffraction pattern of full width at half maximum FWHM~$\simeq 150$~$\mu$m at L band. The diameter of the Lyot stop is undersized to 80\% of the original pupil size.
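The quoted FWHM can be checked against the standard Airy-pattern relation FWHM $\simeq 1.028\,\lambda\,F\#$ for a circular aperture (a textbook diffraction result, not taken from this paper):

```python
# Airy-pattern FWHM for a circular aperture: FWHM ~= 1.028 * lambda * F#
lam = 3.75        # central wavelength of the broadband L filter [micron]
f_number = 40     # beam focal ratio at the AGPM focal plane

fwhm = 1.028 * lam * f_number
print(f"FWHM ~= {fwhm:.0f} um")   # ~154 um, consistent with the quoted ~150 um
```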
\begin{figure}
\centering
\includegraphics[width=8.8cm]{fig4.png}
\caption{Typical experimental results of AGPM optical performance characterization. \textit{Left}. Radial profile of the image with an AGPM translated by 1 mm (off-axis PSF), and with three different centered AGPMs, showing low (AGPM-L9), median (AGPM-L13) or high (AGPM-L15) performance after initial etching. The vertical dashed line shows the limit of the disk on which the flux is integrated to compute the rejection ratio $R$. \textit{Right}. Corresponding images shown with a logarithmic scale.}
\label{fig:agpm_psf}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig5.png}
\caption{Schematic of the tuning process. The three left-most sketches (with a blue background) show the process steps for deeper gratings; a) Al deposition, b) Al etching and c) diamond etching. The two right-most sketches (red background) show the process steps for shallower gratings; d) resist filling and e) diamond etching.}
\label{fig:tuning_process}
\end{figure*}
While the theoretical rejection ratio computed in RCWA simulations corresponds to the ratio of the total intensity in the two PSFs, this metric is not practical in the case of our experimental data for two main reasons. First, the large thermal background encountered on the non-cryogenic YACADIRE bench reduces the exploitable part of the coronagraphic PSF to an angular separation of about $4\lambda/D$ (see Fig.~\ref{fig:agpm_psf}). Beyond this separation, the signal becomes dominated by background noise and by residuals associated with the background subtraction process. Second, the YACADIRE bench is not free from optical aberrations. Indeed, when the AGPM intrinsic rejection ratio exceeds 100:1, one can notice significant deformation of the coronagraphic intensity profile compared to the non-coronagraphic profile, and in particular a bump appearing at about $1\lambda/D$ (see Fig.~\ref{fig:agpm_psf}). This behavior is consistent with low-order aberrations dominating the coronagraphic performance. The vortex effect associated with the AGPM only affects the coherent part of the input beam, and reveals these low-order aberrations that were unnoticeable in non-coronagraphic images. In order to assess the true performance of the AGPM, we propose to compute the experimental rejection ratio $R$ by integrating the flux on an aperture of size equal to the resolution element $\lambda/D$, which encircles 80\% of the total energy in the non-coronagraphic PSF, and where the contribution of the coherent core is most prominent. This definition of the rejection ratio is the same as the one used in \citet{Delacroix13}, and would be strictly equivalent to the definition used in the RCWA simulations of Sect.~\ref{sec:design} if the optical system was perfect. Due to noise and optical aberrations, the measured rejection ratios will generally be somewhat underestimated, especially for the highest rejection ratios.
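The experimental rejection ratio described above amounts to simple aperture photometry within a disk of diameter $\lambda/D$; the sketch below illustrates the idea on a toy Gaussian PSF (the PSF model and attenuation factor are illustrative assumptions):

```python
import numpy as np

def aperture_rejection(psf_off, psf_on, pixel_scale, lam_over_d):
    """Flux of the off-axis PSF integrated over a disk of diameter lambda/D
    centred on the core, divided by the same flux for the coronagraphic PSF."""
    ny, nx = psf_off.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2) * pixel_scale
    mask = r <= lam_over_d / 2.0        # disk of diameter lambda/D
    return psf_off[mask].sum() / psf_on[mask].sum()

# Toy example: ideal attenuation by 300x inside the aperture
x = np.linspace(-4, 4, 161)
xx, yy = np.meshgrid(x, x)
psf_off = np.exp(-(xx**2 + yy**2) / 0.5)
psf_on = psf_off / 300.0
print(f"R = {aperture_rejection(psf_off, psf_on, 8 / 160, 1.0):.0f}")  # -> R = 300
```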
Our rejection ratio measurements were done using various spectral filters (see Table~\ref{tab:filters}): broadband L or M filters were placed directly in the cryostat to reduce background emission, while narrow-band filters were used at room temperature. The coronagraphic performance measurements performed in the two broadband filters are reported in Table~\ref{tab:agpm_params}. As expected, a large fraction of the fabricated AGPMs do not reach a rejection ratio of 100, which we consider as a bare minimum for on-sky coronagraphic applications \citep{Mawet10}.
\begin{table}
\caption{Central wavelength and width of the filters used for the AGPM coronagraphic performance evaluation.}
\label{tab:filters}
\centering
\begin{tabular}{c c c }
\hline\hline
Filter & $\lambda_0$ [$\mu$m] & $\Delta \lambda$ [$\mu$m] \\
\hline
Broad L & 3.750 & 0.50 \\
Narrow L-short & 3.475 & 0.10 \\
Narrow L-mid & 3.800 & 0.18 \\
Narrow L-long & 4.040 & 0.16 \\
Broad M & 4.700 & 0.60 \\
\hline
\end{tabular}
\end{table}
\section{Grating tuning processes} \label{sec:tuning}
Combining the results from the performance measurements with RCWA calculations, and using the grating parameters measured on the twin sample as a first guess in the RCWA modeling, it becomes possible to determine the parameters of the grating more precisely, and to compute by how much they need to be tuned to improve the AGPM coronagraphic performance. While the line width $w_t$ and sidewall angle $\alpha$ are difficult to change in a controlled manner after the AGPM has been fabricated, the grating depth $h$ can be tuned. Here we demonstrate two techniques for making the AGPM grating either deeper or shallower.
\begin{figure}
\centering
\includegraphics[width=8.8cm]{fig6.png}
\caption{Etch profile for the cracked AGPM-L10. \textit{Left.} Profile with deeper grooves, where some remaining Al can be seen on the top of the grating (and upper part of the sidewalls of the grating). \textit{Center.} Grating before tuning. \textit{Right.} Profile with shallower grooves, where the top has become faceted.}
\label{fig:tuning_sem}
\end{figure}
To make the AGPM deeper, we used a technique that we have recently developed for increasing the depth of an already fabricated high aspect ratio diamond grating \citep{Vargas16}. In short, a layer of Al is deposited on top of the diamond AGPM (Fig.~\ref{fig:tuning_process}a), and due to shadowing effects, the top of the grating is covered with a thicker layer than the area at the bottom of the grating. The thin Al layer at the bottom is etched away using ICP-RIE (Fig.~\ref{fig:tuning_process}b), leaving the bottom of the groove without Al and the top still covered with Al. Finally, the AGPM is briefly diamond etched using ICP-RIE (Fig.~\ref{fig:tuning_process}c).
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig7.png}
\caption{Comparison of rejection ratio between L-band and M-band optical measurements and the RCWA model. \textit{Left}. Measured rejection ratio in five different filters for AGPM-L9 and its remastered versions. AGPM-L9r and AGPM-L9r2 were etched 400~nm and 780~nm deeper than the original AGPM-L9, respectively. \textit{Right}. Possible parameter solutions (marked with crosses) for each optimization, overlaid on predicted coronagraphic performance from RCWA simulations (white lines) with $\alpha$=$2\fdg45$ and wavelength region between 3.5-4.0~$\mu$m.}
\label{fig:L5perf}
\end{figure*}
To make the AGPM shallower, a new technique was developed. The grating was filled with photoresist before etching the diamond as above. The diamond surface is hydrophilic due to the previous diamond plasma etching in oxygen chemistry and, in addition, the grating structure makes the surface even more hydrophilic \citep{Karlsson10}; resist dropped on the AGPM will thus immediately fill up the grating. The process is as follows: the AGPM was placed on a spinner, and Shipley S1813 photoresist was dropped on the surface to completely cover it. Excess resist was removed by spinning the AGPM substrate at 6000~rpm for 30~s, which leaves about 1~$\mu$m of resist on top of the grating (Fig.~\ref{fig:tuning_process}d). To completely bake out the solvent, the AGPM was placed on a hot plate at $115~\degr$C for 20~minutes. The AGPM was then briefly diamond etched using ICP-RIE. This process quickly removes the resist on top of the grating (40--60~s), while the resist inside the grating grooves remains much longer and thus protects the grooves and sidewalls of the diamond structure (Fig.~\ref{fig:tuning_process}e). In other words, the top of the diamond grating is almost directly attacked by the oxygen plasma, while the resist in the grooves protects these areas of the diamond grating. Although the etch selectivity between diamond and resist is very low (i.e., resist is etched much faster than diamond), the top diamond area of the grating can be etched several hundred nanometers before the resist in the grooves is etched away. If an even shallower grating is needed, the process can be repeated. However, if the grating is etched for too long, faceting of the top of the grating might start to reduce the optical performance (see Fig.~\ref{fig:tuning_sem}, right).
AGPM-L10 was used as a test sample to validate our processes to etch deeper and shallower gratings. It was cracked in two halves and characterized (Fig.~\ref{fig:tuning_sem}).
The half that was etched deeper was sputtered with 400~nm thick Al followed by Al plasma etching using Cl$_2$ and BCl$_3$ (gas flows of 15~sccm and 50~sccm, respectively) at 5~mTorr and with an ICP power of 600~W and a bias power of 30~W for 25~s. The diamond substrate was then plasma etched in an oxygen plasma at 5~mTorr with an ICP power of 850~W and a bias power of 220~W for 150~s, resulting in 400~nm deeper grooves. The final grating grooves have a slightly higher sidewall angle, which must be taken into consideration when using RCWA simulations for finding the optimal depth. The other half of AGPM-L10, chosen as a test sample to reduce the grating depth, was filled with photoresist as described above. The process was ended with the same oxygen plasma recipe (and time) as when etching the grating deeper, resulting in 400~nm shallower grooves.
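The tuning runs quoted in this paper (here and in Sect.~\ref{sec:newperf}) imply mutually consistent diamond etch rates, a simple consistency check on the numbers in the text:

```python
# Etch time [s] and depth change [nm] for the tuning runs quoted in the text
runs = {
    "AGPM-L10 (deeper)":    (150, 400),
    "AGPM-L10 (shallower)": (150, 400),
    "AGPM-L11r":            (120, 320),
    "AGPM-L12r":            (150, 420),
    "AGPM-L13r":            (110, 290),
}
for name, (t, dh) in runs.items():
    print(f"{name}: {dh / t:.2f} nm/s")
# All runs cluster near ~2.7 nm/s, i.e. a repeatable oxygen-plasma recipe.
```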
\section{Performance after re-etching} \label{sec:newperf}
\begin{table}
\caption{Rejection ratios in the broadband L filter for the optimized AGPMs.}
\label{tab:agpm_perf}
\centering
\begin{tabular}{l c c c c}
\hline\hline
& Tuning & $R$ before & $\Delta h$ & $R$ after \\
\multicolumn{1}{c}{Name} & process & tuning & [$\mu$m] & tuning \\
\hline
AGPM-L9r & Al deposition & 30 & $+0.40$ & 100 \\
AGPM-L9r2 & Al deposition & 100 & $+0.38$ & 400 \\
AGPM-L11r & Resist filling & 70 & $-0.32$ & 910 \\
AGPM-L12r & Resist filling & 70 & $-0.42$ & 470 \\
AGPM-L13r & Resist filling & 110 & $-0.29$ &190 \\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=11.3cm]{fig8a.png} \hspace*{2mm}
\includegraphics[width=6.4cm]{fig8b.png}
\caption{Performance of AGPM-L11 before and after the grating tuning process. \textit{Left}. PSF radial profiles measured on the YACADIRE bench in the broadband L filter, together with an illustration of the image shape on the right-hand side. \textit{Right}. Illustration of the experimental coronagraphic performance measured in the five broad- and narrow-band filters, overlaid on the predicted performance based on our best-fit RCWA model of the grating.}
\label{fig:L7perf}
\end{figure*}
The AGPMs showing a rejection ratio around or below 100 (see Table~\ref{tab:agpm_params}) were chosen to test and validate our new tuning processes. Here, we describe the tuning of AGPM-L9 as an example. The experimental rejection ratios measured after initial etching are shown in Fig.~\ref{fig:L5perf} (left) for the broadband L and the three narrow-band filters. These rejection ratios were fit using our RCWA model, giving rise to a family of possible solutions for the line width $w_t$ and grating depth $h$ (thin white lines in Fig.~\ref{fig:L5perf}, right), assuming a sidewall angle $\alpha$ of $2\fdg45$. Thanks to the SEM measurements performed on the twin sample (see Sect.~\ref{sec:fabrication}), we can further constrain the grating parameters, which must be located within the rectangle formed by the four white crosses in Fig.~\ref{fig:L5perf} (right), taking into account the SEM measurement uncertainties of $\pm 10$~nm in $w_t$ and $\pm 50$~nm in $h$. The white lines in the bottom right corner are unrealistic RCWA solutions, as the twin sample can confirm that we have not etched that deep. AGPM-L9 was then processed in the same way as AGPM-L10, with the aim of making the grating 400~nm deeper. The process resulted in a ``new'' phase mask referred to as AGPM-L9r, and the subsequent performance evaluation on our coronagraphic test bench showed an increase of rejection ratio to about 100 in the broadband L filter, as illustrated in Fig.~\ref{fig:L5perf} (left). Based on further RCWA simulations, it was evident that the tuned AGPM was still too shallow, by about 400~nm. A second etch iteration was performed (etch time 200~s) to increase the grating depth by that amount, resulting in the final AGPM-L9r2. The final performance evaluation shows an improvement of the rejection ratio to 400 in the broadband L filter (3.5--4.0~$\mu$m), see Fig.~\ref{fig:L5perf} and Table~\ref{tab:agpm_perf}.
Other successful grating tuning examples include AGPM-L11, AGPM-L12, and AGPM-L13. These AGPMs were originally etched deeper than the previous samples, in an attempt to demonstrate our capability to produce AGPMs delivering good performance simultaneously across the L and M bands (M-band operations requiring deeper gratings), as shown in Fig.~\ref{fig:rejection_profile}. AGPM-L13 proved to be the closest to a science-grade LM-band AGPM, delivering rejection ratios higher than 100 in both broadband filters. After performance evaluation on our coronagraphic test bench, it was decided to use these three AGPMs to demonstrate the grating depth reduction process. The three AGPMs were thus made shallower using the recipe tested on AGPM-L10. In order to optimize their depth, AGPM-L11 was etched for 120~s, AGPM-L12 for 150~s and AGPM-L13 for 110~s. This corresponded to a decrease of the etch depth by 320~nm, 420~nm and 290~nm, respectively. As expected, the three tuned AGPMs all showed better performance in rejection ratio at L band (at the expense of degraded performance at M band), with AGPM-L11 setting a record broadband rejection ratio of 910 (see Table~\ref{tab:agpm_perf}). The coronagraphic PSF of AGPM-L11 in the broadband L filter before and after tuning is shown as an illustration in Fig.~\ref{fig:L7perf}, together with a graphical illustration of its performance in all broad- and narrow-band filters compared to the best-fit RCWA model. A rejection ratio up to 1100 is measured in the short-wave narrow-band filter. At this level of performance, we expect that the optical quality of the wavefront delivered by the YACADIRE test bench becomes a limitation to the measured performance. This is further suggested by the ``donut'' shape of the PSF shown as an inset in Fig.~\ref{fig:L7perf}, which is the expected behavior of the vortex phase mask in the presence of low-order aberrations.
The actual performance of this AGPM could thus be even better than what is shown here. Based on the measured peak rejection of 900 in broadband L, we would expect a raw contrast of about $10^{-5}$ at an angular separation of $2\lambda/D$ for a perfect wavefront on a circular aperture, rather than the $6\times10^{-5}$ shown in Fig.~\ref{fig:L7perf}. We also note that the noise floor of YACADIRE in broadband L corresponds to a raw contrast of about $2\times10^{-5}$, due to the limited amount of photons making it through the single-mode fiber.
In summary, both methods for optimizing the grating depth (shallower or deeper) were successfully performed. All tuned AGPMs (L9r2, L11r, L12r, and L13r) show better rejection performance. The results for all the tuned AGPMs are summarized in Table~\ref{tab:agpm_perf}, where the suffix ``r'' denotes remastered AGPMs. We note that these sub-micron scale high aspect ratio gratings are never perfect; the angle of the sidewall is not completely constant, and there are so-called trenching effects, meaning that the floor of the grating is not at exactly the same level everywhere (i.e., the etch depth is not uniform, see Fig.~\ref{fig:tuning_sem}). A completely accurate RCWA simulation of the AGPM is therefore not possible, but based on the experimental characterization of the AGPM, we can nevertheless optimize the depth of the grating in an efficient way. As long as the grating parameters are reasonably within specification ($\pm10$\%, which is valid for our described manufacturing process), it is always possible to hit an optimal etch depth giving a rejection ratio of 500 or more, which makes these AGPMs suitable for installation in current and future ground-based infrared high-contrast imagers.
\section{Conclusions and outlook}
Over the last few years, we have produced several AGPMs designed for the thermal infrared regime by etching concentric subwavelength gratings into synthetic diamond substrates. Some of them are now installed in world-leading ground-based observatories, such as the Very Large Telescope, the Large Binocular Telescope and the Keck Observatory. Over the years, however, we have discovered that it is very difficult to fabricate AGPMs with good optical performance in a one-iteration process. Errors in the grating parameters will always exist when fabricating nanometer-sized high aspect ratio structures over a relatively large (centimeter-scale) area, thus resulting in degraded optical performance.
In this paper, we have successfully demonstrated that we can tune the AGPM grating depth, that is, make it deeper or shallower, even after completing the initial etching process. For that purpose, we have combined the information from SEM micrographs, RCWA modeling, and coronagraphic performance characterization to determine the grating parameters (line width, groove depth, and sidewall angle) with a sufficient accuracy to precisely determine by how much the grating depth needs to be modified to reach the highest possible coronagraphic performance. Two different processes have been presented and validated to reduce or increase the grating depth, enabling the production of L-band AGPMs with broadband rejection ratios up to about 1000:1. Such performance would allow raw contrasts up to $10^{-5}$ to be reached at two resolution elements from the optical axis for a perfect input wave front on a circular, unobstructed aperture. This will ensure that the intrinsic performance of the AGPM does not significantly affect the on-sky coronagraphic performance for current and upcoming thermal-infrared high-contrast imagers, such as the Mid-infrared E-ELT Imager and Spectrograph \citep[METIS,][]{Brandl14}, where wave front aberrations and diffraction from the non-circular input pupil will set the limit on the achievable raw contrast.
Future work will focus on two main aspects. First, we are in the process of building a new coronagraphic bench \citep[VODCA,][]{Jolivet14}, which should allow the characterization of vortex phase masks in the thermal infrared with a higher dynamic range and better optical quality than currently possible on the YACADIRE bench. Second, we are trying to reduce the grating period down to a sub-micron size to enable operations at K- and H-bands, with promising results already obtained at K-band. For such small grating periods, the errors in the fabrication process will become even more critical compared with L-band AGPMs. Tuning the grating of these AGPMs will certainly be a must to achieve very high rejection performance.
\begin{acknowledgements}
The authors are grateful to J\'er\^ome Parisot (LESIA, Observatoire de Paris), who manages the YACADIRE test bench, for his availability and help during every AGPM test campaign. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (ERC Grant Agreement n. 337569), the French Community of Belgium through an ARC grant for Concerted Research Action, and the Swedish Research Council (VR) through project grant 621-2014-5959.
\end{acknowledgements}
\bibliographystyle{aa}
GRB 050315 \citep{va05} has been triggered and located by the BAT instrument \citep{b04,ba05} on board the {\em Swift} satellite \citep{ga04} at 2005-March-15 20:59:42 UT \citep{pa05}. The narrow field instrument XRT \citep{bua04,bua05} began observations $\sim 80$ s after the BAT trigger, one of the earliest XRT observations yet made, and continued to detect the source for $\sim 10$ days \citep{va05}. The spectroscopic redshift has been found to be $z = 1.949$ \citep{kb05}. We present here the first results of the fit of this source in the framework of our theoretical model and point out the new step toward the uniqueness of the explanation of the overall GRB structure made possible by the Swift data of this source.
\section{Our theoretical model}
GRB 050315 observations find a direct explanation in our theoretical model \citep[see][and references therein]{rlet1,rlet2,rubr,rubr2,EQTS_ApJL2,PowerLaws}. We determine the values of the two free parameters which characterize our model: the total energy stored in the Dyadosphere $E_{dya}$ and the mass of the baryons left by the collapse $M_Bc^2 \equiv B E_{dya}$. We follow the expansion of the pulse, composed of the electron-positron plasma initially created by the vacuum polarization process in the Dyadosphere. The plasma self-propels itself outward and engulfs the baryonic remnant left over by the collapse of the progenitor star. As this pulse reaches transparency, the Proper Gamma-Ray Burst (P-GRB) is emitted \citep{rswx99,rswx00,rlet2}. The remaining accelerated baryons, interacting with the interstellar medium (ISM), produce the afterglow emission. The ISM is described by the two additional parameters of the theory: the average particle number density $<n_{ISM}>$ and the ratio $<\mathcal{R}>$ between the effective emitting area and the total area of the pulse \citep{spectr1}, which take into account the ISM filamentary structure \citep{fil}.
The luminosity in fixed energy bands is evaluated by integrating over the equitemporal surfaces \citep[EQTSs, see][]{EQTS_ApJL,EQTS_ApJL2}, computed using the exact solutions of the afterglow equations of motion \citep{PowerLaws}, the energy density released by the totally inelastic collisions of the accelerated baryons with the ISM, measured in the co-moving frame and duly boosted to the observer frame. In the reference frame co-moving with the accelerated baryonic matter, the radiation produced by this interaction of the ISM with the front of the expanding baryonic shell is assumed to have a thermal spectrum \citep{spectr1}.
We correctly reproduce, in several GRBs and in this specific case (see e.g. Figs. \ref{15-350}--\ref{global}), the observed time variability of the prompt emission as well as the remaining part of the afterglow \citep[see e.g.][and references therein]{r02,rubr,rubr2,031203}. The radiation produced by the interaction of the accelerated baryons with the ISM agrees with observations in both intensity and time structure.
As shown in previous cases (GRB 991216 \citep{rubr,beam}, GRB 980425 \citep{cospar02}, GRB 030329 \citep{mg10grazia}, GRB 031203 \citep{031203}), also for GRB 050315, using the correct equations of motion, there is no need to introduce a collimated emission to fit the afterglow observations.
The major difference between our theoretical model and the ones in the current literature \citep[see e.g.][and references therein]{p04} is that what is usually called ``prompt emission'' in our case coincides with the peak of the afterglow emission and is not due to a different physical process \citep{rlet2}. This prediction has up to now been tested in a variety of sources such as GRB 991216 \citep{rubr}, GRB 980425 \citep{cospar02}, GRB 030329 \citep{mg10grazia}, and GRB 031203 \citep{031203}. However, in all such sources the observational data were available only during the prompt emission and the latest afterglow phases, leaving all the in-between evolution undetermined. Now, thanks to the superb data provided by the Swift satellite, we are finally able to confirm, by direct confrontation with the observational data, our theoretical predictions on the GRB structure \citep{rlet2} through a detailed fit of the complete afterglow light curve of GRB 050315, from the peak (i.e. from the so-called ``prompt emission'') all the way to the latest phases, without any gap in the observational data.
\section{GRB 991216}
A basic feature of our model is the sharp distinction between two different components in the GRB structure: the proper GRB (P-GRB), emitted at the moment of transparency, followed by an afterglow completely described by external shocks and composed of three different regimes. In the first afterglow regime the bolometric luminosity monotonically increases with the photon detector arrival time, while the Lorentz gamma factor of the accelerated baryons remains substantially constant. The second regime consists of the bolometric luminosity peak, corresponding to the ``knee'' in the decreasing phase of the baryonic Lorentz gamma factor. In the third regime the bolometric luminosity decreases with arrival time, following the late deceleration of the Lorentz gamma factor.
\begin{figure}
\includegraphics[width=0.9\hsize,clip]{991216}
\caption{This picture shows our prediction of the GRB structure based on the analysis of GRB 991216 \citep{rlet2}. The main panel shows the bolometric light curve computed using our model, composed of the P-GRB and the afterglow, together with the BATSE noise level. The lower right panel shows the BATSE observation of the prompt emission \citep[see][]{grblc99,brbr99}, with the clear identification of the observed ``main burst'' with the peak of the theoretical afterglow light curve and of the observed ``precursor'' with the theoretically predicted P-GRB (see enlargement). The lower left panel shows our theoretical fit of the BATSE observations of the afterglow peak using an inhomogeneous ISM. Details in \citet{r02,rubr,rubr2}.}
\label{991216}
\end{figure}
In some sources the P-GRB is below the observability threshold. In \citet{rlet2} we chose as a prototype the source GRB 991216, which clearly shows the existence of these two components. Both the intensity of the P-GRB relative to the peak of the afterglow and their temporal lag have been theoretically predicted within a few per cent (see Fig. 11 in \citet{rubr}). The continuous line in the main panel of Fig. \ref{991216} corresponds to a constant ISM density averaged over the entire afterglow. The structured curve, shown in the bottom left panel, corresponds to ISM density inhomogeneities, assumed for simplicity to be spherically symmetric \citep{r02}. Clearly, a more precise description of the BATSE light curve (e.g. the two sharp spikes at $\sim 30$ s) would require a more refined 3-dimensional description of the ISM filamentary structure \citep{fil}.
The same spherically symmetric approximation for the ISM inhomogeneities is adopted in the following for GRB 050315; it is sufficient to clearly outline the general behavior of the luminosity vs. photon detector arrival time in selected energy bands.
\section{The fit of the observations}
\begin{figure}
\includegraphics[width=0.9\hsize,clip]{picco_15-350}
\caption{Theoretical fit (red line), computed using our model, of the BAT observations (blue points) of GRB 050315 in the $15$--$350$ keV energy band \citep{va05}.}
\label{15-350}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\hsize,clip]{global}
\caption{Theoretical fit (blue line), computed using our model, of the XRT observations (black points) of GRB 050315 in the $0.2$--$10$ keV energy band \citep{va05}. The theoretical fit of the BAT observations (see Fig. \ref{15-350}) in the $15$--$350$ keV energy band is also represented (red line).}
\label{global}
\end{figure}
The best fit of the observational data leads to a total energy of the dyadosphere $E_{dya} = 1.47\times 10^{53}$ erg \citep[the observational Swift $E_{iso}$ is $> 2.62\times 10^{52}$ erg; see][]{va05}, so that the plasma is created between the radii $r_1 = 5.88\times 10^6$ cm and $r_2 = 1.74 \times 10^8$ cm, with an initial temperature $T = 2.05$ MeV and a total number of pairs $N_{e^+e^-} = 7.93\times 10^{57}$. The amount of baryonic matter in the remnant is assumed to be such that $B = 4.55 \times 10^{-3}$. The transparency point and the P-GRB emission then occur with an initial Lorentz gamma factor of the accelerated baryons $\gamma_\circ = 217.81$, at a distance $r = 1.32 \times 10^{14}$ cm from the black hole. The interstellar medium (ISM) parameters that best fit the observational data are $\langle n_{ism} \rangle = 0.121$ particles/cm$^{3}$ and $\langle R \rangle = 2.05 \times 10^{-6}$. The ISM density contrast is found to be $\Delta \rho / \rho \sim 10^2$ on a scale of $5.0 \times 10^{16}$ cm.
In Figs. \ref{15-350} and \ref{global} we present the theoretically computed GRB 050315 light curves obtained with our model, respectively in the $15$--$350$ keV and in the $0.2$--$10$ keV energy bands, together with the corresponding data observed by the BAT and the XRT instruments on board the {\em Swift} satellite \citep{va05}. For completeness, Fig. \ref{global} also shows the theoretically computed $15$--$350$ keV light curve of Fig. \ref{15-350}, but without the BAT observational data, to avoid overcrowding the figure.
The very good agreement between the theoretical curves and the observations is a most stringent proof of our predictions on the GRB structure \citep{rlet2}.
It goes without saying that also in the case of GRB 050315 a more detailed correspondence between the theory and the temporal fine structure of the BAT observational light curve could be achieved with a full 3-dimensional description of the ISM filamentary structure \citep{fil}.
\section{Conclusions}
In view of the above results, which clearly fit the overall luminosity in fixed energy bands, we will return in a forthcoming publication to the identification of the P-GRB, using our theoretically predicted values for both its intensity and its time lag relative to the afterglow peak. We will also address the spectral analysis, a most powerful theoretical prediction to evidence the continuity between the ``prompt radiation'' and the late phases of the afterglow, and so to prove the uniqueness of the overall GRB structure.
\begin{theacknowledgments}
We thank P. Banat, G. Chincarini, A. Moretti and S. Vaughan for their help in the analysis of the observational data.
\end{theacknowledgments}
\section{Introduction}
The journal \textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \LaTeX.
The style file \verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.
This document, \verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.
This is not a general guide on how to use \LaTeX, of which many excellent examples already exist.
We particularly recommend \textit{Wikibooks \LaTeX}\footnote{\url{https://en.wikibooks.org/wiki/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.
Alternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.
For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\footnote{\label{foot:itas}\url{http://www.oxfordjournals.org/our_journals/mnras/for_authors/}}.
Only technical issues with the \LaTeX\ class are considered here.
\section{Obtaining and installing the MNRAS package}
Some \LaTeX\ distributions come with the MNRAS package by default.
If yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \TeX\ Archive Network\footnote{\url{http://www.ctan.org/tex-archive/macros/latex/contrib/mnras}} (CTAN).
The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \LaTeX\ distribution), or used temporarily by placing them in the working directory for your paper.
To use the MNRAS package, simply specify \verb'mnras' as the document class at the start of a \verb'.tex' file:
\begin{verbatim}
\documentclass{mnras}
\end{verbatim}
Then compile \LaTeX\ (and if necessary \bibtex) in the usual way.
\section{Preparing and submitting a paper}
We recommend that you start with a copy of the \texttt{mnras\_template.tex} file.
Rename the file, update the information on the title page, and then work on the text of your paper.
Guidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\ref{foot:itas}}$.
Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents).
If a paper is accepted, it is professionally typeset and copyedited by the publishers.
It is therefore likely that minor changes to presentation will occur.
For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.
Papers must be submitted electronically via the online submission system; paper submissions are not permitted.
For full guidance on how to submit a paper, see the instructions to authors.
\section{Class options}
\label{sec:options}
There are several options which can be added to the document class line like this:
\begin{verbatim}
\documentclass[option1,option2]{mnras}
\end{verbatim}
The available options are:
\begin{itemize}
\item \verb'letters' -- used for papers in the journal's Letters section.
\item \verb'onecolumn' -- single column, instead of the default two columns. This should be used {\it only} if necessary for the display of numerous very long equations.
\item \verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.
\item \verb'referee' -- \textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format.
\item \verb'galley' -- \textit{(deprecated)} no running headers, no attempt to align the bottom of columns.
\item \verb'landscape' -- \textit{(deprecated)} sets the whole document on landscape paper.
\item \verb"usenatbib" -- \textit{(all papers should use this)} this uses Patrick Daly's \verb"natbib.sty" package for citations.
\item \verb"usegraphicx" -- \textit{(most papers will need this)} includes the \verb'graphicx' package, for inclusion of figures and images.
\item \verb'useAMS' -- adds support for upright Greek characters \verb'\upi', \verb'\umu' and \verb'\upartial' ($\upi$, $\umu$ and $\upartial$). Only these three are included, if you require other symbols you will need to include the \verb'amsmath' or \verb'amsymb' packages (see section~\ref{sec:packages}).
\item \verb"usedcolumn" -- includes the package \verb"dcolumn", which includes two new types of column alignment for use in tables.
\end{itemize}
Some of these options are deprecated and retained for backwards compatibility only.
Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.
If you want to include any other packages, see section~\ref{sec:packages}.
\section{Title page}
If you are using \texttt{mnras\_template.tex} the necessary code for generating the title page, headers and footers is already present.
Simply edit the title, author list, institutions, abstract and keywords as described below.
\subsection{Title}
There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').
Enter them with \verb'\title[]{}' like this:
\begin{verbatim}
\title[Running head]{Full title of the paper}
\end{verbatim}
The full title can be multiple lines (use \verb'\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\le~45$ characters on a single line.
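For instance, a long title split over two lines, paired with a short running head, might be entered like this (the wording is purely illustrative):
\begin{verbatim}
\title[Short running head]{A rather long full title\\
that is split over two lines}
\end{verbatim}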
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Authors and institutions}
Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \verb'\author[]{}' command.
If the author list is more than one line long, start a new line using \verb'\newauthor'. Use \verb'\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.
For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:
\begin{verbatim}
\author[K. T. Smith et al.]{
Keith T. Smith,$^{1}$
A. N. Other,$^{2}$
and Third Author$^{2,3}$
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2\\
$^{3}$Affiliation 3}
\end{verbatim}
Affiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.
Email addresses can be inserted with the \verb'\thanks{}' command which adds a title page footnote.
If you want to list more than one email, put them all in the same \verb'\thanks' and use \verb'\footnotemark[]' to refer to the same footnote multiple times.
Present addresses (if different to those where the work was performed) can also be added with a \verb'\thanks' command.
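For example, a single author with an email address and a different present address might be entered as follows (the name, address and affiliation are purely illustrative):
\begin{verbatim}
\author[A. N. Author]{A. N. Author$^{1}$\thanks{E-mail:
a.author@example.org}\thanks{Present address: Other
Institute, Other City, Country.}\\
$^{1}$Affiliation 1}
\end{verbatim}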
\subsection{Abstract and keywords}
The abstract is entered in an \verb'abstract' environment:
\begin{verbatim}
\begin{abstract}
The abstract of the paper.
\end{abstract}
\end{verbatim}
\noindent Note that there is a word limit on the length of abstracts.
For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$.
Immediately following the abstract, a set of keywords is entered in a \verb'keywords' environment:
\begin{verbatim}
\begin{keywords}
keyword 1 -- keyword 2 -- keyword 3
\end{keywords}
\end{verbatim}
\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.
Do \emph{not} make up new keywords!
For the current list of allowed keywords, see the journal's instructions to authors$^{\ref{foot:itas}}$.
\section{Sections and lists}
Sections and lists are generally the same as in the standard \LaTeX\ classes.
\subsection{Sections}
\label{sec:sections}
Sections are entered in the usual way, using \verb'\section{}' and its variants. It is possible to nest up to four section levels:
\begin{verbatim}
\section{Main section}
\subsection{Subsection}
\subsubsection{Subsubsection}
\paragraph{Lowest level section}
\end{verbatim}
\noindent The other \LaTeX\ sectioning commands \verb'\part', \verb'\chapter' and \verb'\subparagraph{}' are deprecated and should not be used.
Some sections are not numbered as part of journal style (e.g. the Acknowledgements).
To insert an unnumbered section use the `starred' version of the command: \verb'\section*{}'.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Lists}
Two forms of lists can be used in MNRAS -- numbered and unnumbered.
For a numbered list, use the \verb'enumerate' environment:
\begin{verbatim}
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
\end{verbatim}
\noindent which produces
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
Note that the list uses lowercase Roman numerals, rather than the \LaTeX\ default Arabic numerals.
For an unnumbered list, use the \verb'description' environment without the optional argument:
\begin{verbatim}
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
\end{verbatim}
\noindent which produces
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
Bulleted lists using the \verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.
\section{Mathematics and symbols}
The MNRAS class mostly adopts standard \LaTeX\ handling of mathematics, which is briefly summarised here.
See also section~\ref{sec:packages} for packages that support more advanced mathematics.
Mathematics can be inserted into the running text using the syntax \verb'$1+1=2$', which produces $1+1=2$.
Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.
\subsection{Equations}
Equations should be entered using the \verb'equation' environment, which automatically numbers them:
\begin{verbatim}
\begin{equation}
a^2=b^2+c^2
\end{equation}
\end{verbatim}
\noindent which produces
\begin{equation}
a^2=b^2+c^2
\end{equation}
By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \verb'\numberwithin{equation}{section}' to the preamble.
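For example, adding this single line to the preamble switches to per-section equation numbering:
\begin{verbatim}
\numberwithin{equation}{section}
\end{verbatim}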
It is also possible to produce un-numbered equations by using the \LaTeX\ built-in \verb'\['\textellipsis\verb'\]' and \verb'$$'\textellipsis\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.
\subsection{Special symbols}
\begin{table}
\caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.}
\label{tab:anysymbols}
\begin{tabular}{lll}
\hline
Command & Output & Meaning\\
\hline
\verb'\sun' & \sun & Sun, solar\\[2pt]
\verb'\earth' & \earth & Earth, terrestrial\\[2pt]
\verb'\micron' & \micron & microns\\[2pt]
\verb'\degr' & \degr & degrees\\[2pt]
\verb'\arcmin' & \arcmin & arcminutes\\[2pt]
\verb'\arcsec' & \arcsec & arcseconds\\[2pt]
\verb'\fdg' & \fdg & fraction of a degree\\[2pt]
\verb'\farcm' & \farcm & fraction of an arcminute\\[2pt]
\verb'\farcs' & \farcs & fraction of an arcsecond\\[2pt]
\verb'\fd' & \fd & fraction of a day\\[2pt]
\verb'\fh' & \fh & fraction of an hour\\[2pt]
\verb'\fm' & \fm & fraction of a minute\\[2pt]
\verb'\fs' & \fs & fraction of a second\\[2pt]
\verb'\fp' & \fp & fraction of a period\\[2pt]
\verb'\diameter' & \diameter & diameter\\[2pt]
\verb'\sq' & \sq & square, Q.E.D.\\[2pt]
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Additional commands for mathematical symbols. These can only be used in maths mode.}
\label{tab:mathssymbols}
\begin{tabular}{lll}
\hline
Command & Output & Meaning\\
\hline
\verb'\upi' & $\upi$ & upright pi\\[2pt]
\verb'\umu' & $\umu$ & upright mu\\[2pt]
\verb'\upartial' & $\upartial$ & upright partial derivative\\[2pt]
\verb'\lid' & $\lid$ & less than or equal to\\[2pt]
\verb'\gid' & $\gid$ & greater than or equal to\\[2pt]
\verb'\la' & $\la$ & less than of order\\[2pt]
\verb'\ga' & $\ga$ & greater than of order\\[2pt]
\verb'\loa' & $\loa$ & less than approximately\\[2pt]
\verb'\goa' & $\goa$ & greater than approximately\\[2pt]
\verb'\cor' & $\cor$ & corresponds to\\[2pt]
\verb'\sol' & $\sol$ & similar to or less than\\[2pt]
\verb'\sog' & $\sog$ & similar to or greater than\\[2pt]
\verb'\lse' & $\lse$ & less than or homotopic to \\[2pt]
\verb'\gse' & $\gse$ & greater than or homotopic to\\[2pt]
\verb'\getsto' & $\getsto$ & from over to\\[2pt]
\verb'\grole' & $\grole$ & greater over less\\[2pt]
\verb'\leogr' & $\leogr$ & less over greater\\
\hline
\end{tabular}
\end{table}
Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.
Many other mathematical symbols are also available, either built into \LaTeX\ or via additional packages. If you want to insert a specific symbol but don't know the \LaTeX\ command, we recommend using the Detexify website\footnote{\url{http://detexify.kirelabs.org}}.
Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production.
To produce bold symbols in mathematics, use \verb'\bmath' for simple variables, and the \verb'bm' package for more complex symbols (see section~\ref{sec:packages}). Vectors are set in bold italic, using \verb'\mathbfit{}'.
For matrices, use \verb'\mathbfss{}' to produce a bold sans-serif font e.g. \mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\nabla$ (del, used in gradients, divergence etc.) use \verb'$\nabla$'.
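As a minimal illustration (the symbols here are arbitrary), a bold sans-serif matrix and bold italic vectors can be combined in an equation like this:
\begin{verbatim}
\begin{equation}
\mathbfss{M} \mathbfit{x} = \lambda \mathbfit{x}
\end{equation}
\end{verbatim}
\noindent which produces
\begin{equation}
\mathbfss{M} \mathbfit{x} = \lambda \mathbfit{x}
\end{equation}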
\subsection{Ions}
A new \verb'\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.
For example, to typeset singly ionised calcium use \verb'\ion{Ca}{ii}', which produces \ion{Ca}{ii}.
\section{Figures and tables}
\label{sec:fig_table}
Figures and tables (collectively called `floats') are mostly the same as built into \LaTeX.
\subsection{Basic examples}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
Figures are inserted in the usual way using a \verb'figure' environment and \verb'\includegraphics'. The example Figure~\ref{fig:example} was generated using the code:
\begin{verbatim}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
\end{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
The example Table~\ref{tab:example} was generated using the code:
\begin{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
\subsection{Captions and placement}
Captions go \emph{above} tables but \emph{below} figures, as in the examples above.
The \LaTeX\ float placement commands \verb'[htbp]' are intentionally disabled.
Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.
Simply place the \LaTeX\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.
By default a figure or table will occupy one column of the page.
To produce a wider version which covers both columns, use the \verb'figure*' or \verb'table*' environment.
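For example, a figure spanning both columns could be entered as follows (reusing the example image from above):
\begin{verbatim}
\begin{figure*}
\includegraphics[width=\textwidth]{example}
\caption{A figure spanning two columns.}
\label{fig:widefig}
\end{figure*}
\end{verbatim}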
If a figure or table is too long to fit on a single page it can be split into several parts.
Create an additional figure or table which uses \verb'\contcaption{}' instead of \verb'\caption{}'.
This will automatically correct the numbering and add `\emph{continued}' at the start of the caption.
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:continued} was generated using the code:
\begin{verbatim}
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
To produce a landscape figure or table, use the \verb'pdflscape' package and the \verb'landscape' environment.
The landscape Table~\ref{tab:landscape} was produced using the code:
\begin{verbatim}
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & ...\\
Unit & Unit & ...\\
\hline
Data & Data & ...\\
Data & Data & ...\\
...\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\end{verbatim}
Unfortunately this method will force a page break before the table appears.
More complicated solutions are possible, but authors shouldn't worry about this.
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\
Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\
\hline
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\section{References and citations}
\subsection{Cross-referencing}
The usual \LaTeX\ commands \verb'\label{}' and \verb'\ref{}' can be used for cross-referencing within the same paper.
We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.
This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).
It is best to give each section, figure and table a logical label.
For example, Table~\ref{tab:mathssymbols} has the label \verb'tab:mathssymbols', whilst section~\ref{sec:packages} has the label \verb'sec:packages'.
Add the label \emph{after} the section or caption command, as in the examples in sections~\ref{sec:sections} and \ref{sec:fig_table}.
Enter the cross-reference with a non-breaking space between the type of object and the number, like this: \verb'see Figure~\ref{fig:example}'.
The \verb'\autoref{}' command can be used to automatically fill out the type of object, saving on typing.
It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.
For example, \verb'\autoref{tab:journal_abbr}' produces \autoref{tab:journal_abbr}.
\subsection{Citations}
\label{sec:cite}
MNRAS uses the Harvard -- author (year) -- citation style, e.g. \citet{author2013}.
This is implemented in \LaTeX\ via the \verb'natbib' package, which in turn is included via the \verb'usenatbib' package option (see section~\ref{sec:options}), which should be used in all papers.
Each entry in the reference list has a `key' (see section~\ref{sec:ref_list}) which is used to generate citations.
There are two basic \verb'natbib' commands:
\begin{description}
\item \verb'\citet{key}' produces an in-text citation: \citet{author2013}
\item \verb'\citep{key}' produces a bracketed (parenthetical) citation: \citep{author2013}
\end{description}
Citations will include clickable links to the relevant entry in the reference list, if supported by your \LaTeX\ compiler.
\defcitealias{smith2014}{Paper~I}
\begin{table*}
\caption{Common citation commands, provided by the \texttt{natbib} package.}
\label{tab:natbib}
\begin{tabular}{lll}
\hline
Command & Output & Note\\
\hline
\verb'\citet{key}' & \citet{smith2014} & \\
\verb'\citep{key}' & \citep{smith2014} & \\
\verb'\citep{key,key2}' & \citep{smith2014,jones2015} & Multiple papers\\
\verb'\citet[table 4]{key}' & \citet[table 4]{smith2014} & \\
\verb'\citep[see][figure 7]{key}' & \citep[see][figure 7]{smith2014} & \\
\verb'\citealt{key}' & \citealt{smith2014} & For use with manual brackets\\
\verb'\citeauthor{key}' & \citeauthor{smith2014} & If already cited in close proximity\\
\verb'\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\
\verb'\citetalias{key}' & \citetalias{smith2014} & \\
\verb'\citepalias{key}' & \citepalias{smith2014} & \\
\hline
\end{tabular}
\end{table*}
There are a number of other \verb'natbib' commands which can be used for more complicated citations.
The most commonly used ones are listed in Table~\ref{tab:natbib}.
For full guidance on their use, consult the \verb'natbib' documentation\footnote{\url{http://www.ctan.org/pkg/natbib}}.
If a reference has several authors, \verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \bibtex\ (see section~\ref{sec:ref_list}) then this is handled automatically. If not, the \verb'\citet*{}' and \verb'\citep*{}' commands can be used at the first citation to include all of the authors.
\subsection{The list of references}
\label{sec:ref_list}
It is possible to enter references manually using the usual \LaTeX\ commands, but we strongly encourage authors to use \bibtex\ instead.
\bibtex\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.
An MNRAS \bibtex\ style file, \verb'mnras.bst', is distributed as part of this package.
The rest of this section will assume you are using \bibtex.
References are entered into a separate \verb'.bib' file in standard \bibtex\ formatting.
This can be done manually, or there are several software packages which make editing the \verb'.bib' file much easier.
We particularly recommend \textsc{JabRef}\footnote{\url{http://jabref.sourceforge.net/}}, which works on all major operating systems.
\bibtex\ entries can be obtained from the NASA Astrophysics Data System\footnote{\label{foot:ads}\url{http://adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.
Simply copy this into your \verb'.bib' file or into the `BibTeX source' tab in \textsc{JabRef}.
Each entry in the \verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.
Simply cite it in the usual way, as described in section~\ref{sec:cite}, using the specified key.
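A minimal \verb'.bib' entry might look like this (the key, author and publication details are purely illustrative):
\begin{verbatim}
@ARTICLE{author2013,
  author  = {{Author}, A. N.},
  title   = {An example article},
  journal = {MNRAS},
  year    = 2013,
  volume  = 400,
  pages   = {100-110}
}
\end{verbatim}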
Compile the paper as usual, but add an extra step to run the \texttt{bibtex} command.
Consult the documentation for your compiler or latex distribution.
Correct formatting of the reference list will be handled by \bibtex\ in almost all cases, provided that the correct information was entered into the \verb'.bib' file.
Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.
If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references.
\section{Appendices and online material}
To start an appendix, simply place the \verb'\appendix' command before the next \verb'\section{}'; subsequent sections are then treated as appendices.
\section{Introduction}
Radio pulsars are rotating neutron stars where a small fraction of the spin-down energy powers beamed radio emission that can cross our line-of-sight on every rotation resulting in an observable pulsed signal \citep{pac67,gol68}.
Since their discovery by \citet{hew68}, pulsars have provided a great wealth of scientific discoveries largely thanks to their use as uniquely precise astronomical clocks \citep{man17}. This has motivated ongoing pulsar surveys at a wide range of radio frequencies, which to date have found close to 3000 sources \citep[ATNF catalog;][]{man05}\footnote{\url{http://www.atnf.csiro.au/people/pulsar/psrcat}}.
In order to take advantage of these pulsar discoveries, it is necessary to construct a pulsar timing model by measuring the pulses' times-of-arrival (TOAs).
This model describes the rotational and astrometric properties of the pulsar and the propagation of the pulses through the interstellar medium \citep{edw06}.
The pulsar spin and spin-down rate are indicative of the pulsar evolutionary history.
Assuming a simplified model (i.e. dipole braking with constant magnetic field), pulsar parameters can be estimated from these, such as the characteristic age and magnetic field \citep{gol69}.
For this reason, a scatter-plot of pulsar periods $P$ and period derivatives $\dot{P}$ (the so-called $P-\dot{P}$ diagram) provides valuable information on the properties of the pulsar population as a whole.
TOAs for a given pulsar can be obtained directly from its single pulses.
However, the low signal-to-noise ratio (S/N) and erratic shapes of individual pulses result in large TOA uncertainties.
Therefore, hundreds of rotational periods are usually added together to increase the S/N and form a stable average pulse profile.
Pulse profiles are correlated with noiseless templates in order to produce TOAs.
A detailed description of this `timing' procedure is provided by \citet{edw06}.
All pulsars manifest some level of pulse-to-pulse variation in flux and pulse shape. In some cases the emission switches between bi-stable states, and these sources are classified as mode changing pulsars \citep{wan07}. In other cases, the single pulses form patterns called drifting sub-pulses \citep{tay75}.
Pulsars that are undetected for one or multiple rotational periods are classified as nullers \citep[][here sufficient sensitivity is needed to distinguish between nulling and weak pulses]{bac70}.
The nulling fraction can vary from a small fraction of the rotations to nearly $100$\% \citep[e.g.][]{wan07}.
In the latter case, sources are often discovered through their single pulses and termed Rotating RAdio Transients \citep[RRATs,][]{mcl06}.
These are pulsars whose emission is detected over single rotations separated by large periods of apparent inactivity ($\gtrsim 1$\,minute and up to hours).
Radio waves propagating through the cold plasma present in the interstellar medium (ISM) undergo various propagation effects \citep[e.g.][]{ric90}, which are usually more evident for lower-frequency waves ($\lesssim 300$\,MHz).
These effects are highly relevant both in searching for new pulsars and in measuring precise TOAs.
Dispersion is the frequency-dependence of the wave group velocity. It is quantified by the dispersion measure (DM) and scales as $\nu^{-2}$, where $\nu$ is the observing frequency.
Diffractive scintillation is the phase perturbation of the waves induced by smaller-scale inhomogeneities in the ISM.
It creates intensity modulations of the signal both in time and frequency.
Diffractive scintillation is typically averaged out by wide-band observations ($> 10$\,MHz) at low-frequencies.
Scattering is the angular broadening of the radio signal due to larger-scale inhomogeneities in the ISM.
It typically manifests in the time domain as an exponential scattering tail in the pulse profiles, which scales roughly as $\nu^{-4}$. Scattering can strongly limit the detectability of pulsars at low frequencies because it can wash out the pulsed signal.
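For reference, the dispersion law quoted above can be turned into a delay directly; a minimal Python sketch (the dispersion constant is the conventional cold-plasma value, not a number taken from this paper):

```python
# Dispersive delay between two observing frequencies for a cold plasma,
# using the conventional dispersion constant.
K_DM = 4.1488e3  # s MHz^2 pc^-1 cm^3

def dispersion_delay(dm, f_lo_mhz, f_hi_mhz):
    """Extra delay (s) of the lower frequency relative to the higher one;
    the nu^-2 scaling is the dispersion law described in the text."""
    return K_DM * dm * (f_lo_mhz ** -2 - f_hi_mhz ** -2)

# Example: DM = 65 pc cm^-3 across the LOTAAS band (119-151 MHz).
delay = dispersion_delay(65.0, 119.0, 151.0)  # ~7.2 s across the band
```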
The majority of current pulsar surveys are being carried out at frequencies above 300\,MHz, where the sky background brightness is lower and the aforementioned radio propagation effects are less severe \citep{lor04,sto13}.
However, low-frequency observations ($\lesssim 300$\,MHz) present some practical advantages as well and they can probe the pulsar population in a way that complements the view from higher frequencies.
Firstly, the telescope's field-of-view is typically larger at lower frequencies, allowing a larger survey speed for all-sky surveys \citep{sta11}.
Secondly, most pulsars are brighter at lower frequencies \citep{bil16}.
The flux density $S$ of radio pulsars is usually described by a power law in the observing frequency $\nu$, whose exponent $\alpha$ is called the spectral index ($S\propto\nu^\alpha$).
If a pulsar has a spectrum steeper than the sky background \citep[$\alpha \sim -2.55$,][]{moz17}, it can potentially be detected more easily at lower frequencies -- as long as scattering is modest.
The average spectral index of pulsars is $\alpha = -1.4$, with a standard deviation of $1$ \citep{bat13}.
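The power-law spectrum translates into a simple frequency scaling; a minimal sketch with a hypothetical source (the flux value is illustrative, only the average index $\alpha = -1.4$ comes from the text):

```python
def scale_flux(s_ref, nu_ref_mhz, nu_mhz, alpha):
    """Scale a flux density measured at nu_ref_mhz to nu_mhz, assuming a
    single power law S ~ nu^alpha (the spectral-index model in the text)."""
    return s_ref * (nu_mhz / nu_ref_mhz) ** alpha

# Hypothetical example: a 1-mJy source at 1400 MHz with the average
# spectral index alpha = -1.4 is over 20 times brighter at 150 MHz.
s_150 = scale_flux(1.0, 1400.0, 150.0, -1.4)
```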
Here we report the timing models and other properties of 20 radio pulsars discovered using the Low Frequency ARray \citep[LOFAR,][]{haa13,sta11}, a sensitive radio interferometer that operates at low radio frequencies.
We are using this telescope in the frequency range $119-151$\,MHz to perform the LOFAR Tied-Array All-Sky Survey \citep[LOTAAS,][]{coe14}\footnote{\url{http://www.astron.nl/lotaas}} for pulsars and fast transients in the Northern sky.
A detailed description of the survey is presented in \citet{san18}.
Among the LOTAAS discoveries presented in this paper,
PSR~J0815$+$4611 was first presented by \citet{jel15}, who identified a steep-spectrum, unresolved and polarized point source in continuum images of the 3C\,196 field observed by the LOFAR Epoch of Reionization project (Candi2, Ger de Bruyn, priv. comm.; \citealp{yat13}).
Pulsations were then discovered using a targeted LOFAR beam-formed observation (DDT2\_004, PI: Hessels) and subsequent search over a range of trial DMs (V. Kondratiev, priv. comm.; \citealp{jel15}).
PSR~J1404$+$1159 was discovered by \citet{cha03} and blindly re-detected by LOTAAS.
It did not have a timing model at the time of the LOTAAS re-discovery and we detected it at a very different DM than the value given by \citet{cha03}, so there was initially some ambiguity about whether it was indeed the same source \citep{san18}.
\citet{bri18} recently presented a timing model for the source compatible with the one we obtain.
PSR J0302$+$2252 was first reported by \citet{tyu16}; PSRs J0122$+$1416, J1635$+$2332 and J2051$+$1248 by \citet{tyu17}; and PSRs J0139$+$3336, J1404$+$1159 and J1848$+$1516 by \citet{tyu18}.
These sources were blindly detected by LOTAAS around the same time, and we present their timing models here for the first time.
Pulsars discovered by LOTAAS are regularly monitored using multiple telescopes; these subsequent timing observations are described in \textsection\ref{sec:observations}.
The timing models obtained for these pulsars are presented in \textsection\ref{sec:timing} and the characteristics of the pulse profiles are described in \textsection\ref{sec:profiles}.
Flux densities and spectral indices are analyzed in \textsection\ref{sec:fluxes}.
Individual sources presenting interesting variations within single observations are further described in \textsection\ref{sec:variations}.
Finally, conclusions are drawn in \textsection\ref{sec:conclusions}.
\section{Observations}\label{sec:observations}
\begin{table*}
\centering
\caption{Number of individual detections of each pulsar in each frequency band.
Non-detections are indicated with a dash.
Pulsars not observed at a certain frequency are highlighted with an `X'.
The last two columns indicate the total time span covered by the observations and the pulsar names reported by \citet{san18}, before a timing model was available.
}
\label{tab:observations}
\begin{tabular}{lcccccc}
\toprule
PSR & LOFAR & Lovell & NRT & Lovell & Span & Name in \\
& 149\,MHz & 334\,MHz & 1484\,MHz & 1532\,MHz & months & \citet{san18} \\
\midrule
J0115$+$6325 & 17& X & -- & 41 & 19 & J0115$+$63\\
J0122$+$1416 & 14& -- & -- & -- & 17 & J0121$+$14\\
J0139$+$3336 & 9& -- & X & 31 & 15 & J0139$+$33\\
J0302$+$2252 & 23& 1 & X & 20 & 24 & J0302$+$22\\
J0349$+$2340 & 20& -- & X & -- & 22 & J0349$+$23\\
J0518$+$5125 & 18& -- & X & -- & 21 & J0518$+$51\\
J0742$+$4334 & 17& -- & X & -- & 24 & J0742$+$43\\
J0815$+$4611 & 31& 1 & X & 20 & 33 & J0815$+$4611\\
J0857$+$3349 & 9& 1 & X & -- & 17 & J0857$+$33\\
J1226$+$0005 & 12& 1 & X & -- & 24 & J1226$+$00\\
J1236$-$0159 & 13& 1 & X & 50 & 25 & J1235$-$02\\
J1343$+$6634 & 24& -- & -- & -- & 23 & J1344$+$66\\
J1404$+$1159 & 14& 1 & 11 & 38 & 19 & J1404$+$11\\
J1635$+$2332 & 15& 1 & X & 33 & 16 & J1635$+$23\\
J1735$+$6320 & 25& 1 & X & 50 & 29 & J1735$+$63\\
J1848$+$1516 & 21& 1 & X & 52 & 31 & J1848$+$15\\
J1849$+$2559 & 21& 1 & X & -- & 24 & J1849$+$25\\
J1933$+$5335 & 11& X & X & X & 15 & J1933$+$53\\
J2051$+$1248 & 24& -- & X & 40 & 25 & J2051$+$12\\
J2329$+$4743 & 15& 1 & X & 31 & 15 & J2329$+$47\\
\bottomrule
\end{tabular}
\end{table*}
As discussed in greater detail by \citet{san18}, the LOTAAS survey is performed using the LOFAR `Superterp', a part of the telescope where 6 stations of antennas are closely spaced.
After a promising candidate is found, its rough sky position is re-observed using the full LOFAR core (up to 24 stations) for confirmation and refined localization.
Thanks to the longer baselines of the full core, the localization improves to roughly arcminute precision.
The increased sensitivity of the full core compared to the Superterp also means that significantly shorter integrations, typically 15\,minutes, can be used to achieve a S/N sufficient to obtain an adequately precise TOA.
The resulting discoveries are added to the LOFAR timing campaign, where selected pulsars are observed monthly using the full LOFAR core.
All the pulsars presented here have been observed for a span of at least one year; the total set of observations used in this study is reported in Table~\ref{tab:observations}.
Typically, each pulsar is observed for 10 minutes in the timing campaign.
However, due to their weak or sporadic signals, PSRs~J0139$+$3336, J0518$+$5125, J1848$+$1516 and J1236$-$0159 were observed for $15$--$20$ minutes per epoch.
During the timing campaign, pulsars are coherently de-dispersed at the best DM value resulting from the confirmation observation in order to correct for the intra-channel smearing \citep{han75}.
The LOFAR PULsar Pipeline (\textsc{pulp}), an automatic pipeline described by \citet{sta11} and \citet{kon16}, processes the data from the telescope using \textsc{psrchive} \citep{hot04,str12}\footnote{\url{http://psrchive.sourceforge.net}} and \textsc{dspsr} \citep{str11}\footnote{\url{http://dspsr.sourceforge.net}} to produce an \textit{archive} file.
The archives are data-cubes containing the signal folded at the approximate pulsar spin period determined from the initial confirmation observation as a function of phase, polarization, frequency and time.
Full Stokes information is recorded; the $78$\,MHz of available bandwidth is divided into $400$ channels, the pulse phase into $1024$ bins, and the time resolution is usually $5$ seconds.
Only for PSRs~J0139$+$3336 and J1848$+$1516 did we store single-pulse-resolved, total intensity archives in order to study their variability over short timescales.
All the pulsars except for PSR J1933$+$5335 were observed with the 76-m Lovell Telescope at an observing frequency of $1532$\,MHz with a bandwidth of $384$\,MHz \citep{bas16}.
Each pulsar was first observed $4$--$5$ times within a span of $10$ days, with single observations lasting between $40$ and $60$ minutes.
If a pulsar was detected, a regular timing campaign began with an average observing cadence
of two weeks and observing lengths between $6$ and $60$ minutes depending on the detected S/N.
The data were processed with the digital filter-bank back-end (dfb), which incoherently de-disperses and produces a data-cube containing $1024$ frequency channels, $10$ second-long sub-integrations and $1024$ phase bins.
For the intermittent source PSR J0139$+$3336, the Apollo back-end was used to store single-pulse resolved data.
These data have a time resolution of $256$\,$\mu$s and $672$ frequency channels of $0.5$\,MHz each.
A sample of 18 pulsars was also observed on one occasion with the Lovell Telescope, using a bandwidth of $64$\,MHz centered at $334$\,MHz.
Each pulsar was observed for a duration of $30$ minutes.
The data were also processed using the dfb with $512$ channels, $10$ second-long sub-integrations and $512$ phase bins.
Four of the pulsars reported have also been observed with the Nan\c{c}ay radio telescope (NRT) using the NUPPI back-end with a bandwidth of $512$\,MHz centered around $1484$\,MHz.
The data, which are coherently de-dispersed, have a frequency resolution of $4$\,MHz, sub-integrations with a duration of $15$ seconds and $2048$ phase bins.
Typical observation durations were between $\sim 10$ and $40$ minutes.
While PSRs J0115$+$6325 and J0122$+$1416 were each observed twice, PSR J1343$+$6634 was observed 9 times without the source being detected.
\section{Timing models}\label{sec:timing}
LOFAR observations supplemented by Lovell TOAs, when available, have been used to construct the timing models presented here.
For each pulsar, initial timing parameters were obtained by using \textsc{presto} \citep{ran01}\footnote{\url{https://www.cv.nrao.edu/~sransom/presto}} to maximize the S/N of the pulse profile in the confirmation observation.
This resulted in approximate values for the period and DM of the sources.
The period derivative was initially set to zero.
The position determined by maximizing the pulse profile S/N of the beam grid of the confirmation observation \citep{san18} was used as a starting point in the timing model.
LOFAR TOAs were obtained by using standard pulsar timing methods.
The \textsc{paz} utility from the \textsc{psrchive} package and \textsc{clean.py} from \textsc{coast guard} \citep{laz16}\footnote{\url{https://github.com/plazar/coast\_guard}} were used to automatically remove radio frequency interference (RFI) present in the observations.
After a visual inspection of the data to remove additional RFI or corrupted observations, sub-integrations and channels were summed to obtain a single pulse profile for each observation.
Since PSRs~J0139$+$3336 and J1848$+$1516 show only very sporadic radio pulses, specific periods where the sources were active have been manually extracted for these pulsars.
An analytic pulse profile template was generated for each pulsar by fitting the profile having the highest S/N with von Mises functions using \textsc{paas} from \textsc{psrchive}.
For each pulsar, \textsc{pat} has been used to cross-correlate the observed profiles with this analytic template in order to obtain one TOA per observation.
LOFAR TOAs are referenced to the topocentric position of the center of the LBA CS002 station.
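The cross-correlation step that produces a TOA can be illustrated with a toy example. Real codes such as \textsc{pat} refine the shift with a Fourier-domain fit; this sketch, using a hypothetical von Mises-shaped profile, only recovers an integer-bin offset:

```python
import numpy as np

def phase_shift(profile, template):
    """Toy estimate of the phase offset (in turns) between an observed
    profile and a noise-free template via circular cross-correlation.
    Production TOA software refines this to sub-bin precision; this is
    only an illustration of the principle."""
    xcorr = np.fft.irfft(np.fft.rfft(profile) * np.conj(np.fft.rfft(template)),
                         n=len(profile))
    return np.argmax(xcorr) / len(profile)

# Hypothetical example: a von Mises-shaped pulse shifted by 100 of 1024 bins.
n = 1024
phase = np.arange(n) / n
template = np.exp(50.0 * (np.cos(2 * np.pi * (phase - 0.5)) - 1.0))  # peak = 1
profile = np.roll(template, 100)
shift = phase_shift(profile, template)  # ~100/1024 of a rotation
```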
For the 11 pulsars detected by the Lovell telescope at $1532$\,MHz, a single pulse profile and a TOA were generated from each observation following \citet{hob04}.
The TOAs obtained for each pulsar were fitted with \textsc{tempo2} \citep{hob06}\footnote{\url{https://bitbucket.org/psrsoft/tempo2}} using the initial timing model.
Most of the pulsars which were detected with the Lovell telescope at $1532$\,MHz were observed at high cadence, allowing us to resolve any phase ambiguities and get an initial coherent timing model.
However, \textsc{tempo2} can only fit TOAs whose relative integer rotation counts are known \citep{fre18}; for pulsars detectable only with LOFAR, the cadence of our observations meant that in some cases these phase ambiguities could not be resolved.
We therefore used a brute-force algorithm in these cases.
This algorithm fitted a set of initial spin periods around the value from the discovery observation, producing a series of plots with the relative residuals.
In this way, assuming that the other parameters have a much smaller impact on the fit, it is possible to identify a more precise spin period.
Using this simple yet effective technique, a timing model could be obtained for all the pulsars in the sample.
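The brute-force search can be sketched as follows. This is a simplified, self-contained version with synthetic TOAs: a circular-spread statistic stands in for the visual inspection of the residual plots described above, and all numbers are illustrative:

```python
import numpy as np

def best_trial_period(toas, p0, half_width=1e-6, n_trials=2001):
    """Brute-force refinement of a spin period: fold the TOAs (s) at a grid
    of trial periods around p0 and keep the one minimizing the scatter of
    the fractional pulse phases (a stand-in for inspecting residual plots
    by eye, as described in the text)."""
    trials = np.linspace(p0 - half_width, p0 + half_width, n_trials)
    best_p, best_spread = p0, np.inf
    for p in trials:
        phases = (toas / p) % 1.0
        # circular spread: 1 - resultant vector length of the phases
        spread = 1.0 - np.abs(np.mean(np.exp(2j * np.pi * phases)))
        if spread < best_spread:
            best_p, best_spread = p, spread
    return best_p

# Toy example: sparse observing epochs over 30 days, TOAs generated at
# exact integer rotations of a known period, recovered from a biased guess.
true_p = 1.2479609  # s, representative of the slow pulsars in the sample
t_days = np.array([0.0, 1.0, 3.0, 7.0, 15.0, 30.0])
rotations = np.round(t_days * 86400.0 / true_p)
toas = rotations * true_p
p_hat = best_trial_period(toas, true_p + 3e-7)
```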
\begin{table*}
\centering
\caption{Parameters of the timing models for the 20 pulsars presented. The different columns report the source name, Right Ascension and Declination referenced to the J2000 frame, the reference epoch of the parameters, the measured values of spin period, its time derivative and DM.
Values in parentheses are 1-$\sigma$ uncertainties on the last digit.
}
\label{tab:ephemeris}
\begin{tabular}{lllllll}
\toprule
PSR & RA & DEC & Epoch & $P$ & $\dot{P}$ & DM \\
 & (J2000) & (J2000) & (MJD) & (s) & ($10^{-15}$) & (pc\,cm$^{-3}$) \\
\midrule
J0115$+$6325 & 01:15:45.87(1) & $+$63:25:50.8(1) & 57876 & 0.521455427473(5) & 1.599(3) & 65.069(4) \\
J0122$+$1416 & 01:22:01.31(3) & $+$14:16:17(1) & 57876 & 1.38899395038(2) & 3.803(1) & 17.693(3) \\
J0139$+$3336 & 01:39:57.23(4) & $+$33:36:59.7(9) & 57901 & 1.2479609557(1) & 2.064(8) & 21.23(1) \\
J0302$+$2252 & 03:02:31.990(4) & $+$22:52:12.1(2) & 57811 & 1.207164839778(2) & 0.0825(1) & 18.9922(6) \\
J0349$+$2340 & 03:49:57.38(4) & $+$23:40:53(2) & 57813 & 2.42077097760(4) & 1.0995(7) & 62.962(5) \\
J0518$+$5125 & 05:18:26.145(8) & $+$51:25:58.7(2) & 57829 & 0.912511685262(8) & 0.191(1) & 39.244(2) \\
J0742$+$4334 & 07:42:42.13(1) & $+$43:34:02.2(3) & 57998 & 0.606190680037(6) & 0.371(2) & 36.255(4) \\
J0815$+$4611 & 08:15:59.4623(8) & $+$46:11:53.24(2) & 57662 & 0.4342422517307(2) & 0.0039(1) & 11.2738(3) \\
J0857$+$3349 & 08:57:07.07(2) & $+$33:49:17.0(6) & 57876 & 0.242961077060(4) & 0.24(1) & 23.998(3) \\
J1226$+$0005 & 12:26:14.4(2) & $+$00:05:44(7) & 57785 & 2.28507617026(8) & 2.474(2) & 18.50(1) \\
J1236$-$0159 & 12:36:02.5(6) & $-$01:59:10(20) & 57803 & 3.5975735967(2) & 5.103(2) & 19.08(3) \\
J1343$+$6634 & 13:43:59.26(2) & $+$66:34:25.05(9) & 57785 & 1.39410378554(1) & 1.0882(7) & 30.031(3) \\
J1404$+$1159 & 14:04:36.987(8) & $+$11:59:16.0(2) & 57820 & 2.65043929892(4) & 1.3768(6) & 18.499(4) \\
J1635$+$2332 & 16:35:05.36(1) & $+$23:32:23.2(3) & 57935 & 1.20869424580(2) & 0.863(5) & 37.568(6) \\
J1735$+$6320 & 17:35:06.562(4) & $+$63:20:00.06(3) & 57760 & 0.510718135408(1) & 0.3226(4) & 41.853(1) \\
J1848$+$1516 & 18:48:56.13(2) & $+$15:16:44.1(4) & 57655 & 2.23376977466(5) & 1.6813(8) & 77.436(9) \\
J1849$+$2559 & 18:49:47.555(1) & $+$25:59:57.66(2) & 57785 & 0.5192634055906(6) & 0.1798(3) & 75.0016(4) \\
J1933$+$5335 & 19:33:01.1(1) & $+$53:35:43(1) & 57546 & 2.052574490(4) & 1.26(2) & 33.54(3) \\
J2051$+$1248 & 20:51:29.66(2) & $+$12:48:21.5(6) & 57811 & 0.55316745256(2) & 0.019(6) & 43.45(1) \\
J2329$+$4743 & 23:29:31.548(7) & $+$47:43:39.73(6) & 57950 & 0.728408609085(4) & 0.016(2) & 44.012(2) \\
\bottomrule
\end{tabular}
\end{table*}
In order to obtain more precise values of the pulsar DMs, LOFAR observations were split into two frequency sub-bands and a TOA was calculated for each one.
The same templates were used for the two sub-bands; the good precision of the obtained timing models (see below) justifies this choice.
In the analysis, we did not include possible DM or profile variations over time that are sometimes detected at low frequencies \citep[e.g.][]{don19,mic18}.
The low frequency of the LOFAR observations and the availability of the $1532$-MHz Lovell observations for most of the pulsars yielded DM uncertainties of $\lesssim 0.01$\,pc\,cm$^{-3}$ (see Table~\ref{tab:ephemeris}).
In all cases, the TOAs from the LOFAR and Lovell telescopes were well described by the same timing model after fitting an arbitrary phase jump between the two instruments, which accounts for possible offsets due to different cable lengths and differences in the reference pulse phase of the template profiles.
Both the observatory clocks are referenced to the GPS time system.
The timing models, obtained by using the solar system ephemeris model DE405 \citep{sta98} and performing an unweighted fit, are reported in Table~\ref{tab:ephemeris}.
Some of the pulsars presented here are among those with the longest periods ever measured \citep{man05}.
It is interesting to note that PSR J0250$+$5854, the slowest radio pulsar ever found, was also discovered by LOTAAS (\citealp{tan18}; see also \citealp{san18} for a discussion of LOTAAS sensitivity to slow pulsars).
The pulsars in the sample presented here also have, on average, low values of $\dot{P}$.
\begin{figure}
\centering
\includegraphics{ppdot}
\caption{
$P-\dot{P}$ diagram of the new pulsar discoveries represented with black dots.
Grey dots represent non-recycled, `slow' radio pulsars \citep[ATNF catalog;][]{man05}.
Dotted lines represent the indicated characteristic ages and magnetic fields, while the dashed line represents the `classical' death line below which dipolar rotators are not expected to emit radio signals \citep{rud75}.
}
\label{fig:ppdot}
\end{figure}
In Fig.~\ref{fig:ppdot}, the new pulsar discoveries are plotted on the $P-\dot{P}$ diagram together with known normal, non-recycled radio pulsars \citep[from the ATNF catalog;][]{man05}.
The relatively high $P$ and low $\dot{P}$ of the sample imply that the new pulsars are on average closer to the death line than the majority of the pulsar population.
It is unclear whether this is due to survey observational selection effects or if it is a real effect, e.g. due to older pulsars having on average steeper radio spectra.
This will be further investigated in a future study using the full sample of LOTAAS discoveries.
\begin{table*}
\centering
\begin{threeparttable}
\caption{Quantities derived from the timing parameters presented in Table~\ref{tab:ephemeris} assuming dipole braking with constant magnetic fields, short initial periods and a moment of inertia $I=10^{45}$\,g\,cm$^2$.
$gl$ and $gb$ are the Galactic coordinates, $t_c$ is the characteristic age, $B$ is the surface magnetic field, $\dot{E}$ is the spin-down energy loss rate and $d$ is the distance of the pulsars.
}
\label{tab:derived}
\begin{tabular}{llllllll}
\toprule
PSR & $gl$ & $gb$ & $\log t_c$ & $\log B$ & $\log\dot{E}$ & d$^\text{a}$ & d$^\text{b}$\\
& (deg) & (deg) & (yr) & (G) & (erg\,s$^{-1}$) & (kpc) & (kpc)\\
\midrule
J0115$+$6325 & 125.65 & 0.69 & 6.7 & 12.0 & 32.6 & 2.2 & 1.9 \\
J0122$+$1416 & 134.03 & -47.94 & 6.8 & 12.4 & 31.7 & 0.8 & 1.6 \\
J0139$+$3336 & 134.38 & -28.17 & 7.0 & 12.2 & 31.6 & 1.0 & 1.5 \\
J0302$+$2252 & 158.44 & -30.82 & 8.4 & 11.5 & 30.3 & 0.7 & 1.0 \\
J0349$+$2340 & 167.43 & -23.38 & 7.5 & 12.2 & 30.5 & 3.3 & 3.7 \\
J0518$+$5125 & 158.26 & 7.90 & 7.9 & 11.6 & 31.0 & 1.4 & 1.3 \\
J0742$+$4334 & 175.54 & 27.21 & 7.4 & 11.7 & 31.8 & 1.3 & 1.6 \\
J0815$+$4611 & 173.63 & 33.45 & 9.3 & 10.6 & 30.3 & 0.4 & 0.4 \\
J0857$+$3349 & 190.14 & 39.74 & 7.2 & 11.4 & 32.8 & 0.9 & 1.6 \\
J1226$+$0005 & 289.28 & 62.30 & 7.2 & 12.4 & 30.9 & 0.9 & 1.9 \\
J1236$-$0159 & 295.06 & 60.65 & 7.0 & 12.6 & 30.6 & 0.9 & 2.0 \\
J1343$+$6634 & 114.90 & 49.73 & 7.3 & 12.1 & 31.2 & 1.8 & < 13.8 \\
J1404$+$1159 & 355.08 & 67.11 & 7.5 & 12.3 & 30.5 & 1.4 & 2.2 \\
J1635$+$2332 & 42.00 & 39.75 & 7.3 & 12.0 & 31.3 & 4.8 & < 15.7 \\
J1735$+$6320 & 92.72 & 32.55 & 7.4 & 11.6 & 32.0 & 3.2 & < 19.1 \\
J1848$+$1516 & 46.33 & 7.44 & 7.3 & 12.3 & 30.8 & 3.3 & 3.5 \\
J1849$+$2559 & 56.24 & 11.87 & 7.7 & 11.5 & 31.7 & 3.9 & 6.1 \\
J1933$+$5335 & 85.59 & 15.76 & 7.4 & 12.2 & 30.8 & 2.2 & 2.5 \\
J2051$+$1248 & 59.36 & -19.45 & 8.7 & 11.0 & 30.7 & 2.5 & 4.1 \\
J2329$+$4743 & 108.96 & -12.91 & 8.9 & 11.0 & 30.2 & 2.2 & 2.4 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[a] Value based on the NE2001 electron density model \citep{cor02}.
\item[b] Value based on the YMW16 electron density model \citep{yao17}.
\end{tablenotes}
\end{threeparttable}
\end{table*}
Physical quantities were derived from the timing model parameters by using standard assumptions.
The characteristic age, dipole magnetic field strength and spin-down energy of the pulsars \citep{lor04} are reported in Table~\ref{tab:derived}.
Also reported in the table are the pulsar distances derived from their DMs using both the NE2001 \citep{cor02} and the YMW16 \citep{yao17} models for the free electron density distribution in the Milky Way.
The latter model implies a maximum expected Galactic contribution lower than the value measured for three pulsars (PSRs~J1343$+$6634, J1635$+$2332 and J1735$+$6320), indicating that improvements to the model may be needed (see the discussion in \citealp{san18}, which includes a larger pulsar sample).
Given the relatively long rotation period and short observing timespan of the pulsars presented here, it was not possible to obtain reliable proper motion values since they did not affect the residuals significantly.
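The derived quantities in Table~\ref{tab:derived} follow from the standard dipole-braking relations $t_c = P/(2\dot{P})$, $B \simeq 3.2\times10^{19}\sqrt{P\dot{P}}$\,G and $\dot{E} = 4\pi^2 I \dot{P} P^{-3}$; a minimal sketch:

```python
import math

I_NS = 1e45           # moment of inertia (g cm^2), as assumed in the text
SEC_PER_YR = 3.156e7  # seconds per year

def derived_quantities(p, pdot):
    """Characteristic age (yr), surface dipole field (G) and spin-down
    energy loss rate (erg/s) from P (s) and its derivative, using the
    standard dipole-braking relations (Lorimer & Kramer 2004)."""
    t_c = p / (2.0 * pdot) / SEC_PER_YR
    b = 3.2e19 * math.sqrt(p * pdot)
    edot = 4.0 * math.pi ** 2 * I_NS * pdot / p ** 3
    return t_c, b, edot

# Example: PSR J0115+6325, with P and Pdot from Table 2.
t_c, b, edot = derived_quantities(0.521455427473, 1.599e-15)
# log10 values: ~6.7 (yr), ~12.0 (G), ~32.6 (erg/s), matching Table 3.
```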
\begin{figure*}
\centering
\includegraphics{residuals_0}
\caption{
Timing residuals of the models presented in Table~\ref{tab:ephemeris}.
Different symbols represent different observing frequencies, with `+' for LOFAR lower band ($\sim 130$\,MHz), `x' for LOFAR higher band ($\sim 170$\,MHz) and dots for Lovell at $1532$\,MHz.
}
\label{fig:residuals}
\end{figure*}
\begin{figure*}
\ContinuedFloat
\centering
\includegraphics{residuals_1}
\caption{
\textit{Continued from previous page.}
}
\end{figure*}
The timing residuals, i.e.\ the difference between observed and model-predicted TOAs, are shown in Fig.~\ref{fig:residuals}.
The long period and sometimes irregular emission of some of the sources (see discussion in \textsection\ref{sec:variations}) imply that the integrated pulse profile for some of the observations might be formed by too few single pulses to stabilize.
This would contribute to the scatter in the TOAs.
However, the timing precision achieved is on average higher than what is often obtained on similar timespans, and a few pulsars are particularly good timers compared to typical slow pulsars (e.g.\ \citealp{hob04}; the residuals of PSRs~J0302$+$2252, J0815$+$4611 and J1849$+$2559 have a root mean square lower than $200$\,$\mu$s).
This relatively high precision could be due to the narrowness of the peaks in the pulse profiles (\textsection\ref{sec:profiles}) and the low impact of timing noise for these pulsars \citep[e.g.][]{hob04}.
\section{Pulse profiles}\label{sec:profiles}
\begin{figure*}
\centering
\includegraphics{profiles}
\caption{Cumulative pulse profiles of the pulsars presented here.
Pulse peaks are all normalized to the same height and full rotational phase windows are shown.
The profiles at different frequencies have been aligned by applying the timing models presented in Table~\ref{tab:ephemeris} and then rotated so that the main peak of the $149$-MHz profile is at the center.
}
\label{fig:profiles}
\end{figure*}
We obtained a refined pulse profile from each observation by applying the timing models presented in Table~\ref{tab:ephemeris}.
For each pulsar and observing frequency, the profiles from all observations have been added together to form a global profile; these are presented in Fig.~\ref{fig:profiles}.
The profiles are stored in the EPN Database of Pulsar Profiles\footnote{\url{http://www.epta.eu.org/epndb/}} where they can be accessed on-line.
Since the flux of PSR~J0139$+$3336 is highly variable, rotations where the pulsar was active have been manually selected before forming the integrated profile.
Most pulse profiles are single-peaked, with the exceptions of PSRs~J0302$+$2252 and J2329$+$4743, which are double-peaked.
In addition, the single peaks of PSRs~J0815$+$4611, J1236$-$0159, J1635$+$2332, J1848$+$1516 and J2051$+$1248 have complex shapes, while the rest are well fitted by a single von Mises function.
The features of the profiles can be seen in detail in the on-line version stored in the EPN database.
\begin{table*}
\centering
\caption{Characteristics of the pulse profiles shown in Fig.~\ref{fig:profiles}. $W$ is the width at a given fraction of the peak intensity and $\delta$ is the duty cycle. The subscripts indicate the percentage of the peak intensity that the value refers to.
Uncertainties are $\sim 1$\,ms on the width values and $\sim 0.1\%$ on the duty cycle values.
}
\label{tab:profiles}
\begin{tabular}{lcccccccccccccccc}
\toprule
PSR && \multicolumn{3}{c}{$W_{20}$ (ms)} && \multicolumn{3}{c}{$\delta_{20}$ (\%)} && \multicolumn{3}{c}{$W_{50}$ (ms)} && \multicolumn{3}{c}{$\delta_{50}$ (\%)} \\
& & 149 & 334 & 1532 & & 149 & 334 & 1532 & & 149 & 334 & 1532 & & 149 & 334 & 1532 \\
& & MHz & MHz & MHz & & MHz & MHz & MHz & & MHz & MHz & MHz & & MHz & MHz & MHz \\
\midrule
J0115$+$6325 & & 35& --& 28 & & 6.7& --& 5.3 & & 22& --& 19 & & 4.3& --& 3.7 \\
J0122$+$1416 & & 48& --& -- & & 3.4& --& -- & & 32& --& -- & & 2.3& --& -- \\
J0139$+$3336 & & 32& --& 22 & & 2.6& --& 1.8 & & 15& --& 14 & & 1.2& --& 1.2 \\
J0302$+$2252 & & 79& 66& 55 & & 6.6& 5.4& 4.6 & & 70& 59& 47 & & 5.8& 4.9& 3.9 \\
J0349$+$2340 & & 71& --& -- & & 2.9& --& -- & & 50& --& -- & & 2.0& --& -- \\
J0518$+$5125 & & 51& --& -- & & 5.5& --& -- & & 33& --& -- & & 3.7& --& -- \\
J0742$+$4334 & & 44& --& -- & & 7.3& --& -- & & 30& --& -- & & 5.0& --& -- \\
J0815$+$4611 & & 21& 17& 16 & & 4.8& 3.8& 3.7 & & 15& 12& 9 & & 3.4& 2.8& 2.2 \\
J0857$+$3349 & & 14& 14& -- & & 5.6& 5.6& -- & & 9& 5& -- & & 3.5& 2.1& -- \\
J1226$+$0005 & & 69& 47& -- & & 3.0& 2.1& -- & & 48& 32& -- & & 2.1& 1.4& -- \\
J1236$-$0159 & & 150& 110& 176 & & 4.2& 3.1& 4.9 & & 94& 96& 90 & & 2.6& 2.7& 2.5 \\
J1343$+$6634 & & 91& --& -- & & 6.5& --& -- & & 57& --& -- & & 4.1& --& -- \\
J1404$+$1159 & & 81& 65& 55 & & 3.0& 2.5& 2.1 & & 58& 45& 36 & & 2.2& 1.7& 1.4 \\
J1635$+$2332 & & 38& 57& 53 & & 3.2& 4.7& 4.4 & & 14& 32& 46 & & 1.1& 2.6& 3.8 \\
J1735$+$6320 & & 15& 16& 9 & & 3.0& 3.0& 1.7 & & 9& 10& 6 & & 1.7& 2.0& 1.1 \\
J1848$+$1516 & & 208& 106& 261 & & 9.3& 4.7& 11.7 & & 40& 20& 146 & & 1.8& 0.9& 6.5 \\
J1849$+$2559 & & 11& 17& -- & & 2.1& 3.2& -- & & 6& 5& -- & & 1.3& 1.0& -- \\
J1933$+$5335 & & 67& --& -- & & 3.3& --& -- & & 53& --& -- & & 2.6& --& -- \\
J2051$+$1248 & & 147& --& 87 & & 26.5& --& 15.7 & & 87& --& 49 & & 15.7& --& 8.8 \\
J2329$+$4743 & & 55& 44& 48 & & 7.5& 6.0& 6.7 & & 10& 6& 33 & & 1.4& 0.8& 4.5 \\
\bottomrule
\end{tabular}
\end{table*}
Full widths at $20\%$ ($W_{20}$) and $50\%$ ($W_{50}$) of the profile peak were calculated.
In order to improve the precision of the calculated pulse limits, the number of phase bins in the profiles was increased by a factor of 10,000 with a linear interpolation.
Full widths were obtained as the phase differences between the first and last intersections of the profile with horizontal lines at $20$ and $50\%$ of the peak, respectively.
Only the phase bins containing the pulse were selected, in order to prevent spurious noise peaks from biasing the result for low-S/N profiles.
The pulsar duty cycles $\delta$ were obtained as the ratio between the width $W$ and the pulsar period.
The results are summarized in Table~\ref{tab:profiles}.
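The width measurement can be sketched as follows (a toy version using a hypothetical Gaussian pulse and $100\times$ rather than the $10{,}000\times$ oversampling used in the text):

```python
import numpy as np

def width_at_fraction(profile, frac, period_s):
    """Full width (s) of a pulse profile at `frac` of the peak, from the
    first and last threshold crossings of a linearly interpolated,
    oversampled profile, as described in the text."""
    n = len(profile)
    x = np.arange(n)
    xf = np.linspace(0, n - 1, 100 * n)   # 100x oversampling (toy value)
    pf = np.interp(xf, x, profile)
    above = np.where(pf >= frac * pf.max())[0]
    width_bins = xf[above[-1]] - xf[above[0]]
    return width_bins / n * period_s

# Toy example: a Gaussian pulse of sigma = 10 bins in a 1024-bin profile
# of a 1-s pulsar; W50 should approach the FWHM, 2.355*sigma bins.
phase_bins = np.arange(1024)
profile = np.exp(-0.5 * ((phase_bins - 512) / 10.0) ** 2)
w50 = width_at_fraction(profile, 0.5, 1.0)
```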
The obtained duty cycles are usually below 10\%.
The only exceptions are PSRs~J1848$+$1516 and, most notably, J2051$+$1248, whose peak occupies more than a quarter of the pulse profile at $149$\,MHz.
For most of the pulsars, the duty cycle decreases with increasing observing frequency.
This is a common behavior explained for example by the radius-to-frequency-mapping model \citep{rud75,cor75}.
However, a few pulsars have wider peaks at higher frequencies.
This could be due to the appearance of additional components in the higher-frequency profile, e.g. in PSRs~J1236$-$0159, J1635$+$2332 and J1848$+$1516, similar to the exceptional cases reported by \citet{pil16}.
The pulse profile of PSR J1848$+$1516 is remarkably different at different observing frequencies.
The phase of the main peak at $149$\,MHz corresponds to the phase of a secondary component in the $1532$-MHz profile.
Instead, no features are present in the $149$-MHz profile coincident with the main peak of the $1532$-MHz profile, although the latter could correspond to the secondary component shifted to an earlier phase.
In addition, the different components overlap in the $1532$-MHz profile, even though this could be due to the higher noise with respect to the $149$-MHz profile.
Finally, the $334$-MHz profile presents only one clearly detected, narrow component, coincident with the secondary component in the $149$-MHz profile.
However, only $\sim 25$ of the $816$ pulsar rotations summed to obtain the pulse profile at $334$-MHz contained detectable pulses.
None of the pulse profiles are heavily scattered.
The main peaks of some pulsars, such as PSRs J0115$+$6325, J0139$+$3336, J1236$-$0159, J1635$+$2332 and J1735$+$6320, show a tail at $149$\,MHz that might be consistent with a scattered component.
However, we did not attempt to model this possible scattering because of its weak effect.
\section{Flux densities and spectral indices}\label{sec:fluxes}
Mean flux densities have been calculated by using the following version of the radiometer equation, obtained by expanding Eqs. 7.1 and A1.21 in \citet{lor04}, after normalizing the pulse profiles.
\[
S_\text{mean} = \frac{T_\text{sky} + T_\text{rec}}{G\sqrt{2t\Delta f}} \sqrt{\frac{\max(p)}{n\max(p)-\sum_{1}^{n}p_i}}\sum_{1}^{n}p_i,
\]
where $p$ is the signal amplitude in a phase bin, $n$ is the total number of phase bins in the pulse profile, $T_\text{sky}$ is the sky temperature, $T_\text{rec}$ is the receiver noise temperature, $G$ is the telescope gain, $t$ is the integration time and $\Delta f$ is the effective bandwidth free from RFI.
Pulse width and period have been expressed in units of phase bins so that $P = n$.
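As a concrete illustration, the radiometer equation above can be evaluated numerically. The sketch below (in Python, with a made-up pulse profile and illustrative system values, not those of any actual observation in this paper) implements the expression term by term:

```python
import numpy as np

def mean_flux_density(p, t_sky, t_rec, gain, t_int, delta_f):
    """Mean flux density (Jy) from a normalized pulse profile p,
    following the radiometer equation with P expressed in phase bins."""
    p = np.asarray(p, dtype=float)
    n = p.size
    sefd = (t_sky + t_rec) / gain            # system equivalent flux density, Jy
    radiometer = sefd / np.sqrt(2.0 * t_int * delta_f)
    shape = np.sqrt(p.max() / (n * p.max() - p.sum()))
    return radiometer * shape * p.sum()

# Toy example: a narrow Gaussian pulse in a 1024-bin profile,
# observed for 600 s with a 1 K/Jy telescope and 300 MHz of clean band.
bins = np.arange(1024)
profile = np.exp(-0.5 * ((bins - 512) / 10.0) ** 2)
S = mean_flux_density(profile, t_sky=40.0, t_rec=25.0, gain=1.0,
                      t_int=600.0, delta_f=300e6)
```

As expected from the $1/\sqrt{t\Delta f}$ dependence, quadrupling the integration time halves the flux density inferred for the same profile amplitude.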
For Lovell telescope observations, we assumed a gain $G=1$\,K\,Jy$^{-1}$ and a system temperature $T_\text{sys}=25$\,K for the $1532$-MHz receiver \citep{bas16} and $T_\text{sys}=50$\,K for the $334$-MHz receiver.
The typical RFI environment at the Lovell telescope is estimated to leave $\sim 60$ and $75\%$ of clean data at $334$ and $1532$\,MHz, respectively.
For the NRT telescope, we used a gain $G=1.6$\,K\,Jy$^{-1}$ and a system temperature $T_\text{sys}=35$\,K \citep{the05}, while the RFI environment is estimated to leave $\sim 50\%$ of clean data.
Since the LOFAR gain depends on source elevation, number of functional antennas and observing frequency, LOFAR observations were calibrated following the procedure described by \citet{nou15} and \citet{kon16}.
The sky temperature $T_\text{sky}$ was calculated for each source and frequency by using the $408$-MHz sky map of \citet{has82} and a spectral index $\alpha \sim -2.55$ \citep{moz17}.
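This scaling can be written as $T_\text{sky}(\nu) = T_{408}\,(\nu/408\,\text{MHz})^{\alpha}$. A minimal sketch (the 20\,K reference temperature is an arbitrary illustrative value, not one read from the Haslam map):

```python
def t_sky(freq_mhz, t_408, alpha=-2.55):
    """Scale a 408-MHz sky temperature to another frequency
    assuming a single power law with spectral index alpha."""
    return t_408 * (freq_mhz / 408.0) ** alpha

# The sky background is far brighter at LOFAR frequencies than at L-band:
t_149 = t_sky(149.0, t_408=20.0)     # ~13x the 408-MHz value
t_1532 = t_sky(1532.0, t_408=20.0)   # ~30x fainter than at 408 MHz
```

At 149\,MHz the sky temperature can therefore dominate the receiver temperature in the system noise budget.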
Observations that were too contaminated by RFI, or whose pointing positions were too inaccurate before being refined at a later stage, were excluded from the analysis.
LOFAR observations were split into two frequency sub-bands to increase the constraint on the spectral index.
This was not possible for PSR J1933$+$5335 because the source was too weak; the full LOFAR bandwidth was thus used to calculate the flux of this pulsar.
We decided to exclude PSR J0139$+$3336 from the analysis because its nulling fraction is too extreme to obtain reliable flux values over the duration of our observations.
PSR J2051$+$1248 was barely visible at $334$\,MHz and the signal was too weak to calculate a flux density.
For each pulsar, all Lovell and NRT observations from the same receiver have been added together by using \textsc{psradd} from \textsc{psrchive}, which weights the data by the observation duration and RFI level.
The mean flux density was then calculated from the resulting integrated profile.
This is expected to effectively remove the effect of diffractive scintillation observed at higher frequencies, which caused the flux to vary by up to a factor of 5 around the average value between different observations of the same pulsar.
It was not possible to follow the same method for LOFAR observations since the number of active antennas varied between observations.
However, since the resulting array sensitivity is not expected to be significantly affected and the RFI level and observation duration are approximately constant, we obtained the mean flux density by averaging the flux density measured in different observations.
The resulting values of flux densities are reported in Table~\ref{tab:flux} and shown in Fig.~\ref{fig:flux}.
We calculated a spectral index for each pulsar, using upper limits in the case of non-detections.
Due to the small number of measurements and large uncertainties, all the flux values have been fitted with a single power-law.
However, a spectral turn-over is sometimes observed in pulsars at LOFAR frequencies \citep[e.g.][]{bil16}, and this could be the case for, e.g., PSRs J1236$-$0159 and J1848$+$1516.
The values of the spectral indices obtained vary significantly (Table~\ref{tab:flux}) but the spectra of most pulsars are steeper than the average pulsar population \citep[$\alpha \approx -1.4$,][]{bat13}.
In the case of a non-detection with one of the telescopes, upper limits were derived by assuming that $\text{S/N}>5$ is needed to confidently detect a source.
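Since a single power law is linear in log--log space, the spectral-index fit reduces to a straight-line fit. A minimal sketch (using synthetic fluxes, not the measured values of Table~\ref{tab:flux}, and ignoring the measurement uncertainties that a full fit would weight by):

```python
import numpy as np

def fit_spectral_index(freqs_mhz, fluxes_mjy):
    """Unweighted least-squares fit of S ~ nu^alpha in log-log space.
    Returns the spectral index alpha and the log10 intercept."""
    alpha, intercept = np.polyfit(np.log10(freqs_mhz), np.log10(fluxes_mjy), 1)
    return alpha, intercept

# Synthetic steep-spectrum pulsar: S = 10 mJy at 149 MHz, alpha = -2.4.
freqs = np.array([129.0, 168.0, 334.0, 1532.0])
fluxes = 10.0 * (freqs / 149.0) ** -2.4
alpha, _ = fit_spectral_index(freqs, fluxes)    # recovers alpha = -2.4
```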
\begin{figure*}
\centering
\includegraphics{flux}
\caption{
Mean flux densities (dots) and fitted power-law spectra (lines) reported in Table~\ref{tab:flux} for the different pulsars and observing frequencies.
Triangles represent upper limits considering a fiducial value for detection of $\text{S/N}>5$, with filled triangles for Lovell observations and empty triangles for NRT observations.
Shadowed regions represent 1-$\sigma$ uncertainties on the spectral indices referenced to $149$\,MHz.
}
\label{fig:flux}
\end{figure*}
\begin{table*}
\begin{threeparttable}
\centering
\caption{Mean flux densities $S$ measured at different frequencies (indicated in units of MHz as subscripts) and the inferred spectral indices $\alpha$.
Flux density values have been fitted with a single spectral index. The last column reports the offset between the center of the telescope beams and the position refined with timing models.
}
\label{tab:flux}
\begin{tabular}{llllllll}
\toprule
PSR & S$_{129}$ & S$_{168}$ & S$_{334}$ & S$_{1484}$ & S$_{1532}$ & $\alpha$ & offset \\
& mJy & mJy & mJy & mJy & mJy & & arcmin \\
\midrule
J0115$+$6325 & 16(8) & 10(5) & -- & < 0.1 & 0.05(1) & -2.38(6) & 2.3 \\
J0122$+$1416 & 9(4) & 4(2) & < 1.6 & < 0.1 & < 0.2 & -3(3) & 0.6 \\
J0302$+$2252 & 20(10) & 18(9) & 5(1) & -- & 0.7(2) & -1.43(6) & 1.6 \\
J0349$+$2340 & 3(2) & 1.1(6) & < 1.6 & -- & < 0.2 & -4(3) & 0.8 \\
J0518$+$5125 & 4(2) & 2(1) & < 2.0 & -- & < 0.2 & -3(3) & 1.9 \\
J0742$+$4334 & 4(2) & 3(2) & < 1.6 & -- & < 0.2 & -1(3) & 0.0 \\
J0815$+$4611 & 18(9) & 16(8) & 2.5(6) & -- & 0.36(9) & -1.5(2) & 0.4 \\
J0857$+$3349 & 4(2) & 2(1) & 0.4(1) & -- & < 0.2 & -2.30(8) & 0.9 \\
J1226$+$0005 & 20(10) & 8(4) & 0.5(1) & -- & < 0.2 & -4.05(2) & 2.0 \\
J1236$-$0159 & 7(3) & 6(3) & 3.0(8) & -- & 0.05(1) & -2.3(4) & 3.9 \\
J1343$+$6634 & 14(7) & 7(4) & < 1.5 & < 0.1 & < 0.2 & -3(3) & 1.4 \\
J1404$+$1159 & 30(20) & 18(9) & 2.7(7) & 0.28(7) & 0.09(2) & -2.1(2) & 1.5 \\
J1635$+$2332 & 9(4) & 2(1) & 0.4(1) & -- & 0.025(6) & -2.1(3) & 0.8 \\
J1735$+$6320 & 2(1) & 1.8(9) & 0.6(1) & -- & 0.06(1) & -1.52(8) & 0.6 \\
J1848$+$1516 & 6(3) & 9(5) & 1.5(4) & -- & 0.12(3) & -1.7(2) & 0.4 \\
J1849$+$2559 & 7(4) & 4(2) & 0.5(1) & -- & < 0.2 & -2.9(2) & 1.1 \\
J1933$+$5335 & \multicolumn{2}{c}{1.0(5)$^\text{a}$} & -- & -- & -- & -- & 3.0 \\
J2051$+$1248 & 50(30) & 40(20) & < 1.8 & -- & 0.05(1) & -2.9(2) & 1.1 \\
J2329$+$4743 & 3(2) & 3(1) & 0.5(1) & -- & 0.10(2) & -1.4(2) & 0.8 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[a] Value referenced to a central frequency of $149$\,MHz.
\end{tablenotes}
\end{threeparttable}
\end{table*}
For all the telescopes, the random error on the flux density of a given pulsar at a given frequency was obtained as the standard error when more than nine observations were available.
The systematic error, due to uncertain estimates of the telescope gain, temperature and average RFI environment, is estimated to be $50\%$ of the flux density value for LOFAR measurements \citep{kon16} and $25\%$ for NRT and Lovell.
The resulting uncertainty on the flux density was taken as the larger of the two errors.
In addition to these estimated uncertainties, however, there are potentially significant errors on the flux density values that are not accounted for.
Most importantly, the observations were acquired before timing models were available, and thus approximate source positions were used.
The offset of the refined position from the beam center implies actual flux densities somewhat higher than those reported, and consequently steeper spectra.
The uncertainties on the initial positions were of the order of arcminutes, as discussed in \textsection\ref{sec:observations}.
Given the FWHM of the LOFAR beams ($\sim 3.5$\,arcmin), the offset from the beam center is significant for some pulsars.
For comparison, the FWHMs of the Lovell beams are $\sim 40$ and $9$\,arcmin at $334$ and $1532$\,MHz, respectively.
Correcting for the complex beam shape of LOFAR \citep[e.g.][]{obr15} is difficult and the simple approach used by \citet{san18} to model the beam as a $\sinc^2$ function proved insufficient for this study.
Moreover, ionospheric effects are expected to cause a jitter of the LOFAR beams of up to $\sim 1$\,arcmin.
Therefore, we do not attempt to correct for the positional offset and caution the reader that the values reported for LOFAR fluxes and spectral indices are indicative and should not be used for detailed studies.
We report the offset between the beam center and the refined position of the pulsars in the last column of Table~\ref{tab:flux}.
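To see why arcminute-scale offsets matter so much more for LOFAR than for the Lovell telescope, consider a simple Gaussian approximation to the primary beam (an illustration only: the true LOFAR beam is more complex, and even the $\sinc^2$ model of \citet{san18} proved insufficient here):

```python
import numpy as np

def gaussian_beam_attenuation(offset_arcmin, fwhm_arcmin):
    """Relative sensitivity at a given angular offset from the beam
    center, for an idealized Gaussian beam of the given FWHM."""
    return np.exp(-4.0 * np.log(2.0) * (offset_arcmin / fwhm_arcmin) ** 2)

# A 2-arcmin pointing offset costs ~60% of the sensitivity in a
# 3.5-arcmin LOFAR beam, but is negligible in a 40-arcmin Lovell beam.
a_lofar = gaussian_beam_attenuation(2.0, 3.5)    # ~0.40
a_lovell = gaussian_beam_attenuation(2.0, 40.0)  # ~0.99
```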
NRT observations of PSR J1404$+$1159 were independently calibrated by regularly observing known calibration sources.
This method could not be applied to Lovell data because calibration sources were not observed nor to LOFAR data because of the dependency of the telescope gain on the source position.
The average flux density obtained for PSR J1404$+$1159 with NRT through calibration sources was $0.7(2)$\,mJy, $\sim 1.5\sigma$ away from the value of $0.28(7)$\,mJy obtained through the radiometer equation and reported in Table~\ref{tab:flux}.
We could not identify a clear reason for this discrepancy and it could originate from the systematic errors described above.
In addition, \citet{bri18} report a flux density for the source of $4.3(9)$\,mJy at $327$\,MHz and $0.027(5)$\,mJy at $1400$\,MHz using Arecibo.
While the first value is compatible with our measurement at $334$\,MHz, the second is lower than our estimates at both $1484$ and $1532$\,MHz.
However, the flux density of the pulsar is highly variable at 1.4\,GHz and we measure values between $\sim 0.04$ and $1.1$\,mJy in individual NRT observations calibrated with known sources.
We checked the TIFR GMRT Sky Survey \citep[TGSS;][]{int17} source catalog around the position of the brightest pulsars at $149$\,MHz in our sample but we did not find any counterpart.
\section{Individual sources}\label{sec:variations}
\begin{figure*}
\centering
\includegraphics{ds}
\caption{Phase-resolved flux density variations over time for six pulsars observed with LOFAR at $149$\,MHz.
The gray-scale is normalized independently for each plot.
Pulsar names are indicated on the individual panels.
Each time bin contains a single pulsar rotation for PSRs~J0139$+$3336 and J1848$+$1516; time bins are 5\,s for the rest of the sources.
There are $1024$ phase bins over the full phase; only $10\%$ of the full rotation is shown here.
Horizontal white stripes indicate data that have been excised to remove RFI.
}
\label{fig:ds}
\end{figure*}
Here we discuss six pulsars of the sample that show sporadic emission or interesting single-pulse behavior.
The flux densities of these sources as a function of rotational phase and time are shown in Fig.~\ref{fig:ds} for LOFAR observations.
Two pulsars show drifting sub-pulses and five are nullers, with PSR~J0139$+$3336 tentatively classified as a RRAT (extreme nuller).
We calculated the nulling fractions of these pulsars following the procedure of \citet{wan07}.
All the observations, with the exception of those of PSRs J0139$+$3336 and J1848$+$1516, are averaged every 5 seconds and single pulses are not stored.
After excluding PSR~J0139$+$3336, four pulsars in our sample show nulling fractions $>15\%$.
Therefore, the percentage of nulling pulsars in our sample is more than double the percentage in the total pulsar population, where nulling pulsars are $\lesssim 10$\% \citep{yan14}.
Since the characteristic age of the pulsars in our sample is on average larger than the rest of the population, this could support the evidence found by \citet{rit76} and \citet{wan07} that the nulling fraction is related to the pulsar characteristic age, or it could be due to the long dwell time of LOTAAS observations (one hour each).
The nulling fractions found in our sample (between 15 and 50\%) are large with respect to the rest of the nulling pulsars, but not unheard of \citep[e.g.][]{big92,wan07}.
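To illustrate how a nulling fraction can be estimated, the toy sketch below uses a deliberately simplified estimator (not the full on- and off-pulse energy-distribution analysis of \citet{rit76} and \citet{wan07}): when the real pulses are bright, nulled rotations contribute a noise-like on-pulse energy distribution that is symmetric about zero, so roughly twice the fraction of negative on-pulse energies recovers the nulling fraction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 5000 rotations: 35% nulls (pure noise) and 65% bright pulses.
n_rot, nf_true = 5000, 0.35
is_null = rng.random(n_rot) < nf_true
on_energy = np.where(is_null,
                     rng.normal(0.0, 1.0, n_rot),   # nulls: noise only
                     rng.normal(8.0, 1.0, n_rot))   # pulses: well above noise

nf_est = 2.0 * np.mean(on_energy < 0.0)            # ~0.35
```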
\subsubsection*{PSR~J0139$+$3336}
\begin{figure}
\centering
\includegraphics{sp}
\caption{Flux density (in arbitrary units) as a function of phase in single rotations of PSR~J0139$+$3336.
The two observations (LOFAR at $149$\,MHz, top, and Lovell at $1532$\,MHz, bottom) are not simultaneous.
The same number of rotations are shown for the two observations.
}
\label{fig:sp}
\end{figure}
The source shows the behavior of a RRAT, with only sporadic single pulses detected.
We stored single-pulse resolved data at both $149$ and $1532$\,MHz.
Pulses are visible in single pulsar rotations separated by minutes.
An example of a few bright pulses is reported in Fig.~\ref{fig:sp}.
Using LOFAR observations, we selected pulses having a $\text{S/N}>10$ at the phase of the main peak in the integrated pulse profile.
This threshold was chosen to select pulses clearly separated from the noise distribution.
A total of 30 pulses were detected in LOFAR observations above this threshold.
An average of $\sim 3$ pulses per observation was detected, implying a rate of one pulse every $\sim 5$ minutes.
Given the small number of detected events, the rate of pulses among different observations was roughly compatible with a Poisson distribution.
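With so few events, the Poisson check essentially compares the scatter of the per-observation counts with the Poisson expectation that the variance equals the mean. A sketch with hypothetical counts (the actual per-observation detection record is not reproduced here):

```python
import math

# Hypothetical counts of S/N > 10 pulses in ten observations,
# 30 pulses in total (a mean rate of ~3 per observation).
counts = [1, 5, 2, 6, 3, 0, 4, 3, 2, 4]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
dispersion = var / mean        # ~1 for a Poisson process

# Chance of an empty observation if the true rate is 3 per observation:
p_zero = math.exp(-mean)       # ~5%
```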
The same analysis was repeated for Lovell observations at $1532$\,MHz.
The S/N of the brightest pulses is similar in the two cases.
The rate is also similar: a pulse with $\text{S/N}>10$ is detected every $\sim 5$ minutes, with a distribution roughly consistent with a Poisson distribution.
The lack of a robust estimate of the source spectral index prevents a more detailed comparison of the pulses at the two frequencies.
A peak in the integrated pulse profile was detected in all of the LOFAR observations, and in most of the Lovell observations, that contained pulsar rotations where emission could be visually identified.
After excluding these single rotations, the integrated pulse profile is indistinguishable from noise.
\subsubsection*{PSR~J0302$+$2252}
The flux of this nulling pulsar is highly variable on short timescales for both the peaks in the profile (Fig.~\ref{fig:ds}).
Unfortunately, we did not store single pulses for this source; rather, the flux is averaged every five seconds (about four rotational periods).
Therefore, it is impossible to assess the flux variability over single rotations.
The degree and timescale of variation are similar for the two peaks, with a nulling fraction $\sim 15$\%.
However, the flux density of the two peaks in single sub-integrations is not obviously correlated.
\subsubsection*{PSR~J1226$+$0005}
A null emission lasting $\sim 40$ seconds can be seen for this pulsar in Fig.~\ref{fig:ds} around $200$ seconds after the start of the observation.
Longer nulls are detected as well, with the pulsar being detected for only the first $\sim 2$ minutes of one 10-minute observation.
The average nulling fraction for the pulsar is $\sim 50\%$.
Fig.~\ref{fig:ds} also reveals drifting sub-pulses for PSR J1226$+$0005.
With no individual pulses recorded (Fig.~\ref{fig:ds} shows 5-second averages, i.e. $\sim2.2$ pulse periods), it is hard to quantify this further.
Nevertheless, the drift rate appears to be variable: it is lower (i.e. the drift bands are steeper) at the top of Fig.~\ref{fig:ds} than at $\sim 120$ seconds into the observation.
In addition, the emission appears to wander slightly in pulse phase (e.g. it arrives slightly late $\sim120$ seconds into the observation shown in Fig.~\ref{fig:ds}).
Individual pulse observations might reveal if the observed variability is related to discrete mode changes, or if the effect is smoother.
\subsubsection*{PSR~J1343$+$6634}
This pulsar shows a nulling fraction $\sim 35$\%.
The source switches between detectable and non-detectable states on a timescale of a few tens of seconds.
This behavior is consistent throughout the different observations.
\subsubsection*{PSR~J1404$+$1159}
During the preparation of this manuscript, the source has also been studied by \citet{bri18} at $327$ and $1400$\,MHz using the Arecibo telescope.
The parameters that they present are in agreement with our measurements.
We also detect the bright drifting sub-pulses forming the main peak.
The sub-pulses are clearly visible in the 5-second-long sub-integrations shown in Fig.~\ref{fig:ds}.
The relatively high S/N of the pulsar and detection of the drifting sub-pulses over multiple frequencies could allow detailed studies of the drifting evolution with frequency \citep[e.g.][]{has13}.
\subsubsection*{PSR~J1848$+$1516}
The source switches between detectable and non-detectable states every few tens of rotations, with an average nulling fraction of $\sim 50$\%.
While the pulsar is relatively active in some observations (as shown in Fig.~\ref{fig:ds}) it is undetected in several 15-minute observations.
Sporadically, a second peak appears trailing the main one, becoming the brightest in three observations.
Only on a very few occasions has a third peak, leading the main one, been detected for a few rotations.
\section{Conclusions}\label{sec:conclusions}
We have presented the properties of 20 radio pulsars discovered by the LOFAR telescope as part of the LOTAAS survey.
Since their discovery, the sources have been regularly observed at multiple frequencies using LOFAR, Lovell and NRT telescopes.
This allowed us to calculate the astrometric and rotational parameters of the pulsars.
They have, on average, longer periods and lower spin-down rates than the majority of the pulsar population.
This places the pulsars closer to the death line than the average of the global pulsar population.
It is unclear whether this is a real effect or a selection bias and this will be explored in a subsequent paper using a larger LOTAAS sample.
Integrated pulse profiles were calculated at different frequencies using the obtained timing models.
They are mostly single-peaked and show frequency evolution with a complex behavior in some cases.
Values of mean flux densities at the different observing frequencies have been calculated.
Even keeping in mind the systematic errors present, the resulting spectra are steeper than average for most pulsars.
Five out of the 20 pulsars in the sample are undetectable for more than $15\%$ of the time, with PSR J0139$+$3336 tentatively classified as a RRAT.
Two of the pulsars show drifting sub-pulses.
\section*{Acknowledgements}
We thank Anne M. Archibald for her help with the analysis.
DM and JWTH acknowledge funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Starting Grant agreement nr. 337062 (`DRAGNET').
JWTH also acknowledges funding from an NWO Vidi fellowship.
DM is a Banting fellow.
JvL acknowledges funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 617199
(`ALERT'), and from Vici research programme `ARGO' with project number
639.043.815, financed by the Netherlands Organisation for Scientific
Research (NWO).
This paper is based in part on data obtained with the International LOFAR Telescope (ILT).
LOFAR \citep{haa13} is the Low Frequency Array designed and constructed by ASTRON.
It has facilities in several countries.
These are owned by various parties (each with their own funding sources) and are collectively operated by the ILT
foundation under a joint scientific policy.
The data for this project were taken as part of the LOFAR long-term proposals ``Pulsar Timing with LOFAR" (PI: Verbiest), under proposal codes LC0\_011, DDT0003, LC1\_027, LC2\_010, LT3\_001, LC4\_004, LT5\_003, LPR6\_002, LPR7\_001, LPR8\_001, LC9\_041 and LT10\_004, and ``Additional Timing Observations on LOTAAS Pulsar Discoveries'' (PI: Tan), under proposal codes LC8\_035, LC9\_021 and LT10\_015.
Pulsar research at Jodrell Bank and access to the Lovell Telescope is supported by a Consolidated Grant from the UK's Science and Technology Facilities Council.
The Nan\c{c}ay Radio Observatory is operated by the Paris Observatory,
associated with the French Centre National de la Recherche Scientifique
(CNRS) and Universit\'{e} d'Orl\'{e}ans.
\bibliographystyle{mnras}
\section{Introduction}
\subsection{Background}
Wireless sensor networks (WSNs) have been widely studied for detection
and estimation problems. Recently, considerable research has focused
on the fusion of \emph{analog} rather than encoded digital data in a
distributed sensor network to improve estimation performance. The
advantages of analog WSNs have been established in
\cite{Gastpar:2008,Gastpar:2005,Gastpar:2003}, where it was shown that
when using distortion between the source and recovered signal as the
performance metric, digital transmission (separate source and channel
coding) achieves an exponentially worse performance than analog
signaling. A number of studies have focused on algorithm development
and analysis for analog WSNs with a single-antenna fusion center (FC).
In \cite{Cui:2007}, the sensors amplify and forward their observations
of a scalar source to the FC via fading channels, and algorithms are
developed to either minimize estimation error subject to transmit
power constraints or minimize power subject to estimation error
constraints. The scalar source model for this problem was generalized
to correlated vector sources in \cite{Bahceci:2008}. An opportunistic
power allocation approach was proposed in \cite{Matamoros:2011}, and
the scaling law with respect to the number of sensors was shown to be
the same as the optimal power allocation proposed in
\cite{Cui:2007}. In \cite{Banavar:2010}, the asymptotic variance of
the best linear unbiased estimator of an analog WSN is derived,
together with an analysis of the effect of different assumptions
regarding channel knowledge at the sensors. Scaling laws with respect
to the number of sensors have been studied in \cite{Leong:2010} for a
diversity-based method (where only the sensor with the best channel
transmits), as well as for the coherent multiple access channel (MAC)
and orthogonal channel cases, assuming a Gaussian source. In
\cite{Wang:2011}, a power optimization problem was formulated to
minimize the outage probability of the MSE for the coherent MAC
channel. More complicated settings involving analog WSNs with
nonlinear measurement models \cite{Fang:2009} or relays
\cite{Thatte:2008, Zarifi:2011} have also been studied.
The results described above all assume that the FC is equipped with
only one antenna. Just as multi-antenna receivers can provide
significant capacity or diversity gains in communication systems, the
estimation performance of a WSN should also benefit from the use of a
multi-antenna FC, though prior work on this scenario is limited. A
general scenario is investigated in \cite{Xiao:2008}, involving vector
observations of a vector-valued random process at the sensors, and
linearly precoded vector transmissions from the sensors to a
multi-antenna FC. Optimal solutions for the precoders that minimize
the mean-squared error (MSE) at the FC are derived for a coherent MAC
under power and bandwidth constraints. In \cite{Smith:2009},
single-antenna sensors amplify and forward their observations to a
multi-antenna FC, but it is shown that for Rayleigh fading channels, the
improvement in estimate variance is upper bounded by only a factor of
two compared to the case of a single-antenna FC. The performance of
two heuristic algorithms for choosing the gain and phase of the sensor
transmissions is also studied. Subsequent results by the same
authors in \cite{Banavar:2012, Banavar:20102}, have demonstrated that
when the channel undergoes (zero-mean) Rayleigh fading, there is a
limit to the improvement in detection performance for a multi-antenna
FC as well, but when the channel is Rician, performance improves
monotonically with respect to number of antennas.
The term ``amplify and forward'' is often used to describe analog
sensor networks like those discussed above, since each sensor applies
a complex gain to the observation before sending it to the FC. For a
coherent MAC, one can think of this as a type of distributed transmit
beamforming, although it is distinguished from distributed beamforming
applications such as those in communications since in a WSN the
observed noise is transmitted together with the signal of interest.
Some prior research in radar and communications has focused on
scenarios where the beamformer weights implement only a phase
shift rather than both a gain and a phase. The advantage of using
phase shifting only is that it simplifies the implementation and is
easily performed with analog hardware. Phase-shift-only beamformers
have most often been applied to receivers that null spatial
interference \cite{Smith:1999, Kajenski:2012}, but it has also been
considered on the transmit side for MISO wireless communications
systems \cite{Xia:2009}, which is similar to the problem considered
here. For the distributed WSN estimation problem, phase-only sensor
transmissions have been proposed in \cite{Tepedelenlioglu:2010}, where
the phase is a scaled version of the observation itself. Phase-only
transmissions were also considered in the context of distributed
detection in \cite{Banavar:2012}, leading to a problem similar to
one of those we consider here.
In addition to the work outlined above, other WSN research has focused
on sensor selection problems, particularly in situations where the
sensors have limited battery power. In these problems, only a subset
of the sensors are chosen to transmit their observations, while the
others remain idle to conserve power. The sensor selection problem
has been tackled from various perspectives, with the goal of
optimizing the estimation accuracy
\cite{Thatte:2008,Joshi:2009,Gupta:2006} or some heuristic system
utility \cite{Fang:2006,Krishnamurthy:2008}. In \cite{Joshi:2009},
the authors investigated maximum likelihood (ML) estimation of a
vector parameter by selecting a fixed-size subset of the sensors. An
approximate solution was found by relaxing the original Boolean
optimization to a convex optimization problem. A dynamic model is
used to describe the parameter of interest in \cite{Gupta:2006}, and
sensors use the Kalman filter to estimate the parameter. At each time
step, a single sensor is selected and the measurement at the selected
sensor is shared with all other sensors. A numerical sensor selection
algorithm was proposed to minimize an upper bound on the expected
estimation error covariance. Instead of the estimation accuracy, a
utility function that takes into account the measurement quality or
energy cost can also be used as the metric for sensor selection. In
\cite{Krishnamurthy:2008}, each sensor independently optimizes its own
operation status based on a utility function which depends on the
sensor's own measurement and the predicted operation status of other
sensors. A threshold is then found to enable the sensor to switch its
status for either energy efficiency or energy consumption, and a power
allocation algorithm was proposed to minimize the MSE at the FC.
\subsection{Approach and Contributions}
In this paper we consider a distributed WSN with single-antenna
sensors that observe an unknown deterministic parameter corrupted by
noise. The low-complexity sensors apply a phase shift (rather than
both a gain and phase) to their observation and then simultaneously
transmit the result to a multi-antenna FC over a coherent MAC. One
advantage of a phase-shift-only transmission is that it leads to a
simpler analog implementation at the sensor. The FC determines the
optimal value of the phase for each sensor in order to minimize the ML
estimation error, and then feeds this information back to the sensors
so that they can apply the appropriate phase shift. The estimation
performance of the phase-optimized sensor network is shown to be
considerably improved compared with the non-optimized case, and close
to that achieved by sensors that can adjust both the transmit gain and
phase. We analyze the asymptotic behavior of the algorithm for a
large number of sensors and a large number of antennas at the FC. In
addition, we analyze the impact of phase errors at the sensors due,
for example, to errors in the feedback channel, a time-varying main
channel or phase-shifter drift. We also consider a sensor selection
problem similar to that in \cite{Joshi:2009}, and analyze its
asymptotic behavior as well. Some additional details regarding the
contributions of the paper are listed below.
\begin{enumerate}
\item We present two algorithms for determining the phase factors used
at each sensor. In the first, we use the semi-definite relaxation
presented in \cite{Luo:2006,Banavar:2012} to convert the original problem to a
semidefinite programming (SDP) problem that can be efficiently solved
by interior-point methods. For the second algorithm, we apply the
analytic constant modulus algorithm (ACMA) \cite{Alle:1996}, which provides a
considerably simpler closed-form solution. Despite the reduction
in complexity, the performance of ACMA is shown via simulation to be
only slightly worse than the SDP solution, and close to the theoretical
lower bound on the estimate variance. This is especially encouraging
for networks with a large number of sensors $N$, since the SDP complexity
is on the order of $N^{3.5}$, while that for ACMA is only on the order of
$N^2$.
\item We separately derive performance scaling laws with respect to
the number of antennas and the number of sensors assuming non-fading
channels that take path loss into account. For both cases, we derive conditions that
determine whether or not the presence of multiple antennas at the FC
provides a significant benefit to the estimation performance. Prior work in \cite{Smith:2009,Banavar:2012,Banavar:20102}
has focused on either AWGN channels with identical channel gains, or
on fading channels where the channel gains are identically distributed,
corresponding to the case where the distances from the sensors to the
FC are roughly the same. References \cite{Smith:2009,Banavar:2012,Banavar:20102}
also assume a special case where the noise at each of the sensors
has the same variance, although \cite{Banavar:20102} examines how
certain upper bounds on performance change when the sensor noise is
arbitrarily correlated.
\item Using our model for the non-fading case, we are able
to elucidate detailed conditions under which the asymptotic estimation performance
will improve with the addition of more antennas $M$ at the FC. While
\cite{Smith:2009,Banavar:2012} showed that performance always improves
with increasing $M$ for AWGN channels with identical gains and identically
distributed sensor noise, we derive more detailed conditions that take
into account the possibility of non-uniform distances between the sensors
and FC and non-uniform noise at the sensors.
\item We conduct an analysis of the impact of phase errors at the
sensors assuming relatively small phase errors with variance
$\sigma_p^2\ll1$ (square-radians). In particular, we show that the
degradation to the estimate variance is bounded above by a factor of
$1+\sigma_p^2$. We note that the effect of errors in the transmit
phase at the sensors has previously been considered for the case of
$M=1$ in \cite{Banavar:2010}, although using a different phase
error model.
\item We consider the sensor selection problem separately for low
and high sensor measurement noise. For the low measurement noise
scenario, we relax the sensor selection problem to a standard linear
programming (LP) problem, and we also propose a reduced complexity version
of the algorithm. For the high measurement noise scenario, we show
that the estimation error is lower bounded by the inverse of the
measurement noise power, which motivates the use of a simple selection
method based on choosing the sensors with the lowest measurement
noise.
\end{enumerate}
A subset of the above results was presented in an earlier conference
paper \cite{Jiang:2011}.
\subsection{Organization}
The paper is organized as follows. Section~\ref{sec:two} describes the
assumed system model. Section~\ref{sec:three} formulates the phase
optimization problem and proposes a numerical solution based on SDP as
well as a closed-form solution based on the algebraic constant modulus
algorithm. In Section~\ref{sec:four}, the asymptotic performance of
the algorithm is analyzed for a large number of sensors and antennas.
The effect of phase errors is analyzed in Section~\ref{sec:five} and the
sensor selection problem is investigated in Section~\ref{sec:six}.
Simulation results are then presented in Section~\ref{sec:seven} and
our conclusions can be found in Section~\ref{sec:eight}.
\section{System Model}\label{sec:two}
We assume that $N$ single-antenna sensors in a distributed sensor
network independently observe an unknown but deterministic complex-valued parameter
$\theta$ according to the following model for sensor $i$:
\begin{equation}
\label{eq:sensmod}
y_i = \theta + v_i \;\nonumber ,
\end{equation}
where $v_i$ is complex-valued Gaussian observation noise with variance $\sigma_{v,i}^2$.
The noise is assumed to be independent from sensor to sensor.
Each sensor phase shifts its observation and transmits the signal
$a_i y_i$ to the FC, where $|a_i|=1$. Assuming a coherent MAC and an
FC with $M$ antennas, the vector signal received at the FC can be
expressed as
\begin{equation}
\label{eq:yvec}
\mathbf{y}=\mathbf{H}\mathbf{a}\theta+\mathbf{H}\mathbf{D}\mathbf{v}+\mathbf{n}\;,
\end{equation}
where $\mathbf{H}=[\mathbf{h}_{1},\dots,\mathbf{h}_{N}]$ and
$\mathbf{h}_{i}\in\mathbb{C}^{M\times 1}$ is the channel vector
between the $i$th sensor and the FC,
$\mathbf{a}=[a_{1},\dots,a_{N}]^{T}$ contains the adjustable phase
parameters, $\mathbf{D}=\mathrm{diag}\{a_{1},\dots,a_{N}\}$,
$\mathbf{v}$ is the sensor measurement noise vector with covariance
$\mathbf{V}=\mathbb{E}\{\mathbf{v}\mathbf{v}^{H}\}=\mathrm{diag}\left\{\sigma_{v,1}^2,\cdots,\sigma_{v,N}^2\right\}$,
and $\mathbf{n}$ is complex Gaussian noise at the FC with covariance
$\mathbb{E}\{\mathbf{n}\mathbf{n}^{H}\}=\sigma_{n}^2\mathbf{I}_{M}$,
where $\mathbf{I}_{M}$ is an $M\times M$ identity matrix. Note
that since the sensors can only phase shift their observation prior
to transmission, we ignore the issue of power control and assume
that the sensors have sufficient power to forward their observation
to the FC.
The combined noise term $\mathbf{HDv}+\mathbf{n}$ in~(\ref{eq:yvec})
is Gaussian with covariance $\mathbf{HVH}^H+\sigma_n^2\mathbf{I}$,
since $\mathbf{DVD}^H=\mathbf{V}$ due to the phase-only assumption.
Assuming the FC is aware of the channel matrix
$\mathbf{H}$, the noise covariance $\mathbf{V}$ and $\sigma_n^2$,
it can calculate the ML estimate of $\theta$ using \cite{Kay:1993}
\begin{equation}
\hat{\theta}_{ML}=\frac{\mathbf{a}^{H}\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{y}}{\mathbf{a}^{H}\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}\mathbf{a}}\;.\nonumber
\end{equation}
The estimator $\hat{\theta}_{ML}$ is unbiased with variance
\begin{equation}\label{eq:ml}
\mathrm{Var}(\hat{\theta}_{ML})=\left(\mathbf{a}^{H}\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}\mathbf{a}\right)^{-1}\;.
\end{equation}
Furthermore, since $\|\mathbf{a}\|^2=N$ when only phase shifts are used
at the sensors, it is easy to see that the variance is lower bounded by
\begin{equation}\label{eq:lb}
\mathrm{Var}(\hat{\theta}_{ML})\!\ge\!\frac{1}{N\lambda_{\max}\left(\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}\right)}\;,
\end{equation}
where $\lambda_{\max}(\cdot)$ denotes the largest eigenvalue of its
matrix argument. Note that the bound in~(\ref{eq:lb}) is in general
unachievable, since with probability one the given matrix will not
have an eigenvector with unit modulus elements.
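As a concrete illustration of the model in~(\ref{eq:yvec}) and the ML estimator, the following numpy sketch simulates the coherent MAC and checks the variance expression in~(\ref{eq:ml}) by Monte Carlo. All dimensions, noise powers, and the random phase vector are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 3                 # sensors and FC antennas (illustrative sizes)
sigma_n2 = 0.5              # FC noise power sigma_n^2
sigma_v2 = rng.uniform(0.1, 0.3, N)   # per-sensor noise powers sigma_{v,i}^2

# Random complex channel H (M x N) and unit-modulus transmit phases a
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
V = np.diag(sigma_v2)
a = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

# Covariance of the combined noise HDv + n; note D V D^H = V for phase-only weights
C = H @ V @ H.conj().T + sigma_n2 * np.eye(M)
Ci = np.linalg.inv(C)

def ml_estimate(y):
    """ML estimate of theta from y = H a theta + H D v + n."""
    num = a.conj() @ H.conj().T @ Ci @ y
    den = np.real(a.conj() @ H.conj().T @ Ci @ H @ a)
    return num / den

theta = 1.0 - 0.5j
est = []
for _ in range(4000):
    v = np.sqrt(sigma_v2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    n = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    y = H @ (a * (theta + v)) + n     # equals H a theta + H D v + n
    est.append(ml_estimate(y))
est = np.array(est)

var_pred = 1.0 / np.real(a.conj() @ H.conj().T @ Ci @ H @ a)
var_emp = np.mean(np.abs(est - est.mean()) ** 2)
print(var_pred, var_emp)   # the two values should agree closely
```

The empirical variance matches the closed-form expression and the sample mean of the estimates approaches $\theta$, consistent with the unbiasedness of the estimator.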
\section{Optimizing the Sensor Phase}\label{sec:three}
In this section we consider the problem of choosing $\mathbf{a}$ to
minimize $\mathrm{Var}(\hat{\theta}_{ML})$ in~(\ref{eq:ml}). The unit
modulus constraint prevents a trivial solution, but as we note below,
a direct solution is not possible even without this constraint since
the noise covariance would then depend on $\mathbf{a}$. The general
optimization problem is formulated as
\begin{eqnarray}\label{eq:opt}
\min_{\mathbf{a}} &&\mathrm{Var}(\hat{\theta}_{ML})\\
s. t. &&|a_i|=1,\; i=1, \dots, N\;.\nonumber
\end{eqnarray}
Defining
$\mathbf{B}=\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}$,
the problem can be rewritten as
\begin{eqnarray}\label{eq:quardra}
\max_{\mathbf{a}} &&\mathbf{a}^{H}\mathbf{B}\mathbf{a}\\
s. t. &&|a_i|=1,\; i=1, \dots, N\;.\nonumber
\end{eqnarray}
Note that this optimization can only determine $\mathbf{a}$ to
within an arbitrary phase shift $e^{j\phi}$, but this scaling has no
impact on the estimate of $\theta$. In other words, the vector $\mathbf{a}$
and the vector $\mathbf{a}e^{j\phi}$ for arbitrary $\phi$ will both yield
the same estimate $\hat{\theta}_{ML}$. Since the FC is aware of the vector
$\mathbf{a}$ determined by the optimization in~(\ref{eq:quardra}), any
arbitrary phase factor present in the $\mathbf{Ha}\theta$ term of the
model in~(\ref{eq:yvec}) will be canceled when the ML estimate of $\theta$
is computed. This is also clear from the variance expression in~(\ref{eq:ml}),
which is insensitive to any phase shift applied to $\mathbf{a}$.
If there are only two sensors in the network, a simple closed-form
solution to~(\ref{eq:quardra}) can be obtained. Defining
$\mathbf{B}=\left[\begin{array}{cc} a&be^{j\beta}\\
be^{-j\beta}&c\end{array}\right]$ with $a, b, c>0$ and $\mathbf{a}=[e^{j\beta_1},e^{j\beta_2}]^T$, then $\mathbf{a}^H\mathbf{B}\mathbf{a}$ is calculated as
\begin{eqnarray}\label{eq:closetwo}
\mathbf{a}^H\mathbf{B}\mathbf{a}&=&a+c+2b\cos(\beta_1-\beta_2-\beta)\nonumber\\
&\le&a+c+2b\;,
\end{eqnarray}
and the equality in (\ref{eq:closetwo}) can be achieved for any
$\beta_1,\beta_2$ that satisfy $\beta_1-\beta_2=\beta$.
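The two-sensor closed form is easy to verify numerically. The sketch below, with arbitrary illustrative values for the entries of $\mathbf{B}$, checks that a phase pair with $\beta_1-\beta_2=\beta$ attains the maximum $a+c+2b$ and that no pair on a dense grid exceeds it.

```python
import numpy as np

# Illustrative 2x2 Hermitian B with a, c > 0 on the diagonal
a_, b_, c_, beta = 1.3, 0.7, 2.1, 0.4
B = np.array([[a_, b_ * np.exp(1j * beta)],
              [b_ * np.exp(-1j * beta), c_]])

def quad(b1, b2):
    # a^H B a for a = [e^{j b1}, e^{j b2}]^T
    av = np.array([np.exp(1j * b1), np.exp(1j * b2)])
    return np.real(av.conj() @ B @ av)

best = quad(beta, 0.0)   # beta_1 - beta_2 = beta
grid = max(quad(x, y) for x in np.linspace(0, 2 * np.pi, 60)
                      for y in np.linspace(0, 2 * np.pi, 60))
print(best, a_ + c_ + 2 * b_, grid)
```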
For the general situation where $N>2$, a solution to~(\ref{eq:quardra})
appears to be intractable. Instead, in the discussion that follows
we present two suboptimal approaches in order to obtain an approximate
solution. The first approach is based on an SDP problem obtained by
relaxing a rank constraint in a reformulated version of~(\ref{eq:quardra}),
similar to the approach proposed in \cite{Luo:2006,Banavar:2012}.
The second converts the problem to one that can be solved via the
ACMA of \cite{Alle:1996}.
It is worth emphasizing here that if the transmission gain of the
sensors was also adjustable, then the corresponding problem would be
\begin{eqnarray}
\max_{\mathbf{a}} && \mathbf{a}^{H}\mathbf{H}^{H}(\mathbf{H}\mathbf{DVD}^H\mathbf{H}^{H}
+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}\mathbf{a} \label{eq:amplitudeopt} \\
s. t. && \mathbf{a}^H\mathbf{a} \le N \; , \nonumber
\end{eqnarray}
which also has no closed-form solution due to the dependence on
$\mathbf{a}$ (through the matrix $\mathbf{D}$) inside the matrix
inverse. While in general both our SDP solution and~(\ref{eq:amplitudeopt})
require numerical
optimizations, we will see in Sections~IV-VI that the theoretical
analysis of performance and the solution to the sensor selection
problem is considerably simpler with the phase-only constraint.
The simulations of Section~VII will also demonstrate that there is
often little performance loss incurred by using phase-shift-only
transmissions.
\subsection{SDP Formulation}
To begin, we rewrite~(\ref{eq:quardra}) as follows:
\begin{eqnarray}\label{eq:quardra2}
\max_{\mathbf{a}} &&\mathrm{tr}\left(\mathbf{B}\mathbf{a}\mathbf{a}^{H}\right)\\
s. t. &&|a_i|=1,\; i=1, \dots, N\;.\nonumber
\end{eqnarray}
Making the association $\mathbf{A}=\mathbf{a}\mathbf{a}^H$,
problem~(\ref{eq:quardra2}) is equivalent to:
\begin{eqnarray}\label{eq:rankone}
\max_{\mathbf{A}} && \mathrm{tr}(\mathbf{B}\mathbf{A})\\
s. t. &&\mathbf{A}_{i,i}=1,\; i=1, \dots, N\nonumber\\
&& \mathrm{rank}(\mathbf{A})=1\nonumber\\
&&\mathbf{A}\succeq 0\nonumber\;,
\end{eqnarray}
where $\mathbf{A}_{i,i}$ denotes the $i$th diagonal element of
$\mathbf{A}$. Following the approach of \cite{Luo:2006,Banavar:2012},
we then relax the rank-one constraint, so that
the problem becomes a standard SDP:
\begin{eqnarray}\label{eq:sdp}
\max_{\mathbf{A}} && \mathrm{tr}(\mathbf{B}\mathbf{A})\\
s. t. &&\mathbf{A}_{i,i}=1,\; i=1, \dots, N\nonumber\\
&&\mathbf{A}\succeq 0\nonumber\;.
\end{eqnarray}
Defining $\mathbf{B}_r=\mbox{\rm real}\{\mathbf{B}\}$,
$\mathbf{B}_i=\mbox{\rm imag}\{\mathbf{B}\}$, and similarly for
$\mathbf{A}_r$ and $\mathbf{A}_i$, we can convert~(\ref{eq:sdp}) to
the equivalent real form
\begin{eqnarray}\label{eq:realsdp}
\max_{\{\mathbf{A}_r,\mathbf{A}_i\}} && \mathrm{tr}(\mathbf{B}_r\mathbf{A}_r-\mathbf{B}_i\mathbf{A}_i)\\
s. t. &&\mathbf{A}_{r~i,i}=1,\; i=1, \dots, N\nonumber\\
&&\left[\begin{array}{cc} \mathbf{A}_r&-\mathbf{A}_i\\
\mathbf{A}_i&\mathbf{A}_r\end{array}\right]\succeq 0\nonumber\;.
\end{eqnarray}
Problem (\ref{eq:realsdp}) can be efficiently solved by a standard
interior-point method \cite{Boyd:2004}.
In general, the solution to~(\ref{eq:realsdp}) will not be rank one,
so an additional step is necessary to estimate $\mathbf{a}$.
Let $\mathbf{A}_r^*$, $\mathbf{A}_i^*$ denote the solution to problem (\ref{eq:realsdp}), then the solution to problem (\ref{eq:sdp}) is given by $\mathbf{A}^{*}=\mathbf{A}_r^*+j\mathbf{A}_i^*$. If $\mathrm{rank}(\mathbf{A}^{*})>1$, we can use a
method similar to Algorithm 2 in \cite{Zhang:2011} to extract a
rank-one solution, as follows:
\begin{enumerate}
\item Decompose\footnote{Since $\mathbf{A}^{*}$ is the solution to
problem (\ref{eq:sdp}), $\mathbf{A}^{*}$ is positive semidefinite.}
$\mathbf{A}^{*}=\mathbf{C}^{H}\mathbf{C}$, define
$\tilde{\mathbf{B}}=\mathbf{C}\mathbf{B}\mathbf{C}^{H}$, and find a
unitary matrix $\mathbf{U}$ that can diagonalize $\tilde{\mathbf{B}}$.
\item Let $\mathbf{r}\in\mathbb{C}^{N\times 1}$ be a random vector
whose $i$th element is set to $e^{j\omega_i}$, where
$\omega_i$ is uniformly distributed over $[0, 2\pi)$.
\item Set $\tilde{\mathbf{a}}=\mathbf{C}^{H}\mathbf{U}\mathbf{r}$, and
the solution is given by $\mathbf{a}^*=[a_1^* \; \cdots \; a_N^*]^T$,
where $a_i^* = e^{j\angle{\tilde{a}_i}}$ and $\angle{z}$ represents
the phase of a complex number $z$.
\end{enumerate}
A detailed discussion of the reasoning behind the above rank-one modification
can be found in \cite{Zhang:2011}.
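The rank-one extraction above can be sketched directly in numpy. Solving the SDP itself requires a convex solver, so the snippet below fabricates a positive semidefinite matrix with unit diagonal as a stand-in for the SDP solution $\mathbf{A}^*$; the three extraction steps are then exactly as listed. All matrix sizes and noise powers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 6, 2

# B = H^H (H V H^H + sigma_n^2 I)^{-1} H from the estimation model
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
V = np.diag(rng.uniform(0.1, 0.4, N))
B = H.conj().T @ np.linalg.inv(H @ V @ H.conj().T + 0.5 * np.eye(M)) @ H

# Stand-in for the SDP output: a PSD matrix normalized to unit diagonal
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A_star = G @ G.conj().T
dsc = np.sqrt(np.real(np.diag(A_star)))
A_star = A_star / np.outer(dsc, dsc)

# Step 1: factor A* = C^H C and find a unitary U diagonalizing C B C^H
lam, U0 = np.linalg.eigh(A_star)
Cfac = np.diag(np.sqrt(np.clip(lam, 0, None))) @ U0.conj().T
_, U = np.linalg.eigh(Cfac @ B @ Cfac.conj().T)

# Step 2: random vector with unit-modulus entries
r = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

# Step 3: map back and keep only the phases
a_tilde = Cfac.conj().T @ U @ r
a_sol = np.exp(1j * np.angle(a_tilde))

obj = np.real(a_sol.conj() @ B @ a_sol)
print(obj)   # feasible value of a^H B a
```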
\subsection{ACMA Formulation}
For this discussion, we will assume that $N > M$, which represents the
most common scenario. Thus, the $N\times N$ matrix $\mathbf{B}$ in
the quadratic form $\mathbf{a}^H\mathbf{Ba}$ that we are trying to
maximize is low rank; in particular, $\mbox{\rm rank}(\mathbf{B}) \le
M < N$. Clearly, any component of $\mathbf{a}$ orthogonal to the
columns or rows of $\mathbf{B}$ will not contribute to our goal of
minimizing the estimate variance. In particular, if we define the
singular value decomposition (SVD)
$\mathbf{B}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{U}^H$, we ideally
seek a vector $\mathbf{a}$ such that
\begin{eqnarray}
\label{eq:acma1}
\mathbf{a}&=&\sum_{k=1}^m w_k \mathbf{u}_k = \mathbf{U}_m \mathbf{w}\\
|a_i|&=&1 \;,\nonumber
\end{eqnarray}
where $\mathbf{U}_m=[\mathbf{u}_1 \; \cdots \; \mathbf{u}_m]$ contains
the first $m \le \mbox{\rm rank}(\mathbf{B}) \le M$ singular vectors
of $\mathbf{B}$ and $\mathbf{w}=[w_1 \; \cdots \; w_m]^T$. The
problem of finding the coefficient vector $\mathbf{w}$ of a linear
combination of the columns of a given matrix $\mathbf{U}_m$ that
yields a vector with unit modulus elements is precisely the problem
solved by the ACMA \cite{Alle:1996}.
Our problem is slightly different from the one considered in
\cite{Alle:1996}, since there will in general be no solution
to~(\ref{eq:acma1}) even in the absence of noise. However, in our
simulation results we will see that the ACMA solution provides
performance close to that obtained by the SDP formulation above. Note
also that there is a trade-off in the choice of $m$, the number of
vectors in $\mbox{\rm span}(\mathbf{B})$ to include in the linear
combination of~(\ref{eq:acma1}). A small value of $m$ allows us to focus on forming
$\mathbf{a}$ from vectors that will tend to increase the value of
$\mathbf{a}^H\mathbf{Ba}$, while a larger value for $m$ provides more
degrees of freedom in finding a vector whose elements satisfy
$|a_i|=1$. Another drawback to choosing a larger value for $m$ is
that the ACMA solution can only be found if $N > m^2$. As long as $M$
is not too large, one could in principle try all values of
$m=1,\cdots,M$ that satisfy $N > m^2$ and choose the one that yields
the smallest estimate variance. We will see later in the simulations
that a small value for $m$ already provides good performance, so the
choice of $m$ is not a significant issue.
The general ACMA approach can be formulated to find multiple solutions
to~(\ref{eq:acma1}), but in our case we only need a single solution, and
thus a simplified version of ACMA can be used, as outlined here for a given $m$.
The ACMA solution is obtained by defining the rows of $\mathbf{U}_m$ as
$\mathbf{U}_m^H=[\tilde{\mathbf{u}}_1 \; \cdots \;\tilde{\mathbf{u}}_N]$,
and then rewriting the constraint $|a_i|=|\tilde{\mathbf{u}}_i^H \mathbf{w}| = 1$ as
\[ \left(\bar{\tilde{\mathbf{u}}}_i \otimes \tilde{\mathbf{u}}_i\right)^H
\left(\bar{\mathbf{w}} \otimes \mathbf{w}\right) = 1 \; ,
\]
where $\bar{(\cdot)}$ denotes the complex conjugate and $\otimes$ the
Kronecker product. Stacking all
$N$ such constraints into a single equation results in
\begin{equation}\label{eq:Pw}
\mathbf{P} \left[ \begin{array}{c} \bar{\mathbf{w}} \otimes \mathbf{w} \\ 1 \end{array} \right] = \mathbf{0} \; ,
\end{equation}
where
\begin{equation}
\mathbf{P} = \left[ \begin{array}{cc} \left(\bar{\tilde{\mathbf{u}}}_1
\otimes \tilde{\mathbf{u}}_1\right)^H & -1 \\ \vdots & \vdots \\
\left(\bar{\tilde{\mathbf{u}}}_N
\otimes \tilde{\mathbf{u}}_N\right)^H & -1 \end{array} \right] \; .
\end{equation}
If an exact solution to~(\ref{eq:Pw}) existed, then a vector in the
null space of $\mathbf{P}$ would have the form $\left[
\left(\bar{\mathbf{w}} \otimes \mathbf{w}\right)^T \; \; 1 \right]^T$, and
$\mathbf{w}$ could be found by stripping away the $1$ and then
unstacking the resulting vector into a rank-one matrix (see \cite{Alle:1996}
for more details). In our problem, an exact solution to~(\ref{eq:Pw})
does not exist, so we use the following approach to obtain an approximation:
\begin{enumerate}
\item Let $\mathbf{q}$ represent the right singular vector of $\mathbf{P}$
associated with the smallest singular value, and define the vector
$\tilde{\mathbf{q}}$ to contain the first $m^2$ elements of $\mathbf{q}$.
\item Set $\mathbf{w}$ equal to the singular vector of $\tilde{\mathbf{Q}}
+ \tilde{\mathbf{Q}}^H$ with largest singular value, where the $m \times m$
matrix
\begin{equation}
\tilde{\mathbf{Q}} = \mbox{\rm vec}^{-1} (\tilde{\mathbf{q}})
\end{equation}
is formed by dividing $\tilde{\mathbf{q}}$ into sub-vectors of length
$m$ and stacking them together in a matrix.
\item Set $\hat{\mathbf{a}} = \mathbf{U}_m \mathbf{w}$. The
vector $\mathbf{a}$ is then found by setting the magnitude of all
the elements of $\hat{\mathbf{a}}$ equal to unity. In particular,
the $i$-th element of $\mathbf{a}$ is given by
\begin{equation*}
a^*_i = e^{j\angle{\hat{a}_i}} \; .
\end{equation*}
\end{enumerate}
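The simplified ACMA steps can likewise be sketched in numpy. The snippet builds $\mathbf{U}_m$ from the principal eigenvectors of the Hermitian matrix $\mathbf{B}$, forms $\mathbf{P}$ from the Kronecker-product constraints, and applies the three steps above; normalizing the singular vector by its last entry (so the trailing $1$ is stripped cleanly) is an implementation detail assumed here. Sizes are illustrative and satisfy $N>m^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, m = 10, 3, 2          # must satisfy N > m^2

H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
B = H.conj().T @ np.linalg.inv(H @ H.conj().T + 0.5 * np.eye(M)) @ H

# U_m: m principal eigenvectors of the Hermitian matrix B
evals, evecs = np.linalg.eigh(B)
Um = evecs[:, np.argsort(evals)[::-1][:m]]            # N x m

# One Kronecker-product constraint row per sensor, plus the trailing -1
P = np.array([np.concatenate([np.kron(Um[i, :].conj(), Um[i, :]), [-1.0]])
              for i in range(N)])                      # N x (m^2 + 1)

# Step 1: right singular vector for the smallest singular value
_, _, Vh = np.linalg.svd(P)
q = Vh[-1].conj()
q = q / q[-1]               # strip the trailing 1 (assumed nonzero)
q_tilde = q[:m * m]

# Step 2: unstack into an m x m matrix and take the top singular vector
Q = q_tilde.reshape(m, m, order='F')                   # inverse of vec()
Uq, _, _ = np.linalg.svd(Q + Q.conj().T)
w = Uq[:, 0]

# Step 3: form a and project every element onto the unit circle
a_hat = Um @ w
a_acma = np.exp(1j * np.angle(a_hat))
print(np.real(a_acma.conj() @ B @ a_acma))
```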
\subsection{Comparison of Computational Complexity}
As discussed in \cite{Luo:2006}, the computational load of the SDP problem
in~(\ref{eq:sdp}) is of the order $O(N^{3.5})$. The additional steps
required to take the SDP result and find a rank-one solution require an
$O(N^3)$ eigenvalue decomposition, so the overall complexity is dominated
by the SDP. For ACMA, the dominant computational step occurs in finding the
$m$ principal eigenvectors of the Hermitian matrix $\mathbf{B}$, which
requires only an order $O(mN^2)$ computation \cite{Golub:1989}. Finding the
least dominant singular vector of $\mathbf{P}$ is an $O(N^2)+O(m^4)$ operation,
and the remaining steps have relatively trivial complexity. Since $m \ll N$
in typical scenarios, we see that ACMA enjoys a significantly lower computational
load compared to the SDP approach. Despite this, we will see that ACMA has
performance that is only slightly inferior to using the SDP solution.
\section{Asymptotic Performance Analysis}\label{sec:four}
In this section, we analyze the asymptotic performance achievable
using only phase-shifts for the sensor transmissions. We will
separately study cases where the number of sensors is large ($N
\rightarrow\infty$) or the number of FC antennas is large
($M\rightarrow\infty$). Our analysis will be based on a non-fading
channel model that takes path loss into account, similar to models
used in \cite{Gerhard:2003, Jafar:2011}. In particular, for the
channel between the FC and sensor $i$, we assume
\begin{equation}
\mathbf{h}_{i}=\frac{1}{d_i^\alpha}\tilde{\mathbf{h}}_{i}\;,\nonumber
\end{equation}
where $d_i$ denotes the distance between the $i$th sensor and the FC,
$\alpha$ is the path loss exponent and $\tilde{\mathbf{h}}_i$ is
given by
\begin{equation}
\tilde{\mathbf{h}}_{i}=[e^{j\gamma_{i,1}} \; \; e^{j\gamma_{i,2}} \; \cdots \; e^{j\gamma_{i,M}}]^{T}\nonumber\;,
\end{equation}
where $\gamma_{i,j}$ is uniformly distributed over $\left[0,2\pi\right)$.
\subsection{Estimation Performance for Large $N$}
From (\ref{eq:lb}) we know that the lower bound on
$\mathrm{Var}(\hat{\theta}_{ML})$ depends on the largest eigenvalue of
$\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}$.
We begin by deriving a lower bound for this eigenvalue.
The $(m,n)$th element of $\mathbf{H}\mathbf{V}\mathbf{H}^H$ can be expressed as
\begin{eqnarray}
\left(\mathbf{H}\mathbf{V}\mathbf{H}^{H}\right)_{m,n}=\sum_{i=1}^{N}
\frac{e^{j(\gamma_{i,m}-\gamma_{i,n})}\sigma_{v,i}^2}{d_i^{2\alpha}}\;.\nonumber
\end{eqnarray}
According to the strong law of large numbers, as $N\to \infty$ we have
\begin{eqnarray}\label{eq:mean}
\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\frac{e^{j(\gamma_{i,m}\!-\gamma_{i,n})}\sigma_{v,i}^2}{d_i^{2\alpha}}&\overset{(a)}{=}&\mathbb{E}\left\{\frac{\sigma_{v,i}^2}{d_i^{2\alpha}}\right\}\mathbb{E}\left\{e^{j(\gamma_{i,m}-\gamma_{i,n})}\right\}\nonumber
\\ &\overset{(b)}{=}&\left\{\begin{array}{lr}
\mathbb{E}\left\{\frac{\sigma_{v,i}^2}{d_i^{2\alpha}}\right\} & m=n\\
0 & m\ne n \; ,
\end{array}\right.
\end{eqnarray}
where ($a$) follows from the assumption that $\gamma_{i,m}$, $d_i$ and
$\sigma_{v,i}^2$ are independent and ($b$) is due to
the fact that $\gamma_{i,m}$ and $\gamma_{i,n}$ are independent and
uniformly distributed over $[0,2\pi)$. Thus, for sufficiently
large $N$ we have
\begin{equation}\label{eq:approx}
\mathbf{H}\mathbf{V}\mathbf{H}^{H}\approx N\mathbb{E}\left\{\frac{\sigma_{v,i}^2}{d_i^{2\alpha}}\right\}\mathbf{I}_{M} \; .
\end{equation}
Based on (\ref{eq:approx}), we have
\begin{eqnarray}\label{eq:lambdamax}
\lambda_{\max}\left(\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}\right)&
\approx & \frac{\lambda_{\max}(\mathbf{H}^{H}\mathbf{H})}{N\mathbb{E}\left\{\frac{\sigma_{v,i}^2}{d_i^{2\alpha}}\right\}+\sigma_n^2}\nonumber\\
&\overset{(c)}{\approx}&\frac{N\mathbb{E}\left\{\frac{1}{d_i^{2\alpha}}\right\}}{N\mathbb{E}
\left\{\frac{\sigma_{v,i}^2}{d_i^{2\alpha}}\right\}+\sigma_n^2} \; ,
\end{eqnarray}
where ($c$) follows from the fact that
$\lambda_{\max}(\mathbf{H}^{H}\mathbf{H})=\lambda_{\max}(\mathbf{H}\mathbf{H}^{H})$,
together with the law-of-large-numbers argument of (\ref{eq:mean}) applied
to $\mathbf{H}\mathbf{H}^{H}$. Substituting
(\ref{eq:lambdamax}) into (\ref{eq:lb}), we have the following asymptotic
lower bound on the estimate variance:
\begin{equation}\label{eq:lb2}
\mathrm{Var}(\hat{\theta}_{ML}) \ge
\frac{N\mathbb{E}\left\{\frac{\sigma_{v,i}^2}{d_i^{2\alpha}}\right\}+\sigma_n^2}{N^2\mathbb{E}\left\{\frac{1}{d_i^{2\alpha}}\right\}}
\; .
\end{equation}
For large enough $N$, the lower bound can be approximated using sample
averages:
\begin{equation}\label{eq:lb2app}
\mathrm{Var}(\hat{\theta}_{ML})\ge\frac{\sum_{i=1}^{N}
\frac{\sigma_{v,i}^2}{d_i^{2\alpha}}+\sigma_{n}^2}{N\sum_{i=1}^{N}\frac{1}{d_i^{2\alpha}}} \; .
\end{equation}
Next, we derive an upper bound on the estimate variance and compare it
with the lower bound obtained above. The upper bound is obtained by
calculating the variance obtained when only a single antenna is present
at the FC. For the given channel model, the optimal choice for the
vector of sensor phases is just the conjugate of the channel phases:
$\mathbf{a}=[e^{-j\gamma_{1,1}} \; \cdots \; e^{-j\gamma_{N,1}}]^T$,
which when applied to~(\ref{eq:ml}) leads to
\begin{equation}\label{eq:allone}
\mathrm{Var}(\hat{\theta}_{ML})\le\frac{\sum_{i=1}^{N}
\frac{\sigma_{v,i}^2}{d_i^{2\alpha}}+\sigma_{n}^2}{\left(\sum_{i=1}^{N}\frac{1}{d_i^\alpha}\right)^2} \; .
\end{equation}
When $N\to\infty$, both the upper and lower bounds converge to $0$,
but the ratio of the lower bound
in~(\ref{eq:lb2app}) to the upper bound in~(\ref{eq:allone})
converges to
\begin{eqnarray}\label{eq:ratio}
\lim_{N\to\infty}\frac{\left(\sum_{i=1}^{N}\frac{1}{d_i^\alpha}\right)^2}{N\sum_{i=1}^{N}\frac{1}{d_i^{2\alpha}}}=\frac{\left(\mathbb{E}\left\{\frac{1}{d_{i}^\alpha}\right\}\right)^2}{\mathbb{E}\left\{\frac{1}{d_{i}^{2\alpha}}\right\}}=1-\frac{\mathrm{Var}\left\{\frac{1}{d_{i}^\alpha}\right\}}{\mathbb{E}\left\{\frac{1}{d_{i}^{2\alpha}}\right\}}\;.
\end{eqnarray}
Interestingly, we see that if
$\mathrm{Var}\left\{\frac{1}{d_{i}^\alpha}\right\}\ll
\mathbb{E}\left\{\frac{1}{d_{i}^{2\alpha}}\right\}$, the gap between
the upper and lower bound is very small, and the availability of
multiple antennas at the FC does not provide much benefit compared
with the single antenna system when $N \rightarrow\infty$. On the
other hand, if $\mathrm{Var}\left\{\frac{1}{d_{i}^\alpha}\right\}\rightarrow
\mathbb{E}\left\{\frac{1}{d_{i}^{2\alpha}}\right\}$, the potential
exists for multiple antennas to significantly lower the estimate
variance.
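The limiting ratio in~(\ref{eq:ratio}) can be checked with a quick numpy experiment. Here the distances are drawn uniformly on $[1,3]$ with $\alpha=1$ (illustrative choices), for which the moments $\mathbb{E}\{1/d\}=\ln(3)/2$ and $\mathbb{E}\{1/d^2\}=1/3$ are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(3)
N, alpha = 20000, 1.0
sigma_n2 = 1.0
d = rng.uniform(1.0, 3.0, N)            # sensor-to-FC distances
sv2 = rng.uniform(0.1, 0.5, N)          # measurement noise powers

g = d ** (-alpha)                        # gains 1/d^alpha
num = np.sum(sv2 * g ** 2) + sigma_n2    # common numerator of both bounds

lower = num / (N * np.sum(g ** 2))       # asymptotic lower bound
upper = num / np.sum(g) ** 2             # single-antenna (M = 1) variance
ratio = lower / upper

# Analytic limit (E{1/d})^2 / E{1/d^2} for d ~ U[1, 3], alpha = 1
predicted = (np.log(3.0) / 2) ** 2 / (1.0 / 3.0)
print(ratio, predicted)
```

For this distance distribution the ratio is close to one, so the gap between a multi-antenna FC and a single-antenna FC is small, in line with the discussion above.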
\subsection{Estimation Performance for Large $M$}
Using the matrix inversion lemma, we have
\begin{eqnarray}\label{eq:expansion}
\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}&
= &
\mathbf{H}^{H}\left(\!\frac{1}{\sigma_n^2}\mathbf{I}_{M}\!-\!\frac{1}{\sigma_n^4}\mathbf{H}\left(\mathbf{V}^{-1}+\frac{1}{\sigma_n^2}\mathbf{H}^{H}\mathbf{H}\right)^{-1}\mathbf{H}^{H}\right)\mathbf{H}\nonumber\\
&=&\frac{1}{\sigma_n^2}\mathbf{H}^{H}\mathbf{H}-\frac{1}{\sigma_n^4}\mathbf{H}^{H}\mathbf{H}\left(\mathbf{V}^{-1}+\frac{1}{\sigma_n^2}\mathbf{H}^{H}\mathbf{H}\right)^{-1}\mathbf{H}^{H}\mathbf{H} \; .
\end{eqnarray}
Furthermore, the $(m,n)$th element of $\mathbf{H}^{H}\mathbf{H}$ is given by
\begin{equation}\label{eq:approxh}
\left(\mathbf{H}^{H}\mathbf{H}\right)_{m,n}=\frac{1}{d_m^\alpha d_n^{\alpha}}\sum_{i=1}^{M}e^{j\left(\gamma_{n,i}-\gamma_{m,i}\right)}\;.
\end{equation}
Similar to (\ref{eq:mean}), as $M\to\infty$ we have
\begin{equation}\label{eq:mean2}
\lim_{M\to\infty}\frac{1}{M}\sum_{i=1}^{M}e^{j\left(\gamma_{n,i}-\gamma_{m,i}\right)}
=\left\{\begin{array}{lr}
1 & m=n\\
0 & m\ne n \; ,
\end{array}\right.
\end{equation}
and thus, for large $M$,
\begin{eqnarray}\label{eq:approx2}
\mathbf{H}^{H}\mathbf{H} \approx
M\mathrm{diag}\left\{\frac{1}{d_1^{2\alpha}} \; \cdots \; \frac{1}{d_N^{2\alpha}}\right\}\;.
\end{eqnarray}
Substituting (\ref{eq:approx2}) into (\ref{eq:expansion}), we have
\begin{eqnarray}\label{eq:eigen2}
\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}
\mathbf{H} \approx \mathrm{diag}\left\{\frac{M}{d_1^{2\alpha}\sigma_n^2+M\sigma_{v,1}^2} \; , \; \cdots \; , \; \frac{M}{d_N^{2\alpha}\sigma_n^2+M\sigma_{v,N}^2}\right\}\; , \nonumber
\end{eqnarray}
and thus
\begin{equation}\label{eq:varapprox}
\mathrm{Var}(\hat{\theta}_{ML}) \approx
\frac{1}{M\sum_{i=1}^{N}\frac{1}{d_i^{2\alpha}\sigma_n^2+M\sigma_{v,i}^2}} \; .
\end{equation}
Note that this asymptotic expression is independent of the choice of
$\mathbf{a}$. Here, for large $M$, the benefit of having multiple
antennas at the FC hinges on the relative magnitude of
$M\sigma_{v,i}^2$ versus $d_i^{2\alpha}\sigma_n^2 $. If
$M\sigma_{v,i}^2\ll d_i^{2\alpha}\sigma_n^2$, a reduction in variance
by a factor of $M$ is possible. In this case, where the SNR at the FC
is low but the signals sent from the sensors are high quality, the
coherent gain from the combination of the relatively noise-free sensor
signals helps increase the SNR at the FC. On the other hand, when
$M\sigma_{v,i}^2\gg d_i^{2\alpha}\sigma_n^2$, performance is
asymptotically independent of $M$. Here, the coherent gain not only
applies to $\theta$ but also to the sensor noise, which is stronger in
this case.
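A short numpy check of~(\ref{eq:varapprox}): for the phase-only channel model with a moderately large $M$, the exact variance from~(\ref{eq:ml}) approaches the asymptotic expression for any fixed unit-modulus phase vector $\mathbf{a}$. The sizes and noise powers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, alpha = 5, 500, 1.0
sigma_n2 = 1.0
d = rng.uniform(1.0, 2.0, N)
sv2 = rng.uniform(0.2, 0.5, N)

# h_i has entries e^{j gamma} / d_i^alpha for each FC antenna
H = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, N))) / d ** alpha
V = np.diag(sv2)
a = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # any unit-modulus phases

Ci = np.linalg.inv(H @ V @ H.conj().T + sigma_n2 * np.eye(M))
var_exact = 1.0 / np.real(a.conj() @ H.conj().T @ Ci @ H @ a)

var_asym = 1.0 / (M * np.sum(1.0 / (d ** (2 * alpha) * sigma_n2 + M * sv2)))
print(var_exact, var_asym)   # close for large M
```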
\section{Impact of Imperfect Phase}\label{sec:five}
The previous sections have assumed that the FC can calculate the
vector $\mathbf{a}$ and feed the phase information back to the sensors
error free. Whether the feedback channel is digital or analog, there
are bound to be errors either in the received feedback at the sensors
or in how the phase shift is actually implemented. Furthermore, the
wireless channel may change during the time required for calculation
and feedback of $\mathbf{a}$, so even if the phase shifts are
implemented perfectly at the sensors, they may no longer be valid for
the current channel. In this section, we evaluate the impact of
errors in the sensor phase shifts on the estimation accuracy.
Define the phase shift for the $i$th sensor as $a_i=e^{j\alpha_i}$,
and assume that
\begin{equation}
\alpha_i=\alpha_i^{*}+\Delta_i \; , \nonumber
\end{equation}
where $\alpha_{i}^{*}$ is the optimal phase and $\Delta_i$ is a
Gaussian perturbation (in radians) with zero mean and variance $\sigma_p^2$. Define
$\mathbf{E}=\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I})^{-\frac{1}{2}}$,
so that $Var(\hat{\theta}_{ML})$ can be expressed as
\begin{equation}\label{eq:var}
Var(\hat{\theta}_{ML})=\frac{1}{\|\mathbf{a}^{H}\mathbf{E}\|^2}=\frac{1}{\sum_{i=1}^M|\mathbf{a}^{H}\mathbf{e}_i|^2}\;,
\end{equation}
where $\mathbf{e}_i$ is the $i$th column of $\mathbf{E}$. Let
$e_{i,j}e^{j\beta_j}$ be a polar coordinate representation of the
$j$th element of $\mathbf{e}_{i}$, so that
\begin{eqnarray}
|\mathbf{a}^{H}\mathbf{e}_i|^2&=&\left|\sum_{j=1}^{N}e_{i,j}e^{j(\alpha_j^{*}+\Delta_j+\beta_j)}\right|^2\nonumber\\
&=&\sum_{j=1}^{N}e_{i,j}^2+\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\cos(\alpha_l^{*}+\Delta_l+\beta_l-\alpha_m^{*}-\Delta_m-\beta_m)\; . \label{eq:p1}
\end{eqnarray}
Define $\delta_{l,m}^{i}=\Delta_l-\Delta_m$ and
$\tau_{l,m}^{i}=\alpha_l^{*}+\beta_l-\alpha_m^{*}-\beta_m$.
If we assume $\sigma_p^2\ll 1$, (\ref{eq:p1}) may be approximated via
a 2nd order Taylor series as follows:
{\small\begin{eqnarray}\label{eq:err} |\mathbf{a}^{H}\mathbf{e}_i|^2\!\!&
\approx
&\!\!\sum_{j=1}^{N}e_{i,j}^2+\sum_{l=1}^{N}\sum_{\substack{m=1,\\m\neq
l}}^{N}e_{i,l}e_{i,m}\left(\cos(\tau_{l,m}^{i})-\sin(\tau_{l,m}^{i})\delta_{l,m}^{i}-\frac{\cos(\tau_{l,m}^{i})}{2}\left(\delta_{l,m}^i\right)^2\right)\nonumber\\
\!\!&=&\!\!\sum_{j=1}^{N}e_{i,j}^2\!+\!\sum_{l=1}^{N}\sum_{\substack{m=1,\\m\neq
l}}^{N}e_{i,l}e_{i,m}\cos(\tau_{l,m}^{i})\!-\!\sum_{l=1}^{N}\sum_{\substack{m=1,\\m\neq
l}}^{N}e_{i,l}e_{i,m}\left(\sin(\tau_{l,m}^{i})\delta_{l,m}^i\!+\!\frac{\cos(\tau_{l,m}^i)}{2}\left(\delta_{l,m}^{i}\right)^2\right).
\end{eqnarray}}
Substituting~(\ref{eq:err}) into~(\ref{eq:var}), we have
{\small\begin{eqnarray}
Var(\hat{\theta}_{ML})\!\approx\!\frac{1}{\sum_{i=1}^{M}\!\!\left(\!\sum_{j=1}^{N}e_{i,j}^2\!+\!\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq
l}}^{N}e_{i,l}e_{i,m}\cos(\tau_{l,m}^{i})\!-\!\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\left(\sin(\tau_{l,m}^i)\delta_{l,m}^i\!+\!\frac{\cos(\tau_{l,m}^i)}{2}\left(\delta_{l,m}^i\right)^2\right)\right)}.\nonumber
\end{eqnarray}}
In the previous equation, the effect of the phase error is
confined to the second double sum inside the outermost parentheses.
If we define $\hat{\theta}_{ML}^P$ to be the estimate obtained with
no phase errors, then
\begin{equation}
Var(\hat{\theta}_{ML}^P)=\frac{1}{\sum_{i=1}^{M}\left(\sum_{j=1}^{N}e_{i,j}^2
\!+\!\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\cos(\tau_{l,m}^i)\right)} \; ,
\end{equation}
which is deterministic and does not depend on the random phase
errors. We can then obtain the following approximation
\begin{eqnarray}
Var(\hat{\theta}_{ML})\overset{(f)}{\approx}Var(\hat{\theta}_{ML}^P)\left(1+\frac{\sum_{i=1}^{M}\left(\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\left(\sin(\tau_{l,m}^{i})\delta_{l,m}^i+\frac{\cos(\tau_{l,m}^i)}{2}\left(\delta_{l,m}^i\right)^2\right)\right)}{\sum_{i=1}^{M}\left(\sum_{j=1}^{N}e_{i,j}^2\!+\!\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\cos(\tau_{l,m}^i)\right)}\right)\;,
\nonumber
\end{eqnarray}
where ($f$) is due to the first order Taylor approximation $(1-\frac{x}{y})^{-1}\approx 1+\frac{x}{y}$
for $x \ll y$. We use the ratio of $Var(\hat{\theta}_{ML})$ to
$Var(\hat{\theta}_{ML}^P)$ to measure the effect of the phase error,
which yields
\begin{equation}
\frac{Var(\hat{\theta}_{ML})}{Var(\hat{\theta}_{ML}^P)}\approx \left(1+\frac{\sum_{i=1}^{M}\left(\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\left(\sin(\tau_{l,m}^{i})\delta_{l,m}^i+\frac{\cos(\tau_{l,m}^i)}{2}\left(\delta_{l,m}^i\right)^2\right)\right)}{\sum_{i=1}^{M}\left(\sum_{j=1}^{N}e_{i,j}^2\!+\!\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\cos(\tau_{l,m}^i)\right)}\right) \; . \nonumber
\end{equation}
Note that the only term in the above expression that is random
is the numerator on the right-hand side.
Taking the expectation of the ratio with respect to the phase perturbations $\Delta_i$,
we have
{\small\begin{eqnarray}
\mathbb{E}\left\{\frac{Var(\hat{\theta}_{ML})}{Var(\hat{\theta}_{ML}^P)}\right\}&=&\!\left(1+\frac{\sum_{i=1}^{M}\left(\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\left(\sin(\tau_{l,m}^{i})\mathbb{E}\left\{\delta_{l,m}^i\right\}+\frac{\cos(\tau_{l,m}^i)}{2}\mathbb{E}\left\{\left(\delta_{l,m}^i\right)^2\right\}\right)\right)}{\sum_{i=1}^{M}\left(\sum_{j=1}^{N}e_{i,j}^2\!+\!\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\cos(\tau_{l,m}^i)\right)}\right)\nonumber\\
&\overset{(h)}{=}&\left(1+\frac{\sum_{i=1}^{M}\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\cos(\tau_{l,m}^i)\sigma^2_p}{\sum_{i=1}^{M}\left(\sum_{j=1}^{N}e_{i,j}^2\!+\!\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}\cos(\tau_{l,m}^i)\right)}\right)\; , \label{eq:p2}
\end{eqnarray}}where in ($h$) we exploit the fact that
$\mathbb{E}\left\{\delta_{l,m}^i\right\}=0$ and
$\mathbb{E}\left\{\left(\delta_{l,m}^i\right)^2\right\}=2\sigma^2_p$.
Since
\begin{equation}
\sum_{l=1}^{N}\sum_{\substack{m=1\\m\neq l}}^{N}e_{i,l}e_{i,m}
\cos(\tau_{l,m}^i)\le \left(N-1\right)\sum_{l=1}^{N}e_{i,l}^2 \; , \nonumber
\end{equation}
the ratio in~(\ref{eq:p2}) is approximately upper bounded by
\begin{equation}\label{eq:errbound}
\mathbb{E}\left\{\frac{Var(\hat{\theta}_{ML})}{Var(\hat{\theta}_{ML}^P)}\right\} \le
1+\left(1-\frac{1}{N}\right)\sigma^2_p \; .
\end{equation}
We see from~(\ref{eq:errbound}) that the impact of the phase errors
increases with $N$, but in all cases the degradation in the estimate
variance is approximately bounded above by a factor of $1+\sigma_p^2$.
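The bound in~(\ref{eq:errbound}) can be checked by Monte Carlo. For a single FC antenna the optimal phases are known in closed form ($a_i=e^{-j\gamma_{i,1}}$), so the sketch below perturbs them with Gaussian errors of variance $\sigma_p^2$ and compares the average degradation with the factor $1+(1-1/N)\sigma_p^2$. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
N, alpha = 10, 1.0
sigma_n2, sigma_p2 = 1.0, 0.01           # sigma_p = 0.1 rad
d = rng.uniform(1.0, 2.0, N)
sv2 = rng.uniform(0.1, 0.3, N)
gamma = rng.uniform(0, 2 * np.pi, N)

h = np.exp(1j * gamma) / d ** alpha       # single-antenna channel (M = 1)
noise = np.sum(sv2 * np.abs(h) ** 2) + sigma_n2

def variance(a):
    # estimate variance for M = 1: (H V H^H + sigma_n^2) / |H a|^2
    return noise / np.abs(np.sum(h * a)) ** 2

a_opt = np.exp(-1j * gamma)               # exact optimum for M = 1
var0 = variance(a_opt)

ratios = []
for _ in range(3000):
    delta = np.sqrt(sigma_p2) * rng.standard_normal(N)
    ratios.append(variance(a_opt * np.exp(1j * delta)) / var0)
mean_ratio = np.mean(ratios)

bound = 1 + (1 - 1 / N) * sigma_p2
print(mean_ratio, bound)
```

Each perturbed trial can only increase the variance (the nominal phases are globally optimal for $M=1$), and the average degradation sits at or below the predicted bound.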
\section{Sensor Selection}\label{sec:six}
As mentioned earlier, in situations where it is desired to use only a
subset of the sensors to estimate the parameter ({\em e.g.,} in order
to conserve power at the sensors), the FC needs a method to perform
the sensor selection. Assuming only $K < N$ of the sensors are to be
selected for transmission to the FC, an optimal solution to the
problem would require solving the following maximization:
\begin{eqnarray}\label{eq:select}
\max_{\mathbf{a,x}} && \mathbf{x}^{T}\mathbf{D}^{H}\mathbf{H}^H\left(\mathbf{H}\mathbf{V}\mathbf{X}\mathbf{H}^H+\sigma_n^2\mathbf{I}_{M}\right)^{-1}\mathbf{H}\mathbf{D}\mathbf{x}\\
s. t. &&\sum_{i=1}^{N}x_{i}=K\nonumber\\
&&x_{i}\in\{0,1\}\nonumber\\
&&|a_{i}|=1\;,\nonumber
\end{eqnarray}
where $\mathbf{D}=\textrm{diag}\left\{a_{1},\cdots,a_N\right\}$,
$\mathbf{x}=[x_1,\cdots, x_N]^T$ is the selection vector and
$\mathbf{X}=\textrm{diag}\{x_1,\cdots,x_N\}$. Even if one chooses one
of the suboptimal approaches described in Section~III for estimating
$\mathbf{a}$, solving for $\mathbf{x}$ in~(\ref{eq:select}) requires
an exhaustive search over all possible $K$-sensor combinations and is
in general NP-hard. Instead, in this section we derive conditions
under which much simpler selection strategies can be applied. We
consider the following two cases: (1) low sensor noise relative to the
noise at the FC, $\sigma_{v,i}^2\ll\sigma_n^2$, and (2) relatively
high sensor noise $\sigma_{v,i}^2\gg\sigma_n^2$. For (1), we derive a
LP solution as well as a simpler greedy algorithm, and
for (2) we show that the problem reduces to choosing the sensors with
the lowest measurement noise.
\subsection{Algorithms for High FC Noise}
Let $\mathbf{a}$ be the phase vector obtained using one of the
algorithms in Section~III assuming all $N$ sensors are active. When
$\sigma_{v,i}^2\ll\sigma_{n}^2$, we ignore the term $\mathbf{HVXH}^H$
in~(\ref{eq:select}), and the problem simplifies to
\begin{eqnarray}
\max_{\mathbf{x}} && \mathbf{x}^{T}\mathbf{D}^{H}\mathbf{H}^H\mathbf{H}
\mathbf{D}\mathbf{x} \label{eq:select2} \\
s. t. &&\sum_{i=1}^{N}x_{i}=K\nonumber\\
&&x_{i}\in\{0,1\}\nonumber\;.
\end{eqnarray}
Define $\mathbf{F}=\mathbf{\mathbf{D}^{H}\mathbf{H}^H\mathbf{HD}}$
so that~(\ref{eq:select2}) can be rewritten as
\begin{eqnarray}
\max_{\mathbf{x}} && \mathbf{x}^{T}\textrm{Re}\{\mathbf{F}\}\mathbf{x} \label{eq:select3}\\
s. t. &&\sum_{i=1}^{N}x_{i}=K\nonumber\\
&&x_{i}\in\{0,1\}\nonumber\;.
\end{eqnarray}
Since $x_i^2=x_i$, (\ref{eq:select3}) is equivalent to
\begin{eqnarray}
\max_{x_i} && \sum_{i=1}^{N}\mathbf{F}_{i,i}x_{i}+2\sum_{i=1}^{N-1}
\sum_{j=i+1}^{N}\textrm{Re}\{\mathbf{F}_{i,j}\}x_{i}x_{j}\label{eq:select4}\\
s. t. &&\sum_{i=1}^{N}x_{i}=K\nonumber\\
&&x_{i}\in\{0,1\}\nonumber\;,
\end{eqnarray}
where $\mathbf{F}_{i,j}$ denotes the $(i,j)\textrm{th}$ element of
matrix $\mathbf{F}$. By linearizing the term $x_ix_j$
\cite{Billionnet:1997}, (\ref{eq:select4}) is equivalent to
\begin{subequations}\label{eq:intp}
\begin{align}
\max_{x_i,y_{ij}} & \sum_{i=1}^{N}\mathbf{F}_{i,i}x_{i}+2\sum_{i=1}^{N-1}
\sum_{j=i+1}^N\textrm{Re}\{\mathbf{F}_{i,j}\}y_{ij}\\
s. t. &\sum_{i=1}^{N}x_{i}=K\label{eq:linear0}\\
&1-x_i-x_j+y_{ij}\ge0\label{eq:linear1}\\
&x_{i}-y_{ij}\ge0\label{eq:linear2}\\
&x_{j}-y_{ij}\ge0\label{eq:linear3}\\
&y_{ij}\ge0\label{eq:linear4}\\
&x_{i}\in\{0,1\}\label{eq:linear5}\;,
\end{align}
\end{subequations}
where the constraints (\ref{eq:linear1})-(\ref{eq:linear5}) lead to
$y_{ij}=x_{i}x_{j}$.
Note that all of the constraints in~(\ref{eq:intp}) are linear,
except for~(\ref{eq:linear5}). If we relax the constraint
in~(\ref{eq:linear5}), the condition $0\le x_i\le 1$ is implicitly
included in~(\ref{eq:linear7})-(\ref{eq:linear10}), and we are
left with a LP problem in standard form
\cite{Billionnet:1997}:
\begin{subequations}\label{eq:lp}
\begin{align}
\max_{x_i, y_{ij}} & \sum_{i=1}^{N}\mathbf{F}_{i,i}x_{i}+2\sum_{i=1}^{N-1}\sum_{j=i+1}^N\textrm{Re}\{\mathbf{F}_{i,j}\}y_{ij}\\
s. t. &\sum_{i=1}^{N}x_{i}=K\label{eq:linear6}\\
&1-x_i-x_j+y_{ij}\ge0\label{eq:linear7}\\
&x_{i}-y_{ij}\ge0\label{eq:linear8}\\
&x_{j}-y_{ij}\ge0\label{eq:linear9}\\
&y_{ij}\ge0\label{eq:linear10}\;.
\end{align}
\end{subequations}
To find the binary $x_i\in\{0,1\}$ solution needed for sensor selection, one
can take the result of~(\ref{eq:lp}) and simply set the $K$ largest
elements to one and the rest to zero. If desired, once the $K$ sensors
have been selected, the phase vector $\mathbf{a}$ for these $K$ sensors
can be recomputed based on a reduced dimension version of the algorithms
in Section~III.
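The relaxed problem~(\ref{eq:lp}) and the rounding step can be prototyped with scipy.optimize.linprog. In the sketch below, the channel matrix and phase vector are random placeholders for the quantities computed in Section~III, and the variable ordering ($x$ first, then the $y_{ij}$ for $i<j$) is our own convention:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
M, N, K = 3, 6, 3
H = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
a = np.exp(1j * rng.uniform(0, 2 * np.pi, N))            # stand-in for Section III's phases
F = np.real(np.conj(a)[:, None] * (H.conj().T @ H) * a[None, :])  # Re{D^H H^H H D}

pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
P = len(pairs)
# Variables: x_1..x_N followed by y_ij (i < j); maximization -> minimize the negative.
c = -np.concatenate([np.diag(F), 2 * np.array([F[i, j] for i, j in pairs])])

A_ub, b_ub = [], []
for p, (i, j) in enumerate(pairs):
    r1 = np.zeros(N + P); r1[i], r1[j], r1[N + p] = 1, 1, -1   # x_i + x_j - y_ij <= 1
    r2 = np.zeros(N + P); r2[i], r2[N + p] = -1, 1             # y_ij <= x_i
    r3 = np.zeros(N + P); r3[j], r3[N + p] = -1, 1             # y_ij <= x_j
    A_ub += [r1, r2, r3]; b_ub += [1.0, 0.0, 0.0]
A_eq = np.zeros((1, N + P)); A_eq[0, :N] = 1                   # sum_i x_i = K

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[K],
              bounds=[(0, 1)] * (N + P), method="highs")
x_relaxed = res.x[:N]
x_binary = np.zeros(N)
x_binary[np.argsort(x_relaxed)[-K:]] = 1.0                     # keep the K largest entries
```

The rounding here simply keeps the $K$ largest relaxed entries, as described in the text.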
The above LP problem has $\frac{N(N-1)}{2}+N$ variables and
$2N(N-1)+1$ constraints, and thus will require on the order of
$\left(\frac{N(N-1)}{2}+N\right)^2\left(2N(N-1)+1\right)$ arithmetic
operations \cite{Boyd:2004}. A simpler greedy algorithm is presented
below that only requires $O(KN)$ operations, and that achieves
performance close to the LP approach. The greedy algorithm is based
on the following observation:
\begin{eqnarray*}
\mathbf{x}^T\mathbf{D}^H\mathbf{H}^H\mathbf{HDx} & = &
\sum_{i=1}^K \sum_{j=1}^K \bar{a}_i a_j \mathbf{h}_i^H \mathbf{h}_j \\
& = & \sum_{i=1}^{K-1} \sum_{j=1}^{K-1} \bar{a}_i a_j \mathbf{h}_i^H \mathbf{h}_j
+ \|\mathbf{h}_K\|^2 + 2\mbox{\rm Re} \left\{
\sum_{j=1}^{K-1} \bar{a}_K a_j \mathbf{h}_K^H \mathbf{h}_j \right\} \; .
\end{eqnarray*}
The idea behind the greedy algorithm is to add sensors one at a
time based on those for which the last two terms in the above
sum are the largest. The steps of the algorithm are detailed
below.
{\flushleft{\em Greedy Sensor Selection Algorithm}}
\begin{enumerate}
\item Select the first sensor as the one with the strongest channel:
$i = \arg \max_{k} \|\mathbf{h}_k\|^2$, and initialize the active
sensor set as $\mathcal{S}=\{i\}$\;.
\item While $|\mathcal{S}| < K$, perform the following:
\begin{enumerate}
\item Solve
\begin{equation*}
i = \arg\max_{k\notin\mathcal{S}} \; \|\mathbf{h}_k\|^2 + 2\mbox{\rm Re} \left\{
\sum_{j\in\mathcal{S}} \bar{a}_k a_j \mathbf{h}_k^H \mathbf{h}_j \right\} \; .
\end{equation*}
\item Update $\mathcal{S}=\mathcal{S}\cup \{i\}$\;.
\end{enumerate}
\end{enumerate}
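A runnable sketch of the greedy selection follows; the channel matrix $\mathbf{H}$ and phase vector $\mathbf{a}$ below are random placeholders for the quantities obtained in Section~III:

```python
import numpy as np

def greedy_select(H, a, K):
    """Greedy sensor selection maximizing x^T Re{D^H H^H H D} x (Section VI-A)."""
    N = H.shape[1]
    S = [int(np.argmax(np.linalg.norm(H, axis=0) ** 2))]   # step 1: strongest channel
    while len(S) < K:
        best_i, best_gain = None, -np.inf
        for kk in range(N):
            if kk in S:
                continue
            # incremental objective ||h_k||^2 + 2 Re{sum_{j in S} conj(a_k) a_j h_k^H h_j}
            cross = sum(np.conj(a[kk]) * a[j] * (H[:, kk].conj() @ H[:, j]) for j in S)
            gain = np.linalg.norm(H[:, kk]) ** 2 + 2 * np.real(cross)
            if gain > best_gain:
                best_gain, best_i = gain, kk
        S.append(best_i)
    return S

rng = np.random.default_rng(2)
M, N, K = 4, 12, 5
H = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
a = np.exp(1j * rng.uniform(0, 2 * np.pi, N))              # stand-in phase vector
selected = greedy_select(H, a, K)
```

Each outer iteration adds the sensor with the largest incremental contribution, mirroring the decomposition of the quadratic form given above.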
As with the LP algorithm, once the $K$ sensors are selected, an
updated solution for the associated $K$ elements of $\mathbf{a}$
can be obtained.
\subsection{Algorithm for High Sensor Noise}
When $\sigma_{v,i}^2\gg\sigma_n^2$ and assuming that $N > M$ (the case
of interest when sensor selection is necessary), the original criterion
can be simplified to
\begin{eqnarray*}
\mathbf{a}^H\mathbf{H}^H\left(\mathbf{HVH}^H\right)^{-1}\mathbf{Ha}
& = & \mathbf{a}^H\mathbf{V}^{-\frac{1}{2}}\mathbf{V}^{\frac{1}{2}}
\mathbf{H}^H\left(\mathbf{HVH}^H\right)^{-1}\mathbf{H}
\mathbf{V}^{\frac{1}{2}}\mathbf{V}^{-\frac{1}{2}}\mathbf{a} \\
& = & \mathbf{a}^H\mathbf{V}^{-\frac{1}{2}}
\mathbf{P}_{VH} \mathbf{V}^{-\frac{1}{2}}\mathbf{a} \; ,
\end{eqnarray*}
where $\mathbf{P}_{VH} = \mathbf{V}^{\frac{1}{2}}
\mathbf{H}^H\left(\mathbf{HVH}^H\right)^{-1}\mathbf{H}
\mathbf{V}^{\frac{1}{2}}$ is a rank $M$ projection matrix. Ideally, to
maximize the criterion function, one should attempt to find a
vector of the form $\mathbf{V}^{-\frac{1}{2}}\mathbf{a}$ that lies in the
subspace defined by $\mathbf{P}_{VH}$. Assuming the vector
$\mathbf{a}$ can approximately achieve this goal, the lower bound
on variance is approximately achieved and we have
\begin{equation}\label{eq:varlower}
\frac{1}{\mathbf{a}^H\mathbf{V}^{-\frac{1}{2}}
\mathbf{P}_{VH} \mathbf{V}^{-\frac{1}{2}}\mathbf{a}} \approx
\frac{1}{\mathbf{a}^H\mathbf{V}^{-1}\mathbf{a}} =\frac{1}{\sum_{i=1}^N \frac{1}{\sigma_{v,i}^2}} \; .
\end{equation}
With respect to the sensor selection problem, this suggests that
when $\sigma_{v,i}^2\gg\sigma_n^2$, the $K$ sensors with the smallest
values of $\sigma_{v,i}^2$ should be chosen.
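In code, this high-sensor-noise rule and the corresponding variance floor of~(\ref{eq:varlower}) amount to a one-line selection (the noise variances below are illustrative draws):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 20, 6
sigma_v2 = rng.uniform(0.001, 0.01, N)             # sensor measurement-noise variances

chosen = np.argsort(sigma_v2)[:K]                  # the K lowest-noise sensors
var_floor = 1.0 / np.sum(1.0 / sigma_v2[chosen])   # Eq. (varlower) for the chosen subset
```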
\section{Simulation Results}\label{sec:seven}
Here we present the results of several simulation examples to
illustrate the performance of the proposed algorithms. In all cases,
the path loss exponent $\alpha$ was set to $1$, and each result is
obtained by averaging over 300 channel realizations. The sensors are
assumed to lie in a plane at random angles with respect to the FC,
uniformly distributed over $[0,2\pi)$. The distances of the sensors
to the FC will be specified separately below. To evaluate the
performance without feedback, $\mathbf{a}$ is set to a vector of all
ones. In some of the simulations, we will compare the performance of
the proposed algorithms with that obtained by~(\ref{eq:amplitudeopt}),
where both the sensor gain and phase can be adjusted. In these
simulations, we use the active-set method to
optimize~(\ref{eq:amplitudeopt}), and we use several different
initializations in order to have a better chance of obtaining the
global optimum. When the ACMA algorithm is implemented, the
subspace dimension is set to $m=2$.
In the first two examples, we study the estimation performance for
$M=4$ FC antennas with increasing $N$ for a case where the sensor
measurement noise $\sigma_{v,i}^2$ is uniformly distributed over
$[0.01, 0.1]$ and the FC noise $\sigma_n^2$ is set to $0.1$.
Fig.~\ref{f1} shows the results assuming that the sensor distances
$d_i$ are uniformly distributed in the interval $[3,20]$, while in
Fig.~\ref{f2} $d_i=11.5$ for all sensors. In both cases, even though
the lower bound of~(\ref{eq:lb}) is not achievable, we see that the
performance of the proposed SDP and ACMA methods is nonetheless
reasonably close to the bound, and not significantly worse than the
performance obtained by optimizing both the phase and gain. As $N$
gets larger in Fig.~\ref{f1}, the estimation error for all of the
methods (except the no-feedback case) falls within the asymptotic
lower and upper bounds of~(\ref{eq:lb2app}) and~(\ref{eq:allone}).
When $N=50$, the ratio
$\mathrm{Var}\left\{\frac{1}{d_{i}^\alpha}\right\}/\mathbb{E}\left\{\frac{1}{d_{i}^{2\alpha}}\right\}$
is $0.304$ for Fig.~\ref{f1}, and the ratio between the lower and
upper bound is $0.702$, which is in excellent agreement with the value
of $1-0.304$ predicted by Eq.~(\ref{eq:ratio}). Since the upper bound
in~(\ref{eq:allone}) corresponds to the case of $M=1$, one may suppose
that the gap in Fig.~\ref{f1} between the bounds of~(\ref{eq:lb2app})
and~(\ref{eq:allone}) indicates that the presence of multiple antennas
at the FC could provide a benefit for large $N$. However, the
performance of SDP and ACMA closely approaches the upper bound,
indicating that there is no benefit from having multiple
antennas in this case. In Fig.~\ref{f2} where the $d_i$ are all
equal, the asymptotic bounds in~(\ref{eq:lb2app})
and~(\ref{eq:allone}) are identical, and asymptotically we expect no
benefit from multiple antennas at the FC. We see again that for large
$N$ the performance of the SDP and ACMA methods is essentially at the
predicted bound. When the $d_i$ are equal and
$\frac{\sigma_{v,i}^2}{d_i^{\alpha}}\ll\sigma_n^2$, the matrix
$\mathbf{H}^{H}(\mathbf{H}\mathbf{V}\mathbf{H}^{H}+\sigma_{n}^2\mathbf{I}_{M})^{-1}\mathbf{H}$
asymptotically approaches a scaled identity matrix, so in this case
the performance of the proposed phase-shift only algorithms even
approaches the lower bound of Eq.~(\ref{eq:lb}).
Fig.~\ref{f3} illustrates the performance for $N=4$ with an increasing
number of FC antennas $M$ when $\sigma_{v,i}^2$ is uniformly
distributed over $[0.001, 0.01]$ and $\sigma_n^2=0.1$. In this
example, for most of the sensors we have $M\sigma_{v,i}^2 \ll
d_i^{2\alpha}\sigma_n^2$, so in this case we see an improvement as the
number of FC antennas increases. However, the benefit of optimizing
the transmit phase (and gain for that matter) is reduced as $M$
increases.
In Fig.~\ref{f4}, we investigate the effect of phase errors for two
cases, $\sigma_p^2=0.1$ and $\sigma_p^2=0.2$ assuming the same noise
parameter settings as in the first two examples. For each channel
realization, results for 3000 different phase error realizations were
obtained and averaged to obtain the given plot. The ratio of the
variance obtained by the SDP algorithm with and without phase errors
is plotted for $M=2,4,6$ for both values of $\sigma_p^2$, and the
approximate bound of~(\ref{eq:errbound}) is also shown. The results
show that the performance degradation increases with $N$, and
that~(\ref{eq:errbound}) provides a reasonable indication of
performance for large $N$. Fig.~\ref{f4} also shows that increasing
the number of FC antennas improves the robustness of the algorithm to
imprecise sensor phase.
In Fig.~\ref{f5}, we compare the performance of the three different
sensor selection algorithms discussed in the paper (LP, greedy and
min-sensor-noise) as a function of $\sigma_n^2$ assuming $M=4$
antennas, $N=35$ sensors and the sensor noise is uniformly distributed
over $[0.001, 0.01]$. The sensor distances $d_i$ are uniformly
distributed in the interval $[3,20]$. Three sets of curves are
plotted, one for $K=5$ selected sensors, one for $K=10$, and one
corresponding to when all the sensor nodes are used (the solid curve,
obtained using the SDP algorithm). After the sensor selection, the
proposed SDP is used to re-optimize the selected sensor nodes' phase
parameters. For small $\sigma_n^2$ such that $\sigma_{v,i}^2\gg
\sigma_n^2$, we see as predicted that the best performance is obtained
by simply selecting the $K$ sensors with the smallest measurement
noise. On the other hand, again in agreement with our analysis, the
LP and greedy algorithms achieve the lowest estimation error for
larger values of $\sigma_n^2$. Interestingly, the greedy algorithm
provides performance essentially identical to the LP approach at a
significantly reduced computational cost.
\section{Conclusions}\label{sec:eight}
In this paper, we investigated a distributed network of single antenna
sensors employing a phase-shift and forward strategy for sending their
noisy parameter observations to a multi-antenna FC. We
presented two algorithms for finding the sensor phase shifts that
minimize the variance of the estimated parameter, one based on a
relaxed SDP and a closed-form heuristic algorithm based on the ACMA
approach. We analyzed the asymptotic performance of the phase-shift
and forward scheme for both large numbers of sensors and FC antennas,
and we derived conditions under which increasing the number of FC
antennas will significantly benefit the estimation performance. We
also analyzed the performance degradation that results when sensor
phase errors of variance $\sigma_p^2$ are present, and we showed that
for large $N$ the variance will approximately increase by a factor of
$1+\sigma_p^2$ provided that $\sigma_p^2 \ll 1$ square radian. The
sensor selection problem was studied assuming either low or high
sensor noise with respect to the noise at the FC. For low sensor
noise, two algorithms were proposed, one based on linear programming
with a relaxed integer constraint, and a computationally simpler
greedy approach. For high sensor noise, we showed that choosing the
sensors with the smallest noise variances was approximately optimal.
Simulation studies of the proposed algorithms illustrate their
advantages and the validity of the asymptotic analyses.
\bibliographystyle{IEEEtran}
\section{Introduction}
Simulation of Dirac particles with condensed matter systems or artificial quantum systems has recently
attracted considerable attention \cite{Wilczek1998,SLZhu2007,SLZhu2009,DWZhang2012b,Tarruell2012,Duca2015,DWZhang2012c,Gerritsma2010,Lamata2007,Casanova2010,Gerritsma2011,Casanova2011,SLiu2014}. The Dirac equation successfully merges quantum mechanics with special relativity, and predicts some unexpected peculiar effects for a relativistic quantum particle, such as Klein's paradox \cite{Klein1929,Calogeracos1999} and \emph{Zitterbewegung} \cite{Uber1930}. These predicted phenomena provide a fundamental understanding of relativistic quantum effects, but are very hard to observe for elementary particles. In recent years, it has been demonstrated that various artificial quantum systems, such as trapped ions \cite{Gerritsma2010,Lamata2007,Casanova2010,Gerritsma2011} and ultracold atoms \cite{SLZhu2007,Tarruell2012,Duca2015,DWZhang2012b,DWZhang2012c}, can be used to simulate relativistic quantum effects. These systems have become promising platforms for quantum simulation due to their high flexibility and controllability, which allow access to different physical regimes \cite{DWZhang2018,Leibfried2003,Hffner2008}.
Besides spin-1/2 Dirac particles, exotic quantum effects can also appear for higher-spin relativistic quantum particles \cite{LLiang2016,Urban2011,Betancur-Ocampo2017,XTan2018,AFang2016,HXu2017,ZHYang2016}. Similar to the spin-1/2 Dirac equation, a relativistic quantum wave equation can be formulated from the classical Maxwell equations. This wave equation describes the dynamics of an effective spin-1 relativistic quantum particle, and is thus called the quantum Maxwell equation \cite{YQZhu2017,Oppenheimer1931}. It has been demonstrated that this quantum Maxwell equation can be simulated with ultracold atoms in optical lattices \cite{YQZhu2017,RShen2010,Dora2011,Goldman2011}. In its original formulation, Klein tunneling referred to an undamped scattering under a potential step with barrier height $V > 2mc^2$, where $mc^2$ is the rest energy of the incident particle \cite{Klein1929,Calogeracos1999}. The Klein tunneling effect always comes with the transition and interference between different energy branches of the state \cite{Thaller1992}. Due to the richer energy structure of spin-1 particles, perfect penetration through a square barrier can be found at certain incident energies, whereas spin-1/2 Dirac particles show less transmission under the same conditions. Such a phenomenon is termed super-Klein tunneling in the literature \cite{Urban2011,Betancur-Ocampo2017,AFang2016,HXu2017}. On the other hand, there
is one oscillation frequency in the \emph{Zitterbewegung} effect of the Dirac particles, but there are two different
oscillation frequencies in the \emph{Zitterbewegung} oscillations of Maxwell fermions \cite{XShen}. So both the Klein tunneling and \emph{Zitterbewegung} effects for quantum Maxwell particles can have unique features. However, the realization of the quantum Maxwell equation with cold atoms in optical lattices is challenging due to the complicated spin-orbit couplings required for the three-component spinors \cite{YQZhu2017}. Thus, other experimentally more feasible schemes to mimic the quantum Maxwell equation are highly desired.
In this paper, we propose an experimentally feasible scheme to simulate and observe the scattering dynamics described by the quantum Maxwell equation with trapped ions. We explore the scattering dynamics of the pseudospin-1 Maxwell particles in the presence of a linear external potential and demonstrate that the scattered state should be a superposition of a reflection state, a localization state, and a transmission state. The probabilities of these states can be analytically obtained by using
the approach of the Landau-Zener transition. We further show that the Maxwell Hamiltonian and the associated scattering dynamics can be mimicked with two trapped ions, similar to the simulation of the Dirac Hamiltonian \cite{Gerritsma2010,Lamata2007,Casanova2010,Gerritsma2011}. The Maxwell spinors are encoded by the internal states of the first ion, and its position and momentum are described by those of the motional modes of the ion. The desired linear potential barrier is built by the second ion. Ions trapped in an RF trap can be well manipulated to perform a wide range of information processing tasks with high flexibility and accuracy \cite{Leibfried2003,Hffner2008}. Other experimental manipulations, such as state preparation and readout, are already standard methods in trapped-ion systems. Notably, the Klein tunneling and \emph{Zitterbewegung} of a Dirac particle have been simulated and observed with trapped ions \cite{Gerritsma2011,Gerritsma2010}. The technologies developed there can be straightforwardly used in the present scheme. So the phenomena explored here could be observed within the near future.
The rest of the paper is organized as follows. In Sec. \ref{sec2}, we present the relativistic Hamiltonian of a pseudospin-1 Maxwell particle and derive the probabilities of the three scattering states in the Klein tunneling with the approach of the Landau-Zener transition. In Sec. \ref{sec3}, we propose quantum simulation of the system and its scattering dynamics with trapped ions. Finally, a short conclusion is given in Sec. \ref{sec5}.
\section{The scattering dynamics}\label{sec2}
We consider quantum tunneling of pseudospin-1 relativistic particles through a potential $V(x,y)$ in a two-dimensional space, which is described by the Hamiltonian
\begin{equation}
\hat H_M=c\hat p_xS_x+c\hat p_yS_y+mc^2S_z+V(x,y)I_3,\label{eq_model}
\end{equation}
where $c$ is the effective speed of light, $\hat p_{x,y}=-i\hbar \partial_{x,y}$ are the momentum operators, and $m$ is the particle mass. The matrices $S_{x,y,z}$ are matrix representations of the spin components with spin $S=1$, and $I_3$ is the identity matrix. The Hamiltonian in Eq. (\ref{eq_model}) can be derived from the famous classical Maxwell equations in the form of the Schr\"{o}dinger equation \cite{YQZhu2017}, and it describes a (pseudo)spin-1 particle in the relativistic case. Therefore, the particles described by the quantum wave equation $i\hbar\partial_t \Psi (\mathbf{r},t)=H_M\Psi(\mathbf{r},t)$ are so-called Maxwell particles \cite{YQZhu2017,Oppenheimer1931}.
A general state in momentum space $k_{x,y}=p_{x,y}/\hbar$ can be written as $|\Psi\rangle=\sum_{j=1}^3 a_{j}|\Psi_j\rangle$ with $j=+,-,0$, where $|\Psi_j\rangle$ are the eigenstates (spinors) with the eigenvalues $E_\pm=\pm\sqrt{c^2 k^2+m^2c^4}$ ($k=\sqrt{{k_x^2+k_y^2}}=p/\hbar$) and $E_0=0$. The spinor $|\Psi_j\rangle$ can be constructed by the projection operators in momentum space, $|\Psi_j\rangle=\hat P^j(k)|\Psi\rangle$, where the projection operators are given by
\begin{equation}
\begin{split}
\hat P^{\pm}=&\frac{1}{2}\big[(\frac{cp_xS_x+cp_yS_y+mc^2S_z}{E_+})^2\\ &~~~\pm\frac{cp_xS_x+cp_yS_y+mc^2S_z}{E_+}\big]\,,\\
\hat P^0=&I_3-(\frac{cp_xS_x+cp_yS_y+mc^2S_z}{E_+})^2.
\end{split}
\end{equation}
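These projectors are easy to verify numerically. A minimal sketch with the standard spin-1 matrices in natural units $\hbar=c=1$ (the mass and momentum values are arbitrary illustrative choices):

```python
import numpy as np

# Standard spin-1 matrices; natural units hbar = c = 1.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

m, kx, ky = 1.0, 0.7, -0.4                         # illustrative mass and momentum
Ep = np.sqrt(kx**2 + ky**2 + m**2)                 # E_+ = sqrt(c^2 k^2 + m^2 c^4)
Sn = (kx * Sx + ky * Sy + m * Sz) / Ep             # unit operator with eigenvalues +1, 0, -1

I3 = np.eye(3)
Pp = (Sn @ Sn + Sn) / 2                            # projector onto the E_+ band
Pm = (Sn @ Sn - Sn) / 2                            # projector onto the E_- band
P0 = I3 - Sn @ Sn                                  # projector onto the flat E = 0 band
```

One can check that the three projectors are idempotent, resolve the identity, and reproduce the band energies when applied to the Hamiltonian.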
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{fig1.pdf}
\caption{Scattering dynamics of a pseudospin-1 Maxwell particle in (a) position space and (b) momentum space. The incident state $|\Psi_{+} (t_0)\rangle$ with positive momentum is initially prepared in the positive energy band. It moves in real space with a repulsive potential $V(x,y)=gx$ shown in the inset of (a). The particle can be reflected as a reflection wave, or enter the forbidden region as a transmission wave by reducing its kinetic energy, or even enter the zero-energy band as a localization state, as shown in (a). The three scattering states are denoted the wave functions $|\Psi_{+}(t)\rangle$, $|\Psi_{-}(t)\rangle$, and $|\Psi_{0}(t)\rangle$ in (b), respectively, and the distributions in the three bands can be calculated by the approach of Landau-Zener transition.}
\label{figone}
\end{figure}
Here we focus on the quantum tunneling with an external potential $V(x,y)=gx\ (g>0)$, as shown in Fig. \ref{figone}(a). If the initially incident state is $|\Psi_{j}\rangle$, the final state after scattering can be written in a general form,
\begin{equation}
|\Psi(t)\rangle=\sum_{jn} a_{jn}|\Psi_n(t)\rangle,
\end{equation}
where $j,n=+,-,0$, and $a_{jn}$ represents the amplitude transferring from the initial state $|\Psi_{j}\rangle$ to the final energy branch of $|\Psi_{n}\rangle$.
In Fig. \ref{figone}(a), the case with the incident state $|\Psi_{+}\rangle$ is illustrated. The final state should be a superposition of a reflection state denoted as $|\Psi_{+}\rangle$, a localized state denoted as $|\Psi_{0}\rangle$, and a transmission state denoted as $|\Psi_{-}\rangle$. The state $|\Psi_{0}\rangle$ should be a localized state since the group velocities $\partial_{k_x,k_y}E_0 (k_x,k_y)=0$.
Under the linear potential condition, the transition probabilities $|a_{jn}|^2$ can be calculated based on the method of Landau-Zener tunneling
\cite{Sauter1931,Landau1932,zener1932}. We rewrite the Hamiltonian (\ref{eq_model}) in momentum space,
\begin{equation} \label{eq_model_k}
\begin{split}
\hat H_k=&c\hbar k_x S_x+c\hbar k_yS_y+mc^2S_z+i\hbar g\partial_{k_x} I_3\\
=&c\hbar k_xS_x+\tilde mc^2\tilde S_{\tilde{z}}+i\hbar g\partial_{k_x}I_3,
\end{split}
\end{equation}
where $\tilde mc^2=\sqrt{m^2c^4+\hbar^2k_y^2c^2}$ defines an effective mass $\tilde m$ since $k_y$ is conserved, and $\tilde S_{\tilde z}=n_yS_y+n_zS_z$ with $n_y=\hbar k_y/(\tilde mc)$ and $n_z=m/\tilde m$. The term $i\hbar g \partial_{k_x}$ is equivalent to a constant force along the $k_x$ axis, which decreases $k_x$. Thus, the scattering process can be interpreted in terms of a reduced Landau-Zener transition in one-dimensional momentum space (i.e., along $k_x$). As shown in Fig. \ref{figone}(b), the incident state $|\Psi_{+} (t_0)\rangle$ has an initial momentum $\mathbf{k}_0=(k_{x0},k_{y0})$ with $k_{x0}>0$, and the linear potential decreases $k_x$ and leads to a non-adiabatic transition to the other two bands near the anti-crossing point. The reflected part of the final state acquires a reversed momentum along $k_x$, \emph{i.e.,} $k_x <0$, which corresponds to a reflection in real space. Following the calculations outlined in Ref. \cite{Carroll1986}, we can obtain the transition probabilities
\begin{figure}[tbp]
\centering
\includegraphics[width=0.45\textwidth]{fig2.pdf}
\caption{The transmission probability as a function of the incident angle $\theta=\arctan(k_{y0}/k_{x0})$ for (a) spin-1 particles and (b) spin-1/2 particles with different slope gradients $g$ of the linear potential. Here the natural units $\hbar=c=1$ are adopted, and the parameters $m=1$ and $k_{y0}=1$.}
\label{figtwo}
\end{figure}
\begin{equation}
\begin{split}
\Gamma_{+-}=&|a_{+-}|^2=\exp(-\pi\frac{{\tilde m}^2c^4}{\hbar cg})\\=&\exp(-\pi\frac{m^2c^4+p_0^2c^2\sin^2\theta}{\hbar cg})\,,\\\Gamma_{+0}=&|a_{+0}|^2=2\exp(-\pi\frac{{\tilde m}^2c^4}{2\hbar cg})[1-\exp(-\pi\frac{\tilde{m}^2c^4}{2\hbar cg})]\\=&2\exp(-\pi\frac{m^2c^4+p_0^2c^2\sin^2\theta}{2\hbar cg})\times\\&[1-\exp(-\pi\frac{m^2c^4+p_0^2c^2\sin^2\theta}{2\hbar cg})]\,,\\
\Gamma_{++}=&|a_{++}|^2=1-\Gamma_{+-}-\Gamma_{+0}\,,\label{gamma}
\end{split}
\end{equation}
where $\Gamma_{jn}$ is the occupation probability on the $n$ band at time $t \rightarrow +\infty$ for the state initially on the $j$ band at $t \rightarrow -\infty$, and $\theta$ denotes the incident angle defined as $\theta=\arctan(k_{y0}/k_{x0})$. The transmission probability can be defined as
\begin{equation}
T=\Gamma_{+0}+\Gamma_{+-}.
\end{equation}
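The probabilities of Eq.~(\ref{gamma}) can be evaluated directly. The sketch below also computes, for comparison, the standard two-level Landau-Zener transmission taken here as the spin-1/2 result of Fig.~\ref{figtwo}(b) (our assumed mapping of the gap and sweep rate, in natural units $\hbar=c=1$):

```python
import numpy as np

def spin1_probs(m, p0, theta, g, hbar=1.0, c=1.0):
    """Occupation probabilities of Eq. (gamma) for a spin-1 Maxwell particle."""
    mt2c4 = m**2 * c**4 + p0**2 * c**2 * np.sin(theta) ** 2    # (tilde{m} c^2)^2
    G_pm = np.exp(-np.pi * mt2c4 / (hbar * c * g))             # transmission to E_-
    q = np.exp(-np.pi * mt2c4 / (2 * hbar * c * g))
    G_p0 = 2 * q * (1 - q)                                     # capture into the flat band
    return G_pm, G_p0, 1 - G_pm - G_p0                         # last entry: reflection

m, p0, theta, g = 1.0, 1.0, 0.3, 2.0
G_pm, G_p0, G_pp = spin1_probs(m, p0, theta, g)
T_spin1 = G_pm + G_p0
# Two-level Landau-Zener result, used here as the spin-1/2 transmission of Fig. 2(b):
T_spin_half = np.exp(-np.pi * (m**2 + p0**2 * np.sin(theta) ** 2) / g)
```

Since the flat band opens an extra tunneling channel, $T$ for the spin-1 particle always exceeds the two-level result under these parameters.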
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{fig3.pdf}
\caption{Upper panel: free evolution ($g=0$) for an initial wave packet with (a) a positive-energy eigen-spinor $|\Psi_+\rangle$ and (b) a superposition spinor $\frac{1}{\sqrt{2}}(|\Psi_+\rangle+|\Psi_-\rangle)$; the scattering dynamics with $g=1.5$ for an initial wave packet with (c) a positive-energy eigen-spinor $|\Psi_+\rangle$ and (d) a superposition spinor $\frac{1}{\sqrt{2}}(|\Psi_+\rangle+|\Psi_-\rangle)$. Lower panel: the corresponding distributions of three spin components $|\Psi_{+,-,0} (t=7\bar\Delta)\rangle$ in real space. Here $\hbar=c=1$ and the rest mass $m=0.85$.}
\label{figthree}
\end{figure*}
Figure \ref{figtwo}(a) illustrates the transmission probability $T$ as a function of the incident angle $\theta$ for the Maxwell particles. It shows that the potential becomes more transparent for all incident angles when the slope gradient $g$ becomes larger. If $g$ is sufficiently large, the transmission probability is almost unity, and this result is very similar to the so-called super-Klein tunneling of spin-1 particles through a square potential barrier \cite{RShen2010,Urban2011}. For comparison, the transmission probability for spin-1/2 particles is also plotted in Fig. \ref{figtwo}(b). The relativistic Hamiltonian of spin-1/2 particles has the same form as that in Eq. (\ref{eq_model}), but the spin operators $S_{x,y,z}$ and the unit matrix $I_3$ are replaced by the Pauli matrices $\sigma_{x,y,z}$ and the $2\times 2$ unit matrix $I_2$, respectively. It is clear that the transmission probability is smaller for spin-1/2 particles than for spin-1 particles under the same conditions.
Without loss of generality, we consider the normal-incidence case for the simulation of the scattering dynamics, in which case the model reduces to one dimension. Figures \ref{figthree}(a-d) show the scattering dynamics of the spin-1 particles described by an initial wave packet $\propto e^{ip_0x}e^{-\frac{x^2}{2\bar \Delta^2}}\xi$ with $p_0=10.0$ and $\bar\Delta=2$, where $\xi$ denotes the spinor function. Figures \ref{figthree}(a) and \ref{figthree}(c) plot the results for $\xi^{T}=(1,0,0)$ with $g=0$ (free evolution) and $g=1.5$ (scattering by the linear potential), respectively. Figures \ref{figthree}(b) and \ref{figthree}(d) show the results when the initial spinor state is $\xi^T=(1,0,1)$, with the other conditions the same as in Figs. \ref{figthree}(a) and \ref{figthree}(c), respectively. The lower panel in Fig. \ref{figthree} shows the corresponding distributions of the three spin components $|\Psi_{j}(t=7\bar \Delta)\rangle $. For the case of $g=1.5$, the final state after a long time is a superposition of a reflection state $|\Psi_{+}\rangle$ propagating along the $-x$ direction (centered in the $x<0$ region), a localized state $|\Psi_{0}\rangle$ centered in the $x>0$ region, and a transmission state $|\Psi_{-}\rangle$ propagating along the $x$ direction (centered in the $x>0$ region), which are also shown in Fig. \ref{figone}(a). In Fig. \ref{figthree}(d), five peaks in the total density distribution are formed when the evolution time is longer than $7\bar \Delta$ because a pair of peaks appears for every $|\Psi_{j}\rangle$ ($j=+,-,0$), with one peak of $|\Psi_{-}\rangle$ and one of $|\Psi_{+}\rangle$ almost overlapping.
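The one-dimensional scattering dynamics of Fig.~\ref{figthree} can be reproduced with a split-operator integrator, with the kinetic part diagonalized through the band projectors. The grid size, box length, and time step below are our own illustrative choices; the physical parameters match the text ($m=0.85$, $g=1.5$, $p_0=10$, $\bar\Delta=2$):

```python
import numpy as np

# Split-operator evolution of the 1D Maxwell Hamiltonian H = p S_x + m S_z + g x
# (natural units hbar = c = 1); a minimal numerical sketch of the dynamics in Fig. 3.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

m, g, p0, width = 0.85, 1.5, 10.0, 2.0
Nx, L, dt, steps = 1024, 120.0, 0.01, 500
x = np.linspace(-L / 2, L / 2, Nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=L / Nx)

lam = np.sqrt(k**2 + m**2)                             # E_+(k)
Sn = (k[:, None, None] * Sx + m * Sz) / lam[:, None, None]
Sn2 = Sn @ Sn
I3 = np.eye(3)

# Gaussian packet placed entirely on the positive-energy band |Psi_+>.
phi_k = np.fft.fft(np.exp(1j * p0 * x - x**2 / (2 * width**2)))
vplus = np.einsum('kab,b->ka', (Sn2 + Sn) / 2, np.array([1.0, 0, 0], dtype=complex))
vplus /= np.linalg.norm(vplus, axis=1, keepdims=True)
psi = np.fft.ifft(phi_k[:, None] * vplus, axis=0)
psi /= np.linalg.norm(psi)

# exp(-i H_kin dt) per k from the band projectors; half-step potential phase.
Uk = (np.exp(-1j * lam * dt)[:, None, None] * (Sn2 + Sn) / 2
      + np.exp(+1j * lam * dt)[:, None, None] * (Sn2 - Sn) / 2
      + (I3 - Sn2))
Uv = np.exp(-1j * g * x * dt / 2)[:, None]

for _ in range(steps):
    psi = Uv * psi
    psi = np.fft.ifft(np.einsum('kab,kb->ka', Uk, np.fft.fft(psi, axis=0)), axis=0)
    psi = Uv * psi

density = np.sum(np.abs(psi) ** 2, axis=1)             # total density at t = steps*dt
```

Each propagation factor is unitary, so the total norm is conserved; over this evolution time the positive-energy packet drifts toward the barrier while decelerating under the constant force $-g$.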
\section{Quantum simulation with trapped ions}\label{sec3}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.35\textwidth]{fig4.pdf}
\caption{Proposed trapped-ion setup for simulating scattering dynamics of a Maxwell particle in an external potential. The three-component spinors are encoded by three internal states of ion 1, and the potential is implemented by red and blue sidebands applied to the auxiliary ion 2.}
\label{setup}
\end{figure}
We now propose quantum simulation of the scattering dynamics with trapped ions. Similar to the simulation of Dirac particles \cite{Casanova2010,Gerritsma2011}, mimicking the scattering process in a scalar potential requires two ions. Let us consider a string of two trapped ions, denoted by ion 1 and ion 2, as shown in Fig. \ref{setup}. The first ion will encode a three-component Maxwell spinor in its three ionic states, while the second ion will be used as an ancilla to implement the potential. The involved internal levels for ion 1 could be chosen as the $^2S_{1/2}$ hyperfine clock ground states of $^{171}\rm{Yb}^+$: $|a\rangle\equiv|1,1\rangle$, $|b\rangle\equiv|0,0\rangle$, and $|c\rangle\equiv |1,-1\rangle$; those for ion 2 could be the qubit states $|a'\rangle\equiv|1,0\rangle$ and $|b'\rangle\equiv|0,0\rangle$ \cite{Senko2015,Olmschenk2007}. Here the two quantum numbers denote $F$ and $m_F$ for the levels of the $^{171}\rm{Yb}^+$ ion.
The motions of two ions trapped in an RF trap are coupled together to form collective modes. In terms of the creation and annihilation operators of the motional modes, the position and momentum operators are given by
\begin{equation}
\hat x_l=\Delta_l(\hat a_l+\hat a_l^\dagger)\,,\quad\hat p_l=\frac{\hbar}{2i\Delta_l}(\hat a_l-\hat a_l^\dagger)\,,\label{eq_ca}
\end{equation}
where $\hat a_l$ ($\hat a^\dagger_l$) is the annihilation (creation) operator of the motional mode along the $l$ axis with $l\in\{x,y,z\}$, and $\Delta_l=\sqrt{\hbar/2M\omega_l}$ is the ground-state spread of the ion. Lasers or microwaves are applied to couple the internal levels of each ion to the center-of-mass motion in one of the three directions through absorption or emission of photons \cite{Leibfried2003,Hffner2008,SLZhu2006,SLZhu2006b,Kim2009}.
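As a quick numerical check of Eq.~(\ref{eq_ca}), one can build $\hat x_l$ and $\hat p_l$ in a truncated Fock basis and verify the canonical commutator $[\hat x,\hat p]=i\hbar$ away from the truncation edge. The following sketch uses illustrative units:

```python
import numpy as np

hbar = 1.0                 # natural units, purely illustrative
Delta = 0.7                # ground-state spread Delta_l = sqrt(hbar / 2 M omega_l)
N = 20                     # Fock-space truncation

a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator
x = Delta * (a + a.conj().T)                      # position quadrature
p = hbar / (2j * Delta) * (a - a.conj().T)        # momentum quadrature

comm = x @ p - p @ x
# [x, p] = i*hbar on every Fock state below the truncation edge
print(np.allclose(np.diag(comm)[: N - 1], 1j * hbar))  # -> True
```

The last Fock state violates the commutator, which is the usual artifact of truncating the infinite-dimensional oscillator space.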
The dipole coupling between a monochromatic light field and the ions, or the Raman coupling between a bichromatic light field and the ions, can be effectively described by a simple Rabi model within the Lamb-Dicke regime. In this case, the effective Hamiltonian for the interaction between the light and two qubit levels $|\alpha\rangle$ and $|\beta\rangle$, to lowest order in $\eta$, reads \cite{Leibfried2003,Hffner2008}
\begin{equation}
\hat H^{\alpha\beta}=\frac{\hbar}{2}\Omega_0\sigma^{+}_{\alpha\beta}\{1+i\eta(\hat ae^{-i\nu t}+\hat a^\dagger e^{i\nu t})\}e^{i(\phi-\delta t)}+\text{H.c.}\,,
\end{equation}
where $\hat\sigma^{+}_{\alpha\beta}$ is the raising operator acting on the ion's internal states, $\Omega_0$ is the Rabi frequency, $\eta=k\sqrt{\hbar/2M\omega}$ is the Lamb-Dicke parameter, $\nu$ is the frequency of the motional mode, $\phi$ is the laser phase, and $\delta$ is the laser detuning.
Depending on the detuning from the qubit transition, three types of fundamental interactions can be engineered as the quantum simulation toolbox to construct desired dynamics. The first is the case of resonance $\delta=0$ with the reduced Hamiltonian
\begin{equation}
\hat H_{c}^{\alpha\beta}=\frac{\hbar}{2} \Omega_0(\hat \sigma^{+}_{\alpha\beta}e^{i\phi}+\hat \sigma^{-}_{\alpha\beta}e^{-i\phi})\,,
\end{equation}
which is called the carrier interaction. When the detuning is negative, $\delta=-\nu$, the effective interaction is called the red sideband, with
\begin{equation}
\hat H_{r}^{\alpha\beta}=\frac{\hbar}{2} \Omega_0\eta(\hat a\hat\sigma^{+}_{\alpha\beta}e^{i\phi}+\hat a^\dagger\hat \sigma^{-}_{\alpha\beta}e^{-i\phi}),
\end{equation}
while the blue sideband corresponds to $\delta=+\nu$, with
\begin{equation}
\hat H_{b}^{\alpha\beta}=\frac{\hbar}{2} \Omega_0\eta(\hat a^\dagger\hat\sigma^{+}_{\alpha\beta}e^{i\phi}+\hat a\hat \sigma^{-}_{\alpha\beta}e^{-i\phi})\,.
\end{equation}
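The three toolbox Hamiltonians can be written down explicitly in a truncated qubit$\otimes$Fock basis. The short sketch below (with illustrative parameter values) verifies that all three are Hermitian and that the red sideband couples $|\beta,n\rangle\leftrightarrow|\alpha,n-1\rangle$:

```python
import numpy as np

hbar = 1.0
N = 12                                   # Fock-space truncation
Omega0 = 2 * np.pi * 10e3                # Rabi frequency (illustrative)
eta, phi = 0.05, 0.0

a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator
sp = np.array([[0.0, 1.0], [0.0, 0.0]])           # sigma^+ = |alpha><beta|
sm = sp.T                                         # sigma^-
ph = np.exp(1j * phi)

H_c = hbar / 2 * Omega0 * np.kron(sp * ph + sm * ph.conjugate(), np.eye(N))
H_r = hbar / 2 * Omega0 * eta * (np.kron(sp, a) * ph
                                 + np.kron(sm, a.conj().T) * ph.conjugate())
H_b = hbar / 2 * Omega0 * eta * (np.kron(sp, a.conj().T) * ph
                                 + np.kron(sm, a) * ph.conjugate())

for H in (H_c, H_r, H_b):
    assert np.allclose(H, H.conj().T)             # all three are Hermitian
# red sideband removes one phonon while exciting the qubit:
print(np.isclose(H_r[0, N + 1], hbar / 2 * Omega0 * eta))  # -> True
```

Here the basis index $s\,N+n$ labels the spin state $s$ ($0=\alpha$, $1=\beta$) and phonon number $n$, so the checked matrix element connects $|\beta,1\rangle$ to $|\alpha,0\rangle$.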
Interactions in the toolbox are adjustable through the phase $\phi$ and the coupling strength $\Omega_0$. Recalling the definition in Eq.~(\ref{eq_ca}) and making the identifications $\hat \sigma^x_{\alpha\beta}=(\hat \sigma^{+}_{\alpha\beta}+\hat \sigma^{-}_{\alpha\beta})/2$ and $\hat \sigma^y_{\alpha\beta}=(\hat \sigma^{+}_{\alpha\beta}-\hat\sigma^{-}_{\alpha\beta})/2i$, when only the carrier interaction is present one has
\begin{equation}
\label{eq10}
\hat H_{\sigma_l}^{\alpha\beta}=\hbar \tilde \Omega_l \sigma_l^{\alpha\beta}
\end{equation}
with an appropriate phase setting. Applying the red-sideband and blue-sideband interactions with appropriate manipulation of the laser field, one can implement the coupling between the internal and external degrees of freedom,
\begin{eqnarray}
\label{Hp}
\hat H_{p_l}^{\alpha\beta}&=&\Delta_l\Omega_{0l}\eta\hat \sigma_l^{\alpha\beta}\hat p_l,\\
\hat H_{x_l}^{\alpha\beta}&=&\hbar \eta\Omega\sigma_{x_l}^{\alpha\beta}\hat{x} _l/\Delta_{x_l}. \label{Hx}
\end{eqnarray}
To simulate a spin-1 particle, three internal levels are needed to encode the three spinor components. As shown in Fig. \ref{setup}, the three levels are denoted as $|a\rangle$, $|b\rangle$, and $|c\rangle$, and the spinor state can be expressed as
\begin{equation}
|\Psi\rangle\equiv \Psi_a|a\rangle+\Psi_b|b\rangle+\Psi_c|c\rangle=(\Psi_a,\Psi_b,\Psi_c)^{\rm{T}}\,.
\end{equation}
Applying Eqs. \eqref{eq10}-\eqref{Hx} appropriately to the three levels of ion 1 and the two levels of ion 2, we obtain the following Hamiltonian:
\begin{equation}
\begin{split}
\hat H_{ion}=&\hat H_{p_x}^{ab}+\hat H_{p_x}^{bc}+\hat H_{p_y}^{ab}+\hat H_{p_y}^{bc}+\hat H_{\sigma_z}^{ab}+\hat H_{\sigma_z}^{bc}+\hat H_{x}^{a^\prime b^\prime},\\
=&\eta\Delta\tilde\Omega_1(\hat \sigma^{ab}_x+\hat \sigma_x^{bc})\hat p_x+\eta\Delta\tilde\Omega_1(\hat \sigma^{ab}_y+\hat \sigma_y^{bc})\hat p_y+\\&\hbar \Omega_1(\hat \sigma_z^{ab}+\hat \sigma_z^{bc})+\hbar \eta\tilde\Omega_2\hat x\sigma_2^x/\Delta\\
=&\sqrt{2}\eta\Delta\tilde{\Omega}_1 (S_x\hat p_x+S_y \hat p_y)+\hbar \Omega_1 S_z+\hbar \eta\tilde\Omega_2\hat x\sigma_2^x/\Delta\,,\label{eq_eh}
\end{split}
\end{equation}
where $\hat H_{x}^{a^\prime b^\prime}$ and the related $\sigma_2^x$ correspond to the manipulations on ion 2. Here we have chosen the parameters such that $\eta:=\eta_{1x,y}=\eta_2$, $\tilde\Omega_1:=\Omega_{1x}=\Omega_{1y}$, $\Omega_1=\Omega_{1z}$, and $\Delta:=\Delta_{1}=\Delta_{2}$. Note that the $\sigma_z$ terms could be induced by an AC Stark shift. To obtain a scalar potential linear in the $x$ direction, the second ion is required to be initialized in the positive eigenstate of $\sigma_2^x$. One then finds that the Hamiltonian is equivalent to the model given by Eq. (\ref{eq_model}) with the correspondence $c:=\sqrt{2}\eta\Delta\tilde\Omega_1$, $mc^2:=\hbar \Omega_1$, and $g:=\hbar\eta \tilde \Omega_2/\Delta$. Thus the two-dimensional quantum Maxwell equation can be realized with tunable parameters.
In the absence of the interactions $\hat H_{p_y}^{ab}$ and $\hat H_{p_y}^{bc}$, the Hamiltonian in Eq. \eqref{eq_eh} reduces to one dimension, which is easier to handle in realistic experiments. Under this condition, the scattering dynamics shown in Fig. \ref{figthree} can be experimentally demonstrated.
In typical experiments, the $^{171}\rm{Yb}^+$ ions can be initialized to the ground state by Doppler cooling on the $^2S_{1/2}$-$^2P_{1/2}$ transition at $369.53~\rm nm$. The carrier and red/blue-sideband interactions can then be applied simultaneously to produce the desired dynamics via two-photon stimulated Raman transitions. Afterward, the ion states are measured by the standard fluorescence technique with an additional laser field coupling the $^2S_{1/2}$-$^2P_{1/2}$ transition \cite{Olmschenk2007,Kim2009}. Note that the parameter ranges accessible in experiments are sufficient to produce observable phenomena. For instance, with the typical experimental parameters $\eta=0.05$, $\tilde \Omega_1=2\pi \times 10~\rm{kHz}$, $\Omega_1=2\pi \times 1~\rm{kHz}$, and $\tilde \Omega_2=2\pi \times 50~\rm{kHz}$, which correspond to $m^2c^4/(\hbar c g) \sim 0.56$ in Hamiltonian (\ref{eq_model_k}), one could prepare a normally incident positive-energy initial state with a spatial extension of the order of $\Delta$ to obtain a large transmission probability $T\approx 0.65$. Such a tunneling process is expected to take place within about 1 ms, well within the typical decoherence time.
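As a cross-check of the quoted numbers: with the correspondences $c=\sqrt{2}\eta\Delta\tilde\Omega_1$, $mc^2=\hbar\Omega_1$, and $g=\hbar\eta\tilde\Omega_2/\Delta$, the combination $m^2c^4/(\hbar cg)=\Omega_1^2/(\sqrt{2}\eta^2\tilde\Omega_1\tilde\Omega_2)$ is independent of $\hbar$ and $\Delta$, and a few lines of arithmetic reproduce the value $\sim 0.56$ (reading the second quoted frequency as the carrier term $\Omega_1$):

```python
import math

eta = 0.05
Omega1_tilde = 2 * math.pi * 10e3   # 2pi x 10 kHz, sideband Rabi frequency
Omega1 = 2 * math.pi * 1e3          # 2pi x 1 kHz, read as the carrier term
Omega2_tilde = 2 * math.pi * 50e3   # 2pi x 50 kHz, potential strength

# m^2 c^4 / (hbar c g) = Omega1^2 / (sqrt(2) eta^2 Omega1_tilde Omega2_tilde);
# the 2*pi factors cancel between numerator and denominator
ratio = Omega1**2 / (math.sqrt(2) * eta**2 * Omega1_tilde * Omega2_tilde)
print(round(ratio, 3))  # -> 0.566
```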
\section{Conclusions}\label{sec5}
In summary, we have proposed a feasible scheme to simulate and detect the scattering dynamics of the quantum Maxwell equation with trapped ions. We have shown that the Klein tunneling probabilities of the Maxwell particles in the linear potential can be resolved by the Landau-Zener transition in momentum space. We have demonstrated that the Klein tunneling can be observed in trapped-ion experiments based on quantum simulation of the Maxwell particles with the ion modes. Notably, simulation of the Klein tunneling in the Dirac equation has been experimentally achieved with trapped ions \cite{Gerritsma2011,Gerritsma2010}, and the technologies developed there can be directly used in our proposed scheme. Thus, the present scheme is quite promising for realizing the first experimental simulation of the Maxwell particles.
\acknowledgments
This work was supported by the NKRDP of China (Grant No. 2016YFA0301803), the NSFC (Grants No. 91636218, No. 11604103, No. U1801661, and No. 11474153), the NSAF (Grant No. U1830111), the KPST of Guangzhou (Grant No. 201804020055), the NSF of Guangdong Province (Grant No. 2016A030313436), and the Startup Foundation of SCNU.
\section{INTRODUCTION}
In the last two decades, tremendous progress has been made in our understanding of heavy-flavor
hadrons, thanks to the experimental discoveries by collaborations such as LHCb, BELLE, and BESIII and the related theoretical studies.
In the charmed baryon sector, 24 singly charmed baryons and two doubly charmed baryons are listed in the current version of the review of particle physics~\cite{Tanabashi:2018oca}. Among
them, the newest members include the $\Lambda_c(2860)$~\cite{Aaij:2017vbw}, the five $\Omega_c$ states~\cite{Aaij:2017nav}, and the $\Xi_{cc}^{++}$~\cite{Aaij:2017ueg}. Inspired by these and other experimental discoveries, there are extensive theoretical and lattice QCD studies on their nature and their decay and production mechanisms (see, e.g., Refs.~\cite{Chen:2016qju,Chen:2016spr,Guo:2017jvc,
Esposito:2016noz,Lebed:2016hpi,Eichmann:2016yit,Olsen:2017bmm,Ali:2017jda} and references cited therein).
The magnetic moment of a baryon plays an extremely important role in understanding its internal structure. Historically, the experimental measurements of the magnetic moments of the proton and the neutron revealed that they are not point-like particles. The subsequent studies helped establish the quark model as well as the theory of the strong interaction, Quantum Chromodynamics. Unlike those of the light ground-state baryons, the magnetic moments of the spin-1/2 singly charmed baryons have not been measured experimentally. Nevertheless, they have been studied in a variety of phenomenological models~\cite{Barik:1984tq,JuliaDiaz:2004vh,Kumar:2005ei,Faessler:2006ft,Patel:2007gx,Sharma:2010vv,Bernotas:2012nz} as well as in QCD sum rules~\cite{Zhu:1997as}. Lately, they have also been studied in the mean-field approach~\cite{Yang:2018uoj}, the self-consistent SU(3) chiral quark-soliton model~\cite{Kim:2018nqf}, heavy baryon chiral perturbation theory (HB ChPT)~\cite{Wang:2018xoc}, and
lattice QCD simulations~\cite{Can:2013tna,Can:2015exa,Bahtiyar:2016dom,Bahtiyar:2015sga}. In Ref.~\cite{Wang:2018xoc}, the low-energy constants (LECs) were determined by the quark model and the heavy quark spin-flavor symmetry and by fitting to the lattice QCD data extrapolated to the physical point. In this work, we study the magnetic moments of the spin-1/2 singly charmed baryons up to the next-to-leading order (NLO) in covariant baryon chiral perturbation theory (BChPT) with the extended-on-mass-shell (EOMS) renormalization scheme. The unknown LECs will be
determined by the quark model and the heavy quark spin-flavor symmetry and by directly fitting to the lattice QCD data at unphysical pion masses~\cite{Bahtiyar:2016dom,Can:2013tna,Can:2015exa}. One notes that many previous studies, such as Refs.~\cite{Ren:2012aj,Xiao:2018rvd}, have shown that the EOMS BChPT can provide a better description of the quark-mass-dependent lattice QCD data than its non-relativistic counterpart.
Chiral perturbation theory (ChPT)~\cite{Weinberg:1978kz}, as a low-energy effective field theory of QCD, is an appropriate framework to study the magnetic moments of hadrons, particularly their light-quark-mass dependence. It
provides a systematic expansion of physical observables in powers of $(p/\Lambda_\chi)^{n_\chi}$, where $p$ is a small momentum and $\Lambda_\chi$ is the chiral symmetry breaking scale. However, its application to the one-baryon sector encountered a difficulty, i.e., a systematic power counting (PC) is lost due to the large
non-vanishing baryon mass $m_0$ in the chiral limit. Over the years, three approaches were proposed to overcome this issue, i.e., the HB~\cite{Jenkins:1990jv,Bernard:1995dp}, the infrared (IR)~\cite{Becher:1999he}, and the EOMS~\cite{Fuchs:2003qc} schemes. The IR and the EOMS schemes are the relativistic formulations of BChPT. A brief summary and comparison of the three different approaches can be found in Ref.~\cite{Geng:2013xn}.
The EOMS scheme is different from the HB ChPT in that it retains a series of higher-order terms allowed by the covariant PC rule when removing the power-counting-breaking (PCB) terms. In recent years,
many physical observables have been successfully studied in this scheme, such as the magnetic moments~\cite{Xiao:2018rvd,Geng:2009ys,Geng:2008mf,Geng:2009hh,Liu:2018euh,Blin:2018pmj} and the masses and sigma terms~\cite{Ren:2012aj,Ren:2013oaa,Sun:2016wzh,Yao:2018ifh} of the octet, decuplet, and spin-1/2 doubly heavy baryons, the hyperon vector couplings~\cite{Geng:2009ik,Geng:2014efa}, the axial vector charges~\cite{Ledwig:2014rfa}, and pion-nucleon~\cite{Alarcon:2012kn,Chen:2012nx} and kaon-nucleon scattering~\cite{Lu:2018zof}. Inspired by these studies, we study the magnetic moments of the spin-1/2 singly charmed baryons in the EOMS scheme.
This work is organized as follows. In Sec.~II, we provide the effective Lagrangians and calculate the relevant Feynman diagrams up to ${\cal O}(p^3)$.
Results and discussions are given in Sec.~III, followed by a short summary in Sec. IV.
\section{Theoretical formalism}
The magnetic moments of singly charmed baryons are defined via the matrix elements of the electromagnetic current $J_\mu$ as follows:
\begin{eqnarray*}
\langle\psi(p_f)|J_\mu|\psi(p_i)\rangle=\bar{u}(p_f)\left[\gamma_\mu F_1^B(q^2)+\frac{i\sigma_{\mu\nu}q^\nu}{2m_B}F_2^B(q^2)\right]u(p_i),
\end{eqnarray*}
where $\bar{u}(p_f)$ and $u(p_i)$ are the Dirac spinors, $m_B$ is the singly charmed baryon mass, and $F_1^B(q^2)$ and $F_2^B(q^2)$ denote the Dirac and Pauli form factors, respectively. The
four-momentum transfer is defined as $q=p_i-p_f$. At $q^2=0$, $F_2^B(0)$ is the so-called anomalous magnetic moment, $\kappa_B$, and the magnetic moment is $\mu_B=\kappa_B+Q_B$, where $Q_B$ is the charge of the singly charmed baryon.
\begin{figure}[h!]
\centering
\includegraphics[width=4cm]{Figa.pdf}\\
\includegraphics[width=4cm]{Figb.pdf}~~~\includegraphics[width=4cm]{Figc.pdf}\\
\includegraphics[width=4cm]{Figd.pdf}~~~\includegraphics[width=4cm]{Fige.pdf}\\
\caption{Feynman diagrams contributing to the singly charmed baryon magnetic moments up to NLO. Diagram (a) contributes at LO, while the other diagrams contribute at NLO. The solid, dashed, and wiggly lines represent singly charmed baryon, Goldstone bosons, and photons, respectively. The heavy dots denote the ${\cal O}(p^2)$ vertices.}\label{Fig1}
\end{figure}
The five Feynman diagrams contributing to $\mu_B$ up to ${\cal O}(p^3)$ are shown in Fig.~\ref{Fig1}. The leading-order contribution of ${\cal O}(p^2)$ is provided by the following Lagrangian:
\begin{eqnarray}
{\cal L}_{33}^{(2)}&=&\frac{d_2}{16m_{\bar{3}}}{\rm Tr}(\bar{B}_{\bar{3}}\sigma^{\mu\nu}F_{\mu\nu}^+B_{\bar{3}})\nonumber\\
&&+\frac{d_3}{16m_{\bar{3}}}{\rm Tr}(\bar{B}_{\bar{3}}\sigma^{\mu\nu}B_{\bar{3}}){\rm Tr}(F_{\mu\nu}^+),\nonumber\\
{\cal L}_{66}^{(2)}&=&\frac{d_5}{8m_6}{\rm Tr}(\bar{B}_6\sigma^{\mu\nu}F_{\mu\nu}^+B_6)\nonumber\\
&&+\frac{d_6}{8m_6}{\rm Tr}(\bar{B}_6\sigma^{\mu\nu}B_6){\rm Tr}(F_{\mu\nu}^+),\label{eq:treeL}
\end{eqnarray}
where the numbers in the superscripts denote the chiral order, $\sigma^{\mu\nu}=\frac{i}{2}[\gamma^\mu,\gamma^\nu]$, $F_{\mu\nu}^+=|e|(u^\dag Q_hF_{\mu\nu}u+uQ_hF_{\mu\nu}u^\dag)$, $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$, and $Q_h={\rm diag}(1,0,0)$ is the charge operator of the charmed baryon. In addition, $u={\rm exp}[i\Phi/2F_\phi]$, with $\Phi$ the matrix collecting the pseudoscalar
nonet, and $F_\phi$ is the pseudoscalar decay constant. In the following analysis, we take $F_\pi=92.4~{\rm MeV}$, $F_K=1.22F_\pi$, and $F_\eta=1.3F_\pi$. In the SU(3) flavor representation, there are three kinds of singly charmed baryons, denoted as $B_{\bar{3}}$, $B_6$, and $B_6^{*\mu}$, respectively,
\begin{eqnarray*}
B_{\bar{3}}=\left(
\begin{array}{ccc}
0 & \Lambda_c^+ & \Xi_c^+\\
-\Lambda_c^+ & 0 & \Xi_c^0\\
-\Xi_c^+ & -\Xi_c^0 & 0
\end{array}
\right),
\end{eqnarray*}
\begin{eqnarray}
B_6=\left(
\begin{array}{ccc}
\Sigma_c^{++} & \frac{\Sigma_c^+}{\sqrt{2}} & \frac{\Xi_c^{'+}}{\sqrt{2}}\\
\frac{\Sigma_c^+}{\sqrt{2}} & \Sigma_c^0 & \frac{\Xi_c^{'0}}{\sqrt{2}}\\
\frac{\Xi_c^{'+}}{\sqrt{2}} & \frac{\Xi_c^{'0}}{\sqrt{2}} & \Omega_c^0
\end{array}
\right),\qquad B_6^{*\mu}=\left(
\begin{array}{ccc}
\Sigma_c^{*++} & \frac{\Sigma_c^{*+}}{\sqrt{2}} & \frac{\Xi_c^{*+}}{\sqrt{2}}\\
\frac{\Sigma_c^{*+}}{\sqrt{2}} & \Sigma_c^{*0} & \frac{\Xi_c^{*0}}{\sqrt{2}}\\
\frac{\Xi_c^{*+}}{\sqrt{2}} & \frac{\Xi_c^{*0}}{\sqrt{2}} & \Omega_c^{*0}
\end{array}
\right).
\end{eqnarray}
The spin of the $B_{\bar{3}}$ and $B_6$ states is $1/2$, while that of the $B_6^{*\mu}$ states is $3/2$.
In the numerical analysis, we take the average of the masses for each flavor multiplet, i.e., $m_{\bar{3}}=2408$~MeV, $m_6=2535$~MeV, and $m_{6^*}=2602$~MeV~\cite{Tanabashi:2018oca}. The mass differences are $\delta_1=m_6-m_{\bar{3}}=127~{\rm MeV}$, $\delta_2=m_{6^*}-m_{\bar{3}}=194~{\rm MeV}$, and $\delta_3=m_{6^*}-m_6=67~{\rm MeV}$.
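The multiplet-averaged masses and the quoted splittings are trivially checked:

```python
# multiplet-averaged masses in MeV, as quoted in the text
m_3bar, m_6, m_6s = 2408.0, 2535.0, 2602.0

delta1 = m_6 - m_3bar     # sextet - antitriplet
delta2 = m_6s - m_3bar    # spin-3/2 sextet - antitriplet
delta3 = m_6s - m_6       # spin-3/2 sextet - sextet
print(delta1, delta2, delta3)  # -> 127.0 194.0 67.0
```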
The loop diagrams arising at NLO are determined in terms of the lowest-order LECs from ${\cal L}_B^{(1)}+{\cal L}_{MB}^{(1)}+{\cal L}_M^{(2)}$, which read
\begin{eqnarray}
{\cal L}_B^{(1)}&=&\frac{1}{2}{\rm Tr}[\bar{B}_{\bar{3}}(i\slashed{D}-m_{\bar{3}})B_{\bar{3}}]+{\rm Tr}[\bar{B}_6(i\slashed{D}-m_6)B_6]\nonumber\\
&&+{\rm Tr}[\bar{B}_6^{*\mu}(-g_{\mu\nu}(i\slashed{D}-m_{6^*})+i(\gamma_\mu D_\nu+\gamma_\nu D_\mu)\nonumber\\
&&-\gamma_\mu(i\slashed{D}+m_{6^*})\gamma_\nu)B_6^{*\nu}],\nonumber\\
{\cal L}_{MB}^{(1)}&=&\frac{g_1}{2}{\rm Tr}[\bar{B}_6\slashed{u}\gamma_5B_6]+\frac{g_2}{2}{\rm Tr}[\bar{B}_6\slashed{u}\gamma_5B_{\bar{3}}+{\rm h.c.}]\nonumber\\
&&+\frac{g_3}{2}{\rm Tr}[\bar{B}_6^{*\mu}u_\mu B_6+{\rm h.c.}]+\frac{g_4}{2}{\rm Tr}[\bar{B}_6^{*\mu}u_\mu B_{\bar{3}}+{\rm h.c.}]\nonumber\\
&&+\frac{g_5}{2}{\rm Tr}[\bar{B}_6^{*\mu}\slashed{u}\gamma_5B_{6\mu}^*]+\frac{g_6}{2}{\rm Tr}[\bar{B}_{\bar{3}}\slashed{u}\gamma_5B_{\bar{3}}],\nonumber\\
{\cal L}_M^{(2)}&=&\frac{F_\phi^2}{4}{\rm Tr}[\nabla_\mu U(\nabla^\mu U)^\dag],
\end{eqnarray}
with
\begin{eqnarray}
&&D_\mu B=\partial_\mu B+\Gamma_\mu B+B\Gamma_\mu^T,\nonumber\\
&&\Gamma_\mu=\frac{1}{2}(u^\dag\partial_\mu u+u\partial_\mu u^\dag)-\frac{i}{2}(u^\dag v_\mu u+uv_\mu u^\dag)=-ieQ_hA_\mu,\nonumber\\
&&u_\mu=i(u^\dag\partial_\mu u-u\partial_\mu u^\dag)+(u^\dag v_\mu u-uv_\mu u^\dag),\nonumber\\
&&U=u^2=e^{\frac{i\Phi}{F_\phi}},\qquad\nabla_\mu U=\partial_\mu U+ieA_\mu[Q_l,U],
\end{eqnarray}
where $v_\mu$ stands for the vector source, and the charge matrix for the light quark is $Q_l={\rm diag}(2/3,-1/3,-1/3)$.
The total spin of the light quarks is 0 for the singly charmed baryon in the $B_{\bar{3}}$ state. Considering parity and angular momentum conservation, the $B_{\bar{3}}B_{\bar{3}}\phi$ vertex is forbidden, i.e., $g_6=0$.
For the $B_{\bar{3}}$ and $B_6$ states, the tree level contributions of the magnetic moments can be easily obtained from Eq.~(\ref{eq:treeL}), which are:
\begin{eqnarray}
&&\kappa_{\bar{3}}^{(a,2)}=\alpha_{\bar{3}}d_2+\beta_{\bar{3}}d_3,\nonumber\\
&&\kappa_6^{(a,2)}=\alpha_6d_5+\beta_6d_6.\label{eq:TreeCCs3f}
\end{eqnarray}
The values of $\alpha_{\bar{3}}$, $\beta_{\bar{3}}$, $\alpha_6$, and $\beta_6$ are tabulated in Tables~\ref{tab:1} and \ref{tab:2}. The four LECs $d_2$, $d_3$, $d_5$, and $d_6$ will be determined by fitting to the lattice QCD data.
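Since Eq.~(\ref{eq:TreeCCs3f}) is linear in the LECs, the tree-level anomalous magnetic moments of a whole multiplet follow directly from the tabulated coefficients. A minimal sketch (the baryon labels are ours, and any overall unit conversion to nuclear magnetons is deliberately omitted):

```python
# (alpha, beta) coefficients copied from Tables 1 and 2
alpha_beta_3bar = {"Lambda_c+": (0.5, 1.0), "Xi_c+": (0.5, 1.0), "Xi_c0": (0.0, 1.0)}
alpha_beta_6 = {"Sigma_c++": (1.0, 1.0), "Sigma_c+": (0.5, 1.0), "Sigma_c0": (0.0, 1.0),
                "Xi_c'+": (0.5, 1.0), "Xi_c'0": (0.0, 1.0), "Omega_c0": (0.0, 1.0)}

def kappa_tree(coeffs, d_a, d_b):
    """kappa_B = alpha_B * d_a + beta_B * d_b for every baryon in one multiplet."""
    return {name: a * d_a + b * d_b for name, (a, b) in coeffs.items()}

# illustrative call; the numerical LEC values here are placeholders
print(kappa_tree(alpha_beta_3bar, -1.0, 0.5))
```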
\begin{table}[htb]
\caption{\label{tab:1}Coefficients of the tree level contributions of Eq.~(\ref{eq:TreeCCs3f}) for the $B_{\bar{3}}$ states.}
\begin{center}
\begin{tabular}{cccc}
\hline
\hline
~~~~~~ & ~~~~~~$\Lambda_c^+$~~~~~~ & ~~~~~~$\Xi_c^+$~~~~~~ & ~~~~~~$\Xi_c^0$~~~~~~\\
\hline
$\alpha_{\bar{3}}$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $0$\\
\hline
$\beta_{\bar{3}}$ & $1$ & $1$ & $1$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
At ${\cal O}(p^3)$, the loop contributions to the magnetic moments, which come from diagrams (b), (c), (d), and (e) in Fig.~\ref{Fig1}, are written as,
\begin{eqnarray}
\kappa_{\bar{3}}^{(3)}&=&\frac{1}{4\pi^2}
\left(\sum_{\phi=\pi,K}\frac{g_2^2}{F_\phi^2}\xi_{B_{\bar{3}}\phi,\delta_1}^{(3,b)}H_{B_{\bar{3}}}^{(b)}(\delta_1,m_\phi)\right.\nonumber\\
&&+\sum_{\phi=\pi,K}\frac{g_4^2}{F_\phi^2}\xi_{B_{\bar{3}}\phi,\delta_2}^{(3,c)}H_{B_{\bar{3}}}^{(c)}(\delta_2,m_\phi)\nonumber\\
&&+\sum_{\phi=\pi,K,\eta}\frac{g_2^2}{F_\phi^2}\xi_{B_{\bar{3}}\phi,\delta_1}^{(3,d)}H_{B_{\bar{3}}}^{(d)}(\delta_1,m_\phi)\nonumber\\
&&\left.+\sum_{\phi=\pi,K,\eta}\frac{g_4^2}{F_\phi^2}\xi_{B_{\bar{3}}\phi,\delta_2}^{(3,e)}H_{B_{\bar{3}}}^{(e)}(\delta_2,m_\phi)\right),\nonumber\\
\kappa_6^{(3)}&=&\frac{1}{4\pi^2}\left(\sum_{\phi=\pi,K}\frac{g_1^2}{F_\phi^2}\xi_{B_6\phi}^{(3,b)}H_{B_6}^{(b)}(0,m_\phi)\right.\nonumber\\
&&+\sum_{\phi=\pi,K}\frac{g_2^2}{F_\phi^2}\xi_{B_6\phi,\delta_1}^{(3,b)}H_{B_6}^{(b)}(\delta_1,m_\phi)\nonumber\\
&&+\sum_{\phi=\pi,K}\frac{g_3^2}{F_\phi^2}\xi_{B_6\phi,\delta_3}^{(3,c)}H_{B_6}^{(c)}(\delta_3,m_\phi)\nonumber\\
&&+\sum_{\phi=\pi,K,\eta}\frac{g_1^2}{F_\phi^2}\xi_{B_6\phi}^{(3,d)}H_{B_6}^{(d)}(0,m_\phi)\nonumber\\
&&+\sum_{\phi=\pi,K,\eta}\frac{g_2^2}{F_\phi^2}\xi_{B_6\phi,\delta_1}^{(3,d)}H_{B_6}^{(d)}(\delta_1,m_\phi)\nonumber\\
&&\left.+\sum_{\phi=\pi,K,\eta}\frac{g_3^2}{F_\phi^2}\xi_{B_6\phi,\delta_3}^{(3,e)}H_{B_6}^{(e)}(\delta_3,m_\phi)\right),\label{eq:LoopCCs3f}
\end{eqnarray}
with the coefficients $\xi_{B_{\bar{3}}\phi,\delta_i}^{(3;b,c,d,e)}$ and $\xi_{B_6\phi,\delta_i}^{(3;b,c,d,e)}$ listed in Tables~\ref{tab:3} and \ref{tab:4}. The explicit expressions of the loop functions $H_{B_{\bar{3}}}^{(b,c,d,e)}(\delta_i,m_\phi)$ and $H_{B_6}^{(b,c,d,e)}(\delta_i,m_\phi)$ can be found in the Appendix.
Once the loop functions are obtained in the EOMS scheme, their HB counterparts follow easily from $1/m_0$ expansions. We have
checked that our results agree with those of Ref.~\cite{Wang:2018xoc}. In the following section, for the sake of comparison, we also study the performance of
the HBChPT in describing the lattice QCD data of Refs.~\cite{Bahtiyar:2016dom,Can:2013tna,Can:2015exa}. It should be noted that in the following section, unless otherwise stated, the HBChPT results
refer to those obtained in the present work, not those of Ref.~\cite{Wang:2018xoc}.
\begin{table}[htb]
\caption{\label{tab:2}Coefficients of the tree level contributions of Eq.~(\ref{eq:TreeCCs3f}) for the $B_6$ states.}
\begin{center}
\begin{tabular}{ccccccc}
\hline
\hline
~~~~~~ & ~~~$\Sigma_c^{++}$~~~ & ~~~$\Sigma_c^+$~~~ & ~~~$\Sigma_c^0$~~~ & ~~~$\Xi_c^{'+}$~~~ & ~~~$\Xi_c^{'0}$~~~ & ~~~$\Omega_c^0$~~~\\
\hline
$\alpha_6$ & $1$ & $\frac{1}{2}$ & $0$ & $\frac{1}{2}$ & $0$ & $0$\\
\hline
$\beta_6$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\caption{\label{tab:3}Coefficients of the loop contributions of Eq.~(\ref{eq:LoopCCs3f}) for the $B_{\bar{3}}$ states.}
\begin{center}
\begin{tabular}{cccc}
\hline
\hline
~~~~~~ & ~~~~~~$\Lambda_c^+$~~~~~~ & ~~~~~~$\Xi_c^+$~~~~~~ & ~~~~~~$\Xi_c^0$~~~~~~\\
\hline
$\xi_{B_{\bar{3}}\pi,\delta_1}^{(3,b)}$ & $0$ & $1$ & $-1$\\
\hline
$\xi_{B_{\bar{3}}K,\delta_1}^{(3,b)}$ & $1$ & $0$ & $-1$\\
\hline
$\xi_{B_{\bar{3}}\pi,\delta_2}^{(3,c)}$ & $0$ & $1$ & $-1$\\
\hline
$\xi_{B_{\bar{3}}K,\delta_2}^{(3,c)}$ & $1$ & $0$ & $-1$\\
\hline
$\xi_{B_{\bar{3}}\pi,\delta_1}^{(3,d)}$ & $6$ & $\frac{1}{2}$ & $1$\\
\hline
$\xi_{B_{\bar{3}}K,\delta_1}^{(3,d)}$ & $1$ & $5$ & $1$\\
\hline
$\xi_{B_{\bar{3}}\eta,\delta_1}^{(3,d)}$ & $0$ & $\frac{3}{2}$ & $0$\\
\hline
$\xi_{B_{\bar{3}}\pi,\delta_2}^{(3,e)}$ & $3$ & $\frac{1}{4}$ & $\frac{1}{2}$\\
\hline
$\xi_{B_{\bar{3}}K,\delta_2}^{(3,e)}$ & $\frac{1}{2}$ & $\frac{5}{2}$ & $\frac{1}{2}$\\
\hline
$\xi_{B_{\bar{3}}\eta,\delta_2}^{(3,e)}$ & $0$ & $\frac{3}{4}$ & $0$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\caption{\label{tab:4}Coefficients of the loop contributions of Eq.~(\ref{eq:LoopCCs3f}) for the $B_6$ states.}
\begin{center}
\begin{tabular}{ccccccc}
\hline
\hline
~~~~~~ & ~~~$\Sigma_c^{++}$~~~ & ~~~$\Sigma_c^+$~~~ & ~~~$\Sigma_c^0$~~~ & ~~~$\Xi_c^{'+}$~~~ & ~~~$\Xi_c^{'0}$~~~ & ~~~$\Omega_c^0$~~~\\
\hline
$\xi_{B_6\pi}^{(3,b)}$ & $1$ & $0$ & $-1$ & $\frac{1}{2}$ & $-\frac{1}{2}$ & $0$\\
\hline
$\xi_{B_6K}^{(3,b)}$ & $1$ & $\frac{1}{2}$ & $0$ & $0$ & $-\frac{1}{2}$ & $-1$\\
\hline
$\xi_{B_6\pi,\delta_1}^{(3,b)}$ & $2$ & $0$ & $-2$ & $1$ & $-1$ & $0$\\
\hline
$\xi_{B_6K,\delta_1}^{(3,b)}$ & $2$ & $1$ & $0$ & $0$ & $-1$ & $-2$\\
\hline
$\xi_{B_6\pi,\delta_3}^{(3,c)}$ & $1$ & $0$ & $-1$ & $\frac{1}{2}$ & $-\frac{1}{2}$ & $0$\\
\hline
$\xi_{B_6K,\delta_3}^{(3,c)}$ & $1$ & $\frac{1}{2}$ & $0$ & $0$ & $-\frac{1}{2}$ & $-1$\\
\hline
$\xi_{B_6\pi}^{(3,d)}$ & $3$ & $2$ & $1$ & $\frac{1}{4}$ & $\frac{1}{2}$ & $0$\\
\hline
$\xi_{B_6K}^{(3,d)}$ & $1$ & $\frac{1}{2}$ & $0$ & $\frac{5}{2}$ & $\frac{1}{2}$ & $1$\\
\hline
$\xi_{B_6\eta}^{(3,d)}$ & $\frac{2}{3}$ & $\frac{1}{3}$ & $0$ & $\frac{1}{12}$ & $0$ & $0$\\
\hline
$\xi_{B_6\pi,\delta_1}^{(3,d)}$ & $2$ & $2$ & $2$ & $\frac{1}{2}$ & $1$ & $0$\\
\hline
$\xi_{B_6K,\delta_1}^{(3,d)}$ & $2$ & $1$ & $0$ & $1$ & $1$ & $2$\\
\hline
$\xi_{B_6\eta,\delta_1}^{(3,d)}$ & $0$ & $0$ & $0$ & $\frac{3}{2}$ & $0$ & $0$\\
\hline
$\xi_{B_6\pi,\delta_3}^{(3,e)}$ & $\frac{3}{2}$ & $1$ & $\frac{1}{2}$ & $\frac{1}{8}$ & $\frac{1}{4}$ & $0$\\
\hline
$\xi_{B_6K,\delta_3}^{(3,e)}$ & $\frac{1}{2}$ & $\frac{1}{4}$ & $0$ & $\frac{5}{4}$ & $\frac{1}{4}$ & $\frac{1}{2}$\\
\hline
$\xi_{B_6\eta,\delta_3}^{(3,e)}$ & $\frac{1}{3}$ & $\frac{1}{6}$ & $0$ & $\frac{1}{24}$ & $0$ & $0$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\section{Results and discussions}
\begin{table*}[t]
\caption{\label{tab:5}Magnetic moments of singly charmed baryons at different $m_\pi$~\cite{Bahtiyar:2016dom,Can:2013tna,Can:2015exa,Bahtiyar:2015sga}, in units of nuclear magneton~[$\mu_N$].}
\begin{center}
\begin{tabular}{cccccccc}
\hline
\hline
~~~$m_\pi$~(MeV)~~~ & ~~~$\Xi_c^+$~~~ & ~~~$\Xi_c^0$~~~ & ~~~$\Sigma_c^{++}$~~~ & ~~~$\Sigma_c^0$~~~ & ~~~$\Xi_c^{'+}$~~~ & ~~~$\Xi_c^{'0}$~~~ & ~~~$\Omega_c^0$~~~\\
\hline
~~~Phys.~~~ & $\cdots$ & $\cdots$ & $1.499(202)$ & $-0.875(103)$ & $\cdots$ & $\cdots$ & $-0.667(96)$\\
\hline
~~~$156$~~~ & $0.235(25)$ & $0.192(17)$ & $\cdots$ & $\cdots$ & $0.315(141)$ & $-0.599(71)$ & $-0.688(31)$\\
\hline
~~~$300$~~~ & $\cdots$ & $\cdots$ & $1.867(388)$ & $-0.929(206)$ & $\cdots$ & $\cdots$ & $-0.640(55)$\\
\hline
~~~$410$~~~ & $\cdots$ & $\cdots$ & $1.591(358)$ & $-0.897(223)$ & $\cdots$ & $\cdots$ & $-0.621(44)$\\
\hline
~~~$570$~~~ & $\cdots$ & $\cdots$ & $1.289(161)$ & $-0.724(80)$ & $\cdots$ & $\cdots$ & $-0.658(46)$\\
\hline
~~~$700$~~~ & $\cdots$ & $\cdots$ & $1.447(125)$ & $-0.757(67)$ & $\cdots$ & $\cdots$ & $-0.701(56)$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
In this section, we determine the LECs $d_2$, $d_3$, $d_5$, and $d_6$ by fitting to the lattice QCD data of Refs.~\cite{Bahtiyar:2016dom,Can:2013tna,Can:2015exa}, which are collected in Table~\ref{tab:5} for easy reference. Because of the limited lattice QCD data, the other LECs $g_{1-4}$ are fixed by the quark model and the heavy quark spin-flavor symmetry. Their values are $g_1=0.98$, $g_2=-\sqrt{\frac{3}{8}}g_1=-0.60$, $g_3=\frac{\sqrt{3}}{2}g_1=0.85$, and $g_4=-\sqrt{3}g_2=1.04$~\cite{Jiang:2015xqa,Jiang:2014ena}. In our least-squares fit, the $\chi^2$ as a function of the LECs is defined as
\begin{eqnarray}
\chi^2({\rm C_X})=\sum_{i=1}^n\frac{(\mu_i^{\rm th}({\rm C_X})-\mu_i^{\rm LQCD})^2}{\sigma_i^2},
\end{eqnarray}
where $C_X$ collectively denotes all the LECs, $\sigma_i$ corresponds to the uncertainty of each lattice QCD datum, and $\mu_i^{\rm th}({\rm C_X})$ and $\mu_i^{\rm LQCD}$ stand for the magnetic moments obtained in the BChPT and those of lattice QCD in Table~\ref{tab:5}, respectively.
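Because the tree-level contribution is linear in the LECs, the $\chi^2$ minimization reduces to weighted linear least squares when the loop pieces are dropped. The sketch below illustrates this step on the three physical-point lattice values of Table~\ref{tab:5}; it keeps the tree level only and ignores the charge term, so the resulting numbers are purely illustrative and are not the fitted LECs of Table~\ref{tab:6}:

```python
import numpy as np

# design matrix: row i holds the (alpha_6, beta_6) coefficients of datum i
A = np.array([[1.0, 1.0],     # Sigma_c++ : (1, 1)
              [0.0, 1.0],     # Sigma_c0  : (0, 1)
              [0.0, 1.0]])    # Omega_c0  : (0, 1)
mu_lat = np.array([1.499, -0.875, -0.667])   # physical-point lattice values
sigma = np.array([0.202, 0.103, 0.096])      # their uncertainties
W = np.diag(1.0 / sigma**2)

# minimizing chi^2 = (mu_lat - A d)^T W (mu_lat - A d) gives the normal equations
d = np.linalg.solve(A.T @ W @ A, A.T @ W @ mu_lat)
res = mu_lat - A @ d
chi2 = float(res @ W @ res)
print(round(chi2, 2))  # -> 2.18
```

The first datum is reproduced exactly (its coefficient of $d_5$ appears nowhere else), while the last two share identical tree-level coefficients and therefore cannot both be fitted, which is why the full analysis needs the loop contributions.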
In order to decompose the contributions of the loop diagrams, we consider two cases. In case 1, all the allowed intermediate baryons are taken into account, while in case 2, only intermediate baryons of the same type as the external baryons are
considered. Fitting to the lattice QCD data of Table~\ref{tab:5} with $g_{1-4}$ fixed, the resulting LECs and $\chi^2$ are listed in Table~\ref{tab:6}. One notes
that the EOMS BChPT describes the lattice QCD data better than the HB BChPT in both cases.
\begin{table}[h!]
\caption{\label{tab:6} LECs $d_2$, $d_3$, $d_5$, and $d_6$ determined by fitting to the lattice QCD data, with $g_{1-4}$ fixed.
In case 1 all the allowed intermediate baryons in the loop diagrams are taken into account, while in case 2
only intermediate baryons of the same type as those of the external baryons in the loop diagrams are considered. }
\begin{center}
\begin{tabular}{cccccc}
\hline
\hline
\multirow{2}{0.5cm}{} & \multicolumn{2}{c}{Case 1} & & \multicolumn{2}{c}{Case 2}\\
\cline{2-3}\cline{5-6}
& EOMS 1 & HB 1 & & EOMS 2 & HB 2\\
\hline
$d_2$ & $-1.25(15)$ & $-2.32(15)$ & & $-1.78(15)$ & $-1.78(15)$ \\
\hline
$d_3$ & $2.20(4)$ & $0.65(4)$ & & $0.49(4)$ & $0.49(4)$ \\
\hline
$d_5$ & $7.83(34)$ & $13.49(34)$ & & $5.08(34)$ & $8.69(34)$ \\
\hline
$d_6$ & $-3.76(5)$ & $-4.93(5)$ & & $-2.66(5)$ & $-3.40(5)$ \\
\hline
$g_1$ & $0.98$ & $0.98$ & & $0.98$ & $0.98$ \\
\hline
$g_2$ & $-0.60$ & $0$ & & $-0.60$ & $0$ \\
\hline
$g_3$ & $0.85$ & $0.85$ & & $0.85$ & $0.85$ \\
\hline
$g_4$ & $1.04$ & $0$ & & $1.04$ & $0$ \\
\hline
$\chi_{\rm min}^2$ & $41.42$ & $131.05$ & & $15.10$ & $34.35$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
Note that we do not fit to the lattice QCD data obtained at $m_\pi=700$~MeV, which are probably beyond the range of validity of the NLO ChPT. Furthermore, as can be seen in Fig.~\ref{Fig3},
the difference between the lattice QCD value and the ChPT prediction for $\mu_{\Xi_c^{'0}}$ is relatively large. Thus, we do not include the lattice QCD magnetic moment
of $\Xi_c^{'0}$ in our fit either.
For the sake of comparison with the lattice QCD data, in Fig.~\ref{Fig2}, we present the predicted magnetic moments of the singly charmed anti-triplet baryons as a function of $m_\pi^2$. It is seen that the EOMS BChPT results are of the same quality as those of the HB BChPT for $\Xi_c^+$ and $\Xi_c^0$. However, surprisingly, the EOMS and HB predictions for $\Lambda_c^+$ in case 1 are very different. From Table~\ref{tab:7}, we note that in the HBChPT the contributions from the intermediate anti-triplet and sextet baryons cancel each other at ${\cal O}(p^3)$; thus, at this order, the loop corrections are quite small. In the EOMS scheme, however, the loop contributions are rather large, especially for $\Lambda_c^+$. In addition, we note that the main loop contributions come from the baryon pole diagram. Therefore, the large difference in the prediction of $\mu_{\Lambda_c^+}$ is caused
by the absence of the baryon pole diagram in the HB BChPT at $\mathcal{O}(p^3)$.
In Fig.~\ref{Fig3}, we plot the predicted magnetic moments of the singly charmed sextet baryons as a function of $m_\pi^2$, in comparison with the lattice QCD data. The EOMS BChPT results are in better agreement with the
lattice QCD data than those of the HB BChPT. As shown in Table~\ref{tab:6}, on average the description of the lattice QCD data becomes worse when the intermediate anti-triplet states are
included. Therefore, the results obtained in case 2 are on average in better agreement with the lattice QCD data, as also noted in Ref.~\cite{Wang:2018xoc}.
In Tables \ref{tab:7} and \ref{tab:8}, we decompose the loop contributions mediated by the $\bar{3}$, $6$, and $6^*$ states. One can see that the convergence pattern in case 2 is generally better than that in case 1, with the possible exception of $\Sigma^0_c$. Therefore, we take the predictions obtained in case 2 as our final results.
In Figs.~\ref{Fig4} and \ref{Fig5}, we compare the predicted magnetic moments of all the singly charmed baryons at the physical point with those obtained in other approaches. We note that the results of the different approaches are rather scattered. However, our results agree better with those of the HBChPT of Ref.~\cite{Wang:2018xoc}, although
we have chosen different strategies to determine some of the LECs. Clearly,
further experimental or lattice QCD studies are needed to pin down their values and to discriminate between the different theoretical approaches.
\begin{figure*}[h!]
\centering
\includegraphics[width=16cm]{3fstatesmpi2.pdf}\\
\caption{Magnetic moments of the singly charmed anti-triplet baryons as a function of $m_\pi^2$. The solid black nablas represent the corresponding lattice QCD data that are fitted.}\label{Fig2}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=16cm]{6fstatesmpi2.pdf}\\
\caption{Magnetic moments of the singly charmed sextet baryons as a function of $m_\pi^2$. The solid black nablas refer to the corresponding lattice QCD data that are fitted. The hollow nablas stand for the lattice QCD physical values. The blue nablas denote the lattice QCD data not used in our fitting.}\label{Fig3}
\end{figure*}
\begin{table*}[htb]
\caption{\label{tab:7}Decomposition of the loop contributions to the magnetic moments of singly charmed baryons. The subscripts $\bar{3}$, $6$, and $6^*$ denote the loop diagrams with the intermediate $\bar{3}$, 6, and $6^*$ states at ${\cal O}(p^3)$, respectively.}
\begin{center}
\begin{tabular}{ccccccccccccccc}
\hline
\hline
\multirow{2}{0.5cm}{} & & \multicolumn{5}{c}{EOMS 1} & & \multicolumn{5}{c}{HB 1} & & \multirow{2}{2cm}{\rm LQCD~\cite{Bahtiyar:2015sga,Can:2013tna}}\\
\cline{3-7}\cline{9-13}
& & ${\cal O}(p^2)$ & ${\cal O}(p^3)_{\bar{3}}$ & ${\cal O}(p^3)_6$ & ${\cal O}(p^3)_{6^*}$ & $\mu_{\rm tot}$ & & ${\cal O}(p^2)$ & ${\cal O}(p^3)_{\bar{3}}$ & ${\cal O}(p^3)_6$ & ${\cal O}(p^3)_{6^*}$ & $\mu_{\rm tot}$ & &\\
\hline
\multirow{3}{0.5cm}{$B_{\bar{3}}$} & $\mu_{\Lambda_c^+}$ & $1.005$ & $\cdots$ & $0.035$ & $-1.272$ & $-0.232$ & $~~~$ & $0.191$ & $\cdots$ &$-0.263$ & $0.280$ & $0.208$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Xi_c^+}$ & $1.005$ & $\cdots$ & $0.141$ & $-0.913$ & $0.233$ & & $0.191$ & $\cdots$ & $-0.169$ & $0.215$ & $0.237$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Xi_c^0}$ & $0.859$ & $\cdots$ & $0.330$ & $-0.996$ & $0.193$ & & $0.253$ & $\cdots$ & $0.432$ & $-0.495$ & $0.190$ & & $\cdots$\\
\hline
\multirow{6}{0.5cm}{$B_6$} & $\mu_{\Sigma_c^{++}}$ & $2.251$ & $-0.293$ & $-0.444$ & $0.090$ & $1.604$ & & $3.916$ & $-0.319$ & $-0.988$ & $0.288$ & $2.897$ & & $1.499(202)$\\
\cline{2-15}
& $\mu_{\Sigma_c^+}$ & $0.428$ & $-0.192$ & $-0.094$ & $-0.042$ & $0.100$ & & $1.044$ & $-0.243$ & $-0.349$ & $0.091$ & $0.543$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Sigma_c^0}$ & $-1.394$ & $-0.090$ & $0.256$ & $-0.175$ & $-1.403$ & & $-1.828$ & $-0.168$ & $0.290$ & $-0.106$ & $-1.812$ & & $-0.875(103)$\\
\cline{2-15}
& $\mu_{\Xi_c^{'+}}$ & $0.428$ & $0.067$ & $0.112$ & $-0.048$ & $0.559$ & & $1.044$ & $0.084$ & $-0.145$ & $0.053$ & $1.036$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Xi_c^{'0}}$ & $-1.394$ & $0.135$ & $0.380$ & $-0.198$ & $-1.077$ & & $-1.828$ & $0.159$ & $0.494$ & $-0.144$ & $-1.319$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Omega_c^0}$ & $-1.394$ & $0.361$ & $0.505$ & $-0.220$ & $-0.748$ & & $-1.828$ & $0.486$ & $0.698$ & $-0.182$ & $-0.826$ & & $-0.667(96)$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[htb]
\caption{\label{tab:8}Same as Table~\ref{tab:7}, but for case 2.}
\begin{center}
\begin{tabular}{ccccccccccccccc}
\hline
\hline
\multirow{2}{0.5cm}{} & & \multicolumn{5}{c}{EOMS 2} & & \multicolumn{5}{c}{HB 2} & & \multirow{2}{2cm}{\rm LQCD~\cite{Bahtiyar:2015sga,Can:2013tna}}\\
\cline{3-7}\cline{9-13}
& & ${\cal O}(p^2)$ & ${\cal O}(p^3)_{\bar{3}}$ & ${\cal O}(p^3)_6$ & ${\cal O}(p^3)_{6^*}$ & $\mu_{\rm tot}$ & & ${\cal O}(p^2)$ & ${\cal O}(p^3)_{\bar{3}}$ & ${\cal O}(p^3)_6$ & ${\cal O}(p^3)_{6^*}$ & $\mu_{\rm tot}$ & &\\
\hline
\multirow{3}{0.5cm}{$B_{\bar{3}}$} & $\mu_{\Lambda_c^+}$ & $0.235$ & $\cdots$ & $\cdots$ & $\cdots$ & $0.235$ & $~~~$ & $0.235$ & $\cdots$ & $\cdots$ & $\cdots$ & $0.235$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Xi_c^+}$ & $0.235$ & $\cdots$ & $\cdots$ & $\cdots$ & $0.235$ & & $0.235$ & $\cdots$ & $\cdots$ & $\cdots$ & $0.235$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Xi_c^0}$ & $0.192$ & $\cdots$ & $\cdots$ & $\cdots$ & $0.192$ & & $0.192$ & $\cdots$ & $\cdots$ & $\cdots$ & $0.192$ & & $\cdots$\\
\hline
\multirow{6}{0.5cm}{$B_6$} & $\mu_{\Sigma_c^{++}}$ & $1.639$ & $\cdots$ & $-0.444$ & $0.090$ & $1.285$ & & $2.703$ & $\cdots$ & $-0.988$ & $0.288$ & $2.003$ & & $1.499(202)$\\
\cline{2-15}
& $\mu_{\Sigma_c^+}$ & $0.326$ & $\cdots$ & $-0.094$ & $-0.042$ & $0.190$ & & $0.721$ & $\cdots$ & $-0.349$ & $0.091$ & $0.463$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Sigma_c^0}$ & $-0.986$ & $\cdots$ & $0.256$ & $-0.175$ & $-0.905$ & & $-1.261$ & $\cdots$ & $0.290$ & $-0.106$ & $-1.077$ & & $-0.875(103)$\\
\cline{2-15}
& $\mu_{\Xi_c^{'+}}$ & $0.326$ & $\cdots$ & $0.112$ & $-0.048$ & $0.390$ & & $0.721$ & $\cdots$ & $-0.145$ & $0.053$ & $0.629$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Xi_c^{'0}}$ & $-0.986$ & $\cdots$ & $0.380$ & $-0.197$ & $-0.803$ & & $-1.261$ & $\cdots$ & $0.494$ & $-0.144$ & $-0.911$ & & $\cdots$\\
\cline{2-15}
& $\mu_{\Omega_c^0}$ & $-0.986$ & $\cdots$ & $0.504$ & $-0.220$ & $-0.702$ & & $-1.261$ & $\cdots$ & $0.698$ & $-0.182$ & $-0.745$ & & $-0.667(96)$\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\vspace{3cm}
\begin{figure*}[h!]
\centering
\includegraphics[width=12cm]{3fstatesMM.pdf}\\
\caption{Magnetic moments of the anti-triplet baryons obtained in different approaches. The light-blue bands represent the result obtained in the present work. The others are taken from Ref.~\cite{Barik:1984tq} (N. Barik et al., 83), Ref.~\cite{JuliaDiaz:2004vh} (B. Julia-Diaz et al., 04), Ref.~\cite{Kumar:2005ei} (S. Kumar et al., 05), Ref.~\cite{Faessler:2006ft} (A. Faessler et al., 06), Ref.~\cite{Patel:2007gx} (B. Patel et al., 08), Ref.~\cite{Sharma:2010vv} (N. Sharma et al., 10), Ref.~\cite{Bernotas:2012nz} (A. Bernotas et al., 12), and Ref.~\cite{Wang:2018xoc} (HB ChPT, 18).}\label{Fig4}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=16cm]{6fstatesMM.pdf}\\
\caption{Same as Fig.~\ref{Fig4}, but for the sextet baryons. Additional data are taken from Ref.~\cite{Zhu:1997as} (S.-L Zhu et al., 97), Ref.~\cite{Yang:2018uoj} (G.-S Yang et al., 18), Ref.~\cite{Kim:2018nqf} (J. Y. Kim et al., 18), Ref.~\cite{Can:2013tna} (LQCD, 14), and Ref.~\cite{Bahtiyar:2015sga} (LQCD, 15).}\label{Fig5}
\vspace{5cm}
\end{figure*}
\section{Summary}
Motivated by the recent experimental progress on heavy flavor hadrons, we have studied the magnetic moments of the singly charmed baryons in
the covariant baryon chiral perturbation theory (BChPT) up to the next-to-leading order. Using the quark model and the heavy quark spin flavor symmetry to fix some of the low energy constants, we determined the rest by fitting to the lattice QCD data. We compared our results with those of the heavy baryon (HB) ChPT and found that on average the lattice QCD quark mass dependent data can be better described by
the covariant BChPT, consistent with previous studies. In addition, we found that the baryon pole diagram, which is absent in the HB ChPT, can
play an important role in certain cases.
Compared with the results of other approaches, our predicted magnetic moments for the anti-triplets are relatively small. The same is true for the
$\Sigma_c^{++}$, $\Sigma_c^+$, and $\Xi_c^{'+}$. On the other hand, our results for $\Sigma^0_c$, $\Xi_c^{'0}$, and $\Omega_c^0$ are relatively large (small in absolute value). It is not clear how to understand such a pattern at present. We hope that future lattice QCD or experimental studies can
help us gain more insight into these important quantities and better understand the singly charmed baryons.
\section{Acknowledgements}
RXS thanks Jun-Xu Lu and Xiu-Lei Ren for useful discussions.
This work is partly supported by the National Natural Science Foundation of China under Grants No. 11522539 and No. 11735003, and the Fundamental Research Funds for the Central Universities.
\section{Appendix}
The pertinent loop functions, with the PCB terms removed, are given here.
\vspace{8cm}
\begin{figure*}
\begin{eqnarray}
&&H_{B_{\bar{3}}}^{(b)}(\delta_1,m_\phi)=H^{(b)}(m_{\bar{3}},\delta_1,m_\phi),\qquad H_{B_{\bar{3}}}^{(d)}(\delta_1,m_\phi)=H^{(d)}(m_{\bar{3}},\delta_1,m_\phi),\nonumber\\
&&H_{B_{\bar{3}}}^{(c)}(\delta_2,m_\phi)=\left\{
\begin{array}{c}
H_{\delta<m_\phi}^{(c)}(m_{\bar{3}},\delta_2,m_\phi),\qquad (\delta_2<m_\phi)\\
H_{\delta>m_\phi}^{(c)}(m_{\bar{3}},\delta_2,m_\phi),\qquad (\delta_2>m_\phi)
\end{array}
\right.\nonumber\\
&&H_{B_{\bar{3}}}^{(e)}(\delta_2,m_\phi)=\left\{
\begin{array}{c}
H_{\delta<m_\phi}^{(e)}(m_{\bar{3}},\delta_2,m_\phi),\qquad (\delta_2<m_\phi)\\
H_{\delta>m_\phi}^{(e)}(m_{\bar{3}},\delta_2,m_\phi),\qquad (\delta_2>m_\phi)
\end{array}
\right.\nonumber\\
&&H_{B_6}^{(b)}(0,m_\phi)=H^{(b)}(m_6,0,m_\phi),\qquad H_{B_6}^{(b)}(\delta_1,m_\phi)=H^{(b)}(m_6,-\delta_1,m_\phi)\nonumber\\
&&H_{B_6}^{(d)}(0,m_\phi)=H^{(d)}(m_6,0,m_\phi),\qquad H_{B_6}^{(d)}(\delta_1,m_\phi)=H^{(d)}(m_6,-\delta_1,m_\phi)\nonumber\\
&&H_{B_6}^{(c)}(\delta_3,m_\phi)=H_{\delta<m_\phi}^{(c)}(m_6,\delta_3,m_\phi),\qquad (\delta_3<m_\phi)\nonumber\\
&&H_{B_6}^{(e)}(\delta_3,m_\phi)=H_{\delta<m_\phi}^{(e)}(m_6,\delta_3,m_\phi),\qquad (\delta_3<m_\phi)
\end{eqnarray}
\begin{eqnarray}
H^{(b)}(m_B,0,m_\phi)&=&-\frac{1}{4\pi^2}\left[2m_{\phi}^2+\frac{m_{\phi}^2}{m_{B}^2}(2m_{B}^2-m_{\phi}^2)
\log\left(\frac{m_{\phi}^2}{m_{B}^2}\right)\right.
+\left.\frac{2m_{\phi}(m_{\phi}^4-4m_{\phi}^2m_{B}^2+2m_{B}^2)}{m_{B}^2\sqrt{4m_{B}^2-m_{\phi}^2}}
\arccos(\frac{m_{\phi}}{2m_{B}})\right],
\end{eqnarray}
\begin{eqnarray}
H^{(b)}(m_B,\delta,m_\phi)&=&\frac{m_B}{8 \pi ^2}\int_0^1dx\int_0^{1-x}dy\left\{\frac{x^4 m_B^3+\delta x^3 m_B^2}{x \left(m_B+\delta \right){}^2+(x-1) \left(x m_B^2-m_{\phi }^2\right)}\right.\nonumber\\
&&+\left[\left(4 x^2+4 x-2\right) m_B+\delta (3 x-1)\right] \log \left(\frac{x \left(m_B+\delta \right){}^2+(x-1) \left(x m_B^2-m_{\phi }^2\right)}{\mu ^2}\right)\nonumber\\
&&\left.-2 \left(2 x^2+2 x-1\right) m_B \log \left(\frac{x^2 m_B^2}{\mu ^2}\right)-x^2 m_B-\delta +4 \delta x\right\},\qquad(0<\delta<m_\phi)
\end{eqnarray}
\begin{eqnarray}
H^{(d)}(m_B,0,m_\phi)&=&-\frac{1}{4\pi^2}\left[2m_{\phi}^{2}+\frac{m_{\phi}^2}{m_{B}^2}(m_{B}^2-m_{\phi}^2)
\log\left(\frac{m_{\phi}^2}{m_{B}^2}\right)\right.
+\left.\frac{2m_{\phi}^3(m_{\phi}^2-3m_{B}^2)}{m_{B}^{2}\sqrt{4m_{B}^2-m_{\phi}^2}}\arccos(\frac{m_{\phi}}{2m_{B}})\right],
\end{eqnarray}
\begin{eqnarray}
H^{(d)}(m_B,\delta,m_\phi)&=&-\frac{\left(2 m_B+\delta \right){}^2}{16 \pi ^2 m_B^4} \left\{2 \left[m_B^2 \left(m_{\phi }^2-2 \delta ^2\right)+3 \delta m_B \left(m_{\phi }-\delta \right) \left(\delta +m_{\phi }\right)-\left(m_{\phi }^2-\delta ^2\right){}^2\right] \log \left(\frac{m_{\phi }}{m_B+\delta }\right)\right.\nonumber\\
&&\left.-2 \sqrt{\frac{\left(m_{\phi }-\delta \right) \left(\delta +m_{\phi }\right)}{\left(2 m_B+\delta -m_{\phi }\right) \left(2 m_B+\delta +m_{\phi }\right)}} \left[m_B^2 \left(8 \delta ^2-3 m_{\phi }^2\right)+5 \delta m_B \left(\delta ^2-m_{\phi }^2\right)+4 \delta m_B^3\right.\right.\nonumber\\
&&\left.+\left(m_{\phi }^2-\delta ^2\right){}^2\right] \tan ^{-1}\left(\frac{-2 \delta m_B-2 m_B^2-\delta ^2+m_{\phi }^2}{\sqrt{\left(m_{\phi }-\delta \right) \left(\delta +m_{\phi }\right) \left(2 m_B+\delta -m_{\phi }\right) \left(2 m_B+\delta +m_{\phi }\right)}}\right)\nonumber\\
&&+2 \left[m_B^2 \left(8 \delta ^2-3 m_{\phi }^2\right)+5 \delta m_B \left(\delta ^2-m_{\phi }^2\right)+4 \delta m_B^3+\left(m_{\phi }^2-\delta ^2\right){}^2\right]\nonumber\\
&&\cdot\sqrt{\frac{\left(m_{\phi }-\delta \right) \left(\delta +m_{\phi }\right)}{\left(2 m_B+\delta -m_{\phi }\right) \left(2 m_B+\delta +m_{\phi }\right)}}\nonumber\\
&&\cdot\tan ^{-1}\left(\frac{m_{\phi }^2-\delta \left(2 m_B+\delta \right)}{\sqrt{\left(m_{\phi }-\delta \right) \left(\delta +m_{\phi }\right) \left(2 m_B+\delta -m_{\phi }\right) \left(2 m_B+\delta +m_{\phi }\right)}}\right)\nonumber\\
&&\left.+m_B^2 \left[-2 \delta m_B+m_B^2+2 \left(m_{\phi }-\delta \right) \left(\delta +m_{\phi }\right)\right]\right\}+\frac{m_B^2}{4\pi^2},
\end{eqnarray}
\end{figure*}
\begin{figure*}
\begin{eqnarray}
&&H_{\delta<m_\phi}^{(c)}(m_B,\delta,m_\phi)\nonumber\\
&=&\frac{1}{864 \pi ^2 m_B^4 \left(\delta +m_B\right){}^2}\left\{-30 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) m_{\phi }^8-6 \left[\left(38 \log \left(\frac{m_{\phi }}{\delta +m_B}\right)+5\right) m_B^2\right.\right.\nonumber\\
&&+5 \left(\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta -m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)-\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta +2 m_B^2-m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)\right)\nonumber\\
&&\left.\cdot\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}-4 \delta \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(5 \delta +11 m_B\right)\right] m_{\phi }^6\nonumber\\
&&+3 \left[-4 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(15 \delta ^2+66 m_B \delta +79 m_B^2\right) \delta ^2+2 \left(\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta -m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)\right.\right.\nonumber\\
&&\left.-\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta +2 m_B^2-m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)\right) \sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}\nonumber\\
&&\cdot\left(15 \delta ^2+34 m_B \delta +28 m_B^2\right)+m_B^2 \left(30 \delta ^2+68 m_B \delta +61 m_B^2+2 \log \left(\frac{m_{\phi }}{\delta +m_B}\right)\right.\nonumber\\
&&\left.\left.\cdot\left(76 \delta ^2+195 m_B \delta +75 m_B^2\right)\right)\right] m_{\phi }^4\nonumber\\
&&+2 \left[30 \log \left(\delta +m_B\right) \left(12 \delta +m_B\right) m_B^5\right.-\left(45 \delta ^4+204 m_B \delta ^3+354 m_B^2 \delta ^2+267 m_B^3 \delta +92 m_B^4\right) m_B^2\nonumber\\
&&+3 \left(\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta +2 m_B^2-m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)-\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta -m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)\left(m_{\phi }^2-\delta ^2\right)}}\right)\right)\nonumber\\
&&\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)} \left(15 \delta ^4+68 m_B \delta ^3+118 m_B^2 \delta ^2+91 m_B^3 \delta +29 m_B^4\right)\nonumber\\
&&+3 \left(-2 \log \left(\frac{1}{\mu ^2}\right) \left(3 \delta +2 m_B\right) m_B^5-6 \log \left(m_{\phi }\right) \left(22 \delta +3 m_B\right) m_B^5\right.\nonumber\\
&&-\delta ^3 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) \left(38 \delta +195 m_B\right) m_B^2+\delta ^2 \log \left(\frac{\delta +m_B}{m_{\phi }}\right)\nonumber\\
&&\left.\left.\cdot\left(20 \delta ^4+132 m_B \delta ^3+316 m_B^2 \delta ^2+295 m_B^3 \delta +360 m_B^4\right)\right)\right] m_{\phi }^2\nonumber\\
&&\left[-72 \delta \log \left(m_{\phi }\right) m_B^6 \left(\delta +2 m_B\right)+60 \log \left(\delta +m_B\right) m_B^6 \left(2 \delta +m_B\right) \left(\delta +2 m_B\right)\right.\nonumber\\
&&+6 \log \left(\frac{1}{\mu ^2}\right) m_B^6 \left(\delta +2 m_B\right) \left(4 \delta +5 m_B\right)+30 \delta ^3 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) m_B^4 \left(59 \delta +26 m_B\right)\nonumber\\
&&-6 \delta ^5 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(5 \delta ^3+44 m_B \delta ^2+158 m_B^2 \delta +295 m_B^3\right)\nonumber\\
&&+6 \left(\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta -m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)-\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta +2 m_B^2-m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)\right)\nonumber\\
&&\cdot\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)} \left(\delta +2 m_B\right){}^2 \left(5 \delta ^4+14 m_B \delta ^3+14 m_B^2 \delta ^2+3 m_B^3 \delta -3 m_B^4\right)\nonumber\\
&&\left.\left.+m_B^2 \left(30 \delta ^6+204 m_B \delta ^5+525 m_B^2 \delta ^4+618 m_B^3 \delta ^3+250 m_B^4 \delta ^2-110 m_B^5 \delta -86 m_B^6\right)\right]\right\}\nonumber\\
&&-\frac{m_B^2 \left(30 \log \left(\frac{m_B^2}{\mu ^2}\right)-43\right)}{432 \pi ^2},
\end{eqnarray}
\vspace{3cm}
\end{figure*}
\begin{figure*}
\begin{eqnarray}
&&H_{\delta>m_\phi}^{(c)}(m_B,\delta,m_\phi)\nonumber\\
&=&\frac{1}{864 \pi ^2 m_B^4 \left(\delta +m_B\right){}^2}\left\{-30 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) m_{\phi }^8-6 \left[\left(38 \log \left(\frac{m_{\phi }}{\delta +m_B}\right)+5\right) m_B^2\right.\right.\nonumber\\
&&+5 \coth ^{-1}\left(\frac{m_{\phi }^2+\delta \left(\delta +2 m_B\right)}{\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}}\right) \sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}\nonumber\\
&&\left.-4 \delta \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(5 \delta +11 m_B\right)\right] m_{\phi }^6\nonumber\\
&&+3 \left[-4 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(15 \delta ^2+66 m_B \delta +79 m_B^2\right) \delta ^2+2 \coth ^{-1}\left(\frac{m_{\phi }^2+\delta \left(\delta +2 m_B\right)}{\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}}\right)\right.\nonumber\\
&&\cdot\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)} \left(15 \delta ^2+34 m_B \delta +28 m_B^2\right)+m_B^2 \left(30 \delta ^2+68 m_B \delta +61 m_B^2\right.\nonumber\\
&&\left.\left.+2 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) \left(76 \delta ^2+195 m_B \delta +75 m_B^2\right)\right)\right] m_{\phi }^4\nonumber\\
&&+2 \left[30 \log \left(\delta +m_B\right) m_B^6\right.-\left(45 \delta ^4+204 m_B \delta ^3+354 m_B^2 \delta ^2+267 m_B^3 \delta +92 m_B^4\right) m_B^2\nonumber\\
&&-3 \coth ^{-1}\left(\frac{m_{\phi }^2+\delta \left(\delta +2 m_B\right)}{\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}}\right) \left(15 \delta ^4+68 m_B \delta ^3+118 m_B^2 \delta ^2+91 m_B^3 \delta +29 m_B^4\right)\nonumber\\
&&\cdot\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}+3 \left(20 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \delta ^6+132 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) m_B \delta ^5\right.\nonumber\\
&&+316 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) m_B^2 \delta ^4+295 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) m_B^3 \delta ^3-\log \left(\frac{m_{\phi }}{\delta +m_B}\right) m_B^2 \left(38 \delta +195 m_B\right) \delta ^3\nonumber\\
&&+360 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) m_B^4 \delta ^2-6 \log \left(\frac{\left(\delta +m_B\right){}^2}{\mu ^2}\right) m_B^5 \delta +132 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) m_B^5 \delta\nonumber\\
&&\left.\left.-4 \log \left(\frac{1}{\mu ^2}\right) m_B^6-18 \log \left(m_{\phi }\right) m_B^6\right)\right] m_{\phi }^2\nonumber\\
&&+60 \delta \log \left(\delta +m_B\right) m_B^6 \left(2 \delta +5 m_B\right)+6 \coth ^{-1}\left(\frac{m_{\phi }^2+\delta \left(\delta +2 m_B\right)}{\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}}\right) \nonumber\\
&&\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)} \left(\delta +2 m_B\right){}^2 \left(5 \delta ^4+14 m_B \delta ^3\right.+14 m_B^2 \delta ^2+3 m_B^3 \delta -3 m_B^4)\nonumber\\
&&+m_B^2 \left(30 \delta ^6+204 m_B \delta ^5+525 m_B^2 \delta ^4+618 m_B^3 \delta ^3+250 m_B^4 \delta ^2-110 m_B^5 \delta -86 m_B^6\right)\nonumber\\
&&+6 \left[10 \log \left(\frac{\left(\delta +m_B\right){}^2}{\mu ^2}\right) m_B^8
-12 \delta \log \left(m_{\phi }\right) \left(\delta +2 m_B\right) m_B^6+\delta \log \left(\frac{1}{\mu ^2}\right) \left(4 \delta +13 m_B\right) m_B^6\right.\nonumber\\
&&\left.\left.+5 \delta ^3 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) \left(59 \delta +26 m_B\right) m_B^4-\delta ^5 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(5 \delta ^3+44 m_B \delta ^2+158 m_B^2 \delta +295 m_B^3\right)\right]\right\}\nonumber\\
&&-\frac{m_B^2 \left(30 \log \left(\frac{m_B^2}{\mu ^2}\right)-43\right)}{432 \pi ^2},
\end{eqnarray}
\vspace{6cm}
\end{figure*}
\begin{figure*}
\begin{eqnarray}
H_{\delta<m_\phi}^{(e)}(m_B,\delta,m_\phi)&=&\frac{\left(80 \log \left(\frac{m_B^2}{\mu ^2}\right)-3\right) m_B^2}{432 \pi ^2}+\frac{1}{432 \pi ^2 \left(\delta +m_B\right){}^4 m_B^4}\left\{\left[-2 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(9 \delta ^2+22 m_B \delta +16 m_B^2\right)\right] m_{\phi }^8\right.\nonumber\\
&&+2 \left[-\left(9 \left(28 \log \left(\frac{m_{\phi }}{\delta +m_B}\right)+1\right) \delta ^2+22 m_B \delta +16 m_B^2\right) m_B^2\right.\nonumber\\
&&+\left(\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta +2 m_B^2-m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)-\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta -m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)\right)\nonumber\\
&&\cdot\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)} \left(9 \delta ^2+22 m_B \delta +16 m_B^2\right)\nonumber\\
&&\left.+4 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(9 \delta ^4+40 m_B \delta ^3+8 m_B^2 \delta ^2+62 m_B^3 \delta +25 m_B^4\right)\right] m_{\phi }^6\nonumber\\
&&+\left[16 \delta ^2 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) \left(63 \delta ^2+137 m_B \delta +135 m_B^2\right) m_B^2\right.\nonumber\\
&&+\left(54 \delta ^4+240 m_B \delta ^3+421 m_B^2 \delta ^2+330 m_B^3 \delta +84 m_B^4\right) m_B^2\nonumber\\
&&+2 \left(\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta -m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)-\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta +2 m_B^2-m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)\right)\nonumber\\
&&\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)} \left(27 \delta ^4+120 m_B \delta ^3+206 m_B^2 \delta ^2+172 m_B^3 \delta +68 m_B^4\right)\nonumber\\
&&-4 \left(-204 \delta \log \left(m_{\phi }\right) m_B^5+60 \log \left(\delta +m_B\right) \left(4 \delta +m_B\right) m_B^5+6 \log \left(\frac{1}{\mu ^2}\right) \left(3 \delta +5 m_B\right) m_B^5\right.\nonumber\\
&&\left.\left.+\delta ^3 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(27 \delta ^3+174 m_B \delta ^2+216 m_B^2 \delta +124 m_B^3\right)\right)\right] m_{\phi }^4\nonumber\\
&&-2 \left[20 \log \left(\delta +m_B\right) \left(-85 \delta ^3-27 m_B \delta ^2+2 m_B^2 \delta +2 m_B^3\right) m_B^5\right.\nonumber\\
&&+\left(27 \delta ^6+174 m_B \delta ^5+454 m_B^2 \delta ^4+612 m_B^3 \delta ^3+422 m_B^4 \delta ^2+110 m_B^5 \delta -6 m_B^6\right) m_B^2\nonumber\\
&&+\left(\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta -m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)-\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta +2 m_B^2-m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)\right)\nonumber\\
&&\cdot\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)} \left(\delta +2 m_B\right)\left(27 \delta ^5+120 m_B \delta ^4+214 m_B^2 \delta ^3+172 m_B^3 \delta ^2+50 m_B^4 \delta +8 m_B^5\right)\nonumber\\
&&+4 \left(\log \left(\frac{m_{\phi }}{\delta +m_B}\right) m_B^2 \left(63 \delta +274 m_B\right) \delta ^5-\log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(9 \delta ^4+76 m_B \delta ^3+208 m_B^2 \delta ^2+257 m_B^3 \delta +620 m_B^4\right) \delta ^4\right.\nonumber\\
&&\left.\left.+\log \left(\frac{1}{\mu ^2}\right) m_B^5 \left(9 \delta ^3+36 m_B \delta ^2+41 m_B^2 \delta +11 m_B^3\right)+\log \left(m_{\phi }\right) m_B^5 \left(443 \delta ^3+207 m_B \delta ^2+72 m_B^2 \delta +12 m_B^3\right)\right)\right] m_{\phi }^2\nonumber\\
&&+\left[3180 \delta ^6 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) m_B^4+32 \delta ^2 \log \left(m_{\phi }\right) m_B^5 \left(103 \delta ^3+73 m_B \delta ^2+32 m_B^2 \delta +6 m_B^3\right)\right.\nonumber\\
&&-2 \delta ^7 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(9 \delta ^3+94 m_B \delta ^2+416 m_B^2 \delta +1028 m_B^3\right)\nonumber\\
&&+2 \delta \left(\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta -m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)-\tan ^{-1}\left(\frac{\delta ^2+2 m_B \delta +2 m_B^2-m_{\phi }^2}{\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)}}\right)\right)\nonumber\\
&&\sqrt{\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right) \left(m_{\phi }^2-\delta ^2\right)} \left(\delta +2 m_B\right){}^3 \left(9 \delta ^4+22 m_B \delta ^3+24 m_B^2 \delta ^2+20 m_B^3 \delta +6 m_B^4\right)\nonumber\\
&&-2 \log \left(\frac{1}{\mu ^2}\right) m_B^5 \left(\delta +2 m_B\right) \left(36 \delta ^4+144 m_B \delta ^3+196 m_B^2 \delta ^2+105 m_B^3 \delta +20 m_B^4\right)\nonumber\\
&&-20 \log \left(\delta +m_B\right) m_B^5 \left(172 \delta ^5+160 m_B \delta ^4+148 m_B^2 \delta ^3+109 m_B^3 \delta ^2+46 m_B^4 \delta +8 m_B^5\right)\nonumber\\
&&\left.\left.+m_B^2 \left(18 \delta ^8+152 m_B \delta ^7+519 m_B^2 \delta ^6+918 m_B^3 \delta ^5+954 m_B^4 \delta ^4+688 m_B^5 \delta ^3+382 m_B^6 \delta ^2+116 m_B^7 \delta +3 m_B^8\right)\right]\right\},\nonumber\\
\end{eqnarray}
\end{figure*}
\begin{figure*}
\begin{eqnarray}
H_{\delta>m_\phi}^{(e)}(m_B,\delta,m_\phi)&=&\frac{\left(80 \log \left(\frac{m_B^2}{\mu ^2}\right)-3\right) m_B^2}{432 \pi ^2}+\frac{1}{432 \pi ^2 \left(\delta +m_B\right){}^4 m_B^4}\left\{\left[-2 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(9 \delta ^2+22 m_B \delta+16 m_B^2\right)\right] m_{\phi }^8\right.\nonumber\\
&&+2 \left[-4 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) \left(9 \delta +40 m_B\right) \delta ^3-\coth ^{-1}\left(\frac{\delta ^2+2 m_B \delta +m_{\phi }^2}{\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}}\right) \left(9 \delta ^2+22 m_B \delta +16 m_B^2\right)\right.\nonumber\\
&&\cdot\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}\nonumber\\
&&\left.+m_B^2 \left(-9 \delta ^2-22 m_B \delta -16 m_B^2+4 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(71 \delta ^2+62 m_B \delta +25 m_B^2\right)\right)\right] m_{\phi }^6\nonumber\\
&&+\left[\left(54 \delta ^4+240 m_B \delta ^3+421 m_B^2 \delta ^2+16 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) \left(117 \delta ^2+137 m_B \delta +135 m_B^2\right) \delta ^2\right.\right.\nonumber\\
&&\left.+330 m_B^3 \delta +84 m_B^4-24 \log \left(\frac{m_{\phi }^2}{\mu ^2}\right) m_B^3 \left(3 \delta +5 m_B\right)\right) m_B^2\nonumber\\
&&+2 \coth ^{-1}\left(\frac{\delta ^2+2 m_B \delta +m_{\phi }^2}{\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}}\right) \sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}\nonumber\\
&&\cdot\left(27 \delta ^4+120 m_B \delta ^3+206 m_B^2 \delta ^2+172 m_B^3 \delta +68 m_B^4\right)\nonumber\\
&&\left.-4 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(27 \delta ^6+174 m_B \delta ^5+124 m_B^3 \delta ^3+240 m_B^5 \delta +60 m_B^6\right)\right] m_{\phi }^4\nonumber\\
&&-2\left[\left(27 \delta ^6+174 m_B \delta ^5+454 m_B^2 \delta ^4+612 m_B^3 \delta ^3+422 m_B^4 \delta ^2+110 m_B^5 \delta -6 m_B^6\right) m_B^2\right.\nonumber\\
&&+\coth ^{-1}\left(\frac{\delta ^2+2 m_B \delta +m_{\phi }^2}{\sqrt{\left(\delta ^2-m_{\phi }^2\right)\left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}}\right) \sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)} \left(\delta +2 m_B\right)\nonumber\\
&&\cdot \left(27 \delta ^5+120 m_B \delta ^4+214 m_B^2 \delta ^3+172 m_B^3 \delta ^2+50 m_B^4 \delta +8 m_B^5\right)\nonumber\\
&&+4 \log \left(\frac{1}{\mu ^2}\right) \left(11 m_B^8+41 \delta m_B^7+9 \delta ^3 m_B^5\right)+20 \log \left(\delta +m_B\right) \left(2 m_B^8+2 \delta m_B^7-85 \delta ^3 m_B^5\right)\nonumber\\
&&+4 \left(-\log \left(\frac{\delta +m_B}{m_{\phi }}\right) m_B^2 \left(271 \delta ^2+257 m_B \delta +620 m_B^2\right) \delta ^4\right.\nonumber\\
&&+\left(36 \log \left(\frac{m_{\phi }^2}{\mu ^2}\right) m_B^6+\log \left(\frac{m_{\phi }}{\delta +m_B}\right) \left(9 \delta ^6+76 m_B \delta ^5+274 m_B^3 \delta ^3+135 m_B^6\right)\right) \delta ^2\nonumber\\
&&\left.\left.+\log \left(m_{\phi }\right) \left(12 m_B^8+72 \delta m_B^7+443 \delta ^3 m_B^5\right)\right)\right]m_{\phi }^2\nonumber\\
&&+\left[-20 \left(\log \left(\frac{1}{\mu ^2}\right)+2 \log \left(\delta +m_B\right)\right) \left(23 \delta +4 m_B\right) m_B^9\right.\nonumber\\
&&-2 \delta ^2 \log \left(\frac{m_{\phi }^2}{\mu ^2}\right) \left(36 \delta ^3+216 m_B \delta ^2+484 m_B^2 \delta +497 m_B^3\right) m_B^5+4 \delta ^6 \log \left(\frac{m_{\phi }}{\delta +m_B}\right) \left(208 \delta ^2+795 m_B^2\right) m_B^2\nonumber\\
&&+\left(18 \delta ^8+152 m_B \delta ^7+519 m_B^2 \delta ^6+918 m_B^3 \delta ^5+954 m_B^4 \delta ^4+688 m_B^5 \delta ^3+382 m_B^6 \delta ^2+116 m_B^7 \delta +3 m_B^8\right) m_B^2\nonumber\\
&&+2 \delta \coth ^{-1}\left(\frac{\delta ^2+2 m_B \delta +m_{\phi }^2}{\sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}}\right) \sqrt{\left(\delta ^2-m_{\phi }^2\right) \left(\left(\delta +2 m_B\right){}^2-m_{\phi }^2\right)}\nonumber\\
&&\cdot\left(\delta +2 m_B\right){}^3 \left(9 \delta ^4+22 m_B \delta ^3+24 m_B^2 \delta ^2+20 m_B^3 \delta +6 m_B^4\right)\nonumber\\
&&\left.\left.-2 \delta ^2 \log \left(\frac{\delta +m_B}{m_{\phi }}\right) \left(9 \delta ^8+94 m_B \delta ^7+1028 m_B^3 \delta ^5+1720 m_B^5 \delta ^3+1600 m_B^6 \delta ^2+1480 m_B^7 \delta +1090 m_B^8\right)\right]\right\}.\nonumber\\
\end{eqnarray}
\end{figure*}
\section{INTRODUCTION}
A spin-polarized current traversing a thin magnetic layer can exert a significant torque on the magnetization through the spin transfer torque (STT) effect.~\cite{slonczewski1996jmmm,berger1996prb,tsoi1998prl,tsoi2000nt,kiselev2003nt,ralph2008jmmm,sun2008jmmm} The effect can be described as negative damping, linearly proportional to the spin-polarized current, which at a certain threshold
can overcome the natural Gilbert damping
in the magnetic layer, allowing for coherent, large amplitude, excitation of spin waves. If the magnetic layer is part of a structure with magnetoresistance, such as a spin valve (SV) or a magnetic tunnel junction (MTJ), the excited spin waves can be used to generate a current- and field-tunable microwave voltage signal; the resulting device is commonly called a spin torque oscillator (STO).~\cite{silva2008jmmm} Interest in STOs for microwave applications is steadily increasing, due to their attractive combination of very large frequency tuning ranges,~\cite{rippard2004prb,bonetti2009apl,muduli2011jap} efficient spin-wave emission in magnonic
devices,~\cite{bonetti2010prl,demidov2010ntm,madami2011nn} very high modulation rates,~\cite{pufall2005apl,manfrini2009apl,muduli2010prb,muduli2011if,pogoryelov2011apl,pogoryelov2011apldc,manfrini2011jap,muduli2011ieeem,muduli2011aip} sub-micron footprints,~\cite{vincent2009ieeejssc} and straightforward integration with semiconductor technology using the same processes as magnetoresistive random access memory.~\cite{engel2005ieeemag, akerman2005sc}
A minimal spectral linewidth, $\Delta f$, of the microwave signal is highly desirable for applications. While a number of recent experimental studies have addressed the temperature dependence of $\Delta f$ in nanopillar STOs,~\cite{sankey2005prb, mistral2006apl, georges2009prb, boone2009prb,bortolotti2012apl,sierra2012apl}
the study of the temperature-dependent linewidth in nanocontact STOs is limited to a recent work by Schneider \emph{et al.}~\cite{schneider2009prb} The theory of the origin of STO linewidths and their temperature dependence is now well established for single spin-wave modes.~\cite{kim2006prb,kim2008prl1,kim2008prl, tiberkevich2008prb, slavin2009ieeem, silva2010ieeemag} A key result is the strong impact that even limited amplitude noise can have on the STO phase noise, via the strong amplitude-phase coupling. Gaussian (white) amplitude noise is transformed into colored phase noise, and the intrinsic Lorentzian line shape expected for an auto-oscillator with zero amplitude-phase coupling changes into a convolution of Lorentzian and Gaussian line shapes.~\cite{keller2010prb} The coupling also leads to a substantial enhancement, or amplification, of the thermal broadening, and can lead to asymmetric line shapes near threshold.~\cite{kim2008prl} The degree of coloring should also change with temperature, leading to a crossover from a linear temperature dependence of $\Delta f$ at low temperature to a square-root dependence at high temperature.~\cite{tiberkevich2008prb}
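The effect of this convolution on the line can be illustrated with a short numerical sketch (NumPy only); the Lorentzian and Gaussian widths below are invented purely for illustration and are not taken from any measurement:

```python
import numpy as np

# Illustrative sketch: the intrinsic Lorentzian line of an auto-oscillator,
# convolved with a Gaussian representing amplified thermal phase noise.
# All widths are made-up numbers chosen only to show the broadening.
f = np.linspace(-2.0, 2.0, 4001)             # frequency offset grid (GHz)
fwhm_lorentz = 0.10                          # Lorentzian FWHM (GHz), assumed
sigma_gauss = 0.15                           # Gaussian std. dev. (GHz), assumed

lorentz = (fwhm_lorentz / 2) ** 2 / (f**2 + (fwhm_lorentz / 2) ** 2)
gauss = np.exp(-(f**2) / (2 * sigma_gauss**2))
gauss /= gauss.sum()                         # discrete convolution kernel

convolved = np.convolve(lorentz, gauss, mode="same")

def fwhm(x, y):
    """FWHM of a single-peaked curve sampled on a dense grid."""
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

fwhm_conv = fwhm(f, convolved)               # broader than the bare Lorentzian
```

At low temperature the weakly colored (Lorentzian) component dominates the width, while at high temperature the Gaussian component takes over, consistent with the linear-to-square-root crossover.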
All temperature-dependent studies to date show temperature regions with unexpected behavior. In Ref.~\onlinecite{georges2009prb}, $\Delta f$ in the subthreshold regime narrows by a factor of 6, from 1.2~GHz to 200~MHz, when the temperature is raised from 20~K to 140~K. In Ref.~\onlinecite{mistral2006apl}, the slope of the temperature dependence even changes sign multiple times as a function of drive current, and is close to zero at the smallest $\Delta f$. In Ref.~\onlinecite{sankey2005prb}, $\Delta f$ increases exponentially above a certain temperature; the concept of mode hopping was introduced to explain and model this dependence. The origin of these rather complex temperature dependencies is yet to be explained. More recently, a linear temperature dependence of the linewidth was observed over a certain temperature range in magnetic tunnel junction based STOs.~\cite{bortolotti2012apl,sierra2012apl} A saturation of the linewidth was observed in both of these studies for temperatures below 100~K, which is not explained by the existing theories. In addition, the temperature dependence of the power restoration rate observed in Ref.~\onlinecite{sierra2012apl} cannot be explained by the single-mode theory.~\cite{slavin2009ieeem} Thus the details of the temperature dependence of the linewidth in STOs are far from understood.
In this work, we present a detailed study of the temperature-dependent linewidth in nanocontact STOs. While all measurements were carried out at current and magnetic field values where only propagating spin waves were generated,~\cite{bonetti2010prl,bonetti2012prb} we found a large number of mode transitions as a function of current (at a fixed temperature $T$) \emph{and} temperature (at a fixed current $I$). The measured linewidth is highly nonmonotonic both as a function of current and of temperature, with large enhancements at currents or temperatures where mode transitions occurred. We show that the linewidth is very well fitted by the single oscillator theory~\cite{tiberkevich2008prb, slavin2009ieeem}, if the so-called amplification factor
is obtained directly from measurements. While this agreement is similar to that of Refs.~\onlinecite{georges2009prb,pogoryelov2011apl}, we find the temperature dependence of the linewidth does not agree with that obtained directly from calculations using the nonlinear single-oscillator
theory~\cite{tiberkevich2008prb,slavin2009ieeem}, from which
typically a linear dependence on $T$ is obtained for the systems under study here. These observations indicate that the central mechanism for
linewidth broadening in
nonlinear single-oscillator theory applies here, too: the linewidth is driven by phase noise, which is amplified by the coupling of phase to power amplitude fluctuations through the nonlinear frequency shift. However, our results indicate that this coupling may itself have a nontrivial temperature (and current) dependence, especially
near mode transitions. We will here show that extending the nonlinear single-oscillator theory to include two coupled modes~\cite{muduli2012prl} leads
to additional couplings between the phase and power fluctuations. Under some simplifying assumptions,
these couplings lead to a changed power restoration rate and the final result for the linewidth looks
very much like that from the nonlinear single-oscillator theory\cite{tiberkevich2008prb, slavin2009ieeem}, but with an
enhanced nonlinear amplification that carries additional temperature dependence. This explains qualitatively the observed temperature dependence
of the linewidth near mode transitions.
\section{EXPERIMENT}
The results presented in this work are from a single nanocontact STO device with an e-beam patterned $50\times150$~nm$^{2}$ elliptical nanocontact fabricated on top of an 8$\times$26~$\mu$m$^{2}$ pseudo-spin-valve mesa based on Co$_{81}$Fe$_{19}$(20 nm)/Cu(6 nm)/Ni$_{80}$Fe$_{20}$(4.5 nm), as described in Ref.~\onlinecite{mancoff2006apl}. While not shown here, other nanocontacts of varying sizes were also studied as a function of temperature and gave the same qualitative results.
\begin{figure}[t!]
\includegraphics*[width=0.45\textwidth]{cscan}
\caption{(Color online)(a) Two-dimensional power spectral density map of $f$ versus $I$ at a magnetic field of $\mu_0H$=1~T, applied at an angle of $80^{\circ}$ to the film plane. Top inset shows two examples of mode transitions at $I$=30.2~mA and 35.3~mA respectively, where the left spectrum has two clearly resolved Lorentzian peaks, and the right spectrum shows a single broader, asymmetric peak that can still be well fitted by two Lorentzian functions. The bottom inset shows the inverse power $1/p$ vs current and a linear fit (solid line). (b) Experimentally measured (red triangles) and calculated $\Delta f$ (blue solid line). The black dashed line represents a linear fit to linewidth using Eq.(1) for subthreshold currents.}\label{fig:cscan}
\end{figure}
The experimental circuit is similar to that employed in Refs.~\onlinecite{bonetti2009apl} and \onlinecite{muduli2011prb}. The signal generated from the STO was amplified using a broadband +22-dB microwave amplifier, and
detected by a 20~Hz--46~GHz Rohde \& Schwarz FSU46 spectrum analyzer. The measurements were performed in the spectrum analyzer's default mode for spectrum analysis, the so-called analyzer mode, with a resolution bandwidth of 10~MHz and a video bandwidth of 10~kHz. The spectra were measured in the frequency range 13--25~GHz with a sweep time of 100~ms. We also averaged 20 traces, resulting in a total measurement time of about 6.4~s. The dc bias current is fed to the device by a current source through a 0--26~GHz bias tee connected in parallel with the transmission line. The temperature of the sample was varied in the range 300--400~K through use of a heating foil underneath the sample. Each measurement temperature was maintained with a precision of 0.1~K using a thermocouple attached to the bottom of the sample and a software-based PID controller. All measurements were performed in a $\mu_0H$=1~T field applied at an angle of 80$^{\circ}$ w.r.t. the film plane. In this geometry only a propagating spin wave mode~\cite{slonczewski1999jmmm, slavin2005prl, bonetti2010prl,madami2011nn} is excited, and the output power is close to its maximum value.~\cite{bonetti2009apl}
\section{RESULTS}
Figure~\ref{fig:cscan} shows the current ($I$) dependence of the STO frequency at room temperature. In addition to the expected linear blue shift with $I$, a large number of discontinuous jumps and other nonlinearities can be observed. We argue that all these nonlinear features are related to mode transitions, some large, where two distinct peaks can be observed on the spectrum analyzer [the left spectrum in the inset of Fig.~\ref{fig:cscan}(a)], and others small, where only a single peak is observed, though with a significant increase in both nonlinearity and linewidth [the right spectrum in the inset of Fig.~\ref{fig:cscan}(a)]. Similar mode transitions have been observed in the literature~\cite{sankey2005prb,rippard2006prb,krivorotov2007prb} and numerical simulations have reproduced this behavior for in-plane fields.~\cite{berkov2007prb}
\begin{figure}[t!]
\includegraphics*[width=0.45\textwidth]{freqvstemp}
\caption{(Color online) Map of frequency of the strongest mode versus temperature and bias $I$, showing the mode transition with temperature. The solid lines are linear fits to the threshold current for the mode transitions $I_1$ and $I_2$ versus temperature. The dotted lines are the positions at which the behavior of the linewidth is discussed in Fig.~\ref{fig:lwvstemp}.}\label{fig:freqvstemp}
\end{figure}
The mode transitions have a significant impact on $\Delta f$ vs. $I$, as shown in Fig.~\ref{fig:cscan}~(b). We define $\Delta f$ as the full width at half maximum (FWHM) obtained by fitting a single Lorentzian function. In the case of two modes, we use the linewidth of the strongest mode (the mode with the highest output power). In the subthreshold regime, $\Delta f$ decreases linearly with increasing $I$, which we attribute to the narrowing of the natural ferromagnetic resonance (FMR) linewidth under the influence of the negative damping associated with spin torque.~\cite{kim2008prl1,slavin2009ieeem} At every mode transition position, we also observe a dramatic increase in $\Delta f$ leading to a highly nonlinear dependence on $I$. It is noteworthy that a strong mode transition, and the associated increase in $\Delta f$, can also be observed well inside the subthreshold regime, at about 25~mA. The existence of mode transitions is hence not limited to states of steady precession, as in Ref.~\onlinecite{berkov2007prb}.
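As an aside on the linewidth definition, a Lorentzian fit can be sketched in a few lines. For an ideal Lorentzian, the reciprocal of the power spectral density is quadratic in frequency, so the fit reduces to a plain polynomial fit; the peak parameters below are hypothetical, not measured values:

```python
import numpy as np

# Sketch of the linewidth definition used here: Delta f is the FWHM of a
# Lorentzian fitted to the spectrum. For an ideal Lorentzian, 1/PSD is
# quadratic in f, so a polynomial fit suffices. Hypothetical parameters.
f = np.linspace(18.5, 19.7, 601)                  # frequency window (GHz)
f0_true, fwhm_true = 19.1, 0.12                   # assumed peak position / FWHM
psd = (fwhm_true / 2) ** 2 / ((f - f0_true) ** 2 + (fwhm_true / 2) ** 2)

a, b, c = np.polyfit(f, 1.0 / psd, 2)             # 1/PSD = a f^2 + b f + c
f0_fit = -b / (2.0 * a)                           # fitted peak position
fwhm_fit = 2.0 * np.sqrt((c - b**2 / (4.0 * a)) / a)  # fitted FWHM
```

In practice the measured spectra contain noise and, near mode transitions, a second peak, so a nonlinear least-squares fit of one or two Lorentzian functions is used instead.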
In order to show the effect of temperature on the mode transitions, we plot a map of the measured frequency vs temperature and current, as shown in Fig.~\ref{fig:freqvstemp}. At room temperature these transitions are located at about $I_1$=27~mA and $I_2$=30~mA. As $T$ is increased, both $I_1$ and $I_2$ move to lower values following a linear dependence (the solid lines in Fig.~\ref{fig:freqvstemp}). This $T$ dependence of $I_1$ and $I_2$ has direct consequences for $\Delta f(T)$. To illustrate this, we have chosen three current values, shown by the dotted lines in Fig.~\ref{fig:freqvstemp}, which lie below, on top of, and above the second mode transition. Figures~\ref{fig:lwvstemp}(a)--\ref{fig:lwvstemp}(c) show $\Delta f$ vs. $T$ at these three currents, which clearly exhibit three dramatically different $T$ dependencies: i) at 28.4~mA, we observe a nonlinear increase of $\Delta f$ with $T$; ii) at 29.4~mA, we observe a nonmonotonic $T$ dependence; and iii) at 30.3~mA, we observe a nonlinear \emph{decrease} in $\Delta f$ with $T$. It is quite obvious that none of the measured curves in Fig.~\ref{fig:lwvstemp} follow either a linear or a square-root $T$ dependence, as expected from the theories of thermally induced phase noise.~\cite{sankey2005prb,kim2008prl1,slavin2009ieeem,silva2010ieeemag}
\begin{figure}[t!]
\includegraphics*[width=0.45\textwidth]{linewidthvstemp}
\caption{(Color online) Measured linewidth versus temperature at (a)~28.4~mA, (b)~29.4~mA, and (c)~30.3~mA. The solid black circles (respectively the solid blue squares) denote the mode excited below (above) $I_{2}$=30~mA at room temperature. (d) Integrated power versus temperature at 28.4~mA (solid black circles), 29.4~mA (open symbols), and 30.3~mA (solid blue squares). The dashed lines serve as visual aids.}\label{fig:lwvstemp}
\end{figure}
Now we will compare our results with the
single mode analytical theory.~\cite{kim2008prl1,slavin2009ieeem} According to this theory, $\Delta f$ of a nonlinear oscillator is given by
\begin{eqnarray}
\Delta f = & \Gamma_{\rm g}\left(1-\frac{I}{I_{\rm th}}\right), & \mbox{for } I\ll I_{\rm th} \label{eq:lw1}\\
= & \Delta f_{\rm L}(1+\nu^2), & \mbox{for } I\gg I_{\rm th}, \label{eq:lw2}
\end{eqnarray}
where $\Gamma_{\rm g}$ is the natural FMR linewidth, $I$ the bias current, $I_{\rm th}$ the threshold current, and the nonlinear linewidth amplification is $(1+\nu^2)=1+\left(\frac{p_0N}{\Gamma_p}\right)^2$, where $N=\frac{d\omega}{dp}$ is the nonlinear frequency
shift, and $\Gamma_p$ is the power restoration rate ($\Gamma_p^{-1}$ is the correlation time of the power
fluctuations); $\Delta f_{\rm L}=\Gamma_{\rm g}\frac{kT}{E(p_{0})}$ is the intrinsic thermal linewidth, i.e., the linewidth of a linear ($\nu = 0$) oscillator. Here, ${E(p_{0})}$ is the total energy of the oscillator. Above threshold ($I\gg I_{\rm th}$), the nonlinear amplification of the linewidth is controlled by the ratio of the nonlinear frequency shift $N$ to the power restoration rate $\Gamma_p$.
The reason for this\cite{kim2008prl1,slavin2009ieeem} is that power fluctuations couple to phase
fluctuations through the nonlinear frequency shift $N$, and the linewidth is dominated by phase fluctuations. The linewidth
increases when $N$ is large, so that small power fluctuations give rise to large phase fluctuations, or if $\Gamma_p$
is small, so that power fluctuations remain for a long time during which they affect phase fluctuations.
For the nanocontact under study, the nonlinear damping $Q$ is small\cite{boone2009prb}, and
we can approximate $(1+\nu^2)\approx1+\left(\frac{I}{\Gamma_{\rm g}}\frac{df}{dI}\right)^2$.
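These two asymptotic regimes can be put into a short numerical sketch, using values of the order extracted from our data ($\Gamma_{\rm g}\approx500$~MHz, $I_{\rm th}\approx29$~mA, $\Delta f_{\rm L}\approx67$~kHz); the constant slope $df/dI=300$~MHz/mA is an assumed stand-in for the measured, current-dependent slope:

```python
# Minimal sketch of the two asymptotic linewidth regimes, Eqs. (1)-(2),
# using values of the order extracted from our data. The constant slope
# df/dI = 300 MHz/mA is an assumed stand-in for the measured slope.
gamma_g = 500e6                  # natural FMR linewidth (Hz)
i_th = 29e-3                     # threshold current (A)
df_L = 67e3                      # intrinsic thermal linewidth (Hz)
dfdI = 300e6 / 1e-3              # assumed tunability (Hz per A)

def linewidth(i):
    """Asymptotic single-mode linewidth below and far above threshold."""
    if i < i_th:
        return gamma_g * (1.0 - i / i_th)        # Eq. (1): spin-torque narrowing
    nu = (i / gamma_g) * dfdI                    # nu = (I / Gamma_g) * df/dI
    return df_L * (1.0 + nu**2)                  # Eq. (2): amplified thermal line

lw_sub = linewidth(25e-3)        # subthreshold, ~69 MHz
lw_sup = linewidth(31e-3)        # above threshold, ~23 MHz
```

In the experiment, $df/dI$ is instead taken from the measured $f(I)$, which is what produces the linewidth peaks at the mode transitions.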
We first compare our experimental results for fixed $T$ with theory.\cite{kim2008prl1,slavin2009ieeem} In order to do so,
we need to extract $\Gamma_{\rm g}$. We fit the initial decrease in linewidth with Eq.~(\ref{eq:lw1}), and obtain
$\Gamma_{\rm g}=(500\pm20)$~MHz and $I_{\rm th}=(29\pm1)$~mA, as shown by the dashed line in Fig.~\ref{fig:cscan}~(b). Next, from the measured $f$ vs $I$, we obtain $df/dI$ and directly calculate the nonlinear amplification factor $(1+\nu^2)$, and find from a fit to Eq.~(\ref{eq:lw2}) that $\Delta f_{\rm L}\sim67$~kHz, for $I>I_{\rm th}$. This value of $\Delta f_{\rm L}$ corresponds to $kT/E(p_{0})\sim1.5\times10^{-4}$. As shown in Fig.~\ref{fig:cscan}~(b), the calculated $\Delta f$ shows very good agreement with the experimentally measured linewidth, and also reproduces the dramatic increase in $\Delta f$ which occurs around each mode transition. The agreement indicates that the nonlinear amplification of the linewidth is
controlled by the nonlinear frequency shift $N\propto df/dI$, while the power restoration rate $\Gamma_p$ is
constant. The agreement is lost for $I<$27~mA, as expected for currents below threshold.~\cite{kim2008prl1,slavin2009ieeem} We have also used the inverse power method~\cite{tiberkevich2007apl} to determine the threshold current, as shown in the inset of Fig.~\ref{fig:cscan}~(a). A fit of these data for currents below 25~mA is shown by the solid line. From this fit it appears as if the STO is close to auto-oscillation already at about 24.5~mA, but is interrupted by one or more subthreshold mode transitions. It is only at about 27--28~mA that robust auto-oscillation begins.
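The inverse-power extrapolation can be sketched as follows; the synthetic data and the assumed $I_{\rm th}=24.5$~mA are for illustration only:

```python
import numpy as np

# Sketch of the inverse-power method for I_th (Tiberkevich et al.): in the
# subthreshold regime the noise power behaves as p ~ kT/(I_th - I), so 1/p is
# linear in I and extrapolates to zero at I = I_th. Synthetic data with an
# assumed I_th = 24.5 mA, for illustration only.
i = np.linspace(20.0, 24.0, 9)            # bias current (mA)
i_th_true = 24.5                          # assumed threshold (mA)
inv_p = 0.8 * (i_th_true - i)             # 1/p in arbitrary units

slope, intercept = np.polyfit(i, inv_p, 1)
i_th_fit = -intercept / slope             # x-intercept of the linear fit
```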
\begin{figure}[t!]
\includegraphics*[width=0.45\textwidth]{LinearLinewidthvsT}
\caption{Temperature dependence of (a) the extracted linear contribution to the linewidth, $\Delta f_{\rm L}$ (solid symbols), and (b) the nonlinear amplification $(1+\nu^2)$ (solid and open symbols). The solid red lines are calculations that include the temperature dependence of $M_{\rm s}$, whereas the dashed blue lines are calculations that assume no temperature dependence of $M_{\rm s}$.}\label{fig:LlwNLvstemp}
\end{figure}
Next, we want to compare the temperature dependence (at fixed $I$) of $\Delta f_{\rm L}$ and $(1+\nu^2)$ as obtained
from the experiment with theoretical predictions\cite{kim2008prl1,slavin2009ieeem}. According to the
theory, $\Delta f_{\rm L}(T)$ should be proportional to $T$, since it is the linewidth of a linear oscillator
in contact with a thermal bath, while $(1+\nu^2)$ has a monotonic temperature dependence. Using the agreement between the calculated and measured linewidths in Fig.~\ref{fig:cscan}, we can now extract $\Delta f_{\rm L}$ and its temperature dependence, as shown in Fig.~\ref{fig:LlwNLvstemp}(a). Since the determination of $(1+\nu^2)$ is more accurate in regions between mode transitions, i.e., where $df/dI$ does not diverge, we use the average value of $\Delta f_{\rm L}$ for 30.5~mA$<I<31.5$~mA, which excludes any mode transitions and is above threshold at all temperatures. A linear increase in $\Delta f_{\rm L}$ with $T$ is observed. The solid and dashed lines are calculations based on the classical quasi-Hamiltonian formalism for spin waves,~\cite{slavin2005ieeem,slavin2008ieeem,slavin2009ieeem,kim2008prl,kim2008prl1} which show reasonable agreement with the experiment and also predict a linear behavior similar to the experiment, even when the temperature dependence of $M_{\rm s}$ is included in the calculation (red solid line). This calculation assumes single-mode excitation but accounts for the nonuniform nature of propagating spin waves by ``exchange normalization'' of the magnetic field and normalization of the volume under the nanocontact.~\cite{slavin2009ieeem} The parameters used are similar to those of Ref.~\onlinecite{bonetti2010prl}: the electron gyromagnetic factor $\gamma=1.76\times10^{7}$~rad\,s$^{-1}$Oe$^{-1}$, saturation magnetization $M_{\rm s}(300~{\rm K})=640$~emu/cm$^{3}$, Gilbert damping parameter $\alpha_G=0.01$, dimensionless spin-polarization efficiency $\epsilon=0.2$, exchange length $\lambda_{\rm ex}=5$~nm, and $(I/I_{\rm th})_{300~{\rm K}}=5$. The effective volume $V_{\rm eff}$ is assumed to be 1.5 times the volume under the nanocontact. We use $\Gamma_{\rm g}=(500\pm 20)$~MHz, as determined from the experiment; the calculation also predicts $\Gamma_{\rm g}=500$~MHz for our experimental geometry.
We note that the agreement for $\Delta f_{\rm L}(T)$ was obtained only when $I/I_{\rm th}>5$. We attribute this to the fact that the analytical Eq.~(\ref{eq:lw2}) is asymptotic and valid only for $I\gg I_{\rm th}$.~\cite{kim2008prl1} In essence, we treated $I/I_{\rm th}$ as a fitting parameter, keeping the other parameters fixed at reasonable values, since their precise values are somewhat uncertain.
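A quick numerical consistency check of these numbers (a sketch; treating $E(p_0)$ as temperature independent is an assumption):

```python
# Consistency check of the numbers quoted above: Delta f_L = Gamma_g kT/E(p_0)
# with the fitted Delta f_L ~ 67 kHz and Gamma_g = 500 MHz gives
# kT/E(p_0) ~ 1.3e-4 at room temperature, close to the ~1.5e-4 quoted from
# the fit. If E(p_0) is taken as temperature independent (an assumption),
# Delta f_L then grows linearly with T.
gamma_g = 500e6                       # Hz
df_L_300 = 67e3                       # Hz, fitted at room temperature
kT_over_E = df_L_300 / gamma_g        # ~1.34e-4

df_L_400 = df_L_300 * 400.0 / 300.0   # linear extrapolation to 400 K (~89 kHz)
```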
\begin{figure}[t!]
\includegraphics*[width=0.45\textwidth]{spectmap}
\caption{ Map of power (dB) vs frequency ($f$) and temperature $T$ for the three example current values of 28.4~mA, 29.4~mA, and 30.3~mA. The arrows indicate the presence of additional modes, the amplitude of which depends on temperature.}\label{fig:spectmap}
\end{figure}
In Fig.~\ref{fig:LlwNLvstemp}(b) we show the behavior of measured $(1+\nu^2)$ vs $T$ (symbols) for the three current values of 28.4~mA, 29.4~mA and 30.3~mA along with the calculated behavior for $I/I_{\rm th}=5$ (solid and dashed lines). The experimental behavior of $(1+\nu^2)$ vs $T$ is dramatically different for the three cases but very similar to the behavior of the linewidth
as a function of $T$ shown in Fig.~\ref{fig:lwvstemp}. In contrast, the calculations of the single-mode theory predict a monotonic decrease of $(1+\nu^2)$ with $T$ when the temperature dependence of $M_{\rm s}$ is included. Hence the calculations agree with the experiment only for a limited range of temperature, and only when the STO is far from a mode transition. For example, at 28.4~mA (30.3~mA), $(1+\nu^2)$ is enhanced at higher (lower) temperature, which is close to the mode transition. Detailed examination shows that this enhancement occurs when two modes are observed. This can be clearly seen in Fig.~\ref{fig:spectmap}, which shows the measured power {\it vs} frequency ($f$) and temperature $T$ for the three current values. These spectra show the presence of an additional mode (indicated by the arrows) for all three currents. The temperature dependence of the amplitude of this additional mode has a clear correlation with the behavior of $(1+\nu^2)$ vs $T$ in Fig.~\ref{fig:LlwNLvstemp}(b). For example, the amplitude of the second mode increases (decreases) with temperature at 28.4~mA (30.3~mA). Thus our results indicate that when two modes are observed, the experimental $(1+\nu^2)$ is enhanced compared to the prediction of the single-mode calculation.
\section{DISCUSSION}
We will now discuss the mechanism for the anomalous temperature dependence of the linewidth. The basic assumption is
that in the presence of two modes, mode coupling near a transition can lead to an increase in the linewidth.
The starting point is a set of coupled equations for the complex amplitudes $c_i$, $i=1,2$ of the
time-dependence of the modes,~\cite{muduli2012prl}
\begin{widetext}
\begin{eqnarray}
\frac{dc_1}{dt}+i\omega_1\left( p_1,p_2\right)c_1
+\left[\Gamma_+\left(p_1,p_2\right)-\Gamma_-\left(p_1,p_2\right)\right]c_1
-ke^{i\varphi}c_2 &=&0\nonumber\\
\frac{dc_2}{dt}+i\omega_2\left( p_1,p_2\right)c_2
+\left[\Gamma_+\left(p_1,p_2\right)-\Gamma_-\left(p_1,p_2\right)\right]c_2
-ke^{i\varphi}c_1 &=&0.\label{eq:coupled_eqn_1}
\end{eqnarray}
\end{widetext}
Here, $\Gamma_+$ and $\Gamma_-$ are the positive and negative damping, and $\omega_1$ and $\omega_2$ the mode-frequencies;
we have indicated the dependence of $\omega_i$, $\Gamma_+$, and $\Gamma_-$ on the mode powers $p_1$ and $p_2$. The equations
contain a linear coupling term with complex amplitude $ke^{i\varphi}$, with $k$ real and $k\geq0$. This term is not allowed on short
time scales if $\omega_1\not=\omega_2$. Here, however, we are interested in behavior over times much larger than the
time-scale of the periods of the modes or of thermal fluctuations. In that case, the coupling mediated through the linear coupling
describes processes in which one mode can decay into the other through intermediate states and energy that is absorbed or
released into other magnetic modes or a thermal reservoir. Such a process becomes more likely as the mode frequencies approach
each other, with a concomitant increase in $k$. The experiments show a significant current and temperature dependence of the main mode frequency. Therefore, the linear mode coupling also has
a strong current and temperature dependence, $k=k(I,T)$ and $\varphi=\varphi(I,T)$. In particular, the magnitude of
$k$ has maxima at currents and temperatures at which mode transitions occur. We will see that this coupling plays a key role.
We now make some simplifying assumptions. First, we assume that the mode frequency $\omega_i$
only depends on $p_i$ and not
on $p_j$, $j\not=i$. Next, we assume that the system is close to, but above, threshold (recall that the threshold current is about 27 mA and the
relevant current values are around 30 mA). We then expand Eq.~(\ref{eq:coupled_eqn_1}) near
$c_i=0$ and write the equations in terms of power amplitude and phase,
$c_i=\frac{Q_i}{\sqrt{\omega_i}}e^{-i\left(\omega_{i,0}t-\varphi_i\right)}$,
where $\omega_{i,0}$ is the threshold mode frequency. This leads to the following equations for the time dependence of the
amplitudes $Q_i$ and phases $\varphi_i$:
\begin{subequations}
\label{eq:coupled_eqns_2}
\begin{eqnarray}
\frac{dQ_1}{dt} & = & \Gamma_g \left( I/I_{\rm th}-1\right) Q_1 -\left( \overline Q Q_1^2+\overline PQ_2^2\right) Q_1\nonumber\\
&&+kQ_2\sqrt{ \frac{\omega_{1,0}}{\omega_{2,0}}} \cos(\varphi+\varphi_2-\varphi_1) \label{eq:coupled_eqn_2a}\\
\frac{dQ_2}{dt} & = & \Gamma_g \left(I/I_{\rm th}-1\right) Q_2 -\left( \overline Q Q_2^2+\overline PQ_1^2\right) Q_2\nonumber\\
&&+kQ_1 \sqrt{\frac{\omega_{2,0}}{\omega_{1,0}}} \cos(\varphi-\varphi_2+\varphi_1) \label{eq:coupled_eqn_2b}\\
\frac{d\varphi_1}{dt} & = & -N_1Q_1^2+k\frac{Q_2}{Q_1}\sqrt{ \frac{\omega_{1,0}}{\omega_{2,0}}}\sin(\varphi+\varphi_2-\varphi_1)
\label{eq:coupled_eqn_2c}\\
\frac{d\varphi_2}{dt} & = & -N_2Q_2^2+k\frac{Q_1}{Q_2}\sqrt{\frac{\omega_{2,0}}{\omega_{1,0}}}\sin(\varphi-\varphi_2+\varphi_1).
\label{eq:coupled_eqn_2d}
\end{eqnarray}
\end{subequations}
Here, $N_i$ is the nonlinear frequency shift, $I_{\rm th}$ the threshold current, and $\overline Q$ and $\overline P$ the diagonal
and off-diagonal nonlinear damping coefficients, respectively. We will for simplicity assume that $N_1=N_2=N$.
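A minimal numerical sketch of these coupled equations is given below. It assumes equal mode frequencies, $\omega_{1,0}=\omega_{2,0}$, so that only the phase difference $\psi=\varphi_2-\varphi_1$ enters, and takes $\overline P<\overline Q$ so that the two modes coexist; all parameter values are invented for illustration:

```python
import numpy as np

# Illustrative Euler integration of the coupled mode equations above, assuming
# equal mode frequencies (omega_{1,0} = omega_{2,0}) so that only the phase
# difference psi = phi2 - phi1 matters, and Pbar < Qbar so that both modes
# coexist. All parameter values are invented for illustration.
gamma_g, i_ratio = 1.0, 1.2      # Gamma_g and I/I_th (dimensionless units)
qbar, pbar = 1.0, 0.5            # diagonal / off-diagonal nonlinear damping
n_shift = 5.0                    # nonlinear frequency shift N (N1 = N2 = N)
k, phi = 0.05, 0.3               # coupling magnitude k and phase varphi

def rhs(q1, q2, psi):
    """dQ1/dt, dQ2/dt and d(psi)/dt for the reduced three-variable system."""
    g = gamma_g * (i_ratio - 1.0)
    dq1 = g * q1 - (qbar * q1**2 + pbar * q2**2) * q1 + k * q2 * np.cos(phi + psi)
    dq2 = g * q2 - (qbar * q2**2 + pbar * q1**2) * q2 + k * q1 * np.cos(phi - psi)
    dpsi = (-n_shift * (q2**2 - q1**2)
            + k * (q1 / q2) * np.sin(phi - psi)
            - k * (q2 / q1) * np.sin(phi + psi))
    return dq1, dq2, dpsi

q1, q2, psi = 0.3, 0.2, 0.1      # unequal initial amplitudes
dt = 1e-3
for _ in range(100_000):         # simple Euler stepping to t = 100
    dq1, dq2, dpsi = rhs(q1, q2, psi)
    q1, q2, psi = q1 + dt * dq1, q2 + dt * dq2, psi + dt * dpsi
# With this symmetric choice both modes settle to equal, finite power.
```

For $\overline P>\overline Q$ the equal-power state is instead unstable and one of the modes dominates.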
Next, we introduce the transformations\cite{vanderSande} $Q_1=\sqrt{p}\cos\left(\frac{\theta+\pi/2}{2}\right)$ and
$Q_2=\sqrt{p}\sin\left(\frac{\theta+\pi/2}{2}\right)$, where $p$ is the total power in the two modes. These transformations
recast the description of the mode amplitudes $Q_i$ in terms of $p$ and $\theta$, where $p$ is total power and $\theta$
describes how the power is distributed between the two modes.
Inserting
these in Eqs.~(\ref{eq:coupled_eqn_2a}) and (\ref{eq:coupled_eqn_2b}) and assuming
that the {\em average} power $p_0$ is stationary and writing $p=p_0+\delta p$, with $dp_0/dt=0$ and $\delta p$ the power fluctuations, we obtain the following
linearized equation for the power (linearized in power fluctuations about the average power $p_0$):
\begin{widetext}
\begin{eqnarray}
\frac{d(p_0+\delta p)}{dt} & = & 2\left( I/I_{\rm th}-1\right)\Gamma_g\left(p_0+\delta p\right)
-2\overline Q\left(p_0^2+2p_0\delta p\right)-\left(\overline P-\overline Q\right)\cos^2\theta\left(p_0^2+2p_0\delta p\right)\nonumber\\
&&+{k}\left(p_0+\delta p\right)\cos\theta\left[\sqrt{\frac{\omega_{1,0}}{\omega_{2,0}}}\cos\left(\varphi+\psi\right)
+\sqrt{\frac{\omega_{2,0}}{\omega_{1,0}}}\cos\left(\varphi-\psi\right)\right],
\label{eq:power_fluct_1}
\end{eqnarray}
where $\psi=\varphi_2-\varphi_1$, with a time evolution given by
\begin{eqnarray}
\frac{d\psi}{dt} & = & -Np_0\sin\theta-N\delta p\sin\theta
+k\frac{1-\sin\theta}{\cos\theta}\sqrt{\frac{\omega_{2,0}}{\omega_{1,0}}}\sin(\varphi-\psi)
-k\frac{1+\sin\theta}{\cos\theta}\sqrt{\frac{\omega_{1,0}}{\omega_{2,0}}}\sin(\varphi+\psi).
\label{eq:phase_fluct_1}
\end{eqnarray}
\end{widetext}
Ignoring fluctuations for the moment, and keeping
in mind that the power $p_0$ is constant, the equations~(\ref{eq:power_fluct_1}) and (\ref{eq:phase_fluct_1}) describe a
two-dimensional dynamically driven system in $(\theta,\psi)$-space. Far away from a mode transition, so that $k\approx0$, the system under consideration here has a single stable fixed point: $\theta=-\pi/2$ ($\theta=\pi/2$) with all power in mode $\omega_1$ ($\omega_2$) well below (above) the mode transition. Near or at the mode transition, the system may have stable fixed points, unstable fixed points, or limit cycles. In any case, we will assume that the experimental linewidth arises from fluctuations in the {\em total} power and phase difference, and we will therefore ignore fluctuations in $\theta$. By enforcing the stationarity condition $dp_0/dt=0$ we obtain from Eq.~(\ref{eq:power_fluct_1})
\begin{widetext}
\begin{equation}
2\left(I/I_{\rm th}-1\right)\Gamma_g-2\overline Qp_0-\left(\overline P-\overline Q\right)p_0\cos^2\theta
+kp_0\cos\theta\left[\sqrt{\frac{\omega_{1,0}}{\omega_{2,0}}}\langle\cos(\varphi+\psi)\rangle
+\sqrt{\frac{\omega_{2,0}}{\omega_{1,0}}}\langle\cos(\varphi-\psi)\rangle\right]=0,
\label{eq:constant_p}
\end{equation}
\end{widetext}
where $\langle\ldots\rangle$ denotes a suitable time-average over times long compared to the time scale of fluctuations ({\em e.g.,} a limit
cycle).
Inserting this into Eq.~(\ref{eq:power_fluct_1}), and separating $\psi$ into a regular part $\Psi$, describing the
slow time evolution of the phase difference of the two modes, and fluctuations
$\delta\psi$, $\psi=\Psi+\delta\psi$,
and replacing $\cos\psi$ ($\sin\psi$) with $\langle\cos\Psi\rangle$ ($\langle\sin\Psi\rangle$) we obtain
the following linearized equations relating the fluctuations in power and phase angle difference:
\begin{widetext}
\begin{eqnarray}
\frac{d\delta p}{dt} & = & 2\left(I/I_{\rm th}-1\right)\Gamma_g\delta p-4\overline Qp_0\delta p
-2\left(\overline P-\overline Q\right)\cos^2\theta\, p_0\delta p
+k\delta p\cos\theta\left[ \sqrt{\frac{\omega_{1,0}}{\omega_{2,0}}}\langle\cos(\varphi+\Psi)\rangle+\sqrt{\frac{\omega_{2,0}} {\omega_{1,0}}}\langle\cos(\varphi-\Psi)\rangle\right]\nonumber\\
&&-kp_0\cos\theta\delta\psi \left[ \sqrt{\frac{\omega_{1,0}}{\omega_{2,0}}}\langle\sin(\varphi+\Psi)\rangle
-\sqrt{\frac{\omega_{2,0}} {\omega_{1,0}}}\langle\sin(\varphi-\Psi)\rangle\right]
\label{eq:linear_fluct_2}
\end{eqnarray}
\end{widetext}
and
\begin{eqnarray}
\frac{d\delta\psi}{dt} &=& -N\delta p\sin\theta\nonumber\\
&&-k\delta\psi\frac{1-\sin\theta}{\cos\theta} \sqrt{\frac{\omega_{2,0}}{\omega_{1,0}}}\langle\cos(\varphi-\Psi)\rangle\nonumber\\
&&-k\delta\psi\frac{1+\sin\theta}{\cos\theta} \sqrt{\frac{\omega_{1,0}}{\omega_{2,0}}}\langle\cos(\varphi+\Psi)\rangle,
\label{eq:fluct_3}
\end{eqnarray}
with $\Psi$ satisfying
\begin{eqnarray}
\frac{d\Psi}{dt} & = & -Np_0\sin\theta\nonumber\\
&&+k\frac{1-\sin\theta}{\cos\theta}\sqrt{\frac{\omega_{2,0}}{\omega_{1,0}}}\sin(\varphi-\Psi)\nonumber\\
&&-k\frac{1+\sin\theta}{\cos\theta}\sqrt{\frac{\omega_{1,0}}{\omega_{2,0}}}\sin(\varphi+\Psi).
\label{eq:Phi_eqn}
\end{eqnarray}
We pause for a moment to note that Eqs.~(\ref{eq:linear_fluct_2}) to (\ref{eq:Phi_eqn}) restricted to a single mode
($k=0$ and $\cos\theta=0$)
are precisely the results of Kim, Slavin, and Tiberkevich\cite{slavin2005ieeem,slavin2008ieeem,slavin2009ieeem,kim2008prl},
with $\Gamma_p=-\left(I/I_{\rm th}-1\right)\Gamma_g+2\overline Qp$, describing the power fluctuations in the oscillator, and
how the power fluctuations couple to the phase fluctuations through the nonlinear frequency shift $N$. It is of course
this latter coupling that gives rise to the enhanced linewidth through the enhanced phase fluctuations. As we
noted earlier, in the low-temperature limit,
applicable here, the single-oscillator linewidth enhancement is described by the ratio of the nonlinear frequency shift $N$ to the
power restoration rate $\Gamma_p$: Power fluctuations couple to the nonlinear frequency shift, and the longer the decay time
of power fluctuations is ({\em i.e.,} smaller $\Gamma_p$), the more power fluctuations can affect phase fluctuations. For the system
under consideration here, Eqs.~(\ref{eq:linear_fluct_2}) and (\ref{eq:fluct_3}) show that the mode-coupling $k$ leads to
additional coupling between power and phase fluctuations. In general, the solutions to these equations, especially in the presence
of thermal fluctuations, are complicated. We can, however, gain some insight by assuming that $\omega_{1,0}\approx\omega_{2,0}$ with
$\omega_{2,0} > \omega_{1,0}$
and consider the system far from a mode transition so that $k$ is small and $\theta=-\pi/2+\delta$, with
$\delta\ll1$, and $\varphi$ small and negative. For the nanocontact STOs, the nonlinear frequency shift $N$ is large, and the nonlinear damping is small. First, with $N$ large and $k$ small, we can neglect the terms in $\delta\psi$ on the right-hand side of Eq.~(\ref{eq:fluct_3}). This means that power amplitude fluctuations couple to phase fluctuations through $N$ just as for the single oscillator. It follows that if the power amplitude fluctuations are enhanced or prolonged by the mode coupling, so that $\delta p$ is enhanced or $\Gamma_p$ is reduced, then the phase fluctuations, and therefore the linewidth, are enhanced as well.
Second, for $N$ large and the nonlinear damping small, at the fixed point $\theta\approx-\pi/2$ we have
$\cos(\varphi-\Psi)\approx 0$ and $\sin(\varphi-\Psi)\approx -1$. If we neglect the terms in $\delta\psi$ on the right-hand
side of Eq.~(\ref{eq:linear_fluct_2}), the net effect under these assumptions is to reduce the power restoration rate, $\Gamma_p\to\Gamma_p-k\cos(\theta)\sin(|\varphi|)$, with a concomitant enhancement of the nonlinear amplification and of the linewidth, since the coupling between power amplitude and phase fluctuations is given by $\nu=Np_0/\Gamma_p$ in the single-mode theory.\cite{slavin2009ieeem} This explains qualitatively why the observed temperature dependence of the linewidth in general does not agree with the theoretical expression\cite{slavin2009ieeem} (Fig.~\ref{fig:LlwNLvstemp}). In the latter, the temperature
dependence is driven by the stochastic thermal noise. In contrast, the experimentally
determined nonlinear amplification contains a modified power restoration rate that includes
the temperature (and current) dependence of $k$ (and $\varphi$). %
\section{CONCLUSIONS}
In conclusion, we have shown that the behavior of spin torque oscillator linewidths is to a large extent determined by nonlinearities arising from a number of mode transitions. The mode transitions are observed at increasing current at fixed temperature, or at increasing temperature
at fixed current. Near the mode transitions, the linewidth increases substantially. Nevertheless, both the current and temperature
dependence of the linewidth are well described by the analytical single-oscillator theory using the nonlinear
amplification extracted from experimental data. In contrast, the temperature dependence of the linewidth near the mode transitions does not
agree well with the single-oscillator analytical theory if the nonlinear amplification is calculated directly from the theory. The experimental
data showed the presence of an additional mode where the nonlinear amplification is enhanced near the mode transitions. We have
argued that a temperature-dependent mode coupling leads to a reduction of the power restoration rate, and therefore
an enhancement of the nonlinear amplification and of the linewidth, and that this at least qualitatively explains the anomalous temperature dependence of the linewidth near the mode transitions. These results are important for the understanding of linewidth in spin torque oscillators.
\section*{ACKNOWLEDGEMENTS}
We thank Fred~Mancoff at Everspin Technologies, USA for providing the samples used in this work. We also thank S.~Bonetti and Niels de Vreede for assistance in experiments and useful discussions. Support from the Swedish Foundation for Strategic Research (SSF), the Swedish Research Council (VR), and the G\"{o}ran Gustafsson Foundation are gratefully acknowledged. Knut and Alice Wallenberg foundation (KAW), is acknowledged for funding of the equipment used for measurements presented here. P. M. acknowledges Swedish Research Council (VR) for the "Junior Researchers Project Grant". J.~\AA . is a Royal Swedish Academy of Sciences Research Fellow supported by a grant from the Knut and Alice Wallenberg Foundation. Argonne National Laboratory is operated under Contract No. DE-AC02-06CH11357 by UChicago Argonne, LLC.
\section{Introduction}\label{sec:introduction}
Adaptive coding techniques are frequently employed, especially in wireless communications, in order to dynamically
adjust the coding rate to changing channel conditions. An example of an adaptive coding technique is the puncturing of a
mother code. When the channel conditions are good, more bits are punctured and the coding rate is increased. In poor
channel conditions all redundant bits are transmitted and the coding rate drops. However, in harsh conditions, the
receiver might not be able to successfully decode the received signal, even if all the redundant bits have been
transmitted. In such a case, the coded block can be retransmitted until the sent information is successfully decoded.
This is equivalent to additional repetition coding, which further lowers the coding rate below the mother coding rate.
However, the use of retransmission techniques might not be suitable nor possible in some situations, such as
multicast/broadcast transmissions, or whenever the return link is strictly limited or not available (such situations
are generally encountered in satellite communications). The main alternative in this case is the use of {\em erasure
codes} that operate at the transport or the application layer of the communication system: source data packets are
extended with redundant (also referred to as {\em repair}) packets that are used to recover the lost data at the
receiver. Physical (PHY) and upper layer (UL) codes are not mutually exclusive, but they are complementary to each
other. Adaptive coding schemes are also required at the upper layer, in order to dynamically adjust to variable loss
rates. Besides, codes with very small rates or even {\em rateless} \cite{luby2002lc,shokrollahi2006rc} are sometimes
used at the application layer for fountain-like content distribution applications.
In this paper we propose a coding technique that produces extra redundant bits, so as to decrease the coding
rate below the mother coding rate. Extra redundant bits can be produced in an incremental way, yielding very small
coding rates, or can be optimized for a given target rate below the mother coding rate. As for puncturing, the proposed
technique allows for using the same decoder, regardless of how many extra redundant bits have been produced, which
considerably increases the flexibility of the system, without increasing its complexity.
The proposed coding scheme is based on non-binary low density parity check (NB-LDPC) codes \cite{gall-monograph} or,
more precisely, on their {\em extended binary image} \cite{Valentin10}. If $q=2^p$ denotes the size of the non-binary
alphabet, each non-binary symbol corresponds to a $p$-tuple of bits, referred to as its binary image. Extra redundant
bits, called {\em extended bits}, are generated as the XOR of some bits from the binary image of the same non-binary
coded symbol. If a certain number of extended bits are transmitted over the channel, we obtain an {\em extended code},
the coding rate of which is referred to as {\em extended (coding) rate}. In the extreme case when all the extended bits
are transmitted, the mother code is turned into a {\em very small rate} code, and can be used for fountain-like content
distribution applications \cite{Valentin10}. A similar approach to fountain codes, by using multiplicatively repeated
NB-LDPC codes, has been proposed in \cite{kasai10}. If some extended rate is targeted, we show that the extended code
can be optimized by using density evolution methods.
The paper is organized as follows. Section \ref{sec:nbldpc_codes} gives the basic definitions and the notation related
to NB-LDPC codes. In Section \ref{sec:extended_nbldpc_codes}, we introduce the extended NB-LDPC codes and discuss their
erasure decoding. The analysis and optimization of extended NB-LDPC codes are addressed in Section
\ref{sec:analysis_optimization}. Section \ref{sec:code_design_performance} focuses on the code design and presents
simulation results, and Section \ref{sec:conclusions} concludes the paper.
\section{Non-binary LDPC Codes}\label{sec:nbldpc_codes}
We consider NB-LDPC codes defined over $\f_q$ \cite{Davey-MacKey}, the finite field with $q$ elements, where $q = 2^p$
is a power of $2$ (this condition is only assumed for practical reasons). We fix once and for all an isomorphism of vector
spaces:
\begin{equation}
\label{eq:identify}
\f_q \stackrel{\sim}{\longrightarrow} \f_2^p \nonumber
\end{equation}
Elements of $\f_q$ will also be referred to as {\em symbols}, and we say that $\ux = (x_0,\dots,x_{p-1})\in\f_2^p$ is
the {\em binary image} of the symbol $X\in{\f_q}$, if they correspond to each other by the above isomorphism. A
non-binary LDPC code over $\f_q$ is defined as the kernel of a sparse parity-check matrix $H\in
\mathbf{M}_{M,N}(\f_q)$. Alternatively, it can be represented by a bipartite (Tanner) graph \cite{Tann} containing
symbol-nodes and constraint-nodes associated respectively with the $N$ columns and $M$ rows of $H$. A symbol-node and a
constraint-node are connected by an edge if and only if the corresponding entry of $H$ is non-zero; in this case, the
edge is assumed to be {\em labeled} by the non-zero entry. As usual \cite{Rich-Urba}, we denote by $\lambda$ and
$\rho$ the left (symbol) and right (constraint) edge-perspective degree distribution polynomials. Hence, $\lambda(x) =
\sum_d \lambda_d x^{d-1}$ and $\rho(x) = \sum_d \rho_d x^{d-1}$, where $\lambda_d$ and $\rho_d$ represent the fraction
of edges connected respectively to symbol and constraint nodes of degree-$d$. The design coding rate is defined as $r =
1 - \frac{\int_{0}^{1}\rho(x)\text{d}x}{\int_{0}^{1}\lambda(x)\text{d}x}$, and it is equal to the coding rate if and
only if the parity-check matrix is full-rank.
\section{Extended Non-binary LDPC Codes}\label{sec:extended_nbldpc_codes}
\subsection{Extended code description}
For any integer $1 \leq k \leq q-1$, let $[k] = (k_0, \dots, k_{p-1})^{\text{T}}$ denote the column vector
corresponding to the binary decomposition of $k$; {\em i.e.} $k = \sum_{i = 0}^{p-1}k_i2^i$, with $k_i\in\{0,1\}$. Let
$X\in{\f_q}$ be a non-binary symbol, and $\ux = (x_0,\dots,x_{p-1})$ be its binary image. The $k$-th {\em extended bit}
of $X$ is by definition:
$$\alpha_k = \ux \times [k] = \sum_{i = 0}^{p-1} k_i x_i$$
The vector $\ualpha = (\alpha_1,\dots,\alpha_{q-1})$ is called {\em extended binary image} of $X$. Note that
$\alpha_{2^i} = x_i$, for any $0\leq i \leq p-1$. An extended bit $\alpha_k$ is said to be {\em nontrivial} if $k$ is
not a power of $2$ (hence, $\alpha_k$ is a linear combination of at least two bits from the binary image $\ux$ of $X$).
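A minimal Python sketch of this construction (our own illustration; function names are ours, and a symbol is represented directly by its binary image $\ux$):

```python
# Extended binary image of a symbol over GF(2^p), the symbol being
# represented by its binary image x = (x_0, ..., x_{p-1}).
def extended_bit(x, k):
    """k-th extended bit: GF(2) inner product of x with [k]."""
    k_bits = [(k >> i) & 1 for i in range(len(x))]  # [k] = (k_0, ..., k_{p-1})
    return sum(ki * xi for ki, xi in zip(k_bits, x)) % 2

def extended_binary_image(x):
    """(alpha_1, ..., alpha_{q-1}) with q = 2^p."""
    q = 1 << len(x)
    return [extended_bit(x, k) for k in range(1, q)]

# Sanity check: alpha_{2^i} = x_i, i.e. the binary image is embedded in the
# extended binary image at the power-of-two positions.
x = (1, 0, 1)
alpha = extended_binary_image(x)
assert tuple(alpha[2 ** i - 1] for i in range(len(x))) == x
```

The nontrivial extended bits are the entries whose index $k$ is not a power of two.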
Now, consider a NB-LDPC code defined over $\f_q$, with coding rate $r = \frac{K}{N}$. Let $(X_1, \dots, X_N) \in
\f_q^N$ be a non-binary codeword, $(\ux_1, \dots, \ux_N) \in \f_2^{Np}$ be its binary image, and $(\ualpha_1, \dots,
\ualpha_N) \in \f_2^{N(q-1)}$ be its extended binary image. By transmitting the extended binary image over the channel,
we obtain a code with rate $\frac{Kp}{N(q-1)} = r\frac{p}{q-1}$, which can be advantageously used for applications
requiring very small coding rates \cite{Valentin10}.
We define an {\em extension of the NB-LDPC code}, as a family of matrices $\{A_1,\dots,A_N\}$, where each
$A_n\in\mathbf{M}_{p,t_n}(\f_2)$ is a binary matrix with $p$ rows and $t_n$ columns, with $t_n\geq 0$. Let $\ua_n =
\ux_n \times A_n \in \f_2^{t_n}$; hence, $\ua_n$ is constituted of $t_n$ extended bits of $X_n$ (possibly with
repetitions, if $A_n$ contains two or more identical columns). The binary vector $(\ua_1,\dots,\ua_N)$ is called {\em
extended codeword}, and the {\em extended coding rate} is given by $r_e = \frac{Kp}{T}$, where $T = \sum_{n=1}^N t_n$.
Note that the above definition is very broad, and it can yield extended rates below as well as extended rates above the
mother coding rate. In particular, it includes punctured codes: if $A_n = \text{col}([2^1], \dots, [2^{p-1}])$, then
$\ua_n = (x_{n,1},\dots,x_{n,p-1})$, which is the same as puncturing the first bit, $x_{n,0}$, from the binary image of
$X_n$. Moreover, taking some $t_n = 0$ is equivalent to puncturing the whole symbol $X_n$. The optimization of
puncturing distributions for NB-LDPC codes has been addressed in \cite{ISITA2010}. In this paper, we restrict ourselves
to the case when matrices $A_n$ are of the form $A_n = [I_p \mid B_n ]$, where $I_p$ is the $p\times p$ identity
matrix, meaning that each $\ua_n$ contains the binary image $\ux_n$ (the use of ``extension'' complies with its literal
meaning). We will further assume that any two columns of a matrix $A_n$ are different. It follows that $p\leq t_n \leq
q-1$, and each $\ua_n$ is constituted of the binary image $\ux_n$ and $t_n-p$ pairwise different (nontrivial) extended
bits. In this case, we shall say that {\em the symbol $X_n$ is extended by $k=t_n-p$ bits}.
For instance, let $p=3$ and consider that the following matrix $A$ is used to extend by $2$ bits some coded symbol $X$:
$$
\ua = \ux \times A, \text{ where } A = \left[
\begin{array}{lllll}
1 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 1 \\
0 & 0 & 1 & 1 & 1
\end{array}
\right]
$$
Then $\ua = (\alpha_1, \alpha_2, \alpha_4, \alpha_5, \alpha_6) = (x_0, x_1, x_2, x_0 \wedge x_2, x_1 \wedge x_2)$,
where $\wedge$ denotes the bit-xor operator.
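The example can be checked numerically; a short Python sketch (our illustration), with the extension matrix stored column-wise:

```python
# Columns of the example matrix A = [I_3 | B]; each column is the binary
# decomposition [k] of the index k of the extended bit it produces.
A_columns = [
    (1, 0, 0),  # alpha_1 = x_0
    (0, 1, 0),  # alpha_2 = x_1
    (0, 0, 1),  # alpha_4 = x_2
    (1, 0, 1),  # alpha_5 = x_0 xor x_2
    (0, 1, 1),  # alpha_6 = x_1 xor x_2
]

def extend(x, columns):
    """GF(2) product a = x * A, one extended bit per column of A."""
    return tuple(sum(c * xi for c, xi in zip(col, x)) % 2 for col in columns)

# For x = (1, 1, 0): a = (x_0, x_1, x_2, x_0^x_2, x_1^x_2) = (1, 1, 0, 1, 1)
assert extend((1, 1, 0), A_columns) == (1, 1, 0, 1, 1)
```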
In order to determine the coding rate of the extended code, we denote by $\fdk$ the {\em fraction of degree-$d$ symbols
with $k$ nontrivial extended bits}, $0 \leq k < q - p$; thus
$\sum_{k=0}^{q-p-1}\!\fdk \!= \!1$.\break
The average number of nontrivial extended bits per coded symbol is given by
${f} = \sum_{d=1}^{d_s} \Lambda_d\sum_{k=0}^{q-p-1}k\fdk$,
where $d_s$ is the maximum symbol node degree, and $\Lambda_d = \frac{\lambda_d}{d \int_0^1{\lambda(x)dx}}$ is the
fraction of degree-$d$ symbol nodes. It follows that the extended coding rate is given by $r_e = r\frac{p}{p+f}$. Thus,
we can achieve an arbitrary extended rate within the interval $\left[r \frac{p}{q - 1} , r\right]$ by varying the
parameter $f$.
Figure \ref{fig:extended_codes} illustrates an extended code defined over $\f_8$, with $f_{2,0} = f_{2,1} = f_{2,2} =
f_{2,4} = 1/4$, and $f_{3,2} = f_{3,3} = 1/2$, which correspond to $f=2$. The mother coding rate is $r=0.5$ and the
extended coding rate is $r_e = 0.3$.
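The relation $r_e = r\frac{p}{p+f}$ is easy to check numerically on the figure's parameters; a minimal Python sketch (our illustration):

```python
def extended_rate(r, p, f):
    """Extended coding rate r_e = r * p / (p + f), where f is the average
    number of nontrivial extended bits per coded symbol."""
    return r * p / (p + f)

# Figure example: mother rate r = 0.5 over GF(8) (p = 3) with f = 2
assert abs(extended_rate(0.5, 3, 2) - 0.3) < 1e-12

# Sweeping f from 0 to q - p - 1 = 4 covers the achievable interval
# [r * p / (q - 1), r] = [3/14, 1/2]
assert abs(extended_rate(0.5, 3, 4) - 0.5 * 3 / 7) < 1e-12
```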
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{extended_codes}
\caption{Extension of a NB-LDPC code. Blue circles represent bits of the binary image ($p=3$), while red circles
represent (nontrivial) extended bits.}
\label{fig:extended_codes}
\vspace{-3mm}
\end{figure}
\subsection{Iterative erasure decoding}
We consider that the extended codeword $(\ua_1,\dots,\ua_N)$ is transmitted over a binary erasure channel (BEC). At the
receiver part, the received bits (both from the binary image and extended bits) are used to reconstruct the
corresponding non-binary symbols. Precisely, for each received bit we know its position within the extended binary
image of the corresponding symbol. Hence, for each symbol node we can determine a set of {\em eligible symbols} that is
constituted of symbols whose extended binary images match the received bits. These sets are then iteratively updated,
according to the linear constraints between symbol-nodes \cite{Valentin08}. Alternatively (and equivalently), the
extended code can be decoded by using the linear-time erasure decoding proposed in \cite{Valentin10}.
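To make the notion of eligible symbols concrete, the following brute-force Python sketch (our illustration) enumerates the symbols of $\f_{2^p}$ compatible with the non-erased bits; the columns of the extension matrix are stored as integer bitmasks:

```python
def eligible_symbols(p, columns, received):
    """Symbols x in GF(2^p) (as integers) whose extended bits match the
    received sequence; received entries are 0, 1, or None (erasure)."""
    def bit(x, col):                      # GF(2) inner product <x, [k]>
        return bin(x & col).count("1") % 2
    return {x for x in range(1 << p)
            if all(r is None or bit(x, c) == r
                   for c, r in zip(columns, received))}

# GF(8) symbol extended with A = [I_3 | [5]]: columns 1, 2, 4, 5.
# Receiving x_0 = 1 and x_0 ^ x_2 = 0 (the rest erased) forces
# x_0 = x_2 = 1 and leaves x_1 free, so two eligible symbols remain,
# in agreement with the count 2^(p - rank(A_rec)) = 2^(3-2).
assert eligible_symbols(3, [1, 2, 4, 5], [1, None, None, 0]) == {5, 7}
```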
The {\em asymptotic threshold} of an ensemble of codes is defined as the maximum erasure probability $\pth$ that allows
transmission with an arbitrary small error probability when the code length tends to infinity \cite{Rich-Urba}. Given
an ensemble of codes, its threshold value can be efficiently computed by tracking the fraction of erased messages
passed during the belief propagation decoding; this method is referred to as {\em density evolution}. In this paper,
the density evolution is approximated based on the Monte-Carlo simulation of an infinite code, similar to the method
presented in \cite{ISITA2010}. This method has two main advantages: it can easily incorporate the extending
distribution $\{\fdk\}_{d,k}$, and it can be extrapolated to more general channel models.
\section{Analysis and optimization}\label{sec:analysis_optimization}
\begin{figure}[!b]
\centering\vspace{-3mm}
\includegraphics[width=\columnwidth]{1_ext_bit_selection}
\caption{1-bit extension for regular NB-LDPC codes over $\f_{16}$}
\label{fig:ext_bit_selec_expl}
\end{figure}
The goal of this section is to answer the following questions.
First, assume we are given a symbol node that has to be extended by $k$ bits. How should these bits be chosen
among the $q-p-1$ (nontrivial) extended bits?
Second, given an extended coding rate $r_e$, how should the extended bits be distributed over the symbol-nodes? Put
differently, which is the optimal extending distribution $\left\{\fdk\right\}$?
\subsection{Extended bits selection strategy} \label{subsubsec:ext_bit_selec}
We assume that we are given a symbol-node that has to be extended by $k$ bits. A choice of the $k$ bits among the
$q-p-1$ extended bits corresponds to an extending matrix $A = [I_p \mid B]$ of size $p\times (p+k)$, with pairwise
distinct columns. For each such matrix, assume that the extended symbol $\ua = \ux \times A$ is transmitted over the
BEC, and let $E(A)$ be the expected number of eligible symbols at the receiver. Recall that an eligible symbol is a
symbol whose extended binary image matches the received bits. If all transmitted bits have been erased, any symbol is
eligible. Conversely, if the received bits completely determine the non-binary symbol, then there is only one eligible
symbol. More generally, let $\ua_{\text{rec}}$ denote the sequence of received bits, and $A_{\text{rec}}$ denote the
submatrix of $A$ determined by the columns that correspond to the received positions of $\ua$. Then the eligible
symbols are the solutions of the linear system $\ux \times A_{\text{rec}} = \ua_{\text{rec}}$, and their number is
equal to $2^{p-\text{rank}(A_{\text{rec}})}$. Now, if $\epsilon$ denotes the erasure probability of the BEC, it can be
easily verified that:
$$E(A) = \sum_{i=0}^{p+k}(1-\epsilon)^i\epsilon^{p+k-i}\left(\sum_{A_i\subseteq A} 2^{p-\text{rank}(A_i)}\right),$$
where the inner sum runs over all the submatrices $A_i$ constituted of $i$ among the $p+k$ columns of $A$. Hence,
in order to minimize the expected number of eligible symbols $E(A)$, we choose $A$ such that $\dmin(A)$ is maximal,
where $\dmin(A)$ is the smallest number of linearly dependent columns of $A$.
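Both $E(A)$ and $\dmin(A)$ can be evaluated by brute force for the small matrices considered here; a Python sketch (our illustration), with the columns of $A$ stored as integer bitmasks:

```python
from itertools import combinations

def gf2_rank(vectors):
    """Rank over GF(2) of a list of integer bitmasks."""
    basis = {}                           # leading bit -> basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def d_min(cols):
    """Smallest number of linearly dependent columns."""
    for s in range(1, len(cols) + 1):
        if any(gf2_rank(sub) < s for sub in combinations(cols, s)):
            return s
    return len(cols) + 1                 # all columns independent

def expected_eligible(cols, p, eps):
    """E(A): expected number of eligible symbols on a BEC(eps)."""
    n, total = len(cols), 0.0
    for i in range(n + 1):
        s_i = sum(2 ** (p - gf2_rank(sub)) for sub in combinations(cols, i))
        total += (1 - eps) ** i * eps ** (n - i) * s_i
    return total

# One-bit extension over GF(16): I_4 plus the column [15] (XOR of all four
# bits) maximizes d_min and indeed yields a smaller E(A) than I_4 plus the
# column [3] (= x_0 xor x_1).
I4 = [1, 2, 4, 8]
assert d_min(I4 + [15]) == 5 and d_min(I4 + [3]) == 3
assert expected_eligible(I4 + [15], 4, 0.5) < expected_eligible(I4 + [3], 4, 0.5)
```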
\subsubsection{One extended bit per symbol node} \label{expl:selec_one_ext_bit} Consider the ensemble of regular
$\left(\lambda(x) = x, \rho(x) = x^3\right)$ LDPC codes defined over $\f_{16}$. Assume that each symbol-node is
extended by $k=1$ bit, such as to achieve an extended rate $r_e = 0.4$. According to the choice of the extended bit
(among the $11$ nontrivial extended bits), $\dmin$ may be equal to $3,4$, or $5$. The asymptotic threshold
corresponding to each choice of the extended bit is shown in Figure \ref{fig:ext_bit_selec_expl}. Note that extended
bits are ordered on the abscissa according to the corresponding $\dmin$. For comparison purposes, we show also the
asymptotic threshold corresponding to the repetition of some bit from the binary image (trivial extended bit
$\alpha_{2^i} = x_i$), in which case $\dmin=2$. Also, the blue line corresponds to the threshold obtained if each symbol
node were extended by a randomly chosen nontrivial extended bit. We observe that the best threshold is obtained when
each symbol node is extended by $\alpha_{15}$, which is\break the XOR of the four bits $x_{0},x_{1},x_{2},x_{3}$ of the
binary image.
\begin{figure}[!b]
\centering \vspace{-3mm}
\includegraphics[width=\columnwidth]{ext_bit_selection}
\caption{$k$-bit extension for regular and semi-regular NB-LDPC codes over $\f_{16}$}
\label{fig:ext_bit_selec_expl_overall}
\end{figure}
\subsubsection{Several extended bits per symbol node} \label{expl:selec_ext_bit}
We consider two ensembles of regular codes $\mathcal{C}_1\left(\lambda(x)=x, \rho(x)=x^3\right)$,
$\mathcal{C}_2\left(\lambda(x)=x^2, \rho(x)=x^5\right)$ and one ensemble of semi-regular codes
$\mathcal{C}_3\left(\lambda(x)=0.5x+0.5x^4, \rho(x)=0.25x^4+0.75x^5\right)$, of coding rate $r=1/2$, defined over
$\f_{16}$. For each ensemble of codes, we consider five different cases, in which all symbol nodes are extended by the
same number $k$ of bits, with $k = 1$, $2$, $3$, $4$, and $5$. Accordingly, the extended coding rates are $r_e = 0.4$,
$0.33$, $0.29$, $0.25$, and $0.22$.
The {\em normalized gap to capacity}, defined as:
$$\delta = \frac{\text{capacity} - \text{threshold}}{\text{capacity}} = \frac{1-r-\pth}{1-r},$$
is shown in Figure \ref{fig:ext_bit_selec_expl_overall}. Solid curves correspond to a $\dmin$-optimized choice of the
extended bits, while dashed curves correspond to a random choice of the extended bits. For $k = 5$, there is only a
small difference between these two strategies. However, when $k \leq 4$, the gain of the $\dmin$-optimized choice is
significant for both regular and semi-regular codes.
\subsection{Extending distribution analysis} \label{sec:extending_distr}
First of all, we discuss the case of regular codes. In Figure~\ref{fig:spread_cluster_regular_expl}, we consider three
ensembles of regular codes over $\f_{16}$, with coding rate $r = 0.5$. For each ensemble of codes, we consider five
cases, corresponding to values of $k$ between $1$ and $5$. In each case, a fraction $f_k$ of symbol-nodes are extended
by $k$ bits, while the remaining symbol-nodes have no extended bit. The fraction $f_k$ is chosen such that the extended
coding rate $r_e = 0.4$. Hence, $f_k = 1, 0.5, 0.33, 0.25, 0.20$, for $k = 1,2,3,4,5$, respectively. The rightmost
point on the abscissa corresponds to a sixth case, in which the extended rate $r_e = 0.4$ is achieved by extending
$9\%$ of symbol-nodes by $k=11$ bits (hence, $k = q-p-1$, which is the maximum number of extended bits). For any of the
three ensembles, we can observe that the smallest gap to capacity is obtained for $k=1$, which means that extended bits
are {\em spread} over as many symbol nodes as possible (in this case, $100$\%), instead of being {\em clustered} over
the smallest possible number of symbol-nodes.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{spreading_clustering_regular}
\caption{Comparison of spreading vs. clustering extending distributions for regular NB-LDPC codes over $\f_{16}$}
\label{fig:spread_cluster_regular_expl}
\vspace{-4mm}
\end{figure}
In case of irregular NB-LDPC codes, let $\phi = \left\{f_{d,k}\right\}_{d,k}$ be an extending distribution. Thus,
$\fdk$ is the fraction of degree-$d$ symbols with $k$ nontrivial extended bits, $0 \leq k < (q - p)$. Let ${f}_d$
denote the average number of extended bits per symbol-node of degree-$d$; that is:
$$ {f}_d = \sum_{k = 0}^{q-p-1}k\fdk \in [0, q-p-1]$$
\noindent We say that {\em the extending distribution $\phi$ is of spreading-type} if for any degree $d$, $f_{d,k} \neq
0$ only if $k=\lfloor{f}_d\rfloor$ or $k=\lceil{f}_d\rceil$. In other words, for any degree $d$, the extended bits
are uniformly spread over all the symbol-nodes of degree $d$. Clearly, a spreading-type distribution is completely
determined by the parameters $\{{f}_d\}$, as we have $f_{d,\lfloor{f}_d\rfloor} = \lceil{f}_d\rceil - {f}_d$,
$f_{d,\lceil{f}_d\rceil} = {f}_d - \lfloor{f}_d\rfloor$, and $f_{d,k} = 0$ for $k \not\in \{\lfloor{f}_d\rfloor,
\lceil{f}_d\rceil\}$.
\smallskip
\noindent We say that {\em the extending distribution $\phi$ is of clustering-type} if for any degree $d$, $f_{d,k}
\neq 0$ only if $k=0$ or $k=q-p-1$. In other words, for any degree $d$, the extended bits are clustered over the smallest
possible fraction of symbol-nodes of degree $d$. Clearly, a clustering-type distribution is completely determined by
the parameters $\{{f}_d\}$, as we have $f_{d,q-p-1} = \frac{{f}_d}{q-p-1}$ and $f_{d,k} = 0$ for $k \neq 0, q-p-1$.
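A small Python sketch (our illustration) builds both distribution types from a target mean $f_d$, using $k_{\max} = q-p-1$ as the maximum number of nontrivial extended bits; both recover the mean $f_d$ and sum to one:

```python
import math

def spreading_fractions(f_d, q, p):
    """Spreading-type {f_{d,k}}: split the mass between floor(f_d) and
    ceil(f_d) so that the mean equals f_d."""
    lo, hi = math.floor(f_d), math.ceil(f_d)
    frac = {k: 0.0 for k in range(q - p)}
    if lo == hi:                          # f_d is an integer
        frac[lo] = 1.0
    else:
        frac[lo], frac[hi] = hi - f_d, f_d - lo
    return frac

def clustering_fractions(f_d, q, p):
    """Clustering-type {f_{d,k}}: a fraction f_d / k_max of the degree-d
    symbols gets k_max = q - p - 1 extended bits, the rest gets none."""
    k_max = q - p - 1
    frac = {k: 0.0 for k in range(q - p)}
    frac[k_max] = f_d / k_max
    frac[0] = 1.0 - frac[k_max]
    return frac

q, p, f_d = 16, 4, 2.3
for frac in (spreading_fractions(f_d, q, p), clustering_fractions(f_d, q, p)):
    assert abs(sum(frac.values()) - 1.0) < 1e-12
    assert abs(sum(k * v for k, v in frac.items()) - f_d) < 1e-12
```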
Now, let us consider the ensemble of semi-regular LDPC codes over $\f_{16}$ with edge-perspective degree distribution
polynomials $\lambda(x) =0.5x+0.5x^4$ and $\rho(x) =0.25x^4+0.75x^5$. The mother coding rate is $r=0.5$, and we intend
to extend symbol-nodes such as to achieve extended coding rates $r_e \in \{0.45, 0.4, 0.35, 0.3\}$. Several extending
distributions are compared in Figure \ref{fig:ext_distr_irreg_expl}. There are three spreading-type distributions,
which spread the extended bits over all the symbol-nodes, or only over the symbol-nodes of degree either $2$ or $5$,
and two clustering-type distributions, which cluster the extended bits over the symbol-nodes of degree either $2$ or
$5$. In all cases, extended bits (or, equivalently, extending matrices $A_n$) are chosen such as to maximize the
corresponding $\dmin$ values. We observe that the smallest gap to capacity is obtained for extending distributions that
spread extended bits either over the degree-$5$ symbol nodes only ($r_e = 0.45,0.4$), or over all the symbol-nodes
($r_e = 0.35,0.3$).
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{spreading_clustering_irregular}
\caption{Comparison of several extending distributions for semi-regular NB-LDPC codes over $\f_{16}$}
\label{fig:ext_distr_irreg_expl}
\vspace{-4mm}
\end{figure}
\subsection{Extending distribution optimization} \label{subsec:strategy}
Based on the above analysis, we only consider spreading-type extending distributions. Such an extending distribution is
completely determined by the parameters $\{{f}_d\}$, and the extended coding rate can be computed by $r_e = \frac{r}{1+
\frac{1}{p}\sum_{d} \Lambda_d {f}_d}$, where $\Lambda_d$ is the fraction of symbol-nodes of degree $d$.
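As a numerical cross-check (Python sketch, our own illustration), this formula reproduces the target rates of the optimized distributions reported in Table~\ref{tab:opt_ext_distr}, with $\Lambda_d$ derived from the mother code's edge-perspective polynomial $\lambda(x)$ given in the next section:

```python
def node_fractions(edge_coeffs):
    """Lambda_d = lambda_d / (d * int_0^1 lambda(x) dx)."""
    integral = sum(ld / d for d, ld in edge_coeffs.items())
    return {d: ld / (d * integral) for d, ld in edge_coeffs.items()}

def extended_rate_irregular(r, p, Lambda, f_d):
    """r_e = r / (1 + (1/p) * sum_d Lambda_d * f_d)."""
    return r / (1 + sum(Lambda[d] * f_d[d] for d in f_d) / p)

# Mother code (r = 1/2 over GF(16), p = 4) with
# lambda(x) = 0.596 x + 0.186 x^4 + 0.071 x^7 + 0.147 x^17
Lambda = node_fractions({2: 0.596, 5: 0.186, 8: 0.071, 18: 0.147})
assert abs(sum(Lambda.values()) - 1.0) < 1e-3

# Optimized {f_d} for the target r_e = 0.45 (the corresponding column of
# the table of optimized extending distributions)
f_d = {2: 0.4610, 5: 0.3731, 8: 0.2487, 18: 0.1309}
r_e = extended_rate_irregular(0.5, 4, Lambda, f_d)
assert abs(r_e - 0.45) < 1e-3       # computed value is about 0.4506
```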
For given degree distribution polynomials $\lambda$ and $\rho$, and a given extending rate $r_e$, we use the
differential evolution algorithm \cite{DiffEvol} to search for parameters $\{{f}_d\}$ that minimize the asymptotic gap
to capacity. We assume that, for each symbol-node, the extended bits are chosen such as to maximize the corresponding
$\dmin$. The optimized extended codes are presented in the next section.
\section{Code Design and Performance}\label{sec:code_design_performance}
In this section we present optimized extending distributions for an irregular mother code over $\f_{16}$. The mother
code has coding rate $r = 1/2$, and it has been optimized by density evolution. The asymptotic threshold is $\pth =
0.4945$, and the edge-perspective degree distribution polynomials are:
$$ \begin{array}{l}
\lambda(x) = 0.596 x + 0.186 x^4 + 0.071 x^7 + 0.147x^{17} \\
\rho(x) = 0.2836 x^4 + 0.7164 x^5
\end{array}$$
We optimized extending distributions for extended rates $r_e\in\{0.45, 0.40, 0.35, 0.30, 0.25, 0.20\}$. Optimized
distributions $\{{f}_{d}\}$ are shown in Table \ref{tab:opt_ext_distr}, together with the corresponding asymptotic
threshold $\pth$ and normalized gap to capacity $\delta$. For comparison purposes, we have also indicated the
normalized gap to capacity $\delta_{\text{rand}}$ corresponding to a random choice of extended bits. The last column
corresponds to extended rate $r_e = 2/15$, obtained by extending each symbol-node by the maximum number of extended
bits, {\em i.e.} $q-p-1 = 11$ bits. It can be observed that the optimized distributions make it possible to maintain an almost
constant value of $\delta\approx 0.01$, for all extended rates $0.45 \geq r_e \geq 2/15$.
\begin{table}[!t]
\centering \caption{Optimized extending distributions for a mother NB-LDPC code with $r=0.5$ over $\f_{16}$, for $r_e =
\left\{0.45\right.$, 0.4, 0.35, 0.3, 0.25, $\left.0.2\right\}$} \label{tab:opt_ext_distr}
\begin{tabular}{|@{\ }l@{\ }|*{8}{@{\ }c@{\ }|}}
\hline
\textbf{$r_e$} & \textbf{0.5} & \textbf{0.45} & \textbf{0.4} & \textbf{0.35} & \textbf{0.3} & \textbf{0.25} & \textbf{0.2} & \textbf{2/15} \\
\hline
\hline
$f_2$ & 0 & 0.4610 & 1.0164 & 1.7851 & 2.7442 & 4.1290 & 6.1737 & 11 \\
\hline
$f_5$ & 0 & 0.3731 & 1.2113 & 1.2981 & 2.5055 & 3.5864 & 5.3409 & 11 \\
\hline
$f_8$ & 0 & 0.2487 & 0.0359 & 1.8748 & 1.6831 & 2.3393 & 4.7494 & 11 \\
\hline
$f_{18}$ & 0 & 0.1309 & 0.4871 & 0.8511 & 1.6415 & 2.9800 & 4.0234 & 11 \\
\hline
\hline
$\pth$ & 0.4945 & 0.544 & 0.5939 & 0.6406 & 0.69 & 0.74 & 0.7872 & 0.8543 \\
\hline
$\delta$ & 0.011 & 0.0109 & 0.0102 & 0.0145 & 0.0143 & 0.0133 & 0.016 & 0.0143 \\
\hline
\hline
$\delta_{\text{rand}}$ & 0.011 & 0.0234 & 0.0284 & 0.0266 & 0.0251 & 0.0213 & 0.0172 & 0.0143 \\
\hline
\end{tabular}
\vspace{-2mm}
\end{table}
Finally, Figure \ref{fig:flp_opt_ext_distr_irreg} presents the Bit Erasure Rate (BER) performance of optimized
extending distributions for finite code lengths. All the codes have binary dimension (number of source bits) equal to
5000 bits (1250 $\f_{16}$-symbols). The mother code with rate $1/2$ has been constructed by using the Progressive Edge
Growth (PEG) algorithm \cite{PEG}, and symbol nodes have been extended according to the optimized distributions
(extension matrices $A_n$ being chosen such as to maximize $\dmin(A_n)$).
\section{Conclusions}\label{sec:conclusions}
Based on the extended binary image of NB-LDPC codes, we presented a coding technique that produces extra
redundant bits, so as to decrease the coding rate of a mother code. The proposed method allows for using the same
decoder as for the mother code: extra redundant bits transmitted over the channel are only used to ``improve the
quality of the decoder input''.
Extending distributions for regular and irregular codes have been analyzed by using simulated density evolution
thresholds of extended codes over the BEC. We have also presented optimized extending distributions, which exhibit a
normalized gap to capacity $\delta\approx 0.01$, for extended rates from $0.45$ down to $2/15$.
Finally, although this paper dealt only with NB-LDPC codes over the BEC, the results presented here can be easily
generalized to different channel models.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{finite_length_deopt}
\vspace{-2mm}
\caption{Finite length performance of optimized extended NB-LDPC codes}
\label{fig:flp_opt_ext_distr_irreg}
\vspace{-3mm}
\end{figure}
\bibliographystyle{IEEEbib}
\section{Introduction}
Dark matter (DM) remains one of the most important mysteries in modern cosmology. One of the hypotheses for the nature of DM is primordial black holes (PBHs), formed in the early Universe from large-amplitude primordial fluctuations \citep{1971MNRAS.152...75H, 1974MNRAS.168..399C, 1975Natur.253..251C, 1975ApJ...201....1C}. The abundance of PBHs is limited by several observational constraints;
reviews over the entire PBH mass range can be found, for example, in \citet{2018CQGra..35f3001S,2021RPPh...84k6902C,2021arXiv211002821C}. While their abundance is severely limited below a mass of $\sim 10^{-16.5}\, {\rm M_{\odot}}$ by the contribution from Hawking evaporation to the $\gamma$-ray background and other impacts on the Cosmic Microwave Background \citep{2020PhRvD.101l3514L}, PBHs might still constitute all of the DM in the Universe in the asteroid-mass range from $10^{-16} \, {\rm M_{\odot}}$ to $10^{-11}\, {\rm M_{\odot}}$.
Despite several early claims for closing this mass window, follow-up studies raised objections to their validity \citep{2019JCAP...08..031M,2020PhRvD.101f3005S}. In addition, PBHs might
account for a substantial fraction, but not all, of the DM at higher masses while avoiding gravitational microlensing and other constraints \citep{2000ApJ...542..281A,2019NatAs...3..524N, 2020PhRvD.101f3005S}.
Alternative ways to find observational consequences of PBHs in this mass range are therefore of strong interest. A novel approach was proposed in \citet{2009arXiv0901.1093R}, where the idea that these asteroid-mass PBHs might randomly traverse through a star and be captured in the stellar core was presented. If the DM contains PBHs, these would be present in the cool molecular gas clouds where stars form and, if the relative velocities are low enough, they would adiabatically follow the gas contraction during the formation of a protostar, resulting in PBHs bound to the newly born star. A portion of the PBHs in highly eccentric orbits would lose energy via dynamical friction when crossing through the stellar interior and fall to the stellar core. Once settled in the star centre, a PBH would start accreting, growing up to a total mass that can be close to the total mass of the star.
This suggests the possibility to form a black hole with a typical mass of a star but below the Chandrasekhar mass, which cannot be explained in ordinary stellar evolution. Possible origins of these transmuted black holes, as they have been named, include also particle DM \citep{2018PhRvL.121v1102K, 2021PhRvL.126n1105D} and PBH capture by neutron stars \citep{2020PhRvD.102h3004G}.
If a black hole with mass below the Chandrasekhar limit were discovered, it would strongly point to an origin in a PBH or some other process involving DM interaction with stars. However, \citet{2009ApJ...705..659A} questioned whether this process could occur in the Milky Way, finding the rate of PBH capture in normal stars to be negligible for the present low DM density and high velocity dispersion.
The capture of PBH was also studied by \citet{2009MNRAS.399.1347B}, focusing on massive stars as the possible origin of supermassive black holes at high redshift. The higher density and lower velocity dispersion of DM halos in the early universe leads to more common PBH capture in the first metal-free stars, although these would not leave unique observational signatures, their mass being similar to other stellar black holes formed at the end of the lives of the same stars.
PBH capture in the present Universe was reexamined by \citet{2013PhRvD..87b3507C}, considering star formation in globular clusters formed in dense DM halos made of PBHs in the asteroid-mass range. The impact of eccentric orbits to the capture rate was included in \citet{2014PhRvD..90h3507C}, who found an enhanced capture rate implying that no neutron stars would form because all their progenitors would have captured a PBH that would accrete the star before or during the formation of the neutron star.
This work, however, considered globular clusters with a very high DM and baryonic density, whereas in fact globular clusters may form from gas cloud fragmentation without involving any DM, so their constraints are not readily applicable \citep{2019JCAP...08..031M}.
In this paper we seek to expand on previous work by studying the effect of PBH capture on the evolution and fate of low-mass stars at high redshift, using detailed stellar models with masses from $0.32\, {\rm M_{\odot}}$ to $1\, {\rm M_{\odot}}$ and very low metallicity (as expected for the first stars). We consider the first stars formed in the Universe at $z \sim 20$, when the DM density is highest and the velocity dispersion in the star-forming DM halos is lowest, thereby maximising the capture rate of PBHs by stars. We focus on low-mass stars because they are uniquely able to produce black holes of stellar mass below the Chandrasekhar value, giving a clear signature that cannot be explained by standard stellar evolution theory. Although the first stars to form from metal-free gas are expected to have a top-heavy initial mass function, stars of lower mass should form almost immediately thereafter from gas polluted by the first few supernovae, so they were probably abundantly made in a way similar to present-day galaxies.
The methods of our calculation are described in section \ref{sec:Methods}. In section \ref{sec:Results} we present our results for the PBH capture rates for stellar models of various stellar masses. Finally, our conclusions are discussed in section \ref{sec:Conclusions}. We assume a flat cosmology with $\Omega_m = 0.3$ and $H_0= 70 \, {\rm km\, s^{-1}\, Mpc^{-1}}$.
\section{Capture of Black Holes by a Star: Methods}\label{sec:Methods}
Our aim in this section is to calculate the rate at which PBHs, assumed to account for the DM in the halo where a star forms, are captured by randomly traversing the star and being slowed down by dynamical friction.
\subsection{Dark Matter density profile around first stars}\label{sec:rhoPBH}
According to the standard Cold Dark Matter model, the first stars
should form in the first halos where the gravitationally collapsed gas
that is shock-heated to the halo virial temperature is able to
radiatively cool. For gas of primordial composition, this happens
first when trace amounts of molecular hydrogen formed from the remnant
ionization that is left over from the recombination epoch induce a
cooling rate that is higher than the characteristic inverse time between
successive halo mergers, at $z\simeq 20$ in halos of $M\sim 10^6 \, {\rm M_{\odot}}$
\citep[e.g.][]{1984Natur.311..517B, 1997ApJ...474....1T}.
Analytic studies and numerical simulations initially suggested that
stars formed in these halos from primordial gas are all of high mass
\citep[][]{2002Sci...295...93A, 2014ApJ...781...60H}. Newer results, however, have shown that strong fragmentation is possible, resulting in zero-metallicity low-mass star formation \citep{2002ApJ...569..549N,2011Sci...331.1040C,2015MNRAS.447.3892H, 2014ApJ...792...32S} and even binary systems \citep{2018MNRAS.479..667R}. Even ignoring this and assuming low-mass stars do not form at zero metallicity, they
should start forming soon thereafter. In this paper we are interested
in low-mass stars, which can survive to the present time either as red
dwarfs in the main-sequence or as white dwarfs, but may become a low-mass black hole if they have captured a PBH from the surrounding DM.
We consider that these low-mass stars acquired a quantity of bound DM at
birth owing to adiabatic contraction during the collapse of the gas
cloud that formed the star \citep{2013PhRvD..87b3507C}, which thereafter remains
bound to the star for an arbitrarily long time. In other words, we
assume that the original DM that is left bound to the star is not
removed by tidal disruption at a later time; we will discuss this further when considering the impact of external tides.
We take as fiducial values of the mass and formation time of the halo
that hosts the first generation of low-mass stars $M=10^7\, {\rm M_{\odot}}$ and
$z=20$, to be conservative on the minimum halo mass where low-mass stars
start forming. The mass of DM left bound to the star
within a given radius depends only on the phase-space density of DM in the formation site. We define the virial halo density
$\rho_v$ and virial radius $R_v$ as
\begin{align}\label{eq:densh}
\rho_v &= \rho_c \Delta_c = 18\pi^2 \rho_c \simeq 0.22
\left( \frac{1+z}{21} \right)^3 \, {\rm M_{\odot}}\,{\rm pc}^{-3} ~, \\
R_v &= \left( \frac{3M}{4\pi \rho_v} \right)^{1/3} \simeq
220\, \left( \frac{M}{10^7 \, {\rm M_{\odot}}} \right)^{1/3} \frac{21}{1+z} \,{\rm pc} ~,
\label{eq:rho}
\end{align}
where $\rho_c=3H^2/(8\pi G)$ is the critical density of the Universe
and we use the critical overdensity of the top-hat spherical model at
virialization, $\Delta_c=18\pi^2$. The implied halo velocity dispersion
depends of course on the assumed density profile, but is approximately
\begin{equation}\label{eq:sigmah}
\sigma_v \simeq \left( \frac{GM}{2R_v}\right)^{1/2} \simeq 9.9
\left( \frac{M}{10^7 \, {\rm M_{\odot}}} \right)^{1/3}\,
\left( \frac{1+z}{21} \right)^{1/2} \,{\rm km}\,{\rm s}^{-1} ~.
\end{equation}
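These fiducial numbers can be reproduced with a few lines (illustrative Python sketch; the constant $G$ in pc, km/s and ${\rm M_{\odot}}$ units is an assumed input, as are the fiducial halo values from the text):

```python
import math

# G in units of pc (km/s)^2 / M_sun (assumed value)
G = 4.301e-3

M = 1e7       # halo mass in M_sun
rho_v = 0.22  # virial density at z = 20 in M_sun / pc^3

# Virial radius from M = (4*pi/3) * rho_v * R_v^3
R_v = (3 * M / (4 * math.pi * rho_v)) ** (1 / 3)

# Halo velocity dispersion, sigma_v ~ sqrt(G M / (2 R_v))
sigma_v = math.sqrt(G * M / (2 * R_v))

print(f"R_v = {R_v:.0f} pc")            # ~220 pc
print(f"sigma_v = {sigma_v:.1f} km/s")  # ~9.9 km/s
```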
The phase-space density of DM around the star depends on its distance
from the halo center at formation time, increasing rapidly towards the
center. We assume the halo has a standard Navarro-Frenk-White (NFW,
\citet{1997ApJ...490..493N}) density profile, as found in numerical
simulations of structure formation. The profile is characterized by a
scale radius $R_s=R_v/c$, where $c$ is the concentration parameter:
\begin{align}
\rho_h (R) = \frac{\rho_{0}}{(R/R_s) \left(1+R/R_s \right)^2} ~,
\label{eq:rho_vir}
\end{align}
where $R$ is the radial distance to the center of the halo and $\rho_0$ is a normalization constant that we determine by
requiring the total mass within $R_v$ to be $M$. We use the value
$c=10$ throughout this paper, in agreement with simulations of the
early halo formation \citep{2001MNRAS.321..155L}.
The detailed phase-space density of the DM at a given radius $R$
depends on the velocity as determined by the condition of dynamical
equilibrium. For simplicity, we assume a Gaussian velocity distribution
to compute the phase-space number density
for small velocities,
\begin{equation}
Q_g= \frac{\rho_h(R)}{m_b \left[2\pi\sigma_h^2(R)\right]^{3/2} } ~,
\end{equation}
where $\sigma_h(R)$ is the DM one-dimensional velocity dispersion in
the halo as a function of radius, assuming an isotropic model, and $m_b$ is the mass of each PBH. The
DM phase-space density bound to the star is assumed to have a constant
value $Q=f_s Q_g$, where $f_s$ is a dimensionless constant of
order unity that depends on the velocity of the star, and is used also to absorb the difference between the detailed velocity distribution of the NFW profile at a specific radius $R$ and a Gaussian distribution. The maximum amount of bound
DM will be acquired when the star is at rest relative to the mean
surrounding DM, but typically the star is moving at an
rms velocity $\sim \sqrt{3}\sigma_h$ and will be acquiring bound DM at the
phase-space density near the star velocity.
The density profile of bound DM, $\rho_{bd}(r)$, after the star of mass
$M_*$ is formed, at a distance $r$ from the star, is determined by an
approximately constant DM phase-space density up to a maximum of the escape velocity
relative to the star $v=(2GM_*/r)^{1/2}$, and is therefore given by
\begin{equation}
\label{eq:rbd}
\rho_{bd}= Q m_b \int_0^{\sqrt{2GM_*/r}} 4\pi v^2\, dv =
\frac{4f_s}{3 \sqrt{\pi} }\, \frac{\rho_h(R)}{\sigma_h(R)^3}\,
\left(\frac{GM_*}{r} \right)^{3/2} ~.
\end{equation}
Therefore the DM that is left bound around the star has a density
profile proportional to $r^{-3/2}$, with a normalization that is
determined by the density and velocity dispersion of the dark matter
halo at the formation site of the star, and the dimensionless factor of
order unity $f_s$. In this paper we will consider only the capture of
this bound DM, and neglect any unbound DM that may also be captured
by the star when traversing it, if the incoming velocity is low enough
to allow capture after a single passage through the star. From \citet{2009ApJ...705..659A, 2013PhRvD..87b3507C} we expect the total capture rate should generally be dominated by this
originally bound DM, especially when taking into account the effects of likely
external perturbations on the DM orbits around the star that can
randomly change orbital eccentricities.
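As a sanity check on the numerical prefactor $4/(3\sqrt{\pi})$ in equation (\ref{eq:rbd}), combining the phase-space normalization $(2\pi\sigma_h^2)^{-3/2}$ with the velocity integral $\int_0^{v_{\rm esc}} 4\pi v^2\,dv$ gives (illustrative sketch):

```python
import math

# Prefactor of f_s * rho_h * sigma_h^{-3} * (G M_*/r)^{3/2}:
# (4*pi/3) * vesc^3 with vesc^3 = 2^{3/2} (G M_*/r)^{3/2},
# divided by the Gaussian normalization (2*pi)^{3/2}.
coeff = (4 * math.pi / 3) * 2 ** 1.5 / (2 * math.pi) ** 1.5
expected = 4 / (3 * math.sqrt(math.pi))
print(coeff, expected)  # both ~0.7523
```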
We compute the DM phase-space density in our model using $\sigma_h(R)$ from equation (13) of \citet{2001MNRAS.321..155L}, for the isotropic case. For reference, we show in Figure \ref{fig:sigma_halo} the computed one-dimensional velocity dispersion $\sigma_h(R)$, for our fiducial model of a halo mass $M=10^7\, {\rm M_{\odot}}$ at $z=20$.
\begin{figure}
\includegraphics[width=\columnwidth]{Pictures/Sigma_halo.pdf}
\caption{One-dimensional velocity dispersion of an isotropic NFW halo of mass $M=10^7 \, {\rm M_{\odot}}$ at $z=20$,
used to compute the DM phase-space density around a star formed at a radius $R$ in the halo according to equation (5).}
\label{fig:sigma_halo}
\end{figure}
\subsection{Black Hole capture: dynamical friction inside the star}\label{sec:PBH_capt}
Once the PBH is in a bound orbit around the star, the mechanism to dissipate its orbital energy and gradually fall to the core of the star is dynamical friction when it traverses the stellar interior. This requires the orbital eccentricity to be close enough to one for the periastron to be in the stellar interior.
To compute the dynamical friction effect accurately, a stellar model for the interior density, temperature, and sound speed profiles must be used, which will be described below in section \ref{sec:Stellar}.
The orbit external to the star is purely Keplerian, characterized by the specific energy $u$ and angular momentum $\ell$,
\begin{align}
\ell =
\sqrt{GM_{\ast} a \left(1-e^2\right)} ~,
\label{eq:mom}
\end{align}
\begin{align}
u = \frac{-G M_{\ast}}{2a} = \frac{1}{2}v^2 - \frac{GM_*}{r} \;,
\label{eq:ener}
\end{align}
where $e$ is the eccentricity and $a$ the semimajor axis.
To compute the PBH capture rate, we need to calculate the time required for capture as a function of the semimajor axis and orbital eccentricity. This is determined by the amount of energy the PBH loses per stellar crossing. As long as the PBH periastron is within the stellar radius, the energy dissipation process continues with a decreasing semimajor axis and orbital period, and more frequent crossings that result in faster energy dissipation.
Therefore, to compute the time required for capture we do not need to worry about the detailed evolution of the energy and angular momentum of the orbit as the PBH is gradually slowed down through a large number of stellar crossings, because the semimajor axis remains much larger than the periastron, implying a negligible change in the part of the orbit in the stellar interior,
over most of the capture process time. In fact, we can simply compute the loss of energy over one stellar crossing by integrating over the unperturbed orbit of the PBH through the interior of the star, because the change in orbital energy is very small in a single stellar crossing compared to the kinetic energy of the PBH in the stellar interior.
We also approximate dynamical friction to depend only on the local plasma density and sound speed in the PBH vicinity (this is equivalent to neglecting the contribution to the Coulomb logarithm from the largest distances, comparable to the stellar radius, where these thermodynamic variables have substantial variation). Then, the dynamical friction acceleration $a_{df}$ is always in the direction opposite to the orbital velocity $v$, and the rate of energy loss is $du= - a_{df}\, v\, dt$, so the loss of specific orbital energy per stellar crossing is given by
\begin{equation}
\Delta u= - \int_{t_1}^{t_2} a_{df}\, v \, d t = - 2\int_q^{R_{\ast}} a_{df} \, v \frac{d t}{d r}\, dr ~,
\end{equation}
where the first integral is from the time $t_1$ when the PBH enters the star to the time $t_2$ when it exits it. In the second integral, we change variables to the radial coordinate $r$ and use the fact that the integral contains two symmetric parts, from the stellar radius $R_*$ to the periastron $q$ and vice versa.
As explained before, we compute this integral for the unperturbed orbit of the PBH moving in the gravitational field of the star without the dynamical friction, because the modification of the orbit in a single passage is very small. The radial time derivative is related to the conserved specific angular momentum as $dr/dt=v[1-(\ell/rv)^2]^{1/2}$, so the above integral becomes
\begin{equation}\label{eq:Du}
\Delta u = - 2\int_q^{R_{\ast}} \frac{a_{df}}{\sqrt{1-(\ell/rv)^2}}\, dr ~.
\end{equation}
The acceleration caused by dynamical friction is the quantity that depends on the stellar interior model. A first approximation is
Chandrasekhar's formula \citep[][]{1949RvMP...21..383C}, but this is valid only for collisionless matter, which does not apply to the plasma in stellar interiors. Instead, the appropriate treatment is that of a collisional fluid, as presented by \citet[][]{1999ApJ...513..252O}. This fluid friction is close to the collisionless formula for Mach numbers $M_a=v/c_s > 2$, where $c_s$ is the sound speed, but is substantially larger for $1<M_a<2$, a common regime during the capture process because the free-fall speed of the PBH exceeds the sound speed in the stellar interior only by a small factor (for example, the escape velocity at the surface of the Sun is $v\approx 615\, {\rm km/s}$, compared to a typical sound speed $c_s \sim 350 \, {\rm km/s}$; \citealt{2001ApJ...555..990B}). For the collisional case, we use a simplified analytic formula by \citet{2016A&A...589A..10T}, obtained from numerical simulations.
In general, the dynamical friction can be written as
\begin{align}\label{eq:adf}
a_{df} = \frac{4\pi G^2 \rho_s m_b}{v^2} \, I(v,\Lambda) ~,
\end{align}
where $\rho_s$ is the density of the star, $m_b$ is the PBH mass, and the dimensionless function $I(v,\Lambda)$ contains the detailed physical dependence on the velocity dispersion and Coulomb logarithm for the collisionless or collisional cases. The Chandrasekhar formula is written as
\begin{align}
I_C(v, \Lambda) = \left[ \, {\rm erf} \left( \frac{v}{\sqrt{2}\sigma_s}\right) -\frac{2 v}{\sqrt{2\pi}\sigma_s}\, \exp\left(-\frac{v^2}{2\sigma_s^2} \right) \right]\, \ln\Lambda ~,
\label{eq:Chandra}
\end{align}
while for the fluid case, the equation from \citet{2016A&A...589A..10T} is
\begin{align}
I_T(v, \Lambda) = \ln \left[ 2\Lambda \left(1-1/M_a^2 \right) \right] ~,
\label{eq:Thun}
\end{align}
where erf is the error function, $\sigma_s = (3kT/\mu)^{1/2}$ is the three-dimensional velocity dispersion for the plasma, and $\Lambda = R_{\rm max}/R_{\rm min}$ is the Coulomb logarithm. The lengths $R_{\rm max}$ and $R_{\rm min}$ are the usual maximum and minimum impact parameters for an effective gravitational interaction to produce dynamical friction, while $T$ is the temperature, $k$ the Boltzmann constant, and $\mu$ the mean particle mass. Equation (\ref{eq:Thun}) is valid for $M_a>1$, which is generally correct for an object moving near the escape speed in the stellar interior. In addition, the factor of $2$ in this equation is a numerical result obtained for an adiabatic index of $5/3$ for the plasma, the value for a monatomic gas that we assume here.
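For illustration, the two friction factors can be compared at a representative point of the capture regime; the values $c_s=350\,{\rm km/s}$, the ratio $\sigma_s/c_s=\sqrt{9/5}$ for a monatomic ideal gas, and $\Lambda=10^{12}$ used below are assumptions for this sketch, not model outputs:

```python
import math

def I_chandrasekhar(v, sigma_s, ln_lambda):
    # Collisionless (Chandrasekhar) friction factor
    x = v / (math.sqrt(2.0) * sigma_s)
    return (math.erf(x) - 2.0 * x / math.sqrt(math.pi) * math.exp(-x * x)) * ln_lambda

def I_fluid(mach, lam):
    # Collisional (fluid) friction factor; valid for mach > 1
    return math.log(2.0 * lam * (1.0 - 1.0 / mach ** 2))

c_s = 350.0                            # sound speed, km/s (assumed)
sigma_s = math.sqrt(9.0 / 5.0) * c_s   # 3D dispersion, monatomic ideal gas
lam = 1e12                             # Lambda ~ M_*/m_b (assumed)
mach = 1.5
v = mach * c_s

print(I_chandrasekhar(v, sigma_s, math.log(lam)))  # ~7
print(I_fluid(mach, lam))                          # ~28
```

The fluid friction exceeds the collisionless value by a factor of a few in the $1<M_a<2$ regime, as stated in the text.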
We use the radius of the star for $R_{\rm max}$, and $R_{\rm min}=2Gm_b/v^2$. Below $R_{\rm min}$, the deflection of gas by the PBH gravity is much less effective at slowing it down; in particular, any accretion onto the PBH is negligible as far as the rate of slowing down its velocity is concerned.
\subsection{Number of Captured Black Holes}\label{sec:PBH_num}
If a PBH follows the trajectory determined only by the gravity of the star and the dynamical friction when it traverses the stellar interior calculated in the previous subsection, it will slow down gradually and reduce its semimajor axis until the orbit moves entirely to the interior of the star and the PBH settles on the stellar core. For the PBH to complete the process of orbital energy loss during the present age of the Universe, the initial orbital period has to be short enough so that the energy loss at each passage can add up roughly to the initial orbital energy.
As in previous work \citep[e.g.,][]{2013PhRvD..87b3507C},
we assume the orbital energy loss at each crossing of the stellar interior, $| \Delta u|$, is small compared to $| u|$, so we can integrate the evolution of the orbital energy with time $t$ with the simple equation
\begin{equation}
du = \Delta u \, \frac{dt}{P} = \Delta u \frac{dt}{P_0} \left( \frac{u}{u_0} \right)^{3/2} ~,
\end{equation}
where $u_0$ is the initial orbital energy and $P_0=\pi GM_* (|u_0|)^{-1}\, (2|u_0|)^{-1/2}$ is the initial orbital period. Solving this equation, we find the time $t_c$ required to capture the PBH to the stellar interior (i.e., to reduce the semimajor axis to a value much smaller than the initial one) is
\begin{equation}\label{eq:tc}
t_c = P_0\, \frac{2u_0}{\Delta u} ~.
\end{equation}
Note that as long as the semimajor axis remains much larger than the stellar radius, the trajectory of the PBH through the stellar interior at each crossing remains nearly the same, so $\Delta u$ stays almost constant and this simple solution is a good estimate. The detailed orbital evolution during the late stages of the capture, when the semimajor axis becomes comparable to the stellar radius, does not matter because the orbital period is then very short and the capture is completed quickly. Most of the time required for capture is spent at large semimajor axis, close to its initial value. In our model we still compute the time and energy loss for each orbit, allowing $\Delta u$ to vary until the orbit lies wholly within the star, but the analytic estimate gives results very close to the ones we found.
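The accuracy of the analytic estimate in equation (\ref{eq:tc}) can be illustrated with a toy sum over discrete crossings, each removing a fixed $|\Delta u|$ (arbitrary units; Python sketch):

```python
import numpy as np

# Sum the orbital periods over successive crossings, each removing a fixed
# energy du, and compare with the analytic t_c = 2 P0 |u0| / |du|.
u0, du, P0 = 1.0, 1e-3, 1.0            # |u0|, |Delta u|, initial period
u = np.arange(u0, 1000 * u0, du)       # |u| after each crossing
t_num = np.sum(P0 * (u0 / u) ** 1.5)   # period P = P0 (|u0|/|u|)^{3/2}
t_analytic = 2 * P0 * u0 / du
print(t_num / t_analytic)  # close to 1; the deficit comes from |u| > 1000 u0
```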
For each initial orbital energy $u_0$, there is a critical value of the eccentricity $e_c(u_0)$ that makes the capture time $t_c$ equal to the present age of the Universe. The condition for the PBH to be captured (in the absence of other orbital perturbations, which we will discuss below) is then that for a fixed energy the orbital eccentricity is larger than this critical value. We assume the distribution of eccentricities follows the thermal distribution that is implied when the phase-space density is constant, so the probability for the eccentricity to be above $e_c$ is $1-e_c^2$.
We can now calculate the total number of PBH that will be captured by the star over a time $t_c$. At a fixed radius $r$ from the star, the number density of PBH with phase-space density $Q$ that will be found with an orbital semimajor axis $a$, and therefore orbital energy $u=-GM_*/(2a)$ and velocity $v=(2GM_*/r-GM_*/a)^{1/2}$, is
\begin{equation}
n_b(a)\, da= 4\pi Q v^2\, dv = {2\pi Q} \,(GM_*)^{3/2}\,
\left(\frac{2}{r} - \frac{1}{a} \right)^{1/2}\, \frac{da}{a^2} ~,
\end{equation}
where we have used $v\, dv= GM_*\, da/(2a^2)$ at fixed $r$. Integrating over the volume, from $r=0$ to $r=2a$ and replacing $x=r/a$, the total number of PBH with semimajor axis $a$ is found to be
\begin{equation}
N_b(a)\, da= {8\pi^2 Q}\, (GM_*)^{3/2}\, a^{1/2}\, da\, \int_0^2 dx\, x^2\,
\sqrt{\frac{2}{x}-1} ~,
\end{equation}
which yields
\begin{equation}
N_b(a)\, da= {4\pi^3 Q}\, (GM_*)^{3/2}\, a^{1/2}\, da ~.
\end{equation}
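The step above relies on $\int_0^2 dx\, x^2\sqrt{2/x-1}=\pi/2$, which can be confirmed numerically (illustrative midpoint-rule sketch):

```python
import numpy as np

# Midpoint-rule check that the integral over x = r/a equals pi/2.
# Note x^2 * sqrt(2/x - 1) = x^{3/2} * sqrt(2 - x).
N = 1_000_000
x = (np.arange(N) + 0.5) * (2 / N)     # midpoints on (0, 2)
integral = np.sum(x ** 1.5 * np.sqrt(2 - x)) * (2 / N)
print(integral, np.pi / 2)  # both ~1.5708
```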
Finally, the total number of captured PBH is expressed as
\begin{equation}\label{eq:ncap}
N_c = \frac{\sqrt{2\pi^3}\, f_s \rho_h}{m_b\sigma_h^3} (GM_*)^{3/2} \int_0^{a_m} \left[1-e_c^2(a)\right] \sqrt{a}\, da ~,
\end{equation}
where $a_m$ is the maximum semimajor axis allowing capture within the age of the universe, at which $e_c(a_m)=1$, and we have used equation (\ref{eq:rbd}) to express the phase-space density in terms of the DM density and velocity dispersion around the star at its formation time.
The critical eccentricity is related to a critical extrapolated periastron, $q_c(a)=a[1-e_c(a)]$, where the true periastron is larger than $q_c(a)$ because of the reduced gravitational potential in the stellar interior compared to the Kepler one, owing to the extended mass distribution. Typically, $q_c(a)$ is of order the radius of the stellar core, where the stellar density is close to the maximum, at the values of $a$ close to $a_m$ that dominate the contribution to the integral in equation (\ref{eq:ncap}). We define the effective mean capture periastron, $\bar q_c$, as
\begin{equation}\label{eq:barqc}
\bar q_c\, 2\sqrt{a_m} = \int_0^{a_m} q_c(a)\,
\frac{da}{\sqrt{a}} ~.
\end{equation}
The captured number of black holes is then, approximating $1-e_c^2\simeq 2(1-e_c)$,
\begin{equation}
N_c= \frac{2(2\pi GM_*)^{3/2} f_s \rho_h}{m_b \sigma_h^3}\, \bar q_c \sqrt{a_m} ~.
\label{eq:Capture_rate}
\end{equation}
It is also useful to express this in terms of fiducial values,
\begin{equation}
N_c= \frac{f_s \rho_h/m_b}{10^{14} {\rm pc}^{-3} }\,
\left( \frac{M_{\ast}}{\, {\rm M_{\odot}}} \right)^{3/2} \left(\frac{10\, {\rm km/s}}{ \sigma_h}\right)^3 \frac{\bar q_c}{0.05 R_{\odot}} \sqrt{ \frac{a_m}{\rm pc}} ~.
\label{eq:generalized_capture_rate}
\end{equation}
From equations (\ref{eq:densh}), (\ref{eq:sigmah}) and (\ref{eq:rho_vir}), we find that for a star formed at radius $R\sim 0.1 R_v$, a typical density $\rho_h\sim 10 \, {\rm M_{\odot}}\,{\rm pc}^{-3}$ is expected. Hence, for PBHs of mass $m_b=10^{-12}\, {\rm M_{\odot}}$ that make up the DM and follow a thermal eccentricity distribution, a star of mass close to $1\, {\rm M_{\odot}}$ would have a probability of order 0.1 of capturing a black hole during the age of the Universe if $a_m$ is as large as a parsec.
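As a check of the numerical normalization in equation (\ref{eq:generalized_capture_rate}), evaluating equation (\ref{eq:Capture_rate}) at the fiducial values should return $N_c\simeq 1$ (illustrative Python sketch; the unit conversions are assumed inputs):

```python
import math

GM_sun = 4.301e-3                # G * M_sun in pc (km/s)^2 (assumed)
R_sun_pc = 6.957e10 / 3.086e18   # solar radius in pc (assumed)

# Fiducial values from the text:
n_pbh = 1e14                     # f_s * rho_h / m_b in pc^-3
sigma_h = 10.0                   # km/s
qbar_c = 0.05 * R_sun_pc         # effective capture periastron, pc
a_m = 1.0                        # maximum semimajor axis, pc

N_c = (2 * (2 * math.pi * GM_sun) ** 1.5 * n_pbh / sigma_h ** 3
       * qbar_c * math.sqrt(a_m))
print(N_c)  # ~1.0
```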
It is useful to estimate at this point a rough value for $a_m$ for some fiducial parameters. The change in orbital energy per stellar crossing can be approximated, from equations (\ref{eq:Du}) and (\ref{eq:adf}), as
\begin{equation}
\Delta u \sim \frac{8\pi G^2\rho_c r_c m_b}{v_e^2} \ln \Lambda ~,
\end{equation}
where the stellar density near the core is $\rho_c\sim 100\, {\rm g\, cm}^{-3}$, the stellar core has size $r_c\sim 10^{10}\,{\rm cm}$, $v_e\sim 1000\,{\rm km/s}$ is the escape velocity from the core, and $\Lambda=R_{\rm max}/R_{\rm min}\simeq R_*v_e^2/(2Gm_b) \sim M_*/m_b$, so we find
\begin{align}\label{eq:anest}
\Delta u\sim ~ 6\times 10^{-5} \frac{\rho_c r_c}{ 10^{12}{\rm g\, cm^{-2}} }\left(\frac{10^3\, {\rm km/s}}{ v_e}\right)^2 \frac{m_b}{10^{-12}\, {\rm M_{\odot}}}\, \frac{{\rm km}^2}{{\rm s}^2 } ~.
\end{align}
Precise calculations of $\Delta u$ will be presented in Section \ref{sec:Results} for specific stellar models. For a capture time $t_c= 10^{10}\, {\rm yr}$ and $M_*=1\, {\rm M_{\odot}}$, the maximum semimajor axis at which equation (\ref{eq:tc}) is obeyed is
\begin{align}\label{eq:ams}
a_m\sim (2\, {\rm pc})\,
\left( \frac{\rho_c r_c}{10^{12}{\rm g\, cm^{-2}}} \frac{m_b}{10^{-12}\, {\rm M_{\odot}}}
\right)^2 \left(\frac{10^3\,{\rm km/s}}{v_e}\right)^4 ~.
\end{align}
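The order-of-magnitude values in equations (\ref{eq:anest}) and (\ref{eq:ams}) can be reproduced with a short cgs calculation, using $t_c=2\pi\sqrt{GM_*a_m}/|\Delta u|$, which follows from equation (\ref{eq:tc}) with $P_0=2\pi a_m^{3/2}(GM_*)^{-1/2}$ and $|u_0|=GM_*/(2a_m)$ (illustrative Python sketch):

```python
import math

# cgs constants (assumed values)
G, Msun, pc, yr = 6.674e-8, 1.989e33, 3.086e18, 3.156e7

rho_rc = 1e12               # rho_c * r_c in g/cm^2
m_b = 1e-12 * Msun          # PBH mass, g
v_e = 1e8                   # escape speed from the core, cm/s (1000 km/s)
ln_lambda = math.log(1e12)  # ln(M_*/m_b)

# Energy loss per crossing, Delta u ~ 8 pi G^2 rho_c r_c m_b ln(Lambda)/v_e^2
du = 8 * math.pi * G ** 2 * rho_rc * m_b / v_e ** 2 * ln_lambda  # cm^2/s^2
print(du / 1e10)            # in km^2/s^2 -> ~6e-5

# Maximum semimajor axis from t_c = 2 pi sqrt(G M_* a_m) / du, t_c = 1e10 yr
t_c = 1e10 * yr
a_m = (t_c * du / (2 * math.pi)) ** 2 / (G * Msun)
print(a_m / pc)             # -> ~2 pc
```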
However, the ideal case of a Keplerian orbit around the single star of mass $M_*$ is not realistic for the large values of $a_m$ implied for the typical parameters in equation (\ref{eq:ams}), because tidal perturbations by the host DM halo and possibly other factors perturb the orbit, as we discuss next.
\subsection{Impact of Tidal Perturbations on the Capture Rate}\label{sec:Perturbations}
We have so far assumed that the PBH moves in a Kepler orbit around the star of mass $M_*$ with no gravitational perturbations. This assumption is clearly unrealistic for a PBH mass as low as $m_b\sim 10^{-12}\, {\rm M_{\odot}}$, because the tidal acceleration caused by the host DM halo, $g_h$, at the maximum semimajor axis $a_m$ is
\begin{equation}
g_h\sim \frac{GM_h a_m}{R^3} \sim g_* \frac{M_h a_m^3}{M_* R^3}~,
\end{equation}
where $g_*=GM_*/a_m^2$ is the gravitational acceleration from the star on the PBH. For $M_h/M_*\sim 10^6$, the external tidal acceleration exceeds $g_*$ at $a_m> 0.01 R$. Taking as an example a typical halo radius where the star is located, $R\sim 0.1 R_v\sim 10\, {\rm pc}$, we would expect any dark matter further than $\sim 0.1\, {\rm pc}$ from the star to be tidally stripped by the host halo.
Moreover, the external tide perturbs the orbital eccentricity of any PBH, deviating it from the nearly radial orbit required to cross the stellar interior. The change in orbital eccentricity over one period induced by the external tide is related to the change in specific angular momentum as
\begin{equation}
\delta e\simeq \frac{\sqrt{1-e^2} \delta\ell}{e \sqrt{GM_* a} } ~.
\end{equation}
We can reasonably assume that the external perturbation causes a change $\delta \ell / \sqrt{GM_* a} \sim g_h/g_* $ over one period, so for nearly radial orbits ($1-e \ll 1$), the eccentricity perturbation in one orbit is
\begin{equation}\label{eq:deg}
\delta e \sim \sqrt{2(1-e)}\, \frac{g_h}{g_*} ~.
\end{equation}
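The relation between $\delta e$ and $\delta\ell$ can be verified by finite differences on $\ell(e)$ at fixed $GM_*a$ (illustrative sketch in units where $GM_*a=1$):

```python
import math

def ell(e):
    # Specific angular momentum l / sqrt(G M_* a) = sqrt(1 - e^2)
    return math.sqrt(1 - e * e)

e, de = 0.99, 1e-6
dl = ell(e + de) - ell(e)                       # angular momentum change
predicted = math.sqrt(1 - e * e) * abs(dl) / e  # delta_e from the relation
print(de, predicted)  # agree to first order
```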
At the same time, the minimum eccentricity at each semimajor axis $a$ required for the PBH to be effectively slowed down as it crosses the stellar interior is $1-e_c(a)=\bar{q}_c(a)/a$. This small window of eccentricity was termed the ``loss-cone'' by \citet{1976MNRAS.176..633F} to refer to the region in velocity space where an orbiting object is lost because of the interaction with the central object; we use the term ``loss-cylinder'' here because of the cylindrical shape of this region of velocity space. When $\delta e > 1-e_c$, the PBH is removed from the loss-cylinder. This clearly has the effect of decreasing the capture rate at large semimajor axis.
However, at small semimajor axis the PBH capture rate can be increased by the external perturbations. The reason is that in the absence of perturbations, PBHs that are initially at $a\ll a_m$ and within the loss-cylinder are captured over a time $t\ll t_c$, so the loss-cylinder is depleted and further captures can occur only from PBH near the edges of the loss-cylinder that cross the star through the low-density envelope, with reduced friction and energy loss. When perturbations are present, the loss-cylinder is refilled and the capture rate increases back to its most effective rate.
For orbits of semimajor axis $a$, the ideal orbital perturbation rate that leads to the maximum PBH capture rate is that which produces, over a time $t_c$, a change in eccentricity of
\begin{equation}\label{eq:De}
\Delta e \sim \frac{\bar q_c a_m^{1/2}}{a^{3/2} } ~,
\end{equation}
because the time required for the PBH to lose its orbital energy at $a$ when the periastron is within $\bar q_c$ is only $t_c\, (a/a_m)^{1/2}$, so the PBH will be captured if perturbations induce a random-walk of the eccentricity within the characteristic interval $\Delta e$ over time $t_c$.
In general, orbital perturbations can greatly reduce PBH captures from $a\sim a_m$ because PBH are always removed from the loss-cylinder before they are captured, but as $a$ is decreased, the interval $\delta e$ over which the eccentricity random-walks over time $t_c$ decreases until it equals $\Delta e$ in equation (\ref{eq:De}). At this value of $a$, the PBH capture rate will be roughly the same as it was at the maximum semimajor axis $a_m$ in the absence of perturbations. Therefore, we conclude that despite the presence of orbital perturbations, the PBH capture rate will always be roughly the same as obtained from equation (\ref{eq:ncap}). Perturbations imply that most PBH are actually captured from orbits much closer to the star than $a_m$, but the total capture rate should not be greatly modified.
It is possible that the bound DM around the star was tidally stripped at some time down to a semimajor axis $a\ll a_m$, and that any external perturbers were subsequently removed, so that perturbations are absent but PBHs are no longer available at large $a$. In this case $a_m\sim 0.1\, {\rm pc}$, and the capture rate is reduced. Since the capture rate scales as $a_m^{1/2}$ once the change in $\bar{q}_c$ becomes negligible, the reduction should be less than an order of magnitude relative to the case $a_m\sim 1 \, {\rm pc}$. We will assume here that enough DM has been retained to maintain the capture rate close to the value computed with equation (\ref{eq:ncap}) in the presence of tidal perturbations by refilling of the loss-cylinder.
\subsection{Black Hole Growth after Capture}
Once the PBH has settled in the center of the stellar core by the continuous action of dynamical friction, it will start growing in mass by accreting the surrounding stellar plasma. As discussed previously by \citet{
1995MNRAS.277...25M,
2009arXiv0901.1093R,
2009MNRAS.399.1347B,
2019JCAP...08..031M}, the PBH can accrete rapidly at the Bondi rate if photons are trapped within the accreting plasma, and accretion proceeds more slowly if a disk forms that can radiate efficiently and slow down the inflow by emitting close to the Eddington luminosity. The Bondi accretion rate is
\begin{align}
\frac{\Dot{M}_{\rm B}}{m_b } = \, & \frac{\pi G^2 m_b \rho_s}{c_{s}^3} \simeq
3.24 \cdot 10^{-6} \, {\rm yr}^{-1} ~ \times \nonumber \\
& \frac{m_b}{10^{-12}\, {\rm M_{\odot}}} \frac{\rho_s}{100 \, \rm g/cm^3} \left(\frac{c_{s}}{300 \rm km/s}\right)^{-3} ~,
\end{align}
where $\rho_s$ and $c_s$ are the plasma density and sound speed at the stellar center. We see that at the Bondi accretion rate and the typical values given above, even an initial PBH mass as low as $10^{-16}\, {\rm M_{\odot}}$ undergoes runaway growth in less than $10^{10}$ years.
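The numerical coefficient and the growth argument can be checked directly (illustrative cgs sketch; since $\dot M\propto M^2$, the mass formally diverges after a time $\sim 1/({\dot M}/M)$ evaluated at the initial mass):

```python
import math

# cgs constants (assumed values)
G, Msun, yr = 6.674e-8, 1.989e33, 3.156e7

m_b = 1e-12 * Msun   # PBH mass, g
rho_s = 100.0        # central density, g/cm^3
c_s = 3e7            # central sound speed, cm/s (300 km/s)

# Specific Bondi rate, Mdot/M = pi G^2 M rho_s / c_s^3
rate = math.pi * G ** 2 * m_b * rho_s / c_s ** 3  # 1/s
print(rate * yr)     # -> ~3.2e-6 per yr

# Runaway time for an initial mass of 1e-16 Msun (rate scales with mass):
t_runaway = 1.0 / (rate * 1e-4) / yr
print(t_runaway)     # -> ~3e9 yr, well below 1e10 yr
```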
The growth rate becomes faster as the mass increases, so the PBH can accrete the entire stellar mass if Bondi accretion continues. If accretion becomes Eddington-limited at some stage, owing to the angular momentum of the accreting matter and the formation of an accretion disk, the accretion rate becomes
\begin{align}
\frac{\Dot{M}_{E}}{m_b} = \frac{4\pi G m_{p}}{c\,\eta\,\sigma_{T}} \simeq 2.2\cdot 10^{-8}\, \frac{0.1}{\eta}\, {\rm yr}^{-1} \, ,
\end{align}
where $m_{p}$ is the proton mass, $\eta$ is the radiative efficiency of the accretion disk and $\sigma_{T}$ the Thomson cross-section. In this case, the constant e-folding time for mass growth is also much shorter than the age of the first stars, $\simeq 10^{10}$ years. Therefore, the PBH will continue to grow until it has accreted a substantial fraction of the stellar mass.
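The Eddington-limited rate can be checked in the same way (illustrative cgs sketch):

```python
import math

# cgs constants (assumed values)
G = 6.674e-8
m_p = 1.673e-24      # proton mass, g
c = 2.998e10         # speed of light, cm/s
sigma_T = 6.652e-25  # Thomson cross-section, cm^2
eta = 0.1            # radiative efficiency
yr = 3.156e7

# Specific Eddington-limited growth rate
rate = 4 * math.pi * G * m_p / (c * eta * sigma_T)  # 1/s
print(rate * yr)         # -> ~2.2e-8 per yr
print(1 / (rate * yr))   # e-folding time -> ~4.5e7 yr << 1e10 yr
```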
If Bondi accretion continues, the star would only start being dynamically affected by the black hole accretion a few days before being completely accreted \citep{1995MNRAS.277...25M}. On the other hand, if an Eddington luminosity is emitted the total stellar luminosity will obviously be dominated by the accretion already when $m_b \sim 10^{-5}\, {\rm M_{\odot}}$ for $M_*=1\, {\rm M_{\odot}}$. At this late stage, formation of a jet or other mechanical energy release resulting from accretion may result in the ejection of the stellar envelope, and the end of PBH accretion, but the details of this final process are complex and it is not clear what the final mass of the PBH will be. Depending on stellar rotation and perhaps other stellar properties, the final mass may be close to the initial mass of the star, or may be much lower if mechanical energy can disperse the stellar material.
\subsection{Stellar Models} \label{sec:Stellar}
In the present work, we consider low-mass stars formed at high redshift in low-mass DM halos. Low-mass stars live for a long time ($\gtrsim 10^{10}$ yrs) and thus might have a considerable probability of capturing a PBH. Consequently, they might form low-mass black holes that, if discovered, would point to a formation process beyond the standard stellar formation channels.
Low-mass stars were expected to form soon after the first metal-free stars, once the first supernovae increased the heavy element abundance of the gas enough to lead to fragmentation and collapse of low-mass cloud cores
\citep{2021MNRAS.508.4767S, 2022MNRAS.510.4019P}. The concept of critical metallicity, that is, a minimum metallicity below which low-mass stars could not form \citep{2005IAUS..228..121B}, seemed to have observational support \citep{2007MNRAS.380L..40F}. However, recent observations of metal-poor stars in the halo keep pushing this threshold to lower values. The star found by \citet{2014Natur.506..463K} holds the current record, with a metallicity [Fe/H] $= -7.1$.
Even in the primordial metal-free gas, disks around massive stars might fragment into low-mass objects that form stars in the range $0.1$ to $1 \, {\rm M_{\odot}}$ \citep{2002ApJ...569..549N, 2020ApJ...901...16D, 2011Sci...331.1040C,2015MNRAS.447.3892H, 2014ApJ...792...32S, 2022ApJ...925...28L}, though the Initial Mass Function (IMF) of the most primitive stars is still largely uncertain \citep{2016MNRAS.462.1307S, 2015MNRAS.447.3892H}. Any such stars formed near the center of a low-mass DM halo are the best candidates for PBH capture.
In this study, we use models of very metal-poor stars, with $Z =10^{-4}$. This is the typical metallicity at which we expect the transition from a top-heavy to a bottom-heavy IMF to happen \citep{2013MNRAS.432L..46S,2019ffbh.book...67K, 2021MNRAS.503.2014S}. Six different stellar models with stellar masses in the range $0.32\, {\rm M_{\odot}} < M_* < 1 \, {\rm M_{\odot}}$ were computed for the present study with the open-source software instrument Modules for Experiments in Stellar Astrophysics (MESA) \citep{2011ApJS..192....3P, 2013ApJS..208....4P, 2015ApJS..220...15P, 2018ApJS..234...34P, 2019ApJS..243...10P}.
These models provide the stellar density, temperature, sound speed and mean atomic weight as functions of radius, from the zero-age main-sequence until times beyond the age of the universe, from which the loss of energy per orbit as a function of orbital energy and angular momentum can be calculated as described in the previous subsections.
A precise calculation of the PBH capture rate for a given stellar mass would involve averaging the capture rate over all ages, from the stellar birth to the present time. Instead of this, we present results for six cases of fixed stellar mass and age, assuming a constant stellar profile over the time $t_c$ at a fixed age. In practice, stars are in the main-sequence most of the time and low-mass stellar evolution is very slow, so this is a good approximation as long as we use an age when the star has already settled on its main-sequence. However, for stars of $M< 0.4 \, {\rm M_{\odot}}$ the time required to stabilize near the main-sequence is as long as $\sim 10^9$ years, which causes a complex dependence of the central stellar density on mass if an early age is used. To illustrate the dependence on both mass and age, we present results for the six stellar models with masses
$1$, $0.79$, $0.63$, $0.5$, $0.4$, and $0.32 \, {\rm M_{\odot}}$, and ages $1.5$, $3.2$, $1.3$, $1.3$, $10$, and $10$ Gyr, respectively. The density and temperature radial profiles of these six models are presented in Figures \ref{fig:Densstellmod} and \ref{fig:Tempstellmod}.
\begin{figure}
\includegraphics[width=\columnwidth]{Pictures/Stellar_models_density.pdf}
\caption{Mass density profile for the six stellar models used in this paper, with stellar masses 1, 0.79, 0.63, 0.5, 0.4, and 0.32 $\, {\rm M_{\odot}}$, and ages 1.5, 3.2, 1.3, 1.3, 10, and 10 Gyr, respectively.}
\label{fig:Densstellmod}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Pictures/Stellar_models_temperature.pdf}
\caption{Temperature profiles for the same stellar models as in Figure \ref{fig:Densstellmod}.}
\label{fig:Tempstellmod}
\end{figure}
\section{Results for six stellar models}\label{sec:Results}
We now present specific results for our six stellar models, starting with the energy loss in a single stellar crossing, then the critical eccentricity for a given capture time of $t_c= 10^{10}$ years, and then the total number of captured black holes. Our results are presented for $m_b=10^{-12}\, {\rm M_{\odot}}$, but can be scaled to other PBH masses in the asteroid mass range as good estimates in the way described previously.
\subsection{Energy loss by dynamical friction} \label{sec:Dyn_fric_rlts}
Using equations (\ref{eq:Du}) and (\ref{eq:adf}), we can find the loss of energy over a single passage as a function of the PBH specific energy and angular momentum. As an illustration, we show this energy loss for the highest mass stellar model, with $M=1\, {\rm M_{\odot}}$ and $t=1.5$ Gyr.
Results are plotted as a function of specific angular momentum in
Figure \ref{fig:Eloss} for the collisionless case using equation (\ref{eq:Chandra}), and for the fluid case, using equation
(\ref{eq:Thun}), for four different values of the specific energy: the parabolic case with zero energy, an unbound case, and two bound orbits, with specific energies as indicated in the figures. The assumed PBH mass is $m_b=10^{-12}\, {\rm M_{\odot}}$, but note that the energy loss is proportional to $m_b$, from equation (\ref{eq:adf}).
\begin{figure}
\includegraphics[width=\columnwidth]{Pictures/PBH_energy_loss_sel_1.pdf}
\caption{Specific energy loss of a $10^{-12}\, {\rm M_{\odot}}$ PBH when crossing our model star with $M_*=1\, {\rm M_{\odot}}$ and $t=1.5$ Gyr, as a function of the orbital
angular momentum, using both the Chandrasekhar dynamical friction formula for collisionless matter (Chandra, thin line) and the hydrodynamic formula (Thun, thick line) based on \citet{2016A&A...589A..10T} and \citet{1999ApJ...513..252O}. The four curves of each case are for the indicated specific orbital energies (zero energy is a parabolic orbit, and negative energy is for bound elliptical orbits).}
\label{fig:Eloss}
\end{figure}
The energy loss $\Delta u$ in Figure \ref{fig:Eloss} agrees with the simple analytic estimate from equation (\ref{eq:anest}). When the PBH is captured from a semimajor axis much larger than the stellar radius, the relevant result for calculating the time it takes for the PBH to finalize the capture process is the one for the parabolic orbit, with zero total energy. The unbound case is never used for our computation (we neglect any contribution to the capture rate from unbound, incoming PBHs that were not placed in bound orbits when the star formed). The bound orbits show appreciable differences in $\Delta u$ in Figure \ref{fig:Eloss} only when the apoastron is already not much larger than the stellar radius; by then, the capture process is almost finished. The critical eccentricity $e_c(a)$ therefore depends, to an excellent approximation, on the zero energy curve for $a\gg R_*$. Note that the characteristic specific angular momentum where $\Delta u$ drops corresponds to $\sim v_e \bar q_c$, as we would expect from equation (\ref{eq:barqc}).
The hydrodynamical expression leads to an energy loss that is a factor $\sim 1.6$ larger than that obtained from the Chandrasekhar formula.
\subsection{Critical eccentricity for capture}
Using our stellar models, we compute the critical eccentricity as a function of the initial semimajor axis $a$ required for the PBH to be captured over a time $t_c= 10^{10}$ years, assuming there are no gravitational perturbations to the potential of the spherical star of mass $M_*$. All results shown from now on are for the fluid case, described by equation (\ref{eq:Thun}), which is the valid one for stellar interiors; using equation (\ref{eq:Chandra}) instead results in only minor changes. Equation (\ref{eq:tc}) can be used for this purpose to a very good approximation, except when $a$ is not very large compared to $R_*$, but here we have carried out the exact calculation of the orbital evolution up to the point where the PBH orbit is completely absorbed in the stellar interior. The result is shown in Figure \ref{fig:Critecc_1} as the critical extrapolated periastron $q_c(a)=a[1-e_c(a)]$ (that is, the periastron the orbit would have if the star were replaced by a point mass), for our six stellar models. Note that the probability for a random orbit of semimajor axis $a$ to have a periastron below $q_c$ is $2q_c/a$, if $q_c \ll a$.
As expected, $q_c$ has a very weak variation with $a$ when $a\ll a_m$, and then drops sharply when $a$ becomes close to $a_m$.
The friction when the PBH moves through the dense stellar core determines the maximum semimajor axis $a_m$ where the energy loss allows the PBH to be captured in the time $t_c$. When the PBH starts at $a\ll a_m$, it can cross the star many times resulting in $q_c \sim R_{\ast}$, but if the crossing occurs in the outer envelope the friction is greatly reduced; this causes the slow decrease of $q_c$ with $a$.
In the absence of orbital perturbations, PBHs that start on an orbit with eccentricity $e> e_c=1-q_c/a$ are the ones that are inside the loss-cylinder and will therefore be captured by the stars, and those outside the loss-cylinder will not be captured in the time $t_c$.
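The loss-cylinder probability quoted above can be evaluated numerically; the function below is our own illustration (the sample values of $q_c$ and $a$ are taken from Table \ref{tab:XiforR} for the $1\, {\rm M_{\odot}}$ model):

```python
# Fraction of random orbits of semimajor axis a (in km) with extrapolated
# periastron below q_c (in solar radii): P = 2*q_c/a, valid for q_c << a.
R_SUN_KM = 6.957e5  # solar radius in km

def capture_probability(q_c_rsun, a_km):
    """Probability that a random orbit lies inside the loss-cylinder."""
    q_c_km = q_c_rsun * R_SUN_KM
    assert q_c_km < a_km / 10, "approximation requires q_c << a"
    return 2.0 * q_c_km / a_km

# e.g. q_c ~ 0.64 R_sun at a = 1e9 km for the 1 M_sun model
p = capture_probability(0.64, 1e9)  # ~8.9e-4
```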
\begin{figure}
\includegraphics[width=\columnwidth]{Pictures/q_c_various_radius.pdf}
\caption{Critical extrapolated periastron $q_c$ of a PBH of $10^{-12} \, {\rm M_{\odot}}$ as defined in section \ref{sec:PBH_num}, as a function of the initial semimajor axis $a$, for capture to occur in a time $t< 10^{10}$ years.
}
\label{fig:Critecc_1}
\end{figure}
Finally, we list the results of our calculation for the mean extrapolated periastron $\bar q_c$ in Table \ref{tab:XiforR}, for our six stellar models and several values of $a$ within which the average is made. As seen previously in Figure \ref{fig:Critecc_1}, the average $q_c$ is of the order of the stellar radius, but as $a$ approaches $a_{m}$, $\bar q_c$ falls because PBHs at such distances need to cross the core to be captured. To reproduce the capture rate for a smaller upper limit on $a$, it suffices to insert the corresponding value of $\bar q_c$ from the table into equation (\ref{eq:generalized_capture_rate}), together with the new value of $a_m$.
\begin{table}
\centering
\setlength\tabcolsep{4pt}
\begin{tabular}{lcccccr}
\multicolumn{7}{c}{}\\
\hline
$M_{\ast}\, [\, {\rm M_{\odot}}]$ & 1 & 0.79 & 0.63 & 0.50 & 0.40 & 0.32 \\
$R_{\ast}\, [\, {\rm R_{\odot}}]$ & 0.91 & 0.76 & 0.57 & 0.43 & 0.33 & 0.28\\
$t\, \rm [Gyr] $ & 1.5 & 3.2 & 1.3 & 1.3 & 10 & 10 \\
\hline
$a_{m}\, [{\rm pc}]$ & 0.98 & 0.93 & 0.88 & 0.83 & 0.78 & 0.72 \\
\hline
$a\, [{\rm km}]$ & & & $\bar q_c\, (a) \,[\, {\rm R_{\odot}}]$ & & & \\
\hline
$5\cdot 10^8$& 0.64 & 0.63 & 0.52 & 0.40 & 0.31 & 0.27 \\
\hline
$10^9$& 0.64 & 0.62 & 0.52 & 0.40 & 0.31 & 0.27 \\
\hline
$3\cdot 10^9$& 0.63 & 0.62 & 0.52 & 0.40 & 0.31 & 0.27 \\
\hline
$10^{10}$& 0.59 & 0.58 & 0.51 & 0.40 & 0.31 & 0.27\\
\hline
$3\cdot 10^{10}$&0.58 & 0.56 & 0.51 & 0.40 & 0.31 & 0.27 \\
\hline
$10^{11}$& 0.53 & 0.51 & 0.48 & 0.39 & 0.31 & 0.27 \\
\hline
$3\cdot 10^{11}$& 0.50 & 0.48 & 0.45 & 0.38 & 0.31 & 0.27 \\
\hline
$10^{12}$& 0.44 & 0.43 & 0.41 & 0.36 & 0.30 & 0.26 \\
\hline
$3\cdot 10^{12}$& 0.38 & 0.38 & 0.36 & 0.33 & 0.28 & 0.25\\
\hline
$10^{13}$& 0.32 & 0.34 & 0.31 & 0.30 & 0.27 & 0.23 \\
\hline
$a_{m}$& 0.25 & 0.27 & 0.25 & 0.25 & 0.25 & 0.21 \\
\hline
\end{tabular}
\caption{Values of $\bar q_c$ for $10^{-12} \, {\rm M_{\odot}}$ PBHs, for various upper limits of $a$ and our six stellar models. Here $a_{m}$ is defined as the maximum $a$ for which $e_c = 1$ results in capture within $t_c$, as discussed in section \ref{sec:Perturbations}, but we give $\bar q_c$ for various additional upper limits of $a$.}
\label{tab:XiforR}
\end{table}
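For convenience, the tabulated $\bar q_c(a)$ can be interpolated in $\log a$; the helper below is our own sketch, with the numbers transcribed from the first column ($1\, {\rm M_{\odot}}$ model) of Table \ref{tab:XiforR}, so intermediate values are an interpolation rather than a result of the paper:

```python
import math

# Tabulated mean extrapolated periastron qbar_c (in R_sun) versus semimajor
# axis a (in km), for the 1 M_sun, 1.5 Gyr stellar model.
A_KM  = [5e8, 1e9, 3e9, 1e10, 3e10, 1e11, 3e11, 1e12, 3e12, 1e13]
QBARC = [0.64, 0.64, 0.63, 0.59, 0.58, 0.53, 0.50, 0.44, 0.38, 0.32]

def qbar_c(a_km):
    """Log-linear interpolation of qbar_c at semimajor axis a_km (clamped)."""
    if a_km <= A_KM[0]:
        return QBARC[0]
    if a_km >= A_KM[-1]:
        return QBARC[-1]
    for i in range(len(A_KM) - 1):
        if A_KM[i] <= a_km <= A_KM[i + 1]:
            t = (math.log(a_km) - math.log(A_KM[i])) / (
                math.log(A_KM[i + 1]) - math.log(A_KM[i]))
            return QBARC[i] + t * (QBARC[i + 1] - QBARC[i])

q = qbar_c(1e11)  # recovers the table value, 0.53 R_sun
```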
\subsection{Results for captured PBH}
\begin{figure}
\includegraphics[width=\columnwidth]{Pictures/PBH_Capture_rate_all_Ostriker_mod.pdf}
\caption{Mean number of PBHs captured by the star with initial semimajor axis within $a$, for the phase-space density of PBHs at the halo radius $R=0.1 R_v$. The calculation assumes no external perturbations; these are expected to become important above the vertical red line, but they do not greatly alter the total capture rate (see main text for details).}
\label{fig:PBH_capture_rate_all_stars}
\end{figure}
Figure \ref{fig:PBH_capture_rate_all_stars} shows the mean number of PBHs captured from orbits within an initial semimajor axis $a$. As expected from equation (\ref{eq:ncap}), this total number increases as $a^{1/2}$, except when $a$ is already close to $a_m$ and $q_c$ starts declining rapidly with $a$.
The assumption that there are no external tidal perturbations used for this calculation is not realistic, because even the tidal perturbation of the host DM halo becomes comparable to the acceleration by the star at $a\simeq 0.1$ pc, as discussed in section \ref{sec:Perturbations}. Even at the smaller radius $a\simeq 0.01$ pc (indicated by the vertical red line in the figure), where the tidal perturbation from the minihalo is $10^3$ times smaller than the stellar acceleration, the critical eccentricity is only $1-e_c=q_c/a\simeq 10^{-6}$, so the tidal perturbation can move the periastron outside the loss-cylinder in just one period, according to equation (\ref{eq:deg}).
However, while the perturbations eliminate any PBH captures from semimajor axes as large as $0.01$ pc, they should correspondingly increase captures from smaller $a$, as discussed in section \ref{sec:Perturbations}. If we assume that the PBH random-walks through an eccentricity interval $\delta e \propto a^{-3/2}$ owing to the tidal perturbations, then as $a$ decreases the loss-cylinder from which the PBH can be captured increases in width, and the PBH spends a fraction of the total time $t_c$, proportional to $a^{1/2}$, in the region of width $\delta e_c\propto a^{-1}$ where the stellar core is crossed. This fraction of time is enough to capture a PBH that starts on an orbit at semimajor axis $a$. The eccentricity interval swept by the tidally driven random-walk narrows as $a$ decreases, and when it coincides with the capture region of width $\delta e\propto a^{-3/2}$, the resulting dominant capture rate is roughly independent of $a$, provided the total number of PBHs within $a$ increases as $a^{3/2}$.
Based on this argument, our calculation of the total capture rate shown in Figure \ref{fig:PBH_capture_rate_all_stars} should have a wide range of validity, even in cases where tidal perturbations are added from a variety of causes such as passing stars or crossing of galactic disks. The predicted total capture rate for a star that acquired its bound DM at birth at $R=0.1 R_v$, in our standard halo of $M_h=10^7\, {\rm M_{\odot}}$ at $z=20$, is $\sim 0.3$ PBH for the $1\, {\rm M_{\odot}}$ star, and 10 times lower for stars of $0.3\, {\rm M_{\odot}}$. This is a substantial probability for low-mass stars born at this high redshift to have formed low-mass stellar black holes by the present time. Tidal perturbations may still reduce this probability, for example if the bound DM is first tidally disrupted and then perturbations cease when the star is ejected to a region of very low density. But, as we have argued, our calculation should be realistic for many of the low-mass stars formed at high redshift.
Furthermore, many stars may form much closer to the halo center, increasing the phase-space density of the bound PBH acquired at birth. Figure \ref{fig:PBH_capture_rate_Halo} shows the result for the total capture rate, up to $a=a_m$, as a function of the initial halo radius $R$ where the star is formed, assuming the validity of our argument that tidal perturbations never reduce this rate. We assume the NFW profile with isotropic velocity dispersion, with the velocity dispersion profile shown in Figure \ref{fig:sigma_halo}, and the normalizing factor $Q$ for the bound DM density in equation (\ref{eq:rbd}), $Q \propto \rho_h/\sigma_h^3$. We see that the mean number of PBH captured actually reaches unity for stars born at $R\sim 0.03 R_v$, which is a typical formation radius for stars in present-day galaxies in galactic DM halos.
Considering PBH of different masses, we note that the specific energy loss in a stellar crossing is proportional to $m_b$, implying that lighter PBH are more difficult to capture, and that the maximum semimajor axis for capture $a_m$ increases as $m_b^2$ in equation (\ref{eq:ams}), so lighter PBH are captured only from smaller radii. However, for fixed $Q$ and therefore a fixed {\it mass} density of PBH, the number density is proportional to $m_b^{-1}$ and the total PBH capture rate is independent of $m_b$. We therefore reach the robust conclusion that if PBH over the broad asteroid-mass range account for most of the DM, a substantial fraction of low-mass stars formed at high redshift will inevitably form low-mass black holes after capturing a PBH that subsequently accretes the stellar material, and these low-mass stellar black holes should be present in the Universe today.
\begin{figure}
\includegraphics[width=\columnwidth]{Pictures/PBH_Capture_rate_all_sigma_iso.pdf}
\caption{Mean number of PBHs captured by a star as a function of the halo radius of the stellar birth site, for an NFW DM halo with isotropic velocity dispersion.}
\label{fig:PBH_capture_rate_Halo}
\end{figure}
\section{Discussion and Conclusions} \label{sec:Conclusions}
Observational constraints on the abundance of PBHs have left only the asteroid-mass window, $10^{-16} \lesssim m_b/M_\odot \lesssim 10^{-11}$, as the one where all the DM may be composed of PBH \citep{2021RPPh...84k6902C}. This manuscript shows that if the DM is indeed made of these PBH, the first generation of low-mass stars formed at $z\sim 20$ in low-mass halos would have a high probability of forming stellar black holes with masses less than a Chandrasekhar mass, through the process of capture of a PBH by the main-sequence star and the subsequent accretion and growth of the PBH.
The final black hole may reach a mass comparable to the initial stellar mass, with uncertainties related to the possible formation of an accretion disk around the growing black hole. The reason is that the radiative efficiency of such a disk might be able to hamper accretion by dispersing the stellar material when only a small fraction of the stellar mass has been accreted. Other uncertainties include the possible role of external tidal perturbations or internal ones due to, for example, a planetary system around the star. The PBH capture rate would also be heavily modified in binary stars \citep{2012PhRvL.109f1301B}. Further work will be needed to clarify some of these uncertainties; nevertheless, the calculations presented in this paper suggest as a likely outcome of this asteroid-mass PBH scenario that many low-mass stellar black holes formed in this process may exist today in the Milky Way, after having originated in early dwarf galaxies that later merged into our Galaxy.
This paper extends earlier works \citep{2009ApJ...705..659A,2009MNRAS.399.1347B,2013PhRvD..87b3507C} with
calculations that use models of very metal-poor main-sequence stars of low mass, an analysis of the impact of external tidal perturbations on the PBH capture rate (with an improved treatment compared to previous work, e.g. \citet{2013PhRvD..87l3524C, 2014PhRvD..90h3507C}), and its dependence on the PBH mass. We reach the remarkable conclusion that the capture of PBH by these low-mass stars formed in the early DM halos with highest phase-space densities should occur for most stars formed within a halo radius $R\sim 0.03 R_v$ (a typical location for star formation in galactic halos), over this whole asteroid-mass range for PBH.
If many of these early low-mass stars have indeed collapsed to low-mass stellar black holes, their remnants should be found today among the Milky Way stellar populations, because the early dwarf galaxies where they formed may merge into more massive halos and end up tidally disrupted in the Milky Way halo over a wide range of radii. Their spatial distribution in the Milky Way is not easy to predict: early simulations proposed that
the remnants of the most ancient stellar populations should be found near the centre of the Galaxy, owing to the high bias factor of DM halos formed at high redshift \citep{2000fist.conf..327W,2006ApJ...653..285S}, but more recent work has found that baryonic effects may imply a broader radial distribution over the outer halo \citep{2018MNRAS.480..652E}.
How could these low-mass stellar black holes be detected at present? The most obvious possibility is via gravitational wave emission, if the black hole is in a binary system that leads to a merger. Some binary systems might contain two stars that both collapsed to low-mass black holes below the Chandrasekhar mass; alternatively, one of these low-mass black holes might be left in orbit around a more massive star that collapses to a black hole or neutron star following the standard evolution. In both cases, these binaries could spiral down into a merger at the present time that is detectable by the LIGO-Virgo collaboration \citep{2015CQGra..32g4001L,2021arXiv211103634T}. The detection of a merging black hole with a mass below the Chandrasekhar value would naturally indicate new physics beyond standard stellar evolution theory, and has already been targeted by recent gravitational-wave searches \citep{2021arXiv210912197T}.
Another way of detecting these low-mass black holes is when they are observed in a binary system in which the other object is a luminous star. For a main-sequence companion, a normal star would be seen orbiting around a dark object. The difficulty in this case would be to rule out an old white dwarf as the unseen companion, because white dwarfs are very faint and difficult to see when they are unresolved from the main-sequence star. If the companion is instead a white dwarf, the system is much fainter and harder to discover, but it is then easier to rule out another white dwarf as the unseen compact object below the Chandrasekhar mass. An additional difficulty for this type of detection is that the low-mass black holes formed from PBH capture would be rare and present only among very low-metallicity stars, so many of these candidate binaries would have to be examined, most of which would contain ordinary old white dwarf companions.
If the low-mass black hole that is formed by a PBH captured by a star is isolated, then it is extremely difficult to discover. Microlensing seems the only possibility \citep{1986ApJ...304....1P, 2000ApJ...542..281A}, but these black holes would actually pass for M-dwarfs, white dwarfs or brown dwarfs, depending on their mass. These other objects are all very faint and therefore usually not possible to distinguish from low-mass black holes, with a much lower expected abundance in our PBH scenario.
In summary, the open asteroid-mass window for PBH as DM is a possibility in which we expect that low-mass black holes of stellar mass, but below the Chandrasekhar limit, exist today. These can be found if they are in binaries that are either tight enough to lead to mergers with other compact objects detectable via gravitational waves at present, or through the direct detection of the binary companion in the Milky Way and identification of the unseen object as a low-mass black hole. These binaries are expected to be among the first stellar systems to have formed, and therefore of very low-metallicity, which may help in their identification among halo stars in our vicinity or closer to the Milky Way centre.
\section*{Acknowledgements}
We would like to acknowledge helpful discussions and advice from N. Bellomo, J. L. Bernal, A. Escriv\`a, C. Germani, and J. Salvad{\'o}. This work was supported in part by Spanish grants CEX-2019-000918-M funded by MCIN/AEI/10.13039/501100011033, AYA2015-71091-P, and PID2019-108122GB-C32.
\bibliographystyle{mnras}
\typeout{}
\section{Introduction}
In recent years, deep learning models have achieved and continued to improve on state-of-the-art results on many NLP tasks.
However, models that perform extremely well on standard datasets have been shown to be rather brittle and easily tricked.
In particular, the idea of \emph{adversarial} examples or attacks was brought over from computer vision, and various methods of slightly perturbing inputs have been developed that cause models to fail catastrophically \cite{mccoy_right_2019,glockner_breaking_2018,naik_stress_2018}.
Adversarial attacks need to be studied from a security perspective for the deployment of real-world systems, but they are also a powerful lens into \emph{interpretability} of black-box deep learning systems.
By examining the failures of state-of-the-art models, we can learn a lot about what they are really learning, which may give us insights into improving their robustness and general performance.
One philosophical generalization about the cause of failure for all current NLP systems is a lack of deep, `real' understanding of language.
We will focus on the task of natural language inference (NLI), which is a basic natural language understanding task thought to be a key stepping stone to higher-level understanding tasks like question answering and summarization.
The setup of the NLI task is to determine whether a \emph{hypothesis} is true given a \emph{premise}, answering \emph{entailment}, \emph{contradiction}, or \emph{neutral}.
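As a concrete illustration of the task format (the premise/hypothesis pairs below are our own invented examples, not drawn from any NLI dataset):

```python
# Each NLI instance is a (premise, hypothesis, label) triple with one of
# three gold labels.
LABELS = ("entailment", "contradiction", "neutral")

examples = [
    ("A man is playing a guitar on stage.",
     "A person is playing a musical instrument.", "entailment"),
    ("A man is playing a guitar on stage.",
     "Nobody is playing any instrument.", "contradiction"),
    ("A man is playing a guitar on stage.",
     "The man is a professional musician.", "neutral"),
]

for premise, hypothesis, label in examples:
    assert label in LABELS
```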
The current top-performing systems for NLI rely on pretraining on generic tasks, followed by fine-tuning on a labeled task-specific dataset.
This is in contrast to older (before late 2018) models, which were primarily task-specific architectures trained on task-specific labeled datasets.
In addition, the Transformer architecture \cite{vaswani_attention_2017} now outperforms the previously dominant recurrent architectures (LSTM and variants).
We want to analyze what kinds of adversarial attacks are still potent on highly-acclaimed recent NLP models like BERT \cite{devlin_bert:_2018} and MT-DNN \cite{liu_multi-task_2019}.
Our contributions are as follows:
\begin{itemize}
\item We test models on a variety of existing adversarial datasets, with a high level of granularity to different linguistic phenomena.
Results indicate that the pre-trained models are remarkably good at lexical meaning, while struggling most with logic and syntactic phenomena.
\item We focus in on the syntax-focused dataset created by \citeauthor{mccoy_right_2019} \cite{mccoy_right_2019}.
We look closely at the 30 subcases, and analyze the effects of model size (base vs. large size) and multi-task learning (MT-DNN vs. BERT).
We also examine what subcases all models fail at.
\item We experiment with fine-tuning the models with (flattened) dependency parses as input (with no adjustments to architecture or data pre-processing).
We find that this does improve performance on some, but not all, subcases that rely on the hierarchical structure of sentences.
\item Lastly, we investigate MNLI's biases by analyzing performance after different amounts of fine-tuning (more and more overfitting) on MNLI.
\end{itemize}
\section{Related Work}
This work joins a growing movement in NLP to go beyond improving test set metrics to more deeply analyze model learning and performance \cite{belinkov_analysis_2019}.
This genre of work believes in the value of interpretability, both to build safer practical systems, and just to find fruitful directions for improving raw model performance.
\citeauthor{liu_inoculation_2019} \cite{liu_inoculation_2019} use a metaphor of inoculation to disentangle the blame for adversarial vulnerability between training data and model architecture.
They expose a small part of the challenge dataset to the model during training, and re-test its evaluation performance on the original test set and the challenge dataset.
\begin{enumerate}
\item If the model still fails the challenge dataset, the weakness probably lies in its design/architecture or training process.
\item If the model can now succeed at the challenge dataset (without sacrificing performance on the original dataset), then the original dataset is at fault.
\item If the model does better on the challenge dataset but worse on the original dataset, the challenge dataset is somehow not representative of the phenomenon it was trying to test, for example having annotation artifacts or being very skewed to a particular label.
\end{enumerate}
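The three-way diagnosis above can be sketched as a simple decision rule; the function name, accuracy inputs, and thresholds below are our own illustrative choices, not from the inoculation paper:

```python
# Given accuracies after "inoculation" (fine-tuning on a small slice of the
# challenge set), decide which of the three outcomes applies.
def diagnose(orig_acc, challenge_acc, baseline_orig_acc, tol=0.02):
    """orig_acc / challenge_acc: post-inoculation accuracies on the original
    test set and the challenge set; baseline_orig_acc: original-test accuracy
    before inoculation; tol: acceptable drop on the original test set."""
    orig_drop = baseline_orig_acc - orig_acc
    if challenge_acc < 0.5:              # still fails the challenge set
        return "model weakness"
    if orig_drop <= tol:                 # succeeds without sacrificing accuracy
        return "dataset weakness"
    return "challenge dataset artifacts"  # gains cost original-set accuracy

outcome = diagnose(0.84, 0.90, 0.85)  # example: dataset weakness
```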
Unfortunately, even if adversarial training does improve model performance on that particular dataset, it is fundamentally impossible to devise and train on all possible linguistic phenomena.
The transferability of adversarial robustness to new kinds of examples has been tested by some of the creators of adversarial datasets, by withholding some example generation methods while training on others.
\citeauthor{nie_analyzing_2018} \cite{nie_analyzing_2018} find that knowledge of each of their rule-based templates was almost completely non-transferable to others.
In fact, training on some specific templates caused overfitting and hurt overall robustness.
\citeauthor{mccoy_right_2019} \cite{mccoy_right_2019} find more mixed results, with some cases of successful transfer.
Many standard datasets for different tasks have been shown to have blatant annotation artifacts, allowing models to learn features that are strong in the training (and testing) data, but that have nothing to do with actually performing the task.
\citeauthor{gururangan_annotation_2018} \cite{gururangan_annotation_2018} find many of these artifacts in standard NLI datasets (SNLI and MNLI).
For example, \emph{neutral} hypotheses tend to be longer in length, because an easy way to generate a hypothesis that isn't necessarily entailed by the premise is to add extra details.
Meanwhile, strong negation words like \emph{nobody, no, never} are strong indicators of \emph{contradiction}.
With these artifacts in mind, they split the data into ``hard'' and ``easy'' versions, and model performance decreased by about 15\% on the hard test set.
These findings suggest that it is not the models' faults for failing on adversarial examples, given that there exist easier ways to get high accuracy than truly understanding anything.
But it also means that current evaluation metrics greatly overestimate models' abilities and understanding.
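These artifacts imply that a hypothesis-only heuristic can beat chance without ever reading the premise. A minimal sketch of such a baseline (the word list and length threshold are our own illustrative choices):

```python
# Hypothesis-only baseline exploiting annotation artifacts: strong negation
# words suggest contradiction; long hypotheses suggest neutral.
NEGATION_WORDS = {"nobody", "no", "never", "nothing"}

def artifact_baseline(hypothesis, length_threshold=12):
    """Predict an NLI label from the hypothesis alone."""
    tokens = hypothesis.lower().split()
    if NEGATION_WORDS & set(tokens):
        return "contradiction"
    if len(tokens) > length_threshold:
        return "neutral"
    return "entailment"
```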
\section{Models}
The two new models that we study gain most of their power from pre-training on a generic language task with a huge unlabeled dataset.
They achieve state-of-the-art performance on a variety of language understanding tasks.
\begin{enumerate}
\item \textbf{BERT} \cite{devlin_bert:_2018} pre-trains on a bidirectional word-masking language modelling task, in addition to sentence pair prediction, i.e. whether the second sentence is likely to directly follow the first.
\item\textbf{MT-DNN} \cite{liu_multi-task_2019} builds on BERT by performing multi-task learning on the nine GLUE (General Language Understanding Evaluation) benchmark tasks \cite{wang_glue:_2018}, after BERT's pre-training.
\end{enumerate}
BERT is based on the Transformer architecture \cite{vaswani_attention_2017}, a non-recurrent, purely attention-based architecture.
BERT has a base version (12 Transformer layers), and a large version (24 layers).
We trained base and large versions of both BERT and MT-DNN.
These models are fine-tuned on MNLI starting from publicly available pre-trained checkpoints.
We compare with an older recurrent model, \textbf{ESIM} (Enhanced Sequential Inference Model) \cite{chen_enhanced_2016}.
It is NLI-task-specific and only trained on MNLI, with no huge pre-training.
It uses a bidirectional LSTM to encode the premise and hypothesis sentences, and uses attention across those representations.
We also considered another model,
Syntactic TreeLSTM (S-TLSTM), which is identical to ESIM except it uses a TreeLSTM that takes a dependency parse as input \cite{chen_enhanced_2016}.
This model may provide a useful comparison to BERT because its explicit use of the hierarchical structure of language is the exact opposite model design direction from extensive unsupervised pre-training.
However, various studies suggest that the BERT architecture does in fact learn hierarchical structure:
\citeauthor{goldberg_assessing_2019} \cite{goldberg_assessing_2019} found that BERT performed remarkably well when fine-tuned for external syntactic classification tasks, and \citeauthor{jawahar_what_2019} \cite{jawahar_what_2019} showed that different layers of BERT learned structural representations of language at different abstraction levels.
\citeauthor{mccoy_right_2019} \cite{mccoy_right_2019} test a different tree-based model (SPINN \cite{bowman_fast_2016}) on their adversarial dataset, and find that it outperforms ESIM, but not BERT.
Considering all this, and the fact that there is currently no tree-based model that comes close to outperforming BERT and variants on standard datasets, we decided not to test S-TLSTM, despite its philosophical appeal.
\section{Overall Results and Analysis}
First, for reference, we provide the accuracies on the matched MNLI dev set for the models we trained (and tested) in Table \ref{table:overallMNLIResults}.
$\rm {BERT}_{large}$ results do not quite match published results, but we had limited hardware and did not carefully tune hyperparameters.
The BERT-based models all perform comparably, and even ESIM does respectably.
\begin{table}[!htbp]
\hspace*{1em}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Model & Accuracy (\%) \\ \hline \hline
ESIM & 76.80 \\ \hline
BERT base & 84.17 \\ \hline
BERT large & \textbf{85.84} \\ \hline
MT-DNN base & 84.20 \\ \hline
MT-DNN large & \textbf{86.69} \\ \hline
\end{tabular}
\caption{Overall MNLI Results}
\label{table:overallMNLIResults}
\end{center}
\end{table}
Let us now analyze the performance of the selected models on the adversarial datasets (also called challenge sets or stress tests).
We discuss the first two briefly and then focus on the last one \cite{mccoy_right_2019} because it is the most interesting in terms of actually distinguishing the strengths of the better-performing models.
\subsection{\textbf{\citeauthor{glockner_breaking_2018}} (2018)}
This dataset is created by modifying SNLI examples with single word replacements of different lexical relations, based on WordNet.
It tests lexical inferences and relatively simple world knowledge.
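As a rough illustration of this construction (the lexical-relation table below is hand-made; the actual dataset draws these relations from WordNet), a single substitution determines the new label:

```python
# Single-word replacement attack, sketched with a hand-made relation table.
# Swapping in a hypernym preserves truth (entailment); an antonym flips it
# (contradiction).
RELATIONS = {
    ("hot", "antonym"): "cold",
    ("dog", "hypernym"): "animal",
}

def replace_word(sentence: str, word: str, relation: str):
    """Return (modified sentence, implied NLI label), or None if no relation."""
    new = RELATIONS.get((word, relation))
    if new is None:
        return None
    label = "entailment" if relation == "hypernym" else "contradiction"
    return sentence.replace(word, new), label
```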
They test a model called KIM (Knowledge-based Inference Model) \cite{chen_enhanced_2016}, which builds on ESIM to explicitly incorporate knowledge from WordNet in a variety of ways, including in architecture additions.
However, the BERT-based models still significantly outperform KIM.
This could be due to model architecture, but is most likely a result of their extensive pre-training on a huge, diverse corpus.
There is not a big difference between model sizes, or between MT-DNN and BERT.
This suggests that lexical semantics is more basic and low-level, so learning it does not need so many layers of abstraction, or multi-task learning (see Table \ref{table:GlocknerAttacks}).
\begin{table}[!htbp]
\hspace*{1em}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Model & Accuracy (\%) \\ \hline \hline
ESIM\textsuperscript{*} & 65.6 \\ \hline
KIM\textsuperscript{*} & 83.5 \\ \hline
BERT base & 92.2 \\ \hline
BERT large & {\bf 94.2} \\ \hline
MT-DNN base & 92.9 \\ \hline
MT-DNN large & {\bf 94.8} \\ \hline
\end{tabular}
\end{center}
\caption{Single Word Replacement Attacks from \cite{glockner_breaking_2018}. *ESIM and KIM results from original paper.}
\label{table:GlocknerAttacks}
\end{table}
\subsection{\textbf{\citeauthor{naik_stress_2018}} (2018)}
This dataset is composed of a variety of tests motivated by a manual examination and categorization of 100 mistakes made by the best performing model at the time \cite{nie_shortcut-stacked_2017}.
The categories are antonyms, word overlap (append ``and true is true''), negation words (append ``and false is not true''), length mismatch (append ``and true is true'' 5 times), and spelling errors.
Antonyms and spelling are ``competence'' tests, while the rest are ``distraction'' tests.
The examples are generated by modifying examples from MNLI.
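The ``distraction'' attacks in particular are mechanical enough to state as one-line transformations. Below is a sketch under our own assumptions about punctuation and about which side of the pair is modified (the appended clauses are tautologies, so gold labels are unchanged):

```python
# Tautology-appending "distraction" attacks in the spirit of the categories above.
def word_overlap_attack(hypothesis: str) -> str:
    return hypothesis.rstrip(".") + " and true is true."

def negation_attack(hypothesis: str) -> str:
    return hypothesis.rstrip(".") + " and false is not true."

def length_mismatch_attack(premise: str) -> str:
    # The same tautology appended five times inflates length without new content.
    return premise.rstrip(".") + " and true is true" * 5 + "."
```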
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{|l|l|}
\hline
Model & Accuracy (\%) \\ \hline \hline
ESIM & 68.39 \\ \hline
BERT base & 74.30 \\ \hline
BERT large & {\bf 77.21} \\ \hline
MT-DNN base & 73.73 \\ \hline
MT-DNN large & {\bf 77.14} \\ \hline
\end{tabular}
\end{center}
\caption{Error-analysis motivated attacks from \cite{naik_stress_2018}. Accuracy averaged over all categories of attacks.}
\label{table:NaikAttacks}
\end{table}
$\rm {BERT}_{large}$ and $\rm {MT\mbox{-}DNN}_{large}$ do best.
Overall model performance trends the same as performance on MNLI, but differences are not huge.
Furthermore, when we examined performance on specific categories, all models had about the same pattern of relative performance on different categories of tests, i.e. they have the same relative successes and failures.
This consistency and generally similar performance indicates in this case that the dataset is not well-targeted enough for really interesting insight.
In addition, compared to \citeauthor{mccoy_right_2019} \cite{mccoy_right_2019} (below), the way that examples are generated is more artificial, and maybe less meaningful.
Of course, a robust NLI system still should not be defeated by this kind of attack, i.e. it should be able to identify irrelevant information, including tautologies, and this test shows that even the best models have not mastered this capability.
\subsection{\textbf{\citeauthor{mccoy_right_2019}} (2019)}
They hypothesize that models utilize shallow, fallible syntactic heuristics to achieve accuracy on MNLI, instead of ``real'' understanding.
The dataset consists of examples generated from manually created templates that break these heuristics.
They have three categories of heuristics (each is a special case of the one before).
\begin{enumerate}
\item Lexical overlap: The model is likely to answer \emph{entailment} if the premise and hypothesis share a lot of words. \\
It would trick bag-of-words (no word order) models.
\item Subsequence: The hypothesis is a contiguous string of words from the premise. \\
\emph{The ball by \underline{the bed rolled}. $\nrightarrow$ The bed rolled.} \\
It could confuse sequence models too.
\item Constituent: The hypothesis is a syntactic constituent in the premise. \\
\emph{If \underline{the boys slept}, they would not eat. $\nrightarrow$ The boys slept.} \\
It could confuse models that know about syntax.
\end{enumerate}
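The first two heuristics are purely surface-level, which makes them easy to state in code. The checks below are our own minimal sketch (crude whitespace tokenization; the constituent heuristic is omitted because it requires a parse), not the template-generation code of the cited paper:

```python
def _tokens(sentence: str):
    # Lowercase and drop a trailing period; a crude stand-in for real tokenization.
    return sentence.lower().rstrip(".").split()

def lexical_overlap(premise: str, hypothesis: str) -> bool:
    """Heuristic 1: every hypothesis word also appears in the premise."""
    return set(_tokens(hypothesis)) <= set(_tokens(premise))

def subsequence(premise: str, hypothesis: str) -> bool:
    """Heuristic 2: the hypothesis is a contiguous span of the premise."""
    p, h = _tokens(premise), _tokens(hypothesis)
    return any(p[i:i + len(h)] == h for i in range(len(p) - len(h) + 1))
```

Note that a subject/object swap like \emph{The doctor saw the lawyer} $\nrightarrow$ \emph{The lawyer saw the doctor} satisfies lexical overlap but not subsequence, which is precisely why a bag-of-words model is fooled where a sequence model need not be.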
All three heuristics involve the model thinking the answer is \emph{entailment} when it is not, i.e. the \emph{non-entailment} examples are the ones that contradict the heuristic.
So the extreme imbalance in model performance between entailment and non-entailment examples is strong evidence that the models do indeed rely on the hypothesized heuristics (Table \ref{table:McCoyAttacksEntailment} vs. \ref{table:McCoyAttacksNotEntailment}).
\begin{table}[!htbp]
\hspace*{1em}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
\emph{Entailment} & word overlap & subseq & constituent \\ \hline
ESIM & 96.52 & 98.46 & 94.48 \\ \hline
$\rm {BERT}_{base}$ & 97.20 & 99.52 & 99.04 \\ \hline
$\rm {BERT}_{large}$ & {\bf 90.48} & 99.48 & 96.70 \\ \hline
$\rm {MT\mbox{-}DNN}_{base}$ & 97.22 & 99.98 & 99.22 \\ \hline
$\rm {MT\mbox{-}DNN}_{large}$ & 96.06 & 99.54 & 99.14 \\ \hline
\end{tabular}
\end{center}
\caption{Accuracy on examples labeled `entailment'}
\label{table:McCoyAttacksEntailment}
\end{table}
\begin{table}[!htbp]
\hspace*{1em}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
\emph{Non-entailment} & word overlap & subseq & constituent \\ \hline
ESIM & 1.56 & 4.88 & 3.32 \\ \hline
$\rm {BERT}_{base}$ & 54.68 & 9.46 & 4.88 \\ \hline
$\rm {BERT}_{large}$ & \textbf{83.44} & \textbf{31.38} & \textbf{44.72} \\ \hline
$\rm {MT\mbox{-}DNN}_{base}$ & 72.96 & 5.66 & 16.50 \\ \hline
$\rm {MT\mbox{-}DNN}_{large}$ & \textbf{88.08} & \textbf{31.24} & 22.88 \\ \hline
\end{tabular}
\end{center}
\caption{Accuracy on examples labeled `non-entailment'}
\label{table:McCoyAttacksNotEntailment}
\end{table}
All the BERT-based models do significantly better than the LSTM-based ESIM in most categories, as we see in Table \ref{table:McCoyAttacksNotEntailment}.
But $\rm {BERT}_{large}$ and $\rm {MT\mbox{-}DNN}_{large}$ do vastly better than all others, a difference that was not nearly as apparent in any of the other datasets we tested.
In combination with the granularity in the manually created templates, these huge differences in performance indicate that this dataset more directly probes and reveals the strengths and weaknesses of different models.
The success of $\rm {BERT}_{large}$ and $\rm {MT\mbox{-}DNN}_{large}$ suggests that structural/syntactic information is learned more deeply by a larger model with more layers and parameters to work with (in contrast to lexical semantics (\citeauthor{glockner_breaking_2018}, above)).
$\rm {BERT}_{large}$ also has lower accuracy on the \emph{entailment} examples, which further indicates that it is less prone to blindly following the heuristics.
$\rm {MT\mbox{-}DNN}_{base}$ (which is built on $\rm {BERT}_{base}$ and is therefore of comparable size) does significantly better than $\rm {BERT}_{base}$ in some categories, indicating the value of multi-task learning (specifically on language understanding tasks).
\section{Fine-grained Model Comparison}
\begin{table*}[t]
\begin{center}
\begin{tabular}{| p{.6in} | l | p{.5in} | p{.5in} | p{.5in} | p{.5in} | p{.5in} || p{.5in} | p{.5in} | }
\hline
Heuristic & Syntactic subcategory & MT-DNN large & BERT large & MT-DNN base & BERT base & ESIM & BERT large UP & MT-DNN base PO \\ \hline \hline
\multirow{5}{*}{\shortstack{Lexical \\ Overlap}}
& subject/object\_swap & \textbf{0.999} & \textbf{0.994} & 0.935 & 0.729 & 0 & 0.988 & 0.936 \\
& preposition & 0.934 & \textbf{0.979} & 0.794 & 0.745 & 0.004 & 0.960 & 0.889 \\
& relative\_clause & 0.912 & \textbf{0.928} & 0.699 & 0.504 & 0.069 & \textbf{0.930} & 0.837 \\
& passive & \textbf{0.625} & 0.298 & 0.432 & 0.036 & 0 & 0.214 & 0.505 \\
& conjunction & 0.934 & \textbf{0.973} & 0.788 & 0.720 & 0.005 & 0.943 & 0.711 \\ \hline
\multirow{5}{*}{Subseq}
& NP/S & 0.042 & 0.003 & 0 & 0.016 & 0.058 & 0.004 & 0.003 \\
& PP\_on\_subject & 0.668 & 0.673 & 0.168 & 0.293 & 0.001 & \textbf{0.786} & 0.533 \\
& relative\_clause\_on\_subject & 0.749 & 0.698 & 0.082 & 0.133 & 0.087 & \textbf{0.863} & 0.347 \\
& past\_participle & 0.006 & 0.049 & 0.013 & 0.018 & 0.050 & 0.032 & 0.008 \\
& NP/Z & 0.097 & 0.146 & 0.020 & 0.013 & 0.047 & \textbf{0.217} & 0.172 \\ \hline
\multirow{5}{*}{Constituent}
& embedded\_under\_if & 0.703 & \textbf{0.987} & 0.369 & 0.767 & 0.137 & 0.907 & 0.387 \\
& after\_if\_clause & 0.001 & 0 & 0 & 0 & 0 & 0 & 0.010 \\
& embedded\_under\_verb & 0.342 & \textbf{0.903} & 0.252 & 0.299 & 0 & 0.546 & 0.146 \\
& disjunction & 0.005 & 0 & 0.001 & 0.001 & 0.029 & 0.008 & 0.002 \\
& adverb & 0.093 & \textbf{0.346} & 0.203 & 0.079 & 0 & 0.083 & 0.036 \\ \hline
\end{tabular}
\end{center}
\caption{Results for \emph{non-entailment} subcases. Each row corresponds to a syntactic phenomenon. BERT large UP: fine-tuned on unparsed then parsed MNLI; MT-DNN base PO: fine-tuned on parsed MNLI only.}
\label{table:FineGrainedAnalysisResults}
\end{table*}
\subsection{Comparison of $\rm {BERT}_{base}$ and $\rm {BERT}_{large}$}
$\rm {BERT}_{large}$ performs better than or equal to $\rm {BERT}_{base}$ (at worst -1\%) on all fifteen \emph{non-entailment} subcases.
Some templates saw particularly large improvement, such as modifying clauses:
\begin{itemize}
\item Relative clauses that modify nouns (+42.4\%) \\
\emph{The artists that supported the senators shouted. $\nrightarrow$ The senators shouted.}
\item Prepositional phrase modifiers (+38\%) \\
\emph{The managers next to the professors performed. $\nrightarrow$ The professors performed.}
\end{itemize}
Understanding modifying clauses requires understanding the mechanics of compositional semantics (probably utilizing some kind of hierarchical syntax), which is a basic but crucial step in language understanding.
So $\rm {BERT}_{large}$'s performance over $\rm {BERT}_{base}$ on these examples is evidence of significantly deeper understanding.
Another area of improvement is the lexical meanings of special subclasses of verbs and adverbs.
\begin{itemize}
\item Non-truth verbs with clause complements (+60.4\%) \\
\emph{The tourists \underline{said that} the lawyer saw the secretary. $\nrightarrow$ The lawyer saw the secretary.}\\
This template uses a variety of verbs, all of which suggest but do not entail their complements.
\item Modal adverbs (+26.7\%) \\
\emph{\underline{Maybe} the scientist admired the lawyers. $\nrightarrow$ The scientist admired the lawyers.}
\end{itemize}
Similarly, passive voice is a special \emph{syntactic} phenomenon that $\rm {BERT}_{large}$ improves on, but still has trouble with.
\begin{itemize}
\item Passive voice (3.6\% $\rightarrow$ 29.8\%) \\
\emph{The managers were advised by the athlete. $\nrightarrow$ The managers advised the athlete.}
\end{itemize}
$\rm {BERT}_{base}$ and $\rm {BERT}_{large}$ were trained (pre-training and fine-tuning) on the same data, so the difference in the richness of their learning must reside only in the doubled number of layers in $\rm {BERT}_{large}$.
These performance improvements are evidence that more layers are necessary for learning all the different special cases of language.
There are also some partially learned special cases, such as the meaning of ``if'' and related (logical implication).
\begin{itemize}
\item 76.6\% $\rightarrow$ 98.7\%: \emph{\underline{Unless} \underline{the professor danced}, the student waited. $\nrightarrow$ The professor danced.}
\item both 0\%: \emph{\underline{Unless} the bankers called the professor, \underline{the lawyers shouted}. $\nrightarrow$ The lawyers shouted.}
\end{itemize}
Meanwhile, all models fail to understand the logical meaning of disjunction (0-2\%).
\begin{itemize}
\item \emph{The actor helped the lawyers, or the managers stopped the author. $\nrightarrow$ The actor helped the lawyers.}
\end{itemize}
Logic is a very important component of inference as an understanding task, but understandably difficult for statistical models to learn properly, because it is in some sense not probabilistic, in addition to being dependent on exact meanings of single function words.
Many traditional inference systems relied primarily on formal logic machinery, and finding a way to incorporate that into new models seems like a promising direction.
Designing and training neural networks that parse and understand formal, symbolic logic is a pretty well-studied problem \cite{evans_can_2018}, and it is certainly known theoretically that general neural networks can represent arbitrary nonlinear logical relations.
The difficulty is getting natural language models to actually care enough about logic during training to use it correctly for a specific task.
Many different approaches have been explored recently, including but not limited to modifying the loss function to encourage logical consistency \cite{minervini_adversarially_2018}, rule distillation in a teacher-student network \cite{hu_harnessing_2016}, and indirect supervision using probabilistic logic \cite{wang_deep_2018}.
To our knowledge, these have not yet been incorporated into state-of-the-art models, but they show promising results on the baseline models tested, especially in lower-resource scenarios.
All of these special cases are almost certainly encountered in BERT's huge pre-training corpus, but that unsupervised stage does not necessarily teach the model how to use that information towards performing inference.
This is why larger and larger pre-training may not be the most effective or at least efficient way to achieve language understanding.
Some of the subsequence templates are still a struggle for all models, including large BERT and $\rm {MT\mbox{-}DNN}$ (\textless 10\%):
\begin{itemize}
\item \emph{\underline{The manager knew the athlete} mentioned the actor $\nrightarrow$ The manager knew the athlete.}
\item \emph{When \underline{the students fought the secretary} ran. $\nrightarrow$ The students fought the secretary.}
\end{itemize}
These templates are in the spirit of \emph{garden path sentences}, where local syntactic ambiguity causes a sequential reading of a sentence to lead to an incorrect interpretation.
This kind of sentence has been studied extensively in cognitive science, specifically language processing, as human readers are first misled and then must backtrack to reanalyze the composition of the sentence to understand it properly \cite{ferreira_recovery_1991,osterhout_brain_1994}.
\citeauthor{goldberg_assessing_2019} \cite{goldberg_assessing_2019} shows that BERT performs well on complex subject-verb agreement tasks, even without any fine-tuning, indicating that the pre-trained model already has the ability to correctly parse this kind of sentence.
So the model somehow knows about syntax but does not know how to use it towards the task of inference, a teaching failure that can only be blamed on the inference-task-specific fine-tuning.
MNLI probably has a low occurrence of complex syntax, but perhaps more importantly, the complete syntactic information is rarely necessary to perform the task.
Nevertheless, an ability to utilize challenging syntax is an important generalizable skill, because it indicates deep, principled understanding of language.
\subsection{Comparison of $\rm {BERT}$ and $\rm {MT\mbox{-}DNN}$}
Even though ${\rm MT\mbox{-}DNN}_{large}$ performs better on MNLI than $\rm {BERT}_{large}$, $\rm {BERT}$ beats ${\rm MT\mbox{-}DNN}$ on more subcases in this dataset.
In particular, ${\rm MT\mbox{-}DNN}_{large}$ struggles much more with subcases that test special lexical meanings that prevent entailment (numbers give the accuracy difference between ${\rm MT\mbox{-}DNN}_{large}$ and $\rm {BERT}_{large}$):
\begin{enumerate}
\item conditionals: if, unless, whether or not (28.4\%)
\item `belief' verbs: believed, thought, hoped (56.1\%)
\item uncertainty adverbs: hopefully, maybe, probably (25.3\%)
\end{enumerate}
The only subcase that ${\rm MT\mbox{-}DNN}_{large}$ is significantly better at is the passive voice (+32.7\%).
$\rm {MT\mbox{-}DNN}$ is trained starting with a pre-trained $\rm {BERT}$ and then fine-tuning on the 9 language understanding tasks in the GLUE benchmark (before fine-tuning again on MNLI).
So if $\rm {MT\mbox{-}DNN}$ performs worse than a $\rm {BERT}$ model of the same size, this fine-tuning caused it to \emph{forget} some knowledge that it had before.
This would happen if the datasets being fine-tuned on do not explicitly test that knowledge, teaching the model to care less about the information from these words.
Considering that most of the GLUE tasks are not straight NLI tasks, it is somewhat unsurprising that the model forgot how these words affect entailment.
\begin{table*}
\begin{center}
\begin{tabular}{| l | p{3.9in} | l |}\hline
Type & Sentence 1 & Sentence 2\\ \hline \hline
NP/S & The manager knew the tourists supported the author. & The manager knew the tourists.\\ \hline
NP/Z & Since the judge stopped the author contacted the managers. & The judge stopped the author. \\ \hline
past\_participle & The scientist presented in the school stopped the artists. &The scientist presented in the school.\\ \hline
after\_if\_clause & Unless the scientists introduced the presidents, the athletes recommended the senator. & The athletes recommended the senator. \\ \hline
\end{tabular}
\end{center}
\caption{Non-entailed cases where $\rm {BERT}_{large}$ and $\rm {MT\mbox{-}DNN}_{large}$ do very poorly: Sentence 1 does not entail Sentence 2.}
\label{table:NonEntailedBERT0Cases}
\end{table*}
\section{Parses as Input}
Considering that syntactic phenomena are one of the models' weaknesses, we conduct an experiment in which we simply pass the flattened binary parses as the input ``sentences''.
We use the automatically generated parses that come with MNLI and the adversarial dataset.
We test on the dataset from \citeauthor{mccoy_right_2019} \cite{mccoy_right_2019}.
We try two fine-tuning regimens:
\begin{enumerate}
\item Fine tune on original (unparsed) MNLI, then fine-tune again on the same data, parsed (labeled UP in Table \ref{table:FineGrainedAnalysisResults}).
\item Only fine-tune on parsed MNLI (no other inference-specific fine-tuning) (labeled PO in Table \ref{table:FineGrainedAnalysisResults}).
\end{enumerate}
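Concretely, the input ``sentence'' in these regimens is the parse rendered as a parenthesized token string. A minimal sketch of the flattening (the nested-tuple tree encoding is our choice; the string format mirrors MNLI's binary-parse field):

```python
# Flatten a binary parse tree, encoded as nested 2-tuples of strings,
# into the parenthesized token string used as model input.
def flatten(tree) -> str:
    if isinstance(tree, str):
        return tree  # a leaf is just the token itself
    left, right = tree
    return "( " + flatten(left) + " " + flatten(right) + " )"
```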
We find that it is rather difficult to get the different models to train well.
Some had loss that never converged, some got near 0\% on all \emph{non-entailment} subcases.
The only reasonable parsed models are $\rm {BERT}_{large}$ under the first regimen (UP), and $\rm {MT\mbox{-}DNN}_{base}$ under the second (PO).
It is likely that these difficulties could be overcome with some systematic hyperparameter tuning, but we see substantial consistency (in model performance on the adversarial dataset) between the two successes, so we do not think it would be very insightful to test more.
But the fact that the models responded so differently to fine-tuning suggests that the models have significantly different `knowledge states' in terms of what they learned about how to solve tasks, i.e. they ended up in different local optima after pre-training.
This idea deserves more analysis, because the whole point of huge pre-training is to learn maximally transferable and general representations of language.
Thus, how to guide models towards these ideal local optima (and away from overfitting) is a very important and difficult question.
The fact that any model is able to learn what to do with parses is already surprising, given that none of their pre-training is parsed.
Evaluating on the parses of MNLI (matched dev set), $\rm {BERT}_{large}$ achieves 82\% accuracy (compare to 86\% unparsed), and $\rm {MT\mbox{-}DNN}_{base}$ gets 84\% (equal to unparsed).
These are the six subcases that saw a 10\% or greater change in accuracy between parsed and unparsed inputs.
Numbers are percent change from unparsed to parsed ($\rm {BERT}_{large}$, $\rm {MT\mbox{-}DNN}_{base}$).\\
Parsing does better on:
\begin{itemize}
\item Modifiers on subject \\
\emph{The managers next to \underline{the professors performed}. $\nrightarrow$ The professors performed.} (+11.3\%, +36.5\%) \\
\emph{The artists that supported \underline{the senators shouted}. $\nrightarrow$ The senators shouted.} (+16.5\%, +26.5\%)
\item NP/Z (+7.1\%, +15.2\%) \\
\emph{Since \underline{the athlete hid the secretaries} introduced the president. $\nrightarrow$ The athlete hid the secretaries.} \\
The parsed models still only achieve 21.7\% and 17.2\% accuracy, but this is still some improvement.
\item Conjunction (+22.2\%, +1.8\% (unparsed $\rm {MT\mbox{-}DNN}_{base}$ already gets 90.8\%)) \\
\emph{The tourists \underline{and} senators admired the athletes $\rightarrow$ The tourists admired the athletes.} \\
This is an \emph{entailment} template, so $\rm {BERT}_{large}$'s lower accuracy actually indicates less heuristic reliance, and parsed improvement from 64.4\% $\rightarrow$ 86.6\% really indicates better understanding (while $\rm {MT\mbox{-}DNN}_{base}$'s performance could just be using the heuristic).
\end{itemize}
Parsing does worse on:
\begin{itemize}
\item Embedded clause under non-truth verb (-35.7\%, -10.6\%) \\
\emph{The lawyers \underline{believed that} the tourists shouted. $\nrightarrow$ The tourists shouted.}
\item Adverbs indicating uncertainty (-26.3\%, -16.7\%) \\
\emph{\underline{Hopefully} the presidents introduced the doctors $\nrightarrow$ The presidents introduced the doctors.}
\end{itemize}
Of this small set of significant changes, it can be said that the parsed inputs helped the model with syntactic, hierarchical examples, and hurt it on specific lexical semantics.
This is a surprisingly intuitive result: the model shifted its focus more to syntax!
However, these are the only subcases that changed significantly, out of 30, suggesting either that the parses don't encode that much useful information, or (more likely) that the fine-tuning didn't teach the model how to use the extra information.
For example, maybe $\rm {BERT}_{large}$ (trained on unparsed then the exact same data parsed) just learned to ignore parentheses.
Furthermore, the subcases whose scores were close to 0 for the unparsed model saw basically no improvement. These obstinate cases are given in Table \ref{table:NonEntailedBERT0Cases}.
Most of these cases are tests of syntactic phenomena, so parsed data certainly contains useful information, but again, the fine-tuning is somehow not enough to teach the model how to use it.
We do not think that parsing is necessarily a preprocessing step that should be incorporated into future models/systems, because it takes extra computational and annotated data resources. But this experiment does show that without inductive biases, BERT's massive, generic pre-training does not capture some basic rule-like principles.
\section{Overfitting to MNLI}
Models learn and use fallible heuristics only because it works on their training datasets; in other words, they \emph{overfit} to their training data, MNLI.
We analyze this process by evaluating the model after different amounts of fine-tuning on MNLI.
We perform this experiment on ${\rm MT\mbox{-}DNN}_{large}$, the best performer on MNLI, and gauge overfitting by evaluating on the adversarial dataset from \citeauthor{mccoy_right_2019} (non-entailment subcases).
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{| l | l | l | l |}
\hline
Epoch \# & 1 & 2 & 3 \\ \hline
MNLI (matched dev set) & 85.66 & \textbf{86.69} & \textbf{86.59} \\ \hline
\emph{non-entailment} subcases from \cite{mccoy_right_2019} & 44.09 & \textbf{47.40} & 42.49 \\ \hline
\end{tabular}
\caption{Accuracy (\%) for ${\rm MT\mbox{-}DNN}_{large}$ fine-tuned on MNLI for varying numbers of epochs, and then evaluated on the dataset from \citeauthor{mccoy_right_2019} \cite{mccoy_right_2019}.}
\label{table:OverfittingMNLI}
\end{center}
\end{table}
The ${\rm MT\mbox{-}DNN}_{large}$ model trains very quickly, reaching 1\% away from max dev accuracy after only one epoch of fine-tuning, and decreasing slightly on dev accuracy by the third epoch.
This is a claimed benefit of multi-task learning: the model is more flexible to learning different tasks quickly.
From epoch 2 to 3, MNLI dev performance decreases by only 0.1\%, but according to performance on the adversarial dataset, the model is relying significantly more on heuristics, revealing a more overfit state.
Looking at specific subcases, the epoch-3 model differs by more than 10\% in 6 subcases, split very similarly to what happened with parsed inputs:
\begin{itemize}
\item Improves at lexical semantics: `belief' verbs (believed, thought) (+11.8\%) and uncertainty adverbs (hopefully, maybe) (+24.3\%)
\item Gets worse at structural/syntactic phenomena: passive voice (-24.4\%), conjunction (-12.4\%), and subject modifiers (PP (-15.6\%), relative clauses (-19.1\%))
\end{itemize}
Interestingly, the subcases that more MNLI fine-tuning helps are exactly the same as the ones that $\rm {BERT}_{large}$ beats ${\rm MT\mbox{-}DNN}_{large}$ on.
This strongly suggests that the purpose of these words is emphasized in MNLI; ${\rm MT\mbox{-}DNN}$ forgets about it while fine-tuning on other GLUE tasks, and more fine-tuning on MNLI makes it re-learn it.
On the other hand, the subcases that more fine-tuning hurts are all structural/syntax-focused, indicating that MNLI is biased against actually utilizing complex syntactic phenomena in a way that affects entailment (supporting the \emph{syntactic} heuristic hypothesis of \citeauthor{mccoy_right_2019}).
Creating feasibly-sized training datasets with ``no biases'' is impossible.
Here we find some subtle examples in MNLI, emphasizing the sensitivity of these models to pick up on any useful signal.
NLI is a very broad task, making it hard to define what a natural or representative input distribution would be, so ultimately dataset design should depend on desired abilities and applications.
\section{Conclusion}
In this work, we use adversarial and challenge datasets to probe and analyze the failures of current state-of-the-art natural language inference models, comparing BERT and MT-DNN models of different sizes.
Evaluating on these datasets distinguishes the actual understanding capabilities of the different models better than simply looking at their performance on MNLI (the large dataset they were trained on).
Our analysis is very fine-grained, targeting many specific linguistic phenomena.
We find various improvements from larger model size and multi-task learning.
We find that the most difficult examples for the best models are logic or syntax-based, including propositional logic and garden-path sentences.
We experiment with passing parses as input to the out-of-the-box pre-trained models, and find that it does provide some improvement in examples that require understanding syntax, demonstrating the value of syntactic inductive biases.
We analyze what overfitting to MNLI looks like, and reveal some biases/artifacts in the dataset.
Some may argue that testing NLI systems on artificially challenging datasets is unfair and not useful, because it is not representative of their performance on naturalistic, real-world data.
But even if the data humans naturally produce is not so difficult (because humans also are lazy and use heuristics), the difference is that we always \emph{can} parse sentences correctly, utilizing rules and principles.
And we intuitively know that ability is crucial to robust, trustworthy, and \emph{real} language understanding.
\section*{Acknowledgment}
The work reported in this paper is supported by the National Science Foundation under Grant No. 1659788. Any opinions, findings and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
\balance
\bibliographystyle{IEEEtranN}
\section{Introduction}
Some of the most relevant properties of the universe have been established through astronomical
data associated with the light curves of distant Type Ia supernovae \cite{riess},
the temperature anisotropies in the cosmic microwave background radiation \cite{wmap3}, and the matter power spectrum of large-scale structures \cite{tegmark}.
Such observations give strong evidence
that the geometry of the universe is flat and that our Universe is undergoing an
accelerated expansion at the present epoch. This acceleration
is attributed to a dominant dark
energy component, whose most popular candidate is the
cosmological constant, $\Lambda$.
The present-day dominance of dark energy makes us wonder whether this component
may affect the formation and stability of large astrophysical structures, whose physics is basically Newtonian.
This is in fact an old question, put forward already by Einstein \cite{einstein} and pursued by many other authors \cite{noer,chernin2,manera,nunes,baryshev}
and \cite{kagramanova,jetzer1}.
In general, the problem is rooted
in the question whether the expansion of the universe, which in
the Newtonian sense could be understood as a repulsive force, affects local astrophysical
properties of large objects. The answer is certainly affirmative if part of the terms responsible for the Universe expansion survives the
Newtonian limit of the Einstein equations. This is indeed the case of $\Lambda$ which is part of the Einstein tensor.
In fact, as explained in the main text below,
all the effects of the universe expansion can be taken into account, regardless of the model, by generalizing the Newtonian limit.
This approach allows us to calculate the impact of a given cosmological model on astrophysical
structures.
Although there are several candidates for dark energy, each with its own cosmological signature, e.g.
\cite{Koivisto1, daly, wang2, koivisto, koivisto2, shaw2} and \cite{seo,brook,daly2,Koivisto3, shaw3},
in this paper we will investigate the $\Lambda$CDM model only.
Such consideration is in fact not restrictive and our results will be common to most dark energy models. At
astrophysical scales and within the Newtonian limit one does not expect to find important differences among the different dark energy models. This, however, should not be interpreted as if
$\Lambda$ had no effect at smaller astrophysical scales. In fact, the effects of a cosmological constant
on the equilibrium and stability of astrophysical structures are not negligible, and can be of relevance to
describe features of astrophysical systems such as globular clusters, galaxy clusters or even galaxies \cite{cher,iorio,bala1,nowakowski1,cardoso}.
Motivated by this,
we investigate the effects of a dark energy component on the Newtonian limit of Einstein gravity and its consequences at astrophysical scales.
In this article we investigate how
the cosmological constant changes certain aspects of astrophysical hydrostatic
equilibrium. In particular we search for specific imprints which are unique to the existence of a dark energy fluid, for instance the instability of previously viable astrophysical models once $\Lambda$ is included.
We explore such possibility using spherical configurations
described by a polytropic equation of state (e.o.s) $p\sim \rho^{\gamma}$.
The polytropic equation of state
derives its importance from its success and consistency, and it is widely used in determining the properties of gravitational structures ranging from stars
\cite{chan} to galaxies \cite{binney}.
It leads to an acceptable description of the
behavior of astrophysical objects in accordance with
observations and numerical simulations \cite{kennedy1,gruzinov,kaniadakis,sadeth,pinzon1,ruffet}.
The description of such configurations can be verified in the general relativistic framework \cite{herrera}
and applications of these models to the dark energy problem have in fact been explored \cite{mukhop}.
The effect of a positive cosmological constant can be best visualized as a repulsive non local force
acting on the matter distribution. It is clear that this extra force will result
in a minimum density (either central or average) for which it is possible for the
distribution to be in equilibrium.
This minimum density is a crucial crossing point: below this value no matter can be in equilibrium, above this value
low density objects exist \cite{lahav91}. Both effects are novel features due to $\Lambda$.
We will demonstrate such inequalities, which are generalizations of
corresponding inequalities found in \cite{nowakowski1} and \cite{bala2}, for every polytropic index $n$.
However, the most drastic effect can be found in the
limiting case of the polytropic equation of state,
i.e, the isothermal sphere where the polytropic index $n$ goes to infinity.
This case captures, as far as the effects of $\Lambda$ are
concerned, many features also for higher, but finite $n$. The model
of the isothermal sphere is often used to model galaxies and galactic clusters \cite{natarajan}
and in describing effects of gravitational lensing \cite{kawano,sereno,maccio}.
Herein lies the importance of the model.
Regarding the isothermal sphere we will show that $\Lambda$ renders the model unacceptable on general grounds.
This essentially means that the model does not even have an appealing asymptotic behavior for large radii, and
any attempt to define a physically acceptable radius has severe drawbacks.
The positive cosmological constant offers, however, yet another unique opportunity, namely the
possible existence of young low density virialized objects, understood as configurations that have reached virial equilibrium just at the vacuum dominated epoch (in contrast to the structures forming during the matter dominated era, where the criterion for virialization is roughly $\bar{\rho}\approx 200 \rho_{\rm crit}$). These low density hydrostatic/virialized objects
can be explained again due to $\Lambda$, which now partly plays the role of the outward pressure.
The applicability of fluid models, virial theorem and hydrostatic equilibrium to large astrophysical bodies has been
discussed many times in the literature. For a small survey on this topic we refer the reader to \citep{jackson,bala5} where one can also
find the relevant references.
It is interesting to notice that dark matter halos represent a constant density background
which, in the Newtonian limit, acts on objects embedded in it as the analog of a negative cosmological constant. The
equilibrium analysis for such configurations has been performed in \cite{Umemura, Horedt}.
A negative $\Lambda$ will just enhance the attractive gravity effect, whereas a positive one opposes this attraction.
As a result the case $\Lambda > 0$ reveals different physical concepts as discussed in this paper.
The article is organized as follows. In the next section we introduce the equations relevant
for astrophysical systems as a result of the weak field
limit and the non-relativistic limit of Einstein field equations taking into account a cosmological constant.
There we derive such limit
taking into account the background expansion independently of the dark energy model.
In section 3 we derive the equations governing polytropic configurations,
the equilibrium conditions and stability criteria.
In section 4 we describe the isothermal sphere and investigate
its applicability in the presence of $\Lambda$.
In section 5 we explore some examples of astrophysical configurations
where the cosmological constant may play a relevant role.
In particular, we probe into low density objects, fermion (neutrino) stars and boson stars.
Finally we perform an important comparison between polytropic configurations with $\Lambda$ and
parameterized density of Dark Matter Halos.
We end with conclusions. We use units $G_N=c=1$ except in section 5.4 where we restore $G_N$ and use natural units $\hbar=c=1$.
\section{Local dynamics in the cosmology background}
The dynamics of the isotropic and homogeneous cosmological background is determined
by the evolution of the (dimensionless) scale factor, given through the Friedmann-Robertson-Walker
line element as a solution of the Einstein field equations,
\begin{equation}\label{fri}
\frac{\ddot{a}(t)}{a(t)}=-\frac{4}{3}\pi \left[\rho(t)+3p(t)\right],
\hspace{0.5cm}\left[\frac{\dot{a}(t)}{a(t)}\right]^{2}=H(t)^{2}=\frac{8}{3}\pi\rho(t)-\frac{k}{a^{2}(t)},
\end{equation}
corresponding to the Raychaudhuri equation and the Friedmann equation, respectively. The total energy density $\rho$
is a contribution from a matter component - baryonic plus dark matter - ($\rho_{\rm mat}\sim a^{-3}$),
radiation ($\rho_{\rm rad}\sim a^{-4}$) and a dark energy
component ($\rho_{\rm x}\sim a^{-f(a)}$ with $p=\omega_{\rm x}\rho$). The function $f(a)$ is given as
\begin{equation}\label{fa}
f(a) \equiv \frac{3}{\ln a}\int_{1}^{a}\frac{\omega_{\rm x}(a')+1}{a'}{\rm d} a',
\end{equation}
where the term $\omega_{\rm x}(a)$ represents the equation of state for the dark energy component.
The case $\omega_{\rm x}=-1$ corresponds to the cosmological constant $\rho_{\rm x}=\rho_{\rm vac}=\Lambda/8\pi$.
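As a quick numerical check (our addition, not part of the original analysis; the function name is ours and NumPy/SciPy are assumed available), the exponent $f(a)$ defined above can be evaluated directly. For a constant equation of state the integral reduces to $f=3(1+\omega_{\rm x})$, which is a convenient sanity test:

```python
import numpy as np
from scipy.integrate import quad

def f_of_a(a, w_x):
    """Scaling exponent f(a) with rho_x ~ a^{-f(a)}.
    w_x may be a constant or a callable w_x(a)."""
    w = w_x if callable(w_x) else (lambda ap: w_x)
    integral, _ = quad(lambda ap: (w(ap) + 1.0) / ap, 1.0, a)
    return 3.0 * integral / np.log(a)

# Constant w_x gives f = 3(1 + w_x):
print(f_of_a(0.5, -1.0))    # cosmological constant: f = 0
print(f_of_a(0.5, -2.0/3))  # f = 1
print(f_of_a(2.0, 0.0))     # pressureless matter: f = 3
```

For $\omega_{\rm x}=-1$ this recovers a constant dark energy density, while $\omega_{\rm x}=0$ reproduces the familiar $a^{-3}$ dilution of pressureless matter.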
The effects of the background on virialized structures can be explored through the Newtonian
limit of field equations from which one can derive a
modified Poisson's equation (see e.g. \cite{noer,nowakowski2}).
Recalling that pressure is also a source for gravity, the gravitational potential produced by an overdensity is given by
\begin{equation}\label{o}
\nabla^{2}\Phi=4\pi (\rho_t+3P_t),
\end{equation}
where $\rho_t=\delta\rho + \rho$ and $P_t=\delta P + P$, with $\delta \rho$ the local overdensity with respect to the background density $\rho$.
Notice that equation (\ref{o}) reduces to the usual Poisson equation, $\nabla^{2}\Phi=4\pi \delta \rho$, when non relativistic matter dominates the Universe,
and $\delta \rho \gg \rho$. However, at present times, when dark energy dominates, the pressure is non-negligible and $\delta P$ might even be non-zero, such
as in the case of quintessence models \cite{ml,wang,mota1}. In this work, however, we will focus on the case of a homogeneous dark energy component where $\delta\rho=\delta P=0$.
With this in mind, one can then write the modified Poisson equation as
\begin{equation} \label{yyy0}
\nabla^{2}\Phi=4\pi \delta \rho-3\frac{\ddot{a}(t)}{a(t)}.
\end{equation}
Note that this equation allows one to probe local effects of different Dark Energy models through the term $\ddot{a}/a$ given in Eq.(\ref{fri}).
Since we will be investigating the configuration and stability of astrophysical objects nowadays, when dark energy dominates,
it is more instructive to write the above
equations in terms of an effective vacuum density i.e.
\begin{equation}\label{pois}
\nabla^{2}\Phi=4\pi \delta \rho-8\pi\rho_{\rm vac}^{\rm eff}(a),
\end{equation}
where by using (\ref{fri}) $\rho_{\rm vac}^{\rm eff}(a)$ has been defined as
\begin{equation}
\label{rhoeff}
\rho_{\rm vac}^{\rm eff}(a)\equiv -\frac{1}{2}\left[\left(\frac{\Omega_{\rm cdm}}{\Omega_{\rm vac}}\right) a^{-3}
+(1+3\omega_{\rm x})a^{-f(a)}\right]\rho_{\rm vac},
\end{equation}
which reduces to $\rho_{\rm vac}$ for $\omega_{\rm x}=-1$ and a negligible contribution
from the cold dark matter component with respect to the over-density $\delta \rho$. With $\Phi_{\rm grav}$ being the solution associated with the pure gravitational interaction, the full solution for the potential can be simply written as
\begin{equation} \label{yyy1}
\Phi(r,a)=\Phi_{\rm grav}(r)-\frac{4}{3}\pi \rho_{\rm vac}^{\rm eff}(a)r^{2},\hspace{1cm}\Phi_{\rm grav}(r)=-\int_{V'} \frac{\delta \rho(\textbf{r}')}{|\textbf{r}-\textbf{r}'|}\,\dtr',
\end{equation}
which defines the Newton-Hooke space-time for a scale factor close to
the present time (vacuum dominated epoch), $\omega_{\rm x}=-1$ and $\Omega_{\rm cdm}\ll\Omega_{\rm vac}$ \cite{gibbons,aldro}. For a $\Lambda$CDM universe with $\Omega_{\rm cdm}=0.27$ and $\Omega_{\rm vac}=0.73$ we get $\rho_{\rm vac}^{\rm
eff}(\rm today)=0.81\rho_{\rm vac}$: that is, the positive density of matter which has an attractive effect
opposing the repulsive one of $\Lambda$ reduces effectively the
strength of the 'external force' in (\ref{pois}). Note that, although in the
text we will use the notation $\rho_{\rm vac}$, which would be valid in the case of a Newton-Hooke space time, it must be understood that
we can replace $\rho_{\rm vac}$ by $\rho_{\rm vac}^{\rm eff}(\rm today)=0.81\rho_{\rm vac}$ for a $\Lambda$CDM background.
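The factor $0.81$ quoted above follows directly from the definition of $\rho_{\rm vac}^{\rm eff}$. A minimal sketch (ours, written for a constant $\omega_{\rm x}$ so that $f(a)=3(1+\omega_{\rm x})$; the function name is an illustration):

```python
def rho_vac_eff(a, omega_cdm, omega_vac, w_x):
    """Effective vacuum density, in units of rho_vac, for a
    constant equation of state w_x (so that f(a) = 3(1 + w_x))."""
    f = 3.0 * (1.0 + w_x)
    return -0.5 * ((omega_cdm / omega_vac) * a**-3
                   + (1.0 + 3.0 * w_x) * a**(-f))

# Today (a = 1), for the LambdaCDM values quoted in the text:
print(rho_vac_eff(1.0, 0.27, 0.73, -1.0))  # ≈ 0.815
```

With no cold dark matter contribution and $\omega_{\rm x}=-1$ the result reduces to $\rho_{\rm vac}$, as stated in the text.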
Given the potential $\Phi(r,a)$, we can write Euler's equation for a self-gravitating configuration as
$$\rho \frac{{\rm d} \langle v_{i}\rangle}{{\rm d} t}+ \partial_{i}p+ \rho \partial_{i} \Phi=0,$$
where $\langle v_{i}\rangle$ is the (statistical) mean velocity and $\rho$ is the total energy density of the system. We can go beyond Euler's equation and write down the tensor virial equation, which reduces to its
scalar version for spherical configurations. The (scalar) virial equation with the background contribution reads \cite{bala3,caimmi}
\begin{equation}
\label{virialeq}
\frac{{\rm d} ^{2}\mathcal{I}}{{\rm d} t^{2}}=2T+\mathcal{W}^{\rm grav}+3\Pi+\frac{8}{3}\pi \rho_{\rm vac}^{\rm eff}(a)\mathcal{I}-\int_{\partial V}p\left(\vec{r}\cdot \hat{n}\right)\,{\rm d} A,
\end{equation}
where $\mathcal{W}^{\rm grav}$ is the gravitational potential energy defined by
\begin{equation}
\label{maw}
\mathcal{W}^{\rm grav}=\frac{1}{2}\int_{V}\rho(\textbf{r})\Phi_{\rm grav}(\textbf{r}) \dtr,
\end{equation}
and $T=\frac{1}{2}\int_{V}\rho\langle v^{2}\rangle \dtr$ is the contribution of ordered motions to the kinetic energy.
Also, $\mathcal{I}\equiv \int_{V} \rho r^{2}\dtr$ is the moment of inertia about the center of
the configuration and $\Pi\equiv \int _{V}p\dtr$ is the trace of the dispersion tensor.
The full description of a self gravitating configuration is completed with an equation for mass conservation, energy conservation and an
equation of state $p=p(\delta \rho,s)$. If we assume equilibrium
via $\ddot{\mathcal{I}}\approx 0$, we obtain the known virial theorem \cite{jackson,bala3}
\begin{equation}
\label{virial}
|\mathcal{W}^{\rm grav}|=2T+3\Pi+\frac{8}{3}\pi \rho_{\rm vac}^{\rm eff}(a)\mathcal{I},
\end{equation}
where we have neglected the surface term in (\ref{virialeq}), which is
valid in the case when we define the boundary of the configuration
where $p=0$.
With $\rho_{\rm vac}^{\rm eff}$ given in (\ref{rhoeff}), one must
be aware that an equilibrium configuration is at most a dynamical
one. This is to say that the 'external repulsive force' in
(\ref{rhoeff}) is time dependent through the inclusion of the background expansion and so are the terms in (\ref{virialeq}). This leads to a violation of energy
conservation, which also occurs in the virialization process \cite{wang3,ml, caimmi,mota1,shaw}.
Traced over cosmological times, this implies that if we insist on the second derivative of the
inertial tensor to be zero, then the internal properties of the object
like angular velocity or the internal mean velocity of the components
will change with time.
Even so, in the simplest case one can assume that the
object's shape and density remain constant.
Hence, equilibrium here can be thought of as represented by long time averages in which case the second
derivative also vanishes, not because of constant volume and density,
but because of stability \cite{bala4,bala5}.
The expressions
derived in the last section, especially (\ref{pois}) and
(\ref{virial}) can be used for testing dark energy models on
configurations in a dynamical state of equilibrium.
However, in these cases one should point out that in this approach there is no energy conservation within the overdensity: dark energy flows in and out of the overdensity.
Such a feature is a consequence of the assumption that dark energy does not cluster at small scales (homogeneity of dark energy).
This is in fact the most common assumption in the literature \cite{wang,chen,hore}, with a few exceptions investigated in \cite{wang3,ml, caimmi,mota1}.
In this paper, we
will concentrate on the possible effects of a background dominated by
a dark energy component represented by the cosmological constant at late times ($z\leq 1$). This implies that the total density
involved in the definitions of the integral quantities appearing in
the virial equation can be approximated to $\rho\sim \delta \rho$. In
that case the Poisson equation reduces to the form
$\nabla^{2}\Phi=4\pi \delta \rho-8\pi \rho_{\rm vac}$. As mentioned before, the
symbol $\rho_{\rm vac}$ has no multiplicative factors in the case of a
Newton-Hooke space time, while for a $\Lambda$CDM model it must be
understood as $\rho_{\rm vac}\to 0.81\rho_{\rm vac}$. As the reader will see, the most
relevant quantities derived here come in forms of ratio of a
characterizing density and $\rho_{\rm vac}$, and hence the extra factor
appearing in the $\Lambda$CDM can be re-introduced in the
characterizing density. For general
consideration of equilibrium in the spherical case see \cite{boehmer1}
and \cite{boehmer}, while the quasi spherical collapse with cosmological
constant has been discussed in \cite{debnath}.
\section{Polytropic configurations and the $\Lambda$-Lane-Emden equation}
We can determine the relevant features of astrophysical systems by
solving the dynamical equations describing a self gravitating
configuration (Euler's equation, Poisson's equation, continuity
equations). In order to achieve this goal we must first know the
potential $\Phi$ to be able to calculate the gravitational potential
energy. To obtain $\Phi$, one must supply the density profile and
solve Poisson's equation. In certain cases, the potential is given and
we therefore can solve for the density profile in a simple way. Here
we face the situation where no information on the potential (aside
from its boundary conditions) is available and we also do not have
a priori information about the density profile (see for instance
\cite{binney} for related examples). In order to determine both the
potential and the density profile, a complete description of
astrophysical systems is required. This means we need to know an equation
of state $p=p(\rho)$ (here we change notation and call $\rho$ the
proper density of the system). The equation of state can take several
forms and the most widely used one is the so-called \emph{polytropic
equation of state}, expressed as
\begin{equation}
\label{poly}
p=\kappa \rho^{\gamma},\hspace{2cm}\gamma\equiv 1+\frac{1}{n},
\end{equation}
where $\gamma$ is the polytropic index and $\kappa$ is a parameter
that depends on the polytropic index, central density, the mass and
the radius of the system. The exponent $\gamma$ is defined as
$\gamma=(c_{\rm p}-c)/(c_{\rm v}-c)$ and is associated with processes
with constant (non-zero) specific heat $c$. It reduces to the
adiabatic exponent if $c=0$. The polytropic equation of state was
introduced to model fully convective configurations. From a
statistical point of view, Eq. (\ref{poly}) represents a
collisionless system whose distribution function can be written in
the form $f=f(\tilde{E})\sim \tilde{E}^{n-3/2}$, with $\tilde{E}\equiv
\phi-(1/2)mv^{2}$ being the relative energy and $\phi(r)\equiv
\Phi_{0}-\Phi(r)$ being the relative potential (where $\Phi_{0}$ is a
constant chosen such that $\phi(r=R)=0$)\cite{binney}. In
astrophysical contexts, the polytropic equation of state is widely
used to describe astrophysical systems such as the sun, compact
objects, galaxies and galaxy clusters \cite{chan,binney,kennedy1}.
We now derive the well known Lane-Emden equation. We start from Poisson equation and Euler equation for spherically symmetric configurations, written as
\begin{equation}
\label{poly+pois}
\frac{{\rm d} ^{2}\Phi}{{\rm d} r^{2}}+\frac{2}{r}\frac{{\rm d} \Phi}{{\rm d} r} =4\pi
\rho-8\pi \rho_{\rm vac}, \hspace{1cm}\frac{{\rm d} p}{{\rm d} r}=-\rho\frac{{\rm d}
\Phi}{{\rm d} r}.
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=12cm]{lanelambda.ps}
\caption[]{{Solutions of $\Lambda$LE equation for different $\zeta_{c}$ and different index $n$ ranging from $n=3$ (red, short-dashed line), $n=4$ (blue,long-dashed line) and $n=5$ (green,dot short-dashed line).}}
\label{lanea}
\end{center}
\end{figure}
This set of equations together with Eq. (\ref{poly}) can be integrated
in order to solve for the density in terms of the potential as
\begin{equation}
\label{new1}
\rho(r)=\rho_{\rm c}\left[1-\left(\frac{\gamma-1}{\kappa
\gamma}\right)\rho_{\rm
c}^{1-\gamma}(\Phi(r)-\Phi(0))\right]^{\frac{1}{\gamma-1}},
\end{equation}
where $\rho_c$ is the central
density. In view of eq. (\ref{new1}), in conjunction with eq. (\ref{yyy1}), it
is clear that $\rho_{\rm vac}$ will have the effect of increasing the value of
$\rho(r)$. Therefore the boundary of the configuration will be located
at a larger $R$ as compared to the case $\rho_{\rm vac}=0$. In order to determine the behavior of the density profile, we again combine Eq.(\ref{poly+pois}) and Eq.(\ref{poly}) to eliminate the potential $\Phi(r)$. We obtain
\begin{equation}
\label{arbigeo2}
\frac{1}{n}\left(\frac{\nabla \rho}{\rho}\right)^{2}+\nabla^{2}\ln
\rho=-\frac{4\pi n\rho^{1-\frac{1}{n}}}{\kappa (n+1)}\lp1-\zeta \right),
\end{equation}
where we defined the function
\begin{equation} \label{yyy3}
\zeta=\zeta(r)\equiv 2\left( \frac{\rho_{\rm vac}}{\rho(r)}\right).
\end{equation}
We can rewrite Eq (\ref{arbigeo2}) by introducing the variable $\psi$ defined by $\rho=\rho_{\rm c}\psi^{n}$, where $\rho_{\rm c}$ is the central density.
We also introduce the variable $\xi=r/a$ where
\begin{equation}
\label{a}
a\equiv \sqrt{\frac{\kappa (n+1)}{4\pi \rho_{\rm c}^{1-\frac{1}{n}}} }
\end{equation}
is a length scale. Eq.(\ref{arbigeo2}) is finally written as \cite{bala2,chan}
\begin{equation}
\label{le}
\frac{1}{\xi^{2}}\frac{{\rm d} }{{\rm d} \xi}\left( \xi^{2}\frac{{\rm d} \psi}{{\rm d}
\xi}\right)=\zeta_{\rm c}-\psi^{n},
\hspace{0.8cm} \zeta_{\rm c} \equiv 2\left(\frac{\rho_{\rm vac}}{\rho_{\rm c}}\right).
\end{equation}
This is the $\Lambda$-Lane-Emden equation ($\Lambda{\rm LE}$). Note that for
constant density, we recover $\rho=2\rho_{\rm vac}$ as the first non-trivial solution of the
$\Lambda{\rm LE}$ equation. This is consistent with the results from the virial
theorem for constant density spherical objects, which tells us that
$\rho \ge 2\rho_{\rm vac}$ \cite{nowakowski1}. Note that using Eq. (\ref{new1}) we can write the solution $\psi(\xi)$
with the explicit contribution of $\rho_{\rm vac}$ as
\begin{equation}
\label{new2}
\psi\left( \xi=r/a\right)=1-\lp4\pi a^{2}\rho_{\rm c}\right)^{-1}(\Phi_{\rm grav}(r)-\Phi_{\rm grav}(0))+\frac{1}{6}\zeta_{\rm c}\xi^{2},
\end{equation}
so that for a given $r$ smaller than the radius we will obtain
\begin{equation} \label{xxx6}
\psi\left( r/a\right)>\psi\left( r/a\right)_{\Lambda=0},
\end{equation}
as already pointed out before. Then the differential equation (\ref{le})
must be solved with the initial conditions $\psi(0)=1, \,\,\,
\psi'(0)=0$, satisfied by (\ref{new2}). Numerical solutions were
obtained for the first time in \cite{bala3}.
The solutions presented
in Fig. \ref{lanea} are given in terms of the ratio $\rho/\rho_{\rm vac}$ for
$n=3, 4$ and $n=5$. This choice of variables is also useful since
$\rho_{\rm vac}$ sets a fundamental scale of density (the choice
$\rho_{0}=\rho_{\rm vac}$ will be explored for the isothermal sphere, where
figure \ref{lanea} will be helpful for discussions). The radius of a
polytropic configuration is determined as the value of $r$ when the
density of matter with the e.o.s (\ref{poly}) vanishes. This happens
at a radius located at
\begin{equation} \label{xxx8}
R=a\xi_{1}\,\,\, {\rm such\,\, that} \,\,\, \psi(\xi_{1})=0.
\end{equation}
Note that equation (\ref{le}) yields a transcendental equation to
determine $\xi_{1}$.
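Since $\xi_{1}$ is only defined implicitly, it must be found numerically. The following sketch (our own illustration using SciPy, not the code of \cite{bala3}; the function name, tolerances and cutoff $\xi_{\max}$ are ours) integrates the $\Lambda$LE equation starting from its series expansion near the center, $\psi \simeq 1 + (\zeta_{\rm c}-1)\xi^{2}/6$, and stops at the first zero of $\psi$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_lambda_lane_emden(n, zeta_c, xi_max=50.0):
    """Integrate psi'' + (2/xi) psi' = zeta_c - psi^n with
    psi(0) = 1, psi'(0) = 0; return the first zero xi_1 of psi,
    or None if psi does not vanish before xi_max."""
    def rhs(xi, y):
        psi, dpsi = y
        # sign/abs keeps psi**n real for non-integer n near the surface
        return [dpsi, zeta_c - np.sign(psi) * np.abs(psi)**n - 2.0 * dpsi / xi]

    def surface(xi, y):  # event: psi = 0 defines the boundary
        return y[0]
    surface.terminal = True
    surface.direction = -1

    # start slightly off-center using the series expansion
    xi0 = 1e-6
    y0 = [1.0 + (zeta_c - 1.0) * xi0**2 / 6.0, (zeta_c - 1.0) * xi0 / 3.0]
    sol = solve_ivp(rhs, (xi0, xi_max), y0, events=surface,
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0] if sol.t_events[0].size else None

print(solve_lambda_lane_emden(3, 0.0))   # classic n = 3 value, ~6.897
print(solve_lambda_lane_emden(3, 1e-3))  # zeta_c > 0 shifts the zero outward
```

The $\zeta_{\rm c}=0$ run reproduces the standard Lane-Emden value $\xi_{1}\simeq 6.897$ for $n=3$, while a small positive $\zeta_{\rm c}$ yields a larger $\xi_{1}$, in line with the discussion above.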
Also one notes from Fig. \ref{lanea} that not all values of $\zeta_{\rm
c}$ yield allowed configurations in the sense that we cannot find a
value of $\xi_{1}$ such that $\psi(\xi_{1})=0$. This might not be
surprising since for $n=5$ and $\Lambda=0$ we find the situation where
the asymptotic behavior is $\rho \to 0$ as $\xi\to \infty$ (we
consider this still as an acceptable behavior). There is, however, one
crucial difference when we switch on a non-zero $\Lambda$. For
$\Lambda\neq 0$ not only we cannot reach a definite radius but the
derivative of the density changes sign and hence becomes non-physical.
The situation for the cases $n \ge 5$ is somewhat similar to the
extreme case of $n \to \infty$ (isothermal sphere). Clearly, these features are due to the last term in Eq.(\ref{new2}), which for high values of $\zeta$ may become dominant over the remaining (gravitational) terms. We will discuss
this case in section four, where we will attempt another definition of
a finite radius with the constraint $\psi'(\xi)<0$. For now it
is sufficient to mention that, as expected, the radii of the allowed
configurations are larger than the corresponding radii when
$\Lambda=0$.
\subsection{Equilibrium and stability for polytropes}
In this section we will derive the equilibrium conditions for
polytropic configurations in the presence of a positive cosmological
constant. We will use the results of last section in order to write
down the virial theorem. The total mass of the configuration can be
determined as usual with $M=\int \rho \dtr$ together with
Eq.(\ref{le}). One then has a relation between the mass, the radius and the central
density:
\begin{equation}
\label{radius3}
R=M^{1/3}\rho_{\rm c}^{-1/3}f_{0}(\zeta_{\rm c};n)=(Mr_{\Lambda}^{2})^{1/3}(4\pi )^{1/3}\zeta_{c}^{1/3}f_{0}(\zeta_{\rm c};n)
\end{equation}
where
\begin{equation}
f_{0}(\zeta_{\rm c};n)
\equiv
\left(\frac{\xi_{1}^{3}}{4\pi}\right)^{\frac{1}{3}}\left(\int_{0}^{\xi_{1}}\xi^{2}\psi^{n}(\xi){\rm d}\xi\right)^{-\frac{1}{3}}.
\end{equation}
Note that we have introduced the cosmological constant in the equation for the radius, leading to the appearance of the astrophysical length scale $\left( Mr_{\Lambda}^{2}\right)^{1/3}$ (with $r_{\Lambda}=\Lambda^{-1/2}=(8\pi \rho_{\rm vac})^{-1/2}=2.4\times 10^{3}(\Omega_{\rm vac}h^{2})^{-1/2}$ Mpc $\approx 4.14\times 10^{3}$ Mpc for the concordance values $\Omega_{\rm vac}=0.7$ and $h=0.7$). This scale has already been found in the
context of the Schwarzschild - de Sitter metric, where it is the maximum
allowed radius for bound orbits. At the same time it is the scale of
the maximum radius for a self gravitating spherical and homogeneous configuration in the presence of a positive $\Lambda$ \cite{bala3}. This also lets us relate the mean density of the configuration to its central density and/or cosmological parameters as $\bar{\rho}=(3/4\pi f_{0}^{3}) \rho_{\rm c}=(3/2\pi \zeta_{\rm c}f_{0}^{3})\rho_{\rm vac}=(3\Omega_{\rm vac}/2\pi \zeta_{\rm c}f_{0}^{3})\rho_{\rm crit}$.
Similarly we can determine the other relevant quantities appearing in
the equations for the energy and the scalar virial theorem
(\ref{virial}). For the traces of the moment of inertia tensor and the
dispersion tensor $\Pi$ we can write
\begin{equation}
\label{piner}
\mathcal{I}=MR^{2}f_{1},\hspace{1cm}\Pi=\kappa \rho_{\rm
c}^\frac{1}{n}Mf_{2},
\end{equation}
where the functions $f_{1,2}$ have been defined as
\begin{equation}
\label{pintener}
f_{1}(\zeta_{\rm c};n)\equiv
\frac{\int_{0}^{\xi_{1}}\xi^{4}\psi^{n}{\rm d}\xi}{\xi_{1}^{2}\int_{0}^{\xi_{1}}\xi^{2}\psi^{n}{\rm d}\xi},
\hspace{1cm}f_{2}(\zeta_{\rm c};n)\equiv
\frac{\int_{0}^{\xi_{1}}\xi^{2}\psi^{n+1}{\rm d}\xi}{\int_{0}^{\xi_{1}}\xi^{2}\psi^{n}{\rm d}\xi},
\end{equation}
using (\ref{radius3}). These functions are numerically determined
in the sequence $\psi(\xi;\zeta_{\rm c})\to \xi_{1}(\zeta_{\rm c})\to
f_{i}(\zeta_{\rm c})$, such that for a given mass we obtain the radius as $R=a(M;\xi_{1})\xi_{1}$.
Let us consider the virial theorem (\ref{virial}) for a polytropic
configuration. The gravitational potential energy $\mathcal{W}^{\rm grav}$
can be obtained following the same arguments shown in \cite{chan}.
The method consists of integrating Euler's equation and solving for
$\Phi_{\rm grav}$; then, using Eq.(\ref{maw}), one obtains $\mathcal{W}^{\rm grav}$.
The final result is written as
\begin{equation}
\label{poliwc}
\mathcal{W}^{\rm grav}=-\frac{M^{2}}{2R}-\frac{1}{2}(n+1)\Pi+\frac{2}{3}\pi\rho_{\rm vac}\left(
\mathcal{I}-MR^{2}\right).
\end{equation}
To show the behavior of $\mathcal{W}^{\rm grav}$ with respect to the index
$n$, we can solve the virial theorem (\ref{virial}) for $\Pi$ and
replace it in Eq.(\ref{poliwc}). We obtain
\begin{equation}
\label{poliwd}
\mathcal{W}^{\rm
grav}=-\frac{3}{5-n}\left[1-\frac{\rho_{\rm vac}}{\bar{\rho}}\left(\frac{1}{3}(5+2n)f_{1}-1\right)
\right]\frac{M^{2}}{R}.
\end{equation}
This expression shows the typical behavior of an $n=5$ polytrope (even
if $\rho_{\rm vac} \neq 0$): the configuration has an infinite potential energy,
due to the fact that the matter is distributed in an infinite volume.
The energy of the configuration in terms of the polytropic index can
be easily obtained by using (\ref{virial}), (\ref{poliwc}) and
(\ref{poliwd})
\begin{equation}
\label{poliwen}
E=\mathcal{W}^{\rm grav}+\frac{8}{3}\pi \rho_{\rm vac}\mathcal{I}
+n\Pi=-\left(\frac{3-n}{5-n}\right)\frac{M^{2}}{R}\left[1-
\frac{\rho_{\rm vac}}{\bar{\rho}} \left( 5f_{1}^{(n)}-1\right)\right].
\end{equation}
One is tempted to use $E<0$ as the condition to be fulfilled for a
gravitationally bounded system. For $\rho_{\rm vac}=0$ we recover the condition
$\gamma>4/3$ ($n<3$) for gravitationally bounded configurations in
equilibrium. On the other hand, for $\rho_{\rm vac} \neq 0$ this condition
might not be completely true due to the following reasoning:
The two-body effective potential in the presence of a positive
cosmological constant does not go asymptotically to zero for large
distances, which is to say that $E<0$ is not stringent enough to
guarantee a bound system. Therefore, we rather rely on the numerical
solutions from which, for every $n$, we infer the value of ${\cal
A}_n$ such that
\begin{equation} \label{An}
\rho_c \ge {\cal A}_n \rho_{\rm vac}.
\end{equation}
This gives us the lowest possible central density in terms of
$\rho_{\rm vac}$. The behavior of $\zeta_{\rm crit}$, the functions $f_{i}$, the solution $\xi_{1}(\zeta_{c}=\zeta_{\rm crit},n)$ and the values of ${\cal A}_n$ are shown in
Fig. \ref{tablelane2}. Note that this inequality can be understood as a
generalization of the equilibrium condition $\varrho>\mathcal{A} \rho_{\rm vac}$,
which, when applied for a spherical homogeneous configurations yields
$\mathcal{A}=2$ with $\varrho=\rho=$ constant \cite{bala3,bala2}.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=9cm]{salida1.ps}
\caption[]{{ The values of $\zeta_{\rm crit}$, $\xi(\zeta_{\rm crit})$, the functions $f_{i}(\zeta_{\rm c};n)$, and $\mathcal{A}_{n}=2\zeta_{\rm crit}^{-1}$. Equilibrium configurations are reached for $\rho_{\rm c}>\mathcal{A}
_{n}\rho_{\rm vac}$ (for a $\Lambda$CDM cosmology one has to rewrite $\mathcal{A}_{n}\to 0.81\mathcal{A}_{n}$).}}
\label{tablelane2}
\end{center}
\end{figure}
Note that at $n=5$ the radius of the configuration becomes undefined, as does
the energy. An $n=5$ polytrope is highly concentrated at the center
\cite{chan}. No criterion can be written since even for $\rho_{\rm vac}=0$
there is no finite radius. But it is this high concentration at
the center and a smooth asymptotic behavior which makes this case
still a viable phenomenological model if $\Lambda$ is zero. On the
contrary for non-zero, positive $\Lambda$ the solutions start
oscillating around $2\rho_{\rm vac}$ which makes the definition of the
radius more problematic. For $n\to \infty$ the polytropic e.o.s
describes an ideal gas (isothermal sphere). Since in this limit the
expressions derived before are not well defined, this case will be
explored in more detail in the next section. In spite of the
mathematical differences, the isothermal sphere bears many
similarities to the cases $n \ge 5$ and our conclusions regarding the
definition of a radius in the $n \to \infty$ case equally apply to
finite $n$ bigger than $5$.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=12cm]{phlanelambda.eps}
\caption[]{{Density of a polytropic configuration for
$\zeta_{\rm c}=10^{-3}$ and different polytropic index in a
modified Newton-Hooke space time with a dark energy equation of state
$\omega_{\rm x}=-2/3$ (black,dots) and $\omega_{\rm x}=-2$ (red,short dash). Compare with Fig.\ref{lanea}}}
\label{laneph}
\end{center}
\end{figure}
\subsection{Effects with generalized dark energy equation of state}
In the last section we have explored the effects of a dark
energy-dominated background with the equation of state
$\omega_{\rm x}=-1$. Other dark energy models are often used with
$\omega_{\rm x}=-1/3$ and $\omega_{\rm x}=-2/3$ or even $\omega_{\rm x}<-1$, in the so-called phantom regime, or even a time dependent dark energy model (quintessence). A simple generalization to such
models can be easily done by making the following replacement in our equations
\begin{equation}
\zeta\to \zeta(a)_{\rm eff}\equiv -\frac{1}{2}\zeta\eta(a) a^{-f(a)},
\end{equation}
where $\eta(a)=1+3\omega_{\rm x}(a)$, and where the function $f(a)$ is
defined in Eq.(\ref{fa}). Note that for this generalization to be consistent with the derivation of
$\Lambda$LE in (\ref{le}), one needs to assume that the equation of state is close to $-1$, so that there is almost no time dependence
and the energy density of dark energy is almost constant. This is indeed the case for the most popular candidates of dark energy, especially at low redshifts $z<1$.
Notice once again that we are still assuming a homogeneous dark energy component which flows freely to and from the overdensity, thereby violating energy conservation inside it.
Clearly, other models of dark energy will
possess dynamical properties that the cosmological constant does not
have. For instance, we could allow some fraction of dark energy to
take part in the collapse and virialization \cite{ml,caimmi,mota1},
which would introduce self- and cross-interaction terms
for dark energy and the (polytropic-like) matter in the Euler equation,
modifying in turn the Lane-Emden equation. With this
simplistic approach, we see that the effects of a general equation
of state are smaller than those associated with the cosmological
constant. In particular, the equation of state $\omega_{\rm x}=-1/3$
has no effect at all since it implies
$\eta=0$ (note that this equation of state also resembles the curvature term in the evolution equation for the background).
On the other hand, phantom models of dark energy, which are associated with equations of state
$\omega_{\rm x}<-1$ \cite{cald,nojiri}, have quite a strong effect.
In Fig. \ref{laneph} we show numerical solutions of the Lane-Emden
equation for a background dominated by dark energy with $\omega_{\rm x}=-2/3$ and by a phantom dark energy with
$\omega_{\rm x}=-2$, with $\zeta=10^{-3}$. These curves are to be compared with those in Fig.~\ref{lanea}. Clearly, equations of state with $\omega_{\rm
x}<-1$ generate larger radii than the case described in the main
text. Furthermore, the asymptotic behavior of the ratio between the
density and $\rho_{\rm vac}$ is $\rho/\rho_{\rm vac} \to |\eta|$.
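This limiting behavior is easy to check directly. A minimal sketch (plain Python; the function names are ours, and nothing beyond the definitions above is assumed) evaluating $\eta=1+3\omega_{\rm x}$ and the asymptotic density ratio $|\eta|$:

```python
# eta = 1 + 3*w_x controls the dark-energy correction; the density of a
# non-compact configuration tends asymptotically to |eta| * rho_vac.
def eta(w_x):
    return 1.0 + 3.0 * w_x

def asymptotic_density_ratio(w_x):
    """rho / rho_vac as r -> infinity for equation-of-state parameter w_x."""
    return abs(eta(w_x))

# w_x = -1/3 mimics a curvature term: eta = 0, no effect at all.
assert abs(eta(-1.0 / 3.0)) < 1e-12
# A cosmological constant (w_x = -1) recovers rho -> 2 rho_vac.
assert asymptotic_density_ratio(-1.0) == 2.0
# A phantom equation of state (w_x = -2) pushes the asymptote to 5 rho_vac.
assert asymptotic_density_ratio(-2.0) == 5.0
```

The case $\omega_{\rm x}=-1$ thus recovers the value $2\rho_{\rm vac}$ found for the cosmological constant in the previous sections.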
\subsection{Stability criteria with cosmological constant}
Stability criteria for polytropic configurations can be derived from the virial equation. Using equations (\ref{piner}),
(\ref{pintener}) and (\ref{poliwc}) we can write the virial theorem (\ref{virial}) in terms of the radius $R$ and the mass $M$:
\begin{equation}
\label{vpol}
-\frac{M^{2}}{2R}+\frac{1}{2}(5-n)\kappa \rho_{\rm
c}^{\frac{1}{n}}Mf_{2}+\frac{2}{3}\pi \rho_{\rm vac} MR^{2}\lp5f_{1}-1\right)=0,
\end{equation}
where we have assumed that the only contribution to the kinetic energy
comes from the pressure in the form of $\mathcal{K}=\frac{3}{2}\Pi$. Note that for $\rho_{\rm vac}=0$ and finite mass, one obtains $R\propto (5-n)^{-1}$
while for $\rho_{\rm vac}\neq 0$ we would obtain a cubic equation for the
radius. Instead of solving for the virial radius, we solve for the mass as a
function of central density with the help of Eq (\ref{radius3}). We
have
\begin{equation}
\label{fmas2aa}
M = \mathcal{G} \rho_{\rm c}^{\frac{3-n}{2n}},
\hspace{0.8cm}\mathcal{G} =\mathcal{G}(\zeta_{\rm c};n)\equiv
\left[\frac{\kappa f_{0}f_{2}(5-n)}{1-\frac{2}{3}\pi \zeta_{\rm
c}f_{0}^{3}(5f_{1}-1)}\right]^{\frac{3}{2}}.
\end{equation}
The explicit dependence of the mass with respect to the central
density splits into two parts: on one hand it has the same form as
the usual case with $\Lambda=0$, that is, $\rho_c^{(3-n)/2n}$; on the
other hand the function $\mathcal{G}$ has a complicated dependence on
the central density because of the term $\zeta_{\rm c}$. With the
help of (\ref{radius3}) and (\ref{fmas2aa}) we can write a mass-radius
relation and the radius-central density relation
\begin{equation}
\label{fmas2aaa}
M=\left(
\mathcal{G}^{\frac{2}{3}\left(\frac{n}{n-1}\right)}f_{0}^{\frac{3-n}{n-1}}\right)
R^{\frac{3-n}{1-n}},
\hspace{0.8cm}R=\left( \mathcal{G}^{\frac{1}{3}}f_{0}\right)\rho_{\rm
c}^{\frac{1}{2}\left(\frac{1-n}{n}\right)}.
\end{equation}
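The exponents of Eq. (\ref{fmas2aaa}) already encode familiar limiting cases; a quick check of the mass-radius exponent $(3-n)/(1-n)$ (plain Python, a sketch of ours):

```python
def mass_radius_exponent(n):
    """Exponent in M ~ R^{(3-n)/(1-n)} from the mass-radius relation."""
    return (3.0 - n) / (1.0 - n)

# n = 3 (ultra-relativistic degenerate gas): exponent 0, i.e. the mass is
# independent of the radius -- the Chandrasekhar-type behavior recovered
# for white dwarfs later in the text.
assert mass_radius_exponent(3.0) == 0.0
# n = 3/2 (non-relativistic degenerate gas): M ~ R^{-3}, so more massive
# configurations are smaller.
assert mass_radius_exponent(1.5) == -3.0
```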
Following the stability theorem (see for instance in
\cite{weinberg}), the stability criteria can be determined from the
variations of the mass in equilibrium with respect to the central
density. We derive from Eq. (\ref{fmas2aaa}):
\begin{equation}
\label{slope}
\frac{\partial M}{\partial \rho_{\rm
c}}=\left[\frac{3}{2}\left(\gamma-\frac{4}{3}\right)\rho_{c}^{-1}\mathcal{G}+\frac{\partial
\mathcal{G}}{\partial \rho_{\rm c}}\right]\rho_{\rm
c}^{\frac{3}{2}(\gamma-\frac{4}{3})}.
\end{equation}
Stability (instability) stands for $\partial M/\partial \rho_{\rm c}>0$ ($\partial
M/\partial \rho_{\rm c}<0$). This yields a critical value of the polytropic
exponent $\gamma_{\rm crit}$ when $\partial M/\partial \rho_{\rm c}=0$ given by
\begin{equation}
\label{slope2}
\gamma_{\rm crit}=\gamma_{\rm crit}(\zeta_{c})\equiv
\frac{4}{3}+\frac{2}{3}\frac{\partial \ln \mathcal{G}}{\partial \ln \rho_{\rm
c}},
\end{equation}
in the sense that polytropic configurations are stable under small
radial perturbations if $\gamma>\gamma_{\rm crit}$. It is clear that
the second term in (\ref{slope2}) also depends on the polytropic index
and therefore this equation is essentially a transcendental expression
for $\gamma_{\rm crit}$.
It is worth mentioning that by including the
corrections due to general relativity, the critical value for
$\gamma_{\rm crit}$ is also modified as $\gamma_{\rm
crit}=(4/3)+R_{\rm s}/R$ \cite{shapiro} and hence for compact objects
the correction to the critical polytropic index from general
relativity is stronger than that from the
background, as one would expect. Stability of relativistic
configurations with non-zero cosmological constant has been explored
in \cite{boehmer, boehmer1}.
Going back to equation (\ref{fmas2aa}), we can write the mass of the
configuration as
$M=\alpha_{M}M(0)$,
where $M(0)$ is the mass when $\Lambda=0$ and
$\alpha_{M}=\alpha_{M}(\zeta_{\rm c},n)$ is the enhancement factor.
Both quantities can be calculated to give
\begin{equation}
M(0)\equiv
\left(\kappa(5-n)f_{0}^{(n)}f_{2}^{(n)}\right)^{\frac{3}{2}}\rho_{\rm
c}^{\frac{3-n}{2n}},\hspace{0.8cm} \alpha_{M}\equiv \left[
\frac{f_{0}f_{2}}{f_{0}^{(n)}f_{2}^{(n)}\lp1-\frac{2}{3}\pi \zeta_{\rm
c}f_{0}^{3}(5f_{1}-1)\right)} \right]^{\frac{3}{2}},
\end{equation}
where $f_{i}^{(n)}\equiv f_{i}(\zeta_{\rm c}=0,n)$ are numerical
factors (tabulated in table 1) that can be determined in a
straightforward way. Similarly, by using Eq (\ref{radius3}), the
radius can be written as
$R=\alpha_{R}R(0)$,
where
\begin{equation} \label{alphar}
R(0)=\left(\kappa(5-n)f_{0}^{(n)}f_{2}^{(n)}\right)^{\frac{1}{2}}f_{0}^{(n)}\rho_{\rm
c}^{\frac{1-n}{2n}},\hspace{0.3cm} \alpha_{R}\equiv
\left(\frac{f_{0}}{f_{0}^{(n)}}\right)\left[
\frac{f_{0}f_{2}}{f_{0}^{(n)}f_{2}^{(n)}\lp1-\frac{2}{3}\pi \zeta_{\rm
c}f_{0}^{3}(5f_{1}-1)\right)} \right]^{\frac{1}{2}}.
\end{equation}
In table \ref{tablelane2a} we show the values of the enhancement factors $\alpha_{M}$
and $\alpha_{R}$ for different values of $\zeta_{\rm c}$ and different
polytropic index $n$. We also show the values of the critical ratio
$\zeta_{\rm crit}$ which separates the configurations with a definite
radius, in the sense that a zero $\xi_{1}$ exists provided that $\zeta_{\rm
c}<\zeta_{\rm crit}$. We will show some examples where the
enhancement factors may be relevant in section 5.
\begin{table}
\begin{center}
\begin{tabular}{cccc}\hline\hline
$n$ &$\zeta_{\rm c}=0.1$&$\zeta_{\rm c}=0.05$ &$\zeta_{\rm c}=0.001$\\ \hline
$1$ &$(1.12,1.29)$ &$(1.05,1.12)$ &$(1.001,1.002)$ \\
$1.5$&$(-,-)$&$(1.11,1.17)$ &$(1.002,1.003)$ \\
$3 $&$(-,-)$ &$(-,-)$ &$(1.022,1.01)$ \\ \hline \hline
\end{tabular}
\caption[Numerical values for the enhancement factors in the
polytropic model]{{Numerical values for the enhancement
factors $(\alpha_{R},\alpha_{M})$ (values for $n=1$ have been taken from
\cite{bala3}). The symbol $-$ indicates an undefined
radius.}}\label{tablelane2a}
\end{center}
\end{table}
\section{The isothermal sphere}
The isothermal sphere is a popular model in astrophysics, used to
model large astrophysical and cosmological objects (galaxies, galaxy clusters)
\cite{lynden,penston,yabu,sommer,chavanis1,more}, to examine the so-called
gravothermal catastrophe
\cite{binney,natarajan,lombardi}, and to
compare observations with model predictions \cite{rines,more}.
In the limit $n\to \infty$ of the polytropic equation of state one obtains the description of an
isothermal sphere (ideal gas configuration) with
\begin{equation} \label{xxx11}
p=\sigma^{2}\rho
\end{equation}
where $\sigma$ is the velocity dispersion ($\sigma^{2}\propto T$). The pattern we found
in section 3 for $n \ge 5$ gets confirmed here: no finite radius of
the configurations is found with $\Lambda$, the asymptotic behavior is
not $\rho \to 0$ as $r \to \infty$ (but rather $\rho \to 2 \rho_{\rm
vac}$) and, as we will show below, other attempts to define a proper
finite radius are not satisfactory.
The results for a finite value of the index $n$ are recovered
in the limit $n\to \infty$ only asymptotically in the case
$\Lambda=0$; we consider this an acceptable behavior of the
density. Because of the limiting case $n\to \infty$, the analysis for
the isothermal sphere must be done in a slightly different way. As
was done in Eq. (\ref{new1}), we can integrate the equilibrium
equations (Euler and Poisson's equations) and obtain an explicit
dependence of the density on $\rho_{\rm vac}$ as
\begin{equation}
\label{new3}
\rho(r)=\rho_{c}\exp\left[-\frac{1}{\sigma^{2}}(\Phi_{\rm
grav}(r)-\Phi_{\rm grav}(0))\right]\exp\left[\frac{8}{3\sigma^{2}}\pi
\rho_{\rm vac} r^{2}\right].
\end{equation}
The resulting differential equation for the density with cosmological
constant can be written as
\begin{equation}
\label{isothermaljaja}
\frac{\sigma^{2}}{r^{2}}\frac{{\rm d} }{{\rm d} r}\left( r^{2}\frac{{\rm d} \ln \rho
}{{\rm d} r}\right)=-4\pi \rho+8\pi \rho_{\rm vac} ,
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=9cm]{iso3.eps}
\caption[Isothermal sphere]{{Scaled density
$\rho/\rho_{\rm vac}=e^{\psi}$ for the isothermal sphere at different values of
central density. The solutions oscillate around the value
$\rho=2\rho_{\rm vac}$.}}
\label{iso2}
\end{center}
\end{figure}
This differential equation could be treated in the same way as we did
for the polytropic equation of state, i.e., by defining a new function
$\psi\sim \rho/\rho_{\rm c}$, but here we can already use the fact
that the cosmological constant introduces scales of density, length
and time \cite{bala1}. Let us then define the function
$\psi(\xi)=\ln(\rho(r_{0}\xi)/\rho_{\rm vac})$ and $r=r_{0}\xi$, with $r_{0}$
the associated length scale. Since we are now scaling the density with
$\rho_{\rm vac}$, the associated length scale $r_{0}$ should be also scaled by
the length scale imposed by $\Lambda$:
\begin{equation}\label{aji}
r_{0}=\sigma r_{\Lambda}= 13.34 \left(\frac{\sigma}{10^{3}\, {\rm km}/{\rm s}}\right)
\,{\rm Mpc}.
\end{equation}
For a hydrogen cloud with $\sigma \sim 4$ km$/$s we have
$r_{0}\approx 40$ kpc which is approximately the radius of an
elliptical (E0) galaxy.
In terms of the function $\psi(\xi)$, the differential
equation governing the density profile is then written as
\footnote{Compare with Eq. 374 of \cite{chan} or Eq.1 of
\cite{natarajan} where the density is scaled by the central
density. The factor $1$ on the r.h.s. of (\ref{isothermal}) is due to
$\rho_{\rm vac}$.}
\begin{equation}
\label{isothermal}
\frac{1}{\xi^{2}}\frac{{\rm d} }{{\rm d} \xi}\left(\xi^{2}\frac{{\rm d} \psi }{{\rm d}
\xi}\right)=1-\frac{1}{2}e^{\psi},
\end{equation}
so that according to Eq. (\ref{new3}) we may write
\begin{equation}
\label{new4}
\psi(r/r_{0})=\ln\left(\frac{\rho_{\rm c}}{\rho_{\rm vac}}\right)-\sigma^{-2}\left(
\Phi_{\rm grav}(r)-\frac{8}{3} \pi \rho_{\rm vac} r^{2}\right).
\end{equation}
From this we can derive different solutions $\psi$ depending on the
initial conditions $\psi(\xi=0)=\ln (\rho_{\rm c}/\rho_{\rm vac})$ and ${\rm d} \psi(\xi)/{\rm d} \xi=0$ at $\xi=0$. In
Fig. \ref{iso2} we show numerical results for the solutions of
equation (\ref{isothermal}) using different values of
$\rho_{\rm c}/\rho_{\rm vac}$. As it is the case for $n>5$, the radius cannot
be defined by searching for the first zero of the density, i.e., the value
$\xi_{1}$ such that $e^{\psi(\xi_{1})}=0$ (including $\psi \to
-\infty$). In this case, the behavior of the derivative of the
density profile changes as compared with the $\Lambda=0$ case
since with increasing $\xi$ the density starts oscillating around the
value $\rho=2\rho_{\rm vac}$ such that for $\xi\to \infty $ one has a solution
$\rho\to 2\rho_{\rm vac}$. This can be checked from (\ref{isothermal}), for which
$\rho=2\rho_{\rm vac}$ is the first non-trivial constant solution.
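This oscillatory approach to $2\rho_{\rm vac}$ can be reproduced with a few lines of numerics. The following is a sketch of ours integrating Eq. (\ref{isothermal}) by fourth-order Runge-Kutta; the step size, the off-center starting point and the illustrative central density $\rho_{\rm c}=10\,\rho_{\rm vac}$ are our choices, not values taken from the text:

```python
import math

# Eq. (isothermal): psi'' + (2/xi) psi' = 1 - e^psi / 2.
def rhs(xi, psi, dpsi):
    return 1.0 - 0.5 * math.exp(psi) - (2.0 / xi) * dpsi

def integrate(psi_c, xi_max, h=1e-3):
    """RK4 integration; the regular center (psi'(0)=0) is handled by
    starting slightly off-center with the series psi ~ psi_c + c xi^2/6."""
    c = 1.0 - 0.5 * math.exp(psi_c)          # psi'' at the center
    xi = 0.01
    psi, dpsi = psi_c + c * xi**2 / 6.0, c * xi / 3.0
    history = [(xi, psi)]
    while xi < xi_max:
        k1p, k1d = dpsi, rhs(xi, psi, dpsi)
        k2p, k2d = dpsi + 0.5*h*k1d, rhs(xi + 0.5*h, psi + 0.5*h*k1p, dpsi + 0.5*h*k1d)
        k3p, k3d = dpsi + 0.5*h*k2d, rhs(xi + 0.5*h, psi + 0.5*h*k2p, dpsi + 0.5*h*k2d)
        k4p, k4d = dpsi + h*k3d, rhs(xi + h, psi + h*k3p, dpsi + h*k3d)
        psi += h * (k1p + 2.0*k2p + 2.0*k3p + k4p) / 6.0
        dpsi += h * (k1d + 2.0*k2d + 2.0*k3d + k4d) / 6.0
        xi += h
        history.append((xi, psi))
    return history

sol = integrate(psi_c=math.log(10.0), xi_max=100.0)
# psi never diverges to -infinity (the density has no zero); instead it
# oscillates around ln 2, i.e. rho oscillates around 2 rho_vac.
steps = [b[1] - a[1] for a, b in zip(sol, sol[1:])]
crossings = sum(1 for a, b in zip(steps, steps[1:]) if a * b < 0)
assert crossings >= 2                          # the profile is non-monotonic
assert abs(sol[-1][1] - math.log(2.0)) < 0.3   # settles near rho = 2 rho_vac
```

The damped oscillation, with amplitude decaying roughly as $1/\xi$, matches the behavior shown in Fig. \ref{iso2}.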
This behavior implies that there exists a value of $\xi=\xi_{1}$ where
the derivative changes sign and hence the validity of the physical
condition required for any realistic model, i.e., ${\rm d} \rho/{\rm d} r <0$,
should be given up unless we define the size of the configuration as
the radius at the value of the first local minimum. We will come back
to this option below to show that it is not acceptable. A second
option would be to set the radius at the position where the density
acquires for the first time its asymptotic value $2\rho_{\rm vac}$. We
could motivate such a definition by demanding that the density at the
boundary goes smoothly to the background density. This is, however, not
justified, for two reasons. First, we recall that a positive
cosmological constant leads to a repulsive 'force' as it accelerates
the expansion. A negative cosmological constant could be modeled in a
Newtonian sense by a constant positive density which, however, is
strictly speaking still not a background density. Secondly, if we
include the background density $\rho_b$ we would have started with
$\rho+\rho_b$ (with a dynamical equation for $\rho$ being
$\rho_b=$constant) in which case the boundary condition would again be
$\rho (R)=0$ to define the extension of the body (or at least, $\rho
\to 0$ as $r\to \infty$). Hence, this second option can be
excluded on general grounds. In any case as can be seen from Figure
\ref{iso2} both definitions would yield two different values of
radius.
Since the first candidate to define a radius is based upon a physical
condition of the configuration, we could expect this definition to be the
more suitable one. However, such a definition must be in agreement
with the observed values for masses and radii of specific
configurations, and the validity of this definition can be put to test
by the total mass of the configuration, given as
\begin{eqnarray}\label{massiso}
M=
1.55\times 10^{15}\left(\frac{\sigma}{10^{3}\, {\rm km}/{\rm
s}}\right)^{3}f(\xi_{1})\,M_{\odot},\hspace{0.8cm} f(\xi_{1}) &\equiv
&\int_{0}^{\xi_{1}}\xi^{2}e^{\psi}{\rm d}\xi.
\end{eqnarray}
Combining (\ref{aji}) and (\ref{massiso}) we can write
\begin{equation}\label{massiso2}
M=653.55\left(\frac{R}{{\rm
kpc}}\right)^{3}\xi_{1}^{-3}f(\xi_{1})\,M_{\odot}.
\end{equation}
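As a consistency check, eliminating $\sigma$ between Eqs. (\ref{aji}) and (\ref{massiso}) must reproduce the numerical prefactor of Eq. (\ref{massiso2}); a one-line verification (plain Python):

```python
# R = xi_1 * 13.34 * (sigma / 10^3 km s^-1) Mpc            [Eq. (aji)]
# M = 1.55e15 * (sigma / 10^3 km s^-1)^3 * f(xi_1) Msun    [Eq. (massiso)]
# Eliminating sigma: M = C * (R/kpc)^3 * xi_1^-3 * f(xi_1) Msun.
C = 1.55e15 / (13.34 * 1000.0) ** 3   # the factor 1000 converts Mpc to kpc
assert abs(C - 653.55) / 653.55 < 0.01   # matches Eq. (massiso2) to rounding
```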
If we define the radius at the first minimum (see Fig. \ref{iso2}), we
find $M\approx 2\times 10^{9}\left( R/{\rm
kpc}\right)^{3}M_{\odot}$. Although this might set the right order of
magnitude for the mass of a E0 galaxy if we insist on realistic values
for the respective radius, say $R\sim 10$ kpc, the picture changes
again as the radius is fixed by (\ref{aji}) which gives $R\approx
5.3\times 10^{6}\left(\sigma/10^{3}{\rm km/s}\right) {\rm kpc}$. In order to
get a radius of the order of kpc with masses of the order of
$10^{10}M_{\odot}$ we would require $\sigma \sim 10^{-5}{\rm
km/s}$. This differs by almost eight orders of magnitude from the
measured values for the velocity dispersion $\sigma$ in elliptical
galaxies ($\sigma \sim 300$ km$/$s) or with the Faber-Jackson Law for
velocity dispersion \cite{padma}. We conclude that defining the radius by the position of the first minimum is not a realistic solution. As the last option to get realistic values for the parameters of the
configuration we consider the brute-force method of simply fixing the
value of $\xi_{1}$. It is understood, however, that this method is
not acceptable if we insist that the model under consideration has
some appealing features (without such features almost any model would
be phenomenologically viable). Therefore we discuss this option only
for completeness. For a configuration with $\rho_{\rm
c}=10^{5}\rho_{\rm vac}$, say an elliptical galaxy, we fix the radius
at $R\sim 50$ kpc in Eq.(\ref{aji}) and using a typical value
for the velocity dispersion $\sigma \sim 300$ km$/$s we get
$\xi_{1}\sim 0.012$ which implies $f(\xi_{1})\sim 0.036$. The mass
in Eq.(\ref{massiso}) is then given as $M\sim 1.5\times 10^{12}M_{\odot}$
while the density at the boundary is $\rho_{R}\sim 36000 \rho_{\rm
vac}$, that is, $\rho_{\rm c}\sim 2.35 \rho_{R}$. In table
\ref{tablelane2xx} we perform the same exercise for other radii. The
resulting mean density is in accordance with the observed values of
the mean density of astrophysical objects ranging from a small
elliptical galaxy to a galaxy cluster. However, as mentioned above,
the model introduces an arbitrary cut-off and cannot be considered as
a consistent model of hydrostatic equilibrium.
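The numbers of this exercise can be reproduced by integrating Eq. (\ref{isothermal}) out to $\xi_{1}$ while accumulating the integral $f(\xi_{1})$ of Eq. (\ref{massiso}). The sketch below is our own (the integrator, its step size and the off-center start are illustrative choices):

```python
import math

sigma3 = 0.3                  # sigma = 300 km/s in units of 10^3 km/s
r0_kpc = 13.34e3 * sigma3     # Eq. (aji) converted to kpc: r0 ~ 4002 kpc
xi1 = 50.0 / r0_kpc           # dimensionless radius for R = 50 kpc (~0.012)

def rhs(xi, psi, dpsi):
    # Eq. (isothermal): psi'' = 1 - e^psi/2 - (2/xi) psi'
    return 1.0 - 0.5 * math.exp(psi) - (2.0 / xi) * dpsi

def f_of_xi1(psi_c, xi_end, h=1e-6):
    """RK4 for psi(xi) plus trapezoidal accumulation of
    f = int_0^{xi_end} xi^2 e^psi dxi (cf. Eq. (massiso))."""
    c = 1.0 - 0.5 * math.exp(psi_c)    # psi'' at the regular center
    xi = 10.0 * h                      # start off-center via the series
    psi, dpsi = psi_c + c * xi**2 / 6.0, c * xi / 3.0
    f = xi**3 * math.exp(psi_c) / 3.0  # central contribution
    while xi < xi_end:
        prev = xi**2 * math.exp(psi)
        k1p, k1d = dpsi, rhs(xi, psi, dpsi)
        k2p, k2d = dpsi + 0.5*h*k1d, rhs(xi + 0.5*h, psi + 0.5*h*k1p, dpsi + 0.5*h*k1d)
        k3p, k3d = dpsi + 0.5*h*k2d, rhs(xi + 0.5*h, psi + 0.5*h*k2p, dpsi + 0.5*h*k2d)
        k4p, k4d = dpsi + h*k3d, rhs(xi + h, psi + h*k3p, dpsi + h*k3d)
        psi += h * (k1p + 2.0*k2p + 2.0*k3p + k4p) / 6.0
        dpsi += h * (k1d + 2.0*k2d + 2.0*k3d + k4d) / 6.0
        xi += h
        f += 0.5 * h * (prev + xi**2 * math.exp(psi))
    return f

f1 = f_of_xi1(math.log(1e5), xi1)   # rho_c = 1e5 rho_vac
M = 1.55e15 * sigma3**3 * f1        # Eq. (massiso), in solar masses
assert abs(xi1 - 0.0125) < 0.001    # xi_1 ~ 0.012, as quoted above
assert 0.028 < f1 < 0.046           # f(xi_1) ~ 0.036
assert 1.0e12 < M < 2.1e12          # M ~ 1.5e12 Msun
```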
In summary, the attempts to define a finite radius for the isothermal
sphere fail in the presence of a cosmological constant either because
such a model fails to reproduce certain phenomenological values (if
the definition of the radius is fixed by the first minimum) or because
the definition is technically speaking quite artificial to the extent
of introducing arbitrary cut-offs. Note that this conclusion is valid
for almost any object, as the density of the isothermal sphere
with $\Lambda$ has a minimum whatever central density we choose.
\begin{table}
\begin{center}
\begin{tabular}{ccccc}\hline\hline
$R/{\rm kpc}$ & $\xi_{1}$ & $\rho_{\rm c}/\rho(R)$ & $M/M_{\odot}$ & $\bar{\rho}$ (gr/cm$^{3}$) \\ \hline
$10$ & $0.00249$ & $1.0005$ & $2.17\times 10^{8}$ & $3.4\times 10^{-25}$ \\
$50$ & $0.01248$ & $1.0129$ & $2.7\times 10^{10}$ & $1.94\times 10^{-25}$ \\
$100$ & $0.0249$ & $1.052$ & $2.11\times 10^{11}$ & $7.2\times 10^{-26}$ \\
\hline \hline
\end{tabular}
\caption[Numerical values ]{{Values for $\xi_{1}$,
$\rho_{\rm c}/\rho(R)$ and the mass for different values of radius and
for $\rho_{\rm c}=10^{3}\rho_{\rm vac}$ with $\sigma=300$ km/s.}}\label{tablelane2xx}
\end{center}
\end{table}
\section{Exotic astrophysical configurations}
In this section we will probe into the possibilities of exotic, low
density configurations. The global interest in such structures is
twofold. First, the cosmological constant will affect the properties
of low density objects. Secondly, $\Lambda$ plays effectively the role
of an external, repulsive force. Hence a relevant issue that arises
in this context is to see whether the vacuum energy density can partly
\emph{replace} the pressure which essentially is encoded in the
parameter $\kappa$ ($p=\kappa \rho^{\gamma}$). By the word
\emph{replace} we mean that we want to explore the possibility of a
finite radius as long as the pressure effects are small in the
presence of $\rho_{\rm vac}$.
\subsection{Minimal density configurations}
As mentioned above the effect of a positive cosmological constant on
matter is best understood as an external repulsive force. In previous
sections we have probed into one extreme which describes the situation
where a relatively low density object is pulled apart by this force (to such an extent that
we concluded that the isothermal sphere is not a viable model in the presence of
$\Lambda$). Limiting conditions when this happens were derived.
On the other hand, approaching these limiting conditions with our parameters, while
remaining on the side of equilibrium, means that relatively low density objects can still be
in equilibrium thanks to the positive cosmological constant.
The best way to investigate low density structures is to use the
lowest possible central density. As explained in section 3, for every
$n$ there exists an ${\cal A}_n$ such that $\rho_c \ge {\cal A}_n
\rho_{\rm vac}$ which defines the lowest central density. Certainly, a
question of interest is to see what such objects would look like.
We start with the parameters of the configuration. The radius at the critical value $\zeta_{\rm crit}$ is given by Eq.(\ref{radius3})
after taking the limit $\rho_{\rm c}\to \mathcal{A}_{n}\rho_{\rm vac}$ (see (\ref{An})). It is given by
\begin{equation}
\label{rque}
R_{\rm crit}=2.175\left(\frac{M}{10^{12}M_{\odot}}\right)^{1/3}f_{0}(\zeta_{\rm crit};n) \zeta_{\rm crit}^{1/3}\,\,\rm Mpc.
\end{equation}
From fig. \ref{lanecrit} we can see that the product $f^{3}_{0}(\zeta_{\rm crit};n)\zeta_{\rm crit}$ changes little around the value $\sim 0.2$ as we change the index $n$, so that these polytropic configurations will have roughly the same radius (for a given mass).
This implies that such configurations have approximately the same average density $\bar{\rho}(\zeta_{\rm crit})=1.64 \rho_{\rm crit} \approx 2.34 \rho_{\rm vac}$. That is, such configurations have a mean density of the order of the critical density of the universe.
Given such a density we would, at first glance, suspect that the object described by this
density cannot be in equilibrium. However, our result follows strictly from hydrostatic
equilibrium and therefore there is no doubt that such an object can theoretically exist. Furthermore,
$\bar{\rho}$ satisfies the inequalities derived in \cite{nowakowski1} and \cite{bala2} from virial
equations and Buchdahl inequalities which guarantee that the object is in equilibrium ($\bar{\rho} > 2\rho_{\rm vac}$).
Interestingly, the central density for such objects has to be much higher than $\bar{\rho}$ as, e.g., for $n=3$ we have
${\cal A}_3 \approx 300$ and therefore $\rho_c > 300 \rho_{\rm vac}$.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=9cm]{salida2.ps}
\caption[]{{Same as Fig. \ref{tablelane2}, but with the functions $f_{i}$ evaluated at the value $\xi_{\rm vir}$, the solution of Eq.(\ref{virtras}).}}
\label{lanecrit}
\end{center}
\end{figure}
Note that these values have been obtained from the solution of the Lane-Emden equation,
which is a consequence of dynamical equations reduced to describe our system in a steady state.
However, we have not tried to solve explicitly quantities from the virial theorem.
This makes sense, as for Dark Matter Halos (DMH) the parametrized density profiles often go to zero only
asymptotically, and the radius of a DMH is defined as a virial radius where the density
is approximately two hundred times the critical one. Therefore, an analysis
using virial equations seems to be adequate here.
For constant density Eq.(\ref{vpol}) can be expressed as a cubic equation \cite{bala1}
for the radius at which the virial theorem is satisfied (let us ignore for this analysis
any surface terms coming from the tensor virial equation). However,
if the density is not constant, this expression becomes a transcendental equation for
the dimensionless radius $\xi_{\rm vir}=R_{\rm vir}/a$. This equation is
\begin{equation}\label{virtras}
\xi_{\rm vir}^{2}=\frac{6(5-n)f_{0}^{3}}{(n+1)\left[3-2\pi \zeta_{\rm c}(5f_{1}-1)f^{3}_{0}\right]},
\end{equation}
understanding the functions $f_{i}$ now as integrals up to the value $\xi_{\rm vir}$. Once we fix $\zeta_{\rm crit}$ for a given index $n$, we use as a first guess for the iteration process the value $\xi_{1}(\zeta_{\rm crit})$. In fig \ref{lanecrit} we show the behavior of the functions $f_{i}$ and the solutions of Eq.(\ref{virtras}). For these values, Eq. (\ref{rque}) gives for $n=3$ a radius
\begin{equation}\label{xxx14}
R_{\rm vir}=25.8 \left(\frac{M}{M_{\odot}}\right)^{1/3}\,\rm pc,
\end{equation}
which can be compared with the radius-mass relation derived in the top-hat spherical collapse \cite{padma}
\begin{equation}\label{contrast1}
R_{\rm vir}=21.5h^{-2/3}(1+z_{\rm vir})^{-1}\left(\frac{M}{M_{\odot}}\right)^{1/3}\,\rm pc,
\end{equation}
where $h$ is the dimensionless Hubble parameter and $z_{\rm vir}$ is the redshift of virialization. The resulting average density is then of the order of the value predicted by the top-hat spherical model:
\begin{equation}\label{contrast}
\bar{\rho}=\left(\frac{3\Omega_{\rm vac}}{2\pi \zeta_{\rm crit}f_{0}^{3}}\right)\rho_{\rm crit}\approx 200 \rho_{\rm crit}.
\end{equation}
Note with the help of Eq.(\ref{a}) and (\ref{rque}) that the mass can be written as proportional to the parameter $\kappa^{3/2}$ (introduced in the polytropic equation of state Eq.(\ref{poly})). Therefore $\kappa \to 0$ is equivalent to choosing a small pressure and, at the
same time, a small mass which, in case of a relatively small radius, amounts to
a diluted configuration with small density and pressure.
Without $\Lambda$ such configurations would hardly be in equilibrium.
Hence, for the configuration with an extension of the order of parsecs, Eq.(\ref{xxx14}), and one solar mass, we conclude that the equilibrium is not fully due to the pressure, but partially maintained also by $\Lambda$. This is possible, as $\Lambda$ exerts an
outwardly directed non-local force on the body. Other mean densities, also independent of $M$, are, for $n=1.5$ and $n=4$ respectively:
\begin{equation} \label{xxx34}
\bar{\rho}= 15.3\rho_{\rm crit},\hspace{1cm}\bar{\rho}= 2.6 \times 10^{3}\rho_{\rm crit}.
\end{equation}
The first value is close to $\rho_{\rm crit}$ and therefore also to $\rho_{\rm vac}$. Certainly,
if in this example we choose a small mass, equivalent to choosing a
negligible pressure, part of the equilibrium is maintained by the
repulsive force of $\Lambda$. In \cite{bala2} we found a simple
solution of the hydrostatic equation which has a constant density of
the order of $\rho_{\rm vac}$. The above is a non-constant and
non-trivial generalization of this solution.
\subsection{Cold white dwarfs}
The neutrino stars which we will discuss in the subsequent subsection
are modeled in close analogy to white dwarfs. Therefore it makes sense
to recall some part of the physics of white dwarfs. In addition we can
contrast the example of white dwarfs to the low density cases affected
by $\Lambda$.
In the limit where the thermal energy $k_{\rm B}T$ of a (Newtonian)
white dwarf is much smaller than the rest energy of the electrons,
these configurations can be treated as polytropic
configurations with $n=3$. This is the ultra-relativistic limit where
the mass of the electrons is much smaller than the Fermi momentum $p_{\rm
F}$. In the opposite case we obtain a polytropic configuration with
$n=3/2$ \cite{weinberg,shapiro}. In both cases, the parameter
$\kappa_{n}$ from the polytropic equation of state is given as
\begin{equation}
\label{kn}
\kappa_{3}=\frac{1}{12\pi
^{2}}\left(\frac{3\pi^{2}}{m_{n}\mu}\right)^{\frac{4}{3}}
,\hspace{1cm}\kappa_{3/2}=\frac{1}{15
m_{e}\pi^{2}}\left(\frac{3\pi^{2}}{m_{n}\mu}\right)^{\frac{5}{3}},
\end{equation}
where $m_{n}$ is the nucleon mass, $m_{e}$ is the electron mass and
$\mu$ is the number of nucleons per electron. Using the Newtonian
limit with cosmological constant, we can derive the mass and radius of
these configurations in equilibrium. In the first case, for $n=3$ the
mass is written using (\ref{fmas2aa}) as $M_0=\mathcal{G}(n=3)$ which
corresponds approximately to the Chandrasekhar limit (strictly
speaking a configuration would have the critical mass, i.e., the
Chandrasekhar limit, if its polytropic exponent $\gamma$ is such that
$\gamma=\gamma_{\rm crit}$). For this situation one has $M_{0}(n=3)=
5.87 \mu^{-2}M_{\odot}$ and $R_{0}(n=3)= 6.8
(\bar{\rho}_{\odot}/\rho_{\rm c})^{1/3}\mu^{-\frac{2}{3}} R_{\odot}$.
On the other hand, for $n=3/2$ one obtains $M_{0}(n=3/2)=3.3\times
10^{-3}(\bar{\rho}_{\odot}/\rho_{\rm c})^{-1/2}\mu^{-5/2}M_{\odot}$
and the radius is given by
$R_{0}(n=3/2)=0.27(\bar{\rho}_{\odot}/\rho_{\rm
c})^{1/6}\mu^{-\frac{5}{6}}R_{\odot}$ where $\bar{\rho}_{\odot}$ is
the mean density of the sun. Since for these configurations the ratio
$\zeta_{\rm c}$ is much smaller than $10^{-4}$ we see from Fig.
\ref{tablelane2} that the effects of $\Lambda$ are almost negligible.
The critical value of the ratio $\zeta_{\rm c}$ gives for $n=3$ the
inequality $\rho_{\rm c}>307.69\rho_{\rm vac}$ and for $n=3/2$ the
same limit reads $\rho_{\rm c}>24.24\rho_{\rm vac}$. Central
densities of white dwarfs are of the order of $10^{5}{\rm gr}/{\rm
cm}^{3}$, which corresponds to a deviation of nearly thirty orders of
magnitude from $\rho_{\rm vac}$.
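For orientation, the $n=3$ mass quoted above can be evaluated for a typical composition; a one-line check (plain Python; the choice $\mu=2$, two nucleons per electron, is ours):

```python
def m0_n3(mu):
    """M_0(n=3) = 5.87 mu^-2 Msun, the Chandrasekhar-type mass quoted above."""
    return 5.87 / mu**2

# mu = 2 (helium, carbon, oxygen): close to the familiar ~1.4 Msun limit.
assert abs(m0_n3(2.0) - 1.47) < 0.01
```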
\subsection{Neutrino stars}
An interesting possibility is to determine the effects of $\rho_{\rm
vac}$ on configurations formed by light fermions. Such configurations
can be used, for instance, to model galactic halos
\cite{dolgov,lattanzi,jetzer, boerner}. While discussing the phenomenological
interest of fermion stars below, we intend to describe such a halo.
Clearly, these kinds of systems will maintain equilibrium by
counterbalancing gravity with the degeneracy pressure as in a white
dwarf. For stable configurations, i.e., $n=3/2$, one must replace the
mass of the electron and nucleon by the mass of the considered fermion
and set $\mu=1$ in (\ref{kn}) and (\ref{fmas2aa}). We then get for
the mass and the radius:
\begin{eqnarray}
\label{mrar}
M_{0}&=&3.28\times
10^{28}\left(\frac{\rho_{c}}{\bar{\rho}_{\odot}}\right)^{\frac{1}{2}}\left(\frac{{\rm
eV}}{m_{f}}\right)^{4}\, M_{\odot} \,\,=\,\,3.64\times 10^{14} \zeta_{\rm
c}^{-1/2}\left(\frac{{\rm eV}}{m_{f}}\right)^{4}\, M_{\odot}, \\ \nonumber
R_{0}&=&1.31\times
10^{-4}\left(\frac{\rho_{c}}{\bar{\rho}_{\odot}}\right)^{-\frac{1}{6}}\left(\frac{{\rm
eV}}{m_{f}}\right)^{\frac{4}{3}}\, {\rm Mpc}=6.16 \zeta_{\rm
c}^{1/6}\left(\frac{{\rm eV}}{m_{f}}\right)^{\frac{4}{3}}\, {\rm Mpc}.
\end{eqnarray}
On the other hand, for $n=3$ one has
\begin{eqnarray}
\label{mrara}
M_{0}&=&5.16\times 10^{18}\left(\frac{{\rm eV}}{m_{f}}\right)^{2}\,
M_{\odot}, \\ \nonumber R_{0}&=&0.14\left(\frac{{\rm
eV}}{m_{f}}\right)^{\frac{2}{3}}\left(\frac{\rho_{\rm
c}}{\bar{\rho}_{\odot}}\right)^{-\frac{1}{3}}\, {\rm pc}\,\,=\,\,
3.1\times 10^{5} \zeta_{\rm c}^{1/3}\left(\frac{\rm eV}{m_{\rm
f}}\right)^{2/3} \,{\rm pc},
\end{eqnarray}
These cases represent, among others, possible cosmological configurations
when the fermion mass is of the order of an eV, for instance massive
neutrinos.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=10cm]{nu_mass_radius2.ps}
\caption[]{Mass and radius for different central
densities, for index $n=3$ and $n=3/2$. The masses range from $m_{f}=1$ eV up to $m_{f}=5$ eV.}\label{tablelane244}
\end{center}
\end{figure}
In figure \ref{tablelane244}
some representative values for the choice $m_f=1$ eV up to $m_f=5$ eV are
given. Obviously in the case $n=1.5$ the values for the mass and
radius are sensitive to the choice of $m_f$. Indeed, the dependence
on the fermion mass is much stronger than on the central density. It is justified to speculate
that relatively low density objects, affected by $\Lambda$, might
exist. If, as an example, we choose $m_f \sim 5$ eV and
$\rho_c \sim 40 \rho_{\rm vac}$ then the mass comes out as $10^{12}$
solar masses with a radius of the order of magnitude of half Mpc which
might indeed be the dark matter halo of a galaxy (or at least part of
the halo). As pointed out before, such configurations must have a
central density greater than $24.24 \rho_{\rm vac}$ in order to be in
equilibrium. From table \ref{tablelane2a} we see that the effect on a
configuration with $\zeta_{\rm c}\sim 0.05$ consists of an
increase in the mass by $17\%$ with respect to $M_{0}$ and an
increment of $11\%$ in the radius. Then the conclusion would be that
$\Lambda$ affects such a dark matter halo. This is to be taken with
some caution as the fermions in such a configuration would be
essentially non-relativistic. Note also that it is not clear if
neutrinos make up a large fraction of the halo; however, we can also
speculate about a low density clustering around luminous matter. Of
course, allowing larger central density might change the picture.
However, the emerging scenario would not necessarily be a viable
phenomenological model. For instance, changing the value of $m_f$ from
eV to keV (MeV) would reduce the mass by twelve (twenty four) orders
of magnitude which is definitely too small to be of interest. We could
counterbalance this by increasing the central density by twenty four (
forty eight!) orders of magnitude. Such a 'countermeasure' would,
however, result in a reduction of the radius by eight (sixteen!)
orders of magnitude, again too small a length scale to be of
importance for dark matter halos. In other words, the example with a
fermion mass of the order of one eV and low central density is
certainly of some phenomenological interest.
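The order-of-magnitude counting above can be made explicit. The scalings assumed in the snippet below, $M\propto m_f^{-4}\rho_{\rm c}^{1/2}$ and $R\propto m_f^{-2/3}\rho_{\rm c}^{-1/3}$, are not derived here; they are the exponents implied by the twelve/twenty-four/eight orders of magnitude quoted in the text for the $n=3/2$ case:

```python
# Changes in log10(M) and log10(R) given changes (in orders of magnitude)
# of the fermion mass m_f and the central density rho_c, assuming
# M ∝ m_f**(-4) * rho_c**(1/2) and R ∝ m_f**(-2/3) * rho_c**(-1/3).
def delta_orders_M(d_mf, d_rho):
    return -4 * d_mf + 0.5 * d_rho

def delta_orders_R(d_mf, d_rho):
    return -(2.0 / 3.0) * d_mf - (1.0 / 3.0) * d_rho

print(delta_orders_M(3, 0))    # eV -> keV: mass drops by 12 orders
print(delta_orders_M(3, 24))   # +24 orders in rho_c restore the mass ...
print(delta_orders_R(0, 24))   # ... but shrink the radius by 8 orders
```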
The case $n=3$ is similarly stringent. A neutrino mass of $1-5$ eV
gives a mass for the entire object of the order of ten to the eighteen
solar masses, which is too large. A fermion mass of several keV would
be suitable for a galaxy halo (a mass of the order of MeV and higher
would give too small a total mass). With a relatively low central
density as before (see Figure \ref{tablelane244}) we then obtain the
right order of magnitude for the halo. But then we will have to live
with the fact that such a halo reaches up to the next large galaxy.

Briefly, we touch upon the other possible application of fermion stars
which have been discussed as candidates for the central object in our
galaxy. If we allow the extension of this object to be $120$ AU and
the mass roughly $2.6$ million solar masses, then the fermion mass
would come out as $10^{4}$ eV for $n=1.5$ ($10^6$ eV for $n=3$) and
the central density as $10^{22}\rho_{\rm vac}$ ($10^{28}\rho_{\rm
vac}$ for $n=3$).
\subsection{Boson stars}
We end the section by putting forward a speculative question in
connection with boson stars. The latter are general relativistic
geons and can be treated exactly only in a general relativistic
framework, i.e., these kinds of configurations are based on the
interaction of a massive scalar field and gravitation, which leads to
gravitationally bound systems. These objects have also been widely
discussed as candidates for dark matter \cite{ruffini,chi}. On the
other hand, variational methods in connection with the
Thomas-Fermi equations give relatively good results even without
invoking the whole general relativistic formalism. By including
$\Lambda$ we essentially introduce into the theory a new scale, say in
this case a length scale $r_{\Lambda}=1/\sqrt{\Lambda}$. The basic
parameters of dimension length in a theory with a boson mass $m_B$ and
a total mass $M=N_Bm_B$ where $N_B$ is the number of bosons are (for a
better distinction of the different length scales, we restore in
this subsection the value of $G_N$)
\begin{equation} \label{L}
L_1=r_s=G_NM, \,\, L_2=r_B=\frac{1}{m_B}, \,\,
L_3=r_{\Lambda}=\frac{1}{\sqrt{\Lambda}}.
\end{equation}
The resulting radius of the object's extension can be a combination of
these scales i.e.
\begin{eqnarray} \label{R}
R_i^{(1)}&\propto &L_i, \,\, R_i^{(2)}\propto N_B^nL_i \nonumber \\
R_1&\propto &(L_i^2L_j)^{1/3},\,\, R_2\propto N_B^nR_1
\end{eqnarray}
and similar combinations of higher order. Which one of these combinations
gets chosen depends on the details of the model. In close analogy
to \cite{spruch,eckehard} we can examine this taking into account the
presence of a positive cosmological constant by considering the energy
of such configuration as a two variable function of the mass and the
radius $E=E(R,M)$
\begin{equation}
E\sim \frac{N_{\rm B}}{R}-\frac{G_{\rm N}m_{\rm B}^{2}N_{\rm
B}^{2}}{R}+\frac{8}{3}\pi G_{\rm N}\rho_{\rm vac} m_{\rm B}N_{\rm B}R^{2}
\end{equation}
The first term corresponds to the total kinetic energy written as
$\mathcal{K}=N_{\rm B}p=N_{\rm B} /\lambda$ and taking $\lambda\sim R$. The
second term is the gravitational potential energy and the third term
corresponds to the contribution of the background (see
Eq. (\ref{virialeq})). By treating mass and radius as independent
variables (we think this this is the right procedure since the radius
will depend on the 'external force' due to $\rho_{\rm vac}$), we extremize the
energy leading to the following values of mass and radius:
\begin{equation}
M\sim \frac{1}{G_{\rm N}m_{\rm B}}\sim 10^{-10}\left(\frac{\rm eV}{m_{\rm
B}}\right) M_{\odot},
\hspace{1cm}R\sim \left( \frac{1}{8\pi G_N m_{\rm B}\rho_{\rm vac}}\right)^{1/3}\sim
10^{5}\left(\frac{\rm eV}{m_{\rm B}}\right)^{\frac{1}{3}}\,R_{\odot}
\end{equation}
Such values would lead to a mean density of the order of
$\bar{\rho}\sim 6\rho_{\rm vac}$ i.e. an extremely low density
configuration. The mass given in the last expression is the so-called
Kaup limit \cite{spruch}. Of course, this relatively simple treatment
does not guarantee that the full general relativistic treatment will
give the same results. Therefore we consider it as a
conjecture. However, it is also obvious from the discussion above
that low density boson stars are a real possibility worth pursuing
with more rigor (we intend to do so in the near future).
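As a sanity check (a hypothetical numerical sketch, not part of the original paper), the extremization of $E(R,M)$ can be verified in units $G_{\rm N}=m_{\rm B}=\rho_{\rm vac}=1$. The prefactors $3/5$ and $3/40$ below come from carrying out the extremization explicitly; they reproduce the quoted $M\sim 1/(G_{\rm N}m_{\rm B})$ and $R\sim(8\pi G_{\rm N} m_{\rm B}\rho_{\rm vac})^{-1/3}$ up to factors of order unity:

```python
import math

# E(N, R) = N/R - N**2/R + (8/3)*pi*N*R**2 in units G = m_B = rho_vac = 1.
def E(N, R):
    return N / R - N**2 / R + (8.0 / 3.0) * math.pi * N * R**2

# Candidate extremum: N0 = 3/5 bosons, R0 = (3/(40*pi))**(1/3).
N0 = 3.0 / 5.0
R0 = (3.0 / (40.0 * math.pi)) ** (1.0 / 3.0)

# Central finite differences of both partial derivatives should vanish.
eps = 1e-6
dEdN = (E(N0 + eps, R0) - E(N0 - eps, R0)) / (2 * eps)
dEdR = (E(N0, R0 + eps) - E(N0, R0 - eps)) / (2 * eps)
print(dEdN, dEdR)  # both ~ 0

# With the quoted order-of-magnitude forms M ~ 1 and R**3 ~ 1/(8*pi),
# the mean density M / ((4/3)*pi*R**3) is 6*rho_vac, as stated in the text.
rho_bar = 1.0 / ((4.0 / 3.0) * math.pi * (1.0 / (8.0 * math.pi)))
print(rho_bar)
```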
\subsection{A comparison between $\Lambda$LE profiles and Dark Matter Halos profiles}
It is of some importance to see whether our results from the examination of polytropic hydrostatic
equilibrium or from the virial equations can be applied to Dark matter configurations.
N-body simulations based in a $\Lambda$CDM model of the universe show that the density profile of virialized Dark Matter Halos (DMH) can be described by a profile of the form \cite{nfw}
\begin{equation}
\rho(r)=(2)^{3-m}\rho_{s}(r/r_{s})^{-m}\left(1+r/r_{s}\right)^{m-3},
\end{equation}
where $r_{s}$ is the characteristic radius (the logarithmic slope at $r=r_{s}$ is ${\rm d} \ln \rho /{\rm d} \ln r\,|_{r=r_{s}}=-m+\frac{1}{2}(m-3)$), $\rho_{s}=\rho(r_{s})$ and the index $m$ characterizes the slope of the profile in the central regions of the halo. The mass of the configuration enclosed in the virial radius $r_{\rm vir}$ is given as $M_{\rm vir}=4\pi(2)^{3-m}\rho_{s}r_{s}^{3}F(c)$ such that one can write $\rho_{s}=(1/3)(2)^{m-3}\Delta_{\rm vir}\rho_{\rm crit}c^{3}F^{-1}(c)$, where $c=r_{\rm vir}/r_{s}$ is the concentration parameter, $\Delta_{\rm vir}$ is the ratio between the mean density at the time of virialization and the critical density of the universe ($\Delta_{\rm vir}\approx 18\pi^{2}$ in the top-hat model for a flat Einstein-de Sitter universe, while $\Delta_{\rm vir}\sim 104$ in the $\Lambda$CDM cosmological model \citep{diemand})
and $F(c)=\int_{0}^{c}x^{2-m}(1+x)^{m-3}{\rm d} x$. This \emph{universal profile} has been widely used in modeling DMH in galaxy clusters, and the comparison of these models with a stellar polytropic-like profile -without the explicit contribution from the cosmological constant in the Lane-Emden equation- can be found in \cite{sussman,arieli}.
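The logarithmic slope quoted above can be checked numerically (an illustrative sketch, not from the original paper): for $\rho\propto(r/r_s)^{-m}(1+r/r_s)^{m-3}$, the slope ${\rm d}\ln\rho/{\rm d}\ln r$ evaluated at $r=r_s$ should equal $-m+\frac{1}{2}(m-3)$:

```python
import math

# ln rho(u) up to an additive constant, with u = r / r_s.
def log_rho(u, m):
    return -m * math.log(u) + (m - 3.0) * math.log(1.0 + u)

eps = 1e-6
for m in (0.0, 1.0, 1.5):
    # Central difference with respect to ln(u), evaluated at u = 1.
    slope = (log_rho(math.exp(eps), m) - log_rho(math.exp(-eps), m)) / (2 * eps)
    print(m, slope)  # m = 1 gives the familiar NFW slope of -2 at r = r_s
```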
Some differences can be described between the $\Lambda$LE and the NFW profiles. First, on fundamental
grounds, it is clear that the NFW profile does not satisfy the LE equation;
a basic reason for this is that dark matter is assumed to be collisionless and is only affected by gravity.
However, the fact that the real nature of dark matter is still an unsolved issue leaves an open door through which one can introduce interactions between dark matter particles, leading to a different equation of state (see for instance \citep{rem}). Regarding functional forms, one sees that in the central region the difference is abrupt, since the Lane-Emden equation has a flat density profile at $r=0$, while the NFW profile has a cuspy profile of the form $\rho \sim r^{-m}$.
It has been widely discussed how such a cuspy profile is inconsistent with data
showing central regions of clusters with homogeneous cores (see the discussion at the
end of this section). Such a small slope in the inner regions can be reproduced by the \emph{universal profile} in the case $m=0$, and by the profile derived from the $\Lambda$LE equation (where it is just a consequence of the initial conditions and hence independent of $\Lambda$). In this case, the study of the effects of the cosmological constant can be reduced to exploring the outer regions of the halos and comparing, for instance, the slope of the profiles and the virial radius predicted by each one.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=12cm]{lambdacompar_vir.ps}
\caption[]{{NFW profile (black, solid line) compared to the solutions of the $\Lambda$LE equation for different masses and different indices $n$, ranging from $n=3$ (blue, dots), $n=4$ (red, short-dashed line), $n=4.5$ (green, long-dashed line) to $n=4.9$ (magenta, dot short-dashed line), in the limiting case $\zeta_{\rm c}=\zeta_{\rm crit}$. The solutions of the $\Lambda$LE equation are shown for the critical value of the parameter $\zeta_{\rm crit}$, given in table \ref{tablelane2}.
The vertical black line represents the virial radius from the NFW profile.
The vertical colored lines represent the virial radius $R_{\rm vir}(\zeta_{\rm crit})=a\xi_{\rm vir}$ for the different polytropic indices.}}\label{compav}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=12cm]{lambdacomparcritica.ps}
\caption[]{{Same as fig \ref{compav} but for the radius given by the condition $\psi(\xi_{1})=0$.}} \label{compac}
\end{center}
\end{figure}
The density profiles predicted by the $\Lambda$LE equation and the NFW profile (for $m=1$) are presented in fig.\ref{compav} for the virial radius given by $\xi_{\rm vir}$ and in fig.\ref{compac} for the radius given by $\xi_{1}$, both with $\zeta_{\rm c}=\zeta_{\rm crit}$. Also, the behavior of the radius for different values of the mass and of the polytropic index is shown in Fig.~\ref{raddd} (with $r_{s}\approx 25.3\left( M_{\rm vir}/10^{12}M_{\odot}\right)^{0.46}$ kpc and $r_{\rm vir}=1.498\Delta_{\rm vir}^{-1/3}\left( M_{\rm vir}/ 10^{12}M_{\odot}\right)^{1/3}$ Mpc $\approx 255\left( M_{\rm vir}/ 10^{12}M_{\odot}\right)^{1/3}$ kpc \cite{gentile}).
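The quoted virial radius can be sanity-checked numerically (an illustrative aside, not part of the original paper); with $\Delta_{\rm vir}\approx 200$, the prefactor $1.498\,\Delta_{\rm vir}^{-1/3}$ Mpc indeed reduces to roughly the quoted $255$ kpc:

```python
# r_vir = 1.498 * Delta_vir**(-1/3) * (M_vir / 1e12 Msun)**(1/3) Mpc,
# expressed here in kpc. M12 is the virial mass in units of 1e12 Msun.
def r_vir_kpc(M12, Delta_vir):
    return 1498.0 * Delta_vir ** (-1.0 / 3.0) * M12 ** (1.0 / 3.0)

print(r_vir_kpc(1.0, 200.0))   # ~256 kpc, close to the quoted 255 kpc
print(r_vir_kpc(8.0, 200.0))   # the M**(1/3) scaling doubles the radius
```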
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=12cm]{mass_radius.ps}
\caption[]{{Mass-radius relation for the NFW (red, dashed line) profile compared with the solutions Eq. \ref{virtras} (blus, solid line) for four different values of $n$.}} \label{raddd}
\end{center}
\end{figure}
We see that the virial radius given by the $\Lambda$LE equation is
surprisingly close to the virial radius given by the NFW profile for
$n=3$ in the range of masses shown in fig \ref{raddd}, and as
pointed out in Eq.~(\ref{contrast}), this value yields for this model
$\Delta^{\Lambda LE}_{\rm vir}\approx 200$. However, as can be seen
from the plots, the $\Lambda$LE density profile is almost flat out to
the virial radius. On the other hand, the $m=0$ profile allows us to
parameterize it as $\rho(r)/\rho_{\rm vac}=2\zeta_{\rm
c}^{-1}(1+r/r_{s})^{-3}$, where the concentration parameter $c$ can be
written as $c+1\approx 1.11 \zeta_{\rm c}^{1/3}\Delta_{\rm
vir}^{1/3}$.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=12cm]{comp_m_1_n_3.ps}
\caption[]{{Generalized NFW profile (black solid line) for $m=0$ compared with the $n=3$ $\Lambda$LE profile, with its corresponding $\zeta_{\rm crit}$, for different masses.}} \label{comp1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=12cm]{comp_m_1_n_49.ps}
\caption[]{{Same as fig \ref{comp1} for $n=4.9$}} \label{comp4}
\end{center}
\end{figure}
In figs.~\ref{comp1} and \ref{comp4} we have compared both profiles
with the corresponding $\zeta_{\rm crit}$ for polytropic
indices $n=3$ and $n=4.9$. In the first case we see that the virial radius is of the
same order of magnitude as the one predicted by the NFW profile. For
$n=4.9$, these quantities differ by one order of magnitude. In all
cases, the slope of the NFW profiles changes faster than the
$\Lambda$LE profile, which implies that the polytropic configurations
enclosed by $r_{\rm vir}$ display an almost constant density.
The comparison made above between the polytropic configurations and the NFW profiles shows that
up to the cuspy behaviour the polytropic results agree with NFW for the polytropic index $n=3$.
It is worth pointing out here that it is exactly this cuspy behaviour which seems to be at odds
with observational facts \cite{arieli, Matos, Hoeft}. Our result for $n=3$ is then a good
candidate to describe DMH. Indeed, the undesired feature of the cuspy behaviour led at least some
groups to model the DMH as a polytropic configuration \cite{arieli, Dehnen, Matos, McKee, Debattista, Gonzalez, Yepes, Henriksen,
Hogan}. Sometimes it is claimed that a polytropic model is favoured over the results from N-body simulations \cite{Zavala}. However, the oscillatory behavior of the $n=3$ solutions of the $\Lambda$LE equation represents a disadvantage when compared with the NFW profiles, although the region of physical interest (below the virial radius) is well represented by the solutions of the $\Lambda$LE equation.
\section{Conclusions}
In this paper we have explored the effects of a positive cosmological
constant on the equilibrium and stability of astrophysical
configurations with a polytropic equation of state. We have found
that the radius of these kind of configurations is affected in the
sense that not all polytropic indices yield configurations with
definite radius even in the asymptotic sense. Among other, the widely
used isothermal sphere model becomes a non-viable model in the
presence of $\Lambda$ unless we are ready to introduce an arbitrary
cut-off which renders the model unappealing. Indeed, in this
particular case we have tried different definitions of a finite radius
with the result that none of them seems to be justified, either from
the phenomenological or from the theoretical point of view. This is
then an interesting global result: $\Lambda$ not only affects
quantitatively certain properties of large, low-density astrophysical
structures, but it also excludes certain commonly used models
regardless of what density we use.
For polytropic indices $n<5$ and for \emph{certain values} of the central density we
cannot find a well-defined radius. These \emph{certain values} are
encoded in a generalization of the equilibrium condition found for
spherical configurations (i.e., $\rho>2\rho_{\rm vac}$), written now as
$\rho_{\rm c}>\mathcal{A}_{n}\rho_{\rm vac}$. We obtain $\mathcal{A}_{1}=10.8$,
$\mathcal{A}_{3/2}=24.2$, $\mathcal{A}_{3}=307.7$, $\mathcal{A}_{4}\approx 4000$. These
values set a minimal central density for a given polytropic index
$n$. We have discussed such minimal density configurations and
determined their average density, which strongly depends on $n$.
Interestingly, the radius of such configurations has a connection to
the length scale which appears in the Schwarzschild--de Sitter metric as the
maximally possible radius for bound orbits \cite{bala1}. In this
framework we also found a solution of very low, non-constant density.
Indeed, the limiting value of the central density in the above equations
is a crucial point. Below this value no matter can be in equilibrium. However,
above this value low density objects can still exist. Both effects
are due to $\Lambda$: in the first case the external repulsive force is
too strong for the matter to be in equilibrium, in the second case
this force can counterbalance the attractive Newtonian gravity effects (even if the
pressure is small).
Other examples of low density configurations which we examined in some
detail are neutrino stars with fermion masses of the order of $1$ eV and $1$
keV. There we found that a nowadays dominating cosmological constant
affects both the mass and the radius of such exotic objects.
Such an effect could change these physical quantities by several orders of magnitude.
The magnitude of such effects, however, depends on the fermion masses and on the assumption that
the fermions in such a configuration would be
essentially non-relativistic.
Finally, we made a conjecture regarding boson stars, using variational methods in connection with the
Thomas-Fermi equations, which can give relatively good results even without
invoking the whole general relativistic formalism.
We then found extremely low density configurations for such astrophysical objects.
Notice, however, that this
conjecture relied purely on scaling arguments and in fact needs a full
general relativistic investigation to confirm the results obtained here.
We have compared polytropic configurations with Dark Matter density profiles
from N-body simulations. Surprisingly, we find a reasonable agreement between
both approaches for the polytropic index $n=3$ and restricting ourselves to the
virial radius. Our model does not have the undesired feature of the cuspy
behaviour of the NFW profiles.
The importance of the astrophysical properties and configurations
found in this article is that they are features specific to the
existence of a dark energy component. Hence, such
configurations (e.g., low density configurations) or properties, if
ever found in nature, would provide strong evidence for the presence of
a dark energy component. Such observations would be completely independent of, and thus complementary to,
other cosmological probes of dark energy such as Type Ia supernovae or the CMBR.
\section*{Acknowledgments}
We acknowledge Stefanie Phleps for her comments on the manuscript. DFM acknowledge support from the A. Humboldt
Foundation.
\bibliographystyle{mn2e}
\subsection{Centralized and cooperative repair models} The problem considered in this paper is motivated by the distributed nature of the system wherein the coded data is distributed across a large number of physical storage nodes. When some storage nodes fail, the repair task performed by the system relies on communication between individual nodes, which introduces new challenges in the code design. Coding schemes that address these challenges are known under the name of {\em regenerating codes}, a concept that was isolated and studied in the work of Dimakis et al. \cite{Dimakis10}. In paper \cite{Dimakis10} the authors suggested a new metric that has a bearing on the overall efficiency
of the system, namely, the {\em repair bandwidth}, i.e., the amount of data communicated between the nodes in the process of repairing failed nodes. Most works on this class of codes assume that the information is protected with
Maximum Distance Separable (MDS) codes which provide the optimal tradeoff between failure tolerance and storage overhead. Paper \cite{Dimakis10} also gave a lower bound on the minimum repair bandwidth of MDS codes,
known as the {\em cut-set bound}. Code families that achieve this bound with equality are said to have the {\em optimal repair property}.
Constructions of optimal-repair MDS codes (also known as {\em minimum storage regenerating}, or MSR codes) were proposed in
\cite{Rashmi11,Tamo13,Ye16,Sasid16,Ye16a,Tamo17RS}.
To encode information with an MDS code, the original file is divided into $k$ information blocks viewed as vectors over a finite field $F$. The encoding procedure then finds $r=n-k$ parity blocks, also viewed as vectors over $F$, which together with the information blocks form a codeword of a code of length $n$. The $n$ blocks of the codeword are stored on $n$ different storage nodes. Motivated by this model, we also refer to the coordinates of the codeword as nodes. The task of node repair therefore
amounts to erasure correction with the chosen code, and the special feature of the erasure correction problem arising from
the distributed data placement is the constraint on the repair bandwidth involved in the repair procedure.
Most studies of MDS codes with optimal repair bandwidth in the literature are concerned with a particular subclass of codes known as MDS {\em array codes} \cite{Blaum98}. An $(n,k,l)$ MDS array code over a finite field $F$ is formed of $k$ information nodes and $r=n-k$ parity nodes with the property that the contents of any $k$ out of $n$ nodes suffice to recover the codeword. Every
node is a column vector in $F^l,$ reflecting the fact that the system
views a large data block stored in one node as one coordinate of the codeword.
The parameter $l$ that determines the dimension
of each node is called {\em sub-packetization}.
While originally the repair problem was confined to a single node failure, studies into regenerating codes
have expanded into the task of repairing multiple erasures. The problem of repairing multiple erasures comes in
two variations.
One of them is the {\em centralized model}, where a single data center is responsible for the repair of all the failed nodes \cite{Cadambe13,Ye16,Rawat16a,Wang17,Zorgui17,Ye17,Zorgui18}, and the other is the {\em cooperative model}, where the failed nodes may communicate but are distinct, and the amount of data exchanged between them is included in the repair bandwidth \cite{Kermarrec11,Shum13,Li14,Shum16}.
The cut-set bounds on the repair bandwidth for multiple erasures under these two models were derived in \cite{Cadambe13} and \cite{Shum13} respectively.
Let $\mathcal{F}\subset [n], |\mathcal{F}|=h$ and $\mathcal{R}\subseteq[n]\backslash \mathcal{F}, |\mathcal{R}|=d$ be the sets of indices of the failed nodes and
the helper nodes, respectively, where we use the notation $[n]:=\{1,2,\dots,n\}.$
Informally speaking, under the {centralized model}, repair proceeds by downloading $\beta_j,j\in\mathcal{R}$ symbols of $F$ from {each of} the helper nodes {$C_j,j\in\mathcal{R}$}, and computing the values of the failed nodes. It is assumed that the repair is performed
by a data center having access to all the downloaded information, and so the {\em repair bandwidth} equals $\beta_\mathcal{F}(\mathcal{R})=\sum_{j\in\mathcal{R}}\beta_j$. The variation introduced by the {cooperative model} does not include the data center, and so the repair bandwidth includes not only the information downloaded from the helper nodes but also the information exchanged between the
failed nodes in the repair process. In other words, under the centralized model, each failed node has access to all the data downloaded from the helper nodes, while under the cooperative model, each failed node only has access to its own downloaded data.
\subsection{Formal statement of the problems}
Consider an $(n,k,l)$ MDS array code $\mathcal{C}$ over a finite field $F$ and let $C\in \mathcal{C}$ be a codeword.
We write $C$ as $(C_1,C_2,\dots,C_n)$,
where $C_i=(c_{i,0},c_{i,1},\dots,c_{i,l-1})^T\in F^l, i=1,\dots, n$ is the $i$th coordinate of $C$.
The node repair models can be formalized as follows.
\begin{definition}[Centralized model]
Let $\mathcal{F}$ and $\mathcal{R}$ be the sets of failed and helper nodes, and suppose that $|\mathcal{F}|=h\le r$ and $|\mathcal{R}|=d\ge k.$
We say that the failed nodes $\{C_i,i\in\mathcal{F}\}$ can be repaired from the helper nodes $\{C_j,j\in\mathcal{R}\}$
by downloading\footnote{We note the use of the application-inspired term ``download'' for evaluating the functions $f_j$ and making their values
available to the failed nodes. This term is used extensively throughout the paper.} $\beta_{\mathcal{F}}(\mathcal{R})$ symbols of $F$ if there are
$d$ numbers $\beta_j, j\in\mathcal{R}$, $d$ functions $f_j: F^l\to F^{\beta_{j}}, j\in\mathcal{R},$
and $h$ functions $g_i: F^{\sum_{j\in\mathcal{R}}\beta_{j}}\to F^l, i\in\mathcal{F}$
such that
\begin{enumerate}
\item for every $i\in \mathcal{F}$ and every $C\in\mathcal{C}$
$$
C_i=g_i(\{f_j(C_j),j\in\mathcal{R} \}),
$$
\item
$$
\sum_{j\in\mathcal{R}}\beta_j=\beta_{\mathcal{F}}(\mathcal{R}).
$$
\end{enumerate}
\end{definition}
Under the cooperative model, the repair process is divided into two rounds. In the first round, each failed node downloads data from the helper nodes, and in the second round, the failed nodes exchange data among themselves (namely, each failed node downloads data from the other failed nodes).
\begin{definition}[Cooperative model] \label{def:op} In the notation of the previous definition, we assume two rounds
of communication between the nodes. In the first round, each failed node $C_i,i\in\mathcal{F}$ downloads a vector $f_{ij}(C_j)$ from each helper node $C_j,j\in\mathcal{R},$ and in the second round, each failed node $C_i,i\in\mathcal{F}$ downloads a vector
$f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \})$ from each of the other failed nodes $C_{i'},i'\in\mathcal{F}\setminus\{i\}$.
We require that each failed node $C_i,i\in\mathcal{F}$ can be recovered from its own downloaded data
$f_{ij}(C_j),j\in\mathcal{R}$ and $f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \}),i'\in\mathcal{F}\setminus\{i\}$.
The amount of downloaded data in this two-round repair process is
$$
\sum_{i\in\mathcal{F}}\Big(\sum_{j\in\mathcal{R}} \dim_F \big( f_{ij}(C_j) \big)
+ \sum_{i'\in\mathcal{F}\setminus\{i\}} \dim_F \big( f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \}) \big) \Big),
$$
where $\dim_F(\cdot)$ is the dimension of the argument expressed as a vector over $F.$
\end{definition}
This definition may look somewhat restrictive in the part where the communication is constrained to only two rounds.
Indeed, in the definition proposed in \cite{Shum13}, the repair process may include an arbitrary number $T$ of communication rounds.
However, in this paper we show that it suffices to consider $T=2$ to construct codes with optimal repair bandwidth for all
possible parameters, and therefore we rely on the above definition, which also leads to simplified notation. At the same time, it may be that for other problems of cooperative repair, such
as optimal-access repair or others, more than two rounds are in fact necessary.
Given a code $\mathcal{C}$, define $N_{\ce}(\mathcal{C},{\mathcal{F}},{\mathcal{R}})$ and $N_{\co}(\mathcal{C},{\mathcal{F}},{\mathcal{R}})$ as the smallest number of symbols of $F$ one needs to download in order to recover the failed
nodes $\{C_i,i\in{\mathcal{F}}\}$ from the helper nodes $\{C_j,j\in{\mathcal{R}}\}$ under the centralized model and the cooperative model, respectively.
The repair bandwidth of the code is defined as follows.
\begin{definition}[Repair bandwidth]
Let $\mathcal{C}$ be an $(n,k,l)$ {MDS} array code over a finite field $F$.
The \emph{$(h,d)$-repair bandwidth} of the code $\mathcal{C}$ under centralized/cooperative repair model is given by
\begin{equation}\label{eq:beta}
\begin{aligned}
\beta_{\ce}(h,d):=\max_{|{\mathcal{F}}|=h,|{\mathcal{R}}|=d, {\mathcal{F}}\bigcap{\mathcal{R}}=\emptyset} N_{\ce}(\mathcal{C},{\mathcal{F}},{\mathcal{R}}),\\
\beta_{\co}(h,d):=\max_{|{\mathcal{F}}|=h,|{\mathcal{R}}|=d, {\mathcal{F}}\bigcap{\mathcal{R}}=\emptyset} N_{\co}(\mathcal{C},{\mathcal{F}},{\mathcal{R}}).
\end{aligned}
\end{equation}
\end{definition}
As already mentioned, the quantity $\beta(h,d)$ satisfies a general lower bound. In the next theorem we collect
results from several papers that establish different versions of this result.
\begin{theorem}[Cut-set bound \cite{Dimakis10,Cadambe13,Shum13}, this paper] \label{def:csb}
Let $\mathcal{C}$ be an $(n,k,l)$ MDS array code. For any two disjoint subsets ${\mathcal{F}},{\mathcal{R}}\subseteq[n]$ such that $|{\mathcal{F}}|\le r$ and $|{\mathcal{R}}|\ge k,$ we have the following inequalities:
\begin{align}
N_{\ce}(\mathcal{C},{\mathcal{F}},{\mathcal{R}}) & \ge \frac{|{\mathcal{F}}||{\mathcal{R}}|l}{|{\mathcal{F}}|+|{\mathcal{R}}|-k}, \label{eq:csce}\\
N_{\co}(\mathcal{C},{\mathcal{F}},{\mathcal{R}}) & \ge \frac{|{\mathcal{F}}|(|{\mathcal{R}}|+|{\mathcal{F}}|-1)l}{|{\mathcal{F}}|+|{\mathcal{R}}|-k}. \label{eq:cutset}
\end{align}
\end{theorem}
We note that in \cite{Shum13}, the bound \eqref{eq:cutset} was proved under the additional assumption that each failed node downloads the same amount of data from each helper node, and each failed node also downloads the same amount of data from each of the other failed nodes (the {\em uniform download assumption}), while our proof of \eqref{eq:cutset} in this paper does not require any additional assumptions.
A self-contained rigorous proof of \eqref{eq:cutset} is given in Section~\ref{ap:st} as a part of the proof of Theorem \ref{thm:st} below.
Inequality \eqref{eq:csce} gives the cut-set bound for the centralized model, and \eqref{eq:cutset} gives the cut-set bound under the cooperative one. For the case of a single failed node, there is no difference between the two repair models, and
these bounds coincide.
Note that although in this paper we consider only two-round cooperative repair schemes, bound \eqref{eq:cutset} holds for cooperative repair with any number of communication rounds.
If $\beta_{\ce}(h,d)$ (resp., $\beta_{\co}(h,d)$) meets the bound \eqref{eq:csce} (resp., \eqref{eq:cutset}) with equality, i.e.,
$$
\beta_{\ce}(h,d)=\frac{hdl}{h+d-k} \quad
\Big( \text{resp.,~~} \beta_{\co}(h,d)=\frac{h(h+d-1)l}{h+d-k} \Big),
$$
we say that the code $\mathcal{C}$ has the {\em $(h,d)$-optimal repair property} under the centralized (resp., cooperative) model.
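The two cut-set expressions above are simple enough to tabulate; the snippet below (an illustrative aside with arbitrarily chosen sample parameters) evaluates both bounds as exact fractions:

```python
from fractions import Fraction

# Cut-set lower bounds on the repair bandwidth (in symbols of F).
def beta_centralized(h, d, k, l):
    return Fraction(h * d * l, h + d - k)

def beta_cooperative(h, d, k, l):
    return Fraction(h * (h + d - 1) * l, h + d - k)

# Illustrative parameters: h failed nodes, d helpers, k information
# nodes, sub-packetization l.
h, d, k, l = 2, 12, 10, 4**4
print(beta_centralized(h, d, k, l))   # 1536
print(beta_cooperative(h, d, k, l))   # 1664
```

For a single erasure ($h=1$) the two bounds coincide, consistent with the remark after Theorem~\ref{def:csb}.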
Let us give a heuristic argument in favor of \eqref{eq:cutset} based on the cut-set bound for repairing single erasure. Let $i$ be one of the indices of the failed nodes. Suppose that all the other failed nodes $C_j,j\in\mathcal{F}\setminus\{i\}$ are functional, and we need to repair $C_i$. Using either \eqref{eq:csce} or \eqref{eq:cutset} with $|\mathcal{F}|=1,$ we see that $C_i$ needs
to download at least $l/(|\mathcal{F}|+|\mathcal{R}|-k)$ field symbols from each of the nodes $C_j,j\in\mathcal{R}\cup\mathcal{F}\setminus\{i\}.$
Therefore each failed node $C_i,i\in\mathcal{F}$ needs to download at least $(|\mathcal{F}|+|\mathcal{R}|-1)l/(|\mathcal{F}|+|\mathcal{R}|-k)$ symbols of $F$ in total.
Thus, if \eqref{eq:cutset} is achievable with equality, then each failed node can be repaired as though all the other failed nodes were functional and available.
We note that this argument is not rigorous because the single-erasure cut-set bound is derived under a one-round repair process while the repair process under the cooperative model is divided into two rounds.
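The per-node accounting in this heuristic can be written out explicitly (an illustrative check, not part of the original argument): each failed node pulls $l/(h+d-k)$ symbols from each of the $d+h-1$ other available nodes, and summing over the $h$ failed nodes reproduces the cooperative cut-set bound:

```python
from fractions import Fraction

# Download of one failed node: l/(h+d-k) symbols from each of the
# d + h - 1 other available nodes (helpers plus the other failed nodes).
def per_node_download(h, d, k, l):
    return Fraction(l, h + d - k) * (d + h - 1)

# Summing over the h failed nodes gives the cooperative cut-set bound.
def total_download(h, d, k, l):
    return h * per_node_download(h, d, k, l)

for (h, d, k, l) in [(2, 12, 10, 256), (3, 11, 10, 1024)]:
    assert total_download(h, d, k, l) == Fraction(h * (h + d - 1) * l, h + d - k)
print("heuristic accounting matches the cooperative cut-set bound")
```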
The argument in the previous paragraph also suggests that optimality of a code under cooperative repair implies
its optimality under centralized repair.
We formalize this idea in the next theorem.
\begin{theorem}[{Cooperative model is stronger than centralized model}]\label{thm:st}
Let $\mathcal{C}$ be an $(n,k,l)$ MDS array code and let ${\mathcal{F}},{\mathcal{R}}\subseteq[n]$ be two disjoint subsets such that $|{\mathcal{F}}|\le r$ and $|{\mathcal{R}}|\ge k.$ If
\begin{equation}\label{eq:of}
N_{\co}(\mathcal{C},{\mathcal{F}},{\mathcal{R}}) = \frac{|{\mathcal{F}}|(|{\mathcal{R}}|+|{\mathcal{F}}|-1)l}{|{\mathcal{F}}|+|{\mathcal{R}}|-k},
\end{equation}
then
\begin{equation}\label{eq:xv}
N_{\ce}(\mathcal{C},{\mathcal{F}},{\mathcal{R}}) = \frac{|{\mathcal{F}}||{\mathcal{R}}|l}{|{\mathcal{F}}|+|{\mathcal{R}}|-k}.
\end{equation}
The statement of the theorem holds for cooperative repair schemes with any number $T\ge 2$ of communication rounds.
\end{theorem}
The statement in Theorem~\ref{thm:st} is trivially true under the uniform download assumption and in this form it was stated in \cite{Rawat16a}. In this paper we prove the theorem in Section~\ref{ap:st} under no additional assumptions.
The following arguments provide an intuitive explanation of its claim in the case of $T=2$, and they can be easily extended to any $T$.
As mentioned above, for \eqref{eq:of} to hold with equality, each failed node $C_i,i\in\mathcal{F}$ should
download $l/(|\mathcal{F}|+|\mathcal{R}|-k)$ symbols of $F$ from each of the nodes $C_j,j\in\mathcal{R}\cup(\mathcal{F}\setminus\{i\})$ in the course
of the two-round repair process. Therefore, each failed node $C_i,i\in\mathcal{F}$
downloads only $|\mathcal{R}|l/(|\mathcal{F}|+|\mathcal{R}|-k)$ symbols of $F$ in total from all the helper nodes $\{C_j,j\in\mathcal{R}\}.$
Switching to the centralized model, we observe that once these symbols are made available to one failed node,
they are automatically available to all the other failed nodes at no cost to the bandwidth, and so
\eqref{eq:xv} follows immediately.
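As a numerical sanity check of this accounting (an illustrative Python sketch, with arbitrary parameter values; not part of any proof), one can verify that subtracting the failed-to-failed traffic from the cooperative bound \eqref{eq:cutset} leaves exactly the centralized bound \eqref{eq:csce}:

```python
from fractions import Fraction

def coop_cutset(h, d, k, l):
    # cooperative bound (eq:cutset): each of the h failed nodes downloads
    # l/(h+d-k) symbols from each of its d + (h-1) counterparts
    return Fraction(h * (h + d - 1) * l, h + d - k)

def cent_cutset(h, d, k, l):
    # centralized bound (eq:csce): only helper-to-failed-node traffic counts
    return Fraction(h * d * l, h + d - k)

for h, d, k, l in [(2, 11, 10, 6), (3, 12, 10, 30), (4, 9, 6, 14)]:
    per_link = Fraction(l, h + d - k)   # symbols sent over each individual link
    # removing the h*(h-1) failed-to-failed links leaves the centralized bound
    assert coop_cutset(h, d, k, l) - h * (h - 1) * per_link == cent_cutset(h, d, k, l)
```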
\remove{
\begin{definition}[Optimal repair property]\label{def:orp}
We say that an $(n,k,l)$ MDS code $\mathcal{C}$ has the \emph{$(h,d)$-optimal repair} property under the centralized model if the $(h,d)$-repair bandwidth of $\mathcal{C}$ (see \eqref{eq:beta})
equals
$$
\beta_{\ce}(h,d)=\frac{hdl}{h+d-k},
$$
meeting the lower bound in \eqref{eq:csce} with equality.
Similarly, we say that an $(n,k,l)$ MDS code $\mathcal{C}$ has the \emph{$(h,d)$-optimal repair} property under the cooperative model if the $(h,d)$-repair bandwidth of $\mathcal{C}$ (see \eqref{eq:beta})
equals
$$
\beta_{\co}(h,d)=\frac{h(h+d-1)l}{h+d-k},
$$
meeting the lower bound in \eqref{eq:cutset} with equality.
\end{definition}}
According to Theorem~\ref{thm:st}, MDS codes with $(h,d)$-optimal repair property under the cooperative model also have the same property under the centralized model. At the same time, it is not known how to transform optimal centralized-repair codes
into cooperative-repair codes. This might be the reason why the latter are more difficult to construct.
Indeed, while general $(h,d)$-optimal repair MDS codes for the centralized model are available in several variations \cite{Ye16,Goparaju17,Ye17}, MDS codes with the same property under the cooperative model are known only for some special values of $h$ and $d$. Specifically, the following results appeared in the literature. Paper \cite{Shum13} constructed optimal MDS codes for cooperative repair for the (trivial) case $d=k$, and \cite{Li14} presented a family of optimal MDS codes for the repair of two erasures in the regime of low rate $k/n \le 1/2$ (more precisely, \cite{Li14} constructed $(n,k)$ MDS codes with the $(2,d)$-optimal repair property for any $n,k,d$ such that $2k-3\le d\le n-2$).
Thus, prior to our work, even the existence problem of cooperative MDS codes with the $(h,d)$-optimal repair property for general values of $h$ and $d$ (apart from the two special cases mentioned above) was an open question\footnote{In \cite{Shum13}, the authors showed that the cut-set bound \eqref{eq:cutset} is achievable under the weaker ``functional repair" requirement, which does not assume that the repair scheme recovers the exact content of the failed nodes, as opposed to the more prevalent exact repair requirement considered in this paper.}.
In the rest of the paper we focus on the cooperative model, and, unless stated otherwise, all the concepts
and objects mentioned below such as the repair bandwidth, the cut-set bound, etc., implicitly assume this
model.
Our results in this work are as follows:
\begin{enumerate}
\item We give a complete solution of repairing multiple erasures for all possible parameters. More precisely, given any $n,k,h,d$ such that $2\le h \le n-d\le n-k-1$, we present an explicit $(n,k)$ MDS code with the
$(h,d)$-optimal repair property.
We limit ourselves to the case of $d\ge k+1$ because constructions for $d=k$ were already given in \cite{Shum13}.
The size of the underlying finite field is $sn$ for all constructions, where $s:=d+1-k.$ At the same time, the sub-packetization $l$ is rather large: for $h=2$ we need to take approximately $l=s^{n(n-1)}$, while for general $d$ and $h$ it is approximately $l=s^{h\binom nh}.$ We do not know
whether this is necessary or is merely an artifact of our construction.
\item
We prove the cut-set bound \eqref{eq:cutset} for the most general case without the uniform download assumption, and
we also show that any MDS code that affords cooperative optimal repair is also optimally repairable under the centralized model
(see Theorem~\ref{thm:st}).
\end{enumerate}
\subsection{Organization of this paper}
In Section~\ref{ap:st}, we prove the general versions of the cut-set bound \eqref{eq:cutset} and Theorem~\ref{thm:st} without the uniform download assumption.
In Section~\ref{sect:sl} we prove a technical lemma which forms the core of the proposed repair schemes. Various versions of this lemma will be used throughout the paper.
Moving to the code constructions, we start with the special case of $h=2$ and $d=k+1$ to illustrate the new ideas behind the proposed code families. These results are presented in Section~\ref{sect:bdblock}. Namely, in
Section~\ref{sect:first} we construct MDS codes $\mathcal{C}_{2,k+1}^{(0)}$ that can optimally repair the first two nodes
(or any {\em given} pair of nodes) from any $d=k+1$ helper nodes. In Section~\ref{sect:warmup}, we use this code as a building block
to construct $(n,k)$ MDS codes $\mathcal{C}_{2,k+1}$ with the $(2,d=k+1)$-optimal repair property.
In Section~\ref{sect:gend}, we deal with general values of $d, k+1\le d\le n-2$. Similarly to the above, in Section~\ref{sect:fd} we construct a code $\mathcal{C}_{2,d}^{(0)}$ that supports
optimal repair of the first two nodes, and in Section~\ref{sect:rb} we use it
as a building block to construct MDS codes $\mathcal{C}_{2,d}$ with the $(2,d)$-optimal repair property for general values of $d, k+1\le d\le n-2$.
In Section~\ref{sect:h} we construct $(n,k)$ MDS codes with $(h,d=k+1)$-optimal repair property for general values of $h, 2\le h\le r-1$. Following the route chosen above, in Section~\ref{sect:gh} we handle the case of repairing the first $h$ nodes while in Section~\ref{sect:lo} we extend the construction to repair any subset of $h$ failed nodes. The corresponding codes are labeled as $\mathcal{C}_{h,k+1}^{(0)}$ and $\mathcal{C}_{h,k+1},$ respectively.
Finally, in Section~\ref{sect:fg}, we present the main result of this paper---the construction for general values of both $h$ and $d$. In Section~\ref{sect:hd0} we construct an MDS code $\mathcal{C}_{h,d}^{(0)}$ that supports
optimal repair of the first $h$ nodes, and in Section~\ref{sect:hd1} we use it as a building block to construct an $(n,k)$ MDS code $\mathcal{C}_{h,d}$ with the $(h,d)$-optimal repair property for general values of $h$ and $d$, $2\le h \le n-d\le r-1$.
The extension from repairing a fixed $h$-subset of nodes to any subset of cardinality $h$ relies on an idea that has already
appeared in the literature on regenerating codes \cite{Ye16,Goparaju17}, albeit in a somewhat veiled form. We isolate and illustrate this idea in
Section~\ref{sect:hew}. Apart from revealing the structure behind our constructions, it also enables us to give a family
of $(n,k)$ {\em universal MSR codes} with the $(h,d)$-optimal repair property for all $1\le h\le n-d\le n-k$ simultaneously, i.e., these codes can optimally repair any number of failed nodes from any number of helper nodes. This construction forms a simple extension of the main results, and is given in a brief Section~\ref{sect:universal}.
Note that Sections~\ref{sect:bdblock}-\ref{sect:h} serve as preparation for Section~\ref{sect:fg}, and all the constructions in Sections~\ref{sect:bdblock}-\ref{sect:h} are special cases of the constructions in Section~\ref{sect:fg}. Even though the structure of the sections looks similar, each of the constructions adds new elements to the basic idea, and without the introductory sections it may be difficult to understand the intuition behind the code constructions in
later parts of the paper. At the same time, we note that the codes in Section~\ref{sect:fg} reduce to the codes in Sections~\ref{sect:gend} and \ref{sect:h} upon appropriate adjustment of the parameters, such as taking $d=k+1$ or $h=2$, etc. (see Section~\ref{sect:connections} below for more details).
The complete reduction scheme between the code families in this paper is shown in Fig.~1, and the parameters of the codes are listed in Table~\ref{table:parameters}.
\begin{table}[ht]
\captionsetup{width=.8\linewidth,font=scriptsize}
\centering
\begin{tabular}{|l||cc|cc|}
\hline
&\multicolumn{2}{|c|}{Repairing the first $h$ nodes}&\multicolumn{2}{c|}{Repairing any $h$ nodes}\\
\hline
\multicolumn{1}{|c||}{Values of $h=|\mathcal{F}|,d=|\mathcal{R}|$} &$|F|$ &$l$ &\hspace*{.1in}$|F|$ &$l$\\
\hline
Sec.~\ref{sect:bdblock}: $h=2,d=k+1$ &$n+2$ &3 &\hspace*{.1in}$2n$ &$3^{\binom n2}$\\
Sec.~\ref{sect:gend}: $h=2,$ any $d$ &$n+2(s-1)$ &$s^2-1$ &\hspace*{.1in}$sn$ &$(s^2-1)^{\binom n2}$\\
Sec.~\ref{sect:h}: any $h,$ $d=k+1$ &$n+h$ &$h+1$ &\hspace*{.1in}$2n$ &$(h+1)^{\binom n h}$\\
Sec.~\ref{sect:fg}: any $h$, any $d$ &$n+h(s-1)$ &$(h+d-k)(s-1)^{h-1}$ &\hspace*{.1in}$sn$ &$((h+d-k)(s-1)^{h-1})^{\binom n h}$\\
\hline
\end{tabular}
\caption{\noindent\hangindent .5in \hangafter=1 We list the parameters (field size, sub-packetization) of the codes constructed in this paper, where $s:=d+1-k$. In the first of the two pairs of columns the codes are constructed for optimal repair of the {\em first $h$ nodes only}, while the second pair gives the parameters of codes that can optimally repair {\em any} $h$ failed nodes.}\label{table:parameters}
\end{table}
\begin{center}
\begin{tikzpicture}
\draw node at (9.5,-1.5) [text width=0.3in, align=center](A) {$\mathcal{C}_{2,k+1}^{(0)}$ {\scriptsize Sec.~\ref{sect:first}}};
\draw node at (7,0.5) [text width=0.3in, align=center](B) {$\mathcal{C}_{2,d}^{(0)}$ {\scriptsize Sec.~\ref{sect:fd}}};
\draw node at (9.5,0.3) [text width=0.3in, align=center](C) {$\mathcal{C}_{2,k+1}$ {\scriptsize Sec.~\ref{sect:warmup}}};
\draw node at (12,0.5) [text width=0.3in, align=center](D) {$\mathcal{C}_{h,k+1}^{(0)}$ {\scriptsize Sec.~\ref{sect:gh}}};
\draw node at (7,2.5) [text width=0.3in, align=center](E) {$\mathcal{C}_{2,d}$ {\scriptsize Sec.~\ref{sect:rb}}};
\draw node at (12,2.5) [text width=0.3in, align=center](F) {$\mathcal{C}_{h,k+1}$ {\scriptsize Sec.~\ref{sect:lo}}};
\draw node at (9.5,2.3) [text width=0.3in, align=center](G) {$\mathcal{C}_{h,d}^{(0)}$ {\scriptsize Sec.~\ref{sect:hd0}}};
\draw node at (9.5,4) [text width=0.3in, align=center](H) {$\mathcal{C}_{h,d}$ {\scriptsize Sec.~\ref{sect:hd1}}};
\draw[->, >=stealth,line width=.2mm] (B) -- (A) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (C) -- (A) node[draw=none,fill=none,font=\scriptsize,near start,above] {};
\draw[->, >=stealth,line width=.2mm] (D) -- (A) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (E) -- (B) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (E) -- (C) node[draw=none,fill=none,font=\scriptsize,midway,above] {};
\draw[->, >=stealth,line width=.2mm] (F) -- (C) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (F) -- (D) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (G) -- (B) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (G) -- (D) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (H) -- (G) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (H) -- (E) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw[->, >=stealth,line width=.2mm] (H) -- (F) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\end{tikzpicture}
\noindent\begin{minipage}{.75\linewidth}{\footnotesize \noindent\hangindent=.35in\hangafter=1
Fig.1: Relations between the code families constructed in the paper. Arrows point from more general
code families to their subfamilies. The superscript $^{(0)}$ indicates that the code supports optimal repair of the first two (or the first $h$)
erasures only. \par}\end{minipage}
\end{center}
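The closed-form entries of Table~\ref{table:parameters} specialize consistently: the following Python sketch (with illustrative values of $n$ and $k$ only) checks that the general row for arbitrary $h$ and $d$ reduces to the rows for the three special cases.

```python
def field_size(n, h, s):
    # |F| column of the table ("repairing the first h nodes"): n + h(s-1)
    return n + h * (s - 1)

def subpack(h, d, k, s):
    # l column of the table ("repairing the first h nodes"): (h+d-k)(s-1)^{h-1}
    return (h + d - k) * (s - 1) ** (h - 1)

n, k = 12, 8                      # illustrative parameters, r = n - k = 4
# row h = 2, d = k + 1 (s = 2): |F| = n + 2 and l = 3
assert field_size(n, 2, 2) == n + 2 and subpack(2, k + 1, k, 2) == 3
# row h = 2, general d: |F| = n + 2(s-1) and l = s^2 - 1
for d in range(k + 1, n - 1):
    s = d + 1 - k
    assert field_size(n, 2, s) == n + 2 * (s - 1)
    assert subpack(2, d, k, s) == s * s - 1
# row general h, d = k + 1 (s = 2): |F| = n + h and l = h + 1
for h in range(2, n - k):
    assert field_size(n, h, 2) == n + h and subpack(h, k + 1, k, 2) == h + 1
```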
\subsection{Future directions}
\begin{enumerate}
\item In this paper we consider the problem of repairing multiple erasures for MDS codes, which correspond to the minimum storage regenerating (MSR) point on the trade-off curve between storage and repair bandwidth in the regenerating code literature \cite{Dimakis10,Elyasi16}. A natural future direction is to extend our results to the whole trade-off curve, starting with the minimum bandwidth regenerating (MBR) point.
\item The repair problem of Reed-Solomon (RS) codes has attracted significant attention recently \cite{Guruswami16,Dau17,Dau16,Chowdhury17,Bartan17,Ye16b,Tamo17RS,Ye17,Tamo18}. In particular, explicit RS code constructions with the $(h,d)$-optimal repair property under the centralized model were given in \cite{Ye17}. Can this result be extended to the cooperative model (and are two rounds enough)? Note that cooperative repair of (full-length) RS codes was previously considered in \cite{Dau16}, which gave schemes for repairing 2 and 3 erasures with small repair bandwidth (since codes in \cite{Dau16} have small $l$, the repair bandwidth ends up being rather far away from the cut-set bound).
\item Let us consider the regime where we fix the number of parity nodes $r:=n-k$ and let $n$ grow.
The sub-packetization value of our MDS code construction with the $(h,d)$-optimal repair property scales as $\exp(\Theta(n^h))$ in this regime, which is much larger than its counterpart under the centralized model, where the sub-packetization value is $\exp(O(n))$ (see \cite{Ye16}). One possible reason is that since the cooperative model is more restrictive than the centralized model, the larger sub-packetization is the penalty we have to pay. The other possibility is that our construction can be improved in terms of the sub-packetization value. This raises an open question of either deriving a lower bound on sub-packetization for the cooperative model (cf. also Table~\ref{table:parameters}) or constructing codes with smaller sub-packetization.
\item Several families of codes under centralized repair also have the {\em optimal access} property, wherein the number
of field symbols accessed at the helper nodes equals the number of symbols downloaded for the purposes of repair \cite{Sasid16,Ye16a}.
Is it possible to design optimal-repair codes for the cooperative model that reduce or minimize the number of symbols accessed during the repair process?
\end{enumerate}
\section{Proof of \eqref{eq:cutset} and Theorem~\ref{thm:st}} \label{ap:st}
\remove{Although our proof below is written for two-round cooperative repair schemes, a simple modification allows us to apply the same proof to cooperative repair with any number of communication rounds, which means that Theorem~\ref{thm:st} holds for multiple-round cooperative repair; see Remark~\ref{rm:mr} for details.}
Let $\mathcal{C}$ be an $(n,k,l)$ MDS code over $F$. Our goal is to prove that if \eqref{eq:cutset} holds with equality, then so does \eqref{eq:csce}. We will argue
by showing that inequality \eqref{eq:csce} implies \eqref{eq:cutset} and then observe that the equality in \eqref{eq:cutset}
implies the same for \eqref{eq:csce}. The first step of this
argument also yields a self-contained proof of the cooperative cut-set bound \eqref{eq:cutset}.
Recall that $h:=|\mathcal{F}|$ and $d:=|\mathcal{R}|$. To shorten the expressions, below we use the following notation
$$
D_i(\mathcal{R})= \sum_{j\in\mathcal{R}}\dim_F \big( f_{ij}(C_j) \big), \quad D_{i}(\mathcal{F})= \sum_{i'\in\mathcal{F}\setminus\{i\}} \dim_F \big( f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \}) \big)
$$
for the number of symbols of $F$ downloaded by $C_i\in \mathcal{F}$ from the helper nodes (in the first round of repair) and
from the other failed nodes (in the second round of repair), respectively, where the functions $f_{i,\cdot}$ were introduced in
Definition~\ref{def:op}. For a given node $C_i$ there are $d+h-1$ such functions, and therefore, in total there are $h(d+h-1)$
of them for any given subsets $\mathcal{F},\mathcal{R}.$ Our goal is to show that
\begin{equation}\label{eq:gl}
\sum_{i\in\mathcal{F}}( D_i(\mathcal{R})
+ D_{i}(\mathcal{F}) )
\ge \frac{h(h+d-1)}{h+d-k}l.
\end{equation}
Our proof relies on the following simple observation: in the first round of the repair process, the data downloaded from the helper nodes by all the failed nodes is the following set of vectors:
\begin{equation}\label{eq:rm}
\{f_{ij}(C_j),i\in\mathcal{F},j\in\mathcal{R}\}.
\end{equation}
After obtaining this set of vectors, the failed nodes can recover their values by performing additional information exchange
during the second round of repair. Recalling the centralized model, this means that all the information needed
to collectively repair the failed nodes is contained in the set \eqref{eq:rm}. Therefore, on account of the
centralized version of the cut-set bound \eqref{eq:csce} we have
\begin{equation}\label{eq:pl1}
\sum_{i\in\mathcal{F}} D_i(\mathcal{R}) \ge \frac{hd}{h+d-k}l.
\end{equation}
To bound the second term on the left-hand side of \eqref{eq:gl}, we use the following basic fact about MDS codes: for an $(n,k)$ MDS code, any subset of $k-1$ coordinates contains no information about any other coordinate of the code. Assume a uniform distribution
on the codewords $C=(C_1,\dots,C_n) \in \mathcal{C}$ and (by a slight abuse of notation) use the same symbols $C_i, i=1,\dots,n$ for the associated random variables.
For any $i\in [n]$ (in particular, for any $i\in\mathcal{F}$) and any subset $\mathcal{S}\subseteq \mathcal{R}$ of the helper nodes of size $|\mathcal{S}|=k-1$, we have
$$
H(C_i)=H(C_i|\{C_j,j\in\mathcal{S}\})=l \log_2|F|,
$$
where $H(X|Y)$ is the conditional entropy of $X$ given $Y$, measured in bits. Applying a deterministic function to $Y$
can only increase the conditional entropy, and therefore for any $\mathcal{S}\subseteq\mathcal{R}, |\mathcal{S}|=k-1$ we have
\begin{equation}\label{eq:k1}
H(C_i|\{f_{ij}(C_j),j\in\mathcal{S}\})=l \log_2|F|.
\end{equation}
On the other hand, each $C_i,i\in\mathcal{F}$ is uniquely determined by
$\{f_{ij}(C_j),j\in\mathcal{R}\}\cup\{f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \}):i'\in\mathcal{F}\setminus\{i\}\}$, so
\begin{equation}\label{eq:k2}
H(C_i|\{f_{ij}(C_j),j\in\mathcal{R}\}\cup\{f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \}):i'\in\mathcal{F}\setminus\{i\}\})=0.
\end{equation}
Combining \eqref{eq:k1} and \eqref{eq:k2}, and using Lemma~\ref{lem:ax} below, we obtain that
\begin{align}\label{eq:H}
H \left( \{f_{ij}(C_j),j\in\mathcal{R}\setminus\mathcal{S}\}\cup\{f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \}):i'\in\mathcal{F}\setminus\{i\}\} \right)
\ge l \log_2|F|.
\end{align}
Therefore, for any $i\in\mathcal{F}$ and any $\mathcal{S}\subseteq \mathcal{R}, |\mathcal{S}|=k-1$
\begin{align}
\sum_{j\in\mathcal{R}\setminus\mathcal{S}} \dim_F \big( f_{ij}(C_j) \big)
+ \sum_{i'\in\mathcal{F}\setminus\{i\}} \dim_F \big( f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \}) \big) \ge l \label{eq:js}
\end{align}
(when multiplied by $\log_2|F|$, the left-hand side of \eqref{eq:js} is an upper bound on the entropy on the left-hand side of \eqref{eq:H}: a random variable taking values in an $F$-linear space of dimension $\delta$ has entropy at most $\delta\log_2|F|$, with equality under the uniform distribution; hence \eqref{eq:js} is implied by \eqref{eq:H}. Note also
the switch of the base of logarithms from 2 to $|F|$.)
Let us sum \eqref{eq:js} over all subsets $\mathcal{S}\subseteq\mathcal{R}$ of size $|\mathcal{S}|=k-1$.
Only the first term on the left-hand side depends on $\mathcal{S}$, and for every $j\in\mathcal{R}$, the term $\dim_F \big( f_{ij}(C_j) \big)$ appears for $\binom{d-1}{k-1}$ different choices of $\mathcal{S}.$ Thus we have
$$
\binom{d-1}{k-1} D_i(\mathcal{R})
+ \binom{d}{k-1} D_i(\mathcal{F}) \ge \binom{d}{k-1} l, \quad i\in \mathcal{F}.
$$
Dividing both sides by $\binom{d}{k-1},$ we obtain that for every $i\in\mathcal{F}$,
$$
\frac{d-k+1}{d} D_i(\mathcal{R})
+ D_i(\mathcal{F}) \ge l.
$$
Let us sum these inequalities on all $i\in\mathcal{F}$. We obtain
\begin{equation}\label{eq:pl2}
\frac{d-k+1}{d} \sum_{i\in\mathcal{F}} D_i(\mathcal{R})
+ \sum_{i\in\mathcal{F}} D_i(\mathcal{F}) \ge hl.
\end{equation}
Multiplying \eqref{eq:pl1} on both sides by $\frac{k-1}{d}$ and then adding it to \eqref{eq:pl2}, we obtain the desired inequality \eqref{eq:gl}.
This completes the proof of \eqref{eq:cutset}.
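The algebraic step that combines \eqref{eq:pl1} and \eqref{eq:pl2} amounts to the identity $\frac{k-1}{d}\cdot\frac{hd}{h+d-k}+h=\frac{h(h+d-1)}{h+d-k}$. The following Python sketch (with $l=1$, since all the bounds scale linearly in $l$) verifies this identity, together with the fact that the coefficients of $\sum_i D_i(\mathcal{R})$ add up to $1$, over a range of parameters:

```python
from fractions import Fraction

def combine(h, d, k):
    # right-hand sides of (pl1), (pl2) and of the target inequality (gl), l = 1
    pl1 = Fraction(h * d, h + d - k)
    pl2 = Fraction(h)
    gl  = Fraction(h * (h + d - 1), h + d - k)
    # coefficients of sum_i D_i(R) coming from (pl2) and the scaled (pl1):
    assert Fraction(d - k + 1, d) + Fraction(k - 1, d) == 1
    return Fraction(k - 1, d) * pl1 + pl2 == gl

assert all(combine(h, d, k)
           for k in range(2, 8) for d in range(k, 12) for h in range(1, 5))
```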
We are left to prove the claim that for a given code $\mathcal{C}$, \eqref{eq:of} implies \eqref{eq:xv}. Assuming $\eqref{eq:of},$
we observe that there is a choice of the functions $\{\{f_{ij}, j\in \mathcal{R}\}, \{f_{ii'}, i'\in \mathcal{F}\backslash\{i\}\}: i\in \mathcal{F}\}$ such that \eqref{eq:gl} holds with equality. This means that \eqref{eq:pl2} and all the inequalities preceding it in the proof, including
\eqref{eq:pl1}, hold with equality, but equality in \eqref{eq:pl1} means that \eqref{eq:xv} holds true.
\begin{lemma}\label{lem:ax} Let $X,Y,Z$ be arbitrary discrete random variables such that $H(X|YZ)=0,$ then $H(Z)\ge H(X|Y).$
\end{lemma}
\begin{IEEEproof} By the assumption we have $H(XYZ)=H(YZ)$. Therefore,
\begin{align*}
H(Z)\ge H(Z|Y)&= H(YZ)-H(Y)\\& = H(XYZ)-H(Y)\\& \ge H(XY)-H(Y) \\& = H(X|Y).
\end{align*}
\end{IEEEproof}
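As a toy numerical illustration of Lemma~\ref{lem:ax} (not needed for the proof; the distribution below is an arbitrary example), take $X$ to be a deterministic function of $(Y,Z)$, so that $H(X|YZ)=0$, and check that $H(Z)\ge H(X|Y)$:

```python
from math import log2
from itertools import product

def H(dist):                            # Shannon entropy of {outcome: probability}
    return -sum(pr * log2(pr) for pr in dist.values() if pr > 0)

# toy joint distribution with H(X | Y, Z) = 0: X is a function of (Y, Z)
joint = {}
for y, z in product(range(2), range(4)):        # (Y, Z) uniform and independent
    x = (y + z) % 2                              # X determined by (Y, Z)
    joint[(x, y, z)] = joint.get((x, y, z), 0) + 1 / 8

pz, pxy, py = {}, {}, {}                         # marginals
for (x, y, z), pr in joint.items():
    pz[z] = pz.get(z, 0) + pr
    pxy[(x, y)] = pxy.get((x, y), 0) + pr
    py[y] = py.get(y, 0) + pr

H_Z = H(pz)                              # 2 bits
H_X_given_Y = H(pxy) - H(py)             # 1 bit
assert H_Z >= H_X_given_Y - 1e-12        # the lemma: H(Z) >= H(X|Y)
```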
It remains to justify the final claim of the theorem, namely that it holds for the general case of $T\ge 2$
communication rounds. Indeed the proof given above can be easily modified
to cover the general situation.
To explain this, let us assume that the repair process is divided into $T$ rounds for some finite integer $T$.
In this case, for $i\in\mathcal{F}$ and $j\in\mathcal{R}$, we view $f_{ij}(C_j)$ as all the data downloaded by the failed node
$C_i$ from the helper node $C_j$ in all $T$ rounds of communication.
For $i,i'\in \mathcal{F},i\neq i'$, we view $f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \})$ as all the data downloaded by the failed node $C_i$ from
another failed node $C_{i'}$ in all $T$ rounds of communication\footnote{Observe
that the notation $f_{ii'}(\{f_{i'j}(C_j),j\in\mathcal{R} \})$
is not accurate for multiple-round repair because $f_{ii'}$ can also depend on the data $f_{i'j},j\in\mathcal{F}\setminus\{i'\}$ downloaded in previous round(s). At the same time, this issue
does not affect our argument, so we prefer to keep the already established notation.}.
It is easy to check that under this point of view, our proof applies directly to a $T$-round repair process for any integer $T$.
\remove{
\begin{remark}
Our proof in this section implies a simple method of transforming an optimal cooperative repair scheme to an optimal centralized repair scheme. Now let us consider the reverse transformation. Here we use the most naive method to transform an optimal centralized repair scheme to a cooperative one, and show that there is a gap between the resulting repair bandwidth and the optimal value \eqref{eq:cutset}.
Suppose that $\mathcal{C}$ is an $(n,k,l)$ MDS array code over a finite field $F$ with the $(h,d)$-optimal repair property under the centralized model.
For simplicity we assume that the first $h$ nodes $C_1,C_2,\dots,C_h$ are the failed nodes, and $\mathcal{R}$ is the set of indices of $d$ helper nodes. A naive way to perform two-round cooperative repair for $\mathcal{C}$ using its optimal centralized repair scheme is as follows: In the first round, $C_1$ acts as the data center in the centralized model and downloads all the information that is needed for the repair of all the $h$ failed nodes from the helper nodes $C_j,j\in\mathcal{R}$. According to \eqref{eq:csce}, in the first round, $C_1$ downloads
$$
\frac{hdl}{h+d-k}
$$
symbols of $F$, and $C_2,C_3,\dots,C_h$ download nothing.
After the first round, $C_1$ knows the values of all the failed nodes, and in the second round, it transmits the value of $C_i$ to the $i$th node for $i=2,3,\dots,h$. Therefore in the second round, $C_1$ downloads nothing, and each $C_i$ downloads $l$ symbols of $F$ for $i=2,3,\dots,h$.
Thus in total the cooperative repair bandwidth of this naive scheme is
$$
\frac{hdl}{h+d-k}+(h-1)l.
$$
The difference between this repair bandwidth and the optimal value in \eqref{eq:cutset} is
$$
\frac{hdl}{h+d-k}+(h-1)l - \frac{h(h+d-1)l}{h+d-k}
= \frac{d-k}{h+d-k} (h-1)l.
$$
We can see that the difference is always nonnegative, and for the nontrivial case $d>k$, it is always positive,
which means that this naive approach can not transform an optimal centralized repair scheme into an optimal cooperative one.
\end{remark}
}
\section{A technical lemma}\label{sect:sl}
In this section we prove a technical lemma which will be frequently used throughout the paper.
Let $C\in \mathcal{C}$ be a codeword of an $(n,k=n-r,l)$ MDS array code $\mathcal{C}$. We write $C$ as $(C_1,C_2,\dots,C_n)$,
where $C_i=(c_{i,0},c_{i,1},\dots,c_{i,l-1})^T \in F^l$ is the $i$th coordinate of $C$.
\begin{lemma}\label{lem:tch}
Let $n,k,d$ be positive integers such that $k\le d \le n-1$. Let $r:=n-k$ and let $s:=d+1-k$.
Let $F$ be a finite field with cardinality $|F|\ge n+s-1$. Let $\lambda_{1,0},\lambda_{1,1},\dots,\lambda_{1,s-1}, \lambda_2,\lambda_3,\dots,\lambda_n$ be $n+s-1$ distinct elements of $F$.
Define an $(n,k,s)$ MDS array code $\mathcal{C}$ over the field $F$ by the following $rs$ parity check equations:
\begin{equation}\label{eq:org}
\lambda_{1,u}^t c_{1,u} + \sum_{i=2}^n \lambda_i^t c_{i,u} = 0, \quad
u=0,1,\dots,s-1, \quad t=0,1,\dots,r-1.
\end{equation}
Let $\mu_i:= \sum_{u=0}^{s-1} c_{i,u}$ for all $i\in[n]$. Then for any subset $\mathcal{R}\subseteq \{2,3,\dots,n\}$ with cardinality $|\mathcal{R}|=d$, the values $\{c_{1,0},c_{1,1},\dots,c_{1,s-1},\mu_2,\mu_3,\dots,\mu_n\}$ can be calculated from $\{\mu_i:i\in\mathcal{R}\}$.
\end{lemma}
\begin{IEEEproof}\!\footnote{This proof draws on the ideas in \cite[Theorem 7]{Ye16}.}
Summing \eqref{eq:org} over $u\in\{0,1,\dots,s-1\}$, we obtain
$$
\sum_{u=0}^{s-1} \lambda_{1,u}^t c_{1,u} + \sum_{i=2}^n \lambda_i^t \mu_i = 0,
\quad t=0,1,\dots,r-1.
$$
Writing these $r$ equations in matrix form, we obtain the following equality:
\begin{equation}\label{eq:ml}
\left[\begin{array}{ccccccccc}
1 & 1 & \dots & 1 & 1 & 1 & 1 & \dots & 1\\
\lambda_{1,0} & \lambda_{1,1} & \dots & \lambda_{1,s-1} & \lambda_2 & \lambda_3 & \lambda_4 & \dots & \lambda_n \\
\lambda_{1,0}^2 & \lambda_{1,1}^2 & \dots & \lambda_{1,s-1}^2 & \lambda_2^2 & \lambda_3^2 & \lambda_4^2 & \dots & \lambda_n^2 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\lambda_{1,0}^{r-1} & \lambda_{1,1}^{r-1} & \dots & \lambda_{1,s-1}^{r-1} & \lambda_2^{r-1} & \lambda_3^{r-1} & \lambda_4^{r-1} & \dots & \lambda_n^{r-1} \end{array}\right]
\left[\begin{array}{c} c_{1,0} \\ c_{1,1} \\ \vdots \\ c_{1,s-1}\\
\mu_2\\ \mu_3\\ \mu_4\\ \vdots \\ \mu_n\end{array}\right] = 0.
\end{equation}
Since $\lambda_{1,0}, \lambda_{1,1}, \dots, \lambda_{1,s-1}, \lambda_2, \lambda_3, \lambda_4, \dots, \lambda_n$ are all distinct, the vector
$(c_{1,0},c_{1,1},\dots,c_{1,s-1},\mu_2,\mu_3,\dots,\linebreak[4] \mu_n)$ is a codeword in an $(n+s-1,n+s-1-r=d)$ generalized Reed-Solomon code.
Therefore, for any $\mathcal{R}\subseteq \{2,3,\dots,n\}, |\mathcal{R}|=d,$
the values $\{c_{1,0},c_{1,1},\dots,c_{1,s-1},\mu_2,\mu_3,\dots,\mu_n\}$ can be calculated from $\{\mu_i:i\in\mathcal{R}\}$. This completes the proof of the lemma.
\end{IEEEproof}
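The lemma can be verified computationally. The following Python sketch (a toy instance with illustrative parameters $n=6$, $k=3$, $d=4$, hence $r=3$ and $s=2$, over $\mathbb{F}_{13}$) builds a random codeword of the code defined by \eqref{eq:org}, checks that the long vector in \eqref{eq:ml} passes all $r$ parity checks, and then recovers its $r$ unknown entries from $\{\mu_i:i\in\mathcal{R}\}$ by solving the resulting Vandermonde system:

```python
import random

p = 13                                   # illustrative prime field F_13
n, k, d = 6, 3, 4                        # r = 3 parities, s = d + 1 - k = 2
r, s = n - k, d + 1 - k
bigpts = list(range(1, n + s))           # la_{1,0}, la_{1,1}, la_2, ..., la_n (7 distinct)

def inv(a): return pow(a, p - 2, p)

def dual_rs(points):
    # random c with sum_i points[i]^t c_i = 0 for t = 0..r-1:
    # c_i = w_i f(x_i), deg f < k, w_i = prod_{j != i} (x_i - x_j)^{-1}
    f = [random.randrange(p) for _ in range(k)]
    ev = lambda x: sum(co * pow(x, e, p) for e, co in enumerate(f)) % p
    c = []
    for i, xi in enumerate(points):
        w = 1
        for j, xj in enumerate(points):
            if j != i:
                w = w * inv((xi - xj) % p) % p
        c.append(w * ev(xi) % p)
    return c

def solve(A, b):                         # Gauss-Jordan elimination over F_p
    m = len(A); M = [row[:] + [bb] for row, bb in zip(A, b)]
    for c in range(m):
        piv = next(i for i in range(c, m) if M[i][c])
        M[c], M[piv] = M[piv], M[c]
        iv = inv(M[c][c]); M[c] = [x * iv % p for x in M[c]]
        for i in range(m):
            if i != c and M[i][c]:
                M[i] = [(x - M[i][c] * y) % p for x, y in zip(M[i], M[c])]
    return [row[m] for row in M]

random.seed(3)
# column u of the array code uses points (la_{1,u}, la_2, ..., la_n), cf. (eq:org)
cols = [dual_rs([bigpts[u]] + bigpts[s:]) for u in range(s)]
mu = [sum(cols[u][i] for u in range(s)) % p for i in range(n)]   # mu_i, i = 1..n
Y = [cols[u][0] for u in range(s)] + mu[1:]   # (c_{1,0}, c_{1,1}, mu_2, ..., mu_n)

for t in range(r):                       # Y satisfies the checks in (eq:ml)
    assert sum(pow(x, t, p) * y for x, y in zip(bigpts, Y)) % p == 0

R = [2, 3, 5, 6]                         # any d = 4 helper indices from {2, ..., n}
known = [s + i - 2 for i in R]           # position of mu_i in Y is s + i - 2
unknown = [j for j in range(n + s - 1) if j not in known]        # r = 3 of them
A = [[pow(bigpts[j], t, p) for j in unknown] for t in range(r)]
b = [(-sum(pow(bigpts[j], t, p) * Y[j] for j in known)) % p for t in range(r)]
assert solve(A, b) == [Y[j] for j in unknown]   # recovers c_{1,0}, c_{1,1}, all mu_i
```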
\section{Cooperative $(2,k+1)$-optimal codes}\label{sect:bdblock}
\subsection{Repairing the first two nodes from any $k+1$ helper nodes}\label{sect:first}
Let $F$ be a finite field. For any $k<n\le |F|-2$ we present a construction of $(n,k,3)$ MDS array codes $\mathcal{C}=\mathcal{C}_{2,k+1}^{(0)}$ over $F$ that support
optimal repair of the first two nodes. Specifically, when the first two nodes of $\mathcal{C}$ fail, the repair of each failed node can be accomplished by connecting to {\em any} $k+1$ helper nodes and downloading a total of $k+2$ symbols of $F$
from these helper nodes as well as from the other failed node, achieving the optimal repair bandwidth according to the cut-set bound \eqref{eq:cutset}.
For $i=1,2,\dots,n$, we write the $i$th node of $\mathcal{C}$ as $C_i=(c_{i,0},c_{i,1}, c_{i,2})^T\in F^3$, which is a column vector of dimension $3$ over $F$.
Let $\lambda_{1,0},\lambda_{1,1},\lambda_{2,0},\lambda_{2,1},\lambda_3,\lambda_4,\dots,\lambda_n$ be $n+2$ distinct elements of the field $F$.
The code $\mathcal{C}$ is defined by the following $3$ sets of parity check equations:
\begin{align}
\lambda_{1,0}^t c_{1,0} + \lambda_{2,0}^t c_{2,0} + \sum_{i=3}^n \lambda_i^t c_{i,0} = 0,
\quad t=0,1,\dots,r-1, \label{eq:c11} \\
\lambda_{1,1}^t c_{1,1} + \lambda_{2,0}^t c_{2,1} + \sum_{i=3}^n \lambda_i^t c_{i,1} = 0,
\quad t=0,1,\dots,r-1, \label{eq:c12} \\
\lambda_{1,0}^t c_{1,2} + \lambda_{2,1}^t c_{2,2} + \sum_{i=3}^n \lambda_i^t c_{i,2} = 0,
\quad t=0,1,\dots,r-1. \label{eq:c13}
\end{align}
For each $a=0,1,2$ the set of vectors $\{(c_{1,a},c_{2,a},\dots,c_{n,a})\}$ obviously forms an $(n,k=n-r)$ MDS code, and so $\mathcal{C}$ is indeed
an $(n,k,3)$ MDS array code.
The following lemma suggests a description of the repair scheme for the first two nodes using the bandwidth that meets the cut-set bound \eqref{eq:cutset} with equality.
\begin{lemma}\label{lem:bb} For $i=1,\dots,n$ let
$$
\mu_{i,1}:=c_{i,0}+c_{i,1}, \quad
\mu_{i,2}:=c_{i,0}+c_{i,2}.
$$
For any set of helper nodes $\mathcal{R}\subseteq \{3,4,\dots,n\},|\mathcal{R}|=k+1$, the values of $c_{1,0},c_{1,1},$ and $\mu_{2,1}$ are uniquely determined by $\{\mu_{i,1}:i\in \mathcal{R}\}$.
Similarly, the values of $c_{2,0},c_{2,2},$ and $\mu_{1,2}$ are uniquely determined by $\{\mu_{i,2}:i\in \mathcal{R}\}$.
\end{lemma}
\begin{IEEEproof}
This lemma follows immediately from Lemma~\ref{lem:tch}. Indeed, take $d=k+1$ and $s=2$; then there are only two groups of
equations in \eqref{eq:org}, namely those for $u=0,1.$
To prove the first statement of Lemma~\ref{lem:bb}, consider the equations in \eqref{eq:c11} and \eqref{eq:c12}. These two sets of equations have the same structure as the equations in \eqref{eq:org}: namely,
only the coefficients of $c_{1,u}$ vary with $u$ while the coefficients of $c_{i,u}$ are
independent of the value of $u$ for all $i\in\{2,3,\dots,n\}$. Therefore Lemma~\ref{lem:tch} applies directly, and we obtain the
claimed fact about $c_{1,0},c_{1,1}$ and $\mu_{2,1}.$
Similarly, to prove the second statement, consider the equations in \eqref{eq:c11} and \eqref{eq:c13}. These two sets of equations also have the same structure as the equations in \eqref{eq:org}: namely,
only the coefficients of $c_{2,u}$ vary with $u$ while the coefficients of $c_{i,u}$ are independent of the value of $u$ for all $i\in[n]\setminus\{2\}$.
\end{IEEEproof}
This lemma implies
that the first two nodes of $\mathcal{C}$ can be repaired with optimal bandwidth.
As already mentioned, the repair process is divided into two rounds.
In the first round, the node $C_j,j=1,2$ downloads the $k+1$ symbols
$\{\mu_{i,j}:i\in\mathcal{R}\}$ from the helper nodes $C_i,i\in \mathcal{R}$.
According to Lemma~\ref{lem:bb}, after the first round, $C_1$ knows the values of
$c_{1,0},c_{1,1}$ and $c_{2,0}+c_{2,1}$, and $C_2$ knows the values of $c_{2,0},c_{2,2}$ and $c_{1,0}+c_{1,2}$.
In the second round, $C_1$ downloads the sum $c_{1,0}+c_{1,2}$ from $C_2$, and $C_2$ downloads the sum $c_{2,0}+c_{2,1}$ from $C_1$. Clearly, after the second round, both $C_1$ and $C_2$ can recover all their coordinates. Moreover, in the whole repair process, $C_1$ only downloads one symbol of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\{2\}$, and $C_2$ only downloads one symbol of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\{1\}$.
Therefore the total repair bandwidth is $2(k+1)+2$, meeting the cut-set bound \eqref{eq:cutset} with equality.
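The two-round scheme can be simulated end to end. The following Python sketch (a toy instance with illustrative parameters $n=6$, $k=3$ over $\mathbb{F}_{13}$; the field and helper-node choice are arbitrary) builds a random codeword of $\mathcal{C}_{2,k+1}^{(0)}$ from \eqref{eq:c11}--\eqref{eq:c13}, runs both rounds, and checks that each failed node is fully repaired after downloading $k+2$ symbols:

```python
import random

p = 13                                  # prime field, |F| >= n + 2 (illustrative)
n, k = 6, 3
r = n - k
la10, la11, la20, la21 = 1, 2, 3, 4     # la_{1,0}, la_{1,1}, la_{2,0}, la_{2,1}
tail = [5, 6, 7, 8]                     # la_3, ..., la_6; all n + 2 = 8 values distinct

def inv(a): return pow(a, p - 2, p)

def dual_rs(points):
    # random c with sum_i points[i]^t c_i = 0 for t = 0..r-1:
    # c_i = w_i f(x_i), deg f < k, w_i = prod_{j != i} (x_i - x_j)^{-1}
    f = [random.randrange(p) for _ in range(k)]
    ev = lambda x: sum(co * pow(x, e, p) for e, co in enumerate(f)) % p
    c = []
    for i, xi in enumerate(points):
        w = 1
        for j, xj in enumerate(points):
            if j != i:
                w = w * inv((xi - xj) % p) % p
        c.append(w * ev(xi) % p)
    return c

def solve(A, b):                        # Gauss-Jordan elimination over F_p
    m = len(A); M = [row[:] + [bb] for row, bb in zip(A, b)]
    for c in range(m):
        piv = next(i for i in range(c, m) if M[i][c])
        M[c], M[piv] = M[piv], M[c]
        iv = inv(M[c][c]); M[c] = [x * iv % p for x in M[c]]
        for i in range(m):
            if i != c and M[i][c]:
                M[i] = [(x - M[i][c] * y) % p for x, y in zip(M[i], M[c])]
    return [row[m] for row in M]

random.seed(7)
col0 = dual_rs([la10, la20] + tail)     # column a = 0, cf. (eq:c11)
col1 = dual_rs([la11, la20] + tail)     # column a = 1, cf. (eq:c12)
col2 = dual_rs([la10, la21] + tail)     # column a = 2, cf. (eq:c13)
C = [[col0[i], col1[i], col2[i]] for i in range(n)]   # node i+1 stores 3 symbols

helpers = [3, 4, 5, 6]                  # any k + 1 = 4 helper nodes
mu1 = {i: (C[i - 1][0] + C[i - 1][1]) % p for i in helpers}  # round 1, node 1
mu2 = {i: (C[i - 1][0] + C[i - 1][2]) % p for i in helpers}  # round 1, node 2

def recover(pts, known):
    # solve for the r missing entries of the length-(n+1) vector with
    # evaluation points pts, given the entries {position: value} in `known`
    unknown = [j for j in range(n + 1) if j not in known]
    A = [[pow(pts[j], t, p) for j in unknown] for t in range(r)]
    b = [(-sum(pow(pts[j], t, p) * v for j, v in known.items())) % p
         for t in range(r)]
    return dict(zip(unknown, solve(A, b)))

# node 1: (c_{1,0}, c_{1,1}, mu_{2,1}, mu_{3,1}, ..., mu_{n,1}) with points
# (la_{1,0}, la_{1,1}, la_{2,0}, la_3, ..., la_n); mu_{i,1} sits at position i
sol1 = recover([la10, la11, la20] + tail, mu1)
# node 2: (mu_{1,2}, c_{2,0}, c_{2,2}, mu_{3,2}, ..., mu_{n,2}) with points
# (la_{1,0}, la_{2,0}, la_{2,1}, la_3, ..., la_n)
sol2 = recover([la10, la20, la21] + tail, mu2)
assert [sol1[0], sol1[1]] == C[0][:2] and sol1[2] == (C[1][0] + C[1][1]) % p
assert [sol2[1], sol2[2]] == [C[1][0], C[1][2]] and sol2[0] == (C[0][0] + C[0][2]) % p

# round 2: each failed node downloads one more symbol from the other one
c12 = (sol2[0] - sol1[0]) % p           # node 1 learns mu_{1,2}, deduces c_{1,2}
c21 = (sol1[2] - sol2[1]) % p           # node 2 learns mu_{2,1}, deduces c_{2,1}
assert c12 == C[0][2] and c21 == C[1][1]
# each failed node downloaded (k+1) + 1 = k+2 symbols: 2(k+1)+2 in total
```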
\subsection{Repairing any two erasures from any $k+1$ helper nodes}\label{sect:warmup}
Here we develop the idea in the previous section to construct explicit MDS array codes with the $(2,k+1)$-optimal repair property.
More specifically, given any $n\ge k+3$ and a finite field $F,|F|\ge 2n$, we present an $(n,k,l=3^{m})$ MDS array code $\mathcal{C}=\mathcal{C}_{2,k+1}$ over
$F,$ where $m=\binom n2.$
When {\em any} two nodes of $\mathcal{C}$ fail, the repair of each failed node can be accomplished by connecting to {\em any} $k+1$ helper nodes and downloading $(k+2)3^{m-1}$ symbols of $F$ in total from these helper nodes as well as from the other failed node.
Clearly, the repair bandwidth meets the cut-set bound \eqref{eq:cutset} with equality.
We will define $\mathcal{C}$ by its parity-check equations, and we begin with some notation. Let $\{\lambda_{i,j}\}_{i\in[n],j\in\{0,1\}}$ be $2n$ distinct elements of the field $F$.
Let $g$ be a bijection between the set of pairs $\{(i_1,i_2):1\le i_1<i_2\le n\}$ and the set $\{1,2,\dots,m\}$. For concreteness, let
\begin{equation}\label{eq:bn}
g: (i_1,i_2)\mapsto\binom{i_2-1}{2}+i_1
\end{equation}
($g$ partitions the set $[m]$ into
segments of length $(i_2-1)$, where $i_2=2,3,\dots,n$).
Given an integer $a\in\{0,1,\dots,l-1\}$, let $(a_m,a_{m-1},\dots, a_1)$ be the digits of its ternary expansion, i.e.,
$a=\sum_{j=0}^{m-1}a_{j+1}3^j.$
Define the following function
\begin{equation}\label{eq:deff}
\begin{aligned}
f:&\,[n]\times\{0,1,\dots,l-1\}\to\{0,1\}\\
&(i,a)\mapsto\Big(\sum_{j=1}^{i-1} \mathbbm{1}\{a_{g(j,i)}=2\}+ \sum_{j=i+1}^n
\mathbbm{1}\{a_{g(i,j)}=1\} \Big) \Mod 2,
\end{aligned}
\end{equation}
where $\mathbbm{1}$ is the indicator function. We note that $f$ computes the parity of the count of 1's and 2's in a certain subset of the digits of $a.$ This subset is formed of all the digits with indices in the set $\{g(1,i),\dots,g(i-1,i),g(i,i+1),\dots,g(i,n)\}$. To give an example, let $n=6,$ then $m=15$, and the function $g$ maps from $\{(i_1,i_2):1\le i_1<i_2\le 6\}$ to $\{1,2,\dots,15\}$. Let $i=2$ and let $0\le a\le 3^{15}-1= 14348906$ be an integer. The function $f$ isolates the digits $a_u$ in the ternary expansions of $a$ such that $u\in\{g(\cdot,2),g(2,\cdot)\},$
i.e., $u\in\{g(1,2),g(2,3),g(2,4),g(2,5),g(2,6)\}=\{1,3,5,8,12\}.$ The value of the function $f(2,a)$ equals the parity of
$\mathbbm{1}\{a_1=2\}+\mathbbm{1}\{a_3=1\}+\mathbbm{1}\{a_5=1\}+\mathbbm{1}\{a_8=1\}+\mathbbm{1}\{a_{12}=1\}.$
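The index bookkeeping in this example is easy to make concrete in a short script. The following Python sketch (illustrative only; it uses 0-based list positions for the ternary digits $a_1,\dots,a_m$, a convention of ours, not of the construction) implements $g$ and $f$ and reproduces the worked example:

```python
from math import comb

def g(i1, i2):
    # The bijection of \eqref{eq:bn}: (i1, i2) with 1 <= i1 < i2 <= n maps to C(i2-1, 2) + i1
    return comb(i2 - 1, 2) + i1

def ternary_digits(a, m):
    # Digits (a_1, ..., a_m) of a = sum_{j=0}^{m-1} a_{j+1} 3^j; list position 0 holds a_1
    return [(a // 3**j) % 3 for j in range(m)]

def f(i, a, n):
    # f(i, a) of \eqref{eq:deff}: parity of the number of 2's among the digits a_{g(j,i)}, j < i,
    # plus the number of 1's among the digits a_{g(i,j)}, j > i
    d = ternary_digits(a, comb(n, 2))
    return (sum(d[g(j, i) - 1] == 2 for j in range(1, i))
            + sum(d[g(i, j) - 1] == 1 for j in range(i + 1, n + 1))) % 2

# The worked example: n = 6, i = 2; f(2, .) inspects the digits with indices {1, 3, 5, 8, 12}
print(sorted(g(*p) for p in [(1, 2), (2, 3), (2, 4), (2, 5), (2, 6)]))  # -> [1, 3, 5, 8, 12]
```

For instance, the integer $a=2$ has $a_1=2$ and all other digits zero, so the script gives $f(2,a)=1$.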
\begin{definition}\label{def:anytwo}
The code $\mathcal{C}=\mathcal{C}_{2,k+1}$ is defined by the following $rl$ parity check equations:
$$
\sum_{i=1}^n \lambda_{i,f(i,a)}^t c_{i,a} = 0,\;
t=0,1,\dots,r-1, a=0,1,\dots,l-1.
$$
\end{definition}
For all $a=0,1,\dots,l-1$, the set of vectors
$\{(c_{1,a},c_{2,a},\dots,c_{n,a})\}$ forms an $(n,k)$ MDS code,
so $\mathcal{C}$ is indeed an $(n,k,l)$ MDS array code.
Next we show that $\mathcal{C}$ has optimal repair bandwidth for repairing any two failed nodes from any $k+1$ helper nodes.
Let $C_{i_1}$ and $C_{i_2}, i_1<i_2$ be the failed nodes.
First let us introduce some notation to describe the repair scheme.
For $a=0,1,\dots,l-1$, $j\in[m],$ and $u=0,1,2,$ let
$$
a(j,u):=(a_m,\dots,a_{j+1},u,a_{j-1},\dots,a_1).
$$
For $a=0,1,\dots,l-1$ and $i\in[n]$, let
\begin{align*}
\mu_{i,1}^{(a)}&:=c_{i,a(g_{12},0)}+c_{i,a(g_{12},1)}, \\
\mu_{i,2}^{(a)}&:=c_{i,a(g_{12},0)}+c_{i,a(g_{12},2)},
\end{align*}
where for brevity we write $g_{12}$ instead of $g(i_1,i_2).$
The fol\-low\-ing lemma, which develops the ideas in Lemma~\ref{lem:bb}, accounts for the $(2,k+1)$ optimal repair property of the code $\mathcal{C}.$
\begin{lemma}\label{lem:cv} Let $C_{i_1}$ and $C_{i_2},$ $i_1<i_2$ be the failed nodes.
For any set of helper nodes $\mathcal{R}\subseteq [n]\setminus\{i_1,i_2\}, |\mathcal{R}|=k+1$ and any $a\in\{0,1,\dots,l-1\}$, the values
$
c_{i_1,a(g_{12},0)},c_{i_1,a(g_{12},1)},\mu_{i_2,1}^{(a)}
$
are uniquely determined by the set of values $\{\mu_{i,1}^{(a)}:i\in \mathcal{R}\}$.
Similarly, the values
$
c_{i_2,a(g_{12},0)},c_{i_2,a(g_{12},2)},\mu_{i_1,2}^{(a)}
$
are uniquely determined by the set of values $\{\mu_{i,2}^{(a)}:i\in \mathcal{R}\}$.
\end{lemma}
\begin{IEEEproof} Recall that $a=0,1,\dots,l-1$ numbers the coordinates of the node, or the rows in the codeword array. For a fixed value of $a$,
the parity check equations corresponding to the rows $a(g_{12},0),a(g_{12},1),a(g_{12},2)$ are as follows:
\begin{equation}\label{eq:cis}
\sum_{i=1}^n \lambda_{i,f(i,a(g_{12},u))}^t c_{i,a(g_{12},u)} = 0
, \quad t=0,1,2,\dots,r-1, \quad u=0,1,2.
\end{equation}
According to the definition of the function $f$ in \eqref{eq:deff} and the remarks made after it,
we have
\begin{align*}
f(i,a(g_{12},0)) &= f(i,a(g_{12},1)) = f(i,a(g_{12},2)), \quad i\in[n]\setminus\{i_1,i_2\}\\
f(i_1,a(g_{12},0)) &= f(i_1,a(g_{12},2)) \neq f(i_1,a(g_{12},1)), \\
f(i_2,a(g_{12},0)) &= f(i_2,a(g_{12},1)) \neq f(i_2,a(g_{12},2)).
\end{align*}
This implies that for $i\in[n]\setminus\{i_1,i_2\}$ the following notation is well defined:
\begin{equation}\label{eq:sh1}
\lambda_i:=\lambda_{i,f(i,a(g_{12},0))} = \lambda_{i,f(i,a(g_{12},1))}
=\lambda_{i,f(i,a(g_{12},2))}.
\end{equation}
Note that $\lambda_i$ depends on the value of $a$, though we omit this dependence from the notation.
Further, let
\begin{equation}\label{eq:sh2}
\begin{aligned}
\lambda_{i_1,0}'&:= \lambda_{i_1,f(i_1,a(g_{12},0))} = \lambda_{i_1,f(i_1,a(g_{12},2))},\\
\lambda_{i_1,1}'&:= \lambda_{i_1,f(i_1,a(g_{12},1))}, \\
\lambda_{i_2,0}'&:= \lambda_{i_2,f(i_2,a(g_{12},0))} = \lambda_{i_2,f(i_2,a(g_{12},1))},\\
\lambda_{i_2,1}'&:= \lambda_{i_2,f(i_2,a(g_{12},2))}.
\end{aligned}
\end{equation}
Notice that
\begin{gather*}
\lambda_{i_1,0}'\neq \lambda_{i_1,1}',
\lambda_{i_2,0}'\neq \lambda_{i_2,1}'\\
\{\lambda_{i_1,0}', \lambda_{i_1,1}'\}=\{\lambda_{i_1,0}, \lambda_{i_1,1}\}\\
\{\lambda_{i_2,0}', \lambda_{i_2,1}'\}=\{\lambda_{i_2,0}, \lambda_{i_2,1}\}\\
\lambda_i\in\{\lambda_{i,0},\lambda_{i,1}\},\; i\in[n]\setminus\{i_1,i_2\}.
\end{gather*}
Therefore $\lambda_{i_1,0}',\lambda_{i_1,1}',\lambda_{i_2,0}',\lambda_{i_2,1}',
\lambda_i,i\in[n]\setminus\{i_1,i_2\}$ are all distinct.
Using the notation defined in \eqref{eq:sh1}-\eqref{eq:sh2}, we can write \eqref{eq:cis} as
\begin{align*}
(\lambda_{i_1,0}')^t c_{i_1,a(g_{12},0)}
+ (\lambda_{i_2,0}')^t c_{i_2,a(g_{12},0)}
+\sum_{i\in[n]\setminus\{i_1,i_2\}} \lambda_i^t c_{i,a(g_{12},0)} &= 0, \\
(\lambda_{i_1,1}')^t c_{i_1,a(g_{12},1)}
+ (\lambda_{i_2,0}')^t c_{i_2,a(g_{12},1)}
+\sum_{i\in[n]\setminus\{i_1,i_2\}} \lambda_i^t c_{i,a(g_{12},1)} &= 0, \\
(\lambda_{i_1,0}')^t c_{i_1,a(g_{12},2)}
+ (\lambda_{i_2,1}')^t c_{i_2,a(g_{12},2)}
+\sum_{i\in[n]\setminus\{i_1,i_2\}} \lambda_i^t c_{i,a(g_{12},2)} &= 0,\\
t&=0,1,2,\dots,r-1.
\end{align*}
Now notice that up to a notational change, these equations have the same form as equations \eqref{eq:c11}-\eqref{eq:c13}.
Therefore, the proof of Lemma~\ref{lem:bb} applies directly, completing the proof.
\end{IEEEproof}
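The digit patterns used in this proof (the three displayed equalities and inequalities for $f$) can also be verified numerically. The following Python sketch checks them for every pair $(i_1,i_2)$; the parameters $n=5$ and the number of random rows are arbitrary sample choices, and the digit conventions are ours:

```python
from math import comb
from itertools import combinations
import random

def g(i1, i2):
    # Bijection of \eqref{eq:bn}
    return comb(i2 - 1, 2) + i1

def f(i, a, n, m):
    # f of \eqref{eq:deff}; the ternary digit a_j sits at list position j - 1
    d = [(a // 3**j) % 3 for j in range(m)]
    return (sum(d[g(j, i) - 1] == 2 for j in range(1, i))
            + sum(d[g(i, j) - 1] == 1 for j in range(i + 1, n + 1))) % 2

def set_digit(a, j, u):
    # a(j, u): replace the ternary digit a_j of a by u
    return a + (u - (a // 3**(j - 1)) % 3) * 3**(j - 1)

n = 5
m = comb(n, 2)
random.seed(1)
for i1, i2 in combinations(range(1, n + 1), 2):
    j = g(i1, i2)
    for _ in range(20):
        a = random.randrange(3**m)
        fv = {u: [f(i, set_digit(a, j, u), n, m) for i in range(1, n + 1)]
              for u in (0, 1, 2)}
        for i in range(1, n + 1):
            if i in (i1, i2):
                continue
            # f(i, .) is insensitive to the digit a_{g(i1,i2)} when i is not a failed node
            assert fv[0][i - 1] == fv[1][i - 1] == fv[2][i - 1]
        # f(i1, .) flips exactly for u = 1, and f(i2, .) flips exactly for u = 2
        assert fv[0][i1 - 1] == fv[2][i1 - 1] != fv[1][i1 - 1]
        assert fv[0][i2 - 1] == fv[1][i2 - 1] != fv[2][i2 - 1]
print("digit patterns verified")
```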
This lemma implies that the nodes $C_{i_1}$ and $C_{i_2}$ can be repaired with optimal bandwidth.
To see this, we partition the coordinates of a node into $l/3$ groups of
size $3$ where each group is formed of the coordinates with indices $a(g_{12},0),a(g_{12},1),a(g_{12},2)$ for a given $a$. By Lemma~\ref{lem:cv} above we know that each group can be repaired with optimal bandwidth, so the entire contents of the failed nodes can also be
optimally recovered.
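This partition of the rows is easy to illustrate. The following Python sketch (the parameters $n=4$ and the sample pair $(i_1,i_2)=(1,3)$ are our arbitrary choices) confirms that the $l/3$ disjoint groups of size $3$ cover all $l$ rows:

```python
from math import comb

n = 4
m = comb(n, 2)
l = 3 ** m
j = comb(3 - 1, 2) + 1   # g(i1, i2) for the sample pair (i1, i2) = (1, 3)

# Rows with a_{g12} = 0 serve as group representatives
base = [a for a in range(l) if (a // 3**(j - 1)) % 3 == 0]
# Each group {a(g12,0), a(g12,1), a(g12,2)} is obtained by varying that single digit
groups = [{a + u * 3**(j - 1) for u in (0, 1, 2)} for a in base]
covered = set().union(*groups)

assert len(base) == l // 3 and covered == set(range(l))
print(len(base), "disjoint groups of size 3 cover all", l, "rows")
```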
A more detailed description of the repair process is as follows.
In the first round of the repair process, $C_{i_1}$ downloads the values in the set
$\{\mu_{i,1}^{(a)}:a_{g_{12}}=0\}$ and $C_{i_2}$ downloads the values $\{\mu_{i,2}^{(a)}:a_{g_{12}}=0\}$ from each helper node $C_i,i\in \mathcal{R}$.
This enables $C_{i_1}$ to find the values
$$
\{c_{i_1,a}:a_{g_{12}}=0\}\cup\{c_{i_1,a(g_{12},1)}:a_{g_{12}}=0\}\cup\{\mu_{i_2,1}^{(a)}:a_{g_{12}}=0\}.
$$
Similarly, $C_{i_2}$ is able to find the values
$$
\{c_{i_2,a}:a_{g_{12}}=0\}\cup\{c_{i_2,a(g_{12},2)}:a_{g_{12}}=0\}\cup\{\mu_{i_1,2}^{(a)}:a_{g_{12}}=0\}.
$$
In the second round, $C_{i_1}$ downloads $\{\mu_{i_1,2}^{(a)}:a_{g_{12}}=0\}$ from $C_{i_2}$, and $C_{i_2}$ downloads $\{\mu_{i_2,1}^{(a)}:a_{g_{12}}=0\}$ from $C_{i_1}$. After the second round, $C_{i_1}$ knows the values of all the elements in the set
\begin{equation*}
\{c_{i_1,a(g_{12},u)}:a_{g_{12}}=0,u\in\{0,1,2\}\}
=\{c_{i_1,a}:a\in\{0,1,2,\dots,l-1\}\},
\end{equation*}
and $C_{i_2}$ knows the values of all the elements in the set
\begin{equation*}
\{c_{i_2,a(g_{12},u)}:a_{g_{12}}=0,u\in\{0,1,2\}\}
=\{c_{i_2,a}:a\in\{0,1,2,\dots,l-1\}\},
\end{equation*}
i.e., both $C_{i_1}$ and $C_{i_2}$ can recover all their coordinates. Moreover, in the whole repair process, $C_{i_1}$ downloads $l/3$ symbols of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\{i_2\}$, and $C_{i_2}$ downloads $l/3$ symbols of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\{i_1\}$.
Therefore the total repair bandwidth is $2(k+2)l/3$, meeting the cut-set bound \eqref{eq:cutset} with equality.
\section{Cooperative $(2,d)$-optimal codes for general $d$}\label{sect:gend}
\subsection{Optimal repair of the first two nodes}\label{sect:fd}
In this section we present an explicit MDS array code that can optimally repair the first two nodes from any $d$ helper nodes for general values of $d$. Let $n,k,d$ be such that $k+1\le d \le n-2$, let $s:=d+1-k,$ and let $F$ be a finite field of
size at least $n-2+2s.$ We will construct an $(n,k,s^2-1)$ MDS array code $\mathcal{C}=\mathcal{C}_{2,d}^{(0)}$ over the field $F$ that has the following
property. When the first two nodes of $\mathcal{C}$ fail, the repair of each of them can be accomplished by connecting to {\em any} $d$
surviving (helper) nodes and downloading $(s-1)(d+1)$ symbols of $F$ in total from these helper nodes as well as from
the other failed node. Clearly, the amount of downloaded data meets the cut-set bound \eqref{eq:cutset} with equality.
Let $\lambda_{1,0},\lambda_{1,1},\dots,\lambda_{1,s-1},
\lambda_{2,0},\lambda_{2,1},\dots,\lambda_{2,s-1},
\lambda_3,\lambda_4,\dots,\lambda_n$ be $n-2+2s$ distinct elements of the field $F$.
Given an integer $a, 0\le a\le s^2-2,$ let $b_1(a),b_2(a)$ be the digits of its expansion to the base $s$:
\begin{equation} \label{eq:defb}
a=(b_2(a),b_1(a)),\quad \text{i.e.,}\quad a=sb_2(a)+b_1(a).
\end{equation}
The code $\mathcal{C}=\mathcal{C}_{2,d}^{(0)}$ is defined by the following $r(s^2-1)$ parity check equations.
\begin{align}\label{eq:asj}
\lambda_{1,b_1(a)}^t c_{1,a} + \lambda_{2,b_2(a)}^t c_{2,a}
&+\sum_{i=3}^n \lambda_i^t c_{i,a} = 0,\\
&t=0,1,\dots,r-1 ,\; a=0,1,2,\dots,s^2-2.\nonumber
\end{align}
Clearly, for a given $a$ the set of vectors $\{(c_{1,a},c_{2,a},\dots,c_{n,a})\}$ that satisfy the system \eqref{eq:asj} forms an MDS code of length $n$ and dimension $k$.
Therefore $\mathcal{C}$ is indeed an $(n,k,s^2-1)$ MDS array code.
Note that for $d=k+1$, the code $\mathcal{C}$ defined by \eqref{eq:asj} is the same as the code defined by \eqref{eq:c11}-\eqref{eq:c13} in Section~\ref{sect:bdblock}.
For every $i\in[n]$ define the following elements of $F$:
\begin{align*}
\mu_{i,1}^{(v_2)}:=\sum_{v_1=0}^{s-1}c_{i,sv_2+v_1} , \quad v_2 \in\{0,1,\dots,s-2\}; \\
\mu_{i,2}^{(v_1)}:=\sum_{v_2=0}^{s-1}c_{i,sv_2+v_1} , \quad v_1 \in\{0,1,\dots,s-2\}.
\end{align*}
Similarly to the previous sections, we have the following lemma:
\begin{lemma}\label{lem:jj} Suppose that the failed nodes are $C_1,C_2$ and let
$\mathcal{R}\subseteq \{3,4,\dots,n\},|\mathcal{R}|=d$ be a set of $d$ helper nodes.
For any $v_2\in\{0,1,\dots,s-2\}$,
the values $\{c_{1,sv_2+v_1}, v_1=0,1,\dots,s-1\}$ and $\mu_{2,1}^{(v_2)}$
are uniquely
determined by the set of values $\{\mu_{i,1}^{(v_2)}:i\in \mathcal{R}\}$.
Similarly, for any $v_1\in\{0,1,\dots,s-2\}$, the values $\{c_{2,sv_2+v_1},v_2=0,1,\dots,s-1\}$ and
$\mu_{1,2}^{(v_1)}$
are uniquely determined by the set of values $\{\mu_{i,2}^{(v_1)}:i\in \mathcal{R}\}$.
\end{lemma}
\begin{IEEEproof}
We again use Lemma~\ref{lem:tch} to prove this lemma.
To prove the first statement, we use definition \eqref{eq:asj} to write out the parity-check equations that correspond to $a=sv_2,sv_2+1,\dots,sv_2+s-1$ for a fixed $v_2\in\{0,1,\dots,s-2\}$:
\begin{align*}
\lambda_{1,v_1}^t c_{1,sv_2+v_1} + \lambda_{2,v_2}^t c_{2,sv_2+v_1}
&+\sum_{i=3}^n \lambda_i^t c_{i,sv_2+v_1} = 0,\\
&t=0,1,\dots,r-1,\; v_1=0,1,\dots,s-1.
\end{align*}
These equations have the same structure as the equations in \eqref{eq:org}: $v_1$ here plays the role of $u$ in \eqref{eq:org}.
Only the coefficients of $c_{1,sv_2+v_1}$ vary with the value of $v_1$ while the coefficients of $c_{i,sv_2+v_1}$ are independent of the value of $v_1$ for all $i\in[n]\setminus\{1\}$. Therefore the proof of Lemma~\ref{lem:tch} can be directly applied here.
To prove the second statement, we use definition \eqref{eq:asj} to write out the parity-check equations that correspond to $a=v_1,s+v_1,2s+v_1,\dots,(s-1)s+v_1$ for a fixed $v_1\in \{0,1,\dots,s-2\}$:
\begin{align*}
\lambda_{1,v_1}^t c_{1,sv_2+v_1} + \lambda_{2,v_2}^t c_{2,sv_2+v_1}
&+\sum_{i=3}^n \lambda_i^t c_{i,sv_2+v_1} = 0,\\
&t=0,1,\dots,r-1,\; v_2=0,1,\dots,s-1.
\end{align*}
These equations have the same structure as the equations in \eqref{eq:org}: $v_2$ here plays the role of $u$ in \eqref{eq:org}.
Only the coefficients of $c_{2,sv_2+v_1}$ vary with the value of $v_2$ while the coefficients of $c_{i,sv_2+v_1}$ are independent of the value of $v_2$ for all $i\in[n]\setminus\{2\}$. Therefore the proof of Lemma~\ref{lem:tch} can be directly applied here.
\end{IEEEproof}
Let us show that this lemma implies that the first two nodes of $\mathcal{C}$ can be repaired with optimal bandwidth.
In the first round, the first node $C_1$ downloads the values $\{\mu_{i,1}^{(v_2)}, v_2=0,1,\dots,s-2\}$
from each helper node $C_i,i\in \mathcal{R}$, and the second node $C_2$ downloads $\{\mu_{i,2}^{(v_1)}, v_1=0,1,\dots,s-2\}$
from each helper node $C_i,i\in \mathcal{R}$.
From Lemma~\ref{lem:jj} we conclude that after the first round, $C_1$ knows the values
\begin{gather*}
c_{1,sv_2+v_1},\; v_2=0,1,\dots,s-2,v_1=0,1,\dots,s-1\\
\text{and~} \mu_{2,1}^{(v_2)},\;v_2=0,1,\dots,s-2.
\end{gather*}
In the same way, $C_2$ knows the values
\begin{gather*}
c_{2,sv_2+v_1},\;v_1=0,1,\dots,s-2,v_2=0,1,\dots,s-1\\
\text{and~} \mu_{1,2}^{(v_1)},\; v_1=0,1,\dots,s-2.
\end{gather*}
In the second round, $C_1$ downloads the sums $\mu_{1,2}^{(v_1)}, v_1=0,1,\dots,s-2$ from $C_2$, and $C_2$ downloads the sums $\mu_{2,1}^{(v_2)}, v_2=0,1,\dots,s-2$
from $C_1$. It is easy to verify that after the second round, both $C_1$ and $C_2$ can recover all of their coordinates. Moreover, over the course of the entire repair process,
$C_1$ downloads $(s-1)$ symbols of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\{2\}$, and $C_2$ downloads $(s-1)$ symbols of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\{1\}$.
Therefore the total repair bandwidth is $2(s-1)(d+1)$, meeting the cut-set bound \eqref{eq:cutset} with equality.
\subsection{Optimal repair of any two erasures}\label{sect:rb}
In this section we present a construction of MDS array codes with the $(2,d)$-optimal repair property, relying on the ideas of
the previous section. Let $n,k,d$ be such that $k+1\le d \le n-2,$ let $s:=d+1-k$ and let $F$ be a finite field such that $|F|\ge sn.$
We present an $(n,k,l=(s^2-1)^m)$ MDS array code $\mathcal{C}=\mathcal{C}_{2,d}$ over the field $F$, where $m:=\binom{n}{2}$.
When {\em any} two nodes of $\mathcal{C}$ fail, the repair of each failed node can be accomplished by connecting to
{\em any} $d$ helper nodes and downloading $(d+1)l/(s+1)$ symbols of $F$ in total from these helper nodes as well as from
the other failed node. Clearly, the repair bandwidth meets the cut-set bound \eqref{eq:cutset} with equality.
We will define $\mathcal{C}$ by its parity-check equations, and we begin with some notation. Let $\{\lambda_{ij}\}_{i\in[n],j\in\{0,1,\dots,s-1\}}$ be $sn$ distinct elements of the field $F$.
Let $g$ be a bijection between the set of pairs $\{(i_1,i_2):i_1,i_2\in[n],i_1<i_2\}$ and the set $\{1,2,\dots,m\}$
defined in \eqref{eq:bn}.
For every $a=0,1,2,\dots,l-1$, we write its expansion in the base $(s^2-1)$ as
$a=(a_m,a_{m-1},\dots,a_1)$, i.e., $a=\sum_{j=0}^{m-1}a_{j+1}(s^2-1)^j$.
Define the following function
\begin{equation}\label{eq:deffnew}
\begin{aligned}
f:&\,[n]\times\{0,1,\dots,l-1\}\to \{0,1,\dots,s-1\}\\
&(i,a)\mapsto \Big(\sum_{j=1}^{i-1} b_2(a_{g(j,i)}) + \sum_{j=i+1}^n
b_1(a_{g(i,j)}) \Big) \Mod s,
\end{aligned}
\end{equation}
where $b_1(x)$ and $b_2(x)$ form the digits of the expansion of $x$ in the base $s$; see definition \eqref{eq:defb}.
Note that when $d=k+1$, the function $f$ defined in \eqref{eq:deffnew} is the same as the function defined in \eqref{eq:deff} in Section~\ref{sect:warmup}.
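This reduction is easy to confirm numerically. The following Python sketch (sample parameter $n=4$; digit conventions are ours) implements both versions of $f$ and checks that they coincide when $s=2$:

```python
from math import comb

def g(i1, i2):
    return comb(i2 - 1, 2) + i1

def f_old(i, a, n, m):
    # Function f of the d = k+1 construction \eqref{eq:deff} (ternary digits)
    d = [(a // 3**j) % 3 for j in range(m)]
    return (sum(d[g(j, i) - 1] == 2 for j in range(1, i))
            + sum(d[g(i, j) - 1] == 1 for j in range(i + 1, n + 1))) % 2

def f_new(i, a, n, m, s):
    # Function f of \eqref{eq:deffnew}: digits in base s^2 - 1; b1, b2 are base-s digits
    q = s * s - 1
    d = [(a // q**j) % q for j in range(m)]
    b1 = lambda x: x % s
    b2 = lambda x: x // s
    return (sum(b2(d[g(j, i) - 1]) for j in range(1, i))
            + sum(b1(d[g(i, j) - 1]) for j in range(i + 1, n + 1))) % s

# For s = 2 (i.e., d = k + 1): b2(x) = 1 iff x = 2 and b1(x) = 1 iff x = 1,
# so the two definitions coincide
n = 4
m = comb(n, 2)
for a in range(3**m):
    for i in range(1, n + 1):
        assert f_new(i, a, n, m, 2) == f_old(i, a, n, m)
print("f of (deffnew) reduces to f of (deff) when s = 2")
```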
\begin{definition}\label{def:ex}
The code $\mathcal{C}=\mathcal{C}_{2,d}$ is defined by the following $rl$ parity check equations.
$$
\sum_{i=1}^n \lambda_{i,f(i,a)}^t c_{i,a} = 0,\;
t=0,1,2,\dots,r-1 , \, a=0,1,2,\dots,l-1.
$$
\end{definition}
For a given $a=0,1,\dots,l-1$ the set of vectors $\{(c_{1,a},c_{2,a},\dots,c_{n,a})\}$ forms an MDS code of length $n$ and dimension $k.$
Therefore $\mathcal{C}$ is indeed an $(n,k,l)$ MDS array code.
Also note that when $d=k+1$, the code $\mathcal{C}$ is the same as the code defined in Section~\ref{sect:warmup}.
Next we show that $\mathcal{C}$ has optimal repair bandwidth for repairing any two failed nodes from any $d$ helper nodes.
We need several elements of notation which are similar to the notation used in the previous sections.
For $a=0,1,\dots,l-1$, $j\in[m],$ and $u\in\{0,1,2,\dots,s^2-2\}$, let
$a(j,u):=(a_m,\dots,a_{j+1},u,a_{j-1},\dots,a_1)$.
For $a=0,1,\dots,l-1$ and $i\in[n]$, we define
\begin{align*}
\mu_{i,i_1}^{(a,v_2)}:=\sum_{v_1=0}^{s-1} c_{i,a(g_{12},sv_2+v_1)},\; v_2=0,1,\dots,s-2, \\
\mu_{i,i_2}^{(a,v_1)}:=\sum_{v_2=0}^{s-1} c_{i,a(g_{12},sv_2+v_1)},\; v_1=0,1,\dots,s-2,
\end{align*}
where for brevity we again write $g_{12}$ instead of $g(i_1,i_2).$
The following lemma implies that $\mathcal{C}$ is an MDS code with the $(2,d)$ optimal repair property.
\begin{lemma}\label{lem:fn} Let the failed nodes be $C_{i_1}$ and $C_{i_2},$ $1\le i_1<i_2\le n$ and let $\mathcal{R}\subset[n],|\mathcal{R}|=d$
be a set of $d$ helper nodes.
For any $a\in\{0,1,\dots,l-1\}$ and any $v_2\in\{0,1,\dots,s-2\}$, the values
$\{c_{i_1,a(g_{12},sv_2+v_1)}, v_1=0,1,\dots,s-1\}$
and $\mu_{i_2,i_1}^{(a,v_2)}$ are uniquely determined by the set of values $\{\mu_{i,i_1}^{(a,v_2)}:i\in \mathcal{R}\}$.
Similarly, for any $v_1\in\{0,1,\dots,s-2\}$, the values
$\{c_{i_2,a(g_{12},sv_2+v_1)}, v_2=0,1,\dots,s-1\}$
and $\mu_{i_1,i_2}^{(a,v_1)}$ are uniquely determined by the set of values $\{\mu_{i,i_2}^{(a,v_1)}:i\in \mathcal{R}\}$.
\end{lemma}
\begin{IEEEproof}
The parity-check equations that correspond to the row indices $a(g_{12},0),a(g_{12},1),\linebreak[4]
\dots,a(g_{12},s^2-2)$ are as follows:
\begin{equation}\label{eq:s2}
\sum_{i=1}^n \lambda_{i,f(i,a(g_{12},u))}^t c_{i,a(g_{12},u)} = 0,
\;t=0,1,2,\dots,r-1,\,u=0,1,\dots,s^2-2.
\end{equation}
According to the definition of the function $f$ in \eqref{eq:deffnew}, if $i\ne i_1,i_2$, then the value of $f$ does not
depend on the value of the digit $a_{g_{12}}$. Thus, we have
$$
f(i,a(g_{12},0)) = f(i,a(g_{12},1)) = \dots = f(i,a(g_{12},s^2-2)),
\; i\in[n]\setminus\{i_1,i_2\}.
$$
Again according to \eqref{eq:deffnew}, for all $u=0,1,2,\dots,s^2-2$, we have
\begin{equation}\label{eq:dz}
\begin{aligned}
f(i_1,a(g_{12},u)) = \big( f(i_1,a(g_{12},0)) + b_1(u) \big) \mod s, \\
f(i_2,a(g_{12},u)) = \big( f(i_2,a(g_{12},0)) + b_2(u) \big) \mod s.
\end{aligned}
\end{equation}
Therefore, we are justified in using the following notation:
\begin{align}
\lambda_i&:=\lambda_{i,f(i,a(g_{12},0))} = \lambda_{i,f(i,a(g_{12},1))}
=\dots=\lambda_{i,f(i,a(g_{12},s^2-2))},\; i\not\in\{i_1,i_2\} \label{eq:rh1}\\
\lambda_{i_1,v}' &:= \lambda_{i_1,v\oplus f(i_1,a(g_{12},0))},\quad
\lambda_{i_2,v}' := \lambda_{i_2,v\oplus f(i_2,a(g_{12},0))},
\;v\in\{0,1,\dots,s-1\} \nonumber
\end{align}
where $\oplus$ is addition modulo $s$.
By \eqref{eq:dz}, for every $u=0,1,2,\dots,s^2-2$, we have
\begin{equation}\label{eq:rh2}
\begin{aligned}
\lambda_{i_1,f(i_1,a(g_{12},u))} = \lambda_{i_1,b_1(u)\oplus f(i_1,a(g_{12},0))}=
\lambda_{i_1,b_1(u)}'; \\
\lambda_{i_2,f(i_2,a(g_{12},u))} = \lambda_{i_2,b_2(u)\oplus f(i_2,a(g_{12},0))} = \lambda_{i_2,b_2(u)}'.
\end{aligned}
\end{equation}
Notice that
$$
\{\lambda_{i,0}',\lambda_{i,1}',\dots,\lambda_{i,s-1}'\} = \{\lambda_{i,0},\lambda_{i,1},\dots,\lambda_{i,s-1}\} \text{~for~} i\in\{i_1,i_2\},
$$
and that
$$
\lambda_i\in\{\lambda_{i,0},\lambda_{i,1},\dots,\lambda_{i,s-1}\}
\text{~for all~} i\in[n]\setminus\{i_1,i_2\}.
$$
Therefore $\lambda_{i_1,0}',\lambda_{i_1,1}',\dots,\lambda_{i_1,s-1}',\lambda_{i_2,0}',
\lambda_{i_2,1}',\dots,\lambda_{i_2,s-1}',
\lambda_i,i\in[n]\setminus\{i_1,i_2\}$ are all distinct.
Using \eqref{eq:rh1} and \eqref{eq:rh2}, we can write \eqref{eq:s2} as
\begin{align*}
(\lambda_{i_1,b_1(u)}')^t c_{i_1,a(g_{12},u)} + (\lambda_{i_2,b_2(u)}')^t c_{i_2,a(g_{12},u)}
+ \sum_{i\in[n]\setminus\{i_1,i_2\}} \lambda_i^t c_{i,a(g_{12},u)} = 0 \\
t=0,1,2,\dots,r-1,\; u=0,1,\dots,s^2-2.
\end{align*}
These equations have exactly the same form as the equations in \eqref{eq:asj}.
Therefore the remainder of the proof of this lemma follows the steps in the proof of Lemma~\ref{lem:jj}, and
there is no need to reproduce them here.
\end{IEEEproof}
This lemma enables us to set up a repair procedure for the nodes $C_{i_1}$ and $C_{i_2}$.
In the first round of repair, $C_{i_1}$ downloads the set of elements
\begin{equation}\label{eq:ccd}
\bigcup_{v_2=0}^{s-2}\{\mu_{i,i_1}^{(a,v_2)}:a_{g_{12}}=0\}
\end{equation}
from each helper node $C_i,i\in \mathcal{R}.$ In the same way, $C_{i_2}$ downloads the set of elements
$$
\bigcup_{v_1=0}^{s-2}\{\mu_{i,i_2}^{(a,v_1)}:a_{g_{12}}=0\}
$$
from each helper node $C_i,i\in \mathcal{R}$.
For future use, let us calculate the number of symbols that $C_{i_1}$ downloads from $C_i,i\in \mathcal{R},$ i.e., the cardinality of the set in \eqref{eq:ccd}. Since each digit of $a$ in its $(s^2-1)$-ary expansion can take $s^2-1$ possible values, $|\{\mu_{i,i_1}^{(a,v_2)}:a_{g_{12}}=0\}|=l/(s^2-1)$. The set in \eqref{eq:ccd} is the union of $s-1$ such sets, so its cardinality is $(s-1)l/(s^2-1)=l/(s+1)$.
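The counting in the preceding paragraph can be checked mechanically. A Python sketch (the parameters $n=8$, $k=4$, $d=6$ are sample values of ours, not from the text):

```python
from math import comb

n, k, d = 8, 4, 6
s = d + 1 - k                      # here s = 3
l = (s * s - 1) ** comb(n, 2)      # sub-packetization (s^2 - 1)^m

# Rows with the digit a_{g12} pinned to 0: one out of every s^2 - 1 rows
per_group = l // (s * s - 1)
# Union over v_2 = 0, 1, ..., s - 2 of the per-group download sets
downloaded = (s - 1) * per_group
assert downloaded == l // (s + 1)  # (s-1) l / (s^2 - 1) = l / (s + 1)

# Total bandwidth: both failed nodes, each contacting d helpers plus the other failed node
total = 2 * (d + 1) * (l // (s + 1))
print(downloaded, total)
```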
According to Lemma~\ref{lem:fn}, after the first round, $C_{i_1}$ knows the values of
\begin{equation}\label{eq:21}
\Big(\bigcup_{v_2=0}^{s-2}\bigcup_{v_1=0}^{s-1}\{c_{i_1,a(g_{12},sv_2+v_1)}:a_{g_{12}}=0\}\Big) \bigcup \Big(\bigcup_{v_2=0}^{s-2}\{\mu_{i_2,i_1}^{(a,v_2)} : a_{g_{12}}=0\}\Big),
\end{equation}
and $C_{i_2}$ knows the values of
\begin{equation}\label{eq:22}
\Big(\bigcup_{v_1=0}^{s-2}\bigcup_{v_2=0}^{s-1}\{c_{i_2,a(g_{12},sv_2+v_1)}:a_{g_{12}}=0\}\Big) \bigcup \Big(\bigcup_{v_1=0}^{s-2}\{\mu_{i_1,i_2}^{(a,v_1)} :a_{g_{12}}=0\}\Big).
\end{equation}
In the second round of the repair process, the nodes $C_{i_1},C_{i_2}$ exchange the second terms in \eqref{eq:21}-\eqref{eq:22}: namely,
$C_{i_1}$ downloads the elements in the set $\cup_{v_1=0}^{s-2}\{\mu_{i_1,i_2}^{(a,v_1)}: a_{g_{12}}=0\}$ from $C_{i_2}$, and
$C_{i_2}$ downloads the elements in the set $\cup_{v_2=0}^{s-2}\{\mu_{i_2,i_1}^{(a,v_2)}: a_{g_{12}}=0\}$ from $C_{i_1}$.
After the second round, $C_{i_1}$ knows the values of all the elements in the set
$$
\{c_{i_1,a(g_{12},u)}:a_{g_{12}}=0,u\in\{0,1,2,\dots,s^2-2\}\}
=\{c_{i_1,a}:a\in\{0,1,2,\dots,l-1\}\},
$$
and $C_{i_2}$ knows the values of all the elements in the set
$$
\{c_{i_2,a(g_{12},u)}:a_{g_{12}}=0,u\in\{0,1,2,\dots,s^2-2\}\}
=\{c_{i_2,a}:a\in\{0,1,2,\dots,l-1\}\},
$$
i.e., both $C_{i_1}$ and $C_{i_2}$ have recovered all their coordinates.
Moreover, in the course of the repair process, $C_{i_1}$ downloads
$l/(s+1)$ symbols of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\{i_2\}$, and $C_{i_2}$
downloads $l/(s+1)$ symbols of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\{i_1\}$.
Therefore the total repair bandwidth is $2(d+1)l/(s+1)$, meeting the cut-set bound \eqref{eq:cutset} with equality.
\subsection{Optimal repair of two erasures from arbitrary number of helper nodes}\label{sect:hew}
In this section, we point out a technique which has been used extensively but somewhat implicitly in the literature, and we use it
to construct $(n,k)$ MDS array codes with the universal $(2,d)$-optimal repair property for all $k\le d\le n-2$ simultaneously.
We only aim to convey the main ideas underlying the universal constructions; we will not develop all the details rigorously, since doing so would require new notation and lead to a tedious and redundant presentation. The initial
idea to use the expansion of the row index is due to \cite{Cadambe11,Tamo13}, and it was used in \cite{Ye16} to construct
explicit universal families of regenerating codes for centralized repair.
To illustrate this technique, let us start from the simplest case of repairing a single erasure. Returning to the $(n,k,s=d+1-k)$ MDS code defined by the parity-check equations in \eqref{eq:org}, we observe that the proof of Lemma~\ref{lem:bb} gives a repair scheme for the first node that relies on downloading a $\frac1{s}$ proportion of the symbols from each of the $d$ helper nodes (it also gives the $\mu_i$'s, which at this point we ignore). Moreover, as already remarked, with straightforward changes to the construction we can obtain a code
with optimal repair of the $i$th node for any given $i=1,\dots,n.$ Denote this code by $\mathcal{C}_i.$
The next step is to show how two codes of this kind can be combined to construct an $(n,k,l=s^2)$ MDS code that supports optimal
repair of each of the first two nodes from any $d$ helper nodes. For instance, take the codes $\mathcal{C}_1,\mathcal{C}_2$ defined over a field $F$
of size at least $n+2s-2,$ and let $\lambda_{1,0},\lambda_{1,1},\dots,\linebreak[3]\lambda_{1,s-1}, \lambda_{2,0},\lambda_{2,1},\dots,\lambda_{2,s-1}, \lambda_3,\lambda_4,\dots,\lambda_n$ be distinct elements of $F$.
Define an $(n,k,s^2)$ MDS array code $\mathcal{C}=\mathcal{C}_1\odot\mathcal{C}_2$ over $F$ by the following $rs^2$ parity-check equations:
\begin{equation}\label{eq:pol}
\lambda_{1,a_1}^t c_{1,a} + \lambda_{2,a_2}^t c_{2,a} + \sum_{i=3}^n \lambda_i^t c_{i,a} = 0, \quad
a=0,1,\dots,s^2-1, \quad t=0,1,\dots,r-1,
\end{equation}
where $(a_1,a_2)$ is the two-digit $s$-ary expansion of the row index $a\in\{0,1,\dots,s^2-1\}$.
For the repair of the first node, we fix $a_2$ and let $a_1$ take all the values in the set $\{0,1,\dots,s-1\}$. In this way we divide the coordinates of each node into $s$ groups according to the value of $a_2$, and the parity check equations
that correspond to each group have exactly the same structure as \eqref{eq:org}. Therefore we can optimally repair the first node from any $d$ helper nodes. At the same time, fixing $a_1$ and varying $a_2$, we can optimally repair the second node in the same way.
It is clear that the code $\mathcal{C}$ defined by \eqref{eq:pol} is obtained by a combination of the codes $\mathcal{C}_1$ and $\mathcal{C}_2$
which is similar to the so-called serial concatenation \cite{BDMP98}.
Now it is easily seen that the code $\mathcal{C}_{1,d}:=\mathcal{C}_1 \odot \mathcal{C}_2 \odot \dots \odot \mathcal{C}_n$ has the $(1,d)$-optimal repair property. In fact, this code family already appeared in the literature; see Construction 2 in \cite{Ye16}.
Now let us consider cooperative repair of two erasures. For $\mathcal{F}\subseteq[n], |\mathcal{F}|=2$ and $k\le d\le n-2$, let $\mathcal{C}_{\mathcal{F},d}$ be the $(n,k,l=s^2-1)$ MDS array code that can optimally repair the failed nodes $C_i,i\in\mathcal{F}$ from any $d$ helper nodes. Note that $\mathcal{C}_{\{1,2\},d}$ is the code defined by \eqref{eq:asj}, and we previously denoted it as $\mathcal{C}_{2,d}^{(0)}$.
As before, the specific choice of $\mathcal{F}$ is not important, and we can construct a code $\mathcal{C}_{\mathcal{F},d}$
with the same structure and parameters as $\mathcal{C}_{\{1,2\},d}$ for any $2$-subset $\mathcal{F}\subset [n].$
Now it is clear that the code $\mathcal{C}_{2,d}$ in Definition~\ref{def:ex} is the concatenation of all $\mathcal{C}_{\mathcal{F},d}$ such that $\mathcal{F}\subseteq[n], |\mathcal{F}|=2$, i.e.,
$$
\mathcal{C}_{2,d} = \bigodot\limits_{\mathcal{F}\subseteq[n], |\mathcal{F}|=2} \mathcal{C}_{\mathcal{F},d}.
$$
Following this line of thought, we can easily construct an $(n,k)$ MDS array code $\mathcal{C}_2^U$ with the {\em universal $(2,d)$-optimal repair property} for all $k\le d\le n-2$ simultaneously. Namely, the concatenated code\footnote{{It is easy to see that the code $\mathcal{C}_{2,n-2}$ has the $(2,d)$-optimal repair property not only for $d=n-2,$ but also for $d=k.$ Therefore in the concatenation we do not need to include $\mathcal{C}_{2,k}$.}}
$$
\mathcal{C}_2^U := \bigodot\limits_{k+1\le d\le n-2} \mathcal{C}_{2,d}
$$
can optimally repair any two failed nodes from any subset of $d$ helper nodes as long as $d\ge k$. The size of the finite field is
determined by the code $\mathcal{C}_{2,n-2}$ and is at least $(r-1)n$, and the sub-packetization of the code $\mathcal{C}_2^U$ equals
$
\prod_{d=k+1}^{n-2}\big((d-k+1)^2-1\big)^{\binom n2}.
$
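For concreteness, the sub-packetization of the universal code can be computed as follows (a Python sketch; the sample parameters are ours):

```python
from math import comb

def universal_subpacketization(n, k):
    # l of C_2^U = prod_{d=k+1}^{n-2} ((d - k + 1)^2 - 1)^{C(n, 2)}
    m = comb(n, 2)
    l = 1
    for d in range(k + 1, n - 1):   # d = k + 1, ..., n - 2
        s = d - k + 1
        l *= (s * s - 1) ** m
    return l

# e.g., n = 7, k = 3: the factors come from d = 4 (s = 2) and d = 5 (s = 3),
# each raised to the power C(7, 2) = 21, so l = 3^21 * 8^21 = 24^21
print(universal_subpacketization(7, 3) == 24 ** 21)  # -> True
```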
\section{Cooperative $(h,k+1)$ optimal codes for general $h$}\label{sect:h}
\subsection{Repairing the first $h$ nodes from any $d=k+1$ helper nodes} \label{sect:gh}
In this section we present a construction of MDS array codes that can optimally repair the first $h$ nodes from any $d=k+1$
helper nodes for any given $h=2,\dots,r-1$. More specifically, given any $k<n,$ any $h\le r-1,$ and a finite field $F$ of
cardinality $|F|\ge n+h$, we present an $(n,k,h+1)$ MDS array code $\mathcal{C}=\mathcal{C}_{h,k+1}^{(0)}$ over the field $F$ that has the following property.
When the first $h$ nodes of $\mathcal{C}$ fail, the repair of each failed node can be accomplished by connecting to {\em any} $k+1$ helper nodes and downloading $k+h$ symbols of $F$ in total from these helper nodes as well as from other failed nodes.
Clearly, the amount of downloaded data meets the cut-set bound \eqref{eq:cutset} with equality.
Let $(\lambda_{ij},i=1,\dots,h, j=0,1),
\lambda_{h+1},\lambda_{h+2},\dots,\lambda_n$ be $n+h$ distinct elements of the field $F$.
The code $\mathcal{C}$ is defined by the following parity check equations.
\begin{equation}\label{eq:eov}
\begin{aligned}
\sum_{i=1}^h\lambda_{i,0}^t c_{i,0} + \sum_{i=h+1}^n \lambda_i^t c_{i,0} & = 0,\; t=0,1,\dots,r-1; \\
\lambda_{a,1}^t c_{a,a} + \sum_{i\in[h]\setminus\{a\}} \lambda_{i,0}^t c_{i,a} + \sum_{i=h+1}^n \lambda_i^t c_{i,a} & = 0,\; t=0,1,\dots,r-1,\, a=1,2,\dots,h.
\end{aligned}
\end{equation}
For every $a=0,1,\dots,h,$ the set of vectors $\{(c_{1,a},c_{2,a},\dots,c_{n,a})\}$ forms an $(n,k)$ MDS code,
therefore $\mathcal{C}$ is indeed an $(n,k,h+1)$ MDS array code.
When $h=2$, this code is the same as the code defined in Section~\ref{sect:bdblock}.
For $i\in[n]$ and $j\in[h]$, define
$$
\mu_{ij}:=c_{i,0}+c_{i,j}.
$$
Similarly to the previous sections, we have the following lemma:
\begin{lemma}\label{lem:gh} Let $C_1,\dots,C_h$ be the failed nodes.
For any set of helper nodes $\mathcal{R}\subseteq \{h+1,h+2,\dots,n\},|\mathcal{R}|=k+1$ and any $j\in[h]$, the values of $c_{j,0},c_{j,j}$ and the sums $\{\mu_{ij},i\in[h]\setminus\{j\}\}$ are uniquely determined by $\{\mu_{ij}:i\in \mathcal{R}\}$.
\end{lemma}
The proof of this lemma is the same as that of Lemma~\ref{lem:bb}, and we do not repeat it here.
This lemma implies that the first $h$ nodes of $\mathcal{C}$ can be repaired with optimal bandwidth.
In the first round, every failed node $C_j,j\in[h]$ downloads $\mu_{ij}$ from each helper node $C_i,i\in \mathcal{R}$.
According to Lemma~\ref{lem:gh}, after the first round, for every $j\in[h]$, the node $C_j$ knows the values of
$c_{j,0},c_{j,j}$ and $\{\mu_{ij},i\in[h]\setminus\{j\}\}$.
In the second round, every failed node $C_j,j\in[h]$ downloads the sum $\mu_{ji}$ from each of the other failed nodes $C_i,i\in[h]\setminus\{j\}$. After the second round, every failed node $C_j,j\in[h]$ knows the values of $c_{j,0},c_{j,j}$ and the sums $c_{j,0}+c_{j,i},i\in[h]\setminus\{j\}$. Therefore $C_j$ can recover all its coordinates. Moreover, in the whole repair process, every failed node $C_j,j\in[h]$ downloads only one symbol of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup [h]\setminus\{j\}$.
Therefore the total repair bandwidth is $h(k+h)$, meeting the cut-set bound \eqref{eq:cutset} with equality.
\subsection{Repairing arbitrary $h$ nodes}\label{sect:lo}
In this section we construct explicit MDS array codes that support $(h,k+1)$-optimal repair of any
$h$-tuple of failed nodes.
More specifically, given any $k<n,$ any $h\le r-1,$ and a finite field $F$ of cardinality $|F|\ge 2n$,
we present an $(n,k,l=(h+1)^m)$ MDS array code $\mathcal{C}=\mathcal{C}_{h,k+1}$ over the field $F$, where $m:=\binom{n}{h}$. The code $\mathcal{C}$ has
the property that for {\em any} $h$-subset $\mathcal{F}$ of $[n],$ the repair of each failed node $C_i,i\in\mathcal{F}$ can be accomplished by connecting
to {\em any} $k+1$ helper nodes and downloading $(k+h)l/(h+1)$ symbols of $F$ in total from these helper nodes as well as from other failed nodes.
Clearly, the amount of downloaded data meets the cut-set bound \eqref{eq:cutset} with equality.
As in the previous sections, we will define $\mathcal{C}$ by its parity-check equations, and we begin with some notation. Let $\{\lambda_{ij}\}_{i\in[n],j\in\{0,1\}}$ be $2n$ distinct elements of the field $F$.
Let $g$ be a bijection between the set of $h$-subsets $\{\mathcal{F}:\mathcal{F}\subseteq [n],|\mathcal{F}|=h\}$ and the numbers $\{1,2,\dots,m\}.$
As in \eqref{eq:bn}, the particular choice of $g$ does not matter; for instance, we can take
\begin{equation}\label{eq:Dg}
g(\{i_h,i_{h-1},\dots,i_1\})=\sum_{j=0}^{h-1}\binom{i_{h-j}-1}{h-j}+1
\text{~~for all~} n\ge i_h > i_{h-1} >\dots >i_1\ge 1,
\end{equation}
where we use the convention that $\binom{n_1}{n_2}=0$ if $n_1<n_2$.
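As a side check (ours, not part of the construction), the map $g$ in \eqref{eq:Dg} is the standard combinatorial numbering of $h$-subsets; the following short Python sketch, with illustrative parameters $n=7$ and $h=3$, confirms that it is a bijection onto $\{1,2,\dots,\binom{n}{h}\}$:

```python
from itertools import combinations
from math import comb

def g(subset):
    # sort as i_h > i_{h-1} > ... > i_1 and apply the displayed formula;
    # math.comb(a, b) already returns 0 when a < b, matching the convention
    desc = sorted(subset, reverse=True)
    h = len(desc)
    return sum(comb(desc[j] - 1, h - j) for j in range(h)) + 1

n, h = 7, 3
values = sorted(g(F) for F in combinations(range(1, n + 1), h))
assert values == list(range(1, comb(n, h) + 1))   # a bijection onto {1, ..., m}
print("g is a bijection onto {1, ...,", comb(n, h), "} for n = 7, h = 3")
```

The same enumeration passes for any small $n$ and $h$, consistent with the remark that the particular choice of $g$ is immaterial.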
For a given $a=0,1,2,\dots,l-1$, let $a_m,a_{m-1},\dots,a_1$ be the digits of its expansion in the base $h+1,$ i.e., $a=\sum_{j=0}^{m-1}a_{j+1}(h+1)^j$.
For a set $\mathcal{F}\subseteq[n]$ and an element $i\in\mathcal{F}$, let
$z(\mathcal{F},i)=|\{j:j\in\mathcal{F},j\le i\}|$ be the number of elements in $\mathcal{F}$ that are no larger than $i$.
Define the following function:
\begin{equation}\label{eq:lo}
\begin{aligned}
f:\,&[n]\times\{0,1,\dots,l-1\}\to\{0,1\}\\
&(i,a)\mapsto\Big(\sum_{\mathcal{F}\subseteq [n],|\mathcal{F}|=h,\;\mathcal{F}\ni\, i} \mathbbm{1}\{a_{g(\mathcal{F})}=z(\mathcal{F},i)\} \Big) \Mod 2,
\end{aligned}
\end{equation}
where $\mathbbm{1}$ is the indicator function.
Finally, given $a=0,1,\dots,l-1,\,i\in[m]$ and $u=0,1,2,\dots,h$, let
$a(i,u):=(a_m,\dots,a_{i+1},u,a_{i-1},\dots,a_1).$
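To make the role of $f$ concrete, the following sketch (ours, with illustrative parameters $n=5$, $h=2$) verifies numerically the coefficient-selection property used below in the proof of Lemma~\ref{lem:lo}: replacing the digit of $a$ in position $g(\mathcal{F})$ leaves $f(i,\cdot)$ unchanged for $i\notin\mathcal{F}$, while for a failed node $i_j\in\mathcal{F}$ it flips the value exactly when that digit is set to $j$:

```python
from itertools import combinations
from math import comb

n, h = 5, 2
m = comb(n, h)
subsets = list(combinations(range(1, n + 1), h))

def g(F):
    desc = sorted(F, reverse=True)
    return sum(comb(desc[j] - 1, h - j) for j in range(h)) + 1

def z(F, i):
    # number of elements of F not exceeding i
    return sum(1 for j in F if j <= i)

def f(i, digits):
    # digits[t-1] is the base-(h+1) digit a_t of a, t = 1, ..., m
    return sum(1 for F in subsets if i in F and digits[g(F) - 1] == z(F, i)) % 2

def replace(digits, pos, u):
    # a(pos, u): replace the digit in position pos by u
    d = list(digits)
    d[pos - 1] = u
    return d

F = (2, 4)                               # a pair of failed nodes; i_1 = 2, i_2 = 4
p = g(F)
for a in ([0] * m, [t % (h + 1) for t in range(m)]):
    for i in range(1, n + 1):
        vals = [f(i, replace(a, p, u)) for u in range(h + 1)]
        if i not in F:                   # helper nodes: value independent of u
            assert len(set(vals)) == 1
        else:                            # failed node i_j: value flips only at u = j
            j = z(F, i)
            assert vals[j] != vals[0]
            assert all(vals[u] == vals[0] for u in range(h + 1) if u != j)
print("coefficient-selection property of f verified for F =", F)
```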
\begin{definition}
The code $\mathcal{C}=\mathcal{C}_{h,k+1}$ is defined by the following $rl$ parity-check equations:
$$
\sum_{i=1}^n \lambda_{i,f(i,a)}^t c_{i,a} = 0
,\; t=0,1,2,\dots,r-1;\, a=0,1,2,\dots,l-1.
$$
\end{definition}
For a given $a=0,1,2,\dots,l-1$ the vectors $(c_{1,a},c_{2,a},\dots,c_{n,a})$ form an $(n,k)$ MDS code.
Therefore $\mathcal{C}$ is indeed an $(n,k,l)$ MDS array code.
Let us show that $\mathcal{C}$ has the $(h,k+1)$-optimal repair property.
As before, we define sums of particular entries of the $i$th node. Namely, let $\mathcal{F}=\{i_1,i_2,\dots,i_h\}$, where $i_1<i_2<\dots<i_h$, be an $h$-subset of $[n].$
Given $a=0,1,\dots,l-1,j\in[h]$ and $i\in[n]$, let
$$
\mu_{i,i_j}^{(a)}:=c_{i,a(g(\mathcal{F}),0)}+c_{i,a(g(\mathcal{F}),j)}.
$$
The following lemma implies the optimal bandwidth of $\mathcal{C}$ for repairing $h$ failed nodes.
\begin{lemma}\label{lem:lo} Let $\mathcal{F}=\{i_1,i_2,\dots,i_h\}$ be the set of failed nodes.
For any set of helper nodes $\mathcal{R}\subseteq [n]\setminus\mathcal{F},|\mathcal{R}|=k+1$, any $j\in[h],$ and any $a\in\{0,1,\dots,l-1\}$,
the values of $c_{i_j,a(g(\mathcal{F}),0)},c_{i_j,a(g(\mathcal{F}),j)}$ and $\{\mu_{i,i_j}^{(a)}: i\in\mathcal{F}\setminus\{i_j\}\}$ are uniquely determined by $\{\mu_{i,i_j}^{(a)}:i\in \mathcal{R}\}$.
\end{lemma}
The proof of this lemma relies on the same ideas as the proofs of Lemmas~\ref{lem:cv} and \ref{lem:fn}. For completeness we outline it at the end of this section.
Let us explain why Lemma~\ref{lem:lo} implies that $C_i,i\in\mathcal{F}$ can be repaired with optimal bandwidth.
In the first round of the repair process, every failed node $C_{i_j},j\in[h]$ downloads
$\{\mu_{i,i_j}^{(a)}:a_{g(\mathcal{F})}=0\}$ from each helper node $C_i,i\in \mathcal{R}$.
According to Lemma~\ref{lem:lo}, after the first round, $C_{i_j}$ knows the values of
$$
\{c_{i_j,a}:a_{g(\mathcal{F})}=0\}\cup\{c_{i_j,a(g(\mathcal{F}),j)}:a_{g(\mathcal{F})}=0\}
\cup\{c_{i,a}+c_{i,a(g(\mathcal{F}),j)}:a_{g(\mathcal{F})}=0,i\in\mathcal{F}\setminus\{i_j\}\}.
$$
In the second round of the repair process, every failed node $C_{i_j},j\in[h]$ downloads $\{c_{i_j,a}+c_{i_j,a(g(\mathcal{F}),j')}:a_{g(\mathcal{F})}=0\}$ from each of the other failed nodes $C_{i_{j'}},j'\in[h]\setminus\{j\}$. As a result,
$C_{i_j}$ knows the values of all the elements in the set
$$
\{c_{i_j,a(g(\mathcal{F}),u)}:a_{g(\mathcal{F})}=0,u=0,1,\dots,h\}
=\{c_{i_j,a}:a\in\{0,1,2,\dots,l-1\}\},
$$
or, in other words, $C_{i_j}$ can recover all its coordinates. As for the repair bandwidth expended during
the two rounds of communication, every failed node $C_{i_j},j\in[h]$ downloads $l/(h+1)$ symbols of $F$ from each of the nodes $C_i,i\in \mathcal{R}\cup\mathcal{F}\setminus\{i_j\}$.
Therefore the total repair bandwidth is $h(k+h)l/(h+1)$, meeting the cut-set bound \eqref{eq:cutset} with equality.
\vspace*{0.1in}{\em Proof of Lemma~\ref{lem:lo}:}
The parity-check equations that correspond to the rows labeled by
$a(g(\mathcal{F}),0),\linebreak[4] a(g(\mathcal{F}),1),\dots,a(g(\mathcal{F}),h)$ are as follows:
\begin{equation}\label{eq:eli}
\sum_{i=1}^n \lambda_{i,f(i,a(g(\mathcal{F}),u))}^t c_{i,a(g(\mathcal{F}),u)} = 0,\;
t=0,1,2,\dots,r-1,\, u=0,1,2,\dots,h.
\end{equation}
According to the definition of the function $f$ in \eqref{eq:lo}, if $i\not\in \mathcal{F},$ then the value of $f(i,a)$ does not
depend on the digit of $a$ in position $g(\mathcal{F}).$ Thus we have
$$
f(i,a(g(\mathcal{F}),0)) = f(i,a(g(\mathcal{F}),1)) = \dots = f(i,a(g(\mathcal{F}),h)),\; i\in[n]\setminus\mathcal{F}.
$$
Likewise we have for any $j\in[h]$
\begin{align*}
f(i_j,a(g(\mathcal{F}),0)) &\neq f(i_j,a(g(\mathcal{F}),j)),\\
f(i_j,a(g(\mathcal{F}),0)) &= f(i_j,a(g(\mathcal{F}),j')), \;j'\in[h]\backslash\{j\}.
\end{align*}
Thus we are justified in using the following notation:
\begin{align}
\lambda_i&:=\lambda_{i,f(i,a(g(\mathcal{F}),0))} = \lambda_{i,f(i,a(g(\mathcal{F}),1))} = \dots
=\lambda_{i,f(i,a(g(\mathcal{F}),h))},\;i\in[n]\backslash\mathcal{F} ; \label {eq:ol1}\\
&\begin{array}{l}
\lambda_{i_j,0}' := \lambda_{i_j,f(i_j,a(g(\mathcal{F}),0))} = \lambda_{i_j,f(i_j,a(g(\mathcal{F}),j'))}
, j\in[h],\, j'\in[h]\setminus\{j\} ;\\[.1in]
\lambda_{i_j,1}' := \lambda_{i_j,f(i_j,a(g(\mathcal{F}),j))},\;j\in[h].
\end{array}\label{eq:ol2}
\end{align}
Notice that
\begin{gather*}
\lambda_{i_j,0}'\neq \lambda_{i_j,1}' \text{~and~}
\{\lambda_{i_j,0}', \lambda_{i_j,1}'\}=\{\lambda_{i_j,0}, \lambda_{i_j,1}\}
\text{~for all~} j\in[h], \\
\lambda_i\in\{\lambda_{i,0},\lambda_{i,1}\},\; i\in[n]\setminus\mathcal{F}.
\end{gather*}
Therefore the elements $\lambda_{i_1,0}',\lambda_{i_2,0}',\dots,\lambda_{i_h,0}',
\lambda_{i_1,1}',\lambda_{i_2,1}',\dots,\lambda_{i_h,1}',
\lambda_i,i\in[n]\setminus\mathcal{F}$ are all distinct.
Now we can write \eqref{eq:eli} as
\begin{align*}
\sum_{j=1}^h (\lambda_{i_j,0}')^t c_{i_j,a(g(\mathcal{F}),0)} + \sum_{i\in[n]\setminus\mathcal{F}} \lambda_i^t c_{i,a(g(\mathcal{F}),0)} = 0 ,\; t=0,1,\dots,r-1; \\
(\lambda_{i_u,1}')^t c_{i_u,a(g(\mathcal{F}),u)} + \sum_{j\in[h]\setminus\{u\}} (\lambda_{i_j,0}')^t c_{i_j,a(g(\mathcal{F}),u)} + \sum_{i\in[n]\setminus\mathcal{F}} \lambda_i^t c_{i,a(g(\mathcal{F}),u)} = 0 \\ t=0,1,\dots,r-1;\; u=1,2,\dots,h.
\end{align*}
These equations have exactly the same form as the equations in \eqref{eq:eov}.
Therefore the remainder of the proof of Lemma~\ref{lem:lo} follows the steps in the proof of Lemma~\ref{lem:gh} (or Lemma~\ref{lem:bb}),
and we do not repeat them here.
\section{Cooperative $(h,d)$-optimal codes for general $h$ and general $d$}\label{sect:fg}
\subsection{Repairing the first $h$ nodes from any $d$ helper nodes} \label{sect:hd0}
In this section we present a construction of MDS array codes that can optimally repair the first $h$ nodes from any $d\ge k+1$ helper nodes for any given $2\le h\le n-d\le r-1.$ (We do not consider the case of $d=k$ because codes for it were constructed earlier in \cite{Shum13}.)
Let $s:=d+1-k$.
Given a finite field $F$ of
cardinality $|F|\ge n+h(s-1)$, we present an $(n,k,l=(h+s-1)(s-1)^{h-1})$ MDS array code $\mathcal{C}=\mathcal{C}_{h,d}^{(0)}$ over the field $F$ that has the following property:
When the first $h$ nodes of $\mathcal{C}$ fail, the repair of each failed node can be accomplished by connecting to {\em any} $d$ helper nodes and downloading
$$
(d+h-1)\frac{l}{d+h-k} =
(d+h-1)(s-1)^{h-1}
$$
symbols of $F$ in total from these helper nodes as well as from the other failed nodes.
Clearly, the amount of downloaded data meets the cut-set bound \eqref{eq:cutset} with equality.
Let $(\lambda_{ij},i=1,\dots,h, j=0,1,\dots,s-1),
\lambda_{h+1},\lambda_{h+2},\dots,\lambda_n$ be $hs+n-h$ distinct elements of the field $F$.
Define
\begin{equation}\label{eq:dA}
A:=\{\underline{a}=(a_1,a_2,\dots,a_h) :\underline{a}\in \{0,1,\dots, s-1\}^h, \sum_{i=1}^h \mathbbm{1}\{a_i=s-1\}\le 1\},
\end{equation}
i.e., $A$ is the subset of $\{0,1,\dots, s-1\}^h$ consisting of all the $\underline{a}$ such that at most one of its coordinates is $s-1$.
It is easy to verify that
\begin{equation}\label{eq:cardA}
|A|=(h+s-1)(s-1)^{h-1} = l.
\end{equation}
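The cardinality claim \eqref{eq:cardA} can be confirmed by direct enumeration; the short sketch below (ours) counts the tuples for a range of small $h$ and $s$:

```python
from itertools import product

for h in range(1, 5):
    for s in range(2, 6):
        # A: h-tuples over {0, ..., s-1} with at most one coordinate equal to s-1
        A = [a for a in product(range(s), repeat=h)
             if sum(x == s - 1 for x in a) <= 1]
        assert len(A) == (h + s - 1) * (s - 1) ** (h - 1)
print("cardinality formula |A| = (h+s-1)(s-1)^(h-1) verified for h <= 4, s <= 5")
```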
Let $C=(C_1,C_2,\dots,C_n)\in \mathcal{C}$ be a codeword of the code $\mathcal{C}$.
In this section, we use a multi-index (vector) notation $\underline{a}=(a_1,a_2,\dots,a_h)$ to label the entries of each node $C_i$, so
the node has the form $C_i=(c_{i,\underline{a}},\underline{a}\in A).$ In previous sections we opted for numbering the
entries of $C_i$ with integers even though on several occasions (e.g., in Sections \ref{sect:warmup}, \ref{sect:rb})
we have essentially relied on the multi-index notation. We could follow this pattern in this section as well; however, the integer numbering
would not be consecutive, and we find the vector notation much more convenient for the presentation.
We note that, according to \eqref{eq:cardA}, the dimension of $C_i$ over $F$ is indeed $l$.
\begin{definition}
The code $\mathcal{C}$ is defined by the following parity check equations.
\begin{equation}\label{eq:pcq}
\sum_{i=1}^h \lambda_{i,a_i}^t c_{i,\underline{a}} + \sum_{i=h+1}^n \lambda_i^t c_{i,\underline{a}}=0, \quad t=0,1,\dots,r-1, \quad \underline{a}\in A.
\end{equation}
\end{definition}
Since for each $\underline{a}\in A$, the set of vectors
$\{(c_{1,\underline{a}}, c_{2,\underline{a}},\dots, c_{n,\underline{a}})\}$ forms an $(n,k)$ MDS code, $\mathcal{C}$ is indeed an $(n,k,l)$ MDS array code.
\subsubsection{Intuition behind the repair scheme}
We begin with an informal discussion of the code construction and the accompanying repair scheme.
According to the cut-set bound \eqref{eq:cutset}, if we assume that the amount of communication between any two nodes is the same ({\em uniform download}), which is the case for our repair scheme, then this amount is equal to $\frac{l}{h+d-k} = (s-1)^{h-1}$ symbols of $F$. More precisely, in the first round of repair process, each failed node should download $(s-1)^{h-1}$ symbols of $F$ from each helper node, and in the second round, each failed node should download $(s-1)^{h-1}$ symbols of $F$ from each of the other failed nodes.
For $i\in[h]$ and $u\in\{0,1,\dots,s-1\}$,
define $\underline{a}(i,u):=(a_1,a_2,\dots,a_{i-1},u,a_{i+1},a_{i+2},\dots,a_h)$.
For $i\in[h]$, define the set of indices
$$
B_i:=\{\underline{a}=(a_1,a_2,\dots,a_h): a_i\in[0,s-1], a_j\in[0,s-2] \text{ for all }j\ne i\},
$$
where $[0,t]:=\{0,1,\dots,t\}$ for an integer $t$.
Define $A_0:=\{0,1,\dots,s-2\}^h$. It is easy to see that
$$
\bigcup_{i=1}^h B_i=A,\quad
\bigcap_{i=1}^h B_i=A_0.
$$
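The following enumeration (ours, with illustrative parameters $h=3$, $s=4$) confirms the identities above together with the counting facts used in the repair analysis, namely $|B_i|=s(s-1)^{h-1}$ and $|A\setminus B_i|=(h-1)(s-1)^{h-1}$:

```python
from itertools import product

h, s = 3, 4
A = {a for a in product(range(s), repeat=h) if sum(x == s - 1 for x in a) <= 1}
A0 = set(product(range(s - 1), repeat=h))
# B_i: coordinate i unrestricted, all other coordinates at most s - 2
B = [
    {a for a in product(range(s), repeat=h)
     if all(a[j] <= s - 2 for j in range(h) if j != i)}
    for i in range(h)
]
assert set.union(*B) == A                 # the B_i cover exactly A
assert set.intersection(*B) == A0         # and intersect in A_0
for Bi in B:
    assert len(Bi) == s * (s - 1) ** (h - 1)            # entries recovered in round 1
    assert len(A - Bi) == (h - 1) * (s - 1) ** (h - 1)  # entries still missing
print("set identities verified for h = 3, s = 4")
```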
In the first round of repair, each failed node $C_i,i\in[h]$ connects to $d$ helper nodes $C_j,j\in\mathcal{R}$ and downloads $(s-1)^{h-1}$ symbols from each of them, so altogether it acquires $d (s-1)^{h-1}$ symbols of $F$.
This enables $C_i$ to recover a certain portion of its entries, which we can quantify relying on the cut-set bound.
For this, we observe that this bound gives a lower estimate on the repair bandwidth for a given size $l$ of each node. At the same time,
for a given repair bandwidth, it gives an upper estimate on the node size, and in particular a
bound on the maximum number of entries of the node that can be recovered from a certain amount of downloaded data.
Using this observation, let us take $|\mathcal{F}|=1$ and $|\mathcal{R}|=d$ in \eqref{eq:csce} (or in \eqref{eq:cutset}), and
replace the left-hand side with $d (s-1)^{h-1}$. Solving for $l$, we see that each failed node can recover at most $s(s-1)^{h-1}$ coordinates.
At the same time, the cardinality of the set $B_i$ is exactly $s(s-1)^{h-1}$, and this is the subset of the entries of $C_i$ that will be
repaired after the first round of communication. Namely, according to Lemma~\ref{lem:tch}, the set of values
$\{c_{i,\underline{a}}:\underline{a}\in B_i\}$ can be found relying on the values
$$
\Big\{\Big(\sum_{u=0}^{s-1} c_{j,\underline{a}(i,u)}: \underline{a}\in B_i, a_i = 0 \Big), j\in \mathcal{R}\Big\}
$$
(see Lemma \ref{lem:sfg} below), and therefore, the node $C_i$ downloads the set $\{\sum_{u=0}^{s-1} c_{j,\underline{a}(i,u)}: \underline{a}\in B_i, a_i = 0\}$
from each of the helper nodes $C_j,j\in\mathcal{R}$. Since for every $\underline{a}\in B_i$ the coordinate $a_i$ can take $s$ possible values, the number of symbols downloaded from each of them is exactly $\frac{|B_i|}{s}=(s-1)^{h-1}$.
To move forward, we note that Lemma~\ref{lem:tch} gives us more: namely, apart from the values $\{c_{i,\underline{a}}:\underline{a}\in B_i\},$ each $C_i, i\in [h]$ can also compute
$(s-1)^{h-1}$ {\em sums of coordinates of the other failed nodes}. Namely, after the first round, $C_i$ can find the values
\begin{equation}\label{eq:kjk}
\Big\{\sum_{u=0}^{s-1} c_{j,\underline{a}(i,u)}: \underline{a}\in B_i, a_i = 0\Big\} \quad \text{for all }j\in[h]\setminus\{i\}.
\end{equation}
This is the information that will be exchanged between the failed nodes $C_i, i\in [h]$ in the second round.
To describe the second part of the repair scheme, we note that the number of coordinates still not available at the node $C_i$ equals
$$
|A\setminus B_i|=
l-s(s-1)^{h-1}=(h-1)(s-1)^{h-1}.
$$
As noted above (again assuming uniform download), in the second round each failed node should download $(s-1)^{h-1}$ symbols of $F$ from each of the other $(h-1)$ failed nodes. Therefore, in the second round, each failed node should acquire $(h-1)(s-1)^{h-1}$ symbols of $F,$ which matches the number of the still missing symbols of the node.
To decide what to download we turn to \eqref{eq:kjk}, noting that
each failed node $C_i$ knows the sums in \eqref{eq:kjk} for all the other failed nodes $C_j,j\in[h]\setminus\{i\}.$
For a fixed $j$, there are $(s-1)^{h-1}$ symbols in the set \eqref{eq:kjk},
so a natural thing to do in the second round is to let $C_i$ transmit the sums in \eqref{eq:kjk} to each
of the remaining failed nodes $C_j,j\in[h]\setminus\{i\}$.
Since every failed node $C_j$ knows $\{c_{j,\underline{a}}:\underline{a}\in B_j\}$ after the first round and $A_0\subset B_j$ for all $j\in[h]$, every failed node $C_j$ knows $\{c_{j,\underline{a}}:\underline{a}\in A_0\}$. We observe that each sum in \eqref{eq:kjk} has $s$ terms and that the indices of $s-1$ of them belong to the set $A_0$, so $C_j$ can calculate the
single remaining term from each of these sums. Upon completing this calculation, the node $C_j$
knows the values of all the summands of all the sums in the set \eqref{eq:kjk}, i.e., $C_j$ knows all the coordinates in the set
$\{c_{j,\underline{a}}:\underline{a}\in B_i\}.$
Since $C_j$ downloads these sums from all the other failed nodes $C_i,i\in[h]\setminus\{j\}$, the downloaded symbols in the second round enable $C_j$ to calculate the coordinates
$$
\bigcup_{i\in[h]\setminus\{j\}} \big\{c_{j,\underline{a}}:\underline{a}\in B_i\big\}.
$$
Recall that after the first round, $C_j$ already knows the values of coordinates $\{c_{j,\underline{a}}:\underline{a}\in B_j\}$. Thus after the whole repair process, $C_j$ can find the entries
$$
\Big\{c_{j,\underline{a}}:\underline{a}\in \bigcup_{i=1}^h B_i\Big\}
=\{c_{j,\underline{a}}:\underline{a}\in A\}.
$$
This concludes the repair procedure because $C_j$ has found all the missing $l$ entries.
\vspace*{.1in}\subsubsection{Formal description and validity proof of the repair scheme}
The discussion in the previous subsection contains most of what is needed to justify the repair scheme.
The omitted step is a connection with Lemma~\ref{lem:tch}
which we include next.
\begin{lemma}\label{lem:sfg}
Let $C_i,i\in[h]$ be one of the failed nodes, and let $\mathcal{R}\subseteq [n]\setminus[h]$ be the indices of helper nodes, where
$|\mathcal{R}|=d$.
For any $\underline{a}\in B_i$, the elements $c_{i,\underline{a}(i,0)},c_{i,\underline{a}(i,1)},\dots,c_{i,\underline{a}(i,s-1)}$ and the values of $\{\sum_{u=0}^{s-1} c_{j,\underline{a}(i,u)}:j\in[h]\setminus\{i\}\}$
can be calculated from the values in the set $\{\sum_{u=0}^{s-1} c_{j,\underline{a}(i,u)}:j\in \mathcal{R}\}$.
\end{lemma}
\begin{IEEEproof}
We again use Lemma~\ref{lem:tch}. Let us write out the parity-check equations \eqref{eq:pcq} that
correspond to the indices $\underline{a}(i,0), \underline{a}(i,1),\dots, \underline{a}(i,s-1)$:
\begin{align}\label{eq:uson}
\lambda_{i,u}^t c_{i,\underline{a}(i,u)} +
\sum_{j\in[h]\setminus\{i\}} \lambda_{j,a_j}^t c_{j,\underline{a}(i,u)} &+ \sum_{j=h+1}^n \lambda_j^t c_{j,\underline{a}(i,u)}=0,
\nonumber\\ &t=0,1,\dots,r-1, \quad u=0,1,\dots,s-1.
\end{align}
We can see that this set of equations has the same form as \eqref{eq:org}: In \eqref{eq:uson} only the coefficients of $c_{i,\underline{a}(i,u)}$ vary with $u$ while the coefficients of $c_{j,\underline{a}(i,u)}$ are independent of $u$ for all $j\in[n]\setminus\{i\}$;
in \eqref{eq:org} only the coefficients of $c_{1,u}$ vary with $u$ while the coefficients of $c_{j,u}$ are independent of $u$ for all $j\in[n]\setminus\{1\}$.
Therefore Lemma~\ref{lem:tch} applies directly, and the proof is complete.
\end{IEEEproof}
In the first round, each failed node $C_i,i\in[h]$ downloads
\begin{equation}\label{eq:wh}
\Big\{\sum_{u=0}^{s-1} c_{j,\underline{a}(i,u)}: \underline{a}\in B_i, a_i = 0\Big\}
\end{equation}
from each helper node $C_j,j\in\mathcal{R}$.
As already explained, the cardinality of the set in \eqref{eq:wh} is $(s-1)^{h-1}$.
According to Lemma~\ref{lem:sfg}, after the first round, each failed node $C_i,i\in[h]$ knows the following field elements:
\begin{align*}
\{c_{i,\underline{a}}:\underline{a}\in B_i\}
\bigcup \Big(\bigcup_{j\in[h]\setminus\{i\}} \Big\{ \sum_{u=0}^{s-1} c_{j,\underline{a}(i,u)}:
\underline{a}\in B_i, a_i = 0 \Big\} \Big).
\end{align*}
In the second round, each failed node $C_j,j\in[h]$ downloads
$$
\Big\{\sum_{u=0}^{s-1} c_{j,\underline{a}(i,u)}: \underline{a}\in B_i, a_i = 0 \Big\}
$$
from each of the other failed nodes $C_i,i\in[h]\setminus\{j\}$.
According to the arguments above, after the second round each failed node can recover all its coordinates, and the repair bandwidth achieves the cut-set bound \eqref{eq:cutset} with equality.
\subsubsection{Connections with $\mathcal{C}_{2,d}^{(0)}$ and $\mathcal{C}_{h,k+1}^{(0)}$}\label{sect:connections}
Let us look back at the codes $\mathcal{C}_{2,d}^{(0)}$ and $\mathcal{C}_{h,k+1}^{(0)}$, which are special cases of the above construction (although this may not be immediately apparent, which justifies their independent description earlier in the paper).
Namely, the code $\mathcal{C}_{h,d}^{(0)}$ with $h=2$ becomes the same as $\mathcal{C}_{2,d}^{(0)},$
albeit with a different way of indexing the entries of each node $C_i$, and similarly, letting $d=k+1$ in $\mathcal{C}_{h,d}^{(0)}$,
we obtain the code $\mathcal{C}_{h,k+1}^{(0)}$ with a different way of indexing.
First, using Table~\ref{table:parameters}, it is immediate to see that the sub-packetization values match.
Now let us verify the easier of the two specializations, checking the case of $h=2.$
Indeed, in this case the set $A$ defined in \eqref{eq:dA} becomes
$$
A=\{\underline{a}=(a_1,a_2): a_1,a_2\in\{0,1,\dots,s-1\}, (a_1,a_2) \neq (s-1,s-1)\}.
$$
A natural way to transform the multi-index $\underline{a}=(a_1,a_2)$ into an integer index is to use the mapping
$a=a_1+sa_2$. It is clear that the image of $A$ under this mapping is $\{0,1,2,\dots,s^2-2\}$, which is exactly the same as the set of integer indices in Section~\ref{sect:fd}.
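A direct enumeration (ours) confirms this claim about the image of $A$ under the mapping $a=a_1+sa_2$:

```python
from itertools import product

for s in range(2, 7):
    # A for h = 2: all pairs over {0, ..., s-1} except (s-1, s-1)
    A = [(a1, a2) for a1, a2 in product(range(s), repeat=2)
         if (a1, a2) != (s - 1, s - 1)]
    image = sorted(a1 + s * a2 for a1, a2 in A)
    assert image == list(range(s * s - 1))
print("image of A under a = a_1 + s*a_2 is {0, ..., s^2 - 2} for s <= 6")
```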
One can further check that when $h=2$, the parity check equations of $\mathcal{C}_{h,d}^{(0)}$ given in \eqref{eq:pcq}
are the same as the parity check equations \eqref{eq:asj} of $\mathcal{C}_{2,d}^{(0)}$.
Let us now explain that using $d=k+1$ in the description of the code $\mathcal{C}_{h,d}^{(0)},$ we obtain $\mathcal{C}_{h,k+1}^{(0)}.$
When $d=k+1$, the set $A$ defined in \eqref{eq:dA} becomes
$$
A=\{\underline{0},e_1,e_2,\dots,e_h\},
$$
where $\underline{0}$ is an all-zero vector of length $h$, and for $i\in[h]$, $e_i$ is the $h$-dimensional vector whose only nonzero coordinate is located at the $i$th position, and this coordinate is $1$.
We map $\underline{0}$ to $0$ and $e_i$ to $i$ for all $i\in[h]$. It is easy to check that under this mapping
the parity-check equations \eqref{eq:pcq} of the code $\mathcal{C}_{h,d}^{(0)}$ are the same as the parity-check equations
\eqref{eq:eov} of $\mathcal{C}_{h,k+1}^{(0)}$.
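For completeness, the following sketch (ours) checks that for $s=2$, i.e., $d=k+1$, the set $A$ in \eqref{eq:dA} indeed collapses to $\{\underline{0},e_1,\dots,e_h\}$, of cardinality $h+1$:

```python
from itertools import product

s = 2
for h in range(1, 6):
    # for s = 2 the condition "at most one coordinate equals s-1" reads sum(a) <= 1
    A = {a for a in product(range(s), repeat=h) if sum(x == s - 1 for x in a) <= 1}
    units = {tuple(1 if j == i else 0 for j in range(h)) for i in range(h)}
    assert A == {(0,) * h} | units        # A = {0, e_1, ..., e_h}
    assert len(A) == h + 1
print("for s = 2 the set A equals {0, e_1, ..., e_h} for every h <= 5")
```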
\subsection{Repairing any $h$ nodes from any $d$ helper nodes}\label{sect:hd1}
Finally, in this section we present the codes $\mathcal{C}=\mathcal{C}_{h,d}$ that address the most general case of the repair problem.
As above, we let $s:=d+1-k$ and suppose that $F, |F|\ge sn$ is a finite field.
We present an $(n,k,l=((h+s-1)(s-1)^{h-1})^m)$ MDS array code $\mathcal{C}=\mathcal{C}_{h,d}$ over $F$, where $m:=\binom{n}{h}$.
The code $\mathcal{C}$ has the property that for {\em any} $h$-subset $\mathcal{F}$ of $[n],$ the repair of each failed node $C_i,i\in\mathcal{F}$ can be accomplished by
connecting to {\em any} $d$ helper nodes and downloading $(d+h-1)l/(h+s-1)$ symbols of $F$ in total from these helper nodes as well as from the other failed nodes.
Clearly, the amount of downloaded data meets the cut-set bound \eqref{eq:cutset} with equality, and so the code $\mathcal{C}$ supports optimal repair.
Let $\{\lambda_{ij},i=1,\dots,n, j=0,1,\dots,s-1\}$ be $sn$ distinct elements of the field $F$. We will rely on the
definition of the set $A$ in \eqref{eq:dA}. To remind ourselves, this is the set of $h$-tuples of integers between $0$ and $s-1$ that contain at most one entry equal to $s-1.$
We use the shorthand notation $[0,i]:=\{0,1,\dots,i\}$ for an integer $i$, and
define a set of integer vectors $A^{[m]}\subset [0,s-1]^{hm}$ such that
each of the $m$ subvectors is contained in $A$. More specifically, in this section we use $\underline{a}$ to denote an integer vector of length $hm$:
\begin{equation}\label{eq:a}
\underline{a}= (\underline{a}^{(1)}, \underline{a}^{(2)},\dots,\underline{a}^{(m)}),
\end{equation}
where $\underline{a}^{(i)}= (a^{(i)}_1,\dots,a^{(i)}_h) \in[0,s-1]^h.$ Define the set
$$
A^{[m]} := \{\underline{a}\in [0,s-1]^{hm}: \underline{a}^{(i)}\in A, i=1,\dots,m\}.
$$
According to \eqref{eq:cardA}, each $\underline{a}^{(i)}$ can take $(h+s-1)(s-1)^{h-1}$ possible values, so
\begin{equation}\label{eq:aml}
\big| A^{[m]} \big| = \big((h+s-1)(s-1)^{h-1} \big)^m =l.
\end{equation}
Let $g$ be the bijection between the set of $h$-subsets $\{\mathcal{F}:\mathcal{F}\subseteq [n],|\mathcal{F}|=h\}$ and the numbers $\{1,2,\dots,m\}$
defined in \eqref{eq:Dg}.
For a set $\mathcal{F}\subseteq[n]$ and an element $i\in\mathcal{F}$, let
$z(\mathcal{F},i)=|\{j:j\in\mathcal{F},j\le i\}|$ be the number of elements in $\mathcal{F}$ that are not greater than $i$.
Define the following function:
\begin{equation}\label{eq:dke}
\begin{aligned}
f:\,&[n]\times A^{[m]} \to\{0,1,\dots,s-1\}\\
&(i,\underline{a})\mapsto\biggl(\sum_{\mathcal{F}\subseteq [n],|\mathcal{F}|=h,\;\mathcal{F}\ni\, i}
\underline{a}^{(g(\mathcal{F}))}_{z(\mathcal{F},i)} \biggr) \Mod s.
\end{aligned}
\end{equation}
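The key property of $f$ that drives the repair scheme (stated formally in \eqref{eq:gej} below) can be verified numerically; the following sketch (ours, with illustrative parameters $n=5$, $h=2$, $s=3$) checks that replacing the block of $\underline{a}$ in position $g(\mathcal{F})$ changes $f(i,\cdot)$ only for $i\in\mathcal{F}$, and there by a shift of $b_{z(\mathcal{F},i)}$ modulo $s$:

```python
from itertools import combinations, product
from math import comb
import random

n, h, s = 5, 2, 3
m = comb(n, h)
subsets = list(combinations(range(1, n + 1), h))
A = [a for a in product(range(s), repeat=h) if sum(x == s - 1 for x in a) <= 1]

def g(F):
    desc = sorted(F, reverse=True)
    return sum(comb(desc[j] - 1, h - j) for j in range(h)) + 1

def z(F, i):
    return sum(1 for j in F if j <= i)

def f(i, blocks):
    # blocks[t-1] is the block a^{(t)}; entry z(F, i) of a block is indexed from 1
    return sum(blocks[g(F) - 1][z(F, i) - 1] for F in subsets if i in F) % s

def with_block(blocks, pos, b):
    out = list(blocks)
    out[pos - 1] = b
    return out

random.seed(1)
blocks = [random.choice(A) for _ in range(m)]     # a random multi-index in A^[m]
F = (1, 4)
zero = (0,) * h
for b in A:
    for i in range(1, n + 1):
        lhs = f(i, with_block(blocks, g(F), b))
        if i in F:                                # failed nodes: shift by b_{z(F,i)} mod s
            assert lhs == (f(i, with_block(blocks, g(F), zero)) + b[z(F, i) - 1]) % s
        else:                                     # all other nodes: unchanged
            assert lhs == f(i, blocks)
print("block-replacement property of f verified for F =", F)
```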
Let $C=(C_1,C_2,\dots,C_n)\in \mathcal{C}$ be a codeword of the code $\mathcal{C}$. We index the entries of the code $C_i$ using
the multi-index $\underline{a}$ defined above in \eqref{eq:a},
writing $C_i=(c_{i,\underline{a}},\underline{a}\in A^{[m]})$. According to \eqref{eq:aml}, the dimension of $C_i$ over $F$ is indeed $l$.
The last element of notation is as follows: for every $\underline{a}\in A^{[m]}$, $i\in[m]$ and $\underline{b}\in A$, let
$$
\underline{a}(i,\underline{b}):=(\underline{a}^{(1)},\underline{a}^{(2)},\dots,\underline{a}^{(i-1)},\underline{b},
\underline{a}^{(i+1)},\dots,\underline{a}^{(m)}).
$$
\begin{definition}
The code $\mathcal{C}=\mathcal{C}_{h,d}$ is defined by the following $rl$ parity-check equations:
\begin{equation}\label{eq:hfg}
\sum_{i=1}^n \lambda_{i,f(i,\underline{a})}^t c_{i,\underline{a}} = 0
,\; t=0,1,2,\dots,r-1;\, \underline{a}\in A^{[m]}.
\end{equation}
\end{definition}
For every $\underline{a}\in A^{[m]}$, the vectors $(c_{1,\underline{a}},c_{2,\underline{a}},\dots,c_{n,\underline{a}})$ form an $(n,k)$ MDS code.
Therefore $\mathcal{C}$ is indeed an $(n,k,l)$ MDS array code.
Let us show that $\mathcal{C}$ has the $(h,d)$-optimal repair property.
Let $\mathcal{F}=\{i_1,i_2,\dots,i_h\}$, where $1\le i_1<i_2<\dots<i_h \le n$, be the set of indices of $h$ failed nodes.
For every codeword $C=(C_1,C_2,\dots,C_n)\in \mathcal{C}$ and every $\underline{a}\in A^{[m]}$,
we form a vector $C^{(\underline{a})}$ by taking a subset of coordinates from each node $C_i,i\in[n]$:
$$
C^{(\underline{a})}:=(C_1^{(\underline{a})},C_2^{(\underline{a})},\dots,C_n^{(\underline{a})} ),
$$
where
\begin{equation}\label{eq:iod}
C_i^{(\underline{a})}:=(c_{i,\underline{a}(g(\mathcal{F}),\underline{b})}: \underline{b}\in A), \quad i=1,\dots,n.
\end{equation}
By definition the set $C_i^{(\underline{a})}$ contains $(h+s-1)(s-1)^{h-1}$ coordinates of $C_i$.
Since the indices of these coordinates are obtained by replacing the subvector $\underline{a}^{(g(\mathcal{F}))}$ with all the vectors of the set $A$, the vectors $C^{(\underline{a})}$ and $C_i^{(\underline{a})}$ do not depend on the original value of $\underline{a}^{(g(\mathcal{F}))},$ i.e.,
\begin{equation}\label{eq:fje}
C^{(\underline{a})}=C^{(\underline{a}(g(\mathcal{F}),\underline{b}))} \text{~and~}
C_i^{(\underline{a})}=C_i^{(\underline{a}(g(\mathcal{F}),\underline{b}))}
\text{~for all~} C\in\mathcal{C}, i\in[n] \text{~and~} \underline{b}\in A.
\end{equation}
Moreover, consider the following $((h+s-1)(s-1)^{h-1})^{m-1}$ sets of coordinates of $C_i$:
\begin{equation}\label{eq:ho}
\{C_i^{(\underline{a})}:\underline{a}\in A^{[m]}, \underline{a}^{(g(\mathcal{F}))}=\underline{0}\},
\end{equation}
where we view each vector $C_i^{(\underline{a})}$ defined in \eqref{eq:iod} as a set. Since we are limiting the subvector
$\underline{a}^{(g(\mathcal{F}))}$ to $\underline{0}$ while originally it can take $|A|= (h+s-1)(s-1)^{h-1}$ values, the vector
$\underline{a}$ in \eqref{eq:ho} takes
$$
\frac{l}{(h+s-1)(s-1)^{h-1}}=((h+s-1)(s-1)^{h-1})^{m-1}
$$
possible values. Therefore \eqref{eq:ho} contains $((h+s-1)(s-1)^{h-1})^{m-1}$ distinct sets of coordinates of $C_i$.
This amounts to saying that the sets in \eqref{eq:ho} form a partition of
the coordinates of $C_i$.
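The partition claim can again be confirmed by enumeration; in the sketch below (ours), we take tiny illustrative parameters $h=2$, $s=3$, $m=2$ (in the construction $m=\binom{n}{h}$) and a hypothetical block position $g(\mathcal{F})=1$:

```python
from itertools import product

h, s, m = 2, 3, 2                  # tiny parameters for illustration only
A = [a for a in product(range(s), repeat=h) if sum(x == s - 1 for x in a) <= 1]
Am = set(product(A, repeat=m))     # the index set A^[m]
p = 1                              # hypothetical block position g(F)
zero = (0,) * h

# one class per multi-index whose block in position p is the zero block
classes = [
    {a[:p - 1] + (b,) + a[p:] for b in A}
    for a in Am if a[p - 1] == zero
]
assert len(classes) == len(A) ** (m - 1)          # number of subcodes
assert all(len(c) == len(A) for c in classes)     # rows per subcode
assert set().union(*classes) == Am                # the classes cover A^[m]
assert sum(len(c) for c in classes) == len(Am)    # ... without overlap
print(len(classes), "classes of size", len(A), "partition A^[m] of size", len(Am))
```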
For every $\underline{a}\in A^{[m]}$, we define an $(n,k,(h+s-1)(s-1)^{h-1})$ MDS array code $\mathcal{C}^{(\underline{a})}$ as follows:
$$
\mathcal{C}^{(\underline{a})}:= \{(C_1^{(\underline{a})},C_2^{(\underline{a})},\dots,C_n^{(\underline{a})}):
C\in\mathcal{C}\},
$$
where the MDS property and the dimension of $\mathcal{C}^{(\underline{a})}$ follow directly from the definition of the code $\mathcal{C}$; see \eqref{eq:hfg}, \eqref{eq:iod}.
To better understand the connection between the code $\mathcal{C}$ and its subcodes $\mathcal{C}^{(\underline{a})},\underline{a}\in A^{[m]}$, we
can view each codeword of $\mathcal{C}$ as a two-dimensional array of size $l\times n$.
We use multi-index $\underline{a}\in A^{[m]}$ to index each row and $i\in[n]$ to index each column of the codeword.
Each subcode $\mathcal{C}^{(\underline{a})},\underline{a}\in A^{[m]}$ contains $(h+s-1)(s-1)^{h-1}$ rows of the codewords in $\mathcal{C}$, and the indices of these $(h+s-1)(s-1)^{h-1}$ rows are in the set
$\{\underline{a}(g(\mathcal{F}),\underline{b}): \underline{b}\in A\}$.
From \eqref{eq:fje} it is clear that
$$
\mathcal{C}^{(\underline{a})}=\mathcal{C}^{(\underline{a}(g(\mathcal{F}),\underline{b}))}
\text{~for all~} \underline{b}\in A.
$$
Thus, the code $\mathcal{C}$ can be partitioned into $((h+s-1)(s-1)^{h-1})^{m-1}$ subcodes
$$
\{\mathcal{C}^{(\underline{a})}:\underline{a}\in A^{[m]}, \underline{a}^{(g(\mathcal{F}))}=\underline{0}\},
$$
and each subcode contains $(h+s-1)(s-1)^{h-1}$ rows of the code $\mathcal{C}$.
We will show that each of these subcodes has the same structure as the code $\mathcal{C}_{h,d}^{(0)}$ defined in Section~\ref{sect:hd0}, and can therefore
be optimally repaired.
\begin{lemma}\label{lem:vhc}
For every $\underline{a}\in A^{[m]}$, the $(n,k,(h+s-1)(s-1)^{h-1})$ MDS array code $\mathcal{C}^{(\underline{a})}$ can optimally repair the failed nodes $C_i^{(\underline{a})},i\in\mathcal{F}$ from any $d$ helper nodes, i.e., the bandwidth of repairing $C_i^{(\underline{a})},i\in\mathcal{F}$ from any $d$ helper nodes achieves \eqref{eq:cutset} with equality.
\end{lemma}
\begin{IEEEproof}
Our goal is to show that the code $\mathcal{C}^{(\underline{a})}$ has the same structure as the code $\mathcal{C}_{h,d}^{(0)}$. Then we can apply the optimal repair scheme for the first $h$ nodes of $\mathcal{C}_{h,d}^{(0)}$ to the repair of the failed nodes of $\mathcal{C}^{(\underline{a})}$ whose indices are in $\mathcal{F}$.
By definition \eqref{eq:dke}, the function $f$ has the following property:
For any $\underline{a}\in A^{[m]}$ and any $\underline{b}=(b_1,b_2,\dots,b_h)\in A$,
\begin{equation}\label{eq:gej}
\begin{aligned}
f(i,\underline{a}(g(\mathcal{F}),\underline{b})) &= f(i,\underline{a}) \text{~for all~} i\in[n]\setminus \mathcal{F}, \\
f(i_u,\underline{a}(g(\mathcal{F}),\underline{b})) &= f(i_u,\underline{a}(g(\mathcal{F}),\underline{0})) \oplus b_u
\text{~for all~} u\in[h],
\end{aligned}
\end{equation}
where $\underline{0}$ is the all-zero vector of length $h$, and $\oplus$ is addition modulo $s$.
From now on we fix an $\underline{a}\in A^{[m]}$ and prove the claim for this fixed $\underline{a}$.
According to \eqref{eq:gej}, we are justified in using the following notation:
\begin{equation}\label{eq:lkj1}
\lambda_i := \lambda_{i,f(i,\underline{a})} = \lambda_{i,f(i,\underline{a}(g(\mathcal{F}),\underline{b}))}
\text{~for all~} i\in[n]\setminus \mathcal{F} \text{~and all~} \underline{b}\in A.
\end{equation}
We further define
$$
\lambda_{i_u,j}' := \lambda_{i_u,f(i_u,\underline{a}(g(\mathcal{F}),\underline{0})) \oplus j}
\text{~for all~} u\in[h] \text{~and all~} j\in\{0,1,\dots,s-1\}.
$$
Again by \eqref{eq:gej}, we have
\begin{equation}\label{eq:lkj2}
\lambda_{i_u,b_u}' = \lambda_{i_u, f(i_u,\underline{a}(g(\mathcal{F}),\underline{0})) \oplus b_u}
= \lambda_{i_u,f(i_u,\underline{a}(g(\mathcal{F}),\underline{b}))}
\text{~for all~} u\in[h] \text{~and all~} \underline{b}\in A.
\end{equation}
By \eqref{eq:iod}, $C_i^{(\underline{a})}$ consists of the coordinates $(c_{i,\underline{a}(g(\mathcal{F}),\underline{b})}: \underline{b}\in A)$. Using \eqref{eq:hfg}, \eqref{eq:lkj1} and \eqref{eq:lkj2}, we can write out the parity check equations of $\mathcal{C}^{(\underline{a})}$ as follows:
\begin{equation}\label{eq:poi}
\sum_{u=1}^h (\lambda_{i_u,b_u}')^t c_{i_u,\underline{a}(g(\mathcal{F}),\underline{b})} +
\sum_{i\in[n]\setminus\mathcal{F}} \lambda_i^t c_{i,\underline{a}(g(\mathcal{F}),\underline{b})} = 0,
\quad t=0,1,\dots,r-1, \quad \underline{b}\in A.
\end{equation}
We can check that \eqref{eq:poi} has the same form as \eqref{eq:pcq}. Indeed, $\underline{b}$ in \eqref{eq:poi} plays the role of $\underline{a}$ in \eqref{eq:pcq}; the first sum in both equations consists of coordinates of the $h$ failed nodes,
and the second sum in both equations consists of coordinates of the other available nodes; in both equations, only the coefficients of the coordinates of the failed nodes vary with the indices, and they vary in exactly the same way.
Therefore the repair scheme of code $\mathcal{C}_{h,d}^{(0)}$ can be directly applied to the repair of $C_i^{(\underline{a})},i\in\mathcal{F}$ from any $d$ helper nodes, and the repair bandwidth of this scheme achieves the bound \eqref{eq:cutset}. This completes the proof of Lemma~\ref{lem:vhc}.
\end{IEEEproof}
Since every subcode can optimally repair the failed nodes whose indices are in the set $\mathcal{F},$
the same is true for the code $\mathcal{C}$: namely it is capable of repairing $C_i,i\in\mathcal{F}$ from any $d$ helper nodes with optimal repair bandwidth.
\vspace*{.1in}
{\em Remark:} Expanding the discussion in Section~\ref{sect:connections}, we can see
that both the codes $\mathcal{C}_{2,d}$ and $\mathcal{C}_{h,k+1}$ are special cases of the code $\mathcal{C}_{h,d}:$
taking $h=2$ in the definition of $\mathcal{C}_{h,d}$, we obtain the code $\mathcal{C}_{2,d}$ with a different indexing of the node's coordinates, and in the same way, taking $d=k+1$ in $\mathcal{C}_{h,d}$, we obtain the code $\mathcal{C}_{h,k+1},$ with a different way of indexing.
\subsection{A family of universal codes}\label{sect:universal}
Using the construction in the previous subsection as a building block and exploiting the concatenation operation defined in Section~\ref{sect:hew}, we can easily construct an $(n,k)$ MDS array code $\mathcal{C}^U$ with universal $(h,d)$-optimal repair property for all $1\le h\le n-d\le n-k$ simultaneously. In other words, the codes that we construct can optimally repair any number of erasures from any number of helper nodes.
Indeed, let
$$
\mathcal{C}^U := \bigodot_{1\le h\le n-d\le n-k} \mathcal{C}_{h,d}.
$$
The code $\mathcal{C}^U$ is simply a concatenation of all $\mathcal{C}_{h,d}$ for $1\le h\le n-d\le n-k$, where the codes $\mathcal{C}_{h,d}$ for $h\ge 2$
are defined in the previous subsection, and the code $\mathcal{C}_{1,d}$ is given in Sec.~\ref{sect:hew} \cite{Ye16}. It can be constructed
over a field $F$ of size $|F|\ge rn$, and it supports optimal repair of any single node as well as optimal cooperative repair of any $h\ge 2$ failed nodes.
\bibliographystyle{IEEEtran}
\section{Introduction}
\emph{Lazarsfeld-Mukai} bundles were introduced by Lazarsfeld
\cite{RL} and Mukai \cite{Mu} in the 1980s. They are an important
class of vector bundles obtained from certain elementary
transformations and have found applications in studying syzygies and
Brill-Noether theory. These bundles play a crucial role in
Lazarsfeld's proof of the Gieseker-Petri theorem \cite{RL} and
Voisin's proof of the generic Green's conjecture \cite{CV1,CV2}.
Suppose $X$ is a smooth projective surface over $\mathbb{C}$, and $C$
is a smooth, irreducible curve on $X$. Consider a globally generated
line bundle $A$ on $C$. Denote by $i_*A$, the direct image of $A$ on
$X$ where $i:C\hookrightarrow X$ is the inclusion. Then $i_*A$ is a globally
generated coherent sheaf on $X$. We thus have the following exact
sequence on $X$ where the kernel $F$ is a vector bundle:
$$0\longrightarrow F\longrightarrow H^0(A)\otimes\mathcal{O}_X\xrightarrow{ev} i_*A\longrightarrow 0\,.$$
The dual of $F$ is called the Lazarsfeld-Mukai bundle on $X$
associated to the pair $(C,A)$. Lelli-Chiesa \cite{ML} has studied
the (semi)stability of the Lazarsfeld-Mukai bundles on K3-surfaces,
and similar results have been obtained by us on abelian surfaces
\cite{NP}. Also, \cite{AP} and references therein give a general
survey of Lazarsfeld-Mukai bundles with other applications.
In this article we generalize the above construction to higher
dimensional varieties. We in fact obtain reflexive sheaves as
kernels. We study their $\mu$-(semi)stability properties in various
cases. This construction also enables us to obtain, on any smooth
projective variety $X$, semistable vector bundles $E$ with
$\trm{rank}\,E=\trm{dim}\,X$.
Suppose $X$ is a smooth projective variety of dimension $N\geq 2$
over $\mathbb{C}$. Let $D\xhookrightarrow{i} X$ be a smooth,
irreducible divisor on $X$ and $A$ be an ample and globally generated
line bundle on $D$. Consider a general subspace $V\subset H^0(D,A)$
of dimension $r\geq 2$. Let $Z(V)\hookrightarrow D$ be the closed subscheme
defined by the vanishing of sections of $V$ and $\mathcal{I}_{Z(V)}\subset\mathcal{O}_D$
be its ideal sheaf. Then $A\otimes\mathcal{I}_{Z(V)}$ is a globally generated
torsion-free sheaf on $D$. We have the following short exact
sequence, which defines the sheaf $\mathcal{F}_{D,A,V}$ associated to the triple
$(D,A,V)$ on $ X$:
\[0\longrightarrow\mathcal{F}_{D,A,V}\longrightarrow V\otimes\mathcal{O}_X\longrightarrow i_*(A\otimes\mathcal{I}_{Z(V)})\longrightarrow 0\,.\] The
kernel $\mathcal{F}_{D,A,V}$ is a reflexive sheaf of rank $r$ on $X$, whose dual is
called the \emph{Lazarsfeld-Mukai} reflexive sheaf (see
\S\,\ref{construct}).
We remark that the same construction can be carried out under the
weaker assumption that $D$ is just reduced and irreducible but not
necessarily smooth. But for the purpose of this paper, we confine
ourselves mainly to smooth and irreducible divisors $D$.
The $\mu$-(semi)stability properties of the sheaves $\mathcal{F}_{D,A,V}$ are
studied in \S\,\ref{studystab}. The first case we consider
is that of a variety $X$ whose Picard group is cyclic.
\begin{stabonprojintro}\label{stabonprojintro}
Suppose $X$ is an irreducible smooth projective variety over
$\mathbb{C}$ of dimension $N\geq 2$, such that
$\emph{Pic}\,X=\mathbb{Z}\cdot [H]$, where $[H]$ is the class of an
ample divisor. Let $D\in|\mathcal{O}_X(H)|$ and $A$ be an ample, globally
generated line bundle on $D$. Consider $V\subset H^0(D,A)$, an
$r$-dimensional subspace where $r\geq 2$. Then the reflexive sheaf
$\mathcal{F}_{D,A,V}$ is $\mu_{H}$-stable.
\end{stabonprojintro}
Any $D\in|\mathcal{O}_X(H)|$ is reduced, irreducible and
Cohen-Macaulay. Hence we can consider reflexive sheaves $\mathcal{F}_{D,A,V}$ for all
such $D$. In the specific case when $X$ is the projective space, we
have the following theorem.
\begin{everycaseintro}\label{finalprojintro}
Suppose $X=\mathbb{P}^N_{\mathbb{C}}$ for $N\geq 2$. Consider
$L=\mathcal{O}(d)$ with $d>0$ on $X$. Let $D\in |L|$ be a
\emph{general} smooth, irreducible hypersurface and $A$ be an ample,
globally generated line bundle on $D$. Suppose that $V$ is a
\emph{general} $r$-dimensional subspace of $H^0(D,A)$, where
$r\geq 2$. Then the following table summarizes the conditions on
$L$, $A$ and $r$ under which the sheaves $\mathcal{F}_{D,A,V}$ are
$\mu_{\mathcal{O}(1)}$-(semi)stable.
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
& $L$ & which $A$ & $r$ & {stability} \\
\hline
(a) & $L=\mathcal{O}(1)$ & all $A$ & $r\geq 2$ & $\mu_{\mathcal{O}(1)}$-stable \\
(b) & $L=\mathcal{O}(2)$ & all $A$ & $r=2$ & $\mu_{\mathcal{O}(1)}$-semistable \\
(c) & $L=\mathcal{O}(2l)$ & $A=\mathcal{O}(l)|_D$ & $r=2$ & $\mu_{\mathcal{O}(1)}$-semistable \\
(d) & $L=\mathcal{O}(d)$ & $A=\mathcal{O}(md)|_D$ & $2\leq r\leq {N-1+m\choose m}$ & $\mu_{\mathcal{O}(1)}$-semistable \\
& when $d>1$ & & & \\
\hline
\end{tabular}
\end{center}
\end{everycaseintro}
See \S\,\ref{caseprojspace} for a proof. Part (a) of the
above theorem follows from Theorem \ref{stabonprojintro}. Part (b) is
proved by applying a lemma from \cite{OK}. In case of part (c), we
prove that $\mathcal{F}_{D,A,V}|_D$ is $\mu_{\mathcal{O}(1)|_D}$-semistable, which implies
our result. We prove part (d) of the theorem by proving it in general
for any smooth, irreducible projective variety $X$, cf. Theorem
\ref{calabiyauintro}. We remark that, in case (d) of the above
theorem, if the condition $A=\mathcal{O}(md)|_D$ is weakened, the assertion
is not necessarily true. Lemma \ref{notsemist} in
\S\,\ref{caseprojspace} gives a class of such
examples. Note that parts (a) and (b) of the theorem hold for all
reduced and irreducible $D$ in the linear system and all
$r$-dimensional subspaces $V$.
We prove the following theorem on the $\mu$-(semi)stability of the
reflexive sheaves $\mathcal{F}_{D,A,V}$ on arbitrary smooth projective varieties.
\begin{calabiyauintro}\label{calabiyauintro}
Suppose $X$ is an irreducible, smooth projective variety of
dimension $N\geq 2$ over $\mathbb{C}$. Consider an ample, globally
generated line bundle $L$ on $X$ and an irreducible, smooth
$D\in |L|$. For $m>0$, let $V\subset H^0(D,L|_D^{\otimes m})$
be an $r$-dimensional subspace, where
$2\leq r\leq {N-1+m\choose m}$. Then, for a general pair $(D,V)$,
the reflexive sheaf $\mathcal{F}_{D,L|_D^{\otimes m},V}$ is
$\mu_{L}$-semistable.
\end{calabiyauintro}
See \S\,\ref{irred}. The method of proof employed is the
following. Consider an appropriate finite morphism
$X\rightarrow\mathbb{P}^N$, where $N=\trm{dim}\,X$. We prove the
corresponding (semi)stability statement for the projective space. We
know by \cite[Lemma 1.17]{Mar}, that the pullback of a semistable
torsion-free sheaf under a finite morphism is semistable; and that
semistability is an open condition in flat families \cite[Proposition
2.3.1]{HL}. We thus get the required result.
The same technique is applied to study the (semi)stability of some
kernel bundles. A kernel bundle $M_{L,W}$ is defined as
follows. Consider an ample and globally generated line bundle $L$
on a smooth, irreducible projective variety $X$ of dimension $n$. Let
$W\subset H^0(X,L)$ be a subspace such that the linear system
$\mathbb{P}W$ is base-point free. Hence we have the following short
exact sequence where $M_{L,W}$ is the kernel vector bundle:
$$0\longrightarrow M_{L,W}\longrightarrow W\otimes\mathcal{O}_X\longrightarrow L\longrightarrow 0\,.$$
Let $W\subset H^0(X,L)$ be a general $(n+1)$-dimensional subspace. We
prove that the kernel bundle $M_{L,W}$ associated to $(L,W)$ is
$\mu_{L}$-polystable, cf. Proposition \ref{kernel}. We mention
that for curves, certain surfaces and projective spaces, the
$\mu_{L}$-(semi)stability of $M_{L,W}$ has been obtained in
\cite{AB,BUT,CC,CC1,LE,LEM,HF,EM,RP}, for $W=H^0(X,L)$ with certain
assumptions on $L$.
\subsection*{Acknowledgements}
I thank Dr. Jaya NN Iyer for her guidance during the course of this
project. I also thank Dr. T. E. Venkata Balaji and Prof. D. S. Nagaraj
for helpful discussions and their support. I also thank the referee
and the editor for useful comments which helped make the exposition
better.
\section{Preliminaries}\label{Preliminaries}
\subsection{Definitions and Notations} Let $X$ be a smooth projective
variety of dimension $N\geq 2$ over $\mathbb{C}$.
\begin{enumerate}
\item Let $L$ be an ample and globally generated line bundle on
$X$. By Bertini's theorem, the set $\trm{sm}|L|$ as given below is a
dense open set of $|L|$,
\begin{equation*}\label{sm|L|}
\trm{sm}|L|=\{D\in |L|:D\trm{ is smooth and irreducible}\}\,.
\end{equation*}
\item Let $W$ be a vector space over $\mathbb{C}$. Then $G(m,W)$
($1\leq m\leq \trm{dim}\,W$) denotes the Grassmannian of
$m$-dimensional subspaces of $W$.
\end{enumerate}
\subsection{Mumford-Takemoto (semi)stability} Let $L$ be an ample line
bundle on $X$ (as above). Consider a torsion-free coherent sheaf $F$
of rank $r$ on $X$.
\begin{enumerate}
\item The slope of $F$ with respect to $L$ is defined as:
\[\mu_L(F)=\frac{c_1(F)\cdot (L^{N-1})}{r}\,.\]
\item The sheaf $F$ is said to be $\mu_L$-semistable
(resp. $\mu_L$-stable), if for any coherent subsheaf $E\subset F$ of
rank $s$ where $0<s<r$, one has $\mu_L(E)\leq\mu_L(F)$
(resp. $\mu_L(E)<\mu_L(F)$).
\item The torsion-free coherent sheaf $F$ is $\mu_L$-polystable if it
is a direct sum of $\mu_L$-stable sheaves of the same slope.
\end{enumerate}
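For orientation, a standard example of these notions on $X=\mathbb{P}^N$ with $L=\mathcal{O}(1)$ (so that $L^N=1$): for the rank-2 bundle $E=\mathcal{O}(a)\oplus\mathcal{O}(b)$ one computes

```latex
\mu_{\mathcal{O}(1)}\bigl(\mathcal{O}(a)\oplus\mathcal{O}(b)\bigr)
   = \frac{a+b}{2}\,,
\qquad
\mu_{\mathcal{O}(1)}\bigl(\mathcal{O}(a)\bigr) = a\,,
```

so $E$ is $\mu_{\mathcal{O}(1)}$-semistable if and only if $a=b$ (in which case it is polystable), since otherwise the line summand of larger degree violates the slope inequality.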
\section{Construction of Reflexive sheaves}\label{construct}
Consider a smooth projective variety $X$ of dimension $N\geq 2$ over
$\mathbb{C}$, and an ample, globally generated line bundle $L$ on
$X$. For a divisor $D\in\trm{sm}|L|$, let $i:D\hookrightarrow X$ denote the
inclusion. Let $A$ be an ample, globally generated line bundle on
$D$.
By the Noether-Lefschetz Theorem, if $N\geq 4$, then
$\text{Pic}\,X\longrightarrow\text{Pic}\,D$ is an isomorphism, cf. \cite[Example
3.1.25]{Laz}. Hence, the line bundle $A$ is the restriction of a line
bundle from $X$.
Consider $G(r,H^0(D,A))$, the Grassmannian of $r$-dimensional
subspaces of the space of global sections $H^0(D,A)$, where
$2\leq r\leq h^0(D,A)$. For $V\in G(r,H^0(D,A))$, let $Z(V)$ denote
the closed subscheme of $D$ defined by the vanishing of sections of
$V$. Recall that $Z(V)$ has codimension at most $r$ in $D$. In fact,
$Z(V)$ has codimension exactly $r$ in $D$ for a general
$V\in G(r,H^0(D,A))$.
The base locus of the linear system $\mathbb{P} V$ corresponding to $(A,V)$
on $D$ is $Z(V)$. The ideal sheaf $\mathcal{I}_{Z(V)}$ of $Z(V)$ is the image of
the morphism $V\otimes A^{\vee} \longrightarrow \mathcal{O}_D$ on $D$. This gives the
surjective evaluation map
$V\otimes\mathcal{O}_D\twoheadrightarrow A\otimes \mathcal{I}_{Z(V)}$ on $D$. Push-forward
this morphism by the closed immersion $i$ to $X$, and consider the
following composition:
\begin{equation*}\label{pushfwdseq}
V\otimes\mathcal{O}_X\twoheadrightarrow V\otimes i_*\mathcal{O}_D\longrightarrow i_*(A\otimes \mathcal{I}_{Z(V)})\longrightarrow 0\,.
\end{equation*}
Let $\mathcal{F}_{D,A,V}$ denote the kernel of the composition. Thus, we
get:
\begin{equation}\label{definingses}
0\longrightarrow \mathcal{F}_{D,A,V}\longrightarrow V\otimes\mathcal{O}_X\longrightarrow i_*(A\otimes\mathcal{I}_{Z(V)})\longrightarrow 0\,.
\end{equation}
\begin{prelim}\label{prelim}
The kernel sheaf $\mathcal{F}_{D,A,V}$ associated to $(D,A,V)$ has the following initial properties:
\begin{enumerate}
\item[(a)] The sheaf $\mathcal{F}_{D,A,V}$ is reflexive of rank $r$.
\item[(b)] For a general $V\in G(r,H^0(D,A))$, the sheaf $\mathcal{F}_{D,A,V}$ is locally free when $r\geq N$.
\item[(c)] The determinant of $\mathcal{F}_{D,A,V}$ is $\mathcal{O}_X(-D)\simeq L^{\vee}$.
\item[(d)] The sheaf $\mathcal{F}_{D,A,V}$ has no non-zero global sections, i.e. $H^0(X,\mathcal{F}_{D,A,V})=0$.
\end{enumerate}
\end{prelim}
\begin{proof}
For part (a), we note that in the exact sequence
\eqref{definingses}, $\mathcal{F}_{D,A,V}$ is the elementary transformation of a
locally free sheaf by a torsion-free sheaf supported on the smooth
divisor $D$. By \cite[Lemma 2.4]{Ab}, $\mathcal{F}_{D,A,V}$ is a reflexive sheaf
of rank $r$. When $r\geq N$, a general $V\in G(r,H^0(D,A))$ has
$\trm{codim}_D Z(V)=r\geq N$. Hence, $Z(V)$ is empty as $D$ is of
dimension $N-1$. In this case, $A\otimes\mathcal{I}_{Z(V)}\,\simeq\, A$ and
$\mathcal{F}_{D,A,V}$ is the usual elementary transformation, and is locally free.
From \eqref{definingses},
$\trm{det}\,\mathcal{F}_{D,A,V}\simeq\trm{det}\, i_*(A\otimes\mathcal{I}_{Z(V)})^{\vee}$. Since
$Z(V)$ is of codimension at least 2 in $X$, we have
$\trm{det}\,i_*(A\otimes\mathcal{I}_{Z(V)})\simeq \trm{det}\,i_*A\simeq
\mathcal{O}_X(D).$ Part (d) can be proved by considering the long exact
sequence of cohomology associated to \eqref{definingses}.
\end{proof}
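To spell out part (d): taking global sections in \eqref{definingses} gives the exact sequence

```latex
0 \longrightarrow H^0(X,\mathcal{F}_{D,A,V}) \longrightarrow V
   \longrightarrow H^0(D, A\otimes\mathcal{I}_{Z(V)})\,,
```

and the second map is injective, since it is the natural inclusion of $V$ into the space of sections of $A$ vanishing along $Z(V)$; hence $H^0(X,\mathcal{F}_{D,A,V})=0$.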
This construction gives us reflexive sheaves of rank $r\geq 2$ on
smooth projective varieties of dimension $N\geq 2$. We call the dual
sheaves $\mathcal{E}_{D,A,V}=\mathcal{F}_{D,A,V}^{\vee}$, the \emph{Lazarsfeld-Mukai
reflexive sheaves}. Dualizing the exact sequence \eqref{definingses}
defining $\mathcal{F}_{D,A,V}$, we get:
\begin{equation}\label{dualses}
0\longrightarrow V^{\vee}\otimes\mathcal{O}_X\longrightarrow \mathcal{E}_{D,A,V}\longrightarrow\mathcal{E}xt^1(i_*(A\otimes\mathcal{I}_{Z(V)}),\mathcal{O}_X)\longrightarrow 0\,.
\end{equation}
As with Lazarsfeld-Mukai bundles, the Lazarsfeld-Mukai reflexive
sheaves are naturally equipped with an $r$-dimensional space of
global sections $V^{\vee}\subset H^0(X,\mathcal{E}_{D,A,V})$.
\theoremstyle{definition}
\newtheorem{Remarkonflatfamily}[Lefschetz]{Remark}
\begin{Remarkonflatfamily}\label{Remarkonflatfamily}
Suppose $X$ is a smooth, irreducible projective variety of dimension
$N\geq 2$ over $\mathbb{C}$. Let $D$ be a smooth and irreducible
divisor on $X$ and $A$ be an ample and globally generated line
bundle on $D$. Let $r$ be such that $2\leq r \leq h^0(D,A)$.
Consider the dense open subscheme $\mathcal{U}_r\subset G(r,H^0(D,A))$
given by:
\begin{equation*}\label{correctcodim}
\mathcal{U}_r=\{V\in G(r,H^0(D,A))\,|\,Z(V)\trm{ has codimension } r\trm{ in } D\}\,.
\end{equation*}
It is well-known that the ideal sheaves $\mathcal{I}_{Z(V)}$, for $V\in\mathcal{U}_r$, form a
flat family of sheaves parametrized by $\mathcal{U}_r$. Consequently, the
torsion-free sheaves $\{A\otimes \mathcal{I}_{Z(V)}\}_{V\in \mathcal{U}_r}$ and the dual
Lazarsfeld-Mukai reflexive sheaves $\{ \mathcal{F}_{D,A,V} \}_{V\in \mathcal{U}_r}$ form
flat families of sheaves parametrized by $\mathcal{U}_r$.
\end{Remarkonflatfamily}
\section{(Semi)stability of the sheaves $\mathcal{F}_{D,A,V}$}\label{studystab}
In this section we study the (semi)stability properties of the
reflexive sheaves $\mathcal{F}_{D,A,V}$.
\subsection{Varieties with cyclic Picard group}\label{picardgroupZ}
Suppose that $X$ is a smooth projective variety of dimension $N\geq 2$
such that $\trm{Pic}\,X\simeq \mathbb{Z}\cdot [H]$. Here $[H]$ is the
class of the ample generator of the Picard group. Projective spaces,
smooth hypersurfaces in $\mathbb{P}^n$ for $n\geq 4$, general complete
intersections in higher dimensional projective spaces are some
examples of such varieties. We now prove Theorem
\ref{stabonprojintro}. When $X$ is a K3-surface with cyclic Picard
group and $\mathcal{F}_{D,A,V}$ is a vector bundle, the $\mu_H$-stability and the
simplicity of $\mathcal{F}_{D,A,V}$ is known, cf. \cite[Lemma 1.3]{RL}. We generalize
this to higher dimensional varieties when $\mathcal{F}_{D,A,V}$ is a reflexive sheaf.
\begin{proof}[Proof of Theorem \ref{stabonprojintro}.]
Denote $\mathcal{E}:=\mathcal{E}_{D,A,V}=\mathcal{F}_{D,A,V}^{\vee}$. Recall the exact sequence
\eqref{dualses} defining the Lazarsfeld-Mukai reflexive sheaf. Since
the cokernel sheaf $\mathcal{E}xt^1(i_*(A\otimes\mathcal{I}_{Z(V)}),\mathcal{O}_X)$ is
supported only on $D$, there is a generically surjective morphism
$\mathcal{O}_X^r\longrightarrow \mathcal{E}$.
Assume the contrary, i.e. $\mathcal{E}$ (equivalently $\mathcal{F}_{D,A,V}$) is not
$\mu_{H}$-stable. Then, there is a torsion-free quotient $Q$ of
$\mathcal{E}$, i.e. $\mathcal{E}\twoheadrightarrow Q$ of rank $s< r$, such that
$\mu_{H}(\mathcal{E})\geq\mu_{H}(Q)$. From Proposition \ref{prelim},
$\trm{det}\,(\mathcal{E})\simeq\mathcal{O}_X(H)$,
thus \[\frac{c_1(Q)\cdot(H^{N-1})}{s}\leq\frac{H^N}{r}.\] Since
$\trm{Pic}\,X\simeq\mathbb{Z}$, we get $c_1(Q)\leq 0$.
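In more detail: since $\mathrm{Pic}\,X=\mathbb{Z}\cdot[H]$, we may write $c_1(Q)=c\,[H]$ with $c\in\mathbb{Z}$, and the displayed inequality becomes

```latex
\frac{c\,H^N}{s} \leq \frac{H^N}{r}
\quad\Longrightarrow\quad
c \leq \frac{s}{r} < 1
\quad\Longrightarrow\quad
c \leq 0\,,
```

using $H^N>0$ and $s<r$ together with the integrality of $c$.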
Let $\widetilde{Q}=Q^{\vee\vee}$, the double dual of the torsion-free
sheaf $Q$. The sheaf $\widetilde{Q}$ on $X$ is reflexive with
$c_1(\widetilde{Q})=c_1(Q)$, and there is a natural inclusion
$Q\hookrightarrow \widetilde{Q}$ which is generically an isomorphism. We have
morphisms $\mathcal{O}_X^r\longrightarrow \mathcal{E}\longrightarrow Q\hookrightarrow \widetilde{Q}.$ As each morphism is
generically surjective, we have a generically surjective morphism
$\mathcal{O}_X^r\longrightarrow \widetilde{Q}\,.$ The $s$-th wedge product of the above
morphism gives a generically surjective morphism:
\[\bigwedge^s\mathcal{O}_X^r\longrightarrow \bigwedge^s
\widetilde{Q}\simeq\trm{det}\,\widetilde{Q}\,.\] By \cite[Proposition
1.2.7]{HL}, we have
$0=\mu_{H}(\bigwedge^s\mathcal{O}_X^r)\leq\mu_{H}(\trm{det}\,\widetilde{Q})=c_1(\trm{det}\,\widetilde{Q})\cdot
(H^{N-1})\,,$ implying that $c_1(\widetilde{Q})\geq 0$. Hence,
$c_1(\widetilde{Q})=0$,
i.e. $\trm{det}\,Q\simeq\trm{det}\,\widetilde{Q}\simeq\mathcal{O}_X$.
Let $\xi$ be the generic point of $X$. The generically surjective
morphism $\mathcal{O}_X^r\longrightarrow \widetilde{Q}$, gives the surjective morphism of
$\mathcal{O}_{\xi}$-vector spaces
$(\mathcal{O}_X^r)_{\xi}\twoheadrightarrow \widetilde{Q}_{\xi}$. Suppose the
morphism $\mathcal{O}_X^r\longrightarrow \widetilde{Q}$ is given by global sections
$t_1,t_2,\cdots,t_r$ of $\widetilde{Q}$, then the stalks of these at $\xi$
generate $\widetilde{Q}_{\xi}$ as an $\mathcal{O}_{\xi}$-vector space. Thereby,
there are $s$ among these, say $t_1,t_2,\cdots, t_s$ which form a
basis of $\widetilde{Q}_{\xi}$. These sections give a generically
surjective morphism:
\[f:\mathcal{O}_X^s\longrightarrow \widetilde{Q}\,.\] The map $f$ is in fact an
isomorphism. Indeed, if $K$ and $C$ denote the kernel and cokernel
of $f$, we get:
\[0\longrightarrow K\longrightarrow \mathcal{O}_X^s\xrightarrow{f} \widetilde{Q}\longrightarrow C\longrightarrow 0\,.\] Since $f$ is
generically surjective, $\trm{rank}\,C=0$. Then, $\trm{rank}\,K=0$,
and thus $K=0$. Also, $\trm{det}\,C\simeq\mathcal{O}_X$. Therefore, $C$ is
supported on a closed set of codimension $\geq 2$ on $X$. Hence,
$\mathcal{O}_X^s$ and $\widetilde{Q}$ are isomorphic on an open set whose
complement has codimension $\geq 2$. By \cite[Proposition 1.6
(iii)]{RH}, $\widetilde{Q}\simeq\mathcal{O}_X^s$.
By \cite[Corollary 1.2]{RH}, $Q^{\vee}$ is reflexive. This gives
$\mathcal{O}_X^s\simeq \widetilde{Q}^{\vee}\simeq Q^{\vee\vee\vee}\simeq Q^{\vee}$.
From the surjection $\mathcal{E}\twoheadrightarrow Q$, we get
$Q^{\vee}\simeq\mathcal{O}_X^s\hookrightarrow \mathcal{F}_{D,A,V}$. This contradicts
$H^0(X,\mathcal{F}_{D,A,V})=0$ (Proposition \ref{prelim} (d)). Hence, the reflexive sheaf
$\mathcal{F}_{D,A,V}$ is $\mu_H$-stable.
\end{proof}
\theoremstyle{plain}
\newtheorem{tftrivialdet}[stabonproj]{Remark}
\begin{tftrivialdet}
From the proof of the theorem, the following commutative diagram gives
$Q\simeq \mathcal{O}_X^s$.
\begin{displaymath}
\xymatrix{\mathcal{O}_X^s\ar[r]^{\simeq} \ar@{->}[d] & \widetilde{Q}\simeq \mathcal{O}_X^s\\
Q\ar@{^{(}->}[ur] &}
\end{displaymath}
The following result can be inferred from the above. Consider a
smooth projective variety $X$. Let $Q$ be a torsion-free sheaf on
$X$ with trivial determinant over an open set whose complement has
codimension $\geq 2$. Suppose $Q$ admits a generically surjective
morphism $\mathcal{O}_X^r\longrightarrow Q$, then the torsion-free sheaf $Q$ is itself
trivial. We thank the referee for pointing this out.
\end{tftrivialdet}
\begin{stabonproj}\label{elementofpf}
The proof of Theorem \ref{stabonprojintro} proves essentially the
following statement: Suppose a smooth projective variety $X$ has
$\emph{Pic}\,X=\mathbb{Z}\cdot[H]$, where $H$ is ample. Let $F$ be a
reflexive sheaf on $X$ such that
\begin{enumerate}
\item[a.] the determinant $\emph{det}\,F=\mathcal{O}_X(-H)$,
\item[b.] there is a generically surjective morphism from a trivial
bundle on $X$ to $F^{\vee}$,
\item[c.] the space of global sections of $F$, i.e. $H^0(X,F)=0$.
\end{enumerate}
Then the reflexive sheaf $F$ is $\mu_H$-stable.
\end{stabonproj}
\subsection{(Semi)stability in case of arbitrary smooth projective varieties}\label{irred}
Suppose that $X$ is an irreducible, smooth projective variety of
dimension $N\geq 2$ over $\mathbb{C}$. Let $L$ be an ample and
globally generated line bundle on $X$.
\begin{flatHtoD}\label{flatHtoD}
For a general $D\in\emph{sm}|L|$, there is a finite, flat morphism
$\phi:X\longrightarrow\mathbb{P}^N$ such that $D$ maps to the hyperplane
$H=Z(x_0)$ in $\mathbb{P}^N$, where
$x_0,x_1,\cdots,x_N\in H^0(\mathbb{P}^N,\mathcal{O}(1))$ are the homogeneous
coordinates.
\end{flatHtoD}
\begin{proof}
Since $L$ is globally generated and $\trm{dim}\,X=N$, any general
collection of $N+1$ sections in $H^0(X,L)$ generate $L$. If
$D\in\trm{sm}|L|$ is general, then $D=Z(s_0)$ for some
$s_0\in H^0(X,L)$, and we can find $s_1,s_2,\cdots, s_N\in H^0(X,L)$
such that $\{s_0,s_1,s_2,\cdots,s_N\}$ generate $L$. These sections
give a morphism $\phi:X\longrightarrow \mathbb{P}^N$ such that
$\phi^*(\mathcal{O}(1))=L$ and $s_i=\phi^*x_i$. If $H$ is the hyperplane
$Z(x_0)\subset\mathbb{P}^N$, we get the following commutative diagram.
\begin{displaymath}
\xymatrix{D\,\ar@{^{(}->}[r]_{i}\ar[d]_{\phi|_D} & X\ar[d]^{\phi} \\
H\,\ar@{^{(}->}[r]_j & \mathbb{P}^N}
\end{displaymath}
Since $X$ is a smooth projective variety over $\mathbb{C}$ and $\phi$ is
defined by the sections of an ample line bundle $L$, the morphism
$\phi$ is finite \cite[Corollary 1.2.15]{Laz}. This also implies that
$\phi$ is surjective. Thereby, $\phi$ is a flat morphism
\cite[Exercise III.9.3 (a)]{RH1}.
\end{proof}
We remark that in positive characteristic we can still obtain a finite
map from $X$ to the projective space by choosing the line bundle $L$
to be very ample.
Any ample, globally generated line bundle on the hyperplane $H$ is of
the form $\mathcal{O}_H(m)\simeq\mathcal{O}_{\mathbb{P}^N}(m)|_H$ for some
$m>0$. Consider $V'\in G(r,H^0(H,\mathcal{O}_H(m)))$ where
$2\leq r\leq h^0(H,\mathcal{O}_H(m))$, such that $\trm{codim}_H
Z(V')=r$. Then we have:
\begin{equation}\label{sesonH}
0\longrightarrow\mathcal{F}_{H,\mathcal{O}_H(m),V'}\longrightarrow V'\otimes {\mathcal{O}}_{\mathbb{P}^N}\longrightarrow {j}_*(\mathcal{O}_H(m)\otimes\mathcal{I}_{Z(V')})\longrightarrow 0\,.
\end{equation}
By Theorem \ref{stabonprojintro}, $\mathcal{F}_{H,\mathcal{O}_H(m),V'}$ is
$\mu_{\mathcal{O}(1)}$-stable. We now have the following lemma.
\begin{pbfromhpisstable}\label{pbfromhpisstable}
Let $A=(L|_D)^{\otimes m}$ on $D$. Then
\begin{enumerate}
\item[(a)] the subspace $V=\phi|_D^*V'\subset H^0(D,A)$ and $\emph{codim}_{D} Z(V)=r$,
\item[(b)] the sheaf $\mathcal{F}_{D,A,V}\simeq \phi^*\mathcal{F}_{H,\mathcal{O}_H(m),V'}$ and is $\mu_L$-semistable.
\end{enumerate}
\end{pbfromhpisstable}
\begin{proof} Note that,
$A= (L|_D)^{\otimes m} \simeq \phi|_D^*\mathcal{O}_H(m)\,. $ This gives
$V=\phi|_D^*V'\subset \phi|_D^*H^0(H,\mathcal{O}_H(m))\subset H^0(D,A)\,.$
Since $V=\phi|_D^*V'$, the closed subscheme $Z(V)$ maps to $Z(V')$
under $\phi$, and we get the following commutative diagram.
\begin{displaymath}
\xymatrix{Z(V)\,\ar@{^{(}->}[r]^{i'} \ar[d]^{\phi|_{Z(V)}} & D \ar[d]^{\phi|_D}\ar@{^{(}->}[r]^{i} & X\ar[d]^{\phi} \\
Z(V') \ar@{^{(}->}[r]^{j'} & H \ar@{^{(}->}[r]^{j} & \mathbb{P}^N}
\end{displaymath}
Since $\phi$ is a finite, surjective and flat morphism, so are
$\phi|_D$ and $\phi|_{Z(V)}$. This implies that $Z(V)$ and $Z(V')$
have the same dimension. Thus,
$\trm{codim}_D(Z(V))=\trm{codim}_H(Z(V')) =r$. This proves (a).
Consider the pullback of the exact sequence \eqref{sesonH} by $\phi$:
\begin{equation}\label{pbfromhyperplane}
0\longrightarrow \phi^*\mathcal{F}_{H,\mathcal{O}_H(m),V'}\longrightarrow V\otimes \mathcal{O}_X\longrightarrow \phi^*{j}_*(\mathcal{O}_H(m)\otimes\mathcal{I}_{Z(V')})\longrightarrow 0\,.
\end{equation}
By \cite[Proposition
III.9.3]{RH1},
$$\phi^*{j}_*(\mathcal{O}_H(m)\otimes\mathcal{I}_{Z(V')})\simeq
{i}_*(\phi|_D)^*(\mathcal{O}_H(m)\otimes\mathcal{I}_{Z(V')})\simeq
i_*(A\otimes \phi|_D^*\mathcal{I}_{Z(V')})\,.$$ Note that
$\phi|_D^*\,\mathcal{I}_{Z(V')}\simeq
\mathcal{I}_{Z(V)}\subset\mathcal{O}_D$. Hence, the short exact sequence
\eqref{pbfromhyperplane} becomes
\[0\longrightarrow \phi^*\mathcal{F}_{H,\mathcal{O}_H(m),V'}\longrightarrow V\otimes \mathcal{O}_{X}\longrightarrow
i_*(A\otimes\mathcal{I}_{Z(V)})\longrightarrow 0\,.\] Therefore,
$\phi^*\mathcal{F}_{H,\mathcal{O}_H(m),V'}\simeq\mathcal{F}_{D,A,V}$. Further, since
$\mathcal{F}_{H,\mathcal{O}_H(m),V'}$ is $\mu_{\mathcal{O}(1)}$-stable, and $\phi$
is a finite morphism, by \cite[Lemma 1.17]{Mar}, $\mathcal{F}_{D,A,V}$ is
$\mu_{L}$-semistable.
\end{proof}
We prove Theorem \ref{calabiyauintro}.
\begin{proof}[Proof of Theorem \ref{calabiyauintro}.] Let
$D\in\trm{sm}|L|$ be a divisor which is general in the linear
system. By Lemma \ref{flatHtoD}, there is a finite morphism
$\phi:X\longrightarrow\mathbb{P}^N$ such that $\phi(D)$ is the hyperplane
$H=Z(x_0)$. Consider the line bundle $A=(L|_D)^{\otimes m}$ on $D$
for any $m>0$. By Remark \ref{Remarkonflatfamily}, there is a flat
family of rank $r$ reflexive sheaves (where $2\leq r\leq h^0(D,A)$)
parametrized by
$V\in\mathcal{U}_r=\{V\in G(r,H^0(D,A))\,|\,\trm{codim}_D Z(V)=r\}$. Each
$V\in\mathcal{U}_r$ corresponds to the reflexive sheaf
$\mathcal{F}_{D,A,V}\simeq\mathcal{F}_{D,L|_D^{\otimes m},V}$.
Let $V=\phi|_D^*V'$, where $V'\in G(s,H^0(H,\mathcal{O}_H(m)))$ is such that
$\trm{codim}_H Z(V')=s$. Then by Lemma \ref{pbfromhpisstable},
$V\in\mathcal{U}_s$ and $\mathcal{F}_{D,A,V}$ is $\mu_L$-semistable. Note that $s$ can
only vary in the range
$2\leq s\leq h^0(H,\mathcal{O}_H(m))={N-1+m\choose m}$. Hence, for any $r$
in the range $2\leq r\leq {N-1+m\choose m}$, we have a $V\in\mathcal{U}_r$
with the corresponding $\mathcal{F}_{D,A,V}$ being
$\mu_{L}$-semistable. By \cite[Proposition 2.3.1]{HL}, semistability
is an open condition in flat families. Therefore, for a general
$V\in G(r,H^0(D,A))$ where $2\leq r\leq {N-1+m\choose m}$, $\mathcal{F}_{D,A,V}$ is
$\mu_L$-semistable.
\end{proof}
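The upper bound ${N-1+m\choose m}$ in the theorem is simply $h^0(H,\mathcal{O}_H(m))$ for the hyperplane $H\simeq\mathbb{P}^{N-1}$, i.e. the number of degree-$m$ monomials in $N$ variables. A small script tabulating this bound (the function name is ours, for illustration only):

```python
from math import comb

def h0_hyperplane(N, m):
    """h^0(P^{N-1}, O(m)) = C(N-1+m, m): the dimension of the space of
    degree-m forms in N variables, i.e. the upper bound on the rank r."""
    return comb(N - 1 + m, m)

# e.g. N = 3, m = 2: quadrics on H = P^2 form a 6-dimensional space,
# so the theorem allows ranks 2 <= r <= 6.
print(h0_hyperplane(3, 2))  # -> 6
```
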
\subsection{(Semi)stability in case of projective space}\label{caseprojspace}
Suppose ${X=\mathbb{P}^N}$ for $N\geq 2$.
\subsubsection{} We first consider the case $L=\mathcal{O}(2)$ and $r=2$.
Let $D\in \trm{sm}|\mathcal{O}(2)|$ be a degree two hypersurface. Let $A$ be
an ample, globally generated line bundle on $D$. We now discuss the
$\mu_{\mathcal{O}(1)}$-stability of the rank 2 reflexive sheaf $\mathcal{F}_{D,A,V}$ where
$V\in G(2,H^0(D,A))$.
We recall the concept of normalization of a torsion-free sheaf
\cite[Chapter II, \S\,1.2]{OK}. A torsion-free sheaf $E$
of rank 2 over $X=\mathbb{P}^N$ has a uniquely determined integer
$k_E$ associated to it, namely,
\begin{equation}\label{eqn29}
k_E = -\frac{c_1(E)}{2} \trm{ if } c_1(E)\trm{ even, and } k_E=-\frac{c_1(E)+1}{2} \trm{ for } c_1(E)\trm{ odd}\,.
\end{equation}
Note that, $c_1(E(k_E))\in \{0,-1\}$. We set $E_{\trm{norm}}:=E(k_E)$,
and call $E$ normalized if $E=E_{\trm{norm}}$.
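The normalization rule \eqref{eqn29} can be sanity-checked mechanically; the following snippet is illustrative only (function names are ours), using the fact that for a rank-2 sheaf $E$, twisting by $\mathcal{O}(k)$ changes $c_1$ by $2k$:

```python
def k_E(c1):
    """Normalizing twist k_E of a rank-2 sheaf on P^N with c_1(E) = c1."""
    return -c1 // 2 if c1 % 2 == 0 else -(c1 + 1) // 2

def c1_after_twist(c1):
    # For rank-2 E, c_1(E(k)) = c_1(E) + 2k.
    return c1 + 2 * k_E(c1)

# c_1(E(k_E)) always lands in {0, -1}:
assert all(c1_after_twist(c1) in (0, -1) for c1 in range(-10, 11))
print(k_E(-2))  # -> 1, the twist used for c_1 = -2 below
```
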
\begin{reflexstab}\label{okolemma}
\cite[Lemma II.1.2.5]{OK} A reflexive sheaf $E$ of rank 2 over
$X=\mathbb{P}^N$ is stable if and only if $E_{\trm{norm}}$ has no
sections: $H^0(X, E_{\trm{norm}}) = 0$. If $c_1(E)$ is even, then
$E$ is semistable if and only if
$H^0(X, E_{\trm{norm}}(-1)) = 0$.
\end{reflexstab}
\begin{alterpf}\label{firstcase}
Consider $L=\mathcal{O}(2)$ on $X=\mathbb{P}^N$ for $N\geq 2$. Let
$D\in\emph{sm}|L|$, $A$ be an ample, globally generated line bundle
on $D$ and $V\in G(2,H^0(D,A))$ be a 2-dimensional subspace. Then
the associated rank two $\mathcal{F}_{D,A,V}$ is $\mu_{\mathcal{O}(1)}$-semistable.
\end{alterpf}
\begin{proof} If $L=\mathcal{O}(2)$, then $c_1(\mathcal{F}_{D,A,V})=-2$, so
$k_{\mathcal{F}_{D,A,V}}=1$ and ${\mathcal{F}_{D,A,V}}_{\trm{norm}}=\mathcal{F}_{D,A,V}(1)$. Since
${\mathcal{F}_{D,A,V}}_{\trm{norm}}(-1)=\mathcal{F}_{D,A,V}$ and $H^0(X,\mathcal{F}_{D,A,V})=0$
(Proposition \ref{prelim} (d)), the result follows from Lemma \ref{okolemma}.
\end{proof}
\subsubsection{}
We consider the case $L=\mathcal{O}(2l)$ and $r=2$.
Start with $L=\mathcal{O}(d)$ for any $d>0$. Consider $D\in \trm{sm}|L|$
where $i:D\hookrightarrow X$ denotes the inclusion. Suppose $A$ is an ample,
globally generated line bundle on $D$. For $r\geq 2$, let
$V\in G(r,H^0(D,A))$ be such that $\trm{codim}_D
Z(V)=r$.
We restrict the exact sequence \eqref{definingses} to the open set
$\widetilde{U}=X\setminus Z(V)$ to get:
\begin{equation}\label{eq6}
0\longrightarrow\mathcal{F}_{D,A,V}|_{\widetilde{U}}\longrightarrow V\otimes\mathcal{O}_{\widetilde{U}}\longrightarrow i_*(A\otimes\mathcal{I}_{Z(V)})|_{\widetilde{U}}\longrightarrow 0.
\end{equation}
Let $U=D\setminus Z(V)$, an open subset of $D$. By \cite[Proposition
II.6.5]{RH1}, $U$ is a divisor in $\widetilde{U}$ and we have the following
commutative diagram.
\begin{displaymath}
\xymatrix{U\,\ar@{^{(}->}[r] \ar@{^{(}->}[d]_{i|_{U}} & D\, \ar@{^{(}->}[d]^i\\
\widetilde{U}\,\ar@{^{(}->}[r] & X}
\end{displaymath}
By \cite[Proposition III.9.3]{RH1},
$i_*(A\otimes\mathcal{I}_{Z(V)})|_{\widetilde{U}}\simeq
(i|_{U})_*(A|_{U}\otimes\mathcal{I}_{Z(V)}|_U)\simeq (i|_U)_*(A|_U)$. The short
exact sequence \eqref{eq6} then becomes:
\begin{equation*}
0\longrightarrow\mathcal{F}_{D,A,V}|_{\widetilde{U}}\longrightarrow V\otimes\mathcal{O}_{\widetilde{U}}\longrightarrow (i|_{U})_*(A|_{U})\longrightarrow 0\,.
\end{equation*}
Thus by \cite[Lemma 1.1]{Ab}, $\mathcal{F}_{D,A,V}|_{\widetilde{U}}$ is an elementary
transformation, and hence a locally free sheaf of rank $r$ on
$\widetilde{U}$. Then, $(i|_{U})^*\mathcal{F}_{D,A,V}|_{\widetilde{U}}=\mathcal{F}_{D,A,V}|_{U}$ on $U\subset D$
is also locally free of rank $r$. Now, $U$ is an open subset of $D$
whose complement $Z(V)$ is of codimension $r\geq 2$ in $D$. This
implies that the stalks of $\mathcal{F}_{D,A,V}|_D$ are torsion-free.
\begin{FonD}\label{FonD}
The sheaf $\mathcal{F}_{D,A,V}|_D$ is torsion-free of rank $r$.
\end{FonD}
Assume now that $r=2$. Thus, $\mathcal{F}_{D,A,V}$ and $\mathcal{F}_{D,A,V}|_D$ have rank
2. Restrict the exact sequence \eqref{definingses} to $D$ to get:
\begin{equation*}\label{eq9}
0\longrightarrow K\longrightarrow \mathcal{F}_{D,A,V}|_D\longrightarrow V\otimes\mathcal{O}_D\longrightarrow A\otimes\mathcal{I}_{Z(V)}\longrightarrow 0\trm{ on } D.
\end{equation*}
Here $K$ denotes the kernel, which is torsion-free by Remark
\ref{FonD}. Note that $K$ is a sheaf of rank 1 on $D$. Consider the
kernel $M$ of the surjective evaluation map
$V\otimes \mathcal{O}_D\twoheadrightarrow A\otimes\mathcal{I}_{Z(V)}$. The sheaf $M$ is
reflexive, cf. \cite[Proposition 1.1]{RH}. Since $M$ is of rank 1, by
\cite[Proposition 1.9]{RH}, $M$ is a line bundle. By comparing
determinants, $M\simeq A^{\vee}$, and we get:
\begin{equation*}
0\longrightarrow A^{\vee}\longrightarrow V\otimes\mathcal{O}_D\longrightarrow A\otimes\mathcal{I}_{Z(V)}\longrightarrow 0\,.
\end{equation*}
From the exact sequences above, we get the following exact sequence on
$D$:
\begin{equation}\label{eq10}
0\longrightarrow K\longrightarrow\mathcal{F}_{D,A,V}|_D\longrightarrow A^{\vee}\longrightarrow 0.
\end{equation}
\newtheorem{Rmktfree}[stabonproj]{Remark}
\begin{Rmktfree}\label{Remark}
As $\mathcal{F}_{D,A,V}|_D$ is torsion-free,
$\emph{det}\,\mathcal{F}_{D,A,V}|_D\simeq(\emph{det}\mathcal{F}_{D,A,V})|_D\simeq(L|_D)^{\vee}$.
\end{Rmktfree}
Thus, from the exact sequence \eqref{eq10}, the determinant of the
rank 1 torsion-free sheaf $K$ is:
$$\trm{det}\,K\simeq A\otimes \trm{det}\,\mathcal{F}_{D,A,V}|_D\simeq A\otimes(L|_D)^{\vee}\,.$$
\newtheorem{sstabonD}[stabonproj]{Proposition}
\begin{sstabonD}\label{sstabonD}
Consider $L=\mathcal{O}(2l)$ for $l>0$. Let $D\in\emph{sm}|L|$,
$A=\mathcal{O}(l)|_D$ and $V\in G(2,H^0(D,A))$ such that
$\emph{codim}_D Z(V)=2$. Then the torsion-free sheaf $\mathcal{F}_{D,A,V}|_D$ on
$D$ is $\mu_{\mathcal{O}(1)|_D}$-semistable. Thus $\mathcal{F}_{D,A,V}$ is
$\mu_{\mathcal{O}(1)}$-semistable.
\end{sstabonD}
\begin{proof} Both $K$ and $A^{\vee}$ are rank one torsion-free
sheaves on $D$ and hence are $\mu_{\mathcal{O}(1)|_D}$-stable. Further they
have the same determinant. Indeed,
$$A^{\vee}\simeq \mathcal{O}(-l)|_D\trm{ and } \trm{det}\,K\simeq A\otimes(L|_D)^{\vee}\simeq \mathcal{O}(l)|_D\otimes \mathcal{O}(-2l)|_D\simeq\mathcal{O}(-l)|_D\,.$$
Hence $\mathcal{F}_{D,A,V}|_D$ is a torsion-free sheaf which is an extension of
$\mu_{\mathcal{O}(1)|_D}$-stable sheaves with the same slope. By \cite[Lemma
1.10 (3)]{Mar}, $\mathcal{F}_{D,A,V}|_D$ is $\mu_{\mathcal{O}(1)|_D}$-semistable. This
implies that $\mathcal{F}_{D,A,V}$ is $\mu_{\mathcal{O}(1)}$-semistable, cf. \cite[Chapter
11]{LP}.
\end{proof}
We collect our observations to prove Theorem \ref{finalprojintro}.
\begin{proof}[Proof of Theorem \ref{finalprojintro}.] Part (a) of the theorem follows from Theorem \ref{stabonprojintro}. Part (b) of the theorem follows from Corollary \ref{firstcase}. Proposition \ref{sstabonD} proves part (c) of the theorem. Finally, part (d) follows from Theorem \ref{calabiyauintro}.
\end{proof}
\begin{exceptions}\label{exceptions}
If $L=\mathcal{O}(d)$, then for all $r$ such that $(r,d)=1$, a
$\mu_{\mathcal{O}(1)}$-semistable $\mathcal{F}_{D,A,V}$ is in fact stable. Indeed, if
$D\in |\mathcal{O}(d)|$, then
$\emph{deg}\,\mathcal{F}_{D,A,V}=c_1(\mathcal{F}_{D,A,V})\cdot (\mathcal{O}(1)^{N-1})=-d$ and
$\emph{rank}\,\mathcal{F}_{D,A,V}=r$. Since the rank and degree are coprime, by
\cite[Lemma 1.2.14]{HL}, semistability and stability coincide.
\end{exceptions}
Theorem \ref{finalprojintro} shows that there are families of
$\mu_{\mathcal{O}(1)}$-(semi)stable rank $r$ reflexive sheaves on the
projective space with any prescribed $c_1$.
The following lemma shows that if we weaken our condition on $A$ in
part (d) of Theorem \ref{finalprojintro}, then we may not get the
required (semi)stability.
\begin{notsemist}\label{notsemist}
Consider $L=\mathcal{O}_X(d)$ ($d>0$) on $X=\mathbb{P}^N$ for $N\geq 2$. Let
$D\in |L|$ be a general smooth and irreducible hypersurface. Let
$A=\mathcal{O}_X(l)|_D$ and $V\subset H^0(D,A)$ be a general
$r$-dimensional subspace ($r\geq 2$) such that $r$ and $l$ satisfy
the following condition:
$$0<l<\frac{d(r-1)}{r}\,.$$
Then $\mathcal{F}_{D,A,V}$ is not $\mu_{\mathcal{O}(1)}$-semistable.
\end{notsemist}
Note that if $l$ is an integer such that $0<l<\frac{d}{2}$, then for
any $r\geq 2$, the above inequality is satisfied.
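Indeed, the sufficiency of the condition $0<l<\frac{d}{2}$ follows from the elementary estimate
\[
r\geq 2 \;\Longrightarrow\; \frac{r-1}{r}=1-\frac{1}{r}\geq\frac{1}{2}\,,
\qquad\trm{so that}\qquad
0<l<\frac{d}{2}\leq\frac{d(r-1)}{r}\,.
\]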
\begin{proof} Let $l$ and $r$ be integers that satisfy the given condition, i.e.
\[0<l<\frac{d(r-1)}{r}\,.\] Consider the line bundle $\mathcal{O}_X(l)$ on
the projective space $X$. Since $D\in |\mathcal{O}_X(d)|$ and
$A=\mathcal{O}_X(l)|_D$, we have $H^0(X,\mathcal{O}_X(l))\simeq H^0(D,A)$. Choose
a general $V\in G(r,H^0(X,\mathcal{O}_X(l)))\simeq G(r,H^0(D,A))$.
Let $Z_X(V)$ (resp. $Z_D(V)$) denote the closed subscheme of $X$
(resp. $D$), defined by the vanishing of sections of
$V\subset H^0(X,\mathcal{O}_X(l))$ (resp. $V\subset H^0(D,A)$). In fact
$Z_D(V)=Z_X(V)\cap D$. Note that for a general $D$ and $V$,
$\trm{codim}_X Z_X(V)=\trm{codim}_D Z_D(V)=r$. We get the following
exact sequence on $X$:
\begin{equation*}\label{ceg-1}
0\longrightarrow M\longrightarrow V\otimes \mathcal{O}_X\longrightarrow \mathcal{O}_X(l)\otimes\mathcal{I}_{Z_X(V)}\longrightarrow 0\,.
\end{equation*}
Here $M$ is a torsion-free (in fact reflexive) sheaf of rank $r-1$ and
determinant $\mathcal{O}_X(-l)$. Thus $\mu_{\mathcal{O}_X(1)}(M)=\frac{-l}{r-1}$.
Also, corresponding to the triple $(D,A,V)$, we get the following
exact sequence on $X$ (where $i:D\hookrightarrow X$ denotes the inclusion):
\begin{equation*}\label{ceg-2}
0\longrightarrow \mathcal{F}_{D,A,V}\longrightarrow V\otimes\mathcal{O}_X\longrightarrow i_*(A\otimes\mathcal{I}_{Z_D(V)})\longrightarrow 0\,.
\end{equation*}
Note that $\mathcal{F}_{D,A,V}$ is of rank $r$ and $\trm{det}\,\mathcal{F}_{D,A,V}=\mathcal{O}_X(-d)$. We
get $\mu_{\mathcal{O}_X(1)}(\mathcal{F}_{D,A,V})=\frac{-d}{r}$.
Comparing the above exact sequences, there is an inclusion
$M\hookrightarrow\mathcal{F}_{D,A,V}$. But
\[\mu_{\mathcal{O}_X(1)}(M)=\frac{-l}{r-1}>\frac{-d(r-1)}{r(r-1)}= \frac{-d}{r}=\mu_{\mathcal{O}_X(1)}(\mathcal{F}_{D,A,V})\,.\]
Hence $\mathcal{F}_{D,A,V}$ is not $\mu_{\mathcal{O}(1)}$-semistable.
\end{proof}
\section{An application to Kernel bundles}\label{Applications}
In this section we see an application of the techniques used so
far. As before, we work over the field of complex numbers.
Consider the line bundle $\mathcal{O}(d)$ ($d>0$) on $\mathbb{P}^n$, $n\geq
2$. We have the following exact sequence corresponding to the line
bundle $\mathcal{O}(d)$ where $M_{\mathcal{O}(d)}$ is the kernel vector bundle.
\begin{equation}\label{kernel1}
0\longrightarrow M_{\mathcal{O}(d)}\longrightarrow H^0(\mathcal{O}(d))\otimes\mathcal{O}_{\mathbb{P}^n}\longrightarrow \mathcal{O}(d)\longrightarrow 0\,.
\end{equation}
Flenner \cite[Corollary 2.2]{HF} proved that the kernel bundles
$M_{\mathcal{O}(d)}$ are $\mu_{\mathcal{O}(1)}$-semistable.
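For later reference, \eqref{kernel1} determines the basic invariants of the kernel bundle:
\[
\trm{rank}\,M_{\mathcal{O}(d)}=h^0(\mathcal{O}(d))-1=\binom{n+d}{n}-1\,,\qquad
\trm{det}\,M_{\mathcal{O}(d)}\simeq\mathcal{O}(-d)\,,
\]
so that $\mu_{\mathcal{O}(1)}(M_{\mathcal{O}(d)})=-d/\bigl(\binom{n+d}{n}-1\bigr)$.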
\begin{hpstable}\label{hpstable}
In fact, in case of the line bundle $\mathcal{O}(1)$ on $\mathbb{P}^n$, the
kernel bundle $M_{\mathcal{O}(1)}$ is stable. Indeed, note that
$M_{\mathcal{O}(1)}$ is a locally free sheaf on $\mathbb{P}^n$ with
$\emph{det}\,M_{\mathcal{O}(1)}=\mathcal{O}(-1)$. Further, there is a surjective
morphism from a trivial bundle on $\mathbb{P}^n$ to
$M_{\mathcal{O}(1)}^{\vee}$. Indeed, dualizing \eqref{kernel1} when $d=1$, we
get the exact sequence:
\[0\longrightarrow\mathcal{O}(-1)\longrightarrow H^0(\mathcal{O}(1))^{\vee}\otimes \mathcal{O}_{\mathbb{P}^n}\longrightarrow
M_{\mathcal{O}(1)}^{\vee}\longrightarrow 0\,.\] Finally, since
$H^0(X,M_{\mathcal{O}(1)})=0$, by Remark \ref{elementofpf}, $M_{\mathcal{O}(1)}$
is $\mu_{\mathcal{O}(1)}$-stable.
\end{hpstable}
\begin{Kernelbundle}\label{kernel}
Let $X$ be an irreducible, smooth projective variety of dimension
$n$ over $\mathbb{C}$. Consider an ample and globally generated line
bundle $L$ on $X$. For a general subspace $W\subset H^0(X,L)$
of dimension $n+1$, the kernel bundle $M_{L,W}$ associated to
$(L,W)$ is $\mu_{L}$-polystable.
\end{Kernelbundle}
\begin{proof} For a general $W\in G(n+1, H^0(X,L))$, the corresponding
linear system $\mathbb{P} W$ is basepoint-free and gives a finite
surjective morphism $\psi_W:X\longrightarrow \mathbb{P}^n\,.$ Note that
$\psi_W^*\mathcal{O}(1)\simeq L$ and that
$\psi_W^*H^0(\mathbb{P}^n,\mathcal{O}(1))\simeq W$. We have the kernel bundle
$M_{\mathcal{O}(1)}$ on $\mathbb{P}^n$ defined by the exact sequence of the
form \eqref{kernel1} for $d=1$. Pulling back this sequence to $X$, we
get:
\begin{equation*}
0\longrightarrow \psi_W^*M_{\mathcal{O}(1)}\longrightarrow W\otimes \mathcal{O}_{X}\longrightarrow L\longrightarrow 0\,.
\end{equation*}
Hence, $\psi_W^*M_{\mathcal{O}(1)}$ is the kernel bundle on $X$ associated to
$(L,W)$, i.e. $\psi_W^*M_{\mathcal{O}(1)}\simeq M_{L,W}$. Again, since
$M_{\mathcal{O}(1)}$ is $\mu_{\mathcal{O}(1)}$-stable, by \cite[Lemma 1.17]{Mar},
$M_{L,W}$ is $\mu_L$-semistable. In fact, by a result of Kempf
\cite[Theorem 1]{GK}, $M_{L,W}$ is $\mu_L$-polystable on $X$.
\end{proof}
\section{Introduction}
\IEEEPARstart{T}{he} initial motivation for frequent pattern mining (FPM) was to analyze the shopping behavior of customers using transactional databases and recommend frequently purchased patterns to customers \cite{agrawal1994fast,han2000mining,chen1996data,mannila1997discovery,goethals2003survey}. In this setting, items were treated as binary, and only whether an item appears in a transaction was considered. However, frequent purchase patterns are occasionally less profitable than infrequent purchase patterns with high profits, which poses a fundamental problem. Hence, the discovery of high-utility patterns, which consider not only the internal utility (e.g., quantity) but also the external utility (e.g., profit, interest, or weight) \cite{shen2002objective,chan2003mining,yao2006mining,yao2006unified}, has gained substantial research attention. Moreover, a framework called high-utility pattern mining (HUPM) \cite{gan2021survey,ahmed2009efficient} was proposed to address this practical issue. In contrast with FPM, the lack of a downward closure property makes HUPM more difficult and intractable.
Up until now, numerous approaches and strategies have been designed to increase the efficiency and convenience of mining high-utility patterns \cite{li2008isolated,liu2005two}. These methods can be broadly divided into three major categories. The first category involves the candidate generation-and-test approach \cite{yao2006mining,li2008isolated,liu2005two}: The eligible patterns are selected as candidates based on an upper-bound evaluation and then calculated and tested to determine whether they are qualified. The second category entails tree-based algorithms \cite{ahmed2009efficient,ahmed2011huc,tseng2012efficient,tseng2010up}. The necessary information is projected onto a tree in the database, which can avoid multiple traversals of the database. The third category employs vertical data structures (e.g., utility-lists) \cite{lin2016fhn,fournier2014fhm}. Similar to the tree-based approach, crucial information is stored in a vertical data structure, and the utility of any pattern can be calculated using this structure. HUPM has dozens of practical applications in real life, such as gene regulation \cite{zihayat2017mining} and web click-stream analysis \cite{li2008fast,shie2010online}. A challenging problem arises in selecting a pattern to represent the transactions.
To the best of our knowledge, only a few studies have addressed this problem. The concept of occupancy \cite{tang2012incorporating} was originally defined as the ratio of the number of items of a pattern to the number of items in each transaction that supports it. This requires the patterns to occupy the majority of their supporting transactions. The occupancy measure can be widely used in pattern recommendation. When a user browses a website, if the number of times the user clicks on a URL is greater than a certain threshold, then the URL exhibits a high-occupancy pattern. However, if this user browses for a long time at a URL that is not frequently clicked, the URL may still be valuable. Hence, Shen \textit{et al.} \cite{shen2016ocean} combined occupancy and utility into an original concept called utility-occupancy and designed an algorithm called OCEAN. Although this algorithm is novel, it has a limitation in that it may exclude certain patterns that should be qualified. To address this drawback, Gan \textit{et al.} \cite{gan2019huopm} proposed an efficient algorithm called HUOPM (high utility-occupancy pattern mining). Taking advantage of two compact structures, namely a utility-occupancy list and a frequent-utility-occupancy table, HUOPM reduces both the running time and the memory consumption, which are two of its principal merits. It is clear that utility-occupancy has wide applicability in an information-driven society. For instance, during a holiday, tourists may want to go out but not know where to visit. They can then consult a travel route recommendation system, which analyzes the utility-occupancy of the candidate routes and finally recommends the best route to the tourists.
During the process of pattern mining, researchers tend to extract all patterns. However, these may not necessarily be useful for actual production or management. For example, supermarket managers generally display milk and bread together, but they rarely bundle \{milk, bread, strawberry jam, gum\}. Although both sets are elegant patterns selected by the algorithm, the shorter one is obviously more popular with decision makers. In previous studies, Pei \textit{et al.} \cite{pei2002constrained} applied length constraints to frequent pattern mining so that no further items are appended to a pattern once it reaches the limit. Next, Fournier-Viger \textit{et al.} \cite{fournier2016fhm} proposed the FHM+ algorithm focusing on utility, which improved FHM by incorporating the length constraints into utility mining. This narrows the upper bound of the patterns and further trims the search space. Nevertheless, no method has been proposed in the field of utility-occupancy to address the problem of a length constraint.
In this study, we focus on mining flexible high utility-occupancy patterns by developing a novel algorithm called HUOPM$^+$. The proposed algorithm is dedicated to discovering high utility-occupancy patterns with length constraints; in addition, it is a generic framework (i.e., an extension) of the state-of-the-art HUOPM algorithm. The major contributions of this paper are briefly summarized as follows.
\begin{itemize}
\item A generic and practical algorithm is proposed to exploit flexible high utility-occupancy patterns. During the execution of this algorithm, the minimum and maximum lengths are needed in advance to determine the length range of the derived patterns.
\item To avoid scanning the database multiple times, two compact data structures, called a utility-occupancy nested list (UO-nlist) and a frequent-utility-occupancy table (FUO-table), are constructed to store vital information from the databases.
\item It is recommended to tighten the upper bound with the newly designed LUB, which is less than the original upper-bound in the HUOPM algorithm.
\item Subsequent experiments have been carried out on both real-world and synthetic datasets, the results show that all patterns with length constraints can be obtained and that the efficiency of the proposed algorithm is significantly high in terms of the execution time and memory consumption.
\end{itemize}
The remainder of this paper is broadly organized as follows. Related studies are introduced in Section \ref{sec:2}. In Section \ref{sec:background}, some fundamental knowledge regarding this study is presented. The presented HUOPM$^+$ algorithm and three novel pruning strategies are detailed in Section \ref{sec:4}. In Section \ref{sec:experiments}, subsequent experiments confirming the effectiveness and efficiency of the proposed algorithm are described. Finally, Section \ref{sec:conclusion} provides some concluding remarks regarding this research.
\section{Related Studies}
\label{sec:2}
The studies related to the HUOPM$^+$ algorithm mainly deal with three aspects, i.e., high-utility pattern mining, high utility-occupancy pattern mining, and flexible pattern mining, which are discussed below.
\subsection{High-Utility Pattern Mining}
Thus far, numerous studies have been carried out on HUPM, which aims at mining qualified patterns whose utilities are greater than or equal to a predefined minimum utility threshold. Because HUPM provides guidance for many applications such as decision-making, it has attracted significant attention. HUPM was initially proposed in \cite{yao2004foundational}. Each pattern has two aspects, one being the quantity of items whose alias is an internal utility, and the other being a unit utility (e.g., profit), which is defined by experts as an external utility. If any patterns satisfy the minimum utility threshold, they can be derived as high-utility patterns. HUPM is technically more challenging than FPM because FPM has a downward closure property, whereas HUPM does not. To overcome this critical issue, Liu \textit{et al.} \cite{liu2005two} creatively proposed a two-phase algorithm and established a transaction-weighted utilization (TWU) model by adopting the anti-monotonic property of the transaction-weighted utility. Liu \textit{et al.} \cite{liu2012mining} then presented HUI-Miner, achieving a better performance than previous approaches. This algorithm scans the transaction database twice to construct the initial utility-list, after which there is no longer a need to access the database. The utility-list contains the utility and the remaining utility of the patterns, which is a necessary condition for calculating the upper bound of the extended patterns. The above algorithms are all itemset-based utility mining algorithms, although different types of data also exist. Utility mining of temporal data \cite{lin2015efficient}, uncertain data \cite{lin2016efficient}, dynamic data \cite{lin2015incremental,yun2019efficient,nguyen2019mining}, sequence data \cite{gan2020proum,gan2020fast}, and other factors are all interesting research directions, as highlighted in a literature review \cite{gan2021survey}.
\subsection{High Utility-Occupancy Pattern Mining}
In terms of the contribution ratio of a pattern, the utility-based approaches presented above are of no help. Thus, it is sensible to introduce a new measure, i.e., occupancy. The initial concept of occupancy, which measures the degree to which a pattern occupies its supporting transactions, was designed by Tang \textit{et al.} \cite{tang2012incorporating}. Unfortunately, this method is unsuitable for research on utility mining. Subsequently, Shen \textit{et al.} \cite{shen2016ocean} started conducting research on utility-occupancy and developed a representative algorithm called OCEAN to find patterns whose share of utility in the supporting transactions is greater than a specific value. However, the OCEAN algorithm fails to discover the complete set of high utility-occupancy patterns. Gan \textit{et al.} \cite{gan2019huopm} proposed a successful and efficient algorithm called HUOPM to address this disadvantage. Two compact data structures are applied to store the vital data of the database. A utility-occupancy list (UO-list) is used to store the utility and the remaining utility of each pattern, and each of its entries records the details of one transaction. An FUO-table is then built for the convenience of integrating and revising the information in the UO-list. HUOPM focuses only on precise data; thus, Chen \textit{et al.} \cite{chen2021discovering} recently extended HUOPM to deal with uncertain data. However, these algorithms cannot discover flexible patterns based on various constraints.
\subsection{Flexible Pattern Mining}
Through this research, we found that the patterns adopted by managers generally have a shorter length and that longer patterns are not universal because they are too specific. Hence, it is advisable for users to pursue a flexible algorithm. Generating a large number of patterns beyond what is demanded considerably decreases the mining efficiency. Pei \textit{et al.} \cite{pei2002constrained} emphasized adding a constraint-based approach during frequent pattern mining. The authors mainly adopted a pattern-growth method, which is applicable to various constraints. When this method is applied to the field of utility mining, however, it remains superficial, as it does not exploit the internal properties of utility mining algorithms. FHM$^+$ \cite{fournier2016fhm} has an interesting feature in that it discovers high-utility itemsets with length constraints. The authors considered the maximum length of the patterns as the dominant control parameter, and designed length upper-bound reduction (LUR) to prune the search space. Several studies regarding flexible sequential pattern mining, such as CloSpan \cite{yan2003clospan}, BIDE \cite{wang2004bide}, and MaxFlex \cite{arimura2007mining}, have also been conducted to meet the requirements of different applications.
\section{Preliminary and Problem Statement}
\label{sec:background}
To assist with this discussion, in this section, some symbols in connection with the proposed algorithm are introduced and defined. Let $I$ = \{$i_{1}$, $i_{2}$, $\ldots$, $i_{m}$\} be a collection of $m$ distinct items in a transaction database. An itemset that contains $k$ distinct items is called a $k$-itemset. A transaction database $\mathcal{D}$ consists of $n$ transactions, i.e., $\mathcal{D}$ = \{$T_{1}$, $T_{2}$, $\ldots$, $T_{n}$\}. Each transaction holds its own identifier \textit{tid} and is a subset of $I$. Every transaction includes three parts, i.e., the transaction identifier \textit{tid}, the item names, and the number of purchases of each corresponding item. Next, we take advantage of the transaction database in TABLE \ref{table:db1} and the unit profit of each item in TABLE \ref{table:profit} as a running example.
\begin{table}[!htbp]
\centering
\small
\caption{Transaction database.}
\label{table:db1}
\begin{tabular}{|c|c|c|}
\hline
\textbf{\textit{tid}} & \textbf{Transaction (item, quantity)} & \textbf{\textit{tu}} \\ \hline \hline
$ T_{1} $ & \textit{a}:3, \textit{b}:4, \textit{c}:2, \textit{d}:6, \textit{e}:2 & \$63 \\ \hline
$ T_{2} $ & \textit{a}:7, \textit{b}:4, \textit{c}:1, \textit{e}:2 & \$62 \\ \hline
$ T_{3} $ & \textit{a}:5, \textit{b}:2, \textit{e}:1 & \$35 \\ \hline
$ T_{4} $ & \textit{b}:4, \textit{c}:1, \textit{d}:2 & \$25 \\ \hline
$ T_{5} $ & \textit{a}:2, \textit{d}:4 & \$14 \\ \hline
$ T_{6} $ & \textit{a}:2, \textit{b}:2, \textit{c}:6, \textit{d}:4, \textit{e}:3 & \$60 \\ \hline
$ T_{7} $ & \textit{a}:1, \textit{b}:2 & \$13 \\ \hline
$ T_{8} $ & \textit{d}:3 & \$6 \\ \hline
$ T_{9} $ & \textit{b}:3, \textit{c}:5, \textit{d}:2, \textit{e}:5 & \$74 \\ \hline
$ T_{10} $ & \textit{b}:3, \textit{e}:5 & \$65 \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\centering
\small
\caption{Unit utility of each item}
\label{table:profit}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Item} & $ a $ & $ b $ & $ c $ & $ d $ & $ e $ \\ \hline
\textbf{Utility (\$)} & $ 3 $ & $ 5 $ & $ 1 $ & $ 2 $ & $ 10 $ \\ \hline
\end{tabular}
\end{table}
\begin{definition}
\label{def_1}
\rm The number of transactions containing an itemset $X$ in a database is usually defined as the \textit{support count} \cite{agrawal1994fast}, which can be denoted as \textit{SC(X)}. Given a \textit{support threshold} $\alpha$ ($ 0 < \alpha \leq 1 $), if $SC(X) \geq$ $\alpha$ $\times$ $|\mathcal{D}|$, we can determine that $X$ is a frequent pattern. The collection of transactions containing $X$ is expressed as $ \varGamma_X$. That is, if an itemset $X$ appears in the transaction $T_q$, then $T_q$ belongs to $ \varGamma_X$, and naturally, $ SC(X)$ = $|\varGamma_X| $.
\end{definition}
As shown in TABLE \ref{table:db1}, the pattern $ac$ appears in transactions $T_1$, $T_2$, and $T_6$. Therefore, $SC (ac) $ = 3 and $\varGamma_{(ac)}$ = $ \{T_1$, $T_2$, $T_6\} $. Assuming that the value of $\alpha$ is 0.3, $SC(ac)$ $\geq $ $\alpha$ $\times$ $|\mathcal{D}| $, and thus $ac$ is frequent.
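The support computation of Definition \ref{def_1} can be sketched in a few lines of Python. The dictionary encoding of TABLE \ref{table:db1} below is ours, for illustration only:

```python
# Transaction database of TABLE I: tid -> {item: purchase quantity}
DB = {1: {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
      2: {'a': 7, 'b': 4, 'c': 1, 'e': 2},
      3: {'a': 5, 'b': 2, 'e': 1},
      4: {'b': 4, 'c': 1, 'd': 2},
      5: {'a': 2, 'd': 4},
      6: {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
      7: {'a': 1, 'b': 2},
      8: {'d': 3},
      9: {'b': 3, 'c': 5, 'd': 2, 'e': 5},
      10: {'b': 3, 'e': 5}}

def support_count(X, db):
    """SC(X): the number of transactions that contain every item of X."""
    return sum(1 for t in db.values() if set(X) <= set(t))

# SC(ac) = |{T1, T2, T6}| = 3 >= alpha * |D| = 0.3 * 10, so ac is frequent
assert support_count('ac', DB) == 3
```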
In a market basket analysis, based on the preference of the customers for the products or an evaluation of experts regarding the items, each item in the database is associated with a positive number, or in other words, a unit profit.
\begin{definition}
\label{def_4}
\rm We define the \textit{internal utility} of an item $i$ in transaction $T_q$ as \textit{iu(i, $T_q$)}, which refers to the number of occurrences of the corresponding item. We define the \textit{external utility} of an item $i$ in the database as \textit{eu(i)}, namely, the unit utility of $i$ in TABLE \ref{table:profit}, which is generally set subjectively.
\end{definition}
\begin{definition}
\label{def_5}
\rm The utility of an item $i$ in the supporting transaction $T_{q}$ is defined as $u(i, T_{q})$ = $iu(i, T_{q})$ $\times$ $eu(i)$. Moreover, the utility of a pattern $X$ in a transaction $T_{q}$ is equal to the total utility of each item in the pattern and is represented as $u(X, T_{q})$ = $\sum _{i \in X \wedge X \subseteq T_{q}} u(i, T_{q}) $. Hence, the utility of $X$ existing in a transaction database $\mathcal{D}$ is denoted as $u(X)$ = $\sum_{X\subseteq T_{q}\wedge T_{q} \in \mathcal{D}} u(X, T_{q}) $. The sum of the utilities of all items in a transaction is recorded as the transaction utility (\textit{tu}).
\end{definition}
Let us take $e$ and $ac$ as examples for an easier understanding of the utility calculation. Here, $ u(e) $ = $ u(e, T_1) $ + $ u(e, T_2) $ + $ u(e, T_3) $ + $ u(e, T_{6})$ + $ u(e, T_{9}) $ + $ u(e, T_{10})$ = \$20 + \$20 + \$10 + \$30 + \$50 + \$50 = \$180, $ u(ac) $ = $ u(ac, T_1) $ + $ u(ac, T_2) $ + $ u(ac, T_6) $ = \$11 + \$22 + \$12 = \$45. Each transaction utility refers to the last column of TABLE \ref{table:db1}.
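These computations can be checked mechanically; the following sketch, over our own dictionary encoding of TABLES \ref{table:db1} and \ref{table:profit}, reproduces the values above:

```python
# Running example: TABLE I (tid -> {item: quantity}) and TABLE II (unit utilities)
DB = {1: {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
      2: {'a': 7, 'b': 4, 'c': 1, 'e': 2},
      3: {'a': 5, 'b': 2, 'e': 1},
      4: {'b': 4, 'c': 1, 'd': 2},
      5: {'a': 2, 'd': 4},
      6: {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
      7: {'a': 1, 'b': 2},
      8: {'d': 3},
      9: {'b': 3, 'c': 5, 'd': 2, 'e': 5},
      10: {'b': 3, 'e': 5}}
EU = {'a': 3, 'b': 5, 'c': 1, 'd': 2, 'e': 10}

def u(X, db=DB, eu=EU):
    """u(X): total utility of pattern X over its supporting transactions."""
    return sum(sum(t[i] * eu[i] for i in X)
               for t in db.values() if set(X) <= set(t))

def tu(tid, db=DB, eu=EU):
    """tu: transaction utility, i.e., the sum of all item utilities in it."""
    return sum(q * eu[i] for i, q in db[tid].items())

assert u('e') == 180 and u('ac') == 45
assert [tu(t) for t in DB] == [63, 62, 35, 25, 14, 60, 13, 6, 74, 65]
```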
The \textit{occupancy} was originally proposed to measure the proportion of pattern $X$ in the supporting transactions \cite{tang2012incorporating}. In the field of utility mining, it is crucial to discover HUOPs \cite{shen2016ocean,gan2019huopm}.
\begin{definition}
\label{def_7}
\rm Assume $X$ is present in transaction $T_q$, and the utility-occupancy of $X$ in supporting transaction $T_q$ is defined as follows:
\begin{equation}
uo(X, T_q) = \dfrac{u(X, T_q)}{tu(T_q)}.
\end{equation}
Assume $\varGamma_X$ is a collection of all transactions containing pattern $X$. The utility-occupancy of a pattern in a database is calculated as follows:
\begin{equation}
uo(X) = \dfrac{\sum_{X \subseteq T_q \wedge T_q \in \mathcal{D}}uo(X,T_q)}{|\varGamma_X|}.
\end{equation}
\end{definition}
\begin{definition}
\label{def_9}
\rm Let there be a transaction database $\mathcal{D}$. A pattern $X$ is denoted as a HUOP if and only if $SC(X)$ $\geq$ $\alpha$ $\times$ $|\mathcal{D}|$ and $ uo(X)$ $\geq$ $\beta $, where $ \alpha $ ($ 0 < \alpha \leq 1 $) is the predefined minimum support threshold and $ \beta $ ($ 0 < \beta \leq 1 $) is the predefined minimum utility-occupancy threshold.
\end{definition}
The support and the transaction utilities were calculated above: $SC(ac)$ = 3, $tu(T_1)$ = \$63, $tu(T_2)$ = \$62, and $tu(T_6)$ = \$60. Therefore, $ uo(ac, T_1) $ is calculated as \$11/\$63 $\approx $ 0.1746. Similarly, $ uo(ac, T_2) $ and $ uo(ac, T_6) $ are calculated as 0.3548 and 0.2, respectively. Furthermore, $uo(ac)$ = (0.1746 + 0.3548 + 0.2)/3 $\approx $ 0.2431. Provided that the values of $\alpha$ and $\beta$ are both 0.3, the pattern $ac$ is not a HUOP.
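The utility-occupancy of $ac$ can likewise be verified with a short sketch (again over our own dictionary encoding of the running example):

```python
# Running example: TABLE I and TABLE II
DB = {1: {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
      2: {'a': 7, 'b': 4, 'c': 1, 'e': 2},
      3: {'a': 5, 'b': 2, 'e': 1},
      4: {'b': 4, 'c': 1, 'd': 2},
      5: {'a': 2, 'd': 4},
      6: {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
      7: {'a': 1, 'b': 2},
      8: {'d': 3},
      9: {'b': 3, 'c': 5, 'd': 2, 'e': 5},
      10: {'b': 3, 'e': 5}}
EU = {'a': 3, 'b': 5, 'c': 1, 'd': 2, 'e': 10}

def uo(X, db=DB, eu=EU):
    """uo(X): average share of u(X, Tq) in tu(Tq) over the supporting
    transactions of X (Definition 4)."""
    shares = [sum(t[i] * eu[i] for i in X) /
              sum(q * eu[i] for i, q in t.items())
              for t in db.values() if set(X) <= set(t)]
    return sum(shares) / len(shares)

# uo(ac) = (11/63 + 22/62 + 12/60) / 3 ~ 0.2431 < beta = 0.3,
# so ac is frequent but not a HUOP
assert round(uo('ac'), 4) == 0.2431
```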
There have been dozens of studies on HUOPM; however, one outstanding issue is that these algorithms normally discover massive numbers of patterns, including many containing numerous items. As discussed earlier, such patterns might not work for users because they usually represent an unusual situation. To improve the usefulness of the discovered patterns, we address the issue of mining flexible high utility-occupancy patterns with a length constraint. The minimum length \textit{minlen} and the maximum length \textit{maxlen} of the required patterns are predefined.
\begin{definition}
\label{def_13}
\rm (\textit{Flexibly mining high utility-occupancy patterns}) The flexibly mining of high utility-occupancy patterns aims to discover HUOPs containing up to \textit{maxlen} items and at least \textit{minlen} items.
\end{definition}
\textbf{Problem Statement.} Consider a given transaction database $\mathcal{D}$, a utility-table recording the unit utility of each item, and four input parameters ($\alpha$, $\beta$, \textit{minlen}, and \textit{maxlen}) used as the mining constraints. The purpose of this study is to mine flexible qualified patterns whose length is at least \textit{minlen} and at most \textit{maxlen}, under the condition that not only is the support count greater than or equal to the minimum support count threshold $\alpha$ $\times$ $|\mathcal{D}|$, but the utility-occupancy is also no less than the minimum utility-occupancy threshold $\beta$.
Assuming that \textit{minlen} = 1 and \textit{maxlen} = 3, the length of all derived patterns should range from 1 to 3. Thus, although \{$caeb$\} is a HUOP, it is not a desired pattern because its length exceeds \textit{maxlen}. In addition, assuming that the values of $\alpha$ and $\beta$ are both 0.3, all of the desired patterns are listed in TABLE \ref{table:patterns}.
\begin{table}[!htbp]
\centering
\small
\caption{Desired patterns with length constraints}
\label{table:patterns}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{HUOP} & \textbf{\textit{sup}} & \textbf{\textit{uo}} & \textbf{HUOP} & \textbf{\textit{sup}} & \textbf{\textit{uo}} \\ \hline \hline
$d$ & 6 & 0.3515 & $db$ & 4 & 0.5062 \\ \hline
$e$ & 6 & 0.4784 & $ eb $ & 6 & 0.7328 \\ \hline
$b$ & 8 & 0.3869 & $ cae $ & 3 & 0.6232 \\ \hline
$ce$ & 4 & 0.5078 & $ cab $ & 3 & 0.5120 \\ \hline
$cb$ & 5 & 0.4130 & $ cde $ & 3 & 0.6901 \\ \hline
$ad$ & 3 & 0.5222 & $ cdb $ & 4 & 0.5660 \\ \hline
$ae$ & 4 & 0.6090 & $ ceb $ & 4 & 0.7601 \\ \hline
$ab$ & 5 & 0.6205 & $ aeb $ & 4 & 0.8821 \\ \hline
$de$ & 3 & 0.6237 & $ deb $ & 3 & 0.8526 \\ \hline
\end{tabular}
\end{table}
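TABLE \ref{table:patterns} can be reproduced by a naive reference miner that enumerates all itemsets within the length range and applies both thresholds directly. The brute-force sketch below is our own illustration (not the proposed HUOPM$^+$ algorithm); it is exponential in the number of items but convenient for validating results on small data:

```python
from itertools import combinations

# Running example: TABLE I and TABLE II
DB = {1: {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
      2: {'a': 7, 'b': 4, 'c': 1, 'e': 2},
      3: {'a': 5, 'b': 2, 'e': 1},
      4: {'b': 4, 'c': 1, 'd': 2},
      5: {'a': 2, 'd': 4},
      6: {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
      7: {'a': 1, 'b': 2},
      8: {'d': 3},
      9: {'b': 3, 'c': 5, 'd': 2, 'e': 5},
      10: {'b': 3, 'e': 5}}
EU = {'a': 3, 'b': 5, 'c': 1, 'd': 2, 'e': 10}

def mine_flexible_huops(db, eu, alpha, beta, minlen, maxlen):
    """Enumerate every itemset of length minlen..maxlen and keep those
    meeting both the support and the utility-occupancy thresholds."""
    items = sorted({i for t in db.values() for i in t})
    huops = {}
    for k in range(minlen, maxlen + 1):
        for X in combinations(items, k):
            shares = [sum(t[i] * eu[i] for i in X) /
                      sum(q * eu[j] for j, q in t.items())
                      for t in db.values() if set(X) <= set(t)]
            # support check first: it also guards the division below
            if len(shares) >= alpha * len(db) and \
               sum(shares) / len(shares) >= beta:
                huops[''.join(X)] = (len(shares),
                                     round(sum(shares) / len(shares), 4))
    return huops

result = mine_flexible_huops(DB, EU, alpha=0.3, beta=0.3, minlen=1, maxlen=3)
assert len(result) == 18              # exactly the 18 rows of TABLE III
assert result['be'] == (6, 0.7328)    # listed as eb in TABLE III
assert 'ac' not in result             # frequent, but uo(ac) < 0.3
```

Items inside a pattern are kept in alphabetical order here, so, e.g., the pattern listed as $aeb$ in the table appears under the key \texttt{abe}.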
\section{Proposed Flexible HUOPM$^+$ Algorithm}
\label{sec:4}
In this section, some novel definitions based on the two length constraints are first described. Then, according to these definitions and in combination with the utility-list \cite{liu2012mining}, two novel data structures are constructed: a UO-nlist and a FUO-table, which maintain the information of the given database. In addition, to avoid an exhaustive search, several pruning strategies are proposed to further narrow the upper bound of the utility-occupancy on the subtree nodes in a $SC$-tree, the definition of which is introduced in the next subsection. Finally, the designed algorithm is described with the help of pseudocode. The detailed flowchart of the proposed HUOPM$^+$ algorithm is shown in Fig. \ref{fig:flowchart}.
\begin{figure*}[!htbp]
\centering
\includegraphics[scale=0.48]{figs/chart.pdf}
\caption{Flowchart of the proposed HUOPM$^+$ algorithm.}
\label{fig:flowchart}
\end{figure*}
\subsection{Revised Remaining Utility-Occupancy}
As is well known, the HUOPM algorithm \cite{gan2019huopm} adopts a depth-first search strategy. Thus, an intuitive method for controlling the length of the discovered patterns is to output only those patterns whose length is no less than the minimum length and to stop the extension once the number of items in a pattern reaches the established maximum length. Nevertheless, the benefit of this naive approach is limited: it fails to decrease the number of visited nodes within the specified length range because it neither reduces the upper bound of the utility-occupancy of the patterns nor prunes the search space. To handle this drawback, we developed a novel strategy called the LUB, which revises the remaining utility-occupancy for discovering the HUOPs. This approach is detailed below.
When running the program, the items in each transaction must be traversed in a certain order. Without loss of generality, in this study, we take the support-ascending order as the arrangement and denote it as $\prec$. For example, from the database shown in TABLE \ref{table:db1}, we can easily see that $SC(c)$ $<$ $ SC(a)$ $\leq $ $SC(d)$ $\leq$ $SC(e)$ $<$ $SC(b)$. Therefore, the support-ascending order is $ c \prec a \prec d \prec e \prec b $. TABLE \ref{table:db} shows the revised database modified from TABLE \ref{table:db1} according to the order of $\prec$.
\begin{table}[!htbp]
\centering
\small
\caption{Revised transaction database.}
\label{table:db}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{\textit{tid}} & \textbf{Transaction (item, quantity)} & \textbf{\textit{tu}} \\ \hline \hline
$ T_{1} $ & \textit{c}:2, \textit{a}:3, \textit{d}:6, \textit{e}:2, \textit{b}:4 & \$63 \\ \hline
$ T_{2} $ & \textit{c}:1, \textit{a}:7, \textit{e}:2, \textit{b}:4 & \$62 \\ \hline
$ T_{3} $ & \textit{a}:5, \textit{e}:1, \textit{b}:2 & \$35 \\ \hline
$ T_{4} $ & \textit{c}:1, \textit{d}:2, \textit{b}:4 & \$25 \\ \hline
$ T_{5} $ & \textit{a}:2, \textit{d}:4 & \$14 \\ \hline
$ T_{6} $ & \textit{c}:6, \textit{a}:2, \textit{d}:4, \textit{e}:3, \textit{b}:2 & \$60 \\ \hline
$ T_{7} $ & \textit{a}:1, \textit{b}:2 & \$13 \\ \hline
$ T_{8} $ & \textit{d}:3 & \$6 \\ \hline
$ T_{9} $ & \textit{c}:5, \textit{d}:2, \textit{e}:5, \textit{b}:3 & \$74 \\ \hline
$ T_{10} $ & \textit{e}:5, \textit{b}:3 & \$65 \\ \hline
\end{tabular}
\end{table}
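The revision step above can be sketched in a few lines; a minimal Python sketch follows. The tie-breaking rule (alphabetical order among items with equal support) is an assumption of this sketch, which happens to reproduce the order $c \prec a \prec d \prec e \prec b$ of the example.

```python
from collections import Counter

# Transaction database of the running example: tid -> (item, quantity) pairs,
# written here in arbitrary (alphabetical) order before the revision.
db = {
    "T1": [("a", 3), ("b", 4), ("c", 2), ("d", 6), ("e", 2)],
    "T2": [("a", 7), ("b", 4), ("c", 1), ("e", 2)],
    "T3": [("a", 5), ("b", 2), ("e", 1)],
    "T4": [("b", 4), ("c", 1), ("d", 2)],
    "T5": [("a", 2), ("d", 4)],
    "T6": [("a", 2), ("b", 2), ("c", 6), ("d", 4), ("e", 3)],
    "T7": [("a", 1), ("b", 2)],
    "T8": [("d", 3)],
    "T9": [("b", 3), ("c", 5), ("d", 2), ("e", 5)],
    "T10": [("b", 3), ("e", 5)],
}

# Support count of each item: the number of transactions containing it.
sc = Counter(i for trans in db.values() for i, _ in trans)

# Support-ascending order "prec"; ties are broken alphabetically (assumption).
prec = sorted(sc, key=lambda i: (sc[i], i))
rank = {i: r for r, i in enumerate(prec)}

# Revised database: items inside every transaction re-sorted by "prec".
revised = {tid: sorted(trans, key=lambda p: rank[p[0]]) for tid, trans in db.items()}
```

Sorting each transaction once by the precomputed rank keeps the revision to a single pass over the database.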
\begin{definition}
\label{def_11}
\rm According to the order of $\prec$, there may still be some items after pattern $X$ in transaction $T_q$ whose proportion in the transaction is defined as the remaining utility-occupancy (\textit{ruo}) \cite{gan2019huopm}, the formula of which is expressed as follows:
\begin{equation}
ruo(X,T_q) = \dfrac{\sum_{ i_j \notin X \wedge X \subseteq T_q \wedge X \prec i_j }u(i_j,T_q)}{tu(T_q)}.
\end{equation}
Furthermore, the remaining utility-occupancy of $X$ in a database is defined as follows \cite{gan2019huopm}:
\begin{equation}
ruo(X) = \dfrac{\sum_{X \subseteq T_q \wedge T_q \in \mathcal{D}}ruo(X,T_q)}{|\varGamma_X|}.
\end{equation}
\end{definition}
For example, as shown in TABLE \ref{table:db}, $ruo(a, T_1)$ = $(u(d, T_1)$ + $u(e, T_1)$ + $u(b, T_1))$/$tu(T_1)$ $ \approx $ 0.8254. In addition, $ruo(a)$ = $(ruo(a, T_1)$ + $ruo(a, T_2)$ + $ruo(a, T_3)$ + $ruo(a, T_5)$ + $ruo(a, T_6)$ + $ruo(a, T_7))$/6 = (0.8254 + 0.6452 + 0.5714 + 0.5714 + 0.8 + 0.7692)/6 = 0.6971. To facilitate the representation and mining of flexible HUOPs, we denote the required maximum length as \textit{maxlen} and the minimum length as \textit{minlen}.
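The computation of \textit{ruo} can be reproduced with the following Python sketch. The unit utilities (\textit{a} = \$3, \textit{b} = \$5, \textit{c} = \$1, \textit{d} = \$2, \textit{e} = \$10) are reverse-engineered from the example values and are an assumption of this sketch; the actual utility-table accompanies the original database.

```python
# Revised database (TABLE db): tid -> (item, quantity) pairs in the order prec.
DB = {
    "T1": [("c", 2), ("a", 3), ("d", 6), ("e", 2), ("b", 4)],
    "T2": [("c", 1), ("a", 7), ("e", 2), ("b", 4)],
    "T3": [("a", 5), ("e", 1), ("b", 2)],
    "T4": [("c", 1), ("d", 2), ("b", 4)],
    "T5": [("a", 2), ("d", 4)],
    "T6": [("c", 6), ("a", 2), ("d", 4), ("e", 3), ("b", 2)],
    "T7": [("a", 1), ("b", 2)],
    "T8": [("d", 3)],
    "T9": [("c", 5), ("d", 2), ("e", 5), ("b", 3)],
    "T10": [("e", 5), ("b", 3)],
}
UNIT = {"a": 3, "b": 5, "c": 1, "d": 2, "e": 10}  # assumed unit utilities

def tu(tid):
    """Transaction utility: total utility of one transaction."""
    return sum(q * UNIT[i] for i, q in DB[tid])

def ruo(item, tid):
    """Remaining utility-occupancy of a 1-item pattern in one transaction."""
    items = [i for i, _ in DB[tid]]
    after = items[items.index(item) + 1:]          # items behind the pattern
    return sum(q * UNIT[i] for i, q in DB[tid] if i in after) / tu(tid)

# ruo over the whole database: average over the supporting transactions.
tids_a = [t for t in DB if "a" in dict(DB[t])]
ruo_a = sum(ruo("a", t) for t in tids_a) / len(tids_a)
```

With the assumed unit utilities, the sketch reproduces the values of the running example, e.g., $ruo(a, T_1) \approx 0.8254$ and $ruo(a) \approx 0.6971$.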
\begin{definition}
\label{def_14}
\rm \textit{(Largest utility-occupancy in a transaction with respect to a pattern)}. Let a pattern $X$ appear in a transaction $T_q$, and let the maximum length of patterns be set as \textit{maxlen}. Put all items appearing after $X$ into a list in the order of $\prec$, and denote the list as $V(X, T_q)$ = \{$i_1$, $i_2$, $\ldots$, $i_l$\}. Next, calculate the utility-occupancy of these items and express it as $W(X, T_q)$ = \{$uo(i_1, T_q)$, $uo(i_2, T_q)$, $\ldots$, $uo(i_l, T_q)$\}. The maximum number of items that can be appended to $X$ is \textit{maxExtendLen} = \textit{maxlen} - $|X|$, where $|X|$ is the length of $X$. Thus, the largest utility-occupancy in transaction $T_q$ with regard to a pattern $X$ is the collection of the \textit{maxExtendLen} largest values in $W(X, T_q)$. For simplicity, we denote this as $luo(X, T_q)$.
\end{definition}
For example, consider $a$ in $T_6$ in TABLE \ref{table:db}. Let \textit{maxlen} be 3; the utility-occupancy values of all items after $a$ in transaction $T_6$ are \{0.1333, 0.5, 0.1667\}. Thus, \textit{maxExtendLen} = 3 - 1 = 2 and $luo(a, T_6)$ = \{0.5, 0.1667\}.
\begin{definition}
\label{def_15}
\rm \textit{(Revised remaining utility-occupancy)} Suppose there is a pattern $X$ in transaction $T_q$ and the maximum length of patterns is set as \textit{maxlen}. To reduce the upper bound on the utility-occupancy of $X$ in $T_q$ with a length constraint, we define the revised remaining utility-occupancy as $\textit{rruo}(X, T_q)$ = $\sum{luo(X, T_q)}$. Furthermore, the revised remaining utility-occupancy of a pattern $X$ in a transaction database $\mathcal{D}$ is calculated as $\textit{rruo}(X)$ = $\dfrac{\sum_{X \in T_q, T_q \in \mathcal{D}}{\textit{rruo}(X, T_q)}}{|\varGamma_X|}$.
\end{definition}
Similar to the above example, consider the pattern $a$ in transaction $T_6$. The total remaining utility-occupancy before the revision is 0.8. After the optimization, however, the value is 0.6667, which is less than the original result. Moreover, over the entire database, $\textit{rruo}(a)$ = 0.64315, which is much less than the former result of 0.6971.
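Definitions \ref{def_14} and \ref{def_15} can be sketched directly in Python; selecting the \textit{maxExtendLen} largest values is a top-$k$ selection. The unit utilities below are again reverse-engineered from the running example and are an assumption of this sketch.

```python
import heapq

# Revised database (TABLE db) and assumed unit utilities.
DB = {
    "T1": [("c", 2), ("a", 3), ("d", 6), ("e", 2), ("b", 4)],
    "T2": [("c", 1), ("a", 7), ("e", 2), ("b", 4)],
    "T3": [("a", 5), ("e", 1), ("b", 2)],
    "T4": [("c", 1), ("d", 2), ("b", 4)],
    "T5": [("a", 2), ("d", 4)],
    "T6": [("c", 6), ("a", 2), ("d", 4), ("e", 3), ("b", 2)],
    "T7": [("a", 1), ("b", 2)],
    "T8": [("d", 3)],
    "T9": [("c", 5), ("d", 2), ("e", 5), ("b", 3)],
    "T10": [("e", 5), ("b", 3)],
}
UNIT = {"a": 3, "b": 5, "c": 1, "d": 2, "e": 10}

def occupancies(tid):
    """(item, uo) pairs of one transaction, in the revised order."""
    total = sum(q * UNIT[i] for i, q in DB[tid])
    return [(i, q * UNIT[i] / total) for i, q in DB[tid]]

def luo(pattern, tid, maxlen):
    """The maxExtendLen largest utility-occupancies behind the pattern."""
    occ = occupancies(tid)
    last = [i for i, _ in occ].index(pattern[-1])
    return heapq.nlargest(maxlen - len(pattern), (v for _, v in occ[last + 1:]))

def rruo(pattern, maxlen):
    """Revised remaining utility-occupancy, averaged over supporting tids."""
    tids = [t for t in DB if set(pattern) <= {i for i, _ in DB[t]}]
    return sum(sum(luo(pattern, t, maxlen)) for t in tids) / len(tids)
```

For \textit{maxlen} = 3, the sketch yields $luo(a, T_6) \approx \{0.5, 0.1667\}$ and $rruo(a) \approx 0.6431$, matching the values of the example.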
\begin{property}
\label{pro_1}
\rm The upper bound of the revised remaining utility-occupancy of a pattern $X$ must be tighter than that of the remaining utility-occupancy.
\end{property}
\begin{proof}
No matter which transaction $T_q$ pattern $X$ exists in, it is simple to obtain $\textit{rruo}(X, T_q)$ $\leq$ $\textit{ruo}(X, T_q)$. Hence, we can conclude that $\textit{rruo}(X)$ $ \leq $ $\textit{ruo}(X)$, the detailed proof of which is shown below:
\begin{tabbing}
$rruo(X)$ \=
= $\dfrac{\sum_{X \in T_q, T_q \in \mathcal{D}}{\textit{rruo}(X, T_q)}}{|\varGamma_X|}$ \\
\>$ \leq $ $\dfrac{\sum_{X \in T_q, T_q \in \mathcal{D}}{\textit{ruo}(X, T_q)}}{|\varGamma_X|}$ \\
\> = $ruo(X)$.
\end{tabbing}
\end{proof}
We have found that the value of $\textit{rruo}(X)$ is smaller than that of $\textit{ruo}(X)$, and the next step is to determine how this property can be applied to find a tighter upper bound for the extension of pattern $X$. Before that, we should first build a data structure to store necessary information in the database.
\subsection{Revised List Structure for Storing Information}
In the previous sections, we introduced some basic concepts of the utility-occupancy and the revised remaining utility-occupancy. In this subsection, two compact data structures, called the UO-nlist and the FUO-table, are designed to maintain the essential information and avoid scanning the database multiple times. Here, the "nlist" stands for nested list, that is, a list whose entries themselves contain a list. The specific details are described below.
\begin{definition}
\label{def_16}
\rm (\textit{UO-nlist}) The UO-nlist related to a pattern $X$ is composed of several tuples, and one tuple corresponds to one transaction where $X$ occurs. Let $X$ be a pattern appearing in transaction $T_q$. We then define a tuple as consisting of three elements, namely, a transaction identifier $\textit{tid}$, the utility-occupancy of $X$ in $T_q$ (abbreviated as $\textit{uo}(X, T_q)$), and the largest utility-occupancy $\textit{luo}(X, T_q)$. As defined in Definition \ref{def_14}, $\textit{luo}(X, T_q)$ is a list recording the largest utility-occupancy values of \textit{maxlen} - $|X|$ items after $X$ in $T_q$. Therefore, a tuple of the UO-nlist can be written as ($\textit{tid}$, $uo(X, T_q)$, $luo(X, T_q)$).
\end{definition}
For example, consider $ca$ in $T_6$, as shown in TABLE \ref{table:db}, and let \textit{maxlen} be 3. We obtain $uo(ca, T_6)$ = (\$6 + \$6) / \$60 = 0.2 and \textit{maxlen} - $|ca|$ = 3 - 2 = 1. Next, $luo(ca, T_6)$ = $\{uo(e, T_6)\}$ = $\{0.5\}$. Thus, the tuple of $ca$ for $T_6$ in its UO-nlist is ($T_6$, 0.2, \{0.5\}). After scanning the database once, the UO-nlist of each item is constructed. For more details, refer to Fig. \ref{fig:UO-nlist}, in which the items are listed in the support-ascending order.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.55]{figs/uonlist.pdf}
\caption{Constructed UO-nlists of each item in Table \ref{table:db}.}
\label{fig:UO-nlist}
\end{figure}
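The one-scan construction of the initial UO-nlists can be sketched as follows; the unit utilities are the assumed values reverse-engineered from the running example, and \textit{maxlen} = 3 as before.

```python
# One database scan builds the UO-nlist of every item: lists of (tid, uo, luo).
DB = {
    "T1": [("c", 2), ("a", 3), ("d", 6), ("e", 2), ("b", 4)],
    "T2": [("c", 1), ("a", 7), ("e", 2), ("b", 4)],
    "T3": [("a", 5), ("e", 1), ("b", 2)],
    "T4": [("c", 1), ("d", 2), ("b", 4)],
    "T5": [("a", 2), ("d", 4)],
    "T6": [("c", 6), ("a", 2), ("d", 4), ("e", 3), ("b", 2)],
    "T7": [("a", 1), ("b", 2)],
    "T8": [("d", 3)],
    "T9": [("c", 5), ("d", 2), ("e", 5), ("b", 3)],
    "T10": [("e", 5), ("b", 3)],
}
UNIT = {"a": 3, "b": 5, "c": 1, "d": 2, "e": 10}  # assumed unit utilities

def build_uo_nlists(db, unit, maxlen):
    nlists = {}
    for tid, trans in db.items():
        total = sum(q * unit[i] for i, q in trans)            # tu(T_q)
        occ = [(i, q * unit[i] / total) for i, q in trans]    # uo of each item
        for k, (item, v) in enumerate(occ):
            # luo: the (maxlen - 1) largest occupancies behind the 1-item.
            tail = sorted((w for _, w in occ[k + 1:]), reverse=True)[:maxlen - 1]
            nlists.setdefault(item, []).append((tid, v, tail))
    return nlists

nlists = build_uo_nlists(DB, UNIT, maxlen=3)
```

The entries produced for item ($c$) match those of Fig. \ref{fig:UO-nlist}, e.g., ($T_1$, 0.0317, \{0.3175, 0.3175\}).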
Although the UO-nlist already contains all of the information necessary for mining flexible HUOPs, it is troublesome to recalculate its elements whenever the support, the utility-occupancy, or the revised remaining utility-occupancy of a pattern is needed. In this case, the execution time and the overall performance of the algorithm may be compromised. To remedy this problem, we further designed a data structure called the FUO-table, which is defined as follows:
\begin{definition}
\label{def_17}
\rm \textit{(Frequency-utility-occupancy table, FUO-table)} The FUO-table of a pattern $X$ consists of three elements, which are extracted from the corresponding UO-nlist. Among them, the support \textit{sup} is related to the number of tuples, $uo$ is the average utility-occupancy of $X$ on the UO-nlist, and \textit{rruo} is equal to the average of all values \textit{luo} defined in Definition \ref{def_15}.
\end{definition}
To understand the concept of the FUO-table, take the construction of the FUO-table of $c$ as an example, as displayed in Fig. \ref{fig:FWTableOfC}. Because $c$ in Fig. \ref{fig:UO-nlist} appears in five transactions, \textit{sup} is equal to 5. The $uo$ of $c$ in the FUO-table is (0.0317 + 0.0161 + 0.04 + 0.1 + 0.0676)/5 = 0.05108, and \textit{rruo} is calculated as (0.3175 + 0.3175 + 0.3387 + 0.3226 + 0.8 + 0.16 + 0.5 + 0.1667 + 0.6757 + 0.2027)/5 = 0.76028. In addition, the FUO-tables of all items are shown in Fig. \ref{fig:FUO-table}.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.55]{figs/uolistofitemc.pdf}
\caption{UO-nlist and FUO-table of item (c).}
\label{fig:FWTableOfC}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.64]{figs/fuotable.pdf}
\caption{Constructed FUO-tables of all items.}
\label{fig:FUO-table}
\end{figure}
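Deriving a FUO-table from a built UO-nlist is a single pass over its tuples. The sketch below uses the UO-nlist of item ($c$) from Fig. \ref{fig:UO-nlist} as literal input:

```python
# FUO-table derivation: one pass over the UO-nlist tuples (tid, uo, luo).
def fuo_table(uo_nlist):
    sup = len(uo_nlist)                                    # one tuple per tid
    avg_uo = sum(uo for _, uo, _ in uo_nlist) / sup        # average uo
    avg_rruo = sum(sum(luo) for _, _, luo in uo_nlist) / sup
    return {"sup": sup, "uo": avg_uo, "rruo": avg_rruo}

# UO-nlist of item (c), with the values shown in Fig. uonlist.
uo_nlist_c = [("T1", 0.0317, [0.3175, 0.3175]), ("T2", 0.0161, [0.3387, 0.3226]),
              ("T4", 0.0400, [0.8000, 0.1600]), ("T6", 0.1000, [0.5000, 0.1667]),
              ("T9", 0.0676, [0.6757, 0.2027])]

table_c = fuo_table(uo_nlist_c)
```

The result reproduces the FUO-table of $c$: \textit{sup} = 5, $uo$ = 0.05108, and \textit{rruo} = 0.76028.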
Because promising or potential HUOPs are bound to be frequent patterns, the initial UO-nlists and FUO-tables of the frequent 1-itemsets are constructed after scanning the database twice. To construct the storage containers of $k$-itemsets ($k > 1$), it is recommended to extend the already established subsets instead of scanning the database multiple times in a similar manner. Algorithm 1 describes how to use subsets with the same prefix to calculate their extensions. To obtain the pattern $X_{ab}$, $X$ and its two extensions $X_a$ and $X_b$ ($a \prec b$) are required in advance. For simplicity, we denote the UO-nlist of $ X_{ab}$ as \textit{X$_{ab}$.UONL} and the FUO-table of $ X_{ab} $ as \textit{X$_{ab}$.FUOT}. Lines 2 and 3 and lines 18–22 of the algorithm check whether $X_a$ and $X_b$ appear in an entry. At the same time, each time an entry fails this check, the support upper bound of $X_a$ is reduced by 1 until the value is less than $\alpha \times |\mathcal{D}|$. The reason is that when the support of the extensions of a pattern cannot satisfy the minimum support threshold, they cannot be HUOPs; frequency is a necessary but insufficient condition. Lines 4–17 illustrate the generation of $X_{ab}$ when the construction conditions are met. If $X$ is an empty set, the utility-occupancy of $X_{ab}$ is the sum of those of $X_a$ and $X_b$. Otherwise, the result equals the sum of the utility-occupancy of $X_a$ and $X_b$ minus the utility-occupancy of $X$, because the latter is counted twice in this calculation. Moreover, the \textit{luo} of $X_{ab}$ is taken from $X_b$ owing to its later position in the order. In addition, the \textit{rruo} of $X_{ab}$ is the sum of the values in each $luo$ divided by the support of $X_{ab}$. Thus far, the approach of building $(k+1)$-itemsets from $k$-itemsets has been realized.
\begin{algorithm}
\label{Construction}
\caption{Construction($ X $, $ X_{a} $, $ X_{b} $)}
\begin{algorithmic}[1]
\REQUIRE $X$, a pattern with its corresponding \textit{UO-nlist} and \textit{FUO-table}; $ X_{a} $, an extension of $X$ with an item $a$; $ X_{b} $, an extension of $X$ with an item $b$.
\ENSURE $ X_{ab} $.
\STATE set \textit{X$_{ab}$.UONL} $\leftarrow$ $\emptyset $, $ X_{ab}.\textit{FUOT}$ $\leftarrow$ $\emptyset $;
\STATE set \textit{supUB} = \textit{X$_{a}$.FUOT.sup};
\FOR {each tuple $ E_{a}$ $\in$ \textit{X$_{a}$.UONL} }
\IF {$ \exists E_{b}$ $\in X_{b}.\textit{UONL}$ $\wedge$ $E_{a}.tid$ == $E_{b}.tid $}
\IF{\textit{X.UONL} $\neq$ $\emptyset $}
\STATE search for $ E \in$ \textit{X.UONL}, $E.tid$ = $E_{a}.tid $;
\STATE $E_{ab}$ $\leftarrow$ $<$$E_{a}.tid$, $E_{a}.uo$ + $E_{b}.uo$ - $E.uo$, \textit{E$_{b}$.luo}$>$;
\STATE \textit{$X_{ab}$.FUOT.uo} += $E_{a}.uo$ + $E_{b}.uo$ - $E.uo $;
\ELSE
\STATE $E_{ab}$ $\leftarrow$ $<$$E_{a}.tid$, $E_{a}.uo$ + $E_{b}.uo$, \textit{$E_{b}$.luo}$>$;
\STATE \textit{$X_{ab}$.FUOT.uo} += $ E_{a}.uo$ + $E_{b}.uo $;
\ENDIF
\FOR {each \textit{value} $\in$ \textit{E$_{ab}$.luo} }
\STATE \textit{$ X_{ab}$.FUOT.rruo} += \textit{value};
\ENDFOR
\STATE \textit{$X_{ab}$.UONL} $\leftarrow$ $X_{ab}.\textit{UONL}$ $\cup$ $E_{ab} $;
\STATE \textit{X$_{ab}$.FUOT.sup} ++;
\ELSE
\STATE \textit{supUB} - -;
\IF{\textit{supUB} $< \alpha \times |\mathcal{D}| $}
\STATE \textbf{return} \textit{null};
\ENDIF
\ENDIF
\ENDFOR
\STATE $ X_{ab}.\textit{FUOT.uo}$ = $\dfrac{X_{ab}.\textit{FUOT.uo}}{X_{ab}.\textit{FUOT.sup}}$;
\STATE $X_{ab}.\textit{FUOT.rruo}$ = $\dfrac{X_{ab}.\textit{FUOT.rruo}}{X_{ab}.FUOT.sup}$;
\STATE \textbf{return} $ X_{ab} $
\end{algorithmic}
\end{algorithm}
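A minimal Python sketch of the join performed by the construction procedure is given below; entries are (\textit{tid}, \textit{uo}, \textit{luo}) tuples. One reading choice is an assumption of this sketch: the \textit{luo} inherited from $X_b$ is trimmed to its top \textit{maxlen} $- |X_{ab}|$ values, so that the stored \textit{luo} stays consistent with Definition \ref{def_14}.

```python
# Sketch of the Construction join (Algorithm 1). x_nl is the UO-nlist of the
# prefix X (empty list when X is the empty set).
def construction(x_nl, xa_nl, xb_nl, max_left, min_sup_count):
    x_uo = {tid: v for tid, v, _ in x_nl}             # uo of the prefix per tid
    xb = {tid: (v, luo) for tid, v, luo in xb_nl}
    sup_ub = len(xa_nl)                                # countdown of Strategy 3
    xab_nl = []
    for tid, va, _ in xa_nl:
        if tid in xb:                                  # X_a and X_b co-occur
            vb, luo_b = xb[tid]
            uo_ab = va + vb - x_uo.get(tid, 0.0)       # uo(X) counted twice
            xab_nl.append((tid, uo_ab, sorted(luo_b, reverse=True)[:max_left]))
        else:
            sup_ub -= 1                                # one fewer possible match
            if sup_ub < min_sup_count:
                return None                            # cannot be frequent
    return xab_nl

# UO-nlists of (c) and (a) from the running example (values as in Fig. uonlist).
c_nl = [("T1", 0.0317, [0.3175, 0.3175]), ("T2", 0.0161, [0.3387, 0.3226]),
        ("T4", 0.0400, [0.8000, 0.1600]), ("T6", 0.1000, [0.5000, 0.1667]),
        ("T9", 0.0676, [0.6757, 0.2027])]
a_nl = [("T1", 0.1429, [0.3175, 0.3175]), ("T2", 0.3387, [0.3226, 0.3226]),
        ("T3", 0.4286, [0.2857, 0.2857]), ("T5", 0.4286, [0.5714]),
        ("T6", 0.1000, [0.5000, 0.1667]), ("T7", 0.2308, [0.7692])]

# Join (c) and (a) into (ca): maxlen = 3, so max_left = 3 - 2 = 1.
ca_nl = construction([], c_nl, a_nl, max_left=1, min_sup_count=3)
```

With $\alpha \times |\mathcal{D}|$ = 3 the join succeeds on \{$T_1$, $T_2$, $T_6$\} and reproduces the tuple ($T_6$, 0.2, \{0.5\}) of the example; raising the count to 4 aborts the construction early.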
\subsection{Length-based Upper-Bound on Utility-Occupancy}
To the best of our knowledge, utility and utility-occupancy do not have a downward closure property such as that of the frequency, which poses a significant challenge with regard to improving the performance of the algorithm and pruning the search space. Although Gan \textit{et al.} \cite{gan2019huopm} previously designed an upper bound on the utility-occupancy in the HUOPM algorithm, this upper bound is relatively loose for patterns with a length constraint. Thus, to achieve a tighter upper bound, we propose the LUB strategy.
\begin{definition}
\label{def_18}
\rm \textit{(support count tree, $SC$-tree)}
In this study, the support ascending order on items is applied to the complete algorithm and marked as $\prec$. For convenience, a set-enumeration tree called a $SC$-tree with an order of $\prec$ is built to simulate the depth-first search path.
\end{definition}
For a clearer description, refer to \cite{gan2019huopm}. For example, the extension nodes of $(ea)$ consist of (\textit{eab}), (\textit{ead}), and (\textit{eac}). If no action is taken, all possible nodes in the $SC$-tree are visited and their relative UO-nlists and FUO-tables are constructed, resulting in massive space consumption. Suppose a pattern $X$ exists in transaction $T_q$, namely, $X \subseteq T_q$. After the database is revised in the order of $\prec$, all items that appear after $X$ in the transaction are written as $T_q/X$. Inspired by the HUOPM algorithm \cite{gan2019huopm}, we propose the following lemmas.
\begin{lemma}
\label{lemma_1}
\rm Let $Y$ be an extension node of $X$. Here, $ \varGamma_X $ is the collection of transactions containing $X$, and $|\mathcal{D}|$ is the size of the given database. In addition, \textit{top} and $\downarrow$ mean that the sums of $uo$ and \textit{rruo} in the tuples are sorted in descending order, and the top $\alpha$ $\times$ $|\mathcal{D}|$ of them are taken as the valid values of the numerator. The upper bound on the utility-occupancy of $Y$ can be calculated as follows:
\begin{equation}
\hat{\phi}(Y) = \dfrac{\sum_{top \alpha \times |\mathcal{D}|, T_q \in \varGamma_X}\{uo(X,T_q) + rruo(X,T_q)\}^{\downarrow}}{|\alpha \times |\mathcal{D}||}.
\end{equation}
\end{lemma}
\begin{proof}
\label{pro_2}
\rm Because $Y$ is obtained by appending items behind $X$, the relation $Y$/$X$ $\subseteq$ $T_q$/$X$ holds. We then obtain $ \sum{uo(Y / X, T_q)}$ $\leq$ $\sum{uo(T_q / X, T_q)} $. In addition, because the length of the required HUOPs cannot exceed \textit{maxlen}, the largest utility-occupancy and the revised remaining utility-occupancy in a transaction are applied to obtain a tighter upper bound. For an easier derivation, we write $Y \subseteq T_q \wedge T_q \in \mathcal{D}$ as $Z$ and \textit{maxlen} as $M$.
\begin{tabbing}
$ uo(Y)$ \=
$= \dfrac{\sum_{Z \wedge |Y| \leq M}uo(Y,T_q)}{|\varGamma_Y|} $\\
\>$ = \dfrac{\sum_{Z\wedge |Y/X| \leq M - |X|}(uo(X,T_q) + uo(Y/X,T_q))}{|\varGamma_Y|} $\\
\>$ \leq \dfrac{\sum_{Z \wedge |T_q/X| \leq M - |X|}(uo(X,T_q) + uo(T_q/X,T_q))}{|\varGamma_Y|} $\\
\>$ \leq \dfrac{\sum_{Z}\{uo(X,T_q) + \sum_{v \in luo(X, T_q)}(v)\}}{|\varGamma_Y|} $\\
\>$ = \dfrac{\sum_{Z}\{uo(X,T_q) + rruo(X, T_q)\}}{|\varGamma_Y|} $\\
$\Longrightarrow uo(Y) \leq \dfrac{\sum_{Z}\{uo(X,T_q) + rruo(X, T_q)\}}{|\varGamma_Y|} $.
\end{tabbing}
In the most critical step of the above derivation, $uo(T_q/X$, $T_q)$ restricted to at most \textit{maxlen} - $|X|$ items is no greater than the sum of all values in $luo(X$, $T_q)$, because the \textit{maxlen} - $|X|$ values in $luo(X$, $T_q)$ are the largest utility-occupancy values among the items behind $X$.
We place all supporting transactions of $Y$ extending $X$ into $\varGamma_Y$; however, the value of $|\varGamma_Y|$ cannot be obtained until the entire UO-nlist of $Y$ has been constructed. Therefore, it is contradictory to use the above formula directly as the upper bound of the utility-occupancy of $Y$: our purpose is to minimize the construction of the list structures, whereas this formula requires a completely constructed UO-nlist. In addition, when judging a high utility-occupancy pattern, we must first determine whether it is a frequent pattern and, if so, continue to judge whether its utility-occupancy meets the requirement; hence, the inequalities $\alpha \times |\mathcal{D}| \leq SC(Y) \leq SC(X)$ hold. Therefore, it is appropriate to replace the sum of $uo$ and \textit{rruo} over all supporting transactions with the sum over the top $\alpha \times |\mathcal{D}|$ supporting transactions. The above formula can thus be further transformed into a tighter upper bound as follows.
\begin{tabbing}
$uo(Y) \leq \dfrac{\sum_{Z}\{uo(X,T_q) + \textit{rruo}(X, T_q)\}}{|\varGamma_Y|} $.\\
$ uo(Y) \leq \dfrac{\sum_{top \alpha \times |\mathcal{D}|, T_q \in \varGamma_X}\{uo(X,T_q) + \textit{rruo}(X,T_q)\}^{\downarrow}}{|\alpha \times |\mathcal{D}||} $\\
$\Longrightarrow uo(Y) \leq \hat{\phi}(Y) $.
\end{tabbing}
\end{proof}
Thus, given a pattern $X$, we can calculate the upper bound $\hat{\phi}(Y)$ on the utility-occupancy, under the length constraints, of all the nodes $Y$ in the subtree rooted at $X$.
\subsection{Pruning Strategies and Proposed Algorithm }
This section states the pruning strategies designed to prune the search space and improve the performance of the algorithm. Moreover, the proposed algorithm is also described in detail.
\begin{strategy}
\label{stra_1}
If the support count of a pattern $X$ in the designed $SC$-tree is no less than the minimum support threshold $\alpha$ multiplied by the size of the database $|\mathcal{D}|$, then this node is a frequent pattern; otherwise, this node and the subtree rooted at it can be directly pruned.
\end{strategy}
\begin{proof}
The above strategy originates from the Apriori algorithm \cite{agrawal1994fast}, whose principle can be expressed as $SC(X_{k+1}) \leq SC(X_{k})$. Provided that $SC(X_{k})$ $<$ $\alpha$ $\times$ $|\mathcal{D}|$, we then have $SC(X_{k+1})$ $<$ $\alpha$ $\times$ $|\mathcal{D}|$. Thus, node $X_{k}$ and the subtree rooted at this node can be trimmed immediately.
\end{proof}
\begin{strategy}
\label{stra_2}
In the $SC$-tree, the upper bound for the child nodes $Y$ of a node $X$ can be calculated immediately once the UO-nlist and FUO-table corresponding to $X$ have been constructed. If the derived upper bound is less than the predefined minimum utility-occupancy threshold $\beta$, then all nodes in the subtree rooted at $X$ can be pruned.
\end{strategy}
\begin{proof}
We concluded in Lemma \ref{lemma_1} that the upper bound $\hat{\phi}(Y)$ is certainly no less than the real utility-occupancy of $Y$. Thus, if the upper bound is less than the minimum utility-occupancy threshold $\beta$, then $Y$ cannot be a HUOP.
\end{proof}
\begin{strategy}
\label{stra_3}
Similar to the downward closure property used in Strategy \ref{stra_1}, during the construction procedure, each time a tuple of $X_a$ fails to match an entry of $X_b$, the support upper bound of the extension is decreased by one; once it falls below $\alpha \times |\mathcal{D}|$, the construction is terminated.
\end{strategy}
\begin{proof}
The proof is the same as that of Strategy \ref{stra_1}; this strategy only further restricts the support during construction.
\end{proof}
\begin{algorithm}
\label{HUOPM$^+$-algorithm}
\caption{HUOPM$^{+}$($\mathcal{D}$, \textit{ptable}, $\alpha$, $\beta$, \textit{minlen}, \textit{maxlen})}
\begin{algorithmic}[1]
\REQUIRE a transaction database $\mathcal{D}$, utility table \textit{ptable}, the minimum support threshold $ \alpha $, the minimum utility-occupancy threshold $\beta$, the minimum length \textit{minlen}, and the maximum length \textit{maxlen}.
\ENSURE \textit{HUOPs}.
\STATE scan $\mathcal{D}$ to calculate the $SC(i)$ of each item $ i \in I $ and the $tu$ value of each transaction;
\STATE find $ I^* \gets \left\{ i \in I | SC(i) \geq \alpha \times |\mathcal{D}| \right\} $, i.e., the set of frequent 1-patterns $ FP^1 $;
\STATE sort $ I^* $ in the designed total order $ \prec $;
\STATE using the total order $ \prec $, scan $\mathcal{D}$ once to build the UO-nlist and FUO-table for each 1-item $ i \in I^*$;
\IF{\textit{maxlen} $\geq$ 1}
\STATE \textbf{call \textit{HUOP$^+$-Search}}($\phi$, $I^*$, $\alpha$, $\beta$, \textit{minlen}, \textit{maxlen}).
\ENDIF
\STATE \textbf{return} \textit{HUOPs}
\end{algorithmic}
\end{algorithm}
We introduced three feasible pruning strategies above. Next, the overall description concerning the proposed algorithm is presented as follows.
\begin{algorithm}
\label{HUOP$^+$-Search procedure}
\caption{HUOP$^+$-Search($X$, \textit{extenOfX}, $\alpha$, $\beta$, \textit{minlen}, \textit{maxlen})}
\begin{algorithmic}[1]
\FOR {each itemset $ X_{a}\in $ \textit{extenOfX}}
\STATE obtain $ SC(X_a) $ and $ uo(X_a) $ from the $ X_{a}.\textit{FUOT} $;
\IF{$ SC(X_a)$ $\geq \alpha$ $ \times |\mathcal{D}| $}
\IF{$ uo(X_a)$ $\geq \beta \wedge$ $|X_a|\geq minlen$ }
\STATE \textit{HUOPs} $\leftarrow$ \textit{HUOPs} $\cup$ $X_{a} $;
\ENDIF
\STATE $ \hat{\phi}(X_a) \leftarrow $ \textbf{\textit{LengthUpperBound}}($X_a.UONL, \alpha) $;
\IF{$ \hat{\phi}(X_a) \geq \beta $}
\STATE \textit{extenOfX}$_{a}\leftarrow \emptyset $;
\FOR {each $ X_{b}$ $\in$ \textit{extenOfX} that $ X_{a} $ $ \prec $ $ X_{b} $}
\STATE $ X_{ab}$ $\leftarrow$ $X_{a} \cup X_{b} $;
\STATE call \textbf{\textit{Construction}}($X, X_{a}, X_{b}) $;
\IF{$ X_{ab}.\textit{UONL}$ $\not= $ $\emptyset$ $\wedge SC(X_{ab})$ $\geq \alpha$ $\times |\mathcal{D}| $}
\STATE \textit{extenOfX}$_{a} \leftarrow$ $\textit{extenOfX}_{a} \cup X_{ab} $;
\ENDIF
\ENDFOR
\IF {$|X| + 2 \leq maxlen$}
\STATE \textbf{call \textit{HUOP$^+$-Search}}($X_{a}$, \textit{extenOfX}$_{a}$, $\alpha$, $\beta$, \textit{minlen}, \textit{maxlen});
\ENDIF
\ENDIF
\ENDIF
\ENDFOR
\STATE \textbf{return} \textit{HUOPs}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\label{LengthUpperBound procedure}
\caption{LengthUpperBound(\textit{X$_q$.UONL}, $ \alpha $)}
\begin{algorithmic}[1]
\STATE \textit{sumTopK} $\leftarrow 0$, $\hat{\phi}(X_a)$ $\leftarrow 0$, $V_{occu}$ $\leftarrow$ $\emptyset $;
\STATE calculate $ (uo(X,T_q)$ + \textit{rruo}($X,T_q$)) of each tuple from the built \textit{X$_{a}$.UONL} and put them into the set of $ V_{occu} $;
\STATE sort $ V_{occu} $ by descending order as $ V_{occu}^{\downarrow} $;
\FOR {$ k \leftarrow $ 1 to $\alpha \times |\mathcal{D}| $ in $ V_{occu}^{\downarrow} $}
\STATE \textit{sumTopK} $ \leftarrow$ \textit{sumTopK} + $V_{occu}^{\downarrow}[k] $;
\ENDFOR
\STATE $ \hat{\phi}(X_a)$ = $\dfrac{\textit{sumTopK}}{\alpha \times |\mathcal{D}|} $.
\STATE \textbf{return} $ \hat{\phi}(X_a) $
\end{algorithmic}
\end{algorithm}
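The upper-bound computation of Algorithm 4 can be sketched as follows; the UO-nlist of item ($c$) from the running example is used as input, with $\alpha$ = 0.3 and $|\mathcal{D}|$ = 10. Rounding $\alpha \times |\mathcal{D}|$ up to an integer count is an assumption of this sketch.

```python
import math

# Sketch of LengthUpperBound (Algorithm 4): compute phi-hat from a UO-nlist.
def length_upper_bound(uo_nlist, alpha, db_size):
    k = math.ceil(alpha * db_size)          # number of top tuples kept (assumed)
    # Sum of uo and rruo per tuple, sorted in descending order.
    vals = sorted((uo + sum(luo) for _, uo, luo in uo_nlist), reverse=True)
    return sum(vals[:k]) / k                # average of the top-k values

# UO-nlist of item (c), with the values shown in Fig. uonlist.
uo_nlist_c = [("T1", 0.0317, [0.3175, 0.3175]), ("T2", 0.0161, [0.3387, 0.3226]),
              ("T4", 0.0400, [0.8000, 0.1600]), ("T6", 0.1000, [0.5000, 0.1667]),
              ("T9", 0.0676, [0.6757, 0.2027])]

bound = length_upper_bound(uo_nlist_c, alpha=0.3, db_size=10)
```

Here the top-3 values of $uo + \textit{rruo}$ are 1.0 ($T_4$), 0.946 ($T_9$), and 0.7667 ($T_6$), so $\hat{\phi} \approx 0.9042$; any extension of ($c$) with utility-occupancy above $\beta$ can only exist when $\beta$ does not exceed this bound.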
The main objective of the HUOPM$^+$ algorithm is to mine flexible high utility-occupancy patterns whose length is within a certain range. The pseudocode of the entire algorithm is shown in Algorithm 2. First, six parameters need to be provided as the input of the algorithm, i.e., a transaction database $\mathcal{D}$, a profit table abbreviated as \textit{ptable}, the minimum support threshold $\alpha$ (0 $< \alpha \leq 1)$, the minimum utility-occupancy threshold $\beta$ (0 $< \beta \leq 1)$, the minimum length of the patterns \textit{minlen}, and the maximum length of the patterns \textit{maxlen} $(0 \leq $ \textit{minlen} $\leq $ \textit{maxlen}). The database is scanned for the first time to calculate the support of each item and the utility of each transaction. Then, the items in the transactions are rearranged in the support-ascending order ($\prec$), which yields the revised database. Next, the revised database is rescanned to establish the initial UO-nlists and FUO-tables, which are the basis of the following search process. Finally, if the required maximum length \textit{maxlen} is at least 1, the search procedure is executed.
To allow the search process in Algorithm 3 to proceed, we input a prefix pattern $X$; a collection \textit{extenOfX} of extensions of $X$, which initially consists of the distinct frequent items; $\alpha$; $\beta$; \textit{minlen}; and \textit{maxlen}. For a pattern $X_a$ in \textit{extenOfX} to be output as a HUOP, it must satisfy three conditions: the support count of $X_a$ must be no less than $\alpha \times |\mathcal{D}|$, the utility-occupancy of $X_a$ must be no less than $\beta$, and the length of $X_a$ must be greater than or equal to \textit{minlen}. If the support of $X_a$ does not meet the restriction, it is skipped directly, and the next pattern in the sequence is checked. Next, the upper bound in Algorithm 4 is calculated to determine whether the extensions of $X_a$ should be explored in a later calculation. If the upper bound $ \hat{\phi}(X_a) $ is no less than $\beta$, we establish the extensions of $X_a$ and their relative UO-nlists and FUO-tables through the construction procedure described in Algorithm 1. For example, $X_{ab}$ is developed by merging $X_a$ with $X_b$ using the specific process described above. Next, if the support of $X_{ab}$ meets the given requirement, it is placed in \textit{extenOfX}$_a$ for the next round. Finally, when the length of the extensions is less than \textit{maxlen}, the mining procedure recurses; otherwise, it returns.
\section{Experiments}
\label{sec:experiments}
To evaluate the performance of the proposed HUOPM$^+$ algorithm, this section describes experiments comparing it with the state-of-the-art utility-occupancy-based HUOPM algorithm.
\textit{Experimental environment}. First, the parameters of the computer used are introduced to help readers reproduce the experiments. The computer runs Windows 10 with 7.88 GB of free RAM. We compared the proposed HUOPM$^+$ algorithm with the state-of-the-art HUOPM algorithm for discovering the HUOPs. Both algorithms are implemented in Java.
\textit{Parameter settings}. The main innovation of the algorithm is controlling the length of the output patterns; thus, to test this advantage, we varied the maximum length of the patterns from one to five, whereas HUOPM outputs all desired patterns without any length constraint. The minimum length is not a variable and is set to the constant 1 because, even if the minimum length is greater than 1 while the maximum length is unchanged, the efficiency and performance of the algorithm are basically unchanged; setting the minimum length only ensures that the proposed algorithm outputs patterns no shorter than it. The HUOPM$^+$ algorithm mainly involves two parameters, support and utility-occupancy. The following experiments analyze the runtime, the visited nodes, and the found patterns under different maximum lengths. For simplicity, we denote the minimum support threshold as \textit{minsup} and the minimum utility-occupancy threshold as \textit{minuo}. In the legends, the HUOPM$^+$ algorithm with the maximum length \textit{maxlen} is recorded as HUOPM$^{+}$\textit{maxlen}. For example, HUOPM$^{+}$5 implies that the maximum length of the discovered patterns is 5.
\subsection{Tested Datasets}
Four standard datasets are applied to test the efficiency of the compared algorithms in terms of runtime, visited nodes (i.e., memory consumption), and patterns found. Three real-life datasets and one synthetic dataset were used, namely, retail, mushroom, kosarak, and T40I10D100K. It is worth noting that these datasets have different characteristics, so that all aspects of the two algorithms (HUOPM and HUOPM$^{+}$) can be compared. We adopted a simulation method widely used in previous studies \cite{gan2019huopm,liu2012mining} to generate the quantity and unit utility information for each item in each dataset. Next, we introduce these datasets in detail.
\begin{itemize}
\item \textbf{retail\footnotemark[1]}: There are 88,162 transactions and 16,470 distinct items in the retail dataset, and its longest transaction contains 76 items. Because the average length of a transaction is small, retail is a sparse dataset.
\item \textbf{mushroom\footnotemark[1]}: This is a dense dataset, which has 8,124 transactions and 119 distinct items.
\item \textbf{kosarak\footnotemark[1]}: This has 990,002 transactions and 41,270 items; note that its longest transaction is relatively long, reaching up to 2,498 items, making it a dense dataset.
\item \textbf{T40I10D100K\footnotemark[2]}: This is a synthetic dataset with 942 distinct items and 100,000 transactions.
\end{itemize}
\footnotetext[1]{\url{http://www.philippe-fournier-viger.com/spmf/index.php?link=datasets.php}}
\footnotetext[2]{\url{http://fimi.uantwerpen.be/data/}}
\subsection{Runtime Analysis}
Let us first analyze the runtime of the compared algorithms. Figs. \ref{fig:runtimeMS} and \ref{fig:runtimeMU} show how the runtime changes when the minimum support threshold \textit{minsup} and the minimum utility-occupancy threshold \textit{minuo} gradually increase. HUOPM$^{+}$1 to HUOPM$^{+}$5 mean that the maximum length of the patterns varies from 1 to 5. In general, it can be observed that, as either threshold increases, the execution time of each algorithm becomes shorter. In addition, when the maximum length of the discovered patterns is set to a distinct integer, the compared runtimes are clearly different. The runtime of the HUOPM algorithm can reach 2- to 5-times that of HUOPM$^+$, which is a crucial difference in efficiency. The reason for this phenomenon is that the HUOPM$^+$ algorithm tightens the upper bound of the derived patterns, reduces the number of traversed nodes, and directly finds the derived patterns. From the comparison of the subplots in Figs. \ref{fig:runtimeMS} and \ref{fig:runtimeMU}, the performance of the proposed algorithm on dense datasets is better than that on sparse datasets, probably because dense datasets contain more long patterns, plenty of which are pruned as a result of the designed utility-occupancy upper bound.
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.03 \linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/runtimems.pdf}
\caption{Runtime under a changed $minsup$ with a fixed $minuo$}
\label{fig:runtimeMS}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.03 \linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/runtimemu.pdf}
\caption{Runtime under a changed $minuo$ with a fixed $minsup$}
\label{fig:runtimeMU}
\end{figure}
\subsection{Visited Node Analysis}
In this subsection, we take into account the number of visited nodes; that is, we discuss the memory consumption. It is common knowledge that, owing to the limitations of current storage technology, memory consumption still accounts for a large proportion of algorithm performance. Therefore, the fewer the nodes visited during the process, the lower the memory consumption of the algorithm.
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.05\linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/memoryms.pdf}
\caption{Visited nodes under a changed $minsup$ with a fixed $minuo$}
\label{fig:memoryMS}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.03\linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/memorymu.pdf}
\caption{Visited nodes under a changed $minuo$ with a fixed $minsup$}
\label{fig:memoryMU}
\end{figure}
Figs. \ref{fig:memoryMS} and \ref{fig:memoryMU} respectively show how the number of visited nodes changes when the support and utility-occupancy thresholds change. Clearly, the number of nodes visited by the HUOPM algorithm is greater than or equal to that of HUOPM$^+$. In particular, the smaller the maximum length set in HUOPM$^+$, the fewer the visited nodes. For example, in Fig. \ref{fig:memoryMS}, the minimum support of \textit{kosarak} is set to 0.0016, and the minimum utility-occupancy is set to 0.01. It can be seen that, when the maximum length is 5, the number of nodes visited by the HUOPM$^+$ algorithm is about one-third that of the HUOPM algorithm.
These phenomena suggest that there is a real difference between the actual memory consumption of the compared algorithms. They also illustrate the effectiveness of the length-constrained mining model proposed in this paper; that is, by setting different lengths, we can not only mine patterns that better match actual needs but also further reduce the actual memory consumption of the algorithm.
\subsection{Pattern Analysis}
The main goal of this study is the flexible mining of high utility-occupancy patterns, with length-constraints innovatively imposed during the mining process. This allows patterns outside the length range not to be traversed at all, so the number of patterns found is much smaller than that of HUOPM.
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.02\linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/patternms.pdf}
\caption{Patterns under a changed \textit{minsup} with a fixed \textit{minuo}}
\label{fig:patternMS}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.02\linewidth,trim=80 0 50 0,clip,scale=0.36]{figs/patternmu.pdf}
\caption{Patterns under a changed \textit{minuo} with a fixed \textit{minsup}}
\label{fig:patternMU}
\end{figure}
From the detailed results in Figs. \ref{fig:patternMS} and \ref{fig:patternMU}, we can observe that the required patterns are in fact often far fewer than the total number of patterns. Using the proposed algorithm, we can not only directly output the required patterns but also reduce the time required to process the data. For example, as shown in Fig. \ref{fig:patternMU} (b), $\alpha$ was set to 0.0001, and $\beta$ changed from 0.1 to 0.35 in increments of 0.05. The numbers of patterns desired from HUOPM$^+$1, HUOPM$^+$2, HUOPM$^+$3, HUOPM$^+$4, and HUOPM$^+$5 are 1,350, 20,946, 65,223, 98,166, and 110,676, respectively. In particular, the number of patterns of the mushroom dataset in Fig. \ref{fig:patternMU} with no length constraint is more than 10 times the number of patterns with \textit{maxlen} set to 5. From the two figures, we can notice that the number of flexible patterns extracted by the proposed algorithm from sparse datasets is not significantly different from the number of full patterns, whereas the opposite holds on dense datasets. This is because the algorithm plays a stronger role in dense datasets.
\section{Conclusion and Future Studies}
\label{sec:conclusion}
This paper proposes a novel algorithm called HUOPM$^+$, aimed at mining flexible high utility-occupancy patterns. It integrates length-constraints into the state-of-the-art HUOPM algorithm. In particular, instead of simply stopping the pattern-growth process, it pushes the length constraints deep into the procedure and narrows the upper bound by introducing the concept of a length upper-bound, which is one of the merits of the proposed algorithm. In addition, the UO-nlist and FUO-table are designed to maintain the information in the database. The results of the subsequent experiments confirm that our proposed strategies can indeed discover HUOPs within a certain length range, from \textit{minlen} to \textit{maxlen}, and greatly reduce the memory consumption and execution time. In future studies, we will apply the pattern length constraint to other utility mining algorithms, such as utility mining in dynamic profit databases \cite{nguyen2019mining}, utility-driven sequence mining \cite{gan2020proum}, and nonoverlapping pattern mining \cite{wu2021ntp}.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{T}{he} initial motivation for frequent pattern mining (FPM) was to analyze the shopping behavior of customers using transactional databases and recommend frequently purchased patterns to customers \cite{agrawal1994fast,han2000mining,chen1996data,mannila1997discovery,goethals2003survey}. In this setting, researchers regarded each item as binary, and whether an item appears in a transaction was considered primary. However, frequent purchase patterns are occasionally less profitable than infrequent purchase patterns with high profits, which poses a fundamental problem. Hence, the discovery of high-utility patterns that consider not only the internal utility (e.g., quantity) but also the external utility (e.g., profit, interest, or weight) \cite{shen2002objective,chan2003mining,yao2006mining,yao2006unified} has gained substantial research attention. Moreover, a framework called high-utility pattern mining (HUPM) \cite{gan2021survey,ahmed2009efficient} was proposed to address this practical issue. In contrast with FPM, the lack of a downward closure property makes HUPM more difficult and intractable.
To date, numerous approaches and strategies have been designed to increase the efficiency and convenience of mining high-utility patterns \cite{li2008isolated,liu2005two}. These methods can be broadly divided into three major categories. The first category involves the candidate generation-and-test approach \cite{yao2006mining,li2008isolated,liu2005two}: eligible patterns are selected as candidates based on an upper-bound evaluation and then calculated and tested to determine whether they are qualified. The second category entails tree-based algorithms \cite{ahmed2009efficient,ahmed2011huc,tseng2012efficient,tseng2010up}: the necessary information from the database is projected onto a tree, which avoids multiple traversals of the database. The third category employs vertical data structures (e.g., utility-lists) \cite{lin2016fhn,fournier2014fhm}: similar to the tree-based approach, crucial information is stored in a vertical data structure, and the utility of any pattern can be calculated using this structure. HUPM has dozens of practical applications in real life, such as gene regulation \cite{zihayat2017mining} and web click-stream analysis \cite{li2008fast,shie2010online}. A challenging problem arises in selecting a pattern to represent the transactions.
To the best of our knowledge, only a few studies have been carried out on this problem. The concept of occupancy \cite{tang2012incorporating} was originally defined as the share of a pattern's items in the transactions in which the pattern appears. It requires the patterns to occupy the majority of their supporting transactions. In addition, the occupancy measure can be widely used in pattern recommendation. When a user browses a website, if the number of times the user clicks on a URL is greater than a certain threshold, then the URL forms a high-occupancy pattern. However, if this user browses for a long time at a URL that is not frequently clicked, the URL may still be valuable. Hence, Shen \textit{et al.} \cite{shen2016ocean} incorporated occupancy and utility into an original concept called utility-occupancy and designed an algorithm called OCEAN. Although this algorithm is novel, it has a limitation in that it may exclude certain patterns that should be qualified. To address this drawback, Gan \textit{et al.} \cite{gan2019huopm} proposed an efficient algorithm for high utility-occupancy pattern mining. Taking advantage of two compact list structures, namely, a utility-occupancy list and a frequency-utility-occupancy table, HUOPM can reduce the running time and memory consumption, which are two of its principal merits. It is clear that utility-occupancy has wide applicability in an information-driven society. For instance, during a holiday, tourists may want to go out but not know where to visit. They can then check a travel route recommendation, which analyzes and calculates the utility-occupancy of the tourist routes, and finally recommends the best route for the tourists.
During the process of pattern mining, researchers tend to extract all patterns. However, these may not necessarily be useful in actual production or management. For example, supermarket managers generally display milk and bread together, but they rarely bundle \{milk, bread, strawberry jam, and gum\}. Although both sets are elegant patterns selected by the algorithm, the shorter one is obviously more popular with decision makers. In previous studies, Pei \textit{et al.} \cite{pei2002constrained} applied length constraints to frequent pattern mining by appending no further items to patterns that reach the length limit. Next, Fournier-Viger \textit{et al.} \cite{fournier2016fhm} proposed the FHM$^+$ algorithm focusing on utility, which improves FHM by incorporating length constraints into utility mining. This narrows the upper bound of the patterns and further trims the search space. Nevertheless, no method has been proposed in the field of utility-occupancy to address the problem of a length constraint.
In this study, we focus on mining flexible high utility-occupancy patterns by developing a novel algorithm called HUOPM$^+$. The proposed algorithm is dedicated to discovering high utility-occupancy patterns with length constraints; in addition, it is a generic framework (or called an extension) of the state-of-the-art HUOPM algorithm. The major contributions of this paper are briefly summarized as follows.
\begin{itemize}
\item A generic and practical algorithm is proposed to exploit flexible high utility-occupancy patterns. During the execution of this algorithm, the minimum and maximum lengths are needed in advance to determine the length range of the derived patterns.
\item To avoid scanning the database multiple times, two compact data structures, called a utility-occupancy nested list (UO-nlist) and a frequent-utility-occupancy table (FUO-table), are constructed to store vital information from the databases.
\item The upper bound is tightened with the newly designed length upper-bound (LUB), which is smaller than the original upper bound in the HUOPM algorithm.
\item Subsequent experiments were carried out on both real-world and synthetic datasets; the results show that all patterns satisfying the length constraints can be obtained and that the proposed algorithm is significantly efficient in terms of execution time and memory consumption.
\end{itemize}
The remainder of this paper is broadly organized as follows. Related studies are introduced in Section \ref{sec:2}. In Section \ref{sec:background}, some fundamental knowledge regarding this study is presented. The presented HUOPM$^+$ algorithm and three novel pruning strategies are detailed in Section \ref{sec:4}. In Section \ref{sec:experiments}, subsequent experiments confirming the effectiveness and efficiency of the proposed algorithm are described. Finally, Section \ref{sec:conclusion} provides some concluding remarks regarding this research.
\section{Related Studies}
\label{sec:2}
The studies related to the HUOPM$^+$ algorithm mainly deal with three aspects, i.e., high-utility pattern mining, high utility-occupancy pattern mining, and flexible pattern mining, which are discussed below.
\subsection{High-Utility Pattern Mining}
Thus far, numerous studies have been carried out on HUPM, which aims at mining qualified patterns whose utilities are greater than or equal to a predefined minimum utility threshold. Because HUPM provides guidance for many applications such as decision-making, it has attracted significant attention. HUPM was initially proposed in \cite{yao2004foundational}. Each item has two aspects: the purchase quantity, also called the internal utility, and a unit utility (e.g., profit) defined by experts, called the external utility. If a pattern satisfies the minimum utility threshold, it can be derived as a high-utility pattern. HUPM is technically more challenging than FPM because FPM has a downward closure property, whereas HUPM does not. To overcome this critical issue, Liu \textit{et al.} \cite{liu2005two} creatively proposed a two-phase algorithm and established a transaction-weighted utilization (TWU) model by adopting the anti-monotonic property of the transaction-weighted utility. Liu \textit{et al.} \cite{liu2012mining} then presented HUI-Miner, achieving a better performance than previous approaches. This algorithm scans the transaction database twice to construct the initial utility-lists, after which the database no longer needs to be accessed. A utility-list contains the utility and the remaining utility of a pattern, which is a necessary condition for calculating the upper bound of the extended patterns. The above algorithms are all itemset-based utility mining algorithms, although different types of data also exist. Utility mining of temporal data \cite{lin2015efficient}, uncertain data \cite{lin2016efficient}, dynamic data \cite{lin2015incremental,yun2019efficient,nguyen2019mining}, sequence data \cite{gan2020proum,gan2020fast}, and other factors are all interesting research directions, as highlighted in a literature review \cite{gan2021survey}.
\subsection{High Utility-Occupancy Pattern Mining}
In terms of the contribution ratio of a pattern, the above-mentioned utility approaches are of no help. Thus, it is sensible to introduce a new notion, i.e., occupancy. The initial concept of occupancy, defined as the share of a pattern in its supporting transactions, was designed by Tang \textit{et al.} \cite{tang2012incorporating}. Unfortunately, this method is unsuitable for research on utility mining. Subsequently, Shen \textit{et al.} \cite{shen2016ocean} started conducting research on utility-occupancy and developed a representative algorithm called OCEAN to find patterns whose share of utility in the supporting transactions is greater than a specific value. However, the OCEAN algorithm fails to discover the complete set of high utility-occupancy patterns. Gan \textit{et al.} \cite{gan2019huopm} proposed a successful and efficient algorithm called HUOPM to address this disadvantage. Two compact data structures are applied to store the vital data in the database. A utility-occupancy list (UO-list) is used to store the utility and the remaining utility of each pattern, with each entry recording the details of one transaction. An FUO-table is then obtained for the convenience of integrating and revising the information in the UO-list. Because HUOPM focuses only on precise data, Chen \textit{et al.} \cite{chen2021discovering} recently extended HUOPM to deal with uncertain data. However, these algorithms cannot discover flexible patterns based on various constraints.
\subsection{Flexible Pattern Mining}
Through this research, we found that the patterns adopted by managers generally have a shorter length and that longer patterns are not universal because they are too specific. Hence, it is advisable to provide users with a flexible algorithm. Generating a large number of patterns beyond the demand considerably decreases the mining efficiency. Pei \textit{et al.} \cite{pei2002constrained} emphasized adding a constraint-based approach to frequent pattern mining. The authors mainly adopted a pattern-growth method, which is applicable to various constraints. When this method is applied to the field of utility mining, however, it remains superficial and does not penetrate the interior of the utility mining algorithms. FHM$^+$ \cite{fournier2016fhm} has an interesting feature: it discovers high-utility itemsets with length constraints. The authors considered the maximum length of the patterns as the dominant control parameter and designed length upper-bound reduction (LUR) to prune the search space. Several studies regarding flexible sequential pattern mining, such as CloSpan \cite{yan2003clospan}, BIDE \cite{wang2004bide}, and MaxFlex \cite{arimura2007mining}, have also been conducted to meet the requirements of different applications.
\section{Preliminary and Problem Statement}
\label{sec:background}
To assist with this discussion, in this section, some symbols connected with the proposed algorithm are introduced and defined. Let $I$ = \{$i_{1}$, $i_{2}$, $\ldots$, $i_{m}$\} be a collection of $m$ distinct items. If an itemset contains $k$ distinct items, it is denoted as a $k$-itemset. A transaction database $\mathcal{D}$ consists of $n$ transactions, where $\mathcal{D}$ = \{$T_{1}$, $T_{2}$, $\ldots$, $T_{n}$\}. Each transaction holds its own particular identifier \textit{tid} and is a subset of $I$. Every transaction includes three parts: the transaction identifier \textit{tid}, the item names, and the purchase quantity of each item. Next, we use the transaction database in TABLE \ref{table:db1} and the unit utility of each item in TABLE \ref{table:profit} as a running example.
\begin{table}[!htbp]
\centering
\small
\caption{Transaction database.}
\label{table:db1}
\begin{tabular}{|c|c|c|}
\hline
\textbf{\textit{tid}} & \textbf{Transaction (item, quantity)} & \textbf{\textit{tu}} \\ \hline \hline
$ T_{1} $ & \textit{a}:3, \textit{b}:4, \textit{c}:2, \textit{d}:6, \textit{e}:2 & \$63 \\ \hline
$ T_{2} $ & \textit{a}:7, \textit{b}:4, \textit{c}:1, \textit{e}:2 & \$62 \\ \hline
$ T_{3} $ & \textit{a}:5, \textit{b}:2, \textit{e}:1 & \$35 \\ \hline
$ T_{4} $ & \textit{b}:4, \textit{c}:1, \textit{d}:2 & \$25 \\ \hline
$ T_{5} $ & \textit{a}:2, \textit{d}:4 & \$14 \\ \hline
$ T_{6} $ & \textit{a}:2, \textit{b}:2, \textit{c}:6, \textit{d}:4, \textit{e}:3 & \$60 \\ \hline
$ T_{7} $ & \textit{a}:1, \textit{b}:2 & \$13 \\ \hline
$ T_{8} $ & \textit{d}:3 & \$6 \\ \hline
$ T_{9} $ & \textit{b}:3, \textit{c}:5, \textit{d}:2, \textit{e}:5 & \$74 \\ \hline
$ T_{10} $ & \textit{b}:3, \textit{e}:5 & \$65 \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htbp]
\centering
\small
\caption{Unit utility of each item}
\label{table:profit}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Item} & $ a $ & $ b $ & $ c $ & $ d $ & $ e $ \\ \hline
\textbf{Utility (\$)} & $ 3 $ & $ 5 $ & $ 1 $ & $ 2 $ & $ 10 $ \\ \hline
\end{tabular}
\end{table}
\begin{definition}
\label{def_1}
\rm The number of transactions containing an itemset $X$ in a database is usually defined as the \textit{support count} \cite{agrawal1994fast}, denoted as \textit{SC(X)}. Given a \textit{support threshold} $\alpha$ ($ 0 < \alpha \leq 1 $), if $SC(X) \geq$ $\alpha$ $\times$ $|\mathcal{D}|$, we can determine that $X$ is a frequent pattern. The collection of transactions containing $X$ is expressed as $ \varGamma_X$. That is, if an itemset $X$ appears in the transaction $T_q$, then $T_q$ belongs to $ \varGamma_X$, and naturally, $ SC(X)$ = $|\varGamma_X| $.
\end{definition}
As shown in TABLE \ref{table:db1}, the pattern $ac$ appears in transactions $T_1$, $T_2$, and $T_6$. Therefore, $SC(ac)$ = 3 and $\varGamma_{(ac)}$ = $ \{T_1$, $T_2$, $T_6\} $. Assuming that the value of $\alpha$ is 0.3, $SC(ac)$ $\geq $ $\alpha$ $\times$ $|\mathcal{D}| $, and thus $ac$ is frequent.
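The support count computation above can be sketched in a few lines of Python over the running example; this is an illustrative sketch, and the names (\texttt{DB}, \texttt{supporting\_tids}, \texttt{sc}) are ours rather than the paper's:

```python
# Illustrative sketch of the support count on the running example of TABLE I.
# The names DB, supporting_tids, and sc are ours, not from the paper.
DB = {
    'T1': {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
    'T2': {'a': 7, 'b': 4, 'c': 1, 'e': 2},
    'T3': {'a': 5, 'b': 2, 'e': 1},
    'T4': {'b': 4, 'c': 1, 'd': 2},
    'T5': {'a': 2, 'd': 4},
    'T6': {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
    'T7': {'a': 1, 'b': 2},
    'T8': {'d': 3},
    'T9': {'b': 3, 'c': 5, 'd': 2, 'e': 5},
    'T10': {'b': 3, 'e': 5},
}

def supporting_tids(X, db):
    """Gamma_X: identifiers of the transactions that contain every item of X."""
    return [tid for tid, tx in db.items() if set(X) <= set(tx)]

def sc(X, db):
    """Support count SC(X) = |Gamma_X|."""
    return len(supporting_tids(X, db))

print(supporting_tids('ac', DB))       # ['T1', 'T2', 'T6']
print(sc('ac', DB) >= 0.3 * len(DB))   # True, so ac is frequent for alpha = 0.3
```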
In a market basket analysis, based on the preference of the customers for the products or an evaluation of experts regarding the items, each item in the database is associated with a positive number, or in other words, a unit profit.
\begin{definition}
\label{def_4}
\rm We define the \textit{internal utility} of an item $i$ in transaction $T_q$ as \textit{iu(i, $T_q$)}, which refers to the number of occurrences of the corresponding item. We define the \textit{external utility} of an item $i$ in the database as \textit{eu(i)}, namely, the unit utility of $i$ in TABLE \ref{table:profit}, which is generally set subjectively.
\end{definition}
\begin{definition}
\label{def_5}
\rm The utility of an item $i$ in the supporting transaction $T_{q}$ is defined as $u(i, T_{q})$ = $iu(i, T_{q})$ $\times$ $eu(i)$. Moreover, the utility of a pattern $X$ in a transaction $T_{q}$ is equal to the total utility of each item in the pattern and is represented as $u(X, T_{q})$ = $\sum _{i \in X \wedge X \subseteq T_{q}} u(i, T_{q}) $. Hence, the utility of $X$ in a transaction database $\mathcal{D}$ is denoted as $u(X)$ = $\sum_{X\subseteq T_{q}\wedge T_{q} \in \mathcal{D}} u(X, T_{q}) $. The sum of the utilities of all items in a transaction is recorded as the transaction utility (\textit{tu}).
\end{definition}
Let us take $e$ and $ac$ as examples for an easier understanding of the utility calculation. Here, $ u(e) $ = $ u(e, T_1) $ + $ u(e, T_2) $ + $ u(e, T_3) $ + $ u(e, T_{6})$ + $ u(e, T_{9}) $ + $ u(e, T_{10})$ = \$20 + \$20 + \$10 + \$30 + \$50 + \$50 = \$180, and $ u(ac) $ = $ u(ac, T_1) $ + $ u(ac, T_2) $ + $ u(ac, T_6) $ = \$11 + \$22 + \$12 = \$45. The transaction utilities are listed in the last column of TABLE \ref{table:db1}.
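The utility calculations above can be reproduced with a short Python sketch over the running example; \texttt{EU} holds the unit utilities of TABLE \ref{table:profit}, and all names are ours:

```python
# Illustrative sketch of the utility definitions on the running example;
# DB, EU, and the function names are ours, not from the paper.
DB = {
    'T1': {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
    'T2': {'a': 7, 'b': 4, 'c': 1, 'e': 2},
    'T3': {'a': 5, 'b': 2, 'e': 1},
    'T4': {'b': 4, 'c': 1, 'd': 2},
    'T5': {'a': 2, 'd': 4},
    'T6': {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
    'T7': {'a': 1, 'b': 2},
    'T8': {'d': 3},
    'T9': {'b': 3, 'c': 5, 'd': 2, 'e': 5},
    'T10': {'b': 3, 'e': 5},
}
EU = {'a': 3, 'b': 5, 'c': 1, 'd': 2, 'e': 10}   # external utilities (TABLE II)

def u_tx(X, tx, eu):
    """u(X, Tq) = sum of iu(i, Tq) * eu(i) over the items i of X."""
    return sum(tx[i] * eu[i] for i in X)

def u(X, db, eu):
    """u(X): total utility of X over all of its supporting transactions."""
    return sum(u_tx(X, tx, eu) for tx in db.values() if set(X) <= set(tx))

def tu(tx, eu):
    """Transaction utility tu(Tq): utility of the whole transaction."""
    return sum(q * eu[i] for i, q in tx.items())

print(u('e', DB, EU))    # 180, as computed in the text
print(u('ac', DB, EU))   # 45
```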
The \textit{occupancy} was originally proposed to measure the proportion of pattern $X$ in the supporting transactions \cite{tang2012incorporating}. In the field of utility mining, it is crucial to discover HUOPs \cite{shen2016ocean,gan2019huopm}.
\begin{definition}
\label{def_7}
\rm Assume $X$ is present in transaction $T_q$, and the utility-occupancy of $X$ in supporting transaction $T_q$ is defined as follows:
\begin{equation}
uo(X, T_q) = \dfrac{u(X, T_q)}{tu(T_q)}.
\end{equation}
Assume $\varGamma_X$ is a collection of all transactions containing pattern $X$. The utility-occupancy of a pattern in a database is calculated as follows:
\begin{equation}
uo(X) = \dfrac{\sum_{X \subseteq T_q \wedge T_q \in \mathcal{D}}uo(X,T_q)}{|\varGamma_X|}.
\end{equation}
\end{definition}
\begin{definition}
\label{def_9}
\rm Let there be a transaction database $\mathcal{D}$. A pattern $X$ is denoted as a HUOP if and only if $SC(X)$ $\geq$ $\alpha$ $\times$ $|\mathcal{D}|$ and $ uo(X)$ $\geq$ $\beta $, where $ \alpha $ ($ 0 < \alpha \leq 1 $) is the predefined minimum support threshold and $ \beta $ ($ 0 < \beta \leq 1 $) is the predefined minimum utility-occupancy threshold.
\end{definition}
The transaction utilities and support counts have already been calculated above. It is convenient to obtain $SC(ac)$ = 3, $tu(T_1)$ = \$63, $tu(T_2)$ = \$62, and $tu(T_6)$ = \$60. Therefore, $ uo(ac, T_1) $ is calculated as \$11/\$63 $\approx $ 0.1746. Similarly, $ uo(ac, T_2) $ and $ uo(ac, T_6) $ are calculated as 0.3548 and 0.2, respectively. Furthermore, $uo(ac)$ = (0.1746 + 0.3548 + 0.2)/3 $\approx $ 0.2431. Provided that the values of $\alpha$ and $\beta$ are both 0.3, the pattern $ac$ is not a HUOP.
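A minimal Python sketch of the utility-occupancy computation, reproducing the $uo(ac)$ example above (all names are ours, not the paper's):

```python
# Illustrative sketch of the utility-occupancy on the running example of
# TABLEs I and II; DB, EU, and the function names are ours.
DB = {
    'T1': {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
    'T2': {'a': 7, 'b': 4, 'c': 1, 'e': 2},
    'T3': {'a': 5, 'b': 2, 'e': 1},
    'T4': {'b': 4, 'c': 1, 'd': 2},
    'T5': {'a': 2, 'd': 4},
    'T6': {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
    'T7': {'a': 1, 'b': 2},
    'T8': {'d': 3},
    'T9': {'b': 3, 'c': 5, 'd': 2, 'e': 5},
    'T10': {'b': 3, 'e': 5},
}
EU = {'a': 3, 'b': 5, 'c': 1, 'd': 2, 'e': 10}

def tu(tx):
    """Transaction utility tu(Tq)."""
    return sum(q * EU[i] for i, q in tx.items())

def uo(X, db):
    """uo(X): average of u(X, Tq) / tu(Tq) over the supporting transactions."""
    gamma = [tx for tx in db.values() if set(X) <= set(tx)]
    return sum(sum(tx[i] * EU[i] for i in X) / tu(tx) for tx in gamma) / len(gamma)

print(round(uo('ac', DB), 4))   # 0.2431: below beta = 0.3, so ac is not a HUOP
print(round(uo('eb', DB), 4))   # 0.7328: eb has a high utility-occupancy
```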
Dozens of studies have been conducted on HUOPs; however, one outstanding issue is that HUOPM algorithms normally focus on discovering massive patterns containing numerous items. As we discussed earlier, these patterns might not work for users because they usually represent an unusual situation. To increase the usefulness of the discovered patterns, we address the issue of mining flexible high utility-occupancy patterns under a length constraint. The minimum length \textit{minlen} and the maximum length \textit{maxlen} of the required patterns are predefined.
\begin{definition}
\label{def_13}
\rm (\textit{Flexible mining of high utility-occupancy patterns}) Flexible mining of high utility-occupancy patterns aims to discover HUOPs containing at least \textit{minlen} and at most \textit{maxlen} items.
\end{definition}
\textbf{Problem Statement.} Consider a given transaction database $\mathcal{D}$, a utility-table recording the unit utility of each item, and four input parameters ($\alpha$, $\beta$, \textit{minlen}, and \textit{maxlen}) used as the mining constraints. The purpose of this study is to mine flexible eligible patterns whose length is at least \textit{minlen} and at most \textit{maxlen}, under the condition that the support count is greater than or equal to the minimum support count threshold $\alpha$ $\times$ $|\mathcal{D}|$ and that the utility-occupancy value is no less than the minimum utility-occupancy threshold $\beta$.
Assuming that \textit{minlen} = 1 and \textit{maxlen} = 3, the length of all derived patterns should range from 1 to 3. Thus, although \{$caeb$\} is a HUOP, it is not a desired pattern because its length exceeds \textit{maxlen}. In addition, assuming that the values of $\alpha$ and $\beta$ are both 0.3, all of the desired patterns are listed in TABLE \ref{table:patterns}.
\begin{table}[!htbp]
\centering
\small
\caption{Desired patterns with length constraints}
\label{table:patterns}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{HUOP} & \textbf{\textit{sup}} & \textbf{\textit{uo}} & \textbf{HUOP} & \textbf{\textit{sup}} & \textbf{\textit{uo}} \\ \hline \hline
$d$ & 6 & 0.3515 & $db$ & 4 & 0.5062 \\ \hline
$e$ & 6 & 0.4784 & $ eb $ & 6 & 0.7328 \\ \hline
$b$ & 8 & 0.3869 & $ cae $ & 3 & 0.6232 \\ \hline
$ce$ & 4 & 0.5078 & $ cab $ & 3 & 0.5120 \\ \hline
$cb$ & 5 & 0.4130 & $ cde $ & 3 & 0.6901 \\ \hline
$ad$ & 3 & 0.5222 & $ cdb $ & 4 & 0.5660 \\ \hline
$ae$ & 4 & 0.6090 & $ ceb $ & 4 & 0.7601 \\ \hline
$ab$ & 5 & 0.6205 & $ aeb $ & 4 & 0.8821 \\ \hline
$de$ & 3 & 0.6237 & $ deb $ & 3 & 0.8526 \\ \hline
\end{tabular}
\end{table}
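As a sanity check on the problem statement, the desired patterns in TABLE \ref{table:patterns} can be reproduced by brute-force enumeration over the running example. The following Python sketch filters every candidate itemset by support, utility-occupancy, and length; all names are ours, and HUOPM$^+$ itself is designed precisely to avoid this exhaustive search:

```python
from itertools import combinations

# Brute-force sketch of the problem statement on the running example; it
# enumerates all candidates, which HUOPM+ deliberately avoids. Names are ours.
DB = {
    'T1': {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
    'T2': {'a': 7, 'b': 4, 'c': 1, 'e': 2},
    'T3': {'a': 5, 'b': 2, 'e': 1},
    'T4': {'b': 4, 'c': 1, 'd': 2},
    'T5': {'a': 2, 'd': 4},
    'T6': {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
    'T7': {'a': 1, 'b': 2},
    'T8': {'d': 3},
    'T9': {'b': 3, 'c': 5, 'd': 2, 'e': 5},
    'T10': {'b': 3, 'e': 5},
}
EU = {'a': 3, 'b': 5, 'c': 1, 'd': 2, 'e': 10}

def tu(tx):
    return sum(q * EU[i] for i, q in tx.items())

def mine(alpha, beta, minlen, maxlen):
    """Return every pattern X with SC(X) >= alpha*|D|, uo(X) >= beta,
    and minlen <= |X| <= maxlen."""
    items = sorted({i for tx in DB.values() for i in tx})
    found = []
    for k in range(minlen, maxlen + 1):
        for X in combinations(items, k):
            gamma = [tx for tx in DB.values() if set(X) <= set(tx)]
            if len(gamma) < alpha * len(DB):      # support check
                continue
            avg = sum(sum(tx[i] * EU[i] for i in X) / tu(tx)
                      for tx in gamma) / len(gamma)
            if avg >= beta:                       # utility-occupancy check
                found.append(''.join(X))
    return found

patterns = mine(0.3, 0.3, 1, 3)
print(len(patterns))      # 18, matching the table of desired patterns
print('ace' in patterns)  # True (listed as cae in the table)
```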
\section{Proposed Flexible HUOPM$^+$ Algorithm}
\label{sec:4}
In this section, some novel definitions based on the two length constraints are first described. Then, according to these definitions and combined with the utility-list \cite{liu2012mining}, two novel data structures are constructed: a UO-nlist and an FUO-table are designed to maintain the information of the given database. In addition, to avoid an exhaustive search, several pruning strategies are proposed to further narrow the upper bound of the utility-occupancy on the subtree nodes of an $SC$-tree, the definition of which is introduced in the next subsection. Finally, the designed algorithm is described with the help of pseudocode. A detailed flowchart of the proposed HUOPM$^+$ algorithm is shown in Fig. \ref{fig:flowchart}.
\begin{figure*}[!htbp]
\centering
\includegraphics[scale=0.48]{figs/chart.pdf}
\caption{Flowchart of the proposed HUOPM$^+$ algorithm.}
\label{fig:flowchart}
\end{figure*}
\subsection{Revised Remaining Utility-Occupancy}
As is well known, the HUOPM algorithm \cite{gan2019huopm} adopts a depth-first search strategy. Thus, an intuitive method for controlling the length of the discovered patterns is to output only those patterns no shorter than the minimum length and to stop the extension once the number of items in a pattern equals the established maximum length. Nevertheless, the advantages of this approach are not obvious; it is too superficial and may fail to decrease the number of visited nodes within the specified length range. The reason is that it can neither reduce the upper bound of the utility-occupancy of the patterns nor prune the search space. To handle this drawback, we developed a novel strategy called the LUB, which revises the remaining utility-occupancy for discovering the HUOPs. A description of this approach is detailed below.
When we run the program, we should traverse the items in each transaction in a certain order. Without loss of generality, in this study, we take the support-ascending order as the arrangement and denote it as $\prec$. For example, from the database shown in TABLE \ref{table:db1}, we can easily see that $SC(c)$ $<$ $ SC(a)$ $\leq $ $SC(d)$ $\leq$ $SC(e)$ $<$ $SC(b)$. Therefore, the support-ascending order is $ c \prec a \prec d \prec e \prec b $. TABLE \ref{table:db} shows the revised database, modified from TABLE \ref{table:db1} according to the order of $\prec$.
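The support-ascending order $\prec$ can be derived mechanically; in the sketch below (names ours), ties between equally supported items are broken lexicographically, which happens to reproduce the order used in the paper:

```python
from collections import Counter

# Sketch of the support-ascending processing order on the running example;
# ties are broken lexicographically here, one valid choice of total order.
DB = {
    'T1': {'a': 3, 'b': 4, 'c': 2, 'd': 6, 'e': 2},
    'T2': {'a': 7, 'b': 4, 'c': 1, 'e': 2},
    'T3': {'a': 5, 'b': 2, 'e': 1},
    'T4': {'b': 4, 'c': 1, 'd': 2},
    'T5': {'a': 2, 'd': 4},
    'T6': {'a': 2, 'b': 2, 'c': 6, 'd': 4, 'e': 3},
    'T7': {'a': 1, 'b': 2},
    'T8': {'d': 3},
    'T9': {'b': 3, 'c': 5, 'd': 2, 'e': 5},
    'T10': {'b': 3, 'e': 5},
}

def support_ascending_order(db):
    """Items sorted by support count SC (ascending), then lexicographically."""
    counts = Counter(i for tx in db.values() for i in tx)
    return sorted(counts, key=lambda i: (counts[i], i))

print(support_ascending_order(DB))   # ['c', 'a', 'd', 'e', 'b']
```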
\begin{table}[!htbp]
\centering
\small
\caption{Revised transaction database.}
\label{table:db}
\begin{tabular}{|c|c|c|}
\hline
\textbf{\textit{tid}} & \textbf{Transaction (item, quantity)} & \textbf{\textit{tu}} \\ \hline \hline
$ T_{1} $ & \textit{c}:2, \textit{a}:3, \textit{d}:6, \textit{e}:2, \textit{b}:4 & \$63 \\ \hline
$ T_{2} $ & \textit{c}:1, \textit{a}:7, \textit{e}:2, \textit{b}:4 & \$62 \\ \hline
$ T_{3} $ & \textit{a}:5, \textit{e}:1, \textit{b}:2 & \$35 \\ \hline
$ T_{4} $ & \textit{c}:1, \textit{d}:2, \textit{b}:4 & \$25 \\ \hline
$ T_{5} $ & \textit{a}:2, \textit{d}:4 & \$14 \\ \hline
$ T_{6} $ & \textit{c}:6, \textit{a}:2, \textit{d}:4, \textit{e}:3, \textit{b}:2 & \$60 \\ \hline
$ T_{7} $ & \textit{a}:1, \textit{b}:2 & \$13 \\ \hline
$ T_{8} $ & \textit{d}:3 & \$6 \\ \hline
$ T_{9} $ & \textit{c}:5, \textit{d}:2, \textit{e}:5, \textit{b}:3 & \$74 \\ \hline
$ T_{10} $ & \textit{e}:5, \textit{b}:3 & \$65 \\ \hline
\end{tabular}
\end{table}
\begin{definition}
\label{def_11}
\rm According to the order $\prec$, there may still be some items after pattern $X$ in transaction $T_q$; their proportion of the transaction utility is defined as the remaining utility-occupancy (\textit{ruo}) \cite{gan2019huopm}, the formula of which is expressed as follows:
\begin{equation}
ruo(X,T_q) = \dfrac{\sum_{ i_j \notin X \wedge X \subseteq T_q \wedge X \prec i_j }u(i_j,T_q)}{tu(T_q)}.
\end{equation}
Furthermore, the remaining utility-occupancy of $X$ in a database is defined as follows \cite{gan2019huopm}:
\begin{equation}
ruo(X) = \dfrac{\sum_{X \subseteq T_q \wedge T_q \in \mathcal{D}}ruo(X,T_q)}{|\varGamma_X|}.
\end{equation}
\end{definition}
For example, as shown in TABLE \ref{table:db}, $ruo(a, T_1)$ = $(u(d, T_1)$ + $u(e, T_1)$ + $u(b, T_1))$/$tu(T_1)$ $ \approx $ 0.8254. In addition, $ruo(a)$ = $(ruo(a, T_1)$ + $ruo(a, T_2)$ + $ruo(a, T_3)$ + $ruo(a, T_5)$ + $ruo(a, T_6)$ + $ruo(a, T_7))$/6 = (0.8254 + 0.6452 + 0.5714 + 0.5714 + 0.8 + 0.7692)/6 = 0.6971. To facilitate the representation and mining of flexible HUOPs, we denote the required maximum length as \textit{maxlen} and the minimum length as \textit{minlen}.
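The two formulas above can be reproduced on the running example with a short sketch. The per-item profits ($c$ = \$1, $a$ = \$3, $d$ = \$2, $e$ = \$10, $b$ = \$5) are not stated in this section; they are inferred from the worked examples and should be read as an assumption:

```python
# Per-item profits inferred from the paper's worked examples (an assumption):
PROFIT = {"c": 1, "a": 3, "d": 2, "e": 10, "b": 5}
ORDER = ["c", "a", "d", "e", "b"]  # support-ascending order "prec"
DB = {  # revised database of TABLE 2: tid -> (item, quantity)
    1: [("c", 2), ("a", 3), ("d", 6), ("e", 2), ("b", 4)],
    2: [("c", 1), ("a", 7), ("e", 2), ("b", 4)],
    3: [("a", 5), ("e", 1), ("b", 2)],
    4: [("c", 1), ("d", 2), ("b", 4)],
    5: [("a", 2), ("d", 4)],
    6: [("c", 6), ("a", 2), ("d", 4), ("e", 3), ("b", 2)],
    7: [("a", 1), ("b", 2)],
    8: [("d", 3)],
    9: [("c", 5), ("d", 2), ("e", 5), ("b", 3)],
    10: [("e", 5), ("b", 3)],
}

def ruo(pattern, tid):
    """ruo(X, T_q): utility share of the items positioned after X in T_q."""
    items = DB[tid]
    tu = sum(PROFIT[i] * q for i, q in items)      # transaction utility
    last = max(ORDER.index(i) for i in pattern)    # last item of X under prec
    rem = sum(PROFIT[i] * q for i, q in items if ORDER.index(i) > last)
    return rem / tu

def ruo_db(pattern):
    """ruo(X): average of ruo(X, T_q) over all supporting transactions."""
    tids = [t for t, items in DB.items()
            if set(pattern) <= {i for i, _ in items}]
    return sum(ruo(pattern, t) for t in tids) / len(tids)

print(round(ruo(("a",), 1), 4))  # 0.8254
print(round(ruo_db(("a",)), 4))  # 0.6971
```

Both printed values agree with the worked example.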
\begin{definition}
\label{def_14}
\rm \textit{(Largest utility-occupancy in a transaction with respect to a pattern)}. Let a pattern $X$ exist in a transaction $T_q$, and let the maximum length of patterns be set as \textit{maxlen}. Put all items appearing after $X$ into a list in the order of $\prec$ and denote the list as $V(X, T_q)$ = \{$i_1$, $i_2$, $\ldots$, $i_l$\}. Next, calculate the utility-occupancy of each of these items and express the result as $W(X, T_q)$ = \{$uo(i_1, T_q)$, $uo(i_2, T_q)$, $\ldots$, $uo(i_l, T_q)$\}. The maximum number of items that can be appended to $X$ is \textit{maxExtendLen} = \textit{maxlen} - $|X|$, where $|X|$ is the length of $X$. Thus, the largest utility-occupancy in transaction $T_q$ with regard to a pattern $X$ is the collection of the \textit{maxExtendLen} largest values in $W(X, T_q)$. For simplicity, we denote this as $luo(X, T_q)$.
\end{definition}
For example, consider $a$ in $T_6$ in TABLE \ref{table:db} and let \textit{maxlen} be 3; the utility-occupancy values of all items after $a$ in transaction $T_6$ can be calculated as \{0.1333, 0.5, 0.1667\}. Thus, \textit{maxExtendLen} = 3 - 1 = 2 and $luo(a, T_6)$ = \{0.5, 0.1667\}.
\begin{definition}
\label{def_15}
\rm \textit{(Revised remaining utility-occupancy)} Suppose there is a pattern $X$ in transaction $T_q$ and the maximum length of patterns is set as \textit{maxlen}. To reduce the upper bound on the utility-occupancy of $X$ in $T_q$ with a length constraint, we define the revised remaining utility-occupancy as $\textit{rruo}(X, T_q)$ = $\sum{luo(X, T_q)}$. Furthermore, the revised remaining utility-occupancy of a pattern $X$ in a transaction database $\mathcal{D}$ is calculated as $\textit{rruo}(X)$ = $\dfrac{\sum_{X \in T_q, T_q \in \mathcal{D}}{\textit{rruo}(X, T_q)}}{|\varGamma_X|}$.
\end{definition}
Continuing the above example, take the pattern $a$ in transaction $T_6$. The total remaining utility-occupancy before the revision is 0.8. After the optimization, however, the value drops to 0.6667, which is less than the original result. Moreover, over the entire database, $\textit{rruo}(a)$ = 0.64315, which is much smaller than the former result of 0.6971.
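Definitions \ref{def_14} and \ref{def_15} can be sketched in the same setting (profits again inferred from the worked examples; exact arithmetic yields $\textit{rruo}(a) \approx 0.6431$, matching the reported 0.64315 up to intermediate rounding):

```python
# Per-item profits inferred from the paper's worked examples (an assumption):
PROFIT = {"c": 1, "a": 3, "d": 2, "e": 10, "b": 5}
ORDER = ["c", "a", "d", "e", "b"]  # support-ascending order "prec"
MAXLEN = 3                         # maximum pattern length of the example
DB = {  # revised database of TABLE 2: tid -> (item, quantity)
    1: [("c", 2), ("a", 3), ("d", 6), ("e", 2), ("b", 4)],
    2: [("c", 1), ("a", 7), ("e", 2), ("b", 4)],
    3: [("a", 5), ("e", 1), ("b", 2)],
    4: [("c", 1), ("d", 2), ("b", 4)],
    5: [("a", 2), ("d", 4)],
    6: [("c", 6), ("a", 2), ("d", 4), ("e", 3), ("b", 2)],
    7: [("a", 1), ("b", 2)],
    8: [("d", 3)],
    9: [("c", 5), ("d", 2), ("e", 5), ("b", 3)],
    10: [("e", 5), ("b", 3)],
}

def luo(pattern, tid):
    """luo(X, T_q): the maxlen - |X| largest uo values of the items after X."""
    items = DB[tid]
    tu = sum(PROFIT[i] * q for i, q in items)
    last = max(ORDER.index(i) for i in pattern)
    tail = sorted((PROFIT[i] * q / tu for i, q in items
                   if ORDER.index(i) > last), reverse=True)
    return tail[: MAXLEN - len(pattern)]

def rruo(pattern):
    """rruo(X): average of sum(luo(X, T_q)) over supporting transactions."""
    tids = [t for t, items in DB.items()
            if set(pattern) <= {i for i, _ in items}]
    return sum(sum(luo(pattern, t)) for t in tids) / len(tids)

print([round(v, 4) for v in luo(("a",), 6)])  # [0.5, 0.1667]
print(round(rruo(("a",)), 4))                 # 0.6431
```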
\begin{property}
\label{pro_1}
\rm The upper bound of the revised remaining utility-occupancy of a pattern $X$ must be tighter than that of the remaining utility-occupancy.
\end{property}
\begin{proof}
No matter which transaction $T_q$ pattern $X$ exists in, it is simple to obtain $\textit{rruo}(X, T_q)$ $\leq$ $\textit{ruo}(X, T_q)$. Hence, we can conclude that $\textit{rruo}(X)$ $ \leq $ $\textit{ruo}(X)$, the detailed proof of which is shown below:
\begin{tabbing}
$rruo(X)$ \=
= $\dfrac{\sum_{X \in T_q, T_q \in \mathcal{D}}{\textit{rruo}(X, T_q)}}{|\varGamma_X|}$ \\
\>$ \leq $ $\dfrac{\sum_{X \in T_q, T_q \in \mathcal{D}}{\textit{ruo}(X, T_q)}}{|\varGamma_X|}$ \\
\> = $ruo(X)$.
\end{tabbing}
\end{proof}
We have found that the value of $\textit{rruo}(X)$ is smaller than that of $\textit{ruo}(X)$, and the next step is to determine how this property can be applied to find a tighter upper bound for the extension of pattern $X$. Before that, we should first build a data structure to store necessary information in the database.
\subsection{Revised List Structure for Storing Information}
In the previous subsections, we introduced the basic concepts of the utility-occupancy and the revised remaining utility-occupancy. In this subsection, two compact data structures, called the UO-nlist and the FUO-table, are designed to maintain the essential information and to avoid scanning the database multiple times. Here, the "nlist" denotes a nested list, i.e., a list whose entries themselves contain a list. The specific details are described below.
\begin{definition}
\label{def_16}
\rm (\textit{UO-nlist}) The UO-nlist related to a pattern $X$ is composed of several tuples, where each tuple corresponds to one transaction in which $X$ occurs. Let $X$ be a pattern appearing in transaction $T_q$. We then define a tuple as consisting of three elements, namely, a transaction identifier $\textit{tid}$, the utility-occupancy of $X$ in $T_q$ (abbreviated as $\textit{uo}(X, T_q)$), and the largest utility-occupancy $\textit{luo}(X, T_q)$. As defined in Definition \ref{def_14}, $\textit{luo}(X, T_q)$ is a list recording the \textit{maxlen} - $|X|$ largest utility-occupancy values of the items after $X$ in $T_q$. Therefore, each tuple of the UO-nlist can be written as ($\textit{tid}$, $uo(X, T_q)$, $luo(X, T_q)$).
\end{definition}
For example, consider $ca$ in $T_6$, as shown in TABLE \ref{table:db}, and let \textit{maxlen} be 3. It is possible to obtain $uo(ca, T_6)$ = (\$6 + \$6) / \$60 = 0.2 and \textit{maxlen} - $|ca|$ = 3 - 2 = 1. Next, $luo(ca, T_6)$ = $\{uo(e, T_6)\}$ = $\{0.5\}$. Thus, the UO-nlist entry of $ca$ in $T_6$ is \{$T_6$, 0.2, \{0.5\}\}. After scanning the database once, the UO-nlist of each item is constructed. For more details, refer to Fig. \ref{fig:UO-nlist}, where the items are listed in the support-ascending order.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.55]{figs/uonlist.pdf}
\caption{Constructed UO-nlists of each item in Table \ref{table:db}.}
\label{fig:UO-nlist}
\end{figure}
Although the UO-nlist already contains all information necessary for mining flexible HUOPs, it is costly to recompute the support, the utility-occupancy, and the revised remaining utility-occupancy of a pattern from the list each time they are needed. In this case, the execution time and overall performance of the algorithm may be compromised. To remedy this problem, we further designed a data structure called the FUO-table, which is defined as follows:
\begin{definition}
\label{def_17}
\rm \textit{(Frequency-utility-occupancy table, FUO-table)} The FUO-table of a pattern $X$ consists of three elements, which are extracted from the corresponding UO-nlist. Among them, the support \textit{sup} equals the number of tuples, $uo$ is the average utility-occupancy of $X$ over the UO-nlist, and \textit{rruo} is the sum of all values in the \textit{luo} lists divided by \textit{sup}, as defined in Definition \ref{def_15}.
\end{definition}
To understand the concept of a FUO-table, take the construction process of the FUO-table of $c$ as an example, as displayed in Fig. \ref{fig:FWTableOfC}. Because $c$ in Fig. \ref{fig:UO-nlist} appears in five transactions, \textit{sup} is equal to 5. Here, $uo$ of $c$ in FUO-table is (0.0317 + 0.0161 + 0.04 + 0.1 + 0.0676)/5 = 0.05108. Then, the calculation process of \textit{rruo} is (0.3175 + 0.3175 + 0.3387 + 0.3226 + 0.8 + 0.16 + 0.5 + 0.1667 + 0.6757 + 0.2027)/5 = 0.76028. In addition, the FUO-tables of each item are shown in Fig. \ref{fig:FUO-table}.
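Both structures can be sketched together: \textit{uo\_nlist} materializes the (\textit{tid}, $uo$, \textit{luo}) tuples of Definition \ref{def_16}, and \textit{fuo\_table} condenses them into Definition \ref{def_17}. Profits are inferred from the worked examples; exact arithmetic gives $\textit{rruo}(c) \approx 0.7603$, matching 0.76028 up to intermediate rounding:

```python
# Per-item profits inferred from the paper's worked examples (an assumption):
PROFIT = {"c": 1, "a": 3, "d": 2, "e": 10, "b": 5}
ORDER = ["c", "a", "d", "e", "b"]  # support-ascending order "prec"
MAXLEN = 3
DB = {  # revised database of TABLE 2: tid -> (item, quantity)
    1: [("c", 2), ("a", 3), ("d", 6), ("e", 2), ("b", 4)],
    2: [("c", 1), ("a", 7), ("e", 2), ("b", 4)],
    3: [("a", 5), ("e", 1), ("b", 2)],
    4: [("c", 1), ("d", 2), ("b", 4)],
    5: [("a", 2), ("d", 4)],
    6: [("c", 6), ("a", 2), ("d", 4), ("e", 3), ("b", 2)],
    7: [("a", 1), ("b", 2)],
    8: [("d", 3)],
    9: [("c", 5), ("d", 2), ("e", 5), ("b", 3)],
    10: [("e", 5), ("b", 3)],
}

def uo_nlist(pattern):
    """UO-nlist of a pattern: a (tid, uo, luo) tuple per supporting transaction."""
    entries = []
    for tid, items in DB.items():
        qty = dict(items)
        if not set(pattern) <= set(qty):
            continue
        tu = sum(PROFIT[i] * q for i, q in items)
        uo = sum(PROFIT[i] * qty[i] for i in pattern) / tu
        last = max(ORDER.index(i) for i in pattern)
        tail = sorted((PROFIT[i] * q / tu for i, q in items
                       if ORDER.index(i) > last), reverse=True)
        entries.append((tid, uo, tail[: MAXLEN - len(pattern)]))
    return entries

def fuo_table(nlist):
    """FUO-table (sup, uo, rruo) condensed from a UO-nlist."""
    sup = len(nlist)
    return (sup, sum(e[1] for e in nlist) / sup,
            sum(sum(e[2]) for e in nlist) / sup)

sup_c, uo_c, rruo_c = fuo_table(uo_nlist(("c",)))
print(sup_c, round(uo_c, 5), round(rruo_c, 4))  # 5 0.05109 0.7603
```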
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.55]{figs/uolistofitemc.pdf}
\caption{UO-nlist and FUO-table of item (c).}
\label{fig:FWTableOfC}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.64]{figs/fuotable.pdf}
\caption{Constructed FUO-tables of all items.}
\label{fig:FUO-table}
\end{figure}
Because promising or potential HUOPs must also be frequent patterns, the initial UO-nlists and FUO-tables of the frequent 1-itemsets are constructed after scanning the database twice. To construct the storage containers of $k$-itemsets ($k > 1$), it is recommended to extend the already established subsets instead of scanning the database repeatedly in a similar manner. Next, Algorithm 1 describes how subsets with the same prefix are used to build their extensions. To obtain the pattern $X_{ab}$, the prefix $X$ and its two extensions $X_a$ and $X_b$ ($a \prec b$) are used. For simplicity, we denote the UO-nlist of $X_{ab}$ as \textit{X$_{ab}$.UONL} and the FUO-table of $X_{ab}$ as \textit{X$_{ab}$.FUOT}. Lines 2 and 3 and lines 18–22 of the algorithm check whether $X_a$ and $X_b$ appear in the same entry. Each time an entry fails this check, the remaining support of $X_a$ is reduced by 1; once the value falls below $\alpha \times |\mathcal{D}|$, the construction is abandoned. The inherent reason is that when the support of an extension of a pattern cannot satisfy the minimum support threshold, the extension cannot be a HUOP; being frequent is a necessary but insufficient condition. Lines 4–17 illustrate the generation of $X_{ab}$ when the construction conditions are met. If $X$ is the empty set, the utility-occupancy of $X_{ab}$ is the sum of those of $X_a$ and $X_b$. Otherwise, the result equals the sum of the utility-occupancy values of $X_a$ and $X_b$ minus that of $X$, because the latter is counted twice in this sum. Moreover, the \textit{luo} of $X_{ab}$ is the same as that of $X_b$ owing to the latter's position in the order. In addition, the \textit{rruo} of $X_{ab}$ is the sum of the values in each $luo$, divided by the support of $X_{ab}$. Thus far, the approach of building $(k+1)$-itemsets from $k$-itemsets has been realized.
\begin{algorithm}
\label{Construction}
\caption{Construction($ X $, $ X_{a} $, $ X_{b} $)}
\begin{algorithmic}[1]
\REQUIRE $X$, a pattern with its corresponding \textit{UO-nlist} and \textit{FUO-table}; $ X_{a} $, an extension of $X$ with an item $a$; $ X_{b} $, an extension of $X$ with an item $b$.
\ENSURE $ X_{ab} $.
\STATE set \textit{X$_{ab}$.UONL} $\leftarrow$ $\emptyset $, $ X_{ab}.\textit{FUOT}$ $\leftarrow$ $\emptyset $;
\STATE set \textit{supUB} = \textit{X$_{a}$.FUOT.sup};
\FOR {each tuple $ E_{a}$ $\in$ \textit{X$_{a}$.UONL} }
\IF {$ \exists E_{b} \in X_{b}.\textit{UONL}$ $\wedge$ $E_{a}.tid$ == $E_{b}.tid $}
\IF{\textit{X.UONL} $\neq$ $\emptyset $}
\STATE search for $ E \in$ \textit{X.UONL}, $E.tid$ = $E_{a}.tid $;
\STATE $E_{ab}$ $\leftarrow$ $<$$E_{a}.tid$, $E_{a}.uo$ + $E_{b}.uo$ - $E.uo$, \textit{E$_{b}$.luo}$>$;
\STATE \textit{$X_{ab}$.FUOT.uo} += $E_{a}.uo$ + $E_{b}.uo$ - $E.uo $;
\ELSE
\STATE $E_{ab}$ $\leftarrow$ $<$$E_{a}.tid$, $E_{a}.uo$ + $E_{b}.uo$, \textit{$E_{b}$.luo}$>$;
\STATE \textit{$X_{ab}$.FUOT.uo} += $ E_{a}.uo$ + $E_{b}.uo $;
\ENDIF
\FOR {each \textit{value} $\in$ \textit{E$_{ab}$.luo} }
\STATE \textit{$ X_{ab}$.FUOT.rruo} += \textit{value};
\ENDFOR
\STATE \textit{$X_{ab}$.UONL} $\leftarrow$ $X_{ab}.\textit{UONL}$ $\cup$ $E_{ab} $;
\STATE \textit{X$_{ab}$.FUOT.sup} ++;
\ELSE
\STATE \textit{supUB} - -;
\IF{\textit{supUB} $< \alpha \times |D| $}
\STATE \textbf{return} \textit{null};
\ENDIF
\ENDIF
\ENDFOR
\STATE $ X_{ab}.\textit{FUOT.uo}$ = $\dfrac{X_{ab}.\textit{FUOT.uo}}{X_{ab}.\textit{FUOT.sup}}$;
\STATE $X_{ab}.\textit{FUOT.rruo}$ = $\dfrac{X_{ab}.\textit{FUOT.rruo}}{X_{ab}.FUOT.sup}$;
\STATE \textbf{return} $ X_{ab} $
\end{algorithmic}
\end{algorithm}
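The join of Algorithm 1 can be sketched as follows. One deliberate deviation, reflecting our reading of the worked example of $(ca)$ in $T_6$, is that the inherited \textit{luo} list is trimmed to \textit{maxlen} - $|X_{ab}|$ values, whereas the pseudocode copies \textit{E$_b$.luo} unchanged. Profits are inferred from the worked examples:

```python
# Per-item profits inferred from the paper's worked examples (an assumption):
PROFIT = {"c": 1, "a": 3, "d": 2, "e": 10, "b": 5}
ORDER = ["c", "a", "d", "e", "b"]  # support-ascending order "prec"
MAXLEN = 3
DB = {  # revised database of TABLE 2: tid -> (item, quantity)
    1: [("c", 2), ("a", 3), ("d", 6), ("e", 2), ("b", 4)],
    2: [("c", 1), ("a", 7), ("e", 2), ("b", 4)],
    3: [("a", 5), ("e", 1), ("b", 2)],
    4: [("c", 1), ("d", 2), ("b", 4)],
    5: [("a", 2), ("d", 4)],
    6: [("c", 6), ("a", 2), ("d", 4), ("e", 3), ("b", 2)],
    7: [("a", 1), ("b", 2)],
    8: [("d", 3)],
    9: [("c", 5), ("d", 2), ("e", 5), ("b", 3)],
    10: [("e", 5), ("b", 3)],
}

def uo_nlist(pattern):
    """Initial UO-nlist of a pattern, built by one database scan."""
    entries = []
    for tid, items in DB.items():
        qty = dict(items)
        if not set(pattern) <= set(qty):
            continue
        tu = sum(PROFIT[i] * q for i, q in items)
        uo = sum(PROFIT[i] * qty[i] for i in pattern) / tu
        last = max(ORDER.index(i) for i in pattern)
        tail = sorted((PROFIT[i] * q / tu for i, q in items
                       if ORDER.index(i) > last), reverse=True)
        entries.append((tid, uo, tail[: MAXLEN - len(pattern)]))
    return entries

def construct(nl_x, nl_a, nl_b, new_len, min_sup_count):
    """Join the UO-nlists of two extensions X_a, X_b (a prec b) of a common
    prefix X without rescanning the database; new_len = |X_ab|."""
    tup_b = {tid: (uo, l) for tid, uo, l in nl_b}
    uo_x = {tid: uo for tid, uo, _ in nl_x}   # empty when X is the empty set
    out, sup_ub = [], len(nl_a)
    for tid, uo_a, _ in nl_a:
        if tid in tup_b:
            uo_b, luo_b = tup_b[tid]
            # uo(X) is contained in both uo(X_a) and uo(X_b): subtract once.
            out.append((tid, uo_a + uo_b - uo_x.get(tid, 0.0),
                        luo_b[: MAXLEN - new_len]))  # trimmed luo (our choice)
        else:
            sup_ub -= 1                       # Strategy 3: early abandoning
            if sup_ub < min_sup_count:
                return None
    return out

nl_ca = construct([], uo_nlist(("c",)), uo_nlist(("a",)), 2, 1)
e6 = [e for e in nl_ca if e[0] == 6][0]
print(sorted(e[0] for e in nl_ca))  # [1, 2, 6]
print(round(e6[1], 2), e6[2])       # 0.2 [0.5]
```

The entry for $T_6$ reproduces the worked UO-nlist of $(ca)$, i.e., \{$T_6$, 0.2, \{0.5\}\}.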
\subsection{Length-based Upper-Bound on Utility-Occupancy}
To the best of our knowledge, utility and utility-occupancy do not possess a downward closure property such as that of the frequency, which poses a significant challenge for improving the performance of the algorithm and pruning the search space. Although Gan \textit{et al.} \cite{gan2019huopm} designed an upper bound on the utility-occupancy in the HUOPM algorithm, this bound is relatively loose for patterns with a length constraint. Thus, to achieve a tighter upper bound, we propose the LUB strategy.
\begin{definition}
\label{def_18}
\rm \textit{(support count tree, $SC$-tree)}
In this study, the support-ascending order on items is applied throughout the algorithm and denoted as $\prec$. For convenience, a set-enumeration tree, called an $SC$-tree, with the order $\prec$ is built to simulate the depth-first search path.
\end{definition}
For a clearer description, refer to \cite{gan2019huopm}. For example, the extension nodes of $(ea)$ consist of (\textit{eab}), (\textit{ead}), and (\textit{eac}). If no action is taken, all possible nodes in the $SC$-tree will be visited and their corresponding UO-nlists and FUO-tables constructed, resulting in massive space consumption. Suppose a pattern $X$ exists in transaction $T_q$, namely, $X \subseteq T_q$. After the database is revised in the order of $\prec$, the set of items appearing after $X$ in the transaction can be written as $T_q/X$. Inspired by the HUOPM algorithm \cite{gan2019huopm}, we propose the following lemma.
\begin{lemma}
\label{lemma_1}
\rm Let $Y$ be an extension node of $X$, let $\varGamma_X$ be the collection of transactions containing $X$, and let $|\mathcal{D}|$ be the size of the given database. Here, \textit{top} and $\downarrow$ indicate that the values of $uo$ + \textit{rruo} of the tuples are sorted in descending order and only the top $\alpha$ $\times$ $|\mathcal{D}|$ of them are summed in the numerator. The upper bound on the utility-occupancy of $Y$ can be calculated as follows:
\begin{equation}
\hat{\phi}(Y) = \dfrac{\sum_{top \, \alpha \times |\mathcal{D}|, T_q \in \varGamma_X}\{uo(X,T_q) + rruo(X,T_q)\}^{\downarrow}}{\alpha \times |\mathcal{D}|}.
\end{equation}
\end{lemma}
\begin{proof}
\label{pro_2}
\rm Because $Y$ is obtained by appending items behind $X$, the equation $Y$/$X$ $\subseteq$ $T_q$/$X$ is derived. We can then achieve $ \sum{uo(Y / X, T_q)}$ $\leq$ $\sum{uo(T_q / X, T_q)} $. In addition, because the length of the required HUOPs cannot exceed \textit{maxlen}, the largest utility-occupancy and revised remaining utility-occupancy in a transaction are applied to handle a tighter upper bound. For an easier formula derivation, we write $Y \subseteq T_q \wedge T_q \in \mathcal{D}$ as $Z$ and \textit{maxlen} as $M$.
\begin{tabbing}
$ uo(Y)$ \=
$= \dfrac{\sum_{Z \wedge |Y| \leq M}uo(Y,T_q)}{|\varGamma_Y|} $\\
\>$ = \dfrac{\sum_{Z\wedge |Y/X| \leq M - |X|}(uo(X,T_q) + uo(Y/X,T_q))}{|\varGamma_Y|} $\\
\>$ \leq \dfrac{\sum_{Z \wedge |T_q/X| \leq M - |X|}(uo(X,T_q) + uo(T_q/X,T_q))}{|\varGamma_Y|} $\\
\>$ \leq \dfrac{\sum_{Z}\{uo(X,T_q) + \sum_{v \in luo(X, T_q)}(v)\}}{|\varGamma_Y|} $\\
\>$ = \dfrac{\sum_{Z}\{uo(X,T_q) + rruo(X, T_q)\}}{|\varGamma_Y|} $\\
$\Longrightarrow uo(Y) \leq \dfrac{\sum_{Z}\{uo(X,T_q) + rruo(X, T_q)\}}{|\varGamma_Y|} $.
\end{tabbing}
As the most critical step in the derivation above, $uo(T_q/X$, $T_q)$ restricted to at most \textit{maxlen} - $|X|$ items is no greater than the sum of all values in $luo(X$, $T_q)$, because the \textit{maxlen} - $|X|$ values in $luo(X$, $T_q)$ are the largest utility-occupancy values among the items behind $X$.
All supporting transactions of $Y$ extending $X$ are collected in $\varGamma_Y$; however, the value of $|\varGamma_Y|$ cannot be obtained until the entire UO-nlist of $Y$ has been constructed. Therefore, it is contradictory to use the above formula directly as the upper bound on the utility-occupancy of $Y$: our purpose is to minimize the construction of the list structures, whereas this formula requires a completed UO-nlist. In addition, when judging whether a pattern has high utility-occupancy, we must first determine whether it is a frequent pattern and, if so, continue to judge whether its utility-occupancy meets the requirement. Hence, the inequalities $\alpha \times |\mathcal{D}| \leq SC(Y) \leq SC(X)$ hold. According to the discussion above, it is therefore appropriate to replace the sum of $uo$ and \textit{rruo} over all supporting transactions of $Y$ with the sum over the top $\alpha \times |\mathcal{D}|$ supporting transactions of $X$. The above formula can thus be relaxed into a computable upper bound as follows.
\begin{tabbing}
$uo(Y) \leq \dfrac{\sum_{Z}\{uo(X,T_q) + \textit{rruo}(X, T_q)\}}{|\varGamma_Y|} $.\\
$ uo(Y) \leq \dfrac{\sum_{top \, \alpha \times |\mathcal{D}|, T_q \in \varGamma_X}\{uo(X,T_q) + \textit{rruo}(X,T_q)\}^{\downarrow}}{\alpha \times |\mathcal{D}|} $\\
$\Longrightarrow uo(Y) \leq \hat{\phi}(Y) $.
\end{tabbing}
\end{proof}
Thus, given a pattern $X$, we can calculate the upper bounds $\hat{\phi}(Y)$ regarding the utility-occupancy with the length constraints of the nodes of all subtrees rooted at $X$.
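The computation of $\hat{\phi}$ (later formalized as the LengthUpperBound procedure) can be sketched as follows; rounding $\alpha \times |\mathcal{D}|$ up when it is fractional is our assumption, and $\alpha$ = 0.5 is chosen only for illustration. Profits are inferred from the worked examples:

```python
import math

# Per-item profits inferred from the paper's worked examples (an assumption):
PROFIT = {"c": 1, "a": 3, "d": 2, "e": 10, "b": 5}
ORDER = ["c", "a", "d", "e", "b"]  # support-ascending order "prec"
MAXLEN = 3
DB = {  # revised database of TABLE 2: tid -> (item, quantity)
    1: [("c", 2), ("a", 3), ("d", 6), ("e", 2), ("b", 4)],
    2: [("c", 1), ("a", 7), ("e", 2), ("b", 4)],
    3: [("a", 5), ("e", 1), ("b", 2)],
    4: [("c", 1), ("d", 2), ("b", 4)],
    5: [("a", 2), ("d", 4)],
    6: [("c", 6), ("a", 2), ("d", 4), ("e", 3), ("b", 2)],
    7: [("a", 1), ("b", 2)],
    8: [("d", 3)],
    9: [("c", 5), ("d", 2), ("e", 5), ("b", 3)],
    10: [("e", 5), ("b", 3)],
}

def uo_nlist(pattern):
    """UO-nlist of a pattern: a (tid, uo, luo) tuple per supporting transaction."""
    entries = []
    for tid, items in DB.items():
        qty = dict(items)
        if not set(pattern) <= set(qty):
            continue
        tu = sum(PROFIT[i] * q for i, q in items)
        uo = sum(PROFIT[i] * qty[i] for i in pattern) / tu
        last = max(ORDER.index(i) for i in pattern)
        tail = sorted((PROFIT[i] * q / tu for i, q in items
                       if ORDER.index(i) > last), reverse=True)
        entries.append((tid, uo, tail[: MAXLEN - len(pattern)]))
    return entries

def length_upper_bound(nlist, alpha, n_trans):
    """LUB: average of the top alpha*|D| values of uo + rruo over the tuples
    of a UO-nlist (the count is rounded up when fractional: our assumption)."""
    k = math.ceil(alpha * n_trans)
    vals = sorted((uo + sum(l) for _, uo, l in nlist), reverse=True)
    return sum(vals[:k]) / k

ub = length_upper_bound(uo_nlist(("a",)), 0.5, len(DB))
print(round(ub, 4))  # 0.9523
```

Every extension of $(a)$ with at most three items thus has a utility-occupancy no greater than this bound under these thresholds.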
\subsection{Pruning Strategies and Proposed Algorithm }
This section states the pruning strategies designed to prune the search space and improve the performance of the algorithm. Moreover, the proposed algorithm is also described in detail.
\begin{strategy}
\label{stra_1}
If the support count of a pattern $X$ in the designed $SC$-tree is no less than the minimum support threshold $\alpha$ multiplied by the size of the database $|\mathcal{D}|$, then this node is a frequent pattern; otherwise, this node and the subtree rooted at it can be directly pruned.
\end{strategy}
\begin{proof}
The above strategy originates from the Apriori \cite{agrawal1994fast} algorithm, whose principle can generally be expressed as $SC(X_{k+1}) \leq SC(X_{k})$. Provided that $SC(X_{k})$ $<$ $\alpha$ $\times$ $|\mathcal{D}|$, it follows that $SC(X_{k+1})$ $<$ $\alpha$ $\times$ $|\mathcal{D}|$. Thus, node $X_{k}$ and the subtree rooted at this node can be trimmed immediately.
\end{proof}
\begin{strategy}
\label{stra_2}
In the $SC$-tree, the upper bound of a child node $Y$ of node $X$ can be calculated as soon as the UO-nlist and FUO-table corresponding to $X$ have been constructed. If the derived upper bound is less than the predefined minimum utility-occupancy threshold $\beta$, then all nodes in the subtree rooted at $X$ can be pruned.
\end{strategy}
\begin{proof}
Lemma \ref{lemma_1} establishes that the real utility-occupancy of $Y$ is certainly no greater than the upper bound $\hat{\phi}(Y)$. Thus, if the upper bound is less than the minimum utility-occupancy threshold $\beta$, then $Y$ cannot be a HUOP.
\end{proof}
\begin{strategy}
\label{stra_3}
Similar to the downward closure property used in Strategy \ref{stra_1}, each time a tuple of $X_a$ has no matching tuple in $X_b$, the remaining support of $X_a$ decreases by one; once the value falls below $\alpha \times |\mathcal{D}|$, the construction is terminated.
\end{strategy}
\begin{proof}
The proof is the same as that of Strategy \ref{stra_1}; this strategy merely applies the support constraint during the construction procedure.
\end{proof}
\begin{algorithm}
\label{HUOPM$^+$-algorithm}
\caption{HUOPM$^{+}$($\mathcal{D}$, \textit{ptable}, $\alpha$, $\beta$, \textit{minlen}, \textit{maxlen})}
\begin{algorithmic}[1]
\REQUIRE a transaction database $\mathcal{D}$, utility table \textit{ptable}, the minimum support threshold $ \alpha $, the minimum utility-occupancy threshold $\beta$, the minimum length \textit{minlen}, and the maximum length \textit{maxlen}.
\ENSURE \textit{HUOPs}.
\STATE scan $\mathcal{D}$ to calculate the $SC(i)$ of each item $ i \in I $ and the $tu$ value of each transaction;
\STATE find $ I^* \gets \left\{ i \in I | SC(i) \geq \alpha \times |\mathcal{D}| \right\} $, with respect to $ FP^1 $;
\STATE sort $ I^* $ in the designed total order $ \prec $;
\STATE using the total order $ \prec $, scan $\mathcal{D}$ once to build the UO-nlist and FUO-table for each 1-item $ i \in I^*$;
\IF{\textit{maxlen} $\geq$ 1}
\STATE \textbf{call \textit{HUOP$^+$-Search}}($\phi$, $I^*$, $\alpha$, $\beta$, \textit{minlen}, \textit{maxlen}).
\ENDIF
\STATE \textbf{return} \textit{HUOPs}
\end{algorithmic}
\end{algorithm}
We introduced three feasible pruning strategies above. Next, the overall description concerning the proposed algorithm is presented as follows.
\begin{algorithm}
\label{HUOP$^+$-Search procedure}
\caption{HUOP$^+$-Search($X$, \textit{extenOfX}, $\alpha$, $\beta$, \textit{minlen}, \textit{maxlen})}
\begin{algorithmic}[1]
\FOR {each itemset $ X_{a}\in $ \textit{extenOfX}}
\STATE obtain $ SC(X_a) $ and $ uo(X_a) $ from the $ X_{a}.\textit{FUOT} $;
\IF{$ SC(X_a)$ $\geq \alpha$ $ \times |\mathcal{D}| $}
\IF{$ uo(X_a)$ $\geq \beta \wedge$ $|X_a|\geq minlen$ }
\STATE \textit{HUOPs} $\leftarrow$ \textit{HUOPs} $\cup$ $X_{a} $;
\ENDIF
\STATE $ \hat{\phi}(X_a) \leftarrow $ \textbf{\textit{LengthUpperBound}}($X_a.UONL, \alpha) $;
\IF{$ \hat{\phi}(X_a) \geq \beta $}
\STATE \textit{extenOfX}$_{a}\leftarrow \emptyset $;
\FOR {each $ X_{b}$ $\in$ \textit{extenOfX} that $ X_{a} $ $ \prec $ $ X_{b} $}
\STATE $ X_{ab}$ $\leftarrow$ $X_{a} \cup X_{b} $;
\STATE call \textbf{\textit{Construction}}($X, X_{a}, X_{b}) $;
\IF{$ X_{ab}.\textit{UONL}$ $\not= $ $\emptyset$ $\wedge SC(X_{ab})$ $\geq \alpha$ $\times |\mathcal{D}| $}
\STATE \textit{extenOfX}$_{a} \leftarrow$ $\textit{extenOfX}_{a} \cup X_{ab} $;
\ENDIF
\ENDFOR
\IF {$|X| + 2 \leq maxlen$}
\STATE \textbf{call \textit{HUOP$^+$-Search}}($X_{a}$, \textit{extenOfX}$_{a}$, $\alpha$, $\beta$, \textit{minlen}, \textit{maxlen});
\ENDIF
\ENDIF
\ENDIF
\ENDFOR
\STATE \textbf{return} \textit{HUOPs}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\label{LengthUpperBound procedure}
\caption{LengthUpperBound(\textit{X$_a$.UONL}, $ \alpha $)}
\begin{algorithmic}[1]
\STATE \textit{sumTopK} $\leftarrow 0$, $\hat{\phi}(X_a)$ $\leftarrow 0$, $V_{occu}$ $\leftarrow$ $\emptyset $;
\STATE calculate $uo(X_a,T_q)$ + \textit{rruo}($X_a,T_q$) of each tuple from the built \textit{X$_{a}$.UONL} and put the values into the set $ V_{occu} $;
\STATE sort $ V_{occu} $ by descending order as $ V_{occu}^{\downarrow} $;
\FOR {$ k \leftarrow $ 1 to $\alpha \times |\mathcal{D}| $ in $ V_{occu}^{\downarrow} $}
\STATE \textit{sumTopK} $ \leftarrow$ \textit{sumTopK} + $V_{occu}^{\downarrow}[k] $;
\ENDFOR
\STATE $ \hat{\phi}(X_a)$ = $\dfrac{\textit{sumTopK}}{\alpha \times |\mathcal{D}|} $.
\STATE \textbf{return} $ \hat{\phi}(X_a) $
\end{algorithmic}
\end{algorithm}
The main objective of the HUOPM$^+$ algorithm is to mine flexible high utility-occupancy patterns whose length lies within a certain range. The pseudocode of the entire algorithm is shown in Algorithm 2. First, six parameters are taken as the input of the algorithm, i.e., a transaction database $\mathcal{D}$, a profit table abbreviated as \textit{ptable}, the minimum support threshold $\alpha$ (0 $< \alpha \leq 1)$, the minimum utility-occupancy threshold $\beta$ (0 $< \beta \leq 1)$, the minimum length of the patterns \textit{minlen}, and the maximum length of the patterns \textit{maxlen} $(0 \leq $ \textit{minlen} $\leq $ \textit{maxlen}). The database is scanned for the first time to calculate the support of each item and the utility of each transaction. Then, the items in the transactions are rearranged in the support-ascending order ($\prec$), yielding the revised database. Next, the revised database is scanned again to establish the initial UO-nlists and FUO-tables, which are the basis of the subsequent search process. Finally, if the required maximum length \textit{maxlen} is no less than 1, the search procedure is executed.
To allow the search process in Algorithm 3 to proceed, we input a prefix pattern $X$; a collection \textit{extenOfX} of extensions of $X$, which initially consists of the distinct frequent items; $\alpha$; $\beta$; \textit{minlen}; and \textit{maxlen}. For a pattern $X_a$ in \textit{extenOfX} to be output as a HUOP, it must satisfy three conditions: the support and utility-occupancy of $X_a$ are no less than the thresholds $\alpha$ and $\beta$, respectively, and the length of $X_a$ is greater than or equal to \textit{minlen}. If the support of $X_a$ does not meet the restriction, it is skipped directly, and the next pattern in the sequence is checked. Next, the upper bound computed by Algorithm 4 determines whether the extension nodes of $X_a$ should be explored in a later calculation. If the upper bound $ \hat{\phi}(X_a) $ is no less than $\beta$, the extensions of $X_a$ and their corresponding UO-nlists and FUO-tables are established through the construction procedure described in Algorithm 1. For example, $X_{ab}$ is developed by merging $X_a$ with $X_b$ using the process described above. If the support of $X_{ab}$ meets the given requirement, it is placed in \textit{extenOfX}$_a$ for the next round. Finally, while the length of the extensions is less than \textit{maxlen}, the mining procedure recurses; otherwise, it returns.
\section{Experiments}
\label{sec:experiments}
To evaluate the performance of the proposed HUOPM$^+$ algorithm, this section describes experiments conducted in comparison with the state-of-the-art utility-occupancy-based HUOPM algorithm.
\textit{Experimental environment}. First, the parameters of the computer used are given to help readers reproduce the experiments. The computer runs Windows 10 with 7.88 GB of free RAM. We compared the proposed HUOPM$^+$ algorithm with the state-of-the-art HUOPM algorithm for discovering the HUOPs. Both algorithms are implemented in Java.
\textit{Parameter settings}. As its main innovation, the algorithm controls the length of the output patterns; thus, to test this advantage, we varied the maximum pattern length from one to five and compared against HUOPM, which outputs all desired patterns without a length constraint. The minimum length is not varied but fixed to the constant 1 because, even if the minimum length is greater than 1 while the maximum length is unchanged, the efficiency and performance of the algorithm are essentially unaffected; setting the minimum length merely ensures that the proposed algorithm outputs only patterns of at least that length. The HUOPM$^+$ algorithm mainly involves two thresholds, on support and on utility-occupancy. The following experiments analyze the runtime, visited nodes, and discovered patterns for different maximum lengths. For simplicity, we denote the minimum support threshold as \textit{minsup} and the minimum utility-occupancy threshold as \textit{minuo}. In the legends, HUOPM$^+$ with maximum length \textit{maxlen} is recorded as HUOPM$^{+}$\textit{maxlen}. For example, HUOPM$^{+}$5 implies that the maximum length of the discovered patterns is 5.
\subsection{Tested Datasets}
Four standard datasets are used to test the efficiency of the compared algorithms in terms of runtime, visited nodes (i.e., memory consumption), and discovered patterns. Three real-life datasets and one synthetic dataset were used, namely, retail, mushroom, kosarak, and T40I10D100K, respectively. It is worth noting that these datasets have different characteristics, so that all aspects of the two algorithms (HUOPM and HUOPM$^{+}$) can be compared. We adopted a simulation method widely used in previous studies \cite{gan2019huopm,liu2012mining} to generate the quantity and unit utility of each object/item in each dataset. Next, we introduce these datasets in detail.
\begin{itemize}
\item \textbf{retail\footnotemark[1]}: There are 88,162 transactions and 16,470 distinct items in the retail dataset, and its longest transaction contains 76 items. Because the transactions are short on average relative to the large number of distinct items, retail is a sparse dataset.
\item \textbf{mushroom\footnotemark[1]}: This is a dense dataset, which has 8,124 transactions and 119 distinct items.
\item \textbf{kosarak\footnotemark[1]}: This has 990,002 transactions and 41,270 distinct items; note that its longest transaction is relatively long, reaching up to 2,498 items, making it a large dataset with some very long transactions.
\item \textbf{T40I10D100K\footnotemark[2]}: This is a synthetic dataset with 942 distinct items and 100,000 transactions.
\end{itemize}
\footnotetext[1]{\url{http://www.philippe-fournier-viger.com/spmf/index.php?link=datasets.php}}
\footnotetext[2]{\url{http://fimi.uantwerpen.be/data/}}
\subsection{Runtime Analysis}
Let us first analyze the runtime of the compared algorithms. Figs. \ref{fig:runtimeMS} and \ref{fig:runtimeMU} show how the runtime changes as the minimum support threshold \textit{minsup} and the minimum utility-occupancy threshold \textit{minuo} gradually increase. HUOPM$^{+}$1 to HUOPM$^{+}$5 indicate that the maximum length of the patterns varies from 1 to 5. In general, it can be observed that, as any of the thresholds increases, the execution time of each algorithm becomes increasingly short. In addition, when the maximum length of the discovered patterns is set to different integers, the compared runtimes differ clearly. The runtime of the HUOPM algorithm can reach 2 to 5 times that of HUOPM$^+$, which is crucial to the efficiency of the algorithms. The reason for this phenomenon is that the HUOPM$^+$ algorithm tightens the upper bound of the candidate patterns, reduces the number of traversed nodes, and directly outputs the desired patterns. Comparing the subplots in Figs. \ref{fig:runtimeMS} and \ref{fig:runtimeMU}, the proposed algorithm performs better on dense datasets than on sparse datasets, probably because dense datasets contain more long patterns and plenty of patterns are pruned by the designed utility-occupancy upper bound.
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.03 \linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/runtimems.pdf}
\caption{Runtime under a changed $minsup$ with a fixed $minuo$}
\label{fig:runtimeMS}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.03 \linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/runtimemu.pdf}
\caption{Runtime under a changed $minuo$ with a fixed $minsup$}
\label{fig:runtimeMU}
\end{figure}
\subsection{Visited Node Analysis}
In this subsection, we consider the number of visited nodes; that is, we discuss the memory consumption. It is common knowledge that, owing to the limitations of current storage technology, memory consumption still accounts for a large proportion of algorithm performance. Therefore, the fewer nodes visited during the process, the lower the memory consumption of the algorithm.
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.05\linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/memoryms.pdf}
\caption{Visited nodes under a changed $minsup$ with a fixed $minuo$}
\label{fig:memoryMS}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.03\linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/memorymu.pdf}
\caption{Visited nodes under a changed $minuo$ with a fixed $minsup$}
\label{fig:memoryMU}
\end{figure}
Figs. \ref{fig:memoryMS} and \ref{fig:memoryMU} respectively show how the number of visited nodes changes as the support and utility-occupancy thresholds change. Clearly, the number of nodes visited by the HUOPM algorithm is greater than or equal to that of HUOPM$^+$. In particular, the smaller the maximum length set in HUOPM$^+$, the fewer nodes are visited. For example, in Fig. \ref{fig:memoryMS}, the minimum support of \textit{kosarak} is set to 0.0016, and the minimum utility-occupancy is set to 0.01. It can be seen that, even when the maximum length is 5, the number of nodes visited by the HUOPM$^+$ algorithm is about one-third that of the HUOPM algorithm.
These results show a clear difference in the actual memory consumption of the compared algorithms. They also illustrate the effectiveness of the length-constrained mining model proposed in this paper: by setting different length bounds, we can not only mine patterns that better match user needs but also further reduce the actual memory consumption of the algorithm.
\subsection{Pattern Analysis}
The main goal of this study is the flexible mining of high utility-occupancy patterns, achieved by imposing length constraints during the mining process. Patterns outside the specified length range are never traversed, so the number of patterns found is much smaller than that of HUOPM.
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.02\linewidth,trim=80 0 50 0,clip,scale=0.35]{figs/patternms.pdf}
\caption{Patterns under a changed \textit{minsup} with a fixed \textit{minuo}}
\label{fig:patternMS}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[height=0.3\textheight,width=1.02\linewidth,trim=80 0 50 0,clip,scale=0.36]{figs/patternmu.pdf}
\caption{Patterns under a changed \textit{minuo} with a fixed \textit{minsup}}
\label{fig:patternMU}
\end{figure}
From the detailed results in Figs. \ref{fig:patternMS} and \ref{fig:patternMU}, we can observe that the required patterns are often far fewer than the total number of patterns. Using the proposed algorithm, we can not only directly output the required patterns but also reduce the time required to process the data. For example, as shown in Fig. \ref{fig:patternMU}(b), $\alpha$ was set to 0.0001, and $\beta$ was varied from 0.1 to 0.35 in increments of 0.05. The numbers of patterns obtained by HUOPM$^+$1, HUOPM$^+$2, HUOPM$^+$3, HUOPM$^+$4, and HUOPM$^+$5 are 1,350, 20,946, 65,223, 98,166, and 110,676, respectively. In particular, the number of patterns of the mushroom dataset in Fig. \ref{fig:patternMU} with no length constraint is more than 10 times that with \textit{maxlen} set to 5. From the two figures, we also notice that on sparse datasets the number of patterns extracted under length constraints does not differ significantly from the full number of patterns, whereas the opposite holds on dense datasets. This is because the length constraint plays a stronger role in dense datasets.
\section{Conclusion and Future Studies}
\label{sec:conclusion}
This paper proposes a novel algorithm called HUOPM$^+$, aimed at mining flexible high utility-occupancy patterns. It integrates length constraints into the state-of-the-art HUOPM algorithm. In particular, instead of simply stopping the pattern-growth process, it embeds the length constraints deep in the procedure and narrows the upper bound by introducing the concept of a length upper-bound, which is one of the merits of the proposed algorithm. In addition, UO-nlist and FUO-table are designed to maintain the information in the database. The experimental results confirm that the proposed strategies can indeed discover HUOPs within a given length range, from \textit{minlen} to \textit{maxlen}, and greatly reduce the memory consumption and execution time. In future work, we will apply the pattern length constraint to other utility mining algorithms, such as utility mining in dynamic profit databases \cite{nguyen2019mining}, utility-driven sequence mining \cite{gan2020proum}, and nonoverlapping pattern mining \cite{wu2021ntp}.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
An {\it ornament} is a continuous map $f=\bigsqcup_{i=1}^n f_i$ from $X=\bigsqcup_{i=1}^n X_i$ to $Y$ that has
no {\it $i=j=k$ points}, i.e.\ $f(X_i)\cap f(X_j)\cap f(X_k)=\emptyset$, whenever $i$, $j$ and $k$ are
pairwise distinct.
Note that $f$ is allowed to have {\it triple points} $f(x)=f(y)=f(z)$, where $x,y,z$ belong to one or two of
the $X_i$'s.
We are interested in ornaments up to {\it ornament homotopy}, i.e.\ homotopy through ornaments.
Ornaments of circles in the plane were introduced by Vassiliev \cite{Va1} as a generalization of doodles,
previously studied by Fenn and Taylor \cite{FT}.
Fenn and Taylor additionally required each circle to be embedded; however, Khovanov \cite{Kh} redefined doodles
as triple point free maps of circles in the plane, and Merkov proved that doodles in Khovanov's sense are
classified by their finite-type invariants \cite{Mer}.
Further references and examples can be found in \cite{M4'}, which is a more thorough companion paper to
this brief note.
The problem of classification of ornaments of spheres in $\R^m$ is motivated, in particular, by geometric
and algebraic constructions that go from link maps and their ``quadratic'' invariants to ornaments and their
``linear'' invariants; and conversely \cite{M4'}.
Link maps are, in turn, related to links by the Jin suspension and its variations, which likewise reduce some
``quadratic'' invariants of links to ``linear'' invariants of link maps \cite{M3}, \cite{Sk}*{\S3}.
\section{$\mu$-invariant}
We will consider only ornaments of the form $X_1\sqcup X_2\sqcup X_3\to\R^m$.
If $f=f_1\sqcup f_2\sqcup f_3$ is such an ornament, let $F$ be the composition
\[X_1\x X_2\x X_3\xr{f_1\x f_2\x f_3}\R^m\x\R^m\x\R^m\but\Delta_{\R^m}\xr{\simeq}S^{2m-1},\]
where $\Delta_{\R^m}=\{(x,x,x)\mid x\in\R^m\}$ and the homotopy equivalence is given, for instance, by
$(x,y,z)\mapsto\frac{(2x-y-z,\,2y-x-z)}{||(2x-y-z,\,2y-x-z)||}$.
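As a sanity check (not part of the argument), one can verify numerically that the numerator $(2x-y-z,\,2y-x-z)$ vanishes exactly on the diagonal $\Delta_{\R^m}$, so the normalization is well defined off $\Delta_{\R^m}$; a minimal sketch:

```python
import numpy as np

m = 3  # ambient dimension (any m works the same way)
I = np.eye(m)
# Block matrix of the linear map (x, y, z) -> (2x - y - z, 2y - x - z)
A = np.block([[2 * I, -I, -I],
              [-I, 2 * I, -I]])  # shape (2m, 3m)

# Kernel dimension equals 3m - rank(A); it should be m (the diagonal)
ker_dim = 3 * m - np.linalg.matrix_rank(A)
print(ker_dim)  # m

# Any diagonal vector (x, x, x) lies in the kernel
x = np.random.randn(m)
v = np.concatenate([x, x, x])
print(np.allclose(A @ v, 0))  # True
```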
Let $\mu(f)\in H^{2m-1}(X_1\x X_2\x X_3)$ be the image
under $F^*$ of a fixed generator $\xi\in H^{2m-1}(S^{2m-1})$; to be precise, let us choose $\xi$ to correspond to
the orientation of $S^{2m-1}$ given by its inwards co-orientation in the standardly oriented $\R^{2m}$.
Clearly, $\mu(f)$ is invariant under ornament homotopy.
Let us now assume that each $X_i$ is a connected closed oriented $(2k-1)$-manifold and $m=3k-1$.
Then $F$ is a map between connected closed oriented $(6k-3)$-manifolds, and so $\mu(f)$ is an integer.
In this simplest case, assuming additionally that each manifold $X_i$ is either PL or smooth, one can compute
$\mu(f)$ as follows.
First let us note that since each $X_i$ is compact, for each ornament $f\:X\to\R^m$ there exists an $\eps>0$ such that
every map $f'\:X\to\R^m$, $\eps$-close to $f$ (in the sup-metric), is also an ornament, and moreover the rectilinear
homotopy between $f$ and $f'$ is an ornament homotopy.
Thus we are free to replace ornaments by their generic (PL or smooth) approximations.
Similarly, ornament homotopies can be replaced by their generic approximations.
Now let us consider a homotopy between $f$ and the {\it trivial} ornament, which sends $X_1$, $X_2$ and $X_3$ to three
distinct fixed points in $\R^m$.
Its generic (PL or smooth) approximation $h_t$, if viewed as a map $X\x I\to\R^m\x I$, $(x,t)\mapsto\big(h_t(x),\,t\big)$, has only
finitely many transverse $1=2=3$ points, which are naturally endowed with signs
\footnote{Every triple point of a generic map $F\:N\to M$ from a $2k$-manifold to a $3k$-manifold corresponds
to a transversal intersection point between the $3k$-manifold $\Delta_M$ and the map $F^3\:N^3\to M^3$ from
a $6k$-manifold to a $9k$-manifold.}.
(See \cite{BRS}*{II.4} concerning PL transversality.)
The algebraic number of these $1=2=3$ points is easily seen to equal $\mu(f)$
\footnote{Each $1=2=3$ point of $h_t$ corresponds to a transversal intersection point between $\Delta_{\R^m}\x I$ and
the map $X_1\x X_2\x X_3\x I\to\R^m\x\R^m\x\R^m\x I$, $(x,y,z,t)\mapsto\big(h_t(x,t),\,h_t(y,t),\,h_t(z,t),\,t\big)$.
It is easily seen to be of the same sign.}.
\begin{example}
The inclusions of the unit disks in the coordinate $2k$-planes $\R^k\x\R^k\x 0$, $\R^k\x 0\x\R^k$ and $0\x\R^k\x\R^k$
in $\R^{3k}$ yield a smooth map $B^{2k}\sqcup B^{2k}\sqcup B^{2k}\to B^{3k}$ with one transverse $1=2=3$ point.
Restricting to the boundaries, we get the {\it Borromean} ornament $b\:S^{2k-1}\sqcup S^{2k-1}\sqcup S^{2k-1}\to S^{3k-1}$.
By stereographically projecting $S^{3k-1}$ e.g.\ from $z=\frac{1}{\sqrt{3k}}(1,\dots,1)$ we also get an ornament
$b_z\:S^{2k-1}\sqcup S^{2k-1}\sqcup S^{2k-1}\to\R^{3k-1}$.
On the other hand, the sphere of radius $\eps\sqrt{k}$ centered at $(\eps,\dots,\eps)$ for a sufficiently
small $\eps>0$ is tangent to each of the three unit $2k$-disks.
By appropriately identifying the exterior of this sphere in the unit $3k$-disk $B^{3k}$ with $S^{3k-1}\x I$,
we get a smooth homotopy of $b$, and hence also of $b_z$, to the trivial ornament.
It has one transverse $1=2=3$ point, which can be seen to be positive, and it follows that $\mu(b_z)=1$.
\end{example}
In the case of doodles, the $\mu$-invariant was introduced in \cite{FT}.
See \cite{M4'} concerning relations between the $\mu$-invariant of ornaments and the triple $\mu$-invariant of link maps.
\section{Classification}
\begin{theorem} \label{t1} Let $m=3k-1$, $k>2$ and let $X_1$, $X_2$, $X_3$ be connected closed oriented PL
$(2k-1)$-manifolds.
Then $\mu$ is a complete invariant of ornaments $X_1\sqcup X_2\sqcup X_3\to\R^m$.
\end{theorem}
The proof is in the PL category.
If the $X_i$ are smooth manifolds, the same construction with minimal (straightforward) amendments can be
carried out in the smooth category.
\begin{proof}
Let $f$ and $g$ be generic PL ornaments of $X:=X_1\sqcup X_2\sqcup X_3$ in $\R^m$ with $\mu(f)=\mu(g)$.
Let $h\:X\x I\to\R^m\x I$ be a generic PL homotopy between them.
Since $\mu(f)=\mu(g)$, the $1=2=3$ points of $h$ can be paired up with opposite signs.
Every such pair $(p^+,p^-)$ will now be canceled by a triple-point Whitney trick.
Let $p_i^\pm$ be the preimage of $p^\pm$ in $M_i:=X_i\x I$.
We first arrange that $(p_1^+,p_2^+)$ and $(p_1^-,p_2^-)$ be in the same component of the double point set
$\Delta_{12}:=\{(x,y)\in M_1\x M_2\mid h(x)=h(y)\}$ (in case that initially they are not).
To this end we pick points $(q_1^\pm,q_2^\pm)$ in the same components of $\Delta_{12}$ with
$(p_1^\pm,p_2^\pm)$ and such that the double points $f(q_1^+)=f(q_2^+)$ and
$f(q_1^-)=f(q_2^-)$ are not triple points.
Let us connect $q_1^+$ and $q_1^-$ by an arc $J_1$ in $M_1$, disjoint from the preimages of any double points
(using that $k>1$).
Now we attach a thin $1$-handle to $h(M_2)$ along the image of $J_1$.
That is, we modify $h(M_2)$ into $h'(M_2')$, where $M_2'$ is obtained from $M_2$ by removing an oriented copy of
$B^{2k}\x\partial I$ and pasting in $\partial B^{2k}\x I$.
The embedded $1$-handle $h'(\partial B^{2k}\x I)$ is constructed in a straightforward way.
Namely, since $h$ is generic, $\Delta_{12}$ is an oriented $k$-manifold, immersed into the $2k$-manifold $M_1$
by the projection $\pi\:M_1\x M_2\to M_1$.
Let us take an oriented connected sum of its components along a ribbon $r(D^k\x I)$ in $M_1$ (going near $J_1$)
\footnote{Namely, $q_1^\pm$ has a regular neighborhood $N_\pm$ in $M_1$ that is homeomorphic to $[-1,1]^{2k}$ by
an orientation preserving homeomorphism $\phi_\pm$ such that $\phi_\pm^{-1}(\pi(\Delta_{12}))=[-1,1]^k\x\{0\}^k$
and $\phi_\pm^{-1}(J_1)=\{0\}^{2k-1}\x [0,1]$.
Let $Q=[-1,1]^k\x\{0\}^{k-1}$ and let $N$ be a regular neighborhood of
$\cl{J_1\but (N_+\cup N_-)}\cup\phi_+(Q\x 1)\cup\phi_-(Q\x 1)$ in
$\cl{M_1\but (N_+\cup N_-)}$.
Since a $k$-ball unknots in the interior of a $(2k-1)$-ball, there is a homeomorphism $\psi\:[-2,2]^{2k}\to N$
such that $\psi^{-1}(\partial N_\pm)=[-2,2]^{2k-1}\x\{\pm2\}$ and $\psi(x,\pm 2)=\phi_\pm(x,1)$ for all $x\in Q$.
Then $\phi_+(Q\x I)\cup\phi_-(Q\x I)\cup\psi(Q\x [-2,2])$ is the desired ribbon $r(D^k\x I)$.}.
Then $hr(D^k\x I)$ is naturally thickened to a solid rod $R(B^{2k}\x I)$ in $\R^m\x I$ whose lateral surface
$R(\partial B^{2k}\x I)$ is the desired embedded 1-handle
\footnote{If $N_1$ is a disk neighborhood of $J_1$ that is embedded by $h$, we may assume that $h(M_2)$ is transverse to
a normal block bundle $\nu$ to $h(N_1)$, that is, $h(M_2)$ meets the total space
$E(\nu)$ in $E(\nu|_{h(N_1)\cap h(M_2)})$.
Since $\nu$ is trivial, there is a homeomorphism $R\:B^{2k}\x I\to E(\nu|_{hr(D^k\x I)})$
sending $B^{2k}\x\partial I$ onto $E(\nu|_{hr(D^k\x\partial I)})$.}.
To restore the topology of $M_2$, we cancel the $1$-handle geometrically by attaching a $2$-handle along an embedded
$2$-disk $D$, which is disjoint from $h(M_1\sqcup M_3)$ and meets $h'(M_2')$ only in $\partial D$ (such a disk
exists since $k>2$).
That is, we modify $h'(M_2')$ into $h''(M_2'')$, where $M_2''$ is obtained from $M_2'$ by removing an appropriately
embedded copy of $B^{2k-1}\x\partial D^2$ and pasting in $\partial B^{2k-1}\x D^2$.
As is well known, this can be done so that $M_2''$ is homeomorphic to $M_2$
\footnote{In more detail, let us connect $q_2^+$ and $q_2^-$ by an arc $J_2$ in $M_2$, disjoint from the preimages of
any double points.
Let $H_1$ be a small regular neighborhood of $J_1':=J_2\x 1\cup\partial J_2\x [0,1]$ in $M_2\x [0,2]$.
Let $H_2$ be a small regular neighborhood of $D':=\cl{J_2\x [0,1]\but H_1}$ in $\cl{M_2\x[0,2]\but H_1}$.
Then $M_2'$ can be identified with the frontier of $M_2\x[-1,0]\cup H_1$ in $M_2\x [-1,2]$ so that
$h'(\partial D')$ gets identified with $\partial D$; and $M_2''$ with the frontier of $M_2\x[-1,0]\cup H_1\cup H_2$ in
$M_2\x [-1,2]$, which is homeomorphic to $M_2$.}.
Since we do not care about self-intersections of individual components, we may define $h''$ on
$\partial B^{2k-1}\x D^2$ to be an arbitrary generic map into a small neighborhood of
$D\cup h'(B^{2k-1}\x\partial D^2)$.
Thus we may assume that $(p_1^+,p_2^+)$ and $(p_1^-,p_2^-)$ are in the same component of $\Delta_{12}$.
To cancel the original $1=2=3$ points $p^+$ and $p^-$, let us connect $(p_1^+,p_2^+)$ and $(p_1^-,p_2^-)$
by an arc $J_{12}$ in $\Delta_{12}$ and attach a thin $1$-handle to $h(M_3)$ along the image of $J_{12}$.
(This $1$-handle is the spherical block normal bundle of $h(M_1)\cap h(M_2)$ over the image of $J_{12}$.
It is attached orientably since the two $1=2=3$ points have opposite signs.)
The topology of $M_3$ can be restored using another $2$-disk like before.
In particular, this $2$-disk is disjoint from $h(M_1\sqcup M_2)$, so no
new $1=2=3$ points arise.
Finally, we need to apply the ``ornament concordance implies ornament homotopy in codimension three'' theorem
\cite{M1}, \cite{M2}.
(Alternatively, it should be possible to rework the above construction so as to keep the levels preserved
at every step --- but it would be a rather laborious exercise; compare \cite{M3}*{proofs of Lemmas 5.1, 5.4, 5.5}.)
\end{proof}
\section{Discussion}
Theorem \ref{t1} and its proof (in slightly less detailed form) were originally contained in the preprint \cite{M4},
which I presented at conferences and seminars in 2006--07 and privately circulated at that time and in later years.
For instance, the referee of the present paper (whose identity I know from his idiosyncratic remarks) does not deny
that he received my preprint containing the proof of Theorem \ref{t1}, exactly as it appears in \cite{M4},
by email on May 23, 2006 and then again on July 7, 2006.
I hesitated to publish \cite{M4} at that time as I hoped to get more progress on the conjectures stated in
the introduction there; but other projects are still distracting me from this task.
In the meantime I. Mabillard and U. Wagner independently found and vastly generalized a version of
the triple-point Whitney trick and also obtained nice applications leading to a disproof of the Topological
Tverberg Conjecture \cite{MW}.
(My only step in that direction was a feeble attempt to advertise the possibility of disproving
the Topological Tverberg Conjecture by generalizing the construction of the present note --- addressed,
for instance, to P. Blagojevi\'c at the 2009 Oberwolfach Workshop on Topological Combinatorics.)
Mabillard and Wagner call their construction the ``triple Whitney trick'', but I prefer to reserve this title
for a certain other device, extending Koschorke's version of the Whitney--Haefliger construction
\cite{Ko}*{Proof of Theorem 1.15} and involving the triple-point Whitney trick as only one of several steps.
It can be used to obtain a geometric proof of the Habegger--Kaiser classification of link maps in
the 3/4 range \cite{HK}, which will hopefully appear elsewhere (a sketch of this proof was presented
in my talk at the Postnikov Memorial Conference in B\c edlewo, 2007).
\section{Introduction}
\label{Sec:intro}
The source \object{\iras} was first identified by \citet{prusti92}
in an IRAS survey of pre-main sequence (PMS) stars in the Chamaeleon\,II
(Cha\,II) cloud. The source was associated with a relatively bright
($R\approx$12\,mag) off-cloud star. A later analysis of the star's near-IR
magnitudes led \citet{larson98} to conclude that its IR colours are more similar
to those of giants than those of PMS stars. The source was then classified
as a young stellar object candidate using selection criteria based on
{\it Spitzer} mid-IR data \citep{alcala08}, but it lacks the IR-excess
\citep{larson98, alcala08} typical of young stars. Since lithium is efficiently
destroyed by convective mixing in the interior of low-mass stars when
the temperature at the bottom of the convective layer reaches about
2.5$\times$10$^6$\,K, the presence of strong Li\,{\sc i} $\lambda$6707.8\,\AA\
absorption represents an important criterion for identification of
low-mass young stars. Thus, a late-type spectrum and the detection of
a strong Li\,{\sc i} $\lambda$6707.8\,\AA\ absorption in two VLT-FLAMES/Giraffe
spectra, have led \citet{spezzi08} to assign a young stellar object nature to \iras.
Surprisingly, this star turned out to be over-luminous by about two orders of
magnitude on the HR diagram with respect to other young stars in the Cha\,II
cloud \citep{spezzi08}. This result and the conclusion by \citet{larson98}
that \iras\ could be a giant star, triggered a new high-resolution spectroscopic
study. We characterise the star as a lithium-rich M-type giant.
Several spectroscopic studies of K-type giants in the past two decades
have provided evidence of lithium enhancements in the atmosphere of these
stars \citep[][ and references therein]{kumar11}.
We study \iras\ in the context of other well known lithium-rich
giants and discuss possible mechanisms for its lithium enhancement.
\begin{table*}
\caption[ ]{\label{physpar} Physical parameters of \iras.}
\centering
\begin{tabular}{ccccccccc}
\hline
$T_{\rm eff}$ & $\log{g}$ & [Fe/H] & $v\sin{i}$ & $RV$ & $A$(Li) & $\log{(L/L_{\odot})}$ & $\log{(R/R_{\odot})}$ & $M/M_{\odot}$ \\
(K) & (dex) & (dex) & (km s$^{-1}$) & (km s$^{-1}$) & (dex) & & & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
3460$\pm$60 & 0.6$\pm$0.2 & $-$0.08$\pm$0.20 & 8$\pm$3 & 67.68$\pm$0.02 & $2.4\pm0.2$ & 2.99$\pm$0.15 & 1.9$\pm$0.2 & 1.0$\pm$0.2 \\
\noalign{\smallskip}
\hline
\end{tabular}
\tablefoot{$T_{\rm eff}$ and $\log{g}$: from the $\chi^2$ minimization;
[Fe/H] and $v\sin{i}$: average values from $\chi^2$ minimization and MOOG results.}
\end{table*}
\begin{figure*}[ht]
\vskip 3.3cm
\special{psfile=plot_Na.eps angle=0 hscale=85 vscale=63 voffset=-5 hoffset=0}
\caption{Portion of the HARPS spectrum of \iras\ in the \ion{Na}{i} D region. The M5\,III best template
(\object{HD\,175865}) is overplotted (dotted line). A dwarf M star with nearly the same temperature
(\object{HD\,173740}) is also overplotted with a dashed line. The interstellar absorption
components (IS) are also indicated.}
\label{harps_spec}
\end{figure*}
\section{HARPS spectroscopy}
\label{Sec:spectra}
With the aim of ascertaining the nature of \iras,
two high-resolution (R$\sim$110,000) spectra (range from 3800\,\AA\ to 6900\,\AA),
obtained in February 2-3, 2011 with the HARPS (High Accuracy Radial
velocity Planet Searcher) spectrograph at the ESO 3.6\,m telescope in
La Silla (Chile), were combined.
The strong Li\,{\sc i} absorption line at $\lambda$6707.8\,\AA~was the
first feature to be immediately confirmed, for which we measured an
equivalent width of 550($\pm$10)\,m\AA. Likewise, the H$\alpha$
line is in absorption. Moreover, a number of strong molecular absorption
bands, mainly of titanium oxide, can be identified, confirming that the
star is a very cool object.
The interstellar (IS) Na\,{\sc i}~D ($\lambda\lambda$5890,5896\,\AA)
absorption components, distinguishable from the photospheric ones,
are clearly detected (see Fig.~\ref{harps_spec}). The narrower \ion{Na}{i}\,D
wings of \iras\ with respect to \object{HD\,173740} (see Fig.~\ref{harps_spec})
definitely exclude it as a main sequence star.
The mean radial velocity of \iras, as drawn from the HARPS spectra,
is $RV=67.68\pm0.02$\,km\,s$^{-1}$, which is far beyond the range
of the Chamaeleon region \citep[$\sim$15$\pm2$\,km\,s$^{-1}$;][]{covino97}.
Likewise, the proper motion components of the star,
$\mu_{\alpha}\cos\delta=-13.1\pm5$\,mas~yr$^{-1}$,
$\mu_\delta=10.7\pm5$\,mas~yr$^{-1}$, as retrieved from the PPMXL Catalog
\citep{roeser10}, are not compatible with those of the Chamaeleon young
stars \citep[e.g.,][]{frink98}. Therefore, we conclude that the kinematics
of \iras\ is inconsistent with those of young stars in the Chamaeleon region
and that the star is unrelated to the star-forming cloud.
The detection of the interstellar Na\,{\sc i} absorption components,
for which we measure an $RV_{\rm IS}\sim 14\pm2$\,km\,s$^{-1}$ consistent with
the radial velocity of Chamaeleon, is an indication that such interstellar
components can be produced by the Cha\,II cloud itself and that \iras\ must
be located at a much greater distance. Because it is unrelated to the star-forming
region, \iras\ is most likely a field giant star, far behind the Cha\,II cloud.
This confirms the suggestion by \citet{larson98}.
\section{Physical parameters and lithium abundance}
\label{Sec:param}
Using the combined (S/N$\approx$30) HARPS spectrum, we obtained
first estimates of the physical parameters by a procedure outlined
in detail in Appendix~\ref{appendix1}. This procedure provides us with
the best effective temperature, gravity, and metallicity by comparing
the combined HARPS spectrum with template spectra of real stars with
well known parameters via a $\chi^2$ minimization criterion.
The derived stellar parameters come from a weighted mean of the
parameters of the 50 reference stars (10 per each spectral region) that
independently match the target spectrum most closely. The results are
$T_{\rm eff}=3460\pm60$\,K, $\log{g}=0.6\pm0.2$,
[Fe/H]$=-0.05\pm0.10$, and $v\sin{i}=6.2\pm3.0$\,km~s$^{-1}$.
From the $\log{g}$ vs. $T_{\rm eff}$ relationship by \citet{houdashelt00}
for solar metallicity giants, we estimate $\log{g(3500\,K)}\sim0.4$,
in fair agreement with the value derived from the $\chi^2$
minimization.
\subsection{Lithium abundance}
As pointed out by \citet{gratton89}, a high value ($\sim$500\,m\AA)
of the $\lambda$6707.8\,\AA\ Li\,{\sc i} line equivalent width implies
that the resonance doublet should be strongly saturated and that
deviations from LTE conditions may be important. This requires the
use of other lithium lines. The spectral range covered by HARPS
also allows us to investigate the $\lambda$6103.6\,\AA\ Li\,{\sc i}
line.
The Li abundance $A$(Li) was derived by spectral synthesis of the
\ion{Li}{i} 6103.6\,\AA\ line using the MOOG code \citep{sneden73},
which assumes LTE conditions, along with the GAIA model
atmospheres (Brott \& Hauschildt 2010, priv. comm.).
The hyperfine line structure has been considered following
the guidelines by \citet{wahlgren05} to compute the atomic parameters.
The atomic and molecular line lists were taken from the
VALD\footnote{http://vald.astro.univie.ac.at/$\sim$vald/php/vald.php.}
database.
The physical parameters derived from the $\chi^2$ minimization explained
above were used as starting values for the synthesis. We let
metallicity, $v\sin{i}$, $A$(Li), and microturbulence velocity $\xi$ vary,
while keeping $T_{\rm eff}$ and $\log{g}$ fixed.
The low temperature of the star makes the spectral synthesis rather
difficult because of the uncertainty in the continuum determination caused
by several overlapping molecular lines. The continuum was determined
by a spline fit made after visually selecting regions free of absorption
lines. We estimated that the uncertainty in the continuum determination,
which is the main source of error in $A$(Li), is less than 5\%. The results are
shown in Fig.~\ref{NLi_03}. From our best fit, we obtained values of
$A$(Li)$=2.4\pm0.2$, [Fe/H]$=-0.1\pm0.2$, and $v\sin{i}=9\pm2$\,km~s$^{-1}$
for a microturbulence velocity $\xi=0.8\pm0.2$\,km\,s$^{-1}$.
The [Fe/H] and $v\sin{i}$ values are in good agreement with those from
the $\chi^2$ minimization, so average values were computed. The final
adopted physical parameters are reported in Table~\ref{physpar}.
An attempt at spectral synthesis of the lithium line at 6707.8\,\AA\
with MOOG (using the \citealt[][]{reddy02} line list implemented within
the VALD database) resulted in large residuals (c.f. Fig.~\ref{NLi_07}),
mainly owing to non-LTE conditions, such as the effects of over-ionization,
and line asymmetry due to the convective motions, which mainly
influence the $\lambda$6707.8\,\AA\ Li\,{\sc i} line formation in
very cool stars (\citealt{carlssonetal1994}). Since the EW of this line
could be measured with high accuracy, we could estimate the non-LTE Li
abundance through extrapolating the curves of growth by \citet{lind09}
to a temperature of $\sim$3500\,K, which yield $A{\rm (Li)}\sim$2.5.
We thus conclude that a reliable value for the lithium abundance
of \iras\ is $A$(Li)$=$$2.4\pm0.2$. As a result, the low temperature and high
lithium abundance of the star make \iras\ the coolest lithium-rich giant
known to date. As shown below, our determinations of temperature and
gravity place the star close to the tip of the red giant branch (RGB)
on the HR diagram.
\begin{figure}[ht]
\vskip 9.5cm
\special{psfile=IRAS12556_SyntRealSpectra.ps angle=0 hscale=48 vscale=48 voffset=-77 hoffset=0}
\caption{Portion of HARPS spectrum in the interval around the
lithium $\lambda$6103.6\,\AA\ absorption line. The top panel shows
the synthetic spectra for three values of $A$(Li) at
fixed metallicity, while the lower one shows the synthetic spectra
for three values of metallicity at fixed $A$(Li). The best fit
is for a slightly subsolar metallicity and $A$(Li)=2.4.}
\label{NLi_03}
\end{figure}
\section{Discussion and conclusions}
The stellar parameters of \iras\ indicate that the star is a lithium-rich M-giant.
To investigate its evolutionary status in more detail, we used
the tracks and isochrones by \citet{girardi00} for solar metallicity stars.
A comparison of the position of \iras\ in the $\log{g}$ vs. $\log{T_{\rm eff}}$
diagram (Fig.~\ref{HRD}) shows that the physical parameters are consistent
with an age $\sim$10\,Gyr. Reasonable values for the stellar luminosity and
mass at that age, which are consistent with $\log{T_{\rm eff}}$, are
$\log{(L/L_{\odot})}=$2.99$\pm$0.15 and $M/M_{\odot}=$1.0$\pm$0.2,
respectively. The position of \iras\ on the HR diagram is shown in Fig.~\ref{HRD},
along with several lithium-rich stars, as compiled by \citet{kumar11}.
Most of the known lithium-rich giants are concentrated within a luminosity
range of 1.3$< \log{(L/L_{\odot})}<$2, with only a minority having
$\log{(L/L_{\odot})}>$2.2. Among these luminous K-type objects, HD~39853
is the coolest one; nevertheless, \iras\ is about 450\,K cooler, placing it
among the least massive and most luminous lithium-rich giants known so far.
Being close to the tip of the RGB, one should then consider three
possibilities for the evolutionary stage of the star:
{\it i) }~giant branch ascent, i.e. H-shell burning phase;
{\it ii)}~AGB phase; or
{\it iii)}~post He-core flash.
Unfortunately, the CNO abundances and the $^{12}$C/$^{13}$C carbon isotopic
ratios, tracers of the degree of mixing that provide further constraints on
the evolutionary status, cannot be derived because the spectral range of
HARPS does not include the appropriate wavelength range.
\begin{figure}[ht]
\vskip 10.0cm
\special{psfile=Te_logg.ps angle=-90 hscale=34 vscale=27 voffset=303 hoffset=-10}
\special{psfile=HRD.ps angle=-90 hscale=31 vscale=30 voffset=165 hoffset=-10}
\caption{{\em Upper panel}. The $\log{g}$ versus $\log{T_{\rm eff}}$ diagram. The
\citet{girardi00} isochrones for three different ages are overplotted.
The black dot represents the position of \iras.
{\em Lower panel}. HR diagram of several Li-rich giants. The evolutionary
tracks by \citet{girardi00} for several masses as labelled are represented with
continuous lines. The lithium-rich K-type giants \citep{kumar11}
are over-plotted with open circles. The asterisk represents
HD~39853, while the position of \iras\ is indicated by the
big black dot.}
\label{HRD}
\end{figure}
Assuming a distance $d_{\rm Cha\,II}=178$\,pc, \citet{spezzi08} have
derived a luminosity $\log{(L/L_{\odot})}\approx$1. By comparing
it with the luminosity estimated above, we conclude that
\iras\ should be located at $d\approx$1.75\,kpc from the Sun.
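The quoted distance follows from inverse-square scaling of the flux, $d\propto\sqrt{L}$; rescaling the \citet{spezzi08} distance with the luminosities quoted above (a sketch of the arithmetic):

```python
import math

d_cha = 178.0          # pc, Cha II distance assumed by Spezzi et al. (2008)
logL_assumed = 1.0     # log(L/Lsun) if the star were in Cha II
logL_true = 2.99       # log(L/Lsun) from the evolutionary tracks

# Same observed flux: L / d^2 = const  =>  d_true = d_cha * sqrt(L_true / L_assumed)
d_true = d_cha * math.sqrt(10 ** (logL_true - logL_assumed))
print(round(d_true / 1000, 2))  # ~1.76 kpc
```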
By adopting this distance and using the HARPS radial velocity and the
proper motion components (cf. Sec.~\ref{Sec:spectra}), the resulting
spatial velocity components of the star in a left-handed coordinate
system are ($U, V, W$)$=$($+30, -121, +79)$~km\,s$^{-1}$, where
$U$, $V$, and $W$ are directed towards the Galactic anti-centre,
the Galactic rotation direction, and the North Galactic Pole,
respectively. According to the criteria of
\citet{oort26}\footnote{ $ | W + 10 | \ge $30\,km~s$^{-1}$
and/or ($U^2 + V^2$)$^{1/2} \ge $65\,km~s$^{-1}$.},
\iras\ can thus be considered as a high-velocity star.
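The criteria in the footnote can be checked directly against the derived $(U,V,W)$ components; a minimal sketch:

```python
import math

U, V, W = 30.0, -121.0, 79.0  # km/s, space velocity components from the text

# Oort high-velocity criteria: |W + 10| >= 30 km/s and/or sqrt(U^2 + V^2) >= 65 km/s
high_velocity = abs(W + 10) >= 30 or math.hypot(U, V) >= 65
print(abs(W + 10))                  # 89.0
print(round(math.hypot(U, V), 1))   # 124.7
print(high_velocity)                # True: both conditions are met
```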
The Galactic latitude, distance, and kinematics of the star imply that
it most likely belongs to the old thin-disk population, consistent
with its almost solar metallicity. But what is the cause of its high
lithium content?
Three possible scenarios can be considered to explain the high lithium
abundance observed in \iras:
{\it i)} the star somehow preserved the original lithium in its atmosphere,
{\it ii)} lithium has been regenerated in later evolutionary stages by the
Cameron--Fowler mechanism\footnote{Conversion of $^3$He to $^7$Li by
$\alpha$-capture with $^7$Be as a radioactive intermediary.} \citep{cameron71},
{\it iii)} lithium was enhanced by the engulfment of a brown dwarf or
planetary companion.
As discussed in \citet{fekel93}, the scenario of the preservation of initial
lithium in red giants is very unlikely. They also note that high lithium
abundance was not revealed in a sample of 200 F-type stars with masses
between 1 and 2\,$M_{\odot}$, just evolved off the main sequence.
As they point out, lithium preservation might, if anything, eventually
work in stars of the early-F and A types, which are more massive than \iras.
Since \iras\ is a solar-mass star, its initial lithium was most likely
burned during the pre-main-sequence phase.
Depending on the stellar mass and luminosity, the production of lithium
via the Cameron--Fowler mechanism supposedly occurs both at
the RGB luminosity function bump \citep{charbonnel00} and during the
He-core flash \citep{kumar11}. It is possible that lithium was
synthesized in \iras\ by the Cameron--Fowler mechanism during the
luminosity bump, but most likely it had already been destroyed as the star
evolved up to the RGB. Although the \citet{sackmann99} model predicts that
a high lithium abundance may be preserved all the way up the RGB, their
parameterization requires very high mixing rates. It was assumed that
rotation could provide such rates, but \citet{palacios06} find that a
self-consistent model of rotational mixing cannot generate enough
circulation to account for the mechanism working efficiently.
It has been suggested \citep{delareza96} that the IR-excess observed
in some lithium-rich giants may be due to a dust shell possibly also
generated via the Cameron--Fowler mechanism, but for \iras\ there
is no evidence of IR excess \citep[][]{larson98, alcala08}.
On the other hand, it is unlikely that, assuming an AGB status for the
star, lithium has been synthesized during the He-flash phase, because
such a process should work for stars within a narrow mass range around
2$M_{\odot}$ \citep{kumar11}.
A pure Cameron--Fowler mechanism should only provide $^7$Li, with no
$^6$Li.
An attempt at a simultaneous fit of the resonance and subordinate
lithium line by including the $^6$Li/$^7$Li isotopic ratio as a free
parameter improves the fit of the $\lambda$6707.8 line
(see Fig.~\ref{NLi_07}), but there is no way to get a good fit
for values of $^6$Li/$^7$Li less than 0.11.
In the brown dwarf/planet engulfment scenario, the accreted matter
would also result in a simultaneous enhancement of $^6$Li, $^7$Li,
and Be. Unfortunately, our HARPS spectrum does not achieve the wavelength
range to investigate beryllium, but the suggestion that the $^6$Li/$^7$Li
isotopic ratio may be as high as 0.11 would support the accretion
scenario. Also, the large radius of \iras\ in comparison with most of
the known lithium-rich giants makes the planet engulfment scenario
plausible, because planet ingestion would be more likely to occur when
a star evolves more in the RGB and achieves a larger radius.
Finally, in discussing the case of \iras, we have to consider that its
projected rotational velocity, $v\sin{i}\sim$8\,km s$^{-1}$, is rather
high in comparison with other lithium-rich giants. Several authors have
argued that accretion of a planet may explain both lithium enhancement
and rapid rotation in giants
\citep[][and references therein]{denissenkov04,calsberg09,calsberg10}.
Given its physical properties, we cannot rule out that such a
process has led to the lithium enhancement and rapid rotation
in \iras. Some rapidly rotating giants can even become magnetically active
and may be detected in X-rays \citep[][]{guillout09}, however \iras\ was not
detected in a ROSAT pointed observation \citep{alcala00}.
With the observational data available so far, we cannot be conclusive
about what process dominates the lithium enhancement in \iras.
Perhaps several mechanisms working at different times along the RGB
phase of this star have contributed to its lithium enrichment.
\acknowledgements{We thank the referee, Dr. P. Bonifacio, for his
useful comments and suggestions and for information on the lithium
hyperfine line structure. K.B. acknowledges financial support from the
INAF Postdoctoral fellowship programme. We also thank V. Andretta,
L. Belluzzi, and V. D'Orazi for discussions of hyperfine line structure.}
\bibliographystyle{/home/jmae/aa-package/bibtex/aa}
\section{Introduction}
Precise measurement of inertial acceleration is vital to many space-borne
gravitational science missions, including satellite geodesy \cite{GRACE},
fundamental
physics experiments \cite{microscope, prl} and gravitational wave observation
\cite{LISA}.
The most precise accelerometers manufactured to date are the electrostatic
accelerometers produced by ONERA, which are capable of measuring
spacecraft acceleration relative to the inertial frame to $\sim 10^{-11}
\ \mathrm{m/sec^2 Hz^{1/2}}$ from roughly 1 mHz to 1 Hz
\cite{touboul2012}.
These accelerometers have been used for Low-low Satellite-to-satellite tracking
missions including GRACE \cite{GRACE}
and for gravity gradiometer missions such as GOCE \cite{GOCE}.
These instruments are comprised of an internal free-floating metallic
test mass that is surrounded by an electrode housing.
The electrodes on the internal surface of the housing both sense the test
mass' position capacitively and actuate it via electrostatic forces.
The position measurement is used to drive the electrostatic suspension system
to keep the test mass centered in its housing.
The inertial acceleration of the spacecraft is proportional to the suspension
force applied to the test mass to keep it centered.
Electrostatic accelerometers are limited by two inter-related factors: 1)
suspension force noise and 2) acceleration measurement noise.
Both are ultimately
related to the stability of voltage references, where the current
state of the art is $\sim 2 \times 10^{-6}$ \cite{GOCEaccel}.
For the application of Earth geodesy the low frequency acceleration of a
low Earth orbiting satellite can be as high as
$\sim 10^{-5} \ \mathrm{m/sec^2}$.
Therefore the resulting acceleration
noise on the test mass due to the suspension system is at least
$2 \times 10^{-11} \ \mathrm{m/sec^2}$.
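The arithmetic behind this limit is a one-line product; as a minimal sketch (the two input values are the ones quoted above):

```python
# Suspension-force acceleration noise for an electrostatic accelerometer:
# the electrostatic force scales with the applied voltage, so its relative
# noise tracks the voltage-reference stability. Values are from the text.

V_STABILITY = 2e-6   # relative voltage-reference stability
A_DRAG = 1e-5        # low-frequency drag-induced acceleration, m/s^2

def suspension_noise(a_drag, v_stability):
    """Acceleration noise from suspending the full drag acceleration with
    a force whose relative noise equals the voltage-reference stability."""
    return a_drag * v_stability

print(suspension_noise(A_DRAG, V_STABILITY))  # ~2e-11 m/s^2
```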
Since the applied suspension force \emph{is} the acceleration measurement,
the acceleration measurement noise would be on this same order.
To improve accelerometers significantly beyond the $10^{-11}
\ \mathrm{m/sec^2}$ level,
the suspension force noise must be removed \emph{and} the sensor used to
measure acceleration must be changed.
Drag-free technology, conceived of in the 1960's \cite{Lange, debra},
has been the
most promising approach to breaking through these acceleration noise limits.
Two drag-free approaches have been demonstrated on three separate missions.
The first is an ``accelerometer-mode'' drag-free, where an electrostatic
accelerometer is used as the primary sensor and a propulsion system is used
to counter the drag-force acting on the satellite so that the nominal
test mass suspension force is reduced.
The spacecraft acceleration measurement is still limited by voltage reference
stability, but the nominal voltage applied to the housing electrodes
is reduced, therefore the electrostatic force noise is also reduced.
Both Gravity Probe B \cite{prl}
and the GOCE \cite{GOCE} missions operated in accelerometer-mode
drag-free.
Using this approach Gravity Probe B achieved an acceleration
noise of $4 \times 10^{-11} \ \mathrm{m/sec^2 Hz^{1/2}}$ \cite{bencze}
and GOCE achieved a differential acceleration noise measurement between test
masses
accurate to $\sim 10^{-12} \ \mathrm{m/sec^2 Hz^{1/2}}$ in the 1 mHz to 1 Hz
frequency band \cite{GOCEaccel}.
The other drag-free operating mode is 'true' drag-free, where the suspension
force is turned completely off, at least in one degree of freedom.
Triad I with its DISturbance COmpensation System (DISCOS)
operated in this manner \citep{discos1, discos2},
as will the Laser Interferometer Space Antenna (LISA)
in the future.
A fundamental difference between accelerometers and true drag-free is that
the basic measurement for a true drag-free system is displacement
variations, instead of acceleration variations.
Of course one can always convert displacement to acceleration and vice-versa.
A drift-mode accelerometer (DMA) as defined here is a traditional electrostatic
accelerometer where the test mass suspension force is operated with a low
duty cycle.
Larger suspension forces are used, but over a much shorter period of time
so that the average suspension force is the same as that of a traditional
accelerometer.
By switching the suspension system on and off with a constant frequency and
low duty cycle ($< 0.1$), the suspension system force noise is
restricted to known, short intervals, which repeat with a frequency chosen to be
above the science frequencies of interest.
Cycling the suspension system eliminates suspension force noise while the
suspension system is off,
but there still is the problem of precisely
measuring the inertial acceleration of the satellite in the presence of a
large zero-frequency acceleration.
Here, laser interferometry provides the solution.
In most upcoming precision gravity missions, the measurement of interest
is the relative displacement (or acceleration) between two or more
inertially fixed test masses.
GRACE Follow-on, GRACE-II, and LISA \cite{LISA}
are all examples.
In all of these missions, the laser interferometer system already exists and is
used to measure range variations between spacecraft.
If an interferometer is used to also measure distance variations
between a reference point on the spacecraft and the test mass, then this
measurement can be used to estimate the inertial acceleration of the
spacecraft, assuming that the test mass can be treated as inertially fixed
over the short interval when the suspension system is off.
Second order finite differencing provides the simplest method, although
other approaches discussed in this paper can provide more accurate
estimates.
Laser interferometers have been demonstrated with extremely large dynamic
range.
The LISA Interferometric Measurement System for example can measure
pm variations over 1000 sec between spacecraft that have relative
velocities of 10 m/sec.
This represents a dynamic range of $10^{22}$.
The name drift mode is taken from an operating mode of LISA Pathfinder (LPF)
\cite{LPF, LPFdriftmode}.
LPF contains two free-floating test masses.
The spacecraft can only fly drag-free about one of them (naturally) and,
therefore, the other test mass must be suspended against the gravity
gradient (and other) forces which act upon it.
In order to assess the acceleration noise associated with the suspension
electronics, the drift mode was conceived.
The suspension system is turned on and off with a low duty cycle (1 sec on and
200 sec off).
In between ``kicks'' the test mass follows approximate parabolic
trajectories, when measured relative to the other test mass.
These parabolas are fit to second order polynomials and the fit residuals
are used to calculate variations in the differential acceleration between the
two test masses.
Since the goal of the drift mode for LPF is simply to determine the
acceleration noise on the test masses due to the actuation electronics,
the time between kicks was chosen to be relatively long (200 sec).
The interferometer data during the kicks is discarded and replaced with
a model of the acceleration noise that makes various assumptions
about the nature of the noise \cite{LPFdriftmode}.
In contrast, for the DMA we wish to make no assumptions about the
inertial acceleration of the satellite and therefore, we choose a kicking
frequency that lies above the science signal of interest.
\section{Acceleration noise}
The acceleration noise budget for precision accelerometers typically
contains roughly 30 known acceleration noise terms.
The acceleration noise budgets provided here are based on
models used for the LISA mission \cite{schumaker2003, gerardi2014}.
These individual noise terms can be categorized by their physical nature
such as
magnetic, electrical, thermal, Brownian, etc.
In this paper the individual noise terms are grouped into four main
categories: (1) gap-dependent, (2) gap-independent, (3) actuation and sensing,
and (4) stiffness.
Gap-dependent acceleration noise sources are those which fundamentally depend
on the size of the gap between the test mass and its housing.
For GRACE-like accelerometers, these gaps are on the order of
$\sim 100 \ \mathrm{\mu m}$.
Gap-dependent noise sources are typically the dominant source of acceleration
noise and are the reason why the LISA gravitational reference sensors,
which were originally based on the ONERA accelerometers use relatively large
gaps of 4 mm along the sensitive direction.
Gap-independent acceleration noise comprises all bulk test mass
forces, including magnetic and gravitational noise, as well as surfaces
forces which do not depend on the gap size.
The third type of acceleration noise is actuation and measurement noise.
As discussed in the introduction section, for electrostatic accelerometers
both actuation and measurement
noise is ultimately due to the instability of voltage references.
Measurement noise represents the noise associated with making the
acceleration measurement.
For electrostatic accelerometers, the actuation force applied to the
test mass to keep it centered in its housing \emph{is} the
acceleration measurement.
Figure \ref{fig:GRACEaccel} provides a rough breakdown of the contributions
to acceleration noise for GRACE like accelerometers.
Table \ref{tab:GRACEparam} provides the key parameters used to produce
Figure \ref{fig:GRACEaccel} following the methodology outlined in
\cite{gerardi2014}.
The acceleration noise budget for GRACE-like accelerometers is limited by
measurement and actuation noise as previously stated.
If we assume a relative stability of the voltage reference of $2\times10^{-6}$,
and a nominal drag-induced acceleration of $10^{-5} \ \mathrm{m/sec^2}$,
then the suspension force noise acting on the test mass is $2\times10^{-11}
\ \mathrm{m/sec^2 Hz^{1/2}}$.
Here, the measurement noise is assumed to be the same.
\begin{figure}
\centering
\includegraphics[width = 10 cm]{GRACEaccelNoise.eps}
\caption{Approximate acceleration noise budget for a GRACE-like
accelerometer. \label{fig:GRACEaccel}}
\end{figure}
\begin{table}
\caption{\label{tab:GRACEparam} Basic design parameters of a GRACE-like
electrostatic accelerometer and a candidate DMA for Earth geodesy
following the methodology of \cite{gerardi2014}.}
\lineup
\begin{tabular}{@{}lll}
\br
Parameter & GRACE-like accelerometer \cite{touboul2012}
& DMA for Earth geodesy \\
\mr
Mass of TM & 72 g & 243 g \\
TM/housing gap & 175 $\mu$m & 1 mm \\
Surface area of TM & $4 \times 10^{-4} \ \mathrm{m^2}$ &
$9 \times 10^{-4} \ \mathrm{m^2}$\\
Charge control & Au wire & UV photoemission \\
Surface area of spacecraft & $1 \ \mathrm{m^2}$ & $1 \ \mathrm{m^2}$ \\
Mass of spacecraft & 100 kg & 100 kg \\
Magnetic susceptibility of TM & $2 \times 10^{-6}$ & $2 \times 10^{-6}$ \\
TM stray voltage & 100 mV & 100 mV \\
Max. TM charge & $1 \times 10^{7}$ e &
$1 \times 10^{7}$ e\\
Max. dc magnetic field & 50 $\mu$T & 50 $\mu$T \\
Max. magnetic field fluctuation & 1 $\mathrm{ \mu T / Hz^{1/2} }$
& 1 $\mathrm{ \mu T / Hz^{1/2} }$ \\
Max. magnetic field gradient & 10 $\mu$T/m & 10 $\mu$T/m \\
Max. field gradient fluctuations & 0.25 $ \mathrm{ \mu T/m Hz^{1/2} }$
& 0.25 $ \mathrm{ \mu T/m Hz^{1/2} }$ \\
Pressure inside housing & 10 $\mu$Pa & 10 $\mu$Pa \\
Temperature difference
& $10^{-2}$(1mHz/$f$)$^{(1/3)}$ $\mathrm{ K / Hz^{1/2} }$
& $10^{-2}$(1mHz/$f$)$^{(1/3)}$ $\mathrm{ K / Hz^{1/2} }$ \\
fluctuations across housing & & \\
\br
\end{tabular}
\end{table}
In Figure \ref{fig:GRACEaccel} there is one additional noise source related
to controlling the buildup of charge on the test mass.
These accelerometers use an extremely thin
($\sim 10 \ \mathrm{\mu m}$ diameter)
gold fiber to electrically ground the test mass to its electrode housing.
This wire contributes a thermal force noise on the test mass with a $1/f^{1/2}$
spectrum.
\begin{figure}
\centering
\includegraphics[width = 10 cm]{LISAaccelNoise.eps}
\caption{Acceleration noise budget for LISA. \label{fig:LISAaccel}}
\end{figure}
Figure \ref{fig:LISAaccel} shows the approximate acceleration noise
budget for LISA, with individual noise terms grouped as before.
This budget follows that of \cite{schumaker2003, gerardi2014}.
Also shown in Figure \ref{fig:LISAaccel} is the requirement for LISA,
$3 \times 10^{-15} \ \mathrm{m/sec^2Hz^{1/2}}$ from roughly 0.1 - 10 mHz.
Since LISA is operated in 'true' drag-free mode
there is no test mass actuation and therefore no associated acceleration
noise.
Two other factors that greatly improve the performance of the LISA
GRS relative to GRACE are larger gaps (10$\times$ that of GRACE),
and a non-contact charge control system, based on photoemission using
UV light \cite{shaul2008}.
The second difference eliminates the thermal noise of the gold fiber
used in the GRACE accelerometers.
\subsection{A drift mode accelerometer model}
In order to estimate the acceleration noise performance of a drift mode
accelerometer
the following model, depicted in Figure \ref{fig:model} is used.
In this model two test masses are widely separated.
Test mass 2 (TM 2) is assumed to be inertially fixed for simplicity.
The goal of the DMA is to measure the inertial acceleration of the spacecraft which
houses test mass 1 (TM 1).
Measurement of the spacecraft's motion relative to TM 1 is made with respect
to an optical bench (OB), which is assumed to contain two laser interferometers.
The first measures the position of TM 1 relative to OB, $x_{1B}$, and the
second measures OB relative to TM 2, $x_{B2}$.
\begin{figure}
\centering
\includegraphics[width = 14 cm]{layout.eps}
\caption{One-dimensional model used to estimate the performance
of a drift mode accelerometer. \label{fig:model}}
\end{figure}
Three forces act on TM 1: control forces denoted $F_c$, position-dependent
forces (stiffness forces) denoted $F_s(x_{1B})$, and all other disturbance
forces
$F_{a}$.
The force $F_a$ consists of both gap-dependent and gap-independent
forces described
above.
The disturbance force applied to the spacecraft is denoted $F_d$ and
is largely due to atmospheric drag in the case of Earth geodesy missions
and solar radiation pressure for deep space gravitational wave and
other fundamental physics missions.
A simple PID control law is implemented to keep TM 1 centered in its housing
$(x_{1B} = 0)$.
The controller was cycled on and off with a periodicity of $T_{\mathrm{kick}}$
seconds with a duty cycle of 0.1.
The disturbance force applied to the spacecraft is mission dependent
and therefore Earth geodesy and gravitational wave applications
were analyzed separately. The results are described below.
\subsection{DMA for Earth geodesy}
For the simulation of a low Earth orbiting satellite, atmospheric drag
is calculated using the NRLMSISE-00 empirical atmospheric model
acting on the spacecraft in a 400 km circular polar orbit.
The spacecraft mass and cross sectional area are 100 kg and 1 $\mathrm{m^2}$,
respectively, and its coefficient of drag is $C_D = 1$.
Figure \ref{fig:dragVtime} shows the time history of the atmospheric drag
force acting on the satellite and Figure \ref{fig:dragAccel} shows the
spectrum of the corresponding drag acceleration.
The main variations in the drag force occur at twice the orbital frequency.
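The drag level driving this simulation can be reproduced to order of magnitude from the standard drag law; a rough sketch, where the thermospheric density at 400 km is an assumed round number rather than an NRLMSISE-00 output:

```python
import math

# Order-of-magnitude drag acceleration for the simulated satellite.
# RHO is an assumed mean thermospheric density at 400 km; the paper's
# simulation uses the NRLMSISE-00 empirical model instead.
RHO = 3e-12          # kg/m^3, assumed density at 400 km altitude
CD = 1.0             # drag coefficient (from the text)
AREA = 1.0           # m^2, cross section (from the text)
MASS = 100.0         # kg, spacecraft mass (from the text)
MU = 3.986e14        # m^3/s^2, Earth's gravitational parameter
R = 6.371e6 + 400e3  # orbital radius, m

v = math.sqrt(MU / R)                  # circular orbital speed
F_drag = 0.5 * RHO * CD * AREA * v**2  # drag force, N
a_drag = F_drag / MASS                 # drag acceleration, m/s^2
print(f"v = {v:.0f} m/s, a_drag = {a_drag:.1e} m/s^2")
```

With the assumed density this gives an acceleration of order $10^{-6} \ \mathrm{m/sec^2}$, consistent with the $10^{-5} \ \mathrm{m/sec^2}$ upper bound quoted earlier for high atmospheric density.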
\begin{figure}
\centering
\includegraphics[width = 10 cm]{dragVtime.eps}
\caption{Time-history of the atmospheric drag force acting on
a 1 $\mathrm{m^2}$ satellite in a 400 km circular polar orbit.
\label{fig:dragVtime}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 10 cm]{dragAccel.eps}
\caption{Amplitude spectral density of atmospheric drag acceleration
acting on a 100 kg satellite in a 400 km circular polar orbit.
\label{fig:dragAccel}}
\end{figure}
An actuation cycling period of $T_{\mathrm{kick}} = 5 \ \mathrm{sec}$ is
chosen and the
resulting position time history of TM 1 relative to the spacecraft
($x_{1B}$) is shown in Figure \ref{fig:juggle}.
Parabolic trajectories with a frequency of 0.2 Hz and an amplitude of
$\sim 4 \
\mathrm{\mu m}$ are apparent.
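The few-micrometer amplitude is consistent with free drift under the drag acceleration over one off interval; a back-of-the-envelope sketch, assuming a representative mean drag acceleration and a kick that launches the test mass onto a symmetric parabola:

```python
# Peak test-mass excursion during one suspension-off interval.
# If the mass is released at rest it drifts x = a*t^2/2; if the kick gives
# it the velocity for a symmetric parabola the peak is a*t^2/8.
# a_drag is an assumed representative drag level, not simulation output.
T_KICK = 5.0     # s, actuation cycle period (from the text)
DUTY = 0.1       # suspension duty cycle (from the text)
a_drag = 1.5e-6  # m/s^2, assumed mean drag acceleration

t_off = T_KICK * (1.0 - DUTY)      # free-drift time per cycle
x_rest = 0.5 * a_drag * t_off**2   # released at rest
x_symm = a_drag * t_off**2 / 8.0   # symmetric parabola
print(f"{x_symm*1e6:.1f} to {x_rest*1e6:.1f} micrometers")
```

The symmetric-parabola case reproduces the $\sim 4 \ \mathrm{\mu m}$ amplitude seen in the simulation.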
\begin{figure}
\centering
\includegraphics[width = 10 cm]{GRACEjuggle.eps}
\caption{Time history of test mass position along the sensitive axis
for a drift mode accelerometer in low Earth orbit. \label{fig:juggle}}
\end{figure}
To calculate the acceleration noise performance of a drift mode
accelerometer designed for Earth geodesy, a candidate instrument
is chosen with basic properties listed in Table \ref{tab:GRACEparam}.
All key design features of the DMA are kept the same as those of the
GRACE-like accelerometer, except that the size and mass of the test mass are
increased to 30 mm and 243 g respectively, and a
TM-to-housing gap size of 1 mm is used.
In addition, it is assumed that the gold wire used for test mass charge control
is eliminated and replaced with a charge control system utilizing
UV photoemission.
Figure \ref{fig:GRACEaccelDM} shows the estimated performance of such an
instrument.
Gap-dependent and gap-independent acceleration noise terms are calculated
as before.
Actuation noise is modeled simply as the spectrum of maximum applied
acceleration
multiplied by a relative voltage noise of $2 \times 10^{-6}$, with a
0.1 duty cycle and a repetition rate of 0.2 Hz.
In reality, for a DMA the acceleration data is discarded while the actuation
system is on and spacecraft acceleration is only estimated using the data
when the actuation system is off.
Therefore, if we assume we retrieve one acceleration measurement per actuation
cycle, then the maximum frequency of the acceleration noise spectrum
should be 0.2 Hz.
By comparing Figures \ref{fig:GRACEaccel} and \ref{fig:GRACEaccelDM}
we see that the broadband acceleration noise of
$2\times10^{-11} \ \mathrm{m/sec^2}$ for the traditional accelerometer
is frequency shifted to 0.2 Hz plus harmonics.
\begin{figure}
\centering
\includegraphics[width = 10 cm]{GRACEaccelNoiseDM_R1.eps}
\caption{Acceleration noise for a drift mode accelerometer for
Earth geodesy. \label{fig:GRACEaccelDM}}
\end{figure}
As we can see in Figure \ref{fig:GRACEaccelDM}
the limiting acceleration noise term is stiffness, which
is the coupling of the $\sim 4 \ \mathrm{\mu m}$ motion of the
spacecraft relative to TM 1 and the stiffness, $k = 2 \times 10^{-6} \
\mathrm{sec^{-2}}$.
Most of the stiffness related acceleration noise contribution occurs
at the suspension cycling frequency of 0.2 Hz and its harmonics.
Contributions at lower frequencies, especially twice the orbital frequency,
are caused by the low frequency
contribution of the atmospheric drag.
As discussed below, if the stiffness $k$ can be determined through
calibration, then the stiffness-related acceleration noise can be subtracted
in the data analysis.
The resulting acceleration noise of the DMA for Earth geodesy would then
be $\sim 4 \times 10^{-13} \ \mathrm{m/sec^2 Hz^{1/2}}$ around 1 mHz.
\subsection{DMA for gravitational wave observation}
In order to assess the performance of the DMA with respect to
gravitational wave observation, the geometry and other properties of the
accelerometer were assumed to be the same as the LISA Pathfinder
gravitational reference sensor.
The LPF GRS is a 2 kg, 46 mm, Au/Pt cube, with 4 mm gaps along the
sensitive axis between the test mass and its electrode housing.
For a 500 kg spacecraft at a distance of 1 AU from the Sun, the zero-frequency
spacecraft acceleration due to solar radiation pressure is,
\begin{eqnarray}
a_0^{\mathrm{SRP}} & = & P_\odot \, A / M =
( 4.6 \times 10^{-6} \ \mathrm{N/m^2} )(4 \ \mathrm{m^2})
/ (500 \ \mathrm{kg}) \\
& = & 4 \times 10^{-8} \ \mathrm{m/sec^2}. \nonumber
\end{eqnarray}
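This estimate can be reproduced directly (all inputs are the values quoted in the text):

```python
# Zero-frequency solar-radiation-pressure acceleration on the spacecraft,
# reproducing the estimate in the equation above.
P_SUN = 4.6e-6  # N/m^2, solar radiation pressure at 1 AU
AREA = 4.0      # m^2, spacecraft cross section
MASS = 500.0    # kg, spacecraft mass

a0_srp = P_SUN * AREA / MASS
print(f"{a0_srp:.1e} m/s^2")  # ~4e-8 m/s^2
```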
The high frequency solar radiation pressure, taken from \cite{schumaker2003}
is, $a^{\mathrm{SRP}} \approx 1.6 \times 10^{-10} \left( 1 \ \mathrm{mHz} /
f \right)^{1/3} \ \mathrm{m/sec^2 \, Hz^{1/2}}$.
Solar radiation pressure acceleration amplitude spectral density and the
spectrum of the numerically simulated acceleration are shown in Figure
\ref{fig:SRP}.
\begin{figure}
\centering
\includegraphics[width = 10 cm]{SRPspectrum.eps}
\caption{Actual and simulated spectra of the solar radiation pressure
acceleration acting on a LISA-like
spacecraft at a distance of 1 AU from the Sun. \label{fig:SRP}}
\end{figure}
If the suspension system is operated with a repetition rate of 0.1 Hz
and a duty cycle of 0.1, the resulting parabolic motion of TM 1
relative to the optical bench has an amplitude on the order of
250 $\mathrm{nm}$ as shown in Figure \ref{fig:LISAjuggle}.
\begin{figure}
\centering
\includegraphics[width = 10 cm]{LISAjuggle_10sec.eps}
\caption{Time history of test mass position along the sensitive axis
for a drift mode accelerometer in a LISA-like spacecraft in
heliocentric orbit. \label{fig:LISAjuggle}}
\end{figure}
Figure \ref{fig:LISAaccelDM} shows the resulting performance of the DMA
with respect to gravitational wave observation.
As is the case for the Earth geodesy DMA,
the limiting acceleration noise term is stiffness, which
again is the coupling of the $\sim 250 \ \mathrm{nm}$ motion of the
spacecraft relative to TM 1 and the stiffness, $k = 10^{-7} \
\mathrm{sec^{-2}}$.
Most of the stiffness related acceleration noise contribution occurs
at the suspension cycling frequency of 0.1 Hz and its harmonics.
Contributions at lower frequencies are caused by the low frequency
contribution of the solar radiation pressure acceleration noise acting
on the satellite.
As with the geodesy application, if the stiffness can be determined through
calibration, then the stiffness-related acceleration noise can be subtracted
in the data analysis.
\begin{figure}
\centering
\includegraphics[width = 10 cm]{LISAaccelNoise_10sec.eps}
\caption{Acceleration noise for a drift mode accelerometer for
gravitational wave astrophysics. \label{fig:LISAaccelDM}}
\end{figure}
In both applications there exist at least
two acceleration noise sources that may be calibrated and removed in the
data analysis.
They are stiffness (position-dependent) forces and actuation
cross-coupling forces.
Neither is fundamentally limiting (e.g., due to unavoidable quantum
mechanical effects), and both can therefore be calibrated and removed.
However, the achievable accuracy of such a calibration must still be
rigorously determined.
\subsection{Position-dependent noise}
Because of the increased motion of the test mass relative to its housing,
the stiffness-related force noise is
much larger than
that of the GRACE accelerometers or the drag-free LISA GRS.
These position dependent forces do not represent a fundamental limit to the
performance of the DMA.
If stiffness $k$ can be determined through calibration, then the measured
position of TM 1 relative to the spacecraft $x_{1B}$ can be used to
estimate $F_s(x_{1B})$ and subtract it in the data analysis.
Procedures for estimating the stiffness to high precision
have been developed
for LISA Pathfinder \cite{LTPDA2011},
though these techniques rely on measuring the motion of one
test mass relative to another.
Determination of $k$ for the DMA might therefore require
measurement of position of TM 1 relative to TM 2 ($x_{12} = x_{1B} - x_{B2}$),
which is of course readily available.
The stiffness related signal present in $x_{12}$ would be primarily at the
kicking frequency, which is chosen to be above the dominant science signals
of interest.
Therefore, estimation of $k$ from these data should be cleanly separable from
the science signal.
In order for position-dependent acceleration noise to be reduced below the
fundamental limit of $\sim 4 \times 10^{-13} \ \mathrm{m/sec^2 Hz^{1/2}}$
around 1 mHz for the Earth geodesy DMA,
the stiffness $k$ must be determined with a relative accuracy of 0.1 or
an absolute accuracy of $2 \times 10^{-7} \ \mathrm{sec^{-2}}$.
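The subtraction itself is simple once an estimate $\hat{k}$ of the stiffness is in hand: only the calibration error $(k - \hat{k})$ couples the measured motion into the corrected acceleration. A minimal sketch with the Earth-geodesy numbers just quoted:

```python
# Residual stiffness acceleration at the kick frequency after subtracting
# k_hat * x_1B from the measurement: only the calibration error couples
# the spacecraft-to-TM motion into the corrected data.
k_true = 2e-6    # 1/s^2, stiffness (from the text)
rel_error = 0.1  # relative calibration accuracy (from the text)
x_amp = 4e-6     # m, spacecraft-to-TM motion amplitude (from the text)

a_stiff_raw = k_true * x_amp              # uncorrected stiffness accel.
a_stiff_res = rel_error * k_true * x_amp  # residual after subtraction
print(a_stiff_raw, a_stiff_res)           # ~8e-12 -> ~8e-13 m/s^2
```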
For the gravitational wave DMA the stiffness is much lower
($1\times10^{-7} \ \mathrm{sec^{-2}}$)
because of the increased gap size, the larger test mass,
and the stricter requirements on the
environmental stability of the GRS.
Already, the position-dependent acceleration noise, shown in Figure
\ref{fig:LISAaccelDM}, for gravitational wave observation is near the
fundamental limit.
To drop it below this limit calibration accuracy must be a modest 0.2
relative to $k$ or $5\times10^{-8} \ \mathrm{sec^{-2}}$ absolute.
Of course, increasing the actuation cycling frequency reduces the
spacecraft-to-TM motion and therefore reduces stiffness related acceleration
noise.
One could therefore choose a cycling frequency that is high enough
to reduce the stiffness related acceleration noise to below the fundamental
limit.
However, as we will see in Section \ref{sec:measNoise}, increasing the
actuation cycling frequency dramatically increases the interferometric
acceleration measurement noise.
\subsection{Actuation cross-coupling}
Two different methods can be used to suspend the test mass in all
rotational degrees of freedom and in all
translational degrees of freedom orthogonal to the sensitive direction.
These degrees of freedom can either be continuously supported or operated in
drift mode just like the sensitive degree of freedom.
All degrees of freedom can be operated in drift mode only if the resulting
motion does not cause loss of performance of the interferometer, which
measures displacement along the sensitive axis.
This generally requires a relatively high cycling frequency, which again results
in a relatively large acceleration measurement noise as discussed
in Section \ref{sec:measNoise}.
If we assume that all degrees of freedom except the degree of freedom along
the sensitive axis are suspended continuously against
the external forces applied to the host spacecraft, then we
must consider the additional
acceleration noise acting in the sensitive direction due to
actuation cross-coupling.
Actuation cross-coupling is the inadvertent forcing of the test mass in the
sensitive direction, which occurs when actuating the test mass in another
degree of freedom due to a small residual coupling $\lambda$.
This cross coupling can be as large as $\lambda = 5\times10^{-3}$ for inertial
sensors like that of LISA.
For both geodesy and gravitational wave applications, this cross coupling
acceleration exceeds the fundamental acceleration noise limit in the sensitive
direction.
If these cross coupling coefficients can be determined,
then using the known
applied forces in all degrees of freedom, the resulting force
in the sensitive direction can be calculated or eliminated with the
appropriate combination of applied electrode voltages.
Determination of such coefficients has been precisely demonstrated by the
GOCE mission and will also be performed during the LISA Pathfinder
mission \cite{LTPDA2011}.
One technique for determining these cross-coupling coefficients is to
dither the actuation voltages in each of the non-sensitive degrees of freedom
and fit a model of the cross-coupling to the interferometric measurement
of the test mass motion along the sensitive axis.
To roughly determine how well the cross coupling coefficient $\lambda$
can be determined using this approach,
the numerical simulation described above was modified to include a
dither voltage equivalent to a test mass acceleration of
$0.5 \ \mathrm{\mu m/sec^2}$ on a perpendicular axis with a frequency of
10 mHz.
For a $10^4$ sec simulation, the interferometer readout along the sensitive
axis with an assumed measurement noise of $10^{-11} \ \mathrm{m/Hz^{1/2}}$
was capable of estimating $\lambda$ with a relative accuracy of
$5\times10^{-4}$.
Examining Figure \ref{fig:dragAccel} we see that the atmospheric drag
acceleration at 1 mHz is $\sim 3 \times 10^{-8} \ \mathrm{m/sec^2 Hz^{1/2}}$.
If we assume a cross coupling coefficient of $\lambda = 5 \times 10^{-3}$
and a desired acceleration noise of
$4 \times 10^{-13} \ \mathrm{m/sec^2 Hz^{1/2}}$, then we must determine
$\lambda$ to a relative accuracy of 0.003 for the Earth geodesy DMA.
Likewise, from Figure \ref{fig:SRP}, the solar radiation pressure
acceleration noise around 1 mHz is
$1.6 \times 10^{-10} \ \mathrm{m/sec^2 Hz^{1/2}}$.
Again, using $\lambda = 5 \times 10^{-3}$
and a desired acceleration noise of
$3 \times 10^{-15} \ \mathrm{m/sec^2 Hz^{1/2}}$, we must determine
$\lambda$ also to a relative accuracy of 0.003 for the
gravitational wave DMA.
There does exist a fundamental limit to how well these cross-coupling
forces can be determined and subtracted in the analysis.
We assume that the best possible voltage reference
is limited to a relative voltage stability of $2\times10^{-6}$.
For the geodesy application it is reasonable to assume that the maximum
cross-track acceleration, which occurs for a polar Earth orbit, is $\sim 10^{-6}
\ \mathrm{m/sec^2}$.
Therefore, it is also reasonable to assume that the maximum dynamic range
of the cross track suspension force results in an acceleration that is ten
times this value, or $10^{-5} \ \mathrm{m/sec^2}$.
Finally, assuming a cross coupling coefficient of $5\times10^{-3}$, resulting
acceleration in the along track (sensitive direction) is,
\begin{equation}
a_x = (10^{-5})(2\times10^{-6})(5\times10^{-3}) = 10^{-13} \ \mathrm{m/sec^2},
\nonumber
\end{equation}
which is below the fundamental limit shown in Figure \ref{fig:GRACEaccelDM}.
For the gravitational wave application, we again assume a 500 kg
LISA-like spacecraft with a cross-sectional area of $4 \ \mathrm{m^2}$,
located 1 AU from the Sun.
The resulting nominal solar radiation pressure acceleration is $4\times10^{-8}
\ \mathrm{m/sec^2}$.
Therefore, if we assume that the maximum required test mass suspension force
corresponds to an acceleration of $4\times10^{-7} \ \mathrm{m/sec^2}$, the fundamental
cross-coupling acceleration noise limit is,
\begin{equation}
a_x = (4\times10^{-7})(2\times10^{-6})(5\times10^{-3}) =
4\times10^{-15} \ \mathrm{m/sec^2},
\nonumber
\end{equation}
which is roughly
equal to the LISA acceleration noise requirement at low frequency.
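Both fundamental limits above are the same three-factor product; a one-line check using the values quoted in the text:

```python
def cross_coupling_limit(a_suspension, voltage_stability, lam):
    """Fundamental cross-coupled acceleration: (max suspension accel)
    x (relative voltage stability) x (cross coupling coefficient)."""
    return a_suspension * voltage_stability * lam

a_geodesy = cross_coupling_limit(1e-5, 2e-6, 5e-3)   # geodesy: ~1e-13 m/s^2
a_gw      = cross_coupling_limit(4e-7, 2e-6, 5e-3)   # grav. wave: ~4e-15 m/s^2
```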
\section{Measurement noise}
\label{sec:measNoise}
In a DMA we
use a laser interferometer to measure the acceleration of a reference point
on the spacecraft (an optical bench) relative to the test mass, which
we assume is inertially fixed.
Therefore, in addition to the acceleration noise acting on the TM, we
must also consider the acceleration measurement noise of the interferometer.
For the discussions here, we will assume that the interferometer
exhibits a flat amplitude spectral density.
We analyze the position measurement provided by the interferometer between
kicks to estimate the acceleration of the spacecraft.
There are several approaches that can be used, including second order
finite differencing.
One of the best approaches is to fit a parabola to the sampled position data
between kicks.
We fit the following model to the measured data $z(t)$:
\begin{equation}
z(t) = x_0 + v_0 \, (t - t_0) + \frac{1}{2} \, a_0 \, (t - t_0)^2
\end{equation}
The fit parameters are $x_0$, the mean position, $v_0$, the mean velocity,
and $a_0$, the mean acceleration, which is what we wish to estimate.
This approach, which has the advantage of being linear and
using all of the measured data,
provides one acceleration measurement per kick period,
$T_{\mathrm{kick}}$.
The resulting acceleration measurement noise (standard deviation), $\sigma_a$,
depends linearly on the
interferometer noise level $\sigma_I$,
quadratically on $T_{\mathrm{kick}}^{-1}$,
and inversely on the square root of the number of samples, $N$.
If we assume a constant sampling frequency, say 10 Hz, and a small
but constant duty cycle, say 0.1, then the
number of samples $N$ is roughly proportional to $T_{\mathrm{kick}}$.
We then have the following relationship between acceleration measurement noise,
interferometer noise and kick period:
\begin{equation}
\sigma_a \approx \alpha \, \frac{ \sigma_I }{ T_{\mathrm{kick}}^{5/2} }
\end{equation}
The parameter $\alpha$, of order 1, depends on the cross correlation between
the mean acceleration $a_0$ and the constant and linear terms $x_0$ and $v_0$.
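The parabola-fit estimator can be sketched directly; the sampling rate, kick period, and test values below are illustrative, not flight parameters:

```python
import numpy as np

def estimate_acceleration(t, z):
    """Fit z(t) = x0 + v0*(t - t0) + 0.5*a0*(t - t0)**2 by least squares
    and return the mean acceleration a0 over one kick period."""
    t0 = t.mean()                       # center the fit to reduce covariance
    coeffs = np.polyfit(t - t0, z, 2)   # returns [0.5*a0, v0', x0']
    return 2.0 * coeffs[0]

# Noise-free sanity check: a constant acceleration is recovered exactly.
fs, T_kick = 10.0, 10.0                 # 10 Hz sampling, 10 s kick period
t = np.arange(0.0, T_kick, 1.0 / fs)
a_true = 3e-9                           # m/s^2, arbitrary test value
z = 1e-6 + 2e-7 * t + 0.5 * a_true * t**2
print(estimate_acceleration(t, z))      # ~3e-9
```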
Larger kick periods greatly decrease the acceleration measurement noise,
but also greatly increase the maximum displacement of the test mass
relative to its housing.
Larger kick periods also proportionally reduce the bandwidth of the measurement
since one acceleration noise measurement is made every $T_{\mathrm{kick}}$.
Assuming that the interferometer exhibits a white noise spectrum in
displacement, then the
acceleration measurement noise also has a white spectrum
(a linear function of a Gaussian is a Gaussian).
This is one disadvantage of the DMA: its measurement noise spectrum is
flat in acceleration, while a continuous test mass
displacement measurement, uninterrupted by kicks (e.g. using drag-free),
which is then twice differentiated, has an $f^2$ spectrum in acceleration.
Therefore, the measurement noise in acceleration for a drag-free system
is much lower at low frequencies, where most of the interesting science is,
assuming a given interferometer noise level.
Figure \ref{fig:measNoise} plots the relationship between acceleration
measurement noise and
interferometer noise for $T_{\mathrm{kick}} = 5 \ \mathrm{sec},
10 \ \mathrm{sec}$, and $50 \ \mathrm{sec}$.
The estimated acceleration measurement noise calculated using the standard
covariance analysis is shown in blue, while red curves show the measurement
error obtained through a numerical simulation.
The simulation assumed a spacecraft acceleration due to the solar radiation
pressure model discussed above, a 0.1 duty cycle, and a sampling rate
of 10 Hz.
The interferometer (IFO) noise was assumed to be white with a standard
deviation as shown on the plot after averaging over 1 sec (10 samples).
We see from Figure \ref{fig:measNoise} that the solar radiation pressure noise
at high frequencies does not adversely affect the acceleration measurement.
For a desired acceleration measurement noise of $3 \times 10^{-15}
\ \mathrm{m/sec^2 Hz^{1/2}}$ and a kicking period of 10 sec,
an interferometer with a white noise level of 40 $\mathrm{fm/Hz^{1/2}}$
is needed.
For a 50 sec kicking period, a 2 $\mathrm{pm/Hz^{1/2}}$ interferometer is
needed.
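The two quoted interferometer requirements are mutually consistent with the $\sigma_a \propto \sigma_I / T_{\mathrm{kick}}^{5/2}$ scaling: at fixed acceleration noise, the allowed interferometer noise grows as $T_{\mathrm{kick}}^{5/2}$.

```python
# Consistency check of the quoted requirements with the T_kick^(5/2) scaling.
sigma_I_10s = 40e-15                        # 40 fm/Hz^(1/2) at T_kick = 10 s
sigma_I_50s = sigma_I_10s * (50.0 / 10.0) ** 2.5
print(sigma_I_50s)                          # ~2.2e-12 m/Hz^(1/2), i.e. ~2 pm
```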
\begin{figure}
\centering
\includegraphics[width = 12 cm]{measNoiseLISA.eps}
\caption{Acceleration measurement noise as a function
of the kicking period and interferometer noise. \label{fig:measNoise}}
\end{figure}
These interferometer requirements only apply to the local (short-arm)
interferometer, which is far from being shot noise limited,
and not the inter-spacecraft (long-arm) interferometer.
In addition, these noise requirements only apply at frequencies above
$(1/T_{\mathrm{kick}}) = 0.1 \ \mathrm{Hz}$ in the case $T_{\mathrm{kick}} =
10 \ \mathrm{sec}$.
Therefore, we need not worry about challenging
low frequency measurement noise, for example due to temperature changes
and thermal expansion or index of refraction changes of materials.
Each short-arm interferometer measurement lasting $T_{\mathrm{kick}}$
seconds is independent of all others.
\section{DMA electrode geometry}
Figure \ref{fig:DMAelec} shows a proposed electrode geometry that is
slightly modified from that of the
LISA Pathfinder GRS \cite{strayFields2012}.
The geometry shown in Figure \ref{fig:DMAelec} maximizes the actuation
authority along the sensitive $x$-axis and at the same time
decouples $x$-axis actuation from that of all other degrees of freedom.
This allows a clean separation of drift-mode operation along $x$ and
continuous suspension in all other degrees of freedom.
A small port is needed in the middle of the $x$-axis electrode to allow
for the interferometric readout along $x$.
Mechanical pins required to cage the test mass during launch would be
located between the two injection electrodes along the $y$-axis.
\begin{figure}
\centering
\includegraphics[width = 10 cm]{DMAelectrodes.eps}
\caption{Proposed electrode geometry for the Drift Mode
Accelerometer. \label{fig:DMAelec}}
\end{figure}
\section{Testing drift mode accelerometry}
Precision torsion pendula thus far represent the best method of testing the
performance of precision inertial instruments in the laboratory
\cite{trentoPendulum}.
One such pendulum at the University of Florida
consists of a cross bar supported by a 1~m long, 50~$\mu$m diameter tungsten (W) fiber.
A light-weighted aluminum cubic test mass is mounted at each of the four
ends of the cross bar.
Two electrode housings surround two opposing test masses.
The cross bar is used to convert the rotational motion of the torsion
pendulum into mostly translational motion of the four test masses.
The electrode housings can both electrostatically force the test masses
and readout their position capacitively.
A small port is also incorporated into the electrode housings to allow
for an interferometric readout of the test mass' position.
The entire apparatus is housed in a vacuum chamber.
In order to test the performance of the DMA, the neutral orientation
of the pendulum can be biased so that the pendulum restoring force can
be made equivalent to the dc acceleration of the spacecraft
either due to atmospheric drag or solar radiation pressure.
The electrostatic actuation system can be operated with a low duty cycle
just as described above, and a laser interferometer can be used to estimate
the test mass' acceleration.
Higher frequency spacecraft disturbances can be simulated by varying the
neutral orientation of the pendulum or by applying noise voltages to the
electrodes that are equivalent to the spacecraft acceleration noise.
With this approach the acceleration noise floor can be measured
and compared with the acceleration noise floor obtained with
the actuation turned off and the pendulum in its neutral orientation
set with the test masses centered in their housings.
The best way to determine the performance of the DMA would be
to test the instrument in space.
The LISA Pathfinder mission offers one opportunity to do this.
If the drag-free and micropropulsion systems were turned off and both
test masses were operated in a drift mode, then the resulting differential
acceleration noise between the two test masses could be estimated using the
on-board laser interferometer.
All cross couplings and stiffnesses can be determined and accounted for
in the analysis of the data.
\section{Conclusion}
The drift mode accelerometer is a modified electrostatic accelerometer
potentially capable of acceleration noise performance
similar to that of drag-free
systems without the need for drag-free control or associated precision
propulsion.
A DMA consists of a dense test mass that is freely floating inside an
electrode housing, which can both sense its position capacitively and actuate
it electrostatically.
Unlike traditional electrostatic accelerometers, the suspension system
is operated with a low duty cycle and with a cycling frequency that is
chosen to be above the science signals of interest.
Measurement of spacecraft acceleration is made using a laser interferometer,
which is not limited by dynamic range.
Two applications of the DMA, Earth geodesy and
gravitational wave observation, are studied.
Both represent gravitational
science missions where the DMA might be used to replace drag-free operation.
For gravitational wave observation, the combination of the existing
LISA Pathfinder gravitational reference sensor and the LISA
local (short-arm) interferometer
can be operated as a drift mode accelerometer, with acceleration noise
performance close to that required for LISA.
Detailed modeling and analysis are still required to
fully determine the acceleration noise performance, instrument
requirements, and constraints.
Laboratory testing
using torsion pendula provides one promising approach for demonstrating the
performance and operation of the drift mode accelerometer.
\section{Acknowledgments}
The author would like to thank Guido M\"uller and Giacomo Ciani at the
University of Florida and William Weber at the University of Trento
for their valuable insights related to this work.
\vskip 10pt
\section{Introduction}
Detailed interstellar medium (ISM) and chemical abundance properties of galaxies are sensitive tests of the underlying physical processes that govern galaxy evolution. Examining these in more detail in galaxy scale simulations is an important and exciting new discriminator between models. There is a considerable body of work studying the chemodynamical evolution of galaxies using cosmological hydrodynamics simulations \citep[e.g.][]{OppenheimerDave2008,Wiersma2009,Shen2010,Simpson2013,Snaith2015,OWLS,EAGLE,FIRE}.
These simulations, coupled with additional attention to feedback processes, have made remarkable progress in reproducing global galaxy trends such as evolution of the mass-metallicity relationship \citep[e.g.][]{Obreja2014, Ma2016, Dave2017, Torrey2017} and more detailed quantities such as metallicity distribution functions (MDFs) and the evolution of individual species abundances \citep{Marcolini2008,Revaz2009,Sawala2010,RevazJablonka2012,Jeon2017,Hirai2017} .
However, much of this work has been done with Lagrangian smoothed particle hydrodynamics schemes, with a few recent exceptions \citep{Few2012,Simpson2013,Few2014,Vorobyov2015,Corlies2018}. In its original form, this scheme does not capture mixing between chemically inhomogeneous particles, as necessary for chemical evolution. Mixing can be modeled with sub-grid models of turbulent metal diffusion \citep[e.g.][]{Shen2010,Shen2013,Brook2014,Su2017a,Escala2018}, but there are many possible models and each is not necessarily applicable in every regime \citep[see ][]{Revaz2016}. While mixing occurs in Eulerian codes even without sub-grid models, numerical diffusion tends to result in over-mixing in simulations without sufficiently high spatial resolution. Molecular diffusion or even turbulent mixing is certainly not resolved in any galaxy-scale simulation with either method, requiring additional sub-grid models; this can be particularly important for understanding the initial pollution of otherwise pristine gas \citep[see ][ and references therein]{PanScannapiecoScalo2013,Sarmento2017}. Moreover, metal mixing efficiencies may vary species-by-species \citep[e.g.][]{Cohen2013, Roederer2014, FrebelNorris2015, Hirai2017, Cote2018, KrumholzTing2018}. Mixing behavior is tied critically to the feedback source (stellar winds, supernovae, and possibly more exotic sources) that inject metals into different phases of
the ISM with different energies and on different timescales; the observational effect of this is poorly understood, however. The variations in how different methods handle sub-grid metal injection and metal mixing schemes can lead to uncertainties in connecting models to observations and the fundamental physics that drives galaxy evolution.
Increasing physical resolution reduces reliance on sub-grid physics for mixing. However, at high particle mass resolution ($M \lesssim 10^3$ M$_{\odot}$) standard schemes for modeling stars as simple stellar populations lose validity \citep[as studied in detail by][]{Revaz2016}. Below 10$^4$ M$_{\odot}$, such schemes do not fully sample the initial mass function (IMF), and cannot be considered average representations of stellar populations. This is acutely problematic at low star formation rate densities with star particle masses comparable to or below the mass of the most massive individual star ($\sim 100$ M$_{\odot}$). At high star formation rate densities, an undersampled IMF in a single low mass star particle can be compensated for by having many adjacent star particles. Various approaches exist to address this issue \citep[e.g.][]{Kobayashi2000,WeidnerKroupa2004,Pflamm-AltenburgKroupa2006,RevazJablonka2012,Kroupa2013,Rosdahl2015,Su2018}, but none are without caveats \citep{Revaz2016}, save for schemes which begin to track the star-by-star information within a given particle by directly sampling the IMF at formation time \citep[e.g.][]{Hu2017}. The most straightforward solution is to remove the single stellar population formalism entirely and simply track stars as individual particles.
We introduce a new method for studying galactic chemical evolution that follows stars as individual star particles implemented in the adaptive mesh refinement code \texttt{Enzo}, designed for high resolution simulations of isolated galaxies. The relative simplicity of idealized, isolated galaxy evolution simulations allows for a focused, first-principles approach to studying multi-channel feedback mechanisms. We follow recent work using low mass dwarf galaxies as laboratories to study in detail how feedback governs galaxy evolution \citep{Forbes2016,Hu2016,Hu2017}.
Our work builds upon our current understanding of feedback and galactic chemodynamics while making three notable advances: 1) direct star-by-star modeling, 2) stellar winds from both massive and asymptotic giant branch (AGB) stars, and 3) using an adaptive ray tracing method to follow stellar ionizing radiation. We also include core collapse and Type Ia supernova feedback, photoelectric heating from stellar far ultra-violet (FUV) radiation, and Lyman-Werner dissociation from stellar radiation.
Using star-by-star modeling, we capture in more detail the stellar yields from individual stars released over their lifetime. This includes yields from massive and AGB stellar winds, and supernovae (SNe). In addition to better capturing how individual metal species enrich the ISM, this allows us to chemically tag individual stars. This ability opens an exciting new channel for testing models of galaxy evolution by leveraging current and ongoing observations probing the detailed distributions of chemical abundances of stars in the Milky Way and Local Group, such as APOGEE and APOGEE2 \citep{APOGEE2010,APOGEE}, the Gaia-ESO survey \citep{Gaia}, and GALAH \citep{GALAH}. This paper is the first in a series examining in detail the role that individual components of multi-channel stellar feedback play in galaxy dynamical and chemical evolution. In \cite{Emerick2018b} we investigate the importance of ionizing radiation in regulating star formation and driving outflows in our galaxy. In \cite{Emerick2018c} we explore how individual metal species mix within the ISM and are ejected via galactic winds.
We describe each mode of our multi-channel feedback
in detail in Section~\ref{sec:methods}, describe their implementation in an isolated dwarf galaxy simulation in Section~\ref{sec:IC}, show results from this simulation in Section~\ref{sec:results}, discuss the results in Section~\ref{sec:discussion}, and conclude in Section~\ref{sec:conclusion}. Readers who may want to only briefly skim (or skip) over the details of our included physics are advised to just read the beginning of Section~\ref{sec:methods}, which contains a complete---yet brief---summary of the included physics.
\section{Methods}
\label{sec:methods}
We produce high-resolution, galaxy-scale simulations tracking stars not as single stellar populations, but as individual stars sampled from an assumed IMF.
This allows us to follow star-by-star variations in feedback physics and stellar yields in detail. To properly model the ISM, we track non-equilibrium, primordial chemistry (including molecular hydrogen) using \texttt{Grackle} \citep{GrackleMethod}, with heating and approximate self-shielding from a metagalactic ultraviolet (UV) background. We assume collisional ionization equilibrium for all other elements and use updated \texttt{Cloudy} metal-line cooling tables consistent with our self-shielding approximation (see Appendix~\ref{appendix:cooling}). We also include an updated observationally motivated dust model for the low metallicity regimes studied here ($Z \lesssim 0.1$ Z$_{\odot}$). Each star is assigned basic properties including surface gravity, effective temperature, radius, and lifetime from tabulated stellar evolution models, which inform how the stars deposit their feedback. We directly track ionizing radiation from massive stars using an adaptive ray tracing radiative transfer method that includes the effects of radiation pressure on HI. In addition, we follow the optically thin, non-ionizing UV radiation from these stars that cause photoelectric heating and Lyman-Werner dissociation of molecular hydrogen. We track the stellar wind feedback and SNe from these stars, depositing individual metal yields from both. We include AGB wind feedback and yields for lower mass stars, and track these directly as Type Ia SN progenitors. We follow yields for 15 individual metal species (C, N, O, Na, Mg, Si, S, Ca, Mn, Fe, Ni, As, Sr, Y, and Ba), chemically tagging each star as it forms with the associated local gas abundances for each species. In addition, we track a total metal density field which is the sum of all metals, including those not directly tracked. This field is used to inform the heating/cooling physics, and determines the metallicity of each star at birth. These methods are discussed in full detail below.
\subsection{Hydrodynamics and Gravity}
\label{sec:hydro}
We use the adaptive mesh refinement hydrodynamics and N-body code \texttt{Enzo}\footnote{http://www.enzo-project.org} to simulate the chemodynamical evolution and detailed feedback physics in a set of high resolution, isolated, low-mass dwarf galaxies. \texttt{Enzo} is an open-source code that is undergoing continual, active development by many researchers across several institutions. We use a substantially modified version of the current development version of \texttt{Enzo} (version 2.X) in this work.\footnote{This version is contained in a publicly available fork of the main repository: https://bitbucket.org/aemerick/enzo-emerick. Specifically, simulations presented here were conducted at changeset 7001d99.} We solve the equations of hydrodynamics using a direct-Eulerian piecewise parabolic method \citep{ColellaWoodward1984, Bryan1995} and a two-shock approximate Riemann solver with progressive fallback to more diffusive Riemann solvers in the event that higher order methods produce negative densities or energies. We compute the total gravitational potential from gas self-gravity, stars, and a static background dark matter potential (see Section~\ref{sec:IC}). Self-gravity is computed with a multigrid Poisson solver. The collisionless star particles are evolved with an adaptive particle-mesh N-body solver at an effective force resolution of $\sim 2 \Delta x$, where $\Delta x$ is the local cell size.
We refine the mesh whenever the thermal Jeans length is no longer resolved by a minimum of 8 cells, continually refining a given region until this criterion is met or until the region reaches the maximum resolution (1.8~pc). At maximum resolution, the Jeans length can become under-resolved, leading to artificial numerical fragmentation. \citet{Truelove1997} showed that resolving the Jeans length by at least 4 cells is required to suppress this fragmentation.
We set the star formation density threshold to the value at which the Jeans length becomes resolved by only 4 cells in sub-200 K gas, or about 200 cm$^{-3}$ (as discussed further in Section~\ref{sec:star formation}). Forming stars from this gas will reduce the local density, ensuring the Jeans length is resolved. However, since star formation is not instantaneous, we employ a pressure floor to support gas against artificial fragmentation and collapse. A non-thermal pressure term is added to cells once their thermal Jeans length becomes unresolved. This prevents dense, self-gravitating gas from rapidly reaching densities significantly above our resolution limit. The use of a pressure floor is common in galaxy scale simulations with limited dynamic range \citep[e.g][]{Machacek2001, 2008ApJ...680.1083R}.
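The refinement logic can be sketched as follows; the Jeans length definition, mean molecular weight, and adiabatic index here are illustrative assumptions, so the exact density at which refinement triggers will differ from the quoted $\sim 200$~cm$^{-3}$ depending on these choices:

```python
import numpy as np

G = 6.674e-8          # gravitational constant, cgs
k_B = 1.381e-16       # Boltzmann constant, cgs
m_H = 1.673e-24       # hydrogen mass, g
pc = 3.086e18         # parsec in cm

def jeans_length(n, T, mu=1.22, gamma=5.0 / 3.0):
    """Thermal Jeans length (cm) for number density n (cm^-3) and
    temperature T (K); mu and gamma are assumed, not from the paper."""
    rho = mu * m_H * n
    cs2 = gamma * k_B * T / (mu * m_H)
    return np.sqrt(np.pi * cs2 / (G * rho))

dx = 1.8 * pc  # cell size at the maximum refinement level

def needs_refinement(n, T):
    """Flag a cell for refinement when the Jeans length is resolved
    by fewer than 8 cells."""
    return jeans_length(n, T) < 8.0 * dx
```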
Due to computational constraints we found it necessary to institute a temperature ceiling in low density, diffuse gas. These high temperatures, typically well above 10$^{7}$~K and up to 10$^{8}$~K, would place an onerous constraint on the limiting time step at high spatial resolution. At these temperatures, with typical velocities up to $\sim$10$^{3}$~km~s$^{-1}$, satisfying the Courant condition requires time steps on order of 100~yr. We institute a maximum temperature of 8$\times 10^6$~K in gas with densities between 10$^{-30}$~g~cm$^{-3}$ and 10$^{-26}$~g~cm$^{-3}$. These densities were somewhat arbitrarily chosen, but ensure that this threshold does not impact very low density gas in the halo of our dwarf galaxy or higher density gas in supernova injection regions. This threshold decreases the required run-time by factors of a few. The value of the temperature threshold was chosen to ensure the affected hot gas remained just above the high-temperature minimum of our cooling curve (see Appendix~\ref{appendix:cooling}.)
\subsection{Chemistry and Cooling Physics}
\label{sec:chemistry}
We use the chemistry and cooling library \texttt{Grackle}\footnote{https://grackle.readthedocs.io/en/grackle-3.0/} v3.0 to follow a nine-species non-equilibrium chemistry network (H, H$^+$, He, He$^+$, He$^{++}$, e$^{-}$, H$_2$, H$^{-}$, and H$_2^{+}$), which includes radiative heating and cooling from these species and metals.\footnote{We use a slightly modified version of the main \texttt{Grackle} repository, available at https://bitbucket.org/aemerick/grackle-emerick at changeset c2c0faf.} \texttt{Grackle} is a freely available, open source, multi-code library, designed to interface with a wide variety of astrophysical codes. We outline specific model choices made in our simulations and refer the reader to \citet{GrackleMethod} for a detailed discussion of the code. We apply the \citet{Glover2008} three-body rate for H$_{2}$ formation and include a model for H$_2$ formation on dust, dust heating, and dust cooling following the methods in \citet{2000ApJ...534..809O} and \citet{2005ApJ...626..627O} as included in \texttt{Grackle}. However, we update the default dust to gas ratio scaling in \texttt{Grackle} to account for the steeper scaling in low metallicity regimes ($Z \lesssim 0.1 Z_{\odot}$), using the broken power law scalings from \citet{Remy-Ruyer2014}. For metallicities above $\sim 0.1 Z_{\odot}$, this is equivalent to the default behavior of \texttt{Grackle}, where dust content scales linearly with metallicity.
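A broken power-law dust-to-gas scaling of this kind can be sketched as below; the low-metallicity slope, break location, and normalization are placeholders, not the fitted coefficients of \citet{Remy-Ruyer2014}:

```python
def dust_to_gas(Z_ratio, slope_low=3.1, Z_break=0.1):
    """Illustrative broken power-law dust-to-gas ratio vs. metallicity
    (Z_ratio = Z/Z_sun), normalized to an assumed Milky Way value of
    1/162 at solar metallicity. Values are placeholders, not the exact
    Remy-Ruyer et al. (2014) fit coefficients."""
    dg_mw = 1.0 / 162.0
    if Z_ratio >= Z_break:
        return dg_mw * Z_ratio                        # linear scaling
    # steeper power law below the break, continuous at Z_break
    return dg_mw * Z_break * (Z_ratio / Z_break) ** slope_low
```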
As part of the \texttt{Grackle} package, metal line cooling is modeled using pre-computed \texttt{Cloudy} \citep{Cloudy2013} \footnote{http://www.nublado.org/} tables interpolated as a function of density, temperature, and redshift, using the \citet{HM2012} UV metagalactic background. As discussed in more detail in Section~\ref{sec:diffusive heating}, we account for approximate self-shielding of H and He against this UV background. Using this prescription with metal line cooling tables computed under an optically thin assumption can lead to an order of magnitude overestimation of the cooling rate at certain densities, as discussed in \citet{Hu2017} and Appendix~\ref{appendix:cooling}. To address this issue, we use re-computed metal line tables consistent with the self-shielding approximation. We have made these new tables public in the main \texttt{Grackle} repository. These are discussed in greater detail in Appendix ~\ref{appendix:cooling}. Finally, we ignore the effect the stellar radiation field (see Section~\ref{sec:diffusive heating}) may have on the interpolated metal line cooling rates.
\subsection{Star Formation Algorithm}
\label{sec:star formation}
In order to resolve individual star formation events on galactic scales, we implement a stochastic star formation algorithm adopted from \citet{Goldbaum2015,Goldbaum2016}. Each cell at the maximum refinement level is capable of forming stars if it contains gas that meets the following local criteria on number density $n$, temperature $T$, cell mass $M$, and velocity $\vec{v}$: 1) $n > n_{\rm thresh}$, 2) $T < T_{\rm thresh}$, 3) $M > M_{\rm Jeans}$, and 4) $\vec{\nabla} \cdot \vec{v} < 0$, where $n_{\rm thresh}$ is a resolution dependent density threshold, $T_{\rm thresh}$ is a temperature threshold, and $M_{\rm Jeans}$ is the local thermal Jeans mass. Our fiducial values are $n_{\rm thresh} = 200 \mbox{ cm}^{-3}$ and $T_{\rm thresh}= 200$~K. We limit the fraction of a cell's gas mass that is converted into stars by requiring $M > f_{\rm thresh} M_{\rm max,*}$, where $f_{\rm thresh} = 2.0 $ and the maximum star mass $M_{\rm max,*} = 100 \mbox{ M}_{\odot}$. No star formation occurs when $M < f_{\rm thresh} M_{\rm max,*}$, ensuring that a star formation episode can not produce negative densities.
We make the common ansatz that star formation occurs by converting gas into stars in a free fall time $\tau_{\rm ff}$ with a star formation efficiency, $\epsilon_{\rm f} \simeq 0.02$. At high resolution, the choice of $\epsilon_{\rm f}$ should be irrelevant \citep{Orr2018, FIRE2}, as star formation is ultimately self-regulated by feedback.
The stellar mass formed during a timestep $\Delta t$ from a region with a total gas mass $M_{\rm gas}$ is
\begin{equation}
M_* = \epsilon_{\rm f} M_{\rm gas} \frac{\Delta t}{\tau_{\rm ff}}
\end{equation}
In practice, $\Delta t/\tau_{\rm ff} \ll 1$, and $M_*$ is smaller than the minimum star particle mass at parsec scale resolution. We therefore allow star formation to proceed stochastically, following the methods in \citet{Goldbaum2015, Goldbaum2016}, modified for variable stellar masses. In each cell that could potentially form stars, we compute the probability that 100 M$_{\odot}$ of gas will be converted into stars in that time step, and use a random number draw to determine whether or not star formation actually occurs. If it does, we randomly sample from
the adopted IMF until approximately 100 M$_{\odot}$ of stars form, keeping the last sampled particle when the total stellar mass formed exceeds 100 M$_{\odot}$. The total mass of formed star particles is subtracted from the gas mass in the star forming region to ensure mass conservation. We assume a Salpeter IMF \citep{Salpeter1955} with $\alpha = 2.35$, sampling over the range between a minimum stellar mass of 1 M$_{\odot}$ and an arbitrarily chosen maximum stellar mass of 100 M$_{\odot}$. Our lower limit on stellar masses ensures that we are able to both directly track all particles that contribute in some way to feedback and metal enrichment, and follow longer lived star particles, while reducing the computational expense of following a large number of low mass stars that have no dynamical impact on the galaxy evolution. By ignoring the formation of stars below 1~M$_{\odot}$, our model in effect spreads this mass into higher mass stars, changing the normalization of the IMF slightly from what would be expected for an IMF that extends below 1~M$_{\odot}$.
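The stochastic sampling step can be sketched as follows; the inverse-transform IMF draw is standard, but the function names and exact acceptance logic are illustrative rather than the code used in the simulations:

```python
import numpy as np

def sample_salpeter(rng, alpha=2.35, m_min=1.0, m_max=100.0):
    """Draw one stellar mass from a Salpeter IMF, dN/dm ~ m^-alpha,
    by inverse-transform sampling on [m_min, m_max]."""
    u = rng.random()
    p = 1.0 - alpha
    return (m_min**p + u * (m_max**p - m_min**p)) ** (1.0 / p)

def form_stars(rng, m_gas, dt, tau_ff, eps_f=0.02, m_event=100.0):
    """Stochastic star formation: with probability
    eps_f * m_gas * dt / tau_ff / m_event, sample the IMF until
    ~m_event of stars has formed, keeping the last (overshooting)
    star, as described in the text. Masses in solar units."""
    p_form = eps_f * m_gas * dt / tau_ff / m_event
    if rng.random() >= p_form:
        return []
    stars, total = [], 0.0
    while total < m_event:
        m = sample_salpeter(rng)
        stars.append(m)
        total += m
    return stars
```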
Formed stars are deposited with random positions within the star forming cell and assigned velocities equal to the cell bulk velocity with a 1~km~s$^{-1}$ velocity dispersion. This dispersion captures some of the unresolved gas motions below the resolution limit that are smoothed out by numerical diffusion; it is comparable to, but less than, the velocity dispersion of the coldest gas in our simulations. Stars are assigned metallicities corresponding to the metallicity of the star forming zone, and are chemically tagged with the 17 individual species abundances (H, He, and the 15 metals) that we follow in our simulations.
Stars evolve during the simulation, by losing mass from stellar winds and SNe as described below, and by changing types, but persist throughout the entire simulation. For example, low mass stars are tagged as white dwarfs (WDs) at the end of their life, which may eventually explode as a Type Ia SN (discussed below), after which they persist as massless tracer particle remnants. Finally, each star is marked as a ``must refine'' particle, requiring that it be surrounded by a four-cell region at the highest level of refinement. This ensures that both stellar winds and SNe feedback are maximally resolved, and that any ejected yields are deposited over a consistent physical scale throughout the simulation.
\subsection{Stellar Properties}
\label{sec:properties}
Given each star's birth mass and metallicity, we interpolate over the PARSEC grid of stellar evolution tracks \citep{Bressan2012} to assign a lifetime and AGB phase start time (if any) to it, as well as the effective temperature $T_{\rm eff}$ and surface gravity $g$ used in computing radiation properties (see Section \ref{sec:ionizing radiation}). We use the largest subset of the PARSEC models that are regularly sampled in our mass/metallicity space of interest, with 26 mass bins over $M_{*} \in \left[0.95, 120.0 \right] {\rm M_{\odot}}$ and 11 metallicity bins over $Z \in \left[10^{-4}, 0.017 \right]$. Although $T_{\rm eff}$ and $g$ evolve over time for stars, modifying stellar radiative properties, following a stellar evolution track for each of our stars is beyond the scope of this work. We instead fix these properties at their zero age main sequence values.
\subsection{Stellar Feedback and Chemical Yields}
\subsubsection{Stellar Yields}
\label{sec:yields}
For the first time in galaxy-scale simulations, we track galactic chemodynamical evolution using stellar yields ejected from star particles that represent individual stars. We adopt the NuGrid\footnote{http://www.nugridstars.org} collaboration's set of stellar yields given on a uniformly sampled grid in stellar mass and metallicity with 12 mass bins over $M_{*} \in \left[1.0, 25.0\right]$ M$_{\odot}$ and five metallicity bins at metal fractions of $Z =$ 0.02, 0.01, 0.006, 0.001, and 10$^{-4}$ \citep{Pignatari2016, Ritter2018}. This grid includes yields from the AGB phase of stars from 1--7 M$_{\odot}$, as well as yields from both stellar winds and core collapse SNe of massive stars from 12--25 M$_{\odot}$. We complement these tables with tables from Slemer et al.\ (in prep), based on the PARSEC stellar evolution tracks \citep{Bressan2012, Tang2014}, to track stellar winds for stars more massive than 25 M$_{\odot}$. We ignore SN yields from these stars (see next paragraph). We combine all stable isotope yields for a given element into a single elemental abundance for all stable elements from hydrogen to bismuth. Although we can in principle follow an arbitrary number of metal species, practical considerations of memory use prevent this in any given simulation. We refer the reader to previous uses of the NuGrid yields in one-zone galactic chemical enrichment models \citep{Cote2016, Cote2016_feb,Cote2017a} for a detailed discussion of how various model uncertainties can influence galactic chemical evolution.
Above some mass $M_{\rm trans}$ within the unsampled range of 7--12 M$_{\odot}$, stars no longer undergo AGB wind phases but end their lives instead as core collapse SNe. Where this transition occurs is uncertain, but is commonly taken to be at a mass $M_{\rm trans} \sim 8$--10~M$_{\odot}$; we take $M_{\rm trans} = 8$~M$_{\odot}$. In our model, stars below this mass eject their wind yields in an AGB phase only at the end of their lives, typically over a period comparable to or less than a few time steps ($\lesssim 10$ kyr). Stars above this mass are assumed to eject their stellar yields via line-driven stellar winds at a constant mass loss rate throughout their lifetime (neglecting Wolf-Rayet and luminous blue variable phases), ending their lives as a core-collapse SN (see Sect.~\ref{sec:stellar winds} for details on the wind energetics). Varying $M_{\rm trans}$ changes both the time at which yields for stars around this mass are ejected (for reference, the lifetime of an 8 M$_{\odot}$ star is about 35--40~Myr), and the energy injection from these winds. \citet{Cote2017a} explores how the choice of $M_{\rm trans}$ affects galaxy abundances in a one-zone model. We neglect the effects of binary star evolution on stellar feedback, and discuss the significance of this in Sect.~\ref{sec:binary stars}.
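The mass-dependent end states described above reduce to a simple branching rule, sketched here with the thresholds from the text (the channel labels are illustrative):

```python
M_TRANS = 8.0    # Msun: adopted AGB / core-collapse transition mass
M_DIRECT = 25.0  # Msun: above this, direct collapse with no SN ejecta

def end_state(mass):
    """Return the end-of-life channel for a star of the given birth mass,
    following the scheme described in the text (labels are illustrative)."""
    if mass < M_TRANS:
        return "agb_wind"          # wind yields ejected at the end of life
    elif mass <= M_DIRECT:
        return "core_collapse_sn"  # lifetime winds, then a core-collapse SN
    else:
        return "direct_collapse"   # lifetime winds, then silent collapse
```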
There are large uncertainties in stellar yields for stars more massive than 25 M$_{\odot}$ \citep[see ][and references therein]{Cote2016}. Indeed, even the exact fate of these stars is uncertain \citep[e.g.][]{Woosley2002,Zhang2008,Ugliano2012}, particularly as a function of metallicity \citep{Fryer2012} with potentially multiple stable and unstable regimes as a function of mass \citep{Heger2003}. Due to this uncertainty, and to avoid erroneously extrapolating from our yield tables, we adopt the simplest model and assume all stars above 25 M$_{\odot}$ end their life through direct collapse to a compact object with no further mass, metal, or energy ejection.
Type Ia SNe are an important additional source of galactic chemical enrichment. These iron-group-rich events are responsible for the $\sim$1 Gyr timescale turnover, or ``knee'', in $[\alpha/\rm{Fe}]$ vs $[\rm{Fe}/\rm{H}]$ diagrams. We use the Type Ia SN yields given in \citet{Thielemann1986}, adopting a Type Ia SN model as discussed in Sect.~\ref{sec:Type Ia}. We emphasize that we only track Type Ia SNe occurring within the population of stars formed in this model, neglecting SNe from any possible pre-existing population; this substantially limits the number of Type Ia SNe occurring during the initial gigayear in our models.
\subsubsection{Stellar Winds}
\label{sec:stellar winds}
Stellar winds are important sources of enrichment and feedback in galaxies at both early times, from massive stars, and late times, from AGB stars. Although the energy injected via winds over the lifetime of a cluster of stars is much less than that from SNe and radiation \citep{Shull1995}, stellar winds are potentially important sources of pre-SN feedback. We assume complete thermalization of the wind kinetic energy, taking the total energy injected in timestep $\Delta t$ as $E_w = \frac{1}{2}\dot{M} v^2_w\Delta t + E_{\rm th}$, where $E_{\rm th}$ is the thermal energy of the ejected gas mass $M_w = \dot{M}\Delta t$ given the star's interpolated effective temperature $T_{\rm eff}$. This mass and energy are injected evenly over a three-cell radius spherical volume centered on the star particle. The edges of this spherical region will only partially overlap grid cells, so we use a Monte Carlo sampling method to compute the volume of this overlap and scale the injection in these cells appropriately. We assume constant mass loss rates for all winds, as set by the yield tables, over either the lifetime of the star (for massive stars) or the length of the AGB phase (for low mass stars).
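The overlap calculation can be sketched as follows; this is a minimal stand-in for the scheme described above, not the Enzo implementation:

```python
import numpy as np

def cell_overlap_fractions(center, radius, edges, n_samples=100_000, seed=0):
    """Estimate, by Monte Carlo sampling, the fraction of a spherical
    injection region overlapping each cell of a uniform grid."""
    rng = np.random.default_rng(seed)
    # Rejection-sample points uniformly inside the sphere.
    pts = rng.uniform(-radius, radius, size=(3 * n_samples, 3))
    pts = pts[np.einsum('ij,ij->i', pts, pts) <= radius**2][:n_samples]
    pts += np.asarray(center)
    # Bin the samples onto the grid: counts / N approximates each cell's
    # share of the sphere's volume, used to weight mass and energy injection.
    hist, _ = np.histogramdd(pts, bins=(edges, edges, edges))
    return hist / len(pts)
```

The per-cell fractions then multiply $M_w$ and $E_w$ when depositing onto the grid.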
Massive stellar winds have typical velocities of order 10$^{3}$ km s$^{-1}$ \citep{Leitherer1992}. Following this gas while satisfying the Courant condition becomes prohibitively expensive, with time steps dropping to $\Delta t \sim$~100~yr. For this reason, we adopt the common simplification of reducing the wind velocity \citep[e.g.][]{Offner2015}. In our case, we fix massive stellar wind velocities to $v_w = 20$~km~s$^{-1}$ for stars above 8 M$_{\odot}$. Our initial tests show that turning off energy injection from stellar winds in this way does not significantly affect the global star formation rate of our galaxies. Due to the substantial additional computational expense of following stellar winds over gigayear timescales, we reserve examining the detailed importance of winds for future work. These points are discussed in more detail in Sect.~\ref{sec:stellar winds discussion}.
Stars that only undergo an AGB phase deposit their feedback at the end of their lives, as determined by the PARSEC evolution tracks. AGB wind velocities vary dramatically over their relatively short lifetimes, but are typically on the order of 10 km s$^{-1}$. For simplicity, we adopt a fixed wind velocity of 20 km s$^{-1}$ for all AGB stars as well.
\subsubsection{Core Collapse SNe}
\label{sec: core collapse}
Stars between $M_{\rm trans} = 8$ M$_{\odot}$ and 25 M$_{\odot}$ end their lives as core collapse SNe, ejecting mass and metals as determined by the NuGrid stellar yield tables, along with 10$^{51}$ erg of thermal energy. Due to the high resolution of our simulations (1.8~pc), we generally resolve the Sedov phase of each SN explosion well (see Appendix~\ref{appendix:SN}). We inject thermal energy alone in a three-cell radius spherical region around the star particle, which we find to be sufficient to resolve the SN explosions. We use the same Monte Carlo sampling method as for our stellar winds to map the spherical injection region to the grid. We continue to track any remaining stellar mass after the SN occurs as a massive remnant tracer particle. In future work these particles can be used to self-consistently account for more exotic sources of feedback and chemical enrichment, such as X-ray binaries and neutron-star binary merger events, which, while rare, could be important in long term galaxy evolution \citep[e.g.][]{Artale2015}.
\subsubsection{Type Ia SNe}
\label{sec:Type Ia}
We continue to track low mass stars ($M < 8$ M$_{\odot}$) after their death as WD particles, marking a subset as Type Ia SN progenitors. This is the most self-consistent model for Type Ia SNe in galaxy-scale simulations. We note however that for the low SFRs in our isolated dwarf galaxy simulation, the first Type Ia SN only appears after a few hundred megayears of simulation time. By the end of the simulation presented here (500~Myr), only 18 have gone off. At the end of their life, we assign a new mass to these particles following the initial-to-final-mass relation of \citet{Salaris2009}. We follow the common assumption that progenitor stars with initial masses between 3 and 8 M$_{\odot}$ form WDs that are Type Ia progenitors \citep[see][ and references therein]{Cote2017a}.
We compute the probability that a given Type Ia progenitor will explode as a function of time using an observationally motivated delay time distribution model. The Type Ia SN rate is taken to be a power law in time, $\Psi (t) \propto t^{-\beta}$, whose slope $\beta$ and normalization $N_{\rm Ia}/M_{\rm SF}$ are observables. The latter represents the number of Type Ia SNe per mass of star formation. By assuming an IMF $dN/dm$, one can write down the fraction $\eta$ of stars capable of forming a Type Ia SN progenitor that \textit{will} explode within a Hubble time. This is given as
\begin{equation}
\eta = \frac{N_{\rm Ia}}{M_{\rm SF}} \frac{\int_{M_{\rm min}}^{M_{\rm max}} m (dN/dm) dm }{\int_{M_{1}}^{M_{2}} (dN/dm) dm},
\end{equation}
where $M_{\rm min}$ and $M_{\rm max}$ are the lower and upper bounds of the IMF, and $M_{1}$ and $M_{2}$ are the lower and upper bounds of the range of stars that can form Type Ia candidates. The distribution slope $\beta$ is of order unity, with typical values between 1.0 and 1.2 \citep[see][for a recent review]{Maoz2014}. $N_{\rm Ia}/M_{\rm SF}$ can be derived by taking observed values of the Type Ia SN rate and integrating over a Hubble time. Typical values are on the order of 10$^{-3}$ M$_{\odot}^{-1}$ \citep{Maoz2014}. For our fiducial values, we adopt $\beta = 1.2$ \citep{Maoz2010} and $N_{\rm Ia}/M_{\rm SF} = 0.8\times 10^{-3}$ M$_{\odot}^{-1}$ \citep{GraurMaoz2013}. Given our choice of IMF, and with $M_{\rm min} = 1$~M$_{\odot}$, $M_{\rm max}=100$~M$_{\odot}$, $M_{1}=3$~M$_{\odot}$, and $M_{2}=8$~M$_{\odot}$, this gives $\eta = 0.041$.
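As an illustration, the expression for $\eta$ can be evaluated numerically for an assumed single power-law IMF. Our actual IMF choice is specified elsewhere in the paper, so the Salpeter-like slope used below is an assumption and the result will not in general reproduce $\eta = 0.041$:

```python
from scipy.integrate import quad

def ia_efficiency(n_ia_per_msun, alpha, m_min, m_max, m1, m2):
    """Fraction eta of Type Ia candidates (m1 < m < m2) that explode within
    a Hubble time, for an assumed power-law IMF dN/dm ~ m^-alpha."""
    mass_formed, _ = quad(lambda m: m**(1.0 - alpha), m_min, m_max)  # int m dN/dm dm
    n_candidates, _ = quad(lambda m: m**-alpha, m1, m2)              # int dN/dm dm
    return n_ia_per_msun * mass_formed / n_candidates

# Fiducial observables from the text; the IMF slope here is an assumption.
eta = ia_efficiency(0.8e-3, 2.35, 1.0, 100.0, 3.0, 8.0)
```

Note that $\eta$ scales linearly with the observed $N_{\rm Ia}/M_{\rm SF}$ and is independent of the IMF normalization.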
Finally, we can normalize $\Psi(t)$ to give the probability per unit time $\dot{P}(t)$ that a Type Ia candidate will explode at a time $t$ after the formation of its main sequence progenitor. Integrating this gives the total probability at any given time as
\begin{equation}
P(t) = \int \dot{P}(t)dt = \frac{\eta}{{ \int_{t_{\rm o}}^{t_{\rm H} + t_{\rm o}} \tau^{-\beta} d\tau}} \int t^{-\beta} dt,
\end{equation}
where $t_{\rm o}$ is the formation time of the WD and the leading term on the right hand side properly normalizes the total probability over a Hubble time to $\eta$. This naturally accounts for both a prompt and delayed Type Ia SN population in our simulations.
In practice, rather than drawing a random number for each candidate every timestep, we make a single random number draw, $u$, at the formation time of the white dwarf particle. For $u \in [0,1]$, we interpolate its position on a pre-tabulated and inverted cumulative probability distribution function to assign a single time at which the WD particle will explode as a Type Ia supernova. We institute a minimum delay time by defining $P(t)$ only for $t > t_o$, such that a particle cannot be assigned an explosion time before its formation time.
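A minimal sketch of this inverse-CDF assignment, using the analytic form of $P(t)$ above (the function name and tabulation size are our own choices):

```python
import numpy as np

def assign_ia_explosion_time(t_form, eta, beta=1.2, t_hubble=1.4e4, seed=None):
    """Draw one explosion time (Myr) for a new WD particle from the
    power-law delay-time distribution Psi(t) ~ t^-beta, normalized so the
    total explosion probability over a Hubble time is eta. Returns None
    if the candidate never explodes. Requires t_form > 0."""
    rng = np.random.default_rng(seed)
    u = rng.uniform()
    if u >= eta:                       # most candidates never go off
        return None
    # Tabulate the cumulative probability P(t), defined only for t > t_form.
    t = np.geomspace(t_form, t_form + t_hubble, 1024)
    cdf = eta * (t**(1.0 - beta) - t_form**(1.0 - beta)) / \
        ((t_form + t_hubble)**(1.0 - beta) - t_form**(1.0 - beta))
    return float(np.interp(u, cdf, t))  # invert the CDF at the drawn u
```

Because $\beta > 1$, the distribution is prompt-weighted: most assigned delay times fall well before the midpoint of the allowed interval.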
\subsubsection{Ionizing Radiation from Discrete Sources}
\label{sec:ionizing radiation}
Radiation feedback, including ionization, ionization heating, and radiation pressure, is an important source of feedback in galaxies. \ion{H}{2} regions carved out by stellar radiation change the ISM structure in regions where SNe eventually explode, generally increasing their dynamical importance. However, accounting for angular effects, radiation can also allow energy from SNe to dissipate more readily by escaping out of channels carved through dense clouds. Radiation feedback effects have been included with various approximations in a wide range of simulations \citep[e.g.][]{OppenheimerDave2006, Krumholz2007, HopkinsQuataertMurray2012, Agertz2013, Renaud2013, Stinson2013, Roskar2014, Ceverino2014, FIRE, AgertzKravtsov2015, Forbes2016, Hu2016, Hu2017, FIRE2}, with a smaller subset using full radiation hydrodynamics \citep{WiseAbel2012,Wise2012a,Wise2014,Kim2013a, Kim2013b,Pawlik2013,Rosdahl2015,Aubert2015,Ocvirk2016,BaczynskiGloverKlessen2015,Pawlik2017} due to the additional computational expense of direct ray tracing. As we seek a complete accounting of stellar feedback physics, we follow HI and HeI ionizing radiation from our stars through the ray tracing methods described below.
Enzo includes an adaptive ray tracing implementation, \textsc{Enzo+Moray} \citep{WiseAbel2011}, to solve the equations of radiative transfer coupled to the hydrodynamics of the simulation. We follow HI and HeI ionizing photons which are coupled to the \texttt{Grackle} primordial chemistry and heating and cooling routines to track photoionization and heating, as well as radiation pressure on hydrogen.
We determine the HI and HeI ionizing photon rates for each star using the OSTAR2002 \citep{Lanz2003} grid of O-type stellar models, appropriate for $M_{*} \gtrsim 15$~M$_{\odot}$ at solar metallicity\footnote{The exact stellar mass range on the OSTAR2002 grid is model dependent and a function of metallicity.}. We use linear interpolation in stellar effective temperature, surface gravity, and metallicity to compute the ionizing photon fluxes and rates for each star. Stars less massive than about 15 M$_{\odot}$ and very massive stars with sub-solar metallicity are generally not well sampled by the OSTAR2002 grid. In these cases, we integrate a black body spectrum at $T_{\rm eff}$ to obtain the ionizing photon fluxes, but normalize the result to be continuous with the OSTAR2002 grid (see Appendix \ref{appendix:radiation}).
Instead of assigning a fixed ionizing photon energy across all sources, we integrate over each star's blackbody curve to compute the average ionizing photon energy individually for each source (see Appendix~\ref{appendix:radiation}). The average energy for HI and HeI ionizing photons changes significantly over the OSTAR2002 temperature range $T_{\rm eff} \in \left[2.75, 5.5\right]\times10^{4}$~K, ranging from 15.72 eV to 20.07 eV and from 26.52 eV to 31.97 eV, respectively.
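The normalization-independent part of this calculation, the mean photon energy above an ionization threshold, can be sketched as follows (constants and integration bounds are illustrative):

```python
import numpy as np
from scipy.integrate import quad

H = 6.626e-27   # Planck constant [erg s]
KB = 1.381e-16  # Boltzmann constant [erg / K]
EV = 1.602e-12  # 1 eV in erg

def mean_photon_energy(T_eff, e_min_ev, e_max_ev=500.0):
    """Mean photon energy (eV) above a threshold for a blackbody at T_eff:
    the ratio of the energy flux to the photon number flux."""
    def b_nu(nu):  # Planck function; constant prefactors cancel in the ratio
        x = H * nu / (KB * T_eff)
        return nu**3 * np.exp(-x) / (1.0 - np.exp(-x))
    nu_lo, nu_hi = e_min_ev * EV / H, e_max_ev * EV / H
    energy, _ = quad(b_nu, nu_lo, nu_hi, limit=200)
    number, _ = quad(lambda nu: b_nu(nu) / (H * nu), nu_lo, nu_hi, limit=200)
    return energy / number / EV
```

For an O-star-like $T_{\rm eff}$ the mean HI ionizing photon energy lands a few eV above the 13.6~eV threshold, consistent with the range quoted above.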
We also include the effects of radiation pressure on HI. This has been shown to be important in suppressing the star formation rates of dwarf galaxies by influencing turbulence and the dense gas content of the ISM \citep{WiseAbel2012,Ceverino2014}. We ignore the absorption of ionizing radiation by dust and re-radiation in the infrared. This is included in other models \citep[e.g.][]{Rosdahl2015,FIRE,FIRE2} as this may increase by a factor of a few to several the effective radiation pressure \citep{ZhangDavis2017}. However, the importance of multiple scattering is still unclear. Other works have shown the effect to only increase the radiation pressure by a factor of order unity \citep{Krumholz2012,Krumholz2013a,Krumholz2018,Reissl2018,Wibking2018}. Due to these uncertainties, and given that our dwarf galaxy has a low dust content, and therefore a low infrared opacity, we ignore this effect.
\subsubsection{Diffuse Heating}
\label{sec:diffusive heating}
We include two forms of diffuse heating in our simulations, each tied directly to the non-equilibrium primordial chemistry network in \texttt{Grackle}: 1) the optically thin, uniform metagalactic UV background \citep{HM2012}, and 2) localized photoelectric heating from the FUV ($6~{\rm eV} < h\nu < 13.6~{\rm eV}$) radiation from each of our star particles. The FUV flux for each star is again obtained from the OSTAR2002 grid by directly integrating over the spectral energy distribution of each star on the grid. As with the ionizing radiation, we use an adjusted black body spectrum to compute the flux for stars off of the grid (see Sect.~\ref{sec:ionizing radiation} and Appendix~\ref{appendix:radiation}). Photoelectric heating can be a dominant heating mechanism in the ISM of the Milky Way \citep{Parravano2003}, and could be significant in regulating star formation in dwarf galaxies \citep{Forbes2016}. However, this conclusion warrants further research, as its exact importance in dwarf galaxies relative to other feedback mechanisms is contentious \citep{Hu2016,Hu2017}. Generally, models for photoelectric heating and Lyman-Werner radiation in hydrodynamic simulations of galaxies adopt a constant value or a static, radial profile. Only recently have the localization and time variation of these processes been considered.
Self-shielding of gas against the metagalactic UV background is important in high-resolution simulations, particularly for low-mass, low-metallicity dwarf galaxies, where the UV background is capable of gradually photoevaporating unshielded gas from the galaxy \citep{Simpson2013}. We have implemented the \citet{Rahmati2013} approximate self-shielding method in \texttt{Grackle} to account for HI self-shielding against the UV background \citep[see][for more details of this implementation]{GrackleMethod}. We assume HeI ionization generally follows HI, which allows us to approximate HeI self-shielding using the same form (A. Rahmati, private communication). We ignore HeII photoionization from the UV background entirely. For consistency, we additionally reduce the reaction rates for direct H$_2$ ionization (15.4 eV) and H$_2^+$ destruction (30 eV) by the same shielding factors computed for HI and HeI shielding.\footnote{Ignoring this effect leads to unrealistically high electron fractions in self-shielding gas from direct H$_2$ ionization, which drives significant production of H$_2$ via gas-phase reactions.} Accounting for self-shielding in this manner leads to an inconsistency in using tabulated, optically-thin metal line cooling rates from \textsc{Cloudy} \citep[see Section 4.1.1 of][]{Hu2017}. As mentioned previously, we have re-computed metal line cooling tables using \textsc{Cloudy} models of optically thick clouds to be consistent with our self-shielding prescription. This is described in more detail in Appendix~\ref{appendix:cooling}.
We assume the galaxy is mostly optically thin to stellar FUV and use only local approximations for shielding. We calculate the stellar FUV flux in each cell as summed over the contributions from each star to parameterize the local photoelectric heating rate as \citep{BakesTielens1994,Wolfire2003,Bergin2004}
\begin{equation}
\label{eq:PE}
\Gamma_{\rm pe} = (1.3 \times 10^{-24}~\rm{erg~s^{-1}})\, \epsilon n_{\rm H} G_{\rm eff} D
\end{equation}
where $\epsilon$ is an efficiency factor that depends on $G_{\rm{o}} T^{1/2} /n_{\rm{e}}$, the attenuated local FUV flux \begin{equation} G_{\rm eff} = G_{\rm o}~\exp(-1.33\times10^{-21}~D~N_{\rm H}), \end{equation} $D$ is the dust-to-gas ratio, normalized to the solar value, and $G_{\rm o}$ is the local FUV flux normalized to the solar neighborhood \citep{Habing1968}. Aside from a different treatment of $D$ and the attenuation, both discussed below, this is equivalent to the method used in \citet{Hu2016,Hu2017}.
The value of $D$ is computed consistently with our \texttt{Grackle} dust model, using the broken power law fit from \citet{Remy-Ruyer2014}, as described in Section~\ref{sec:chemistry}. The extremely low dust-to-gas ratio in our modeled galaxies reduces the photoelectric heating rate by approximately two orders of magnitude compared to a model in which $D$ scales linearly with metallicity down to very low metallicity. At these low metallicities, the FUV field only becomes optically thick on length scales of $\sim$100 pc for densities of $n \sim 10^2$~cm$^{-3}$. Given that the ambient density of the ISM is generally 1--10~cm$^{-3}$, we can safely assume the FUV field to be optically thin. However, we do include a localized attenuation prescription that may influence high-density or metal-enriched regions of the galaxy. We approximate $N_{\rm H}$ in the equation above locally as $n_{\rm H}\Delta x$, where $\Delta x$ is the cell width; this approximation is necessarily resolution dependent, but substantially more computationally efficient than direct ray tracing.
Properly computing $\epsilon$ in Eq.~\ref{eq:PE} requires an accurate accounting of the electron number density $n_e$. This is non-trivial in dense, neutral regions, where $n_e$ is dominated by contributions from carbon, dust, and PAH ionizations; our chemical network only includes contributions from H, He, and H$_2$ to $n_e$. Instead, we use a power-law fit of $\epsilon$ as a function of $n_{\rm H}$ from the \citet{Wolfire2003} model of $\Gamma_{\rm pe}$ in the solar neighborhood (see Figure 10b of that work); we adopt $\epsilon = 0.0148n_{\rm{H}}^{0.235}$.
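Assembling the pieces above, the local heating rate can be sketched as a direct transcription of the stated formulas (not the simulation code itself):

```python
import numpy as np

def photoelectric_heating_rate(n_H, G_o, D, dx_cm):
    """Local photoelectric heating rate (erg s^-1 cm^-3) from the text:
    local column N_H ~ n_H * dx, attenuated flux G_eff, the epsilon(n_H)
    power-law fit, and the Gamma_pe expression."""
    N_H = n_H * dx_cm                           # local column approximation
    G_eff = G_o * np.exp(-1.33e-21 * D * N_H)   # attenuated FUV flux
    eps = 0.0148 * n_H**0.235                   # Wolfire et al. (2003) fit
    return 1.3e-24 * eps * n_H * G_eff * D
```

At typical ISM densities and parsec-scale cells the exponential attenuation is negligible, as argued above; it only becomes significant for very large columns.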
\subsubsection{Lyman-Werner Radiation}
\label{sec:LW}
In addition to the Lyman-Werner radiation from the UV background, we account for localized Lyman-Werner flux from each of our stars to compute the total, local H$_2$ dissociation rate. We compute the stellar Lyman-Werner flux again from the OSTAR2002 grid by integrating the spectral energy distributions over photon energies from 11.2~eV to 13.6~eV (see Appendix~\ref{appendix:radiation}). Given the local Lyman-Werner flux, the H$_2$ dissociation rate is taken as $k_{\rm diss} = \sigma_{\rm H2} F_{\rm LW}$, where $\sigma_{\rm H2}$ is the H$_2$ dissociation cross section. We account for approximate H$_2$ self-shielding against these sources of Lyman-Werner flux by implementing the Sobolev-like approximation from \citet{Wolcott-Green2011} in \texttt{Grackle}.
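A sketch of this shielding factor, using the Draine \& Bertoldi (1996)-style fitting function with the modified exponent of \citet{Wolcott-Green2011}; the coefficients below are as commonly quoted and should be checked against the original papers:

```python
import numpy as np

def h2_self_shielding(N_H2, b_kms=3.0):
    """Approximate H2 self-shielding factor multiplying k_diss, for an H2
    column N_H2 (cm^-2) and Doppler parameter b (km/s). Coefficients as
    commonly quoted; verify against Wolcott-Green et al. (2011)."""
    x = N_H2 / 5.0e14   # column in units of 5e14 cm^-2
    b5 = b_kms / 10.0   # Doppler parameter in units of 1e5 cm/s
    return (0.965 / (1.0 + x / b5)**1.1
            + 0.035 / np.sqrt(1.0 + x) * np.exp(-8.5e-4 * np.sqrt(1.0 + x)))
```

The factor approaches unity at negligible column and suppresses $k_{\rm diss}$ by many orders of magnitude in well-shielded gas.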
\section{Galaxy Initial Conditions}
\label{sec:IC}
We apply these methods to a first test case of the evolution of an isolated, low mass dwarf galaxy. The galaxy is constructed to have initial properties similar to those observed for the Local Group dwarf galaxy Leo P \citep{Giovanelli2013,McQuinn2013,McQuinn2015a,McQuinn2015}, although it is not intended to be a matched model to this galaxy. Leo P is gas rich, with a neutral hydrogen mass $M_{\rm HI} = 8.1\times 10^{5}$~M$_{\odot}$ and stellar mass $M_{*} = 5.6^{+0.4}_{-1.9} \times 10^{5}$~M$_{\odot}$ \citep{McQuinn2015a} extending to an observed neutral hydrogen radius $r_{\rm HI} = 500$~pc. Leo P has a low metallicity, with an oxygen to hydrogen abundance ratio (O/H) of $12 + \rm{log(O/H)} = 7.17 \pm 0.04$ \citep{Skillman2013}, or a metallicity of $Z \sim 5.4\times10^{-4}$ ($Z/Z_{\odot} = 0.03$, adopting $Z_{\odot} = 0.018$ from \citet{Asplund2009}). Our dwarf galaxy model is constructed without an initial background stellar population, with a total gas mass of $1.8 \times 10^{6}$~M$_{\odot}$, of which $M_{\rm HI} = 1.35 \times 10^{6}$~M$_{\odot}$, and $Z = 4.3\times 10^{-4}$, comparable to the average redshift $z = 0$ metallicity from the stellar models computed in \citet{McQuinn2015}.
The galaxy initially consists of a smooth, exponential gas disk in hydrostatic equilibrium with a static, background dark matter potential. The gas profile follows \citet{Tonnesen2009} and \citet{Salem2015}, with
\begin{equation}
\rho_{\rm gas} (R,z) = \frac{M_{\rm o}}{2\pi a^2_{\rm gas}b_{\rm gas}} 0.5^2{\rm sech}\left(\frac{R}{a_{\rm gas}}\right){\rm sech}\left(\frac{z}{b_{\rm gas}}\right)
\end{equation}
where $a_{\rm gas}$ and $b_{\rm gas}$ are the radial and vertical scale lengths of the gas disk, and $M_{\rm o}$ is approximately 70\% of the total gas mass. We set $a_{\rm gas} = 250$~pc, $b_{\rm gas} = 100$~pc, and $M_o = 1.26\times 10^6$~M$_{\odot}$. We adopt a \citet{Burkert1995} dark matter potential with virial mass $M_{\rm vir} = 2.48\times 10^{9}$~M$_{\odot}$ and virial radius $R_{\rm vir}~=~27.4$~kpc, as defined in \citet{BryanNorman1998}, and scale radius $r_{\rm s} = 990$~pc. This gives a maximum circular velocity $V_{\rm max} = 30.1$~km~s$^{-1}$ at $R_{\rm vmax}~=~3.2$~kpc. These parameters were adopted specifically to match the observed dynamical mass of Leo P interior to 500~pc, $M_{\rm dyn} (r < 500~\rm{pc}) = 2.7\times 10^{7}$ M$_{\odot}$, and represent virial properties within the halo mass range expected for galaxies of this size \citep{Ferrero2012,Read2017}.
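For reference, the initial profile can be evaluated directly; the sketch below implements the equation exactly as written above, including its constant $0.5^2$ prefactor, with the fiducial parameters:

```python
import numpy as np

def rho_gas(R_pc, z_pc, M_o=1.26e6, a_pc=250.0, b_pc=100.0):
    """Initial gas density (Msun pc^-3) of the model disk, implementing
    the sech profile equation of the text with its fiducial parameters."""
    prefac = 0.5**2 * M_o / (2.0 * np.pi * a_pc**2 * b_pc)
    # sech(x) = 1 / cosh(x)
    return prefac / (np.cosh(R_pc / a_pc) * np.cosh(z_pc / b_pc))
```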
Following the initialization procedure of \citet{Hu2017}, we use artificial SN driving to generate realistic initial densities and turbulent properties in the galaxy's ISM. This prevents an otherwise uniform collapse of the gas disk at the beginning of the simulation. During this period, SNe explode at a fixed rate of $0.4$~Myr$^{-1}$, corresponding to the SFR obtained from the central HI surface density and the relation presented in \citet{Roychowdhury2009}. We stop the artificial driving 25~Myr after the first star particle forms. These artificial SNe do not drive chemical evolution of the galaxy; their metal yields correspond to the mean ISM abundances. We note this initial driving is ad hoc in the sense that we do not include other effects from the stellar population that would have produced these SNe.
We emphasize here that our model, with no initial stellar distribution, is not intended to identically reproduce the evolution of Leo P, which has formed stars continuously over cosmological timescales. In addition, the mass fractions for the individual metals we track are set to zero so that we follow only the evolution of metals self-consistently produced in the simulations. Otherwise, the galaxy chemical properties would be dominated by the somewhat arbitrary choice of initial abundances. In some ways this is similar to the first pollution of pristine gas in the early Universe, but we note that we cannot directly make this comparison as the environmental conditions and UVB properties were different at high redshift, and we explicitly ignore Pop III and Pop II stellar evolution. Instead, this work is intended as a numerical experiment to investigate how metal enrichment from ongoing star formation proceeds in a gas / dark matter environment similar to low mass halos at both low redshift and the early Universe. The subsequent metal enrichment from the stars in our simulation can be thought of as tracking a change in abundances from arbitrary initial conditions. We will discuss how to make proper comparisons to observed stellar and gas abundance properties of dwarf galaxies in future work, where we will investigate the abundance evolution of our simulations in more detail.
\section{Results}
\label{sec:results}
We present our initial results here, providing an overview of the morphological (Section~\ref{sec:structure}), star formation (Section~\ref{sec:sfr}), ISM (Section~\ref{sec:phase}), radiation field (Section~\ref{sec:ISRF}), outflow (Section~\ref{sec:outflows}), and chemical (Section~\ref{sec:chemical evolution}) properties of our dwarf galaxy during the 500~Myr after the first new star forms. Unless otherwise noted, $t = 0$ is defined as the time at which that first star particle forms, which is 43~Myr after the actual beginning of the simulation run. The galaxy disk is defined as the fixed physical region within a cylindrical radius of 600~pc and vertical height $|z| < 200$~pc relative to the center of the galaxy. ISM properties are calculated considering only the gas contained within the disk of the galaxy. We include a resolution study in Appendix \ref{appendix:resolution_study}.
Our analysis makes extensive use of the open-source \textsc{yt} toolkit \citep{yt}. All analysis scripts used in this work can be found at https://github.com/aemerick/galaxy-analysis at changeset dd76ad10.
\subsection{Morphological Structure and Evolution}
\label{sec:structure}
We begin by characterizing the morphological properties of our dwarf galaxy, as illustrated in the series of face-on and edge-on images presented in Fig.~\ref{fig:panel_x} and Fig.~\ref{fig:panel_z}. These figures show inside-out star formation, as star formation propagates from the inner regions outward during the galaxy's evolution. This is clear in the face-on panels, which demonstrate both the growth of the stellar population from the center outward and the inside-out decline of gas densities as a result of stellar feedback driven winds. The central region quickly fills with warm and hot gas generated by radiation feedback and SNe, respectively. Both the ISM and the halo gas are multi-phase, containing cold, warm, and hot gas with a range of densities, as evident in the temperature slices in both panels. The ISM properties are quantified further in Section~\ref{sec:phase}.
\begin{figure*}
\centering
\includegraphics[width=0.975\linewidth]{multiplot_4x4_x}
\caption{Edge-on views of our dwarf galaxy at four different times in its evolution, 0, 150, 300, and 500 Myr after the beginning of star formation. Shown are the density weighted projection of number density (top row), temperature slices (second row), HI column density (third row), and H$_2$ column density (fourth row). Each individual main sequence star particle is shown in the number density projections as a single white dot.}
\label{fig:panel_x}
\end{figure*}
The initially puffy gas distribution collapses to a thin disk, with scale heights between 10--30~pc, as shown by the blue line in Fig.~\ref{fig:scale_height}. This figure shows the scale height of all gas in the disk, averaged over 20~Myr periods centered on each given time. Stellar feedback substantially heats this initially thin disk, creating typical scale heights of 50--120~pc. Towards the end of the simulation, the half-light radius is $391 \pm 19$~pc, where the uncertainty represents the 1$\sigma$ variation in this quantity during the final 20~Myr. Although the disk remains thin beyond the half-light radius, with a scale height of around 50~pc, it is fully resolved at all radii. By the end of the simulation, the majority of the disk has a scale height of $\sim$100~pc.
Constraining the gas scale height in ultra-faint dwarf galaxies observationally is challenging. For Leo P, located at 1.7~Mpc, HI observations capable of detecting the diffuse HI throughout the galaxy have a resolution of 100--200~pc, with higher resolution observations identifying only the densest HI clumps in the galaxy \citep[e.g.][]{Bernstein-Cooper2014}. In the final column of Fig.~\ref{fig:panel_z}, the peak HI column density reaches $N_{\rm HI} = 9.4 \times 10^{21}$~cm$^{-2}$, but is located in dense regions with sizes $< 10$~pc. At a resolution of 100~pc, the peak column density in an edge-on view is $N_{\rm HI} = 4.3 \times 10^{20}$~cm$^{-2}$, and $N_{\rm HI} = 2.8 \times 10^{20}$~cm$^{-2}$ in a face-on view. At 500~pc resolution, these values drop to $N_{\rm HI} = 7.5 \times 10^{19}$~cm$^{-2}$ and $N_{\rm HI} = 6.2 \times 10^{19}$~cm$^{-2}$, respectively. These column densities are consistent with the resolution-dependent peak column densities found in the low mass dwarf galaxy sample of \citet{Teich2016}, and with the observed peak column density of Leo P, $N_{\rm HI} = 6.5 \times 10^{20}$~cm$^{-2}$, observed at a spatial resolution of about 33~pc.
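The beam-matching comparison above amounts to block-averaging the simulated column density map before taking the peak; a minimal sketch:

```python
import numpy as np

def peak_at_resolution(column_map, fine_pc, coarse_pc):
    """Block-average a column density map from fine_pc to coarse_pc pixels
    and return the new peak value (assumes an integer coarsening factor)."""
    f = int(round(coarse_pc / fine_pc))
    ny, nx = column_map.shape
    trimmed = column_map[:ny - ny % f, :nx - nx % f]  # crop to a multiple of f
    blocks = trimmed.reshape(trimmed.shape[0] // f, f, trimmed.shape[1] // f, f)
    return blocks.mean(axis=(1, 3)).max()
```

Averaging over larger beams can only dilute compact peaks, which is why the quoted peak column densities fall steadily with coarser resolution.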
\begin{figure*}
\centering
\includegraphics[width=0.975\linewidth]{multiplot_4x4_z}
\caption{Same as Fig.~\ref{fig:panel_x}, but showing face-on views.}
\label{fig:panel_z}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.975\linewidth]{scale_height}
\caption{The total gas scale height at various times throughout the simulation. These times match the images in Figs.~\ref{fig:panel_x} and ~\ref{fig:panel_z}.}
\label{fig:scale_height}
\end{figure}
\subsection{Star Formation Rate and Mass Evolution}
\label{sec:sfr}
We present the star formation rate (SFR) and core collapse SN rate (SNR) evolution of our dwarf galaxy as measured in 10 Myr bins in the left panel of Fig.~\ref{fig:sfr_mass_evolution}.\footnote{We do not show the Type Ia rate as there have only been 16 by the end of the simulation.} Within the first 50 Myr of evolution the SFR rises quickly to nearly $10^{-3}$ M$_{\odot}$ yr$^{-1}$, then declines to $\sim 3 \times 10^{-4}$~M$_{\odot}$~yr$^{-1}$ until a significant drop off at about 130 Myr. The remainder of the evolution is characterized by periods of little to no star formation interspersed with periods of continual, but low, star formation around $10^{-4}$~M$_{\odot}$~yr$^{-1}$. The SNR tracks the SFR with a time delay, with roughly one core collapse SN per 100 M$_{\odot}$ of star formation. Averaging over the entire simulation time, we obtain $\langle\rm{SFR}\rangle = 1.19 \times 10^{-4}$ M$_{\odot}$ yr$^{-1}$. We discuss how the SFR of this galaxy compares to observed galaxies in Section~\ref{sec:observation}.
We note that the granularity in our star formation algorithm creates a lower limit to the SFR that depends on the period $\Delta t$ over which the SFR is measured. Since we produce stars in $\sim 100$~M$_{\odot}$ sets, the smallest value of our measured SFR is $\sim 100~{\rm M_{\odot}} / \Delta t$. For $\Delta t = 10$~Myr this is $10^{-5}$~M$_{\odot}$~yr$^{-1}$. Removing the granularity requires a fundamental change in our star formation algorithm, likely at the cost of increased complexity and computational expense. Sink particles, which track pre-main sequence stellar mass accumulation, would be the most viable way to do this \citep[see for example][]{Krumholz2004,Federrath2010,GongOstriker2013,BleulerTeyssier2014,Sormani2017}.
At initialization, all H and He in our dwarf galaxy is neutral, with no molecular hydrogen component. By the time of first star formation ($t=0$ in Fig.~\ref{fig:sfr_mass_evolution}), HI still dominates the mass of the galaxy, with a molecular hydrogen mass fraction of only $\sim$~0.3~\%. The molecular component declines rapidly as this gas is both converted into stars and destroyed by stellar radiation feedback. For the remainder of the simulation, the H$_2$ mass generally increases, with small fluctuations during periods of star formation, reaching a peak mass fraction of 5\% at 500~Myr. The growth of the molecular fraction is due in part to a decline in the total gas content of our galaxy from feedback-driven galactic winds; during these outflows, the densest, molecular gas is preferentially retained over the more diffuse ISM. Examining the molecular properties of the ISM in low mass dwarf galaxies in more detail is a vital avenue of future research, as there are significant observational uncertainties in deriving the H$_2$ content of galaxies in this low metallicity regime \citep{Leroy2008,McQuinn2012,Amorin2016}. The molecular properties of our galaxy are discussed further in context with other works in Section~\ref{sec:observation}.
\begin{figure*}
\centering
\includegraphics[width=0.475\linewidth]{sfr_snrx100}
\includegraphics[width=0.475\linewidth]{mass_evolution}
\caption{Left: The SFR and core collapse SN rate in our dwarf galaxy in 10 Myr bins. Broken portions of this histogram are time periods with no star formation or supernovae. Note that the SN rate has been scaled by a factor of 100 to fit on the same vertical axis as the SFR. Right: The evolution of the total gas mass (black), HI mass (blue), H$_2$ mass (orange), and stellar mass (red) in the disk of our galaxy over time.}
\label{fig:sfr_mass_evolution}
\end{figure*}
\subsection{ISM Properties}
\label{sec:phase}
Our simulations include sufficient resolution and microphysics to capture a multi-phase medium within the ISM and halo of our simulated galaxy. We define five different gas phases following those defined in \citet{Draine2011}: molecular gas, cold neutral medium (CNM), warm neutral medium (WNM), warm ionized medium (WIM), and hot ionized medium (HIM). We emphasize that the molecular ISM phase is defined as all cells with M$_{\rm H_2}$/M$_{\rm gas} \equiv$ f$_{\rm H_2} > 0.5$, and is thus somewhat different from simply considering the total H$_2$ content. By this definition, although our galaxy certainly contains molecular hydrogen, molecular gas as a phase does not exist; the peak f$_{\rm H_2}$ in any single cell remains below 30\%. See Appendix \ref{appendix:phases} for a quantitative definition of these phases. The properties of these phases are regulated by the complex interplay between cooling, turbulence, self-gravity, and radiative and shock heating from stellar feedback throughout the galaxy's evolution. Here we discuss the thermodynamic properties of the gas within the inner halo of our dwarf galaxy.
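The five-phase decomposition can be sketched as a simple per-cell classifier. Only the f$_{\rm H_2} > 0.5$ molecular criterion is taken from the text; the temperature and ionization cuts below are illustrative placeholders, since the quantitative definitions live in Appendix~\ref{appendix:phases}:

```python
def classify_phase(T, x_ion, f_H2):
    """Sketch of the five-phase decomposition (molecular, CNM, WNM, WIM, HIM).
    The f_H2 > 0.5 molecular criterion is stated in the text; the temperature
    and ionization thresholds here are ILLUSTRATIVE PLACEHOLDERS, not the
    paper's actual appendix definitions."""
    if f_H2 > 0.5:
        return "molecular"
    if T < 1.0e2:                      # placeholder CNM ceiling [K]
        return "CNM"
    if T < 1.0e5:                      # warm gas, split by ionization fraction
        return "WIM" if x_ion > 0.5 else "WNM"
    return "HIM"
```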
Fig.~\ref{fig:phase} shows the temperature-density distribution of all gas within 0.25 R$_{\rm vir}$ of the center of the galaxy, averaged over the time period 300--350~Myr. One can readily identify the two regimes containing most of the mass in the simulation: low density, warm gas produced through ionization and SN heating, and cold, high density gas that makes up most of the mass in the galaxy's disk (see Fig.~\ref{fig:ISM_evolution}). Several notable features of the distribution include: broad ranges of temperature even in quite dense gas, perhaps produced by photoionization and photoelectric heating, a substantial amount of extremely cold gas below 10~K, and the lack of well-defined thermal phases due to the complexity of both the heating and cooling in a turbulent medium. We note that we are likely missing important physics, such as cosmic ray heating and ionization, that would prevent the formation of the coldest gas in this diagram (below about 10~K), but we do not expect this to significantly alter our results. Our artificial temperature ceiling in diffuse gas (see Section~\ref{sec:hydro}) is seen clearly by the horizontal feature in the top left. The boxed regime in the lower right corner shows our star formation density and temperature threshold. Gas in this regime is rapidly consumed by star formation and subsequent feedback. Given the small size of our dwarf galaxy, the total amount of mass in this regime at any given instant can be small, but does appear in this time average.
\begin{figure}
\includegraphics[width=0.95\linewidth]{phase_diagram}
\caption{The temperature vs.\ number density phase diagram of our dwarf galaxy simulation showing all gas interior to 0.25~$R_{\rm vir}$, averaged over a 50~Myr period from t~=~300~Myr to t~=~350~Myr. The dashed lines are lines of constant pressure, separated by factors of 10. The region in the lower right corner indicates our density and temperature thresholds for star formation.}
\label{fig:phase}
\end{figure}
The mass of the ISM in our dwarf galaxy is dominated by the CNM for the entirety of the simulation, as shown in the left panel of Fig.~\ref{fig:ISM_evolution}. The mass fractions of the remaining phases are ordered by temperature, with the WNM as the next most significant component. The WNM is initially comparable to the CNM, but comprises a mass fraction of about 0.1 by the end of the run. The WIM and HIM fluctuate significantly, corresponding to fluctuations in the SFR and associated feedback, but are subdominant throughout the simulation. During periods of peak stellar feedback, however, the WIM can reach a mass fraction above 0.1. Although the CNM dominates the mass fraction, it is a negligible component of the ISM volume, which is WIM dominated. However, the large, anti-correlated fluctuations in the WNM and HIM make these three phases often comparable. Together, these figures better quantify the general properties observed in the panel plots in Fig.~\ref{fig:panel_x} and Fig.~\ref{fig:panel_z}.
These results are in contrast with those found for the more massive dwarf galaxy modeled by \citet{Hu2016,Hu2017}. They find the mass and volume fraction of the ISM are nearly entirely dominated by warm gas (defined in those works as gas with $100~{\rm K} < T < 3\times 10^{4}$~K), with cold gas having between 1 and 10\% of the mass, and occupying negligible volume. Hot gas (defined as gas with $T > 3\times 10^{4}$~K) occupies 10\% of the volume, with negligible mass, in their galaxy, while our WIM alone occupies $>$ 50\% of the volume. Our lower mass, lower metallicity galaxy contains more cold gas (by mass fraction) and hot gas (by volume fraction) than seen in the more massive dwarf galaxy in these works. The driver of these differences, which are likely somewhat related to differences in the dark matter halo potential, will be investigated in future work. We have compared our cooling curves to those used in \citet{Hu2017} and found them to be comparable; though this could contribute to the differences, it is likely not the dominant source.
\begin{figure*}
\includegraphics[width=0.45\linewidth]{phase_mass_fraction_evolution_log}
\includegraphics[width=0.45\linewidth]{phase_volume_fraction_evolution_log}
\caption{The evolution of the mass and volume fractions for each phase of the model galaxy's ISM. See Appendix~\ref{appendix:phases} for definitions of each phase.}
\label{fig:ISM_evolution}
\end{figure*}
\subsection{Interstellar Radiation Field}
\label{sec:ISRF}
The interstellar radiation field (ISRF) of our galaxy varies dramatically in both space and time, as has been seen previously in works modeling varying radiation fields both as expected from stellar motions in our own galaxy \citep{Parravano2003}, and in models including radiation \citep[e.g.][]{Hu2017}. This is not particularly surprising in our low SFR regime, where there can be large fluctuations over time as individual massive stars form, move about, and evolve. In Fig.~\ref{fig:ISRF} we present azimuthally averaged radial profiles of the ISRF in various bands, time averaged over 100 Myr during the period of star formation from roughly 250--350~Myr. The top panel shows $G_{\rm o}$, the ISRF flux between 6--13.6~eV normalized to the value in the solar neighborhood of the Milky Way (see Sec.~\ref{sec:diffusive heating}). The averaged profile varies between values of 0.02 and 0.1, with peaks located at radii of the few active star formation regions. At any given radius, the ISRF varies by over two orders of magnitude during this period of time.
The bottom panel gives the HI ionizing photon flux from stellar radiation. The ionizing radiation profile follows a similar trend, yet with significantly more variation, anywhere from two to four orders of magnitude. As this radiation is followed through radiative transfer, the profile encodes information about local attenuation by dense, neutral gas. This is the main driver of the differences between the two panels. The total fluctuation in both panels is due in part to the low-level, stochastic star formation in our galaxy. A higher star formation rate would produce a more regular population of massive stars and more uniform (in time) ISRF.
\begin{figure}
\includegraphics[width=0.95\linewidth]{G_o_profile} \\
\includegraphics[width=0.95\linewidth]{ionizing_photon_profile}
\caption{
Azimuthally averaged radial profiles of the ISRF in the mid-plane of our galaxy in two different bands, time-averaged over 50 Myr from 300 -- 350~Myr. Here we define the midplane as within 2.5~$dx$ of z = 0, or 4.5~pc. The top panel gives $G_{\rm o}$, the flux of radiation between 6--13.6 eV normalized to the value in the solar neighborhood, shaded between minimum and maximum values at each position, with the average shown as a black line. The bottom panel gives the HI ionizing stellar radiation flux. Since this radiation is tracked directly through radiative transfer, the minimum value at every radius is zero at some point during this interval. For this reason we only shade between the first quartile and maximum values. HeI ionizing radiation is very similar to HI ionizing radiation, with a small vertical offset, and is not shown for clarity. In the top panel, the minimum of the vertical axis is the UVB value of $G_o$.}
\label{fig:ISRF}
\end{figure}
\begin{figure*}
\includegraphics[width=0.45\linewidth]{g_o_2D_phase}
\includegraphics[width=0.45\linewidth]{q_o_2D_phase}
\caption{Single-snapshot 2D radial profile plots at 300~Myr of the ISRF in two flux bands, $G_{\rm o}$ and HI ionizing radiation, illustrating the full dynamic range of radiation flux at a given radius in the galaxy. Here, we include all gas within the mid-plane of our dwarf galaxy. Since a majority of the mass of the galaxy is in the cold phase (see Fig.~\ref{fig:phase}), and is therefore optically thick to HI ionizing radiation, it does not show up in the HI ionizing radiation diagram. This gas readily appears in the $G_{\rm o}$ diagram since we assume it to be optically thin, though we do apply a localized shielding approximation.}
\label{fig:ISRF_2D}
\end{figure*}
To further quantify the local variations in these radiation fields, we present the full distribution of $G_{\rm o}$ and the HI ionizing flux in Fig.~\ref{fig:ISRF_2D} at a single snapshot at 300~Myr. This diagram shows how dramatic the increase in ISRF near young, massive stars is (the spikes in both diagrams), while much of the mid-plane sees an ISRF orders of magnitude lower. The striking contrast between the two diagrams is due to the shielding of the HI ionizing flux in the most massive (cold and dense) regions of the galaxy through the radiative transfer calculations; shielding of the FUV radiation is approximate and in general weaker, making these regions more prominent in the left hand figure (the pink/white clumps). From both of these diagrams, it is clear that the ISRF of a low mass dwarf galaxy varies greatly over time and space in a way that cannot be appropriately captured by an analytic profile. Although one could adopt an averaged radial profile to provide a realistic, global source of energy for thermal pressure support of the gas against collapse, it is unclear how sufficient this would be in suppressing star formation. In particular, the large increases around sites of recent star formation could be important sources of feedback to destroy molecular clouds and reduce their effective star formation efficiency. It remains to be seen which of these two modes of feedback is more important in regulating star formation.
\subsection{Outflow Properties}
\label{sec:outflows}
The recent FIRE cosmological simulations of dwarf galaxies over a range of dark matter halo masses find that they exhibit large outflows, with mass loading factors ($\eta = \dot{M}_{\rm out}/<\rm{SFR}>$) on the order of 100--1000 \citep{Muratov2015}. However, comparable models of idealized dwarf galaxies with detailed feedback and physics treatments find more modest mass loading factors \citep{Hu2016,Hu2017}. In Fig.~\ref{fig:mass_outflow} we present the mass outflow and mass loading rates for our dwarf galaxy as a function of time, computed at four different radii from the galaxy. We follow \citet{Muratov2015} in defining the mass outflow rate at any given radius to be the sum of the outflow rate in all cells in a spherical annulus of width $dL$ centered at that radius,
\begin{equation} \label{eq:dotM}
\dot{M}_{\rm out} = \sum M_{\rm gas} \times v_{\rm r} / dL.
\end{equation}
We choose $dL = 0.1~R_{\rm vir}$, or 2.74 kpc.
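As a concrete illustration, Eq.~[\ref{eq:dotM}] can be evaluated over simulation cells as sketched below. The restriction to outflowing cells ($v_{\rm r} > 0$) and the choice of consistent units are our assumptions for this sketch, not a statement of the exact analysis pipeline:

```python
import numpy as np

def mass_outflow_rate(r, m_gas, v_r, r_shell, dL):
    """Sum of M_gas * v_r / dL over cells in a spherical annulus of width dL
    centered at r_shell (the Mdot_out definition in the text). Only
    outflowing cells (v_r > 0) are counted here (our assumption); units
    must be consistent, e.g. Msun, kpc, and kpc/yr give Msun/yr."""
    in_shell = np.abs(r - r_shell) < 0.5 * dL
    out = in_shell & (v_r > 0.0)
    return np.sum(m_gas[out] * v_r[out]) / dL
```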
The total mass outflow rates and mass loading factors at 0.1, 0.25, 0.5, and 1.0 $R_{\rm vir}$ are shown in Fig.~\ref{fig:mass_outflow}. Generally, other works use gigayear timescale measurements of the SFR to compute the mass loading factor. For consistency with those works, we use the 500~Myr average SFR for computing the mass loading factor. The outflow rate at 0.1 $R_{\rm vir}$ is high, corresponding to mass loading factors between 20--100 throughout the simulation time. This declines towards larger radii, however, with substantially less outflow past the virial radius. \citet{Muratov2015} finds typical mass loading factors at 0.25 R$_{\rm vir}$ on the order of 20--40 for galaxies with $v_{c} = 30$ km s$^{-1}$ at low redshift, consistent with our results. The fluctuations in both of these panels are directly correlated with the SFR, with increased outflow during periods of star formation, and decreased outflow during periods of quiescence.
Interestingly, the v$_{\rm c}\sim$ 30 km s$^{-1}$ halos examined in \citet{Muratov2015} are more massive than the M$_{\rm vir} = 2.5\times 10^9$ M$_{\odot}$ halo examined here by a factor of a few. Using a fit provided in \citet{Muratov2015} to extrapolate and compare $\eta$ at fixed halo mass, one would expect mass loading factors on the order of 100 at 0.25 R$_{\rm vir}$ for our dwarf galaxy, a factor of a few higher than what we find. These differences could be attributed to our lack of cosmological evolution in these isolated simulations, but a more robust comparison ultimately requires a larger set of dwarf galaxy simulations. We note, however, that our results are closer to the \citet{Muratov2015} results than those in \citet{Hu2016,Hu2017}, which find lower mass loading factors even closer to the disk, at 0.05 $R_{\rm vir}$, between 1 and 10 for a dwarf galaxy with M$_{\rm vir} = 10^{10}$ M$_{\odot}$; certainly this implies even smaller mass loading factors at 0.25 $R_{\rm vir}$.
\begin{figure*}
\includegraphics[width=0.45\linewidth]{total_mass_outflow}
\includegraphics[width=0.45\linewidth]{total_mass_loading}
\caption{Spherical mass outflow rates (Eq.~[\ref{eq:dotM}]) and mass loading rates over time at four different radii from the galaxy.}
\label{fig:mass_outflow}
\end{figure*}
Detailed outflow properties, beyond outflow rates and mass loading factors, can help discriminate between the model dependent feedback physics included in galaxy simulations. In Fig.~\ref{fig:outflow_velocity} we present radial velocity distributions of all material outside our dwarf galaxy's disk, and within the halo, broken into three gas phases. Gas with a negative velocity is moving towards the center of the halo. Roughly 25\% of this mass is inflowing, mostly with modest negative velocities, and corresponds to previously ejected gas mixing and recycling throughout the halo. Half of the outflowing gas (positive velocities) is moving at velocities below 30 km s$^{-1}$, 75\% at velocities below 70 km s$^{-1}$, and 95\% at velocities below 100 km s$^{-1}$. Although the mass contained in the tails of these distributions is a sub-dominant fraction of the total, there is still a non-negligible amount of gas moving at velocities of a few hundred km s$^{-1}$, with a peak velocity of over 700 km s$^{-1}$. The WNM and WIM together dominate the mass of both the inflowing and outflowing gas, with the WIM and HIM dominating at velocities above 200 km s$^{-1}$. The dominant launching mechanism in this simulation is SN feedback, which generates a rapidly moving and volume-filling WIM and HIM, consistent with the results in \citet{Hu2016,Hu2017}. However, as shown, the HIM, which is mostly the SN ejecta itself, comprises very little of the outflow by mass. Most of the outflowing gas (by mass) comes from the warm phase, pushed out by the high pressure, fast moving HIM. Some of this warm gas certainly originates from adiabatically and radiatively cooled HIM, however. The amount of transfer between phases in the halo of our galaxy will be investigated in future work.
\begin{figure}
\includegraphics[width=0.95\linewidth]{velocity_distribution_time_average}
\caption{The time averaged radial velocity distribution of outflowing material external to the disk and within the virial radius of our dwarf galaxy. This is averaged over the same time interval as Fig.~\ref{fig:ISRF}. The outflowing material is multiphase, broken into WNM, WIM, and HIM. See Section~\ref{sec:phase} for definitions of these regimes. We note that WNM is often labeled simply as ``cold'', or gas below $T = 10^{4}$~K. There is little to no outflowing mass in the CNM.}
\label{fig:outflow_velocity}
\end{figure}
\subsection{Metal Evolution}
\label{sec:chemical evolution}
\subsubsection{Metal Enriched Outflows}
Dwarf galaxies efficiently, and preferentially, eject metals released in stellar feedback from their shallow potential wells \citep{MacLowFerrara1999,FerraraTolstoy2000}. This has been better quantified recently both observationally \citep[e.g.][]{Kirby2011-metals,Zahid2012,Peeples2014,McQuinn2015} and with more detailed cosmological simulations \citep{Simpson2013,Angles-Alcazar2017,Muratov2017}. In the top panel of Fig.~\ref{fig:metal_evolution}, we give the metal mass loading factor for our galaxy over time, at the same radii as in Fig.~\ref{fig:mass_outflow}. The parameter used to quantify metal outflow efficiencies varies among works. Here, we define the metal mass loading factor as the metal outflow rate divided by the metal production rate, or
\begin{equation} \label{eq:eta-metal}
\eta_{\rm metal} = \frac{\dot{M}_{\rm metal}}{\rm{SFR} \times (M_{\rm metal}/M_{*})},
\end{equation}
where $\dot{M}_{\rm metal}$ is the metal mass outflow rate, $M_{\rm metal}$ is the total mass in metals produced, and $M_{*}$ is the total mass in stars. These metal loading factors fluctuate significantly with the SFR, just as was shown in Fig.~\ref{fig:mass_outflow}, reaching a minimum of about 0.05, but peaking at around 5. On average, over the simulation time, $\eta_{\rm metal}$ is below unity (around 0.5). Recent simulations of outflows from a Milky Way type disk indicate typical $\eta_{\rm metal}$ comparable to our results, usually between 0.5 and 1 \citep{Li2017,Fielding2017}. \citet{Muratov2017} computes a slightly different quantity for their galaxies, the normalized metal outflow rate $\eta_{Z} = \dot{M}_{\rm metal}/{\rm SFR}$, finding values of about 0.02 at 0.25~$R_{\rm vir}$ regardless of galaxy circular velocity. Our galaxy is consistent with this value, with an average $\eta_Z = 0.015$, fluctuating between 0.007 and 0.02.
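The two bookkeeping quantities compared here can be written out explicitly; a minimal sketch of Eq.~[\ref{eq:eta-metal}] and the normalized metal outflow rate $\eta_Z$ (function and variable names are ours):

```python
def eta_metal(mdot_metal, sfr, m_metal_produced, m_star):
    """Metal mass loading factor: metal outflow rate over metal production
    rate, with the production rate approximated as SFR * (M_metal / M_*),
    following the definition in the text."""
    return mdot_metal / (sfr * (m_metal_produced / m_star))

def eta_Z(mdot_metal, sfr):
    """Normalized metal outflow rate, Mdot_metal / SFR, the alternative
    quantity computed by Muratov et al. (2017)."""
    return mdot_metal / sfr
```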
\begin{figure}
\includegraphics[width=0.9\linewidth]{metal_mass_loading} \\
\includegraphics[width=0.9\linewidth]{metal_retention}
\caption{{\bf Top}: Metal mass loading factor (see Eq.~[\ref{eq:eta-metal}]) at the same radii as in Fig.~\ref{fig:mass_outflow}. This is the ratio between the metal outflow rate and the metal production rate. {\bf Bottom}: The fraction of metals contained in the disk, CGM, and outside the halo of our dwarf galaxy over time. In both panels, we consider the total mass of all individually tracked metal species, which is zero at initialization, not the aggregate total metallicity field, which is non-zero at initialization.}
\label{fig:metal_evolution}
\end{figure}
These large metal mass loading factors indicate that a majority of the metals produced in our dwarf galaxy are ejected from the disk. This is quantified in the lower panel of Fig.~\ref{fig:metal_evolution}, where we show the mass fraction of metals in the disk, circumgalactic medium (CGM), and outside the virial radius of our galaxy over time. After the first 20 Myr, SN-driven winds rapidly drive out large quantities of metals from the disk and into the galaxy's halo. This continues throughout the simulation, with only $\sim$4\% of produced metals residing in the disk of the galaxy. It only takes 150 Myr for some metals to reach the virial radius of the halo, with a steadily increasing fraction continually leaving the virial radius until about 350 Myr where the fraction levels off to just above 50\%. Likewise, the CGM metal content continually decreases until the end of the simulation from loss through the virial radius of the halo. See Section~\ref{sec:obs_metals} for further discussion.
\subsubsection{Differential Evolution of Elements Within the ISM}
It is important to understand how metals from each source of stellar yields enrich the ISM. Observations of more massive dwarf galaxies than those simulated here indicate fairly uniform radial gas-phase metallicity profiles, even beyond the stellar radius \citep[e.g.][]{Werk2011,Belfiore2017}. This requires that metal mixing and transport occur on hundred megayear timescales, much more rapidly than the gigayear timescale expected from assuming transport at the cold gas sound speed. Therefore, either metals are transported first through a hot phase with high sound speed, or through efficient turbulent mixing within the ISM \citep[e.g.][]{Avillez2002,Tassis2008,YangKrumholz2012}. It remains uncertain how metal abundances vary in detail within these galaxies, beyond one-dimensional radial profiles, and whether or not abundance distributions depend on the metal species. It is even more unclear how metals are transported and distributed within low-mass dwarf galaxies, which generally host too few \ion{H}{2} regions for a detailed examination.
We demonstrate the power of our simulations, which capture a realistic ISM at high resolution with multiple feedback sources, by addressing these questions in Fig.~\ref{fig:metal_slices}. The left panel gives the abundance ratio of N to O throughout the ISM. The right two panels give slices of number density (top right) and temperature (bottom right) in the mid-plane of a portion of our dwarf galaxy. These show regions with dense, cold gas clouds ($n \sim 100~\rm{cm}^{-3}$, $T \lesssim 100~$K) connected by cold filaments, warm, diffuse gas ($n\sim 0.1$~cm$^{-3}$, $T\sim 10^{4}$~K), and hot gas from a recent SN explosion ($T\sim10^{6}$~K).
\begin{figure*}
\includegraphics[width=0.98\linewidth]{log_NO_panel}
\caption{Three slices in the mid-plane of our dwarf galaxy at 300~Myr after the start of star formation showing the variation in gas phase metal abundances. The left slice gives the ratio of the abundance between N and O, normalized to the solar abundance, while the number density and temperature are shown on the right. In each, we mark massive stars with active stellar winds as white points and SNe and AGB-phase enrichment events that occurred in the preceding 5 Myr as black stars and orange diamonds respectively.}
\label{fig:metal_slices}
\end{figure*}
As shown in the left panel, [N/O] varies significantly over this section of the ISM with notable differences across the various phases and ISM structures. The hottest gas, dominated by recent supernova explosions, is overabundant in oxygen relative to solar (purple). However, the relative abundance of nitrogen increases in the WIM and WNM, which are overabundant in nitrogen relative to solar (green).
At our adopted metallicity, nitrogen is predominantly produced in AGB star winds, with very little production in core collapse SNe and winds from more massive stars. Therefore, nitrogen is injected into the ISM with significantly less energy ($v \sim 10$~km~s$^{-1}$) than elements produced in SNe, such as oxygen ($v\sim 10^3$~km~s$^{-1}$). Given the variations in Fig.~\ref{fig:metal_slices}, the energetic differences between injection sources can drive abundance variations within the ISM of our dwarf galaxy. The regions most rich in N are sites of recent AGB winds that have yet to mix with the rest of the ISM. This suggests that metal mixing within the ISM (and also metal ejection from the ISM) is species dependent. A more detailed analysis is beyond the scope of this work, but we investigate this in detail in \cite{Emerick2018c}.
\section{Discussion}
\label{sec:discussion}
\subsection{Comparison to Observed Low Mass Dwarf Galaxies}
\label{sec:observation}
As noted in Section~\ref{sec:IC}, our galaxy model is not intended to directly reproduce the observed properties of Local Group ultra-faint dwarfs. Notably, our initial conditions neglect a pre-existing stellar population and are only followed for 500~Myr, a fraction of the age of $z = 0$ dwarf galaxies. However, we can still place our model in context with observations using simple comparisons to the star formation rate (Section~\ref{sec:gas_sf}), molecular gas (Section~\ref{sec:molecular gas content}), and metal retention fraction (Section~\ref{sec:obs_metals}) properties of observed dwarf galaxies. We show that these properties are broadly consistent with observations.
\subsubsection{Gas and Star Formation}
\label{sec:gas_sf}
The observational sample of isolated, gaseous, low mass dwarf galaxies is limited compared to more massive galaxies, but has improved substantially with recent blind and targeted HI surveys \citep[e.g.][]{Giovanelli2005, Geha2006, Geha2012, Walter2008, Cannon2011, Haynes2011, Hunter2012, Bradford2015, James2015, Tollerud2015, Sand2015, Wang2017}. However, the sample of isolated, gaseous dwarf galaxies with $M_{*} < 10^{7}$~M$_{\odot}$ remains small. In Figure~\ref{fig:KS} we show where our galaxy lies relative to the observed Kennicutt-Schmidt relation and extended Schmidt law for low mass galaxies. In both diagrams, our simulations are given by the colored points, sampled every megayear throughout the entire simulation.
Although simple to measure in simulations, these quantities are challenging to directly compare to observations. We have attempted to make a reasonable analog to how $\Sigma_{\rm sfr}$ and $\Sigma_{\rm gas}$ are measured observationally for low mass dwarfs \citep[see][]{Roychowdhury2014}. We define $\Sigma_{\rm sfr} = \dot{M}_{*,10} / A_{*,10}$, where $\dot{M}_{*,10}$ is the SFR measured over the preceding 10~Myr, and $A_{*,10}$ is the area of the disk within the radius of the outermost star formed within the previous 10 Myr. Likewise, $\Sigma_{\rm gas} = M_{\rm gas,10} / A_{*,10}$, where M$_{\rm gas,10}$ is the total gas mass within this defined disk. However, the total gas content cannot be determined observationally. To match this limitation, we follow \cite{Roychowdhury2014} and take $\Sigma_{\rm gas, obs} = 1.34 \times \Sigma_{\rm HI}$, where the factor 1.34 attempts to account for He. We note that there is generally no correction made for any possible H$_{\rm 2}$ or HII content. As shown in Section~\ref{sec:molecular gas content}, these components may be significant.
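These observational analogs amount to simple ratios, sketched below. Only the 10 Myr averaging window and the 1.34 He correction come from the text; the input values and function names are illustrative:

```python
import numpy as np

def surface_densities(m_stars_formed, m_gas, r_out_pc):
    """Sigma_sfr = Mdot_{*,10} / A_{*,10} and Sigma_gas = M_{gas,10} / A_{*,10},
    with A_{*,10} the disk area within the outermost star formed in the
    preceding 10 Myr (masses in Msun, radius in pc; our unit choices)."""
    dt = 10.0e6                        # yr, SFR averaging window from the text
    area = np.pi * r_out_pc**2         # pc^2
    return (m_stars_formed / dt) / area, m_gas / area

def sigma_gas_obs(sigma_HI):
    """Observed proxy: 1.34 * Sigma_HI, the factor 1.34 accounting for He."""
    return 1.34 * sigma_HI
```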
\begin{figure*}
\centering
\includegraphics[width=0.475\linewidth]{all_gas_schmidt_law_evolution}
\includegraphics[width=0.475\linewidth]{all_gas_efficiency_evolution}
\caption{The Kennicutt-Schmidt law (left) and extended Schmidt law (right) relationships for our galaxy as measured every megayear, plotted as points colored by time, with dark / purple early and light / green late. See text for details of the calculation. Recent observations from the SHIELD sample \protect\citep{Teich2016} are plotted as black points with error bars. On the left, we also give the best fit line to galaxies from the FIGGS sample from \protect\cite{Roychowdhury2014}, and on the right we also show the best fit line and 1 $\sigma$ errors from \protect\cite{Shi2011}. There is no clear correlation with time in this diagram.}
\label{fig:KS}
\end{figure*}
We include recent observational constraints on these relationships in Figure~\ref{fig:KS}. Our galaxy fluctuates significantly about both relationships with no clear trends in time. However, in both cases, it is consistent with the available observational sample. At times our galaxy exhibits gas surface densities below those probed by the observations; even there it remains consistent with the trend defined at higher densities, though with larger scatter towards lower star formation rate densities and efficiencies.
In constructing our galaxy model, we employed \textit{no} tuning of the underlying physics, adopting only canonical values for any available free parameters. It is thus non-trivial that our galaxy should oscillate about the median relationships in Fig.~\ref{fig:KS}, and signifies a proper accounting of the relevant physics governing gas and star forming properties in our galaxy. This result is consistent with galactic evolution simulations run at high resolution with a detailed accounting of stellar feedback physics \citep[see][and references therein]{NaabOstriker2017} and with the demonstration by \citet{Li2005} that the Kennicutt-Schmidt law can be reproduced by gravitation acting on an isothermal disk. The net result of the complex interactions of heating, cooling, chemistry, and feedback physics on star formation is an offset to a level not too dissimilar from that of simpler simulations considering gravity alone.
\subsubsection{Molecular Gas Content}
\label{sec:molecular gas content}
The molecular gas content of low-mass dwarf galaxies is generally assumed to be small, but is not well constrained by either theory or observations. Assuming the relationship in \citet{Leroy2013} and \citet{Momose2013}, \citet{Roychowdhury2014} finds typical molecular gas mass fractions can be anywhere from $f_{\rm H_2} = 0.05$ to $f_{\rm H_2} = 0.5$, a significant range of values. As discussed in Section~\ref{sec:properties}, our galaxy has $f_{\rm H_2}$ of 0.001 -- 0.05, just overlapping with this range. H$_2$ formation in our model is possible through formation on dust, the three-body interaction, or the gas-phase reaction H$^-$~+~H~$\rightarrow$~H$_2$~+~e$^{-}$. The gas-phase reaction dominates in our low-metallicity galaxy over the other two channels by several orders of magnitude. In our model, H$^{-}$ is produced solely through the reaction H~+~e$^{-} \rightarrow$ H$^{-}$~+~$\gamma$. Thus the presence of some ionizing background is required to generate the molecular fractions we find in our simulations, as confirmed in separate test simulations.
In contrast, \citet{Hu2016} and \citet{Hu2017} find low molecular fractions ($f_{\rm H_2} \sim 10^{-4}$) even in their simulations without any feedback ($f_{\rm H_2} \sim 2 \times 10^{-3}$). However, although these works do contain a non-equilibrium chemical model, they do not include either H$^{-}$ or a background radiation field. Our model suggests that both H$^{-}$ and the UV background are critical components in H$_2$ formation in small dwarf galaxies. Indeed it is not surprising that the gas phase reactions dominate over grain catalysis at low metallicity \citep{Glover2003}, though it is worth noting that the rate coefficients associated with gas-phase H$_2$ formation are still uncertain by an order of magnitude \citep{Glover2006,Glover2007}. Our model does lack additional chemistry that may be important to the formation and destruction of H$_2$, including HD chemistry, C and O chemistry, a detailed dust model, and cosmic rays.
In addition, our model does not account for the stellar component of the radiation field that leads to H$^{-}$ photodetachment ($E_\gamma > 0.76$~eV), though we do account for the contribution from the UVB. We find that the H$_2$ fraction is more strongly dependent upon the Lyman-Werner radiation field, which is followed for each star, than the H$^{-}$ photodetachment rate. Our tests suggest that, by ignoring the stellar contribution to H$^{-}$ photodetachment, our results may represent upper limits on the H$_2$ mass. However, including this component will likely only make a substantial difference during periods of no star formation, when there are no massive stars with significant Lyman-Werner luminosities. We do not anticipate this to have a significant dynamical impact on our simulations, as is discussed in more detail in Appendix~\ref{appendix:H minus}.
It is unclear how the combination of all of the above effects will behave, especially considering that, even at this resolution, we are unable to resolve the high density turbulent density perturbations in which H$_2$ forms most efficiently \citep{Glover2007}. These uncertainties certainly warrant further study of molecular gas in low metallicity dwarf galaxies.
\subsubsection{Metal Retention}
\label{sec:obs_metals}
The simulations presented here have not yet been run for the gigayear timescales required to begin to make direct comparisons to the observed stellar and gas phase metallicities of comparable dwarf galaxies at $z = 0$. However, we can compare to a key observable: the retention fraction of metals within stars and the galaxy's ISM compared to what would be expected from closed-box stellar evolution models given the galaxy's star formation history. This can be done readily with Milky Way dSphs. Their stars seem to retain very little of the expected metal production: on the order of a few percent or less, depending on the galaxy and the species \citep{Kirby2011-metals}. However, environmental effects, namely ram pressure and tidal stripping, complicate the understanding of how these metals were removed from the galaxy.
Leo P, the dwarf galaxy we approximate in our initial conditions, is extremely valuable as a gas-rich, star forming, low-mass dwarf galaxy, with an observable HII region, necessary for determining gas phase abundances, that is close enough to the Milky Way to conduct this experiment. Leo P retains $\sim$ 5\% $\pm$ 2\% of its metals, $\sim$ 1\% in stars and the rest in ISM gas \citep{McQuinn2015}. As discussed in Section~\ref{sec:chemical evolution}, more than 90\% of the tracked metals produced during our simulation no longer reside within the galaxy's disk, agreeing with observations. However, this is an evolving quantity that also depends on how much (if any) subsequent re-accretion of these metals occurs. Although more than half of these metals are expected to eventually re-accrete \citep{Christensen2016,Christensen2018,Angles-Alcazar2017}, it is still possible that most of the metals that have been produced at these early times will remain outside the galaxy disk.
It is interesting to consider whether ejected metals in our model reside in the CGM or have been ejected into the intergalactic medium. While this cannot be observationally determined for dwarf galaxies at this mass, cosmological simulations of an $M_{\rm vir} = 10^{10}$ M$_{\odot}$ galaxy show that, by redshift zero, 40\% of the metals have been ejected from the galaxy's halo \citep{Angles-Alcazar2017}. We find 5.3\% of all metals lie within 1 kpc of the center of our galaxy, 24.5\% within 0.25 $R_{\rm vir}$ (or 6.85 kpc), and 51\% outside $R_{\rm vir}$. This is consistent with previous works, but, again, this is an evolving quantity. In addition, the amount of gas that escapes the virial radius is certainly sensitive to the details of gas accretion from the IGM onto this galaxy, which we cannot capture in this model.
The re-accretion or final ejection of this gas is directly relevant to the chemical evolution of low mass dwarf galaxies. Recycling of metal enriched gas could be a significant driver of long-term chemical evolution in low mass galaxies, particularly if a majority of metals ejected from the disk (itself nearly all the metals produced by the galaxy) return. In addition, the accretion of pristine gas from the intergalactic medium could significantly affect the gas flows around the galaxy, possibly promoting the retention of ejected metals. This effect is not included in our isolated galaxy simulations, and its role is beyond the scope of this work.
\subsection{Missing Physics}
Although we include many detailed physical models in our simulations, there remain additional physical processes that may be relevant, which we now discuss.
\subsubsection{Massive Stellar Wind Energy}
\label{sec:stellar winds discussion}
Our massive stellar wind model drastically reduces the injected wind velocity from $\sim$1000~km~s$^{-1}$ to 20~km~s$^{-1}$. Although our algorithm is entirely capable of generating realistic stellar winds with velocities comparable to those observed, such fast winds place a near constant and severe constraint on the Courant time step that renders $\gtrsim$100~Myr simulations impractical. When considered in isolation, stellar winds are an important source of pre-SN feedback and can dramatically influence dynamical evolution on molecular cloud and galaxy scales \citep{Dale2008,Peters2017,Gatto2017}. However, when considered together with ionizing radiation, stellar winds contain less total energy \citep{Agertz2013} and do not seem to have a significant dynamical influence in either idealized simulations \citep{Geen2015} or individual giant molecular clouds \citep{Dale2014}, unless densities near the ionizing source are high enough to trap the HII region in the source cell. In that case, they can clear out a cavity to allow initial establishment of the HII region.
They are even less relevant in the low-metallicity regime studied here, as stellar winds become weaker with decreasing metallicity \citep{Puls2000, Vink2005}. Although they likely have minimal dynamical importance at resolutions where peak densities are in any case not high enough to trap ionization fronts, a full model of stellar winds may affect detailed ISM properties and metal mixing, warranting closer examination in future work.
\subsubsection{Cosmic Rays}
\label{sec:CRs}
Recent work has explored the importance of cosmic ray feedback in regulating the ISM and wind properties in galactic disks \citep{Hanasz2013,GirichidisCR,Simpson2016,Farber2018}, isolated galaxies \citep{SalemBryanCorlies,Salem2015,Pakmor2016,Ruszkowski2017}, and galaxies in cosmological context \citep{SalemBryanHummels}. These relativistic charged particles act as a source of non-thermal pressure support in the galaxy's ISM, capable of driving outflows at different velocities and containing different thermal phases than those driven through thermal feedback alone \citep{SalemBryanCorlies}. Modeling cosmic rays is challenging, however, as they encompass a wide range of energies, and there are significant uncertainties in how they propagate through the ISM \citep[e.g.][]{Wiener2017}. Their propagation is often modeled as a diffusive process, but in truth this diffusion should vary depending on cosmic ray energy. In addition, cosmic rays couple effectively to the magnetic fields of galaxies, diffusing preferentially along structured magnetic field lines within the ISM. Modeling cosmic ray feedback completely thus requires both an accurate cosmic ray model and magnetohydrodynamics in order to capture the interplay of these two physical phenomena. Finally, including MHD presents additional difficulties in untangling the effects of each individual feedback mechanism on galaxy chemodynamics.
We do note that an isotropic, two-fluid model for cosmic ray feedback exists in \textsc{Enzo} \citep{SalemBryan2014,Salem2015} and has been well tested. Mechanically, including this relatively simple treatment of cosmic ray feedback in our model is trivial. However, the cosmic ray population, their diffusion coefficient, and the magnetic field structure of the lowest mass dwarf galaxies each have significant enough uncertainties to warrant reserving their full inclusion into our model to later work.
\subsection{Detailed Stellar Evolution and Binary Stars}
\label{sec:binary stars}
Roughly half of massive stars live in binary pairs \citep{Sana2013}. Their interactions, primarily through mass transfer, can significantly alter their radiation properties and lifetimes. This can change both how much and how long these stars emit ionizing radiation, an important source of stellar feedback, and where and when these stars explode as SNe. This effect could be significant, but is rarely accounted for in galaxy evolution models, which are commonly based on calculations of single star evolution (e.g. STARBURST99). For example, \citet{Zapartas2017} finds that binarity extends the timescales over which core collapse SNe occur from a given star formation event, from a maximum time of $\sim$ 50~Myr to $\sim$~200~Myr. Although they find only $\sim 15\%$ of core collapse SNe explode after 50~Myr, this could still be an important effect. Properly accounting for the delay times due to variations in individual star lifetimes has already been shown to change the significance of feedback and influence galaxy metallicity properties \citep{Kimm2015}. Extending the lifetimes of these stars combined with accounting for binary effects that change the luminosities of these stars \citep{Gotberg2017,Gotberg2018} could increase the importance of radiation feedback; however this may be less important as these additional photons are more likely to escape the galaxy \citep[e.g.][]{Ma2016-binary}.
Since we model stars on a star-by-star basis, both of these effects could be reasonably accounted for by stochastically assigning binary star properties to some subset of our individual stars. This is beyond the scope of this project, but will be investigated in future work.
\section{Conclusion}
\label{sec:conclusion}
We have developed a new method for simulating galaxy evolution with detailed feedback and chemical enrichment. For the first time on galaxy scales, we simultaneously model multi-channel stellar feedback in detail, using individual star particles to model core collapse and Type Ia SNe, ionizing radiation followed through radiative transfer, photoelectric heating, Lyman-Werner radiation and pollution from AGB and massive stellar winds. This treatment of feedback, coupled with the detailed chemistry and heating/cooling physics followed with \textsc{Grackle}, allows us to capture realistic galaxy evolution in detail. In this work, we apply these methods to simulate the evolution of an isolated, low-mass, dwarf galaxy modeled after the $z=0$ properties of the Leo P dwarf galaxy. We present an overview of the properties of this simulation in this work.
For our simulated dwarf galaxy, we find:
\begin{enumerate}
\item Multi-channel feedback is effective in regulating star formation to a rate consistent with the Kennicutt-Schmidt relationship and the extended Schmidt law in observed galaxies. (See Figs.~\ref{fig:sfr_mass_evolution} and \ref{fig:KS}).
\item This feedback drives large outflows with mass loading factors of $\eta \sim 50$ at 0.25~R$_{\rm vir}$, falling to $\eta \sim 10$ at R$_{\rm vir}$, and metal mass loading factors near unity. By mass, nearly all of this outflow is moving with velocities below 100~km~s$^{-1}$, but there is a significant tail towards velocities up to 1000~km~s$^{-1}$ (See Fig.~\ref{fig:outflow_velocity}).
\item
Only $\sim$4\% of metals are retained in the disk of our simulated galaxy, consistent with the observed metal retention fractions of low-mass dwarfs. By the end of the simulation, $\sim$45\% of the ejected metals remain within the virial radius (but outside the galaxy), while $\sim$50\%
have been ejected beyond the virial radius. (See Figs.~\ref{fig:mass_outflow}, \ref{fig:outflow_velocity}, and \ref{fig:metal_evolution}.)
\item
Beyond the stellar radius, the gas scale height is thin ($\sim 50$~pc), yet resolved, with larger scale heights ($\sim 100$--200~pc) driven by feedback interior to the stellar radius. This is comparable to the resolution limit of the diffuse HI in observed, gaseous low-mass dwarfs. At a spatial resolution of 100~pc, our galaxy has a peak HI column density $N_{\rm HI} = 2.8-4.3 \times 10^{20}$~cm$^{-2}$, depending on inclination (See Fig.~\ref{fig:scale_height}).
\item The ISRF of our galaxy varies strongly in both space and time by orders of magnitude. It is unclear how important these fluctuations are as a source of feedback, or if the effect can be approximated with a time-averaged radial profile. The importance of radiation feedback in our model is investigated in more detail in \cite{Emerick2018b}. (See Figs.~\ref{fig:ISRF} and \ref{fig:ISRF_2D}.)
\item
We find H$_2$ fractions below 5\% in our dwarf galaxy, consistent with the poor constraints on molecular gas formation in low metallicity dwarf galaxies. This H$_2$ forms entirely through gas-phase reactions facilitated by H$^{-}$ in self-shielding regions; H$_2$ formation on dust grains and through the three-body reaction are both insignificant. Cold, neutral hydrogen dominates the mass of our galaxy. While warm, neutral hydrogen is present, it does not dominate the mass fraction. (See Fig.~\ref{fig:sfr_mass_evolution} and Fig.~\ref{fig:ISM_evolution}.)
\item Finally, we present gas-phase oxygen and nitrogen distributions as examples to briefly demonstrate that there are marked differences in how individual metal species are distributed within the ISM of our galaxy. These variations could be tied to differences in elemental yields among different sources (for example, AGB winds vs. SNe), as suggested by \cite{KrumholzTing2018}. This is explored in more detail in \citet{Emerick2018c}.
\end{enumerate}
\section*{Acknowledgments:}
We would like to thank the following for their advice and valuable discussions, without which this work would not have been possible: B. C\^ot\'e, S. Glover, K. Hawkins, C. Hu, K. Johnston, B. O'Shea, M. Putman, B. Smith, J. Wall, and J. Wise. We would additionally like to thank the referee for their careful report. A.E. is funded by the NSF Graduate Research Fellowship DGE 16-44869. G.L.B. is funded by NSF AST-1312888, NASA NNX15AB20G, and NSF AST-1615955. M.-M.M.L. was partly funded by NASA grant NNX14AP27G. We gratefully recognize computational resources provided by NSF XSEDE through grant number TGMCA99S024, the NASA High-End Computing Program through the NASA Advanced Supercomputing Division at Ames Research Center, Columbia University, and the Flatiron Institute. This work made significant use of many open source software packages, including \textsc{yt}, \textsc{Enzo}, \textsc{Grackle}, \textsc{Python}, \textsc{IPython}, \textsc{NumPy}, \textsc{SciPy}, \textsc{Matplotlib}, \textsc{HDF5}, \textsc{h5py}, \textsc{Astropy}, \textsc{Cloudy} and \textsc{deepdish}. These are products of collaborative effort by many independent developers from numerous institutions around the world. Their commitment to open science has helped make this work possible.
\bibliographystyle{mnras}
\section{Introduction}
Aggregation and analysis of news related to business fields of a pharma company is a cumbersome process. Users read and process individual websites, partially aggregated news feeds and items collected from various sources, as well as edited digests, commercial feeds and newsletters received by email. There are also no tools for creating and searching categorized content or for querying and retrieving content from sources defined ad hoc. Typically, users have access to different pieces of information published in various formats on distinct media platforms. Preparation of digests for sharing with other users requires manual extraction, adaptation, consolidation and grouping of articles. This is a time-consuming approach with results varying on a case-by-case basis, which leads to the demand for an improved process.
To reduce quality variations and manual overhead, an automated system has been put in place to aggregate the wide spectrum of sources. The platform ensures that data are harmonized and processed into a standardized format, indexed with both internal and external terminologies and presented in a unified feed. In order to take into account individual user interests when sorting items in the presentation layer, we employ a deep learning solution.
We considered regular expressions (RE) as an alternative. Furthermore, we used RE to bootstrap the training dataset. However, articles retrieved by manually constructed expressions suffered heavily from both false positives and false negatives. For example, when searching for "clinical phases" one may use a RE like \texttt{(C$|$clinical$\backslash$s)? ((phases?$|$stages?)$\backslash$s)([1-4IV/]\{1,3\})} to identify the relevant articles. During curation, we found false positives, in particular for stage-related strings (\texttt{[1-4IV]\{1,3\}}) used with disease stages (i.e. in the wrong context), and false negatives for less commonly used expressions, like early/late or first/second phase/stage. Boolean and RE queries are prone to these types of mismatches and omissions, whereas DL approaches can build awareness of certain concepts and account for ambiguities.
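These failure modes can be reproduced with a short sketch; the pattern is the one quoted above, while the example sentences are hypothetical illustrations:

```python
import re

# The clinical-phase pattern quoted above, compiled verbatim.
pattern = re.compile(r"(C|clinical\s)?((phases?|stages?)\s)([1-4IV/]{1,3})")

# True positive: the kind of sentence the query is meant to find.
hit = pattern.search("The drug entered clinical phase 3 trials.")

# False positive: a disease-staging phrase matches the same pattern.
fp = pattern.search("Patients with stage IV melanoma were enrolled.")

# False negative: a paraphrase that the pattern misses entirely.
fn = pattern.search("The compound completed its second phase of testing.")
```

A classifier that sees the surrounding context can separate trial phases from disease stages, which no fixed pattern captures.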
Based on a questionnaire, users inform the system of the level of interest they have in the given topics. Each of the topics is predefined by a set of curated articles. Consequently, a natural language processing (NLP) model is trained to differentiate between categories of news selected and rated by the user. This allows to pivot, filter and sort articles based on the interest scores. A presentation enriched in relevant items minimizes the time to find the required information, thereby meeting the user's needs faster and more accurately.
This article describes our approach to developing an optimal production model for automated pharma news category assignment.
\section{Dataset preparation}
The dataset consists of news articles grouped in 23 categories. The definitions of most were derived from interviews with stakeholders belonging to the involved working groups. Definitions consist of inclusion and exclusion criteria (e.g. exclusion of veterinary topics to keep the focus on human medicines).
The articles (i.e. title plus body) of the training dataset were always assigned to a single category using the main aspect of the article. We collected only the directly accessible parts of articles, i.e. not following full text links. Therefore, articles consisting of only the title or the title and truncated content were included. The article lengths show a bimodal distribution: they vary between 22 and 32852 characters (4 and 5338 words), with an average of 1912.40 characters (289.76 words) and a median of 380.00 characters (59.00 words). We identified peaks around 280 and 3150 characters (40 and 380 words, respectively). Very long articles (i.e. longer than half the maximum length) are rare (5 instances) in the training set.
The identification of matching articles and the categorization process were initiated using a set of regular expressions matching topic-specific keywords and synonyms.
For the sake of brevity, the full description of categories is provided in table \ref{tab:catdesc} in the Appendix. A single category may cover more than one topic (e.g. JobBizLaw). These categories may be split in the future to achieve more granularity if required by the users.
On a high level, 21 out of the 23 categories can be grouped into 3 major areas - "Clinical Trials" (Appr/Withd/Sub, C1, C1/C2, C2, C2/C3, C3, C4 \& L-M), "Research" (Chem, Conf, DIA, DigiBioMLDev, MoA, PHC, PhDev, RWD) and "Business/Regulatory" (JobBizLaw, M\&AA, OTC, Patents, Pricing/Costs, PV-Reg). The remaining two categories are broader. The category "Other" contains all relevant Pharma topics (mainly biology topics) not included in any other classification, whereas the "OffTopic" category contains a range of examples of non-Pharma-related news which are included in relevant news streams but covering broader or more generic scientific interests (e.g. astronomy).
The "Clinical Trials" categories refer to the events of beginning and end of different clinical drug development phases, as well as submissions to, approvals and withdrawals of approval by the health authorities. The "Research" categories contain news items treating recent developments in the respective research areas. Finally, the "Business/Regulatory" categories track articles related to business and law with different aspects of emphasis.
Detailed statistics of the dataset are presented in table \ref{tab:dsstats} in the Appendix, including the number of examples in each category, the distribution of stopwords and non-stopwords in each category and totals for the entire dataset.
Our approach to determine the number of articles per category was guided by a) defining the minimum number of articles per category as approximately 200, b) the number available in a pool consisting of recent articles, i.e. not older than one year and c) strictly avoiding strong imbalance between the categories, i.e. striving for a ratio lower than 1:4~\citep{krawczyk2016learning}. We achieved a balance ratio of 1:3.65 or better for the listed categories. In each round of curation, we split the dataset between two curators, followed by two rounds of cross-validation on large samples.
\section{Architectures}
The majority of current state of the art NLP Deep Learning architectures can be traced back to the original Transformer~\citep{vaswani2017attention} model. The Transformer follows an encoder-decoder architecture. The encoder maps an input sequence of symbol representations to a sequence of continuous representations. The decoder then generates an output sequence of symbols one element at a time. The encoder is composed of a stack of N identical layers, each composed of a multi-head self-attention mechanism, and a position-wise fully connected network. A residual connection passes around each of the above components, followed by layer normalization. The decoder is similarly composed of N identical layers. Its layers feature a third component, which performs multi-head attention over the output of the encoder. Like in the encoder, residual connections are placed around each of the components, followed by layer normalization.
NLP transformer models are frequently used in a two-step fashion. We distinguish between model pre-training and fine-tuning. Pre-training is typically performed on a very large corpus and uses the model's native task (e.g. missing token reconstruction) and loss function. Fine-tuning uses a downstream, usually smaller dataset to build a simple estimator (e.g. a classifier) while leveraging the knowledge accumulated in the transformer during the training on its native task. To this end, the architecture is modified by providing a classifier head and loss function. In this work, we consider only the fine-tuning of pre-trained NLP models in order to perform the Pharma news classification task.
We use the NLP architecture implementations from the Transformers~\citep{wolf2020transformers} package. The software manages as well the retrieval of pre-trained model weights from the free and public repository at \url{https://huggingface.co/models}.
We evaluate fine-tuning of multiple pre-trained models from the following families: BERT~\citep{devlin2018bert}, BART~\citep{lewis2019bart}, GPT~\citep{radford2019language}, XLnet~\citep{yang2019xlnet}, XLM~\citep{lample2019cross}, RoBERTa~\citep{liu2019roberta}, DistilBERT~\citep{sanh2019distilbert}, AlBERT~\citep{lan2019albert}, XLM-RoBERTa~\citep{conneau2019unsupervised}, Flaubert~\citep{le2019flaubert}, Electra~\citep{clark2020electra} and Longformer~\citep{beltagy2020longformer}.
\section{Methods}
For each pretrained model we use the corresponding tokenizer to generate embeddings for the summaries of the articles (i.e. title+body). Due to Graphics Processing Unit (GPU) memory constraints, and to maintain a reasonable batch size, we truncate the length of the embeddings to 128 elements, as opposed to the default, usually 512~\citep{korotkova2020exploration}. We pad the embeddings shorter than 128 elements with model-specific padding tokens up to the desired length. This setup allows us to use 32 examples per batch per GPU. We run all the procedures on 2 GPUs by means of the pytorch-lightning~\citep{falcon2019pytorch} package.
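The truncation and padding step can be sketched as follows. This is a minimal stand-in for the model-specific tokenizers; the pad token id of 0 is an assumption (the real id varies per model):

```python
MAX_LEN = 128  # truncated from the usual default of 512 to fit GPU memory
PAD_ID = 0     # placeholder pad token id; the real id is model-specific

def pad_or_truncate(token_ids, max_len=MAX_LEN, pad_id=PAD_ID):
    """Clip sequences longer than max_len and right-pad shorter ones."""
    ids = token_ids[:max_len]
    return ids + [pad_id] * (max_len - len(ids))
```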
%
We use train-validation-test splits of the dataset with ratios .5~:~.25~:~.25 respectively and a fixed random seed, so that the splits are identical across different pretrained models.
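A deterministic split with these ratios can be sketched as below; the seed value is illustrative, the point being that any fixed seed yields identical splits across all pretrained models:

```python
import random

def split_dataset(examples, seed=42):
    """Shuffle deterministically, then split 50/25/25 into train/val/test."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_train = int(0.5 * len(shuffled))
    n_val = int(0.25 * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```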
%
The pretrained model weights are loaded from the \url{https://huggingface.co/models} repository. The pretrained classifier head is subsequently replaced with an untrained one, consisting of 23 outputs corresponding to the number of labels in our dataset.
%
We use the AdamW~\citep{loshchilov2017decoupled} optimizer for all models, with a fixed learning rate of 2e-5.
%
The performance of the models is evaluated on the test partition of the dataset using the accuracy metric, as well as F1, precision and recall with micro-averaging. In addition, we log the confusion matrices for debugging purposes.
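Note that for a single-label multi-class task such as ours, micro-averaged precision, recall and F1 all coincide with accuracy, since every misclassification counts as exactly one false positive (for the predicted class) and one false negative (for the true class). A minimal sketch:

```python
def micro_metrics(y_true, y_pred):
    """Micro-averaged precision/recall/F1 pooled over all classes."""
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = fn = len(y_true) - tp  # each error is one FP and one FN
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```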
%
The gradient for optimization is provided by model-specific loss functions. We train for a maximum of 50 epochs with an early stopping condition of 5 epochs of no improvement in the validation F1 score.
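The stopping rule can be sketched as a pure function over the per-epoch validation scores; this is a simplification of the callback-based mechanism we actually use:

```python
def train_with_early_stopping(val_f1_per_epoch, max_epochs=50, patience=5):
    """Return (best_epoch, last_epoch_run): training stops after `patience`
    consecutive epochs without improvement in validation F1."""
    best_f1, best_epoch, stale = float("-inf"), -1, 0
    for epoch, f1 in enumerate(val_f1_per_epoch[:max_epochs]):
        if f1 > best_f1:
            best_f1, best_epoch, stale = f1, epoch, 0
        else:
            stale += 1
            if stale >= patience:
                return best_epoch, epoch
    return best_epoch, min(len(val_f1_per_epoch), max_epochs) - 1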
%
Model checkpoints are saved at every epoch. We use the model weights that resulted in the best validation F1 score in order to perform the measures on the test split.
%
In order to assess the prediction certainty of the model on correctly and incorrectly classified examples, we perform a Monte Carlo dropout~\citep{gal2016dropout} procedure on the top-performing model.
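The Monte Carlo dropout estimate amounts to averaging several stochastic forward passes with dropout left active at inference time; a framework-agnostic sketch (the real implementation calls the transformer with dropout layers enabled):

```python
def mc_dropout_predict(stochastic_forward, n_samples=30):
    """Average n_samples stochastic passes into a predictive distribution;
    the argmax gives the label, its mean probability the confidence."""
    sums = None
    for _ in range(n_samples):
        probs = stochastic_forward()  # one forward pass with dropout active
        if sums is None:
            sums = [0.0] * len(probs)
        for i, p in enumerate(probs):
            sums[i] += p
    mean = [s / n_samples for s in sums]
    label = max(range(len(mean)), key=mean.__getitem__)
    return label, mean[label]
```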
%
To visually assess the level of separation between categories in the latent space, we compute and visualize a t-SNE~\citep{hinton2002stochastic} embedding of the transformed [CLS] token representations.
%
We use the top 6 best-performing individual predictor models in order to build an ensemble classifier~\citep{opitz1999popular,polikar2006ensemble,rokach2010ensemble}. The ensemble algorithm computes the histogram of individual predictions and returns the most frequently predicted label. In case of a draw, a random label among the most-voted ones is returned. We repeat the training of all the 6 top-performing individual models 10 times (with different initial weights but the same train/validation/testing splits) and compute individual F1 scores, as well as ensemble F1 scores on the test dataset.
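The voting rule can be sketched as follows (the label names in the test are illustrative):

```python
import random
from collections import Counter

def ensemble_vote(predictions, rng=random):
    """Return the most frequently predicted label; break ties at random
    among the most-voted labels."""
    counts = Counter(predictions)
    top = max(counts.values())
    winners = [label for label, n in counts.items() if n == top]
    return winners[0] if len(winners) == 1 else rng.choice(winners)
```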
\section{Results}
All individual predictor models finished training early, having reached the stopping condition of no improvement in the validation F1 score over 5 consecutive epochs. The average duration of the training was 16.34 epochs with a standard deviation of 8.86 epochs.
%
We present the accuracy, F1, precision and recall metrics of the top 10 models on the test partition of the dataset in table \ref{tab:metrshort}. Full summary can be found in table \ref{tab:metrics} in the Appendix. In order to pick the top performing models we arbitrarily focus on the F1 metric as a good summary of precision and recall. The top 6 models are in order of decreasing test F1 - \textit{roberta-large}, \textit{roberta-base}, \textit{distilbert-base-uncased}, \textit{facebook/mbart-large-en-ro}, \textit{facebook/bart-large} and \textit{xlnet-large-cased}.
%
We show individual F1 metrics on the test partition for all models and all categories in table \ref{tab:percat} in the Appendix.
The Monte Carlo dropout~\citep{gal2016dropout} results on the best-performing model are presented in figure \ref{fig:results} B. The prediction probabilities for incorrectly predicted instances are distributed almost uniformly between 0 and 1, whereas the probabilities for correctly predicted instances are close to 1.0 in the majority of cases. This result is expected and desired, and would likely allow an even higher enrichment of articles of interest to be achieved when using a prediction confidence threshold.
The visualization of t-SNE~\citep{hinton2002stochastic} embeddings of the transformed [CLS] token representations from the fine-tuned \textit{roberta-large} model are presented in figure \ref{fig:results} A. Visual inspection confirms the formation of clearly separated clusters for each category. The clusters contain some problematic instances that are labeled differently than the cluster majority. This is in accord with the observed confusion matrix (figure \ref{fig:results} C), e.g. the misclassifications between M\&AA and JobBizLaw or C1 and C2.
Overall, the top individual models have good performance and behave correctly in terms of prediction probability and the visual inspection of their embeddings.
%
The results of the ensemble model created from the top 6 best-performing individual predictors are presented in table \ref{tab:ensemble}. As expected, the average F1 score is slightly higher than that of any individual model.
\begin{table}
\centering
\caption{Summary metrics for the top-10 models.}
\label{tab:metrshort}
\begin{tabular}{ l c c c c c }
\toprule
\textbf{Pretrained Model} & \textbf{Accuracy} & \textbf{F1} & \textbf{Loss} & \textbf{Precision} & \textbf{Recall} \\
\midrule
roberta-large & 0.57 & 0.56 & 1.4 & 0.56 & 0.57 \\
roberta-base & 0.56 & 0.55 & 1.5 & 0.55 & 0.56 \\
distilbert-base-uncased & 0.55 & 0.54 & 1.5 & 0.55 & 0.55 \\
facebook/mbart-large-en-ro & 0.54 & 0.54 & 1.7 & 0.55 & 0.54 \\
facebook/bart-large & 0.55 & 0.54 & 1.5 & 0.55 & 0.55 \\
xlnet-large-cased & 0.55 & 0.54 & 1.7 & 0.56 & 0.55 \\
bert-large-uncased & 0.54 & 0.53 & 1.5 & 0.54 & 0.54 \\
facebook/bart-base & 0.54 & 0.53 & 1.6 & 0.53 & 0.54 \\
facebook/bart-large-xsum & 0.54 & 0.53 & 1.5 & 0.54 & 0.54 \\
xlm-mlm-en-2048 & 0.54 & 0.53 & 1.7 & 0.55 & 0.54 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\node[inner sep=0](conf) at (6, 0) { \includegraphics[width=.25\textwidth]{confusion_matrix_3.png} };
\node[inner sep=0](swarm) at (6, 3.25) { \includegraphics[width=.25\textwidth]{label_proba_swarmplot.png} };
\node[inner sep=0](tsne) at (0, .1\textwidth) { \includegraphics[width=.5\textwidth,height=.4\textwidth]{tsne_roberta.png} };
\node[inner sep=0] at (-4cm,4.3cm) { A };
\node[inner sep=0] at (8.2cm,4.3cm) { B };
\node[inner sep=0] at (8.2cm,1.4cm) { C };
\end{tikzpicture}
\caption{A. T-SNE embedding of the last hidden state of [CLS] token in the Roberta network; B. Prediction probability distribution in incorrectly vs. correctly predicted instances; C. Normalized confusion matrix of the best model - \textit{roberta-large} - on the test split of the dataset. To obtain the normalized version, confusion matrix row values are divided by the number of examples in the respective categories.}
\label{fig:results}
\end{figure}
\begin{table}
\centering
\caption{Results of the Ensemble model testing. Test F1 scores of the individual predictors and the Ensemble model are presented, averaged over 10 runs with different initial weights but the same train/validation/test splits. M1, roberta-large; M2, roberta-base; M3, distilbert-base-uncased; M4, facebook/mbart-large-en-ro; M5, facebook/bart-large; M6, xlnet-large-cased.}
\label{tab:ensemble}
\begin{tabular}{lrrrrrrr}
\toprule
{} & \textbf{M1} & \textbf{M2} & \textbf{M3} & \textbf{M4} & \textbf{M5} & \textbf{M6} & \textbf{Ensemble} \\
\midrule
\textbf{F1} & 0.56 & 0.55 & 0.54 & 0.54 & 0.54 & 0.54 & 0.58 \\
\bottomrule
\end{tabular}
\end{table}
\section{Discussion}
Development of well-balanced and curated training sets is a key prerequisite for performing categorization in a consistent and correct manner. In this study, we did not investigate the effects of the bimodal distribution of the number of characters/words on inference performance. One potential avenue for future research is to examine whether the DL algorithm obtains better results when operating on subsets of articles grouped by length.
In our current setup, the top individual models perform well and behave correctly in terms of prediction probability and the visual inspection of their embeddings. The ensemble model performs better than any of the individual models, with a test F1 score 0.02 higher than that of the best performing individual model. We are satisfied with this level of performance and have successfully deployed this setup in a production environment. Nevertheless, many interesting opportunities for improvement exist.
We must consider that despite our best efforts some labels in the training dataset might have been attributed incorrectly by us. On inspection of a sample, we found a few articles which were assigned to different categories by the respective curators because of ambiguities in the texts or insufficient precision of the curation rules. Another source of labeling flaws derives from articles being inherently noisy due to the overlap between certain pairs of categories. Therefore, exploring techniques such as bootstrapping~\citep{reed2014training} or loss correction~\citep{arazo2019unsupervised} might be one of the future avenues to improve our model metrics. The observation of a bimodal distribution in the collected articles needs further analysis, as the semantic difference between the two groups seems to be that one contains single-topic and the other multi-topic articles. A multi-label approach at the level of articles or paragraphs/sentences could be another alternative to address the ambiguity. In particular, longer articles (i.e. more than 300-400 words, according to our observations) are at risk of mislabeling because they usually summarize several aspects of a story (e.g. event reports or articles on mergers and acquisitions) and provide a historical background or discuss future plans and prospects.
Curation of the dataset by two curators showed both its advantages and limitations. A slight improvement could be obtained by comparing the results from corpora created independently and/or using a crowd-sourced approach, with additional cross-checked labeling, thereby further reducing potential bias caused by the curators and optimizing the future curation process. However, this approach would be much more time- and resource-consuming.
Furthermore, the slight improvement offered by the ensemble model does not necessarily justify the need to maintain 6 transformer models instead of just one. It seems reasonable that a single architecture might exist that would perform on the level of the Ensemble classifier. To this end, employing the neural architecture search approach described in~\citep{zhu2020autotrans} or more generally~\citep{pham2018efficient} could be a very interesting alternative to the ensemble approach.
Lastly, we are interested in reformulating the problem in a few-shot learning setup, where the user provides a few examples of what is within the scope of their interest instead of formally declaring a category. Some of the most successful approaches in this area make use of siamese networks~\citep{yan2018few,chicco2021siamese,reimers2019sentence,sentencetransformers}. In our scenario, we can train a model to determine for any pair of abstracts whether they belong to the same category or not. Subsequently, new articles can be evaluated in this manner against the examples provided by the user, thereby determining whether they belong to a category of interest to the user without formalizing what the category is.
\section{Summary}
We prepared a curated dataset of news articles grouped in 23 categories relevant to pharma. Subsequently, we used it to evaluate a large cross-section of fine-tuned state-of-the-art NLP transformer models in the context of a classification task. We identified the best performing architectures and pretrained weight sets and validated their behavior in terms of prediction certainty and the visual quality of their embeddings. We constructed an ensemble model consisting of the top 6 of such individual predictors and demonstrated that it outperforms the best individual model by 0.02 F1 score on the test partition of the dataset. We identified three major areas for future research: addressing the label ambiguity problem, neural architecture search, and few-shot learning. The first two aim respectively at improving the model metrics and reducing the ensemble model back to a single architecture. The third reformulates the problem as an arbitrary category differentiation task, eliminating altogether the need for formal category definitions.
\bibliographystyle{unsrtnat}
\section{Introduction}
The AdS/CFT correspondence~\cite{Maldacena:1997re} between
string states on Anti--de Sitter (AdS) space-time and conformal field theories (CFT) in physical space-time has
brought a new perspective for the study of the dynamics of strongly coupled quantum field theories and has led
to new analytical insights into the confining dynamics of QCD, which is difficult to realize using other methods.
In practice, the duality provides an effective gravity description in a ($d+1$)-dimensional AdS
space-time in terms of a flat $d$-dimensional conformally-invariant quantum field theory defined at the AdS
asymptotic boundary.~\cite{Gubser:1998bc} Thus, in principle, one can compute physical observables
in a strongly coupled gauge theory in terms of a classical gravity theory.
The original correspondence~\cite{Maldacena:1997re} is the duality between
${\cal N}=4$ supersymmetric SU$(N_C)$ Yang-Mills theory (SYM) and
the supergravity approximation to Type IIB string theory on AdS$_5 \times S^5$ space.
QCD is fundamentally different from SYM theories, where all the matter fields transform in adjoint multiplets of SU$(N_C)$. Unlike SYM theories, where conformal invariance implies that the coupling does not run with energy, scale invariance in QCD is broken by quantum effects.
The most general group of transformations that leave the AdS metric
\begin{equation} \label{AdSz}
ds^2 = \frac{R^2}{z^2} \left(\eta_{\mu \nu} dx^\mu dx^\nu - dz^2\right),
\end{equation}
invariant, the isometry group, has dimensions $(d+1)(d+2)/2$. Thus, for $d=4$, five dimensional anti-de Sitter space AdS$_5$ has 15 isometries, in agreement with the number of generators of the conformal group in four dimensions. The metric (\ref{AdSz}) is invariant under the transformation $x \to \lambda x$, $z \to \lambda z$. The variable $z$ is thus like a scaling variable in Minkowski space: different values of $z$ correspond to different energy scales at which the hadron is examined.
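Both statements above are easy to verify mechanically. The following sketch (not part of the original text; variable names are illustrative) checks the dilation invariance of the line element (\ref{AdSz}) for a single Minkowski direction, and the isometry count $(d+1)(d+2)/2$ for $d=4$:

```python
import sympy as sp

R, z, lam, dx, dz = sp.symbols('R z lam dx dz', positive=True)
# line element of eq. (AdSz), keeping a single Minkowski direction x
ds2 = R**2 / z**2 * (dx**2 - dz**2)
# dilation x -> lam x, z -> lam z, so dx -> lam dx and dz -> lam dz
ds2_scaled = ds2.subs([(z, lam * z), (dx, lam * dx), (dz, lam * dz)])
assert sp.simplify(ds2_scaled - ds2) == 0   # the metric is invariant

# isometry group of AdS_{d+1} has (d+1)(d+2)/2 generators
d = 4
assert (d + 1) * (d + 2) // 2 == 15         # matches the 4D conformal group
```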
A gravity dual of QCD is not known, and it has proven difficult to extend the gauge/gravity duality beyond theories which are to a great extent constrained
by their symmetries. We shall follow here a simplified approach, which is limited to the study of the propagation of hadronic modes in a fixed effective gravitational background
which encodes salient properties of the QCD dual theory, such
as the ultraviolet conformal limit at the AdS boundary at $z \to 0$, as well as modifications of the background geometry in the
large $z$ infrared region from confinement. The introduction of an infrared cutoff at a finite value $z_0 \sim \Lambda_{\rm QCD}$ is a simple way to get confinement and discrete
normalizable modes. Thus the ``hard-wall'' at $z_0$ breaks conformal invariance and allows the introduction of the QCD scale and a spectrum of particle states.~\cite{Polchinski:2001tt} As first shown by Polchinski and Strassler,~\cite{Polchinski:2001tt} the AdS/CFT duality, modified
to incorporate a mass scale, provides a derivation of dimensional counting
rules~\cite{Brodsky:1973kr, Matveev:ra} for the leading
power-law fall-off of hard scattering beyond the perturbative regime.
The modified theory generates the hard behavior expected from QCD, instead of the soft
behavior characteristic of strings.
In the usual AdS/QCD approach~\cite{Erlich:2005qh, DaRold:2005zs} bulk fields are introduced to match the
$SU(2)_L \times SU(2)_R$ chiral symmetries of QCD and its spontaneous breaking, but without explicit connection with the internal constituent structure of
hadrons.~\cite{Brodsky:2003px} Instead, axial and vector currents become the
primary entities as in effective chiral theory.
The conformal metric of AdS space can be modified within the gauge/gravity framework with the introduction of a dilaton field to reproduce the observed linear Regge behavior in the hadronic spectrum.~\cite{Karch:2006pv} The additional warp factor in the metric, or, equivalently, the introduction of a dilaton
background $\varphi(z)$ introduces an energy scale in the five-dimensional Lagrangian, thus breaking the conformal invariance.
A particularly interesting case is a dilaton profile $\exp{\left(\pm \kappa^2 z^2\right)}$ of either sign, the ``soft-wall'', since it
leads to linear Regge trajectories~\cite{Karch:2006pv} and avoids the ambiguities in the choice of boundary conditions at the infrared wall.
Light-front quantization is the ideal framework to describe the
structure of hadrons in terms of their quark and gluon degrees of
freedom. The simple structure of the light-front (LF) vacuum allows an unambiguous
definition of the partonic content of a hadron in QCD and of hadronic light-front wavefunctions (LFWFs)
which relate its quark
and gluon degrees of freedom to their asymptotic hadronic state. The LFWFs of relativistic bound states in QCD provide a description of the structure and internal dynamics of hadronic states in terms of their constituent quark and gluons at the same LF time $\tau = x^0 + x^3$, the time marked by the
front of a light wave,~\cite{Dirac:1949cp} instead of the ordinary instant time $t = x^0$. The constituent spin and orbital angular momentum properties of the hadrons are also encoded in the LFWFs. Unlike instant time quantization, the Hamiltonian equation of motion in the light-front is frame independent and has a structure similar to the eigenmode equations in AdS space.
This makes a direct connection of QCD with AdS/CFT methods possible. The identification of orbital angular momentum of the constituents is a key element in our description of the internal structure of hadrons using holographic principles,
since hadrons with the same quark content but different orbital angular momentum have different masses.
A physical hadron in four-dimensional Minkowski space has four-momentum $P_\mu$ and invariant
hadronic mass states determined by the light-front
Lorentz-invariant Hamiltonian equation for the relativistic bound-state system
$P_\mu P^\mu \vert \psi(P) \rangle = M^2 \vert \psi(P) \rangle$,
where the operator $P_\mu P^\mu$ is determined canonically from the QCD Lagrangian.
On AdS space the physical states are
represented by normalizable modes $\Phi_P(x, z) = e^{-iP \cdot x} \Phi(z)$,
with plane waves along Minkowski coordinates $x^\mu$ and a profile function $\Phi(z)$
along the holographic coordinate $z$. The hadronic invariant mass
$P_\mu P^\mu = M^2$ is found by solving the eigenvalue problem for the
AdS wave equation. Each light-front hadronic state $\vert \psi(P) \rangle$ is dual to a normalizable string mode $\Phi_P(x,z)$.
For fields near the AdS boundary the behavior of $\Phi(z)$
depends on the scaling dimension of corresponding interpolating operators.
We have shown recently a remarkable
connection between the description of hadronic modes in AdS space and
the Hamiltonian formulation of QCD in physical space-time quantized
on the light-front at equal light-front time $\tau$.~\cite{deTeramond:2008ht} Indeed, one may take the LF bound state Hamiltonian equation of motion in QCD as a starting point to derive relativistic wave equations in terms of an invariant transverse variable $\zeta$, which measures the
separation of the quark and gluonic constituents within the hadron
at the same LF time. The result is a single-variable light-front relativistic
Schr\"odinger equation, which is
equivalent to the equations of motion which describe the propagation of spin-$J$ modes in a fixed gravitational background asymptotic to AdS space. Its eigenvalues give the hadronic spectrum and its eigenmodes represent the probability distribution of the hadronic constituents at a given scale. Remarkably, the AdS equations correspond to the kinetic energy terms of the partons inside a hadron, whereas the interaction terms build confinement and
correspond to the truncation of AdS space in an effective dual gravity approximation.~\cite{deTeramond:2008ht}
Light-front holographic mapping was originally obtained
by matching the expression for electromagnetic current matrix
elements in AdS space with the corresponding expression for the
current matrix element using light-front theory in physical space
time.~\cite{Brodsky:2006uqa, Brodsky:2007hb} More recently we have shown that one
obtains the identical holographic mapping using the matrix elements
of the energy-momentum tensor.~\cite{Brodsky:2008pf}
\section{A Semiclassical Approximation to QCD\label{LFholog}}
We start with the QCD light-front Hamiltonian equation for a relativistic bound state
$\vert \psi \rangle$ \begin{equation} \label{LFH}
P_\mu P^\mu \vert \psi(P) \rangle = M^2 \vert \psi(P) \rangle,
\end{equation}
where $M^2$ is the invariant hadron mass
and $P_\mu P^\mu = P^- P^+ - \mbf{P}_\perp^2$.
We can compute $M^2$ from the hadronic matrix element
\begin{equation} \label{eq:Matrix}
\langle \psi(P') \vert P_\mu P^\mu \vert\psi(P) \rangle =
M^2 \langle \psi(P' ) \vert\psi(P) \rangle,
\end{equation}
expanding the initial and final hadronic states in terms of their Fock basis of non-interacting components: $\vert \psi \rangle = \sum_n \psi_n \vert n \rangle$.
The matrix element can then be
expressed as a sum of overlap integrals with diagonal elements for the non-interacting terms
in the LF Hamiltonian. We find~\cite{deTeramond:2008ht}
\begin{equation} \label{Mbj}
M^2 = \sum_n \prod_{j=1}^{n-1} \int d x_j \, d^2 \mbf{b}_{\perp j} \,
\psi_n^*(x_j, \mbf{b}_{\perp j})
\sum_q \left(\frac{ \mbf{- \nabla}_{ \mbf{b}_{\perp q}}^2 + m_q^2 }{x_q} \right)
\psi_n(x_j, \mbf{b}_{\perp j})
+ ({\rm interactions}) ,
\end{equation}
where the light-front wave functions $\psi$ depend only on the $n \! - \! 1 $ independent relative partonic coordinates,
the longitudinal momentum fraction $x_i = k_i^+/P^+$, the transverse impact variables $\mbf{b}_{\perp i}$
(canonical conjugate to the transverse momentum $\mbf{k}_{\perp i}$) and $\lambda_i$, the
projection of the constituent's spin along the $z$ direction. Momentum conservation requires
$\sum_{i=1}^n x_i =1$ and $\sum_{i=1}^n \mbf{b}_{\perp i}=0$.
The normalization is defined by
\begin{equation} \label{eq:LFWFbnorm}
\sum_n \prod_{j=1}^{n-1} \int d x_j d^2 \mbf{b}_{\perp j}
\left\vert \psi_n(x_j, \mbf{b}_{\perp j})\right\vert^2 = 1.
\end{equation}
To simplify the discussion we will consider a two-parton bound state. In the limit
$m_q \to 0$
\begin{equation} \label{Mb}
M^2 = \int_0^1 \! \frac{d x}{x(1-x)} \int \! d^2 \mbf{b}_\perp \,
\psi^*(x, \mbf{b}_\perp)
\left( - \mbf{\nabla}_{\mbf{b}_ \perp}^2 \right)
\psi(x, \mbf{b}_\perp) + ({\rm interactions}).
\end{equation}
To identify the key variable in (\ref{Mbj}) we notice that the functional dependence for a given Fock state is given in terms of its off-mass shell energy
$M^2 \! - M_n^2$, where $M_n^2 = \left( \sum_{i=1}^n k_i^\mu\right)^2$. For $n=2$, $M_{n=2}^2 = \frac{\mbf{k}_\perp^2}{x(1-x)}$.
Similarly, in impact space the relevant variable for a two-parton state is $\zeta^2= x(1-x)\mbf{b}_\perp^2$.
As a result, to first approximation LF dynamics depend only on the boost invariant variable
$M_n$ or $\zeta,$
and hadronic properties are encoded in the hadronic mode $\phi(\zeta)$
from the relation,
\begin{equation} \label{eq:psiphi}
\psi(x,\zeta, \varphi) = e^{i L \varphi} X(x) \frac{\phi(\zeta)}{\sqrt{2 \pi \zeta}} ,
\end{equation}
thus factoring out the angular dependence $\varphi$ and the longitudinal, $X(x)$, and transverse mode $\phi(\zeta)$.
We choose the normalization of the LF mode $\phi(\zeta) = \langle \zeta \vert \phi \rangle$ as $\langle \phi \vert \phi \rangle =
\int d\zeta \, \vert \langle \zeta \vert \phi \rangle \vert^2 = 1$.
We can write the Laplacian operator in (\ref{Mb}) in circular cylindrical coordinates
$(\zeta, \varphi)$ with $ \zeta = \sqrt{x(1-x)} \vert \mbf{b}_\perp \vert$:
$\nabla_\zeta^2 = \frac{1}{\zeta} \frac{d}{d\zeta} \left( \zeta \frac{d}{d\zeta} \right)
+ \frac{1}{\zeta^2} \frac{\partial^2}{\partial \varphi^2}$,
and factor out the angular dependence of the
modes in terms of the $SO(2)$ Casimir representation $L^2$ of orbital angular momentum in the
transverse plane.
Using (\ref{eq:psiphi}) we find~\cite{deTeramond:2008ht}
\begin{equation} \label{eq:KV} \nonumber
M^2 = \int \! d\zeta \, \phi^*(\zeta) \sqrt{\zeta}
\left( -\frac{d^2}{d\zeta^2} -\frac{1}{\zeta} \frac{d}{d\zeta}
+ \frac{L^2}{\zeta^2}\right)
\frac{\phi(\zeta)}{\sqrt{\zeta}}
+ \int \! d\zeta \, \phi^*(\zeta) U(\zeta) \phi(\zeta),
\end{equation}
where $L = L^z$. In writing the above equation we have subsumed the complexity of the interaction terms of the QCD Lagrangian in the effective
potential $U(\zeta)$, which is then modeled to enforce confinement at some IR scale.
The light-front eigenvalue equation $P_\mu P^\mu \vert \phi \rangle = M^2 \vert \phi \rangle$
is thus a light-front wave equation for $\phi$
\begin{equation} \label{LFWE}
\left(-\frac{d^2}{d\zeta^2}
- \frac{1 - 4L^2}{4\zeta^2} + U(\zeta) \right)
\phi(\zeta) = M^2 \phi(\zeta),
\end{equation}
an effective single-variable light-front Schr\"odinger equation which is
relativistic, covariant and analytically tractable. Its eigenmodes $\phi(\zeta) = \langle \zeta \vert \phi \rangle$
determine the hadronic mass spectrum and represent the probability
amplitude to find $n$-partons at transverse impact separation $\zeta$,
the invariant separation between pointlike constituents within the hadron~\cite{Brodsky:2006uqa} at equal
light-front time. Extension of the results to arbitrary $n$ follows from the $x$-weighted definition of the
transverse impact variable of the $n-1$ spectator system:~\cite{Brodsky:2006uqa}
\begin{equation} \label{zeta}
\zeta = \sqrt{\frac{x}{1-x}} ~ \Big\vert \sum_{j=1}^{n-1} x_j \mbf{b}_{\perp j} \Big\vert ,
\end{equation}
where $x = x_n$ is the longitudinal
momentum fraction of the active quark. One can also
generalize the equations to allow for the kinetic energy of massive
quarks using (\ref{Mbj}). In this case, however,
the longitudinal mode $X(x)$ does not decouple from the effective LF bound-state equations.
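The cluster decomposition built into (\ref{zeta}) is simple to exercise numerically. The sketch below (not from the original; the helper name \texttt{zeta} and the sample kinematics are invented) checks that for $n=2$, where the single spectator carries $x_1 = 1-x$, the definition reduces to $\zeta = \sqrt{x(1-x)}\,\vert\mbf{b}_\perp\vert$:

```python
import numpy as np

def zeta(x_active, xs, bs):
    """Invariant transverse variable of eq. (zeta): x-weighted transverse
    separation between the active quark and its n-1 spectators."""
    b_weighted = sum(xj * bj for xj, bj in zip(xs, bs))
    return np.sqrt(x_active / (1.0 - x_active)) * np.linalg.norm(b_weighted)

# n = 2: one spectator with x_1 = 1 - x, so zeta = sqrt(x(1-x)) |b_perp|
x, b = 0.3, np.array([0.8, -0.4])
assert np.isclose(zeta(x, [1.0 - x], [b]),
                  np.sqrt(x * (1.0 - x)) * np.linalg.norm(b))
```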
\section{Higher Spin Hadronic Modes in AdS Space}
The description of higher spin modes in AdS space is a notoriously difficult problem.~\cite{Fronsdal:1978vb, Fradkin:1986qy}
A spin-$J$ field in AdS$_{d+1}$ is represented by a rank $J$ tensor field $\Phi(x^A)_{M_1 \cdots M_J}$, which is totally symmetric in all its indices. Such a tensor contains also lower spins, which can be eliminated by imposing gauge conditions. The action for a spin-$J$ field in AdS$_{d+1}$ space-time in the presence of a dilaton background field $\varphi(z)$ is given by
\begin{eqnarray} \label{SJ} \nonumber
S = {\frac{1}{2}} \int \! d^d x \, dz \,\sqrt{g} \,e^{\varphi(z)}
\Big( g^{N N'} g^{M_1 M'_1} \cdots g^{M_J M'_J} D_N \Phi_{M_1 \cdots M_J} D_{N'} \Phi_{M'_1 \cdots M'_J} \\
- \mu^2 g^{M_1 M'_1} \cdots g^{M_J M'_J} \Phi_{M_1 \cdots M_J} \Phi_{M'_1 \cdots M'_J} + \cdots \Big) ,
\end{eqnarray}
where $D_M$ is the covariant derivative which includes parallel transport
\begin{equation} \label{Dco}
[D_N, D_K] \Phi_{M_1 \cdots M_J} = - R^L_{\, M_1 N K} \Phi_{L \cdots M_J} - \cdots - R^L_{\, M_J N K} \Phi_{M_1 \cdots L},
\end{equation}
and the omitted terms refer to
terms with different contractions. Conformal invariance in (\ref{SJ}) is broken by $\varphi(z)$ which is a function of the holographic coordinate $z$ and vanishes
in the conformal limit $z \to 0$. The coordinates of AdS are the Minkowski coordinates $x^\mu$ and the holographic variable $z$ labeled $x^M = \left(x^\mu, z\right)$.
A physical hadron has plane-wave solutions and polarization indices $\mu_i$, $i = 1 \cdots J$, along the 3 + 1 physical coordinates
$\Phi_P(x,z)_{\mu_1 \cdots \mu_J} = e^{- i P \cdot x} \Phi(z)_{\mu_1 \cdots \mu_J}$,
with four-momentum $P_\mu$ and invariant hadronic mass $P_\mu P^\mu \! = \! M^2$. All other components vanish identically:
$\Phi_{z \mu_2 \cdots \mu_J} = \cdots = \Phi_{\mu_ 1 \mu_2 \cdots z} = 0$. One can then construct an effective action in terms
of high spin modes $\Phi_J = \Phi_{\mu_1 \mu_2 \cdots \mu_J}$, with only the physical degrees of freedom.~\cite{Karch:2006pv, BDDE:2010xx} In this case the system of coupled differential equations which follow from (\ref{SJ}) reduces to a homogeneous equation in terms of the physical field $\Phi_J$.
We retain only physical modes $\Phi_{\mu_1 \mu_2 \cdots \mu_J}$, and start with the scalar wave equation which follows from the variation of (\ref{SJ}) for $J = 0$. This case is particularly simple as the covariant derivative of a scalar field is the usual derivative. We obtain the eigenvalue equation
\begin{equation} \label{WeS}
\left[-\frac{ z^{d-1}}{e^{\varphi(z)}} \partial_z \left(\frac{e^{\varphi(z)}}{z^{d-1}} \partial_z\right)
+ \left(\frac{\mu R}{z}\right)^2\right] \Phi = M^2 \Phi.
\end{equation}
A physical spin-$J$ mode $\Phi_{\mu_1 \cdots \mu_J}$ with all indices
along 3+1 is then constructed by shifting dimensions
$\Phi_J(z) = ( z/R)^{-J} \Phi(z)$. Its normalization is given by
\begin{equation} \label{Phinorm}
R^{d - 1 - 2 J} \int_0^{\infty} \! \frac{dz}{z^{d -1 - 2 J}} \, e^{\varphi(z)} \Phi_J^2 (z) = 1.
\end{equation}
The shifted field $\Phi_{\mu_1 \mu_2 \cdots \mu_J}$ obeys the wave equation~\cite{deTeramond:2008ht, deTeramond:2010ge}
\begin{equation} \label{WeJ}
\left[-\frac{ z^{d-1 -2 J}}{e^{\varphi(z)}} \partial_z \left(\frac{e^{\varphi(z)}}{z^{d-1 - 2 J}} \partial_z\right)
+ \left(\frac{\mu R}{z}\right)^2\right] \Phi_{\mu_1 \mu_2 \cdots \mu_J} = M^2 \Phi_{\mu_1 \mu_2 \cdots \mu_J},
\end{equation}
which follows from (\ref{WeS})
upon mass rescaling $(\mu R)^2 \to (\mu R)^2 - J(d-J)$ and $M^2 \to M^2 - J z^{-1} \partial_z \varphi$.
For $J=1$ our results are identical with the wave equation for a massive AdS vector field in the presence of a dilaton background.
\section{Light-Front Holographic Mapping and Hadronic Spectrum}
The structure of the QCD Hamiltonian equation (\ref{LFH}) is similar to the structure of the AdS wave equation (\ref{WeJ}); they are both frame-independent and have identical eigenvalues $M^2$, the mass spectrum of the color-singlet states of QCD, a possible indication of a more profound connection between physical QCD and the physics of hadronic modes in AdS space. However, important differences are also apparent: Eq. (\ref{LFH}) is a linear quantum-mechanical equation of states in Hilbert space, whereas Eq. (\ref{WeJ}) is a classical gravity equation; its solutions describe spin-$J$ modes propagating in a higher dimensional
warped space. Physical hadrons are composite and thus inexorably endowed with orbital angular momentum. The identification
of orbital angular momentum is of primary interest in finding a connection between both approaches.
As shown in the Sect. \ref{LFholog}, one can indeed systematically reduce the LF Hamiltonian eigenvalue Eq. (\ref{LFH}) to an effective relativistic wave equation, analogous to the AdS equations, by observing that each $n$-particle Fock state has an essential dependence on the invariant mass of the system and
thus, to a first approximation, LF dynamics depend only on $M_n^2$.
In impact space the relevant variable is the boost invariant variable $\zeta$ (\ref{zeta})
which measures the separation of the constituents and which also allows one to separate the dynamics
of quark and gluon binding from the kinematics of the constituent
internal angular momentum.
Upon the substitution $z \! \to\! \zeta$ and
$\phi_J(\zeta) = \left(\zeta/R\right)^{-3/2 + J} e^{\varphi(z)/2} \, \Phi_J(\zeta)$,
in (\ref{WeJ}), we find for $d=4$ the QCD light-front wave equation (\ref{LFWE}) with effective potential~\cite{deTeramond:2010ge}
\begin{equation} \label{U}
U(\zeta) = {\frac{1}{2}} \varphi''(z) +\frac{1}{4} \varphi'(z)^2 + \frac{2J - 3}{2 z} \varphi'(z) .
\end{equation}
The fifth dimensional mass $\mu$ is not a free parameter but scales as $(\mu R)^2 = - (2-J)^2 + L^2$.
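The algebra leading from (\ref{WeJ}) to (\ref{LFWE}) with the potential (\ref{U}) can be verified symbolically. The sketch below (a consistency check, not part of the original derivation) sets $R=1$, picks the concrete values $J=2$, $L=1$ and the soft-wall dilaton $\varphi = \kappa^2 z^2$, and confirms that the substitution $\Phi_J = z^{3/2-J}\, e^{-\varphi/2}\, \phi_J$ turns the left-hand side of (\ref{WeJ}) into the light-front operator term by term:

```python
import sympy as sp

z, kappa = sp.symbols('z kappa', positive=True)
J, L = 2, 1                          # concrete spin and orbital quantum numbers
phi_dil = kappa**2 * z**2            # positive-sign soft-wall dilaton
f = sp.Function('f')(z)              # the LF mode phi_J, with z playing zeta
a = 3 - 2 * J                        # power of z in eq. (WeJ) for d = 4
mu2 = -(2 - J)**2 + L**2             # (mu R)^2 = -(2 - J)^2 + L^2, with R = 1
Phi = z**sp.Rational(3 - 2 * J, 2) * sp.exp(-phi_dil / 2) * f

# left-hand side of eq. (WeJ) acting on Phi, divided by the substitution factor
lhs = (-z**a * sp.exp(-phi_dil)
       * sp.diff(sp.exp(phi_dil) * z**(-a) * sp.diff(Phi, z), z)
       + mu2 / z**2 * Phi) / (Phi / f)

# expected LF form: -f'' - (1 - 4L^2)/(4 z^2) f + U(z) f, with U from eq. (U)
U = (sp.diff(phi_dil, z, 2) / 2 + sp.diff(phi_dil, z)**2 / 4
     + sp.Rational(2 * J - 3, 2) / z * sp.diff(phi_dil, z))
expected = -sp.diff(f, z, 2) - (1 - 4 * L**2) / (4 * z**2) * f + U * f
assert sp.simplify(sp.expand(lhs - expected)) == 0
```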
If $L^2 < 0$ the LF Hamiltonian is unbounded from below
$\langle \phi \vert H_{LF} \vert \phi \rangle <0$ and the spectrum contains an
infinite number of negative values of $M^2 $, which can be arbitrarily large.
The critical value $L=0$ corresponds to the lowest possible stable solution, the ground state of the light-front Hamiltonian.
For $J = 0$ the five dimensional mass $\mu$
is related to the orbital momentum of the hadronic bound state by
$(\mu R)^2 = - 4 + L^2$ and thus $(\mu R)^2 \ge - 4$. The quantum mechanical stability condition $L^2 \ge 0$ is thus equivalent to the
Breitenlohner-Freedman stability bound in AdS.~\cite{Breitenlohner:1982jf}
The scaling dimensions are $2 + L$ independent of $J$ in agreement with the
twist-scaling dimension of a two-parton bound state in QCD.
It is important to notice that in the light-front the $SO(2)$ Casimir for orbital angular momentum $L^2$
is a kinematical quantity, in contrast with the usual $SO(3)$ Casimir $L(L+1)$ from non-relativistic physics which is
rotational, but not boost invariant.
We consider here the positive-sign dilaton profile $\exp(+ \kappa^2 z^2)$ which confines the constituents to distances
$\langle z \rangle \sim 1/\kappa$.~\cite {deTeramond:2009xk, Andreev:2006ct}
From (\ref{U}) we obtain the effective potential~\cite{deTeramond:2009xk}
$U(\zeta) = \kappa^4 \zeta^2 + 2 \kappa^2(L + S - 1)$, where $J^z = L^z + S^z$, which corresponds to a transverse oscillator in the light-front.
Equation (\ref{LFWE}) has eigenfunctions
\begin{equation} \label{phi}
\phi_{n, L}(\zeta) = \kappa^{1+L} \sqrt{\frac{2 n!}{(n\!+\!L\!)!}} \, \zeta^{1/2+L}
e^{- \kappa^2 \zeta^2/2} L^L_n(\kappa^2 \zeta^2) ,
\end{equation}
and eigenvalues
\begin{equation} \label{M2}
M_{n, L, S}^2 = 4 \kappa^2 \left(n + L + \frac{S}{2} \right).
\end{equation}
The meson spectrum has a string-theory Regge form: the square of the masses are linear in both the internal orbital angular momentum $L$ and radial quantum number $n$, where $n$ counts the number of nodes of the wavefunction in the radial variable $\zeta$. The spectrum also depends on the internal spin S.
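The eigenfunctions (\ref{phi}) and eigenvalues (\ref{M2}) can be checked directly. The sketch below (not from the original; $\kappa = 0.54$ GeV is the value later used for the $\rho$ trajectory, and the function name is illustrative) verifies the normalization $\int d\zeta\,\phi_{n,L}^2 = 1$ and, by finite differences, that each mode solves (\ref{LFWE}) with $U(\zeta) = \kappa^4\zeta^2 + 2\kappa^2(L+S-1)$:

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre

def phi(zeta, n, L, kappa):
    """Soft-wall eigenfunction phi_{n,L}(zeta) of eq. (phi)."""
    norm = kappa**(1 + L) * math.sqrt(2 * math.factorial(n) / math.factorial(n + L))
    return (norm * zeta**(0.5 + L) * np.exp(-kappa**2 * zeta**2 / 2)
            * genlaguerre(n, L)(kappa**2 * zeta**2))

kappa, h = 0.54, 1e-3
for n, L, S in [(0, 0, 1), (1, 0, 1), (0, 2, 0)]:
    # normalization <phi|phi> = 1
    norm, _ = quad(lambda zt: phi(zt, n, L, kappa)**2, 0, np.inf)
    assert abs(norm - 1) < 1e-8
    # eq. (LFWE) with U = kappa^4 zeta^2 + 2 kappa^2 (L + S - 1), eigenvalue (M2)
    M2 = 4 * kappa**2 * (n + L + S / 2)
    for zt in (0.5, 1.0, 2.0):
        d2 = (phi(zt + h, n, L, kappa) - 2 * phi(zt, n, L, kappa)
              + phi(zt - h, n, L, kappa)) / h**2
        U = kappa**4 * zt**2 + 2 * kappa**2 * (L + S - 1)
        lhs = (-d2 - (1 - 4 * L**2) / (4 * zt**2) * phi(zt, n, L, kappa)
               + U * phi(zt, n, L, kappa))
        assert abs(lhs - M2 * phi(zt, n, L, kappa)) < 1e-5
```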
The lowest possible solution for $n = L = S = 0$ has eigenvalue $M^2 = 0$.
This is a chiral symmetric bound state of two massless quarks with scaling dimension 2 and size
$\langle \zeta^2 \rangle \sim 1/\kappa^2$, which we identify with the lowest state, the pion.
Thus one can compute the hadron spectrum by simply adding $4 \kappa^2$ for a unit change in the radial quantum number, $4 \kappa^2$ for a unit change in the orbital quantum number and $2 \kappa^2$ for a unit change of internal spin to the ground state value of $M^2$. Remarkably, the same rule holds for baryons.~\cite{deTeramond:2009xk} This is an important feature of light-front holography, which predicts the same multiplicity of states for mesons
and baryons as it is observed experimentally.~\cite{Klempt:2007cp}
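The additive rule above is a one-line computation. The sketch below (not part of the original; the $\kappa$ values are the trajectory fits quoted in the text) reproduces the massless pion, places the $S=1$, $n=L=0$ state at $M \simeq 764$ MeV, close to the $\rho(770)$, and checks the $4\kappa^2$ spacing in $M^2$:

```python
import math

def M(n, L, S, kappa):
    """Meson mass from eq. (M2): M^2 = 4 kappa^2 (n + L + S/2)."""
    return 2 * kappa * math.sqrt(n + L + S / 2)

kappa_pi, kappa_rho = 0.6, 0.54   # GeV, values quoted for the two trajectories
assert M(0, 0, 0, kappa_pi) == 0.0                 # massless ground-state pion
assert abs(M(0, 0, 1, kappa_rho) - 0.764) < 1e-3   # close to the rho(770)
# a unit step in n or L costs 4 kappa^2 in M^2
assert math.isclose(M(1, 0, 1, kappa_rho)**2 - M(0, 0, 1, kappa_rho)**2,
                    4 * kappa_rho**2)
assert math.isclose(M(0, 1, 1, kappa_rho)**2 - M(0, 0, 1, kappa_rho)**2,
                    4 * kappa_rho**2)
```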
The LFWFs (\ref{phi}) for different orbital and radial excitations are depicted in Fig. \ref{LFWFs}.
\begin{figure}[h]
\centering
\includegraphics[angle=0,width=4.6cm]{LFWF_L} \hspace{20pt}
\includegraphics[angle=0,width=4.7cm]{LFWF_n}
\caption{Light-front wavefunctions $\phi_{n,L}(\zeta)$ in physical space-time corresponding to a dilaton $\exp(\kappa^2 z^2)$: (a) orbital modes ($n=0$) and (b)
radial modes ($L=0$).}
\label{LFWFs}
\end{figure}
Individual hadron states are identified by their interpolating operators at $z\to 0.$ Pion interpolating operators are constructed by examining the behavior of
bilinear covariants $\bar \psi \Gamma \psi$ under charge conjugation and parity transformation.
Thus, for example, a pion interpolating operator $\bar q \gamma_5 q$ creates a state with quantum numbers $J^{PC} = 0^{- +}$, and a vector meson
interpolating operator $\bar q \gamma_\mu q$ a state $1^{- -}$. Likewise the operator $\bar q \gamma_\mu \gamma_5 q$ creates a state with
$1^{++}$ quantum numbers, the $a_1(1260)$ positive parity meson. If we include orbital excitations the pion interpolating operator is
$\mathcal{O}_{2+L} = \bar q \gamma_5 D_{\{\ell_1} \cdots D_{\ell_m\}} q$. This is an operator with total internal orbital
momentum $L = \sum_{i=1}^m \ell_i$, twist $\tau = 2 + L$ and canonical dimension $\Delta = 3 + L$. The scaling of the AdS field $\Phi(z) \sim z^\tau$ at $z \to 0$ is precisely the scaling required to match the scaling dimension of the local meson interpolating operators. The spectral predictions for light meson and vector meson states are compared with experimental data
in Fig. \ref{pionspec} for the positive sign dilaton model discussed here.
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.0cm]{8796A05.pdf} \hspace{10pt}
\includegraphics[width=7.0cm]{8796A01.pdf}
\caption{Parent and daughter Regge trajectories for (a) the $\pi$-meson family with
$\kappa= 0.6$ GeV; and (b) the $I\!=\!1$ $\rho$-meson
and $I\!=\!0$ $\omega$-meson families with $\kappa= 0.54$ GeV. Only confirmed PDG states~\cite{Amsler:2008xx} are shown.}
\label{pionspec}
\end{center}
\end{figure}
\section{Higher Fock Components in Light Front Holography}
The light-front holographic variable $\zeta$ (\ref{zeta}) is particularly useful in describing a multi-parton state, as it incorporates a cluster decomposition: one particle (the active quark) {\it vs.} the rest (the spectator system). Thus, for example, for a baryon the LF cluster decomposition is equivalent to a quark-diquark system, and this may explain why LF holography is successful in predicting the same multiplicity of meson and baryon states.~\cite{deTeramond:2009xk}
The LF Hamiltonian eigenvalue equation (\ref{LFH}) is a matrix in Fock space. Writing $P_\mu P^\mu \equiv H_{LF}$ as a sum of terms representing the kinetic energy of the partons $H^0_{LF}$ plus an interaction
potential $V$, $H_{LF} = H^0_{LF} + V$, we find upon expanding in Fock eigenstates of $H^0_{LF}$, $\vert \psi \rangle = \sum_n \psi_n \vert n \rangle$,
\begin{equation}
\left(M^2 - \sum_{i=1}^n \frac{\mbf{k}^2_{ \perp i} + m^2}{ x_i} \right) \psi _n =
\sum_m \langle n \vert V \vert m \rangle \psi _m ,
\end{equation}
which represents an infinite
number of coupled integral equations. In AdS/QCD the only interaction is the confinement potential. The resulting potential in quantum field theory is the four-point effective interaction
$H_I ={\overline \psi} \psi ~ V \!\left(\zeta^2\right) {\overline \psi }\psi$,
which leads to $q q \to q q$, $q \bar q \to q \bar q$, $q \to q q \bar q$ and $\bar q \to \bar q q \bar q$, thus creating
states with extra quark-antiquark pairs. In this approximation there is no mixing with the $q \bar q g$ Fock states from the interaction term $g_s {\overline \psi} \gamma \cdot A \psi$ in QCD. Since models based on AdS/QCD are particularly successful in the description of exclusive processes,~\cite{Brodsky:2010cq}
this may explain the dominance of quark interchange~\cite{Gunion:1972qi}
over quark annihilation or gluon exchange contributions in large angle elastic scattering.~\cite{Baller:1988tj}
To show the relevance of higher Fock states we discuss in the next section a simple semi-phenomenological model where we include the first two components in a Fock expansion of the pion wave function
$\vert \pi \rangle = \psi_{q \bar q /\pi} \vert q \bar q \rangle_{\tau=2}
+ \psi_{q \bar q q \bar q} \vert q \bar q q \bar q \rangle_{\tau=4} + \cdots$ ,
where the $J^{PC} = 0^{- +}$ twist-two and twist-4 states $\vert q \bar q \rangle$ and $\vert q \bar q q \bar q \rangle$ are created by the interpolating operators
$\bar q \gamma_5 q$ and $ \bar q \gamma_5 q \bar q q$ respectively.
\section{Space- and Time-Like Structure of the Pion Form Factor}
In the soft wall model the electromagnetic probe propagates in modified AdS metrics. As a result the current is dual to a dressed current, {\it i.e.}, a hadronic electromagnetic current including virtual $\bar q q$ pairs and thus confined. In this case, the bulk-to-boundary propagator $J(Q,z)$ has the integral representation~\cite{Grigoryan:2007my}
\begin{equation} \label{Jkappa}
J(Q,z) = \kappa^2 z^2 \int_0^1 \! \frac{dx}{(1-x)^2} \, x^{\frac{Q^2}{4 \kappa^2}}
e^{-\kappa^2 z^2 x/(1-x)}.
\end{equation}
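The integral representation (\ref{Jkappa}) can be checked numerically. With the substitution $u = x/(1-x)$ the endpoint singularity at $x \to 1$ disappears, and charge conservation, $J(Q\!=\!0, z) = 1$, follows at once. The sketch below is illustrative (function name and the $\kappa$ value are assumptions):

```python
import numpy as np
from scipy.integrate import quad

def J(Q, z, kappa=0.54):
    """Dressed bulk-to-boundary propagator of eq. (Jkappa),
    rewritten with u = x/(1 - x) to remove the x -> 1 singularity."""
    a = Q**2 / (4 * kappa**2)
    val, _ = quad(lambda u: (u / (1 + u))**a * np.exp(-kappa**2 * z**2 * u),
                  0, np.inf)
    return kappa**2 * z**2 * val

for z in (0.5, 1.0, 3.0):
    assert abs(J(0.0, z) - 1.0) < 1e-8   # charge conservation J(0, z) = 1
assert J(2.0, 1.0) < J(1.0, 1.0) < 1.0   # the dressed current falls with Q^2
```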
The form factor corresponding to (\ref{Jkappa}) for a state with twist $\tau = N$ is expressed as a product of $N - 1$ poles, corresponding to the first $N-1$ states along the vector meson radial trajectory~\cite{Brodsky:2007hb}
\begin{equation} \label{FF}
F(Q^2) = \frac{1}{\Big(1 + \frac{Q^2}{\mathcal{M}^2_\rho} \Big)
\Big(1 + \frac{Q^2}{\mathcal{M}^2_{\rho'}} \Big) \cdots
\Big(1 + \frac{Q^2}{\mathcal{M}^2_{\rho^{N-2}}} \Big)}.
\end{equation}
For a pion, for example, the lowest Fock state -- the valence state -- is a twist-2 state, and thus the form factor is the well-known monopole form.~\cite{Brodsky:2007hb} Since the charge form factor is a diagonal operator, the final expression for the form factor corresponding to the truncation up to twist four is the sum of two terms, a monopole and a three-pole term.
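For concreteness, the product formula (\ref{FF}) is easy to implement (a sketch, not from the original; the function name and $\kappa$ are assumptions), with the pole masses taken from the radial $\rho$ trajectory, $\mathcal{M}^2_{\rho^n} = 4\kappa^2(n + 1/2)$ from eq. (\ref{M2}):

```python
def F(Q2, tau, kappa=0.54):
    """Eq. (FF): form factor of a twist-tau state as a product of tau - 1
    poles along the rho radial trajectory, M^2_n = 4 kappa^2 (n + 1/2)."""
    val = 1.0
    for n in range(tau - 1):
        M2n = 4 * kappa**2 * (n + 0.5)
        val /= 1.0 + Q2 / M2n
    return val

assert F(0.0, 2) == 1.0 and F(0.0, 4) == 1.0       # charge normalization
Q2 = 1e6
# twist 2 is a monopole: Q^2 F(Q^2) -> M_rho^2 at large Q^2
assert abs(Q2 * F(Q2, 2) - 2 * 0.54**2) / (2 * 0.54**2) < 1e-3
# higher Fock components fall faster: F_{tau=4} < F_{tau=2} at Q^2 > 0
assert F(1.0, 4) < F(1.0, 2)
```

The time-like structure discussed below would follow from continuing each pole to complex $q^2$ with the width prescription quoted in the text.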
In the strongly coupled semiclassical gauge/gravity limit hadrons have zero widths and are stable. One can nonetheless modify the formula (\ref{FF}) to introduce a finite width:
$q^2 \to q^2 + 2 i \kappa \Gamma$. We choose the values $\Gamma_\rho = 130$ MeV, $\Gamma_{\rho'} = 400$ MeV and $\Gamma_{\rho''} = 300$ MeV. The results for the pion form factor with higher Fock states (twist two and four) are shown in Fig. \ref{pionFF}. The results correspond to $P_{q \bar q q \bar q} = 13\%$, the admixture of the
$\vert q \bar q q \bar q \rangle$ state. The value of $P_{q \bar q q \bar q}$ (and the widths) are input in the model. The value of $\kappa$ is determined from the $\rho$ mass and the masses of the radial excitations follow from (\ref{M2}). The time-like structure of the pion form factor displays a rich pole structure with constructive and destructive interferences.
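The twist-two plus twist-four structure described above can be sketched numerically. In the snippet below (an illustrative sketch, not the paper's computation), the radial trajectory $M_n^2 = 4\kappa^2(n+1/2)$ and the value $\kappa = 0.548$ GeV are our assumptions standing in for (\ref{M2}); the widths and the 13\% admixture are taken from the text.

```python
# Fix kappa from the rho mass: with the assumed trajectory, M_rho = sqrt(2)*kappa.
KAPPA = 0.548  # GeV (assumption)

def m2(n):
    """Radial trajectory M_n^2 = 4*kappa^2*(n + 1/2) -- our assumed soft-wall form."""
    return 4.0 * KAPPA**2 * (n + 0.5)

def twist_ff(Q2, tau, widths=None):
    """Eq. (FF): product of tau-1 poles. Finite widths enter through the
    substitution q^2 -> q^2 + 2*i*kappa*Gamma, i.e. Q2 -> Q2 - 2j*kappa*Gamma_n."""
    widths = widths if widths is not None else [0.0] * (tau - 1)
    f = 1.0 + 0.0j
    for n in range(tau - 1):
        f *= 1.0 / (1.0 + (Q2 - 2j * KAPPA * widths[n]) / m2(n))
    return f

def pion_ff(Q2, p4=0.13, widths=(0.130, 0.400, 0.300)):
    """Monopole (twist 2) plus three-pole (twist 4) terms, P_{qqqq} = 13%."""
    return (1 - p4) * twist_ff(Q2, 2, widths[:1]) + p4 * twist_ff(Q2, 4, widths)
```

With the widths set to zero, the twist-2 term reduces to the monopole of (\ref{FF}), while nonzero widths keep $|F|$ finite on the time-like poles.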
Conserved currents correspond to five-dimensional massless fields in AdS according to the relation
$(\mu R)^2 = (\Delta - p) (\Delta + p - 4)$ for a $p$-form in $d=4$. In the usual AdS/QCD framework~\cite{Erlich:2005qh, DaRold:2005zs} this corresponds to $\Delta = 3$ or 1, the canonical dimensions of
an EM current and a field strength respectively. Normally one uses a hadronic interpolating operator with minimum twist $\tau$ to identify a hadron and to predict the power-law fall-off behavior of its form factors and other hard
scattering amplitudes;~\cite{Polchinski:2001tt} {\it e.g.}, for a two-parton bound state $\tau = 2$. However, in the case of a current, one needs to use an effective field operator with dimension $\Delta = 3$. The apparent inconsistency between twist and dimension is removed by noticing that in the light-front one chooses to calculate the matrix element of the twist-3 plus component of the current $J^+$,~\cite{Brodsky:2006uqa, Brodsky:2007hb} in order to avoid coupling to Fock states with different numbers of constituents.
\begin{figure}[h]
\begin{center}
\includegraphics[width=6.45cm]{SLpionFF.pdf} \hspace{8pt}
\includegraphics[width=7.10cm]{TLpionFF.pdf}
\caption{Structure of the space- and time-like pion form factor in light-front holography for a truncation of the pion wave function up to twist four.
Triangles are the data compilation from Baldini {\it et al.},~\cite{Baldini:1998qn} red squares are JLAB 1~\cite{Tadevosyan:2007yd} and green squares are JLAB 2.~\cite{Horn:2006tm}}
\label{pionFF}
\end{center}
\end{figure}
\section{Conclusions}
Light-front holography provides a direct correspondence between an effective gravity theory defined in a fifth-dimensional warped space and a semiclassical approximation to strongly coupled QCD quantized on the light-front. This duality leads to a remarkable Lorentz-invariant relativistic Schr\"odinger-like equation~\cite{deTeramond:2008ht} which
provides a successful prediction for the light-quark meson and baryon spectra as
a function of hadron spin, quark angular momentum, and radial quantum numbers. It also predicts the same multiplicity of states for mesons
and baryons, which is observed experimentally.
We originally derived this correspondence using the identity between electromagnetic and gravitational form factors computed in AdS and light-front theory.~\cite{Brodsky:2006uqa,Brodsky:2007hb,Brodsky:2008pf} The results for hadronic form factors are also successful, and the predicted power law fall-off agrees with dimensional counting rules as required by conformal invariance at small $z$.~\cite{Brodsky:2007hb, Brodsky:2008pg} As in the Schr\"odinger equation, the semiclassical approximation to light-front QCD described in this paper does not account for particle creation and absorption; it is thus expected to break down at short distances
where hard gluon exchange and quantum corrections become important.
However, one can systematically improve the semiclassical approximation, for example by introducing nonzero quark masses and short-range Coulomb
corrections.~\cite{Branz:2010ub, Arriola:2010up} We have discussed the relevance of higher Fock states for describing the detailed structure of form factors. A simple model including
twist-two and twist-four Fock components for the pion wavefunction describes remarkably well the pole structure and the effects of constructive and destructive interferences in the time-like region.
The hadron eigenstate generally has components with different orbital angular momentum. For example, the proton eigenstate in light-front holography with massless quarks has $L=0$ and $L=1$ light-front Fock components with equal probability -- a novel manifestation of chiral invariance.~\cite{Brodsky:2010px}
Light-front holographic mapping of effective classical gravity in AdS space, modified by the positive-sign dilaton background, predicts the form of a non-perturbative effective coupling $\alpha_s(Q)$ and its $\beta$-function.~\cite{Brodsky:2010ur} The AdS running coupling is in very good agreement with the effective
coupling extracted from the Bjorken sum rule.~\cite{Deur:2008rf} The holographic $\beta$-function displays a transition from nonperturbative to perturbative regimes at a momentum scale $Q \sim 1$ GeV.
\vspace{20pt}
\noindent{\bf \large Acknowledgements}
\vspace{10pt}
We thank Alexander Deur, Josh Erlich and Hans Guenter Dosch for collaborations. GdT thanks the members of the High Energy Physics Group at Imperial College London for their hospitality. Invited talk presented by GdT at Light Cone 2010: Relativistic Hadronic and Particle Physics, 14-18 June 2010, Valencia, Spain. We are grateful to the organizers for their outstanding hospitality. This work was supported by Fondo de Incentivos CONICIT/MICIT, Costa Rica and by the Department of Energy contract DE--AC02--76SF00515. SLAC-PUB-14259.
\section{Introduction}
\label{sec:intro}
Ultrasound imaging is a popular medical imaging modality widely used in clinics. This modality is non-invasive, inexpensive and portable. However, in comparison to other modalities, such as computed tomography or magnetic resonance imaging, ultrasound images of human tissues are of relatively lower quality, making them more difficult for radiologists to interpret. Various computer-aided diagnosis (CAD) systems have been developed to help radiologists assess ultrasound images \cite{shiraishi2008computer, jalalian2013computer, flores2015improving}. Recently, we can observe an increasing interest in incorporating deep learning techniques into CAD systems \cite{yap2018automated,qi2019automated, jarosik2020breast}. Currently, the majority of the research is focused on developing solutions based on ultrasound B-mode images reconstructed from radio-frequency (RF) backscattered ultrasound signals. Due to the reconstruction, however, information about the RF signal's spectrum and phase is partially removed in order to make the output ultrasound image human-readable \cite{szabo2004diagnostic}. The lost frequency content may contain useful information about the examined structure.
Attenuation coefficient (AC) is one of the basic quantitative acoustic properties of human tissues. AC can be utilized for medical diagnosis, and has been used to differentiate liver \cite{lu1999ultrasound, kuc1980clinical} and breast \cite{d1986frequency} tissue pathologies. AC is commonly estimated based on the RF signal's spectrum, by tracking how the signal's frequency content changes with depth. For example, the mean frequency slope can be used to calculate the coefficient \cite{kuc1979estimating}. However, the accuracy of the established methods may be disturbed by several factors, including the impact of the transducer's characteristics and electrical noise.
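To illustrate the centroid-downshift idea behind such estimators: for a pulse with a Gaussian amplitude spectrum of standard deviation $\sigma$, attenuation proportional to frequency shifts the spectral centroid down by $a\,d\,\sigma^2$ after a propagation path $d$. The sketch below (our simplified, one-way, noise-free illustration; not one of the cited algorithms) recovers the attenuation slope from that shift.

```python
import numpy as np

def centroid(f, s):
    """Spectral centroid of the amplitude spectrum s(f)."""
    return np.sum(f * s) / np.sum(s)

def estimate_attenuation_slope(f, s_ref, s_deep, depth_cm, sigma_mhz):
    """Estimate the attenuation slope a (Np/(MHz*cm)) from the downshift of the
    spectral centroid between a reference and a deeper region.
    For a Gaussian amplitude spectrum, the centroid shifts by a*depth*sigma^2."""
    shift = centroid(f, s_ref) - centroid(f, s_deep)
    return shift / (depth_cm * sigma_mhz**2)

# Demo on synthetic spectra: f0 = 5 MHz, sigma = 1 MHz, true a = 0.08 Np/(MHz*cm)
f = np.linspace(0.0, 10.0, 4001)                 # MHz
f0, sigma, a_true, depth = 5.0, 1.0, 0.08, 4.0   # depth in cm
s0 = np.exp(-(f - f0) ** 2 / (2 * sigma ** 2))   # reference spectrum
s1 = s0 * np.exp(-a_true * depth * f)            # frequency-proportional attenuation
a_est = estimate_attenuation_slope(f, s0, s1, depth, sigma)
```

Multiplying the Gaussian by $e^{-a d f}$ leaves a Gaussian of the same width centered at $f_0 - a d \sigma^2$, so the slope is recovered exactly in this idealized setting.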
In this work, we experimentally investigate a deep learning based approach to the AC estimation. We use RF signals to train convolutional neural networks (CNNs) for the direct AC calculations. In our feasibility study, we verify model's performance depending on the amount of RF data provided to the CNN. We also visualize its internal representations to verify if any information related to the expected change in signal frequency content can be discovered.
Deep convolutional networks have already been successfully used for the processing of raw acoustic signals -- e.g. in automatic speech recognition \cite{sainath2015learning, sainath2015convolutional, golik2015convolutional, hoshen2015speech}. In particular, Sainath et al. showed that convolutional layers, when properly trained, can learn to perform spectral filtering of the input signal. Our paper extends prior work by verifying (1) whether the signal frequency content loss, specific for higher ultrasound attenuation, is truly represented in the neural network's weights and (2) how the CNN's performance depends on the amount of input data. The purpose of the first point is to increase the CNN's interpretability, which is of great importance in medical applications. The answer to the second point indicates the possible output resolution of the proposed method.
\section{Method}
\begin{figure}[t!]
\begin{minipage}[b]{1\linewidth}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/examples/ac01_sample.pdf}}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/examples/ac01_sample_spectrogram.pdf}}
\end{minipage}
\centerline{(a)}\medskip
\end{minipage}
\begin{minipage}[b]{1\linewidth}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/examples/ac07_sample.pdf}}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/examples/ac07_sample_spectrogram.pdf}}
\end{minipage}
\centerline{(b)}\medskip
\end{minipage}
\begin{minipage}[b]{1\linewidth}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/examples/ac15_sample.pdf}}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/examples/ac15_sample_spectrogram.pdf}}
\end{minipage}
\centerline{(c)}\medskip
\end{minipage}
\caption{Examples of 10 mm RF data chunks (waveform, spectrogram), which were used in our experiments. The data were acquired from a simulation of ultrasound scattering in a soft tissue with AC equal to (a)~0.1, (b)~0.7, (c)~1.5 dB/(MHz*cm). High-frequency components are more strongly suppressed by structures with a higher AC value.}
\label{fig:dataset}
\end{figure}
\subsection{Dataset}
We used Field-II software (created by Jensen et al. \cite{jensen1992calculation, jensen1996field}) to simulate ultrasound wave propagation and to generate 1024 RF lines for each attenuation level $\{0.1, 0.2, ..., 1.5\}$ dB/(MHz*cm), 15360 lines in total. A piston transducer with a diameter of $d_t = 12$ (mm) and center frequency $f_0 = 5$ (MHz) was used. The backscattered echo signal was digitized with an $f_s = 50$ (MHz) sampling rate. The ultrasound impulse propagated through a tissue-mimicking medium with a given attenuation level. The maximum depth of signal acquisition was equal to 50 (mm). The speed of sound was set to $c = 1540$ (m/s). A digitized RF scanline consisted of approximately 3400 samples.
We applied the sliding window technique to split RF lines into multiple smaller fragments (1-D patches). A rectangular window of length $k \in \{1, 2, 5, 10\}$ (mm) was used. Each patch was normalized by subtracting its mean and dividing by its standard deviation. Next, processed data were used to train deep learning models.
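The preprocessing described above might be implemented as follows (a sketch; the non-overlapping stride and the two-way depth-to-sample conversion $n = 2 z f_s / c$ are our assumptions).

```python
import numpy as np

FS = 50e6   # sampling rate (Hz)
C = 1540.0  # speed of sound (m/s)

def window_samples(k_mm):
    """Number of samples covering k_mm of imaging depth (two-way travel)."""
    return int(round(2 * (k_mm * 1e-3) * FS / C))

def extract_patches(rf_line, k_mm, stride=None):
    """Split one RF line into z-score-normalized 1-D patches of k_mm depth."""
    n = window_samples(k_mm)
    stride = stride or n  # non-overlapping windows by default (assumption)
    patches = []
    for start in range(0, len(rf_line) - n + 1, stride):
        p = rf_line[start:start + n].astype(float)
        p = (p - p.mean()) / p.std()  # per-patch normalization as in the text
        patches.append(p)
    return np.stack(patches)
```

For a 10 mm window this gives 649 samples per patch, consistent with a scanline of roughly 3400 samples covering 50 mm.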
\begin{figure}
\centering
\scalebox{.9}{
\begin{tikzpicture}[->,node distance=1cm, auto,]
\node[](env){};
\node[anchor=south east,inner sep=0pt] at ($(env.south east)-(-.5cm,.4cm)$) {RF};
\node[anchor=south east,inner sep=0pt] at ($(env.south east)-(-1.5cm,0cm)$) {
\includegraphics[width=.4\linewidth]{imgs/examples/ac15_sample_no_axis.pdf}
};
\node[node, tf, inner sep=5pt,below=.7cm of env, align=left, text width=9em](bf){
\textbf{1-D Conv. Layer}\\
filter size: 64\\
number of filters: 32\\
stride: 1
};
\node[node, tf, inner sep=5pt,below=.5cm of bf, align=left, text width=9em](as){
\textbf{Nonlinearity}\\
activation: ReLU
};
\node[node, tf, inner sep=5pt,below=.5cm of as, align=left, text width=9em](abs){\textbf{Average Pooling}\\
pooling size: 10\\
stride: 10
};
\node[node, wf, inner sep=5pt,below=.5cm of abs, text width=5em, align=left, text width=9em](drc){
\textbf{DNN}\\
number of FC layers: \\
\hspace{0.2cm} CNN-c: 1\\
\hspace{0.2cm} CNN-r: 2
};
\node[draw=none] (end) [below=.7cm of drc] {};
\path[every node/.style={transform shape, text centered}]
(env) edge[pil] node [right, pos=0.1] {} (bf)
(bf) edge[pil] node [above] {} (as)
(as) edge[pil] node [above] {} (abs)
(abs) edge[pil] node [above] {} (drc)
(drc) edge[pil] node [right] {AC} (end);
\end{tikzpicture}
}
\caption{Neural network architecture evaluated in this work. DNN block consists of multiple fully connected (FC) layers.}
\label{fig:nn}
\end{figure}
\subsection{Models and evaluation procedure}
\begin{figure*}[t!]
\begin{minipage}[b]{1\linewidth}
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/analysis/clf_1cmwaveform_7.pdf}}
\end{minipage}
\hfill
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/analysis/clf_1cmwaveform_15.pdf}}
\end{minipage}
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/analysis/clf_1cmwaveform_30.pdf}}
\end{minipage}
\hfill
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/analysis/clf_1cmwaveform_31.pdf}}
\end{minipage}
\centerline{(a)}\medskip
\end{minipage}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=.75\linewidth]{imgs/analysis/clf_1cm_conv1d_freq.pdf}}
\hspace*{0.1cm}
\centerline{(b)}\medskip
\end{minipage}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=.75\linewidth]{imgs/analysis/clf_1cm_dense_layer_1.pdf}}
\hspace*{0.1cm}
\centerline{(c)}\medskip
\end{minipage}
\caption{Visualization of the CNN-c model developed to distinguish between AC values of 0.1 and 1.5. (a) Four sample filters of the first 1-D convolutional layer (waveform). (b) Magnitude spectrum of the first 1-D convolutional layer filters. Each column presents the frequency content of one kernel. Filters were sorted according to their mean frequency. (c) Weights of the second, fully-connected layer, which detects AC = 1.5. The model made its classification decision based on the frequency content of the input signals.}
\label{fig:cnn-r}
\end{figure*}
\label{sec:models}
We developed multi-layered artificial neural networks for the purposes of our experiments. Each neural model had a similar structure: the input signal was processed by a 1-D convolutional layer followed by a 1-D average pooling layer (see Fig. \ref{fig:nn}). The output was flattened, then processed by several fully-connected layers. Similar approaches have been applied for acoustic signal processing, for instance by Golik et al. \cite{golik2015convolutional} and Sainath et al. \cite{sainath2015convolutional}. We used the ReLU activation function \cite{nair2010rectified} and applied batch normalization \cite{ioffe2015batch}. Neural network weights were initialized using the Glorot technique \cite{glorot2011deep}.
We experimented with two neural network models to verify and interpret the CNN's ability to distinguish attenuation levels and to estimate the AC value. The first architecture (named CNN-c) contained 1 fully connected layer, had a sigmoid output activation and was trained to discriminate between attenuation levels 0.1 and 1.5 (a classification task) by minimizing binary cross-entropy. We attempted to interpret the knowledge discovered by the learning algorithm and encoded in the CNN-c parameters (see Fig. \ref{fig:cnn-r}). The second architecture (named CNN-r) contained 2 fully connected layers, had a ReLU output activation and was trained to estimate the AC value (a regression task) by minimizing the mean absolute error:
\begin{equation}
\label{eq:mae}
L = \frac{1}{N}\sum_{i=1}^{N}|y_i-\hat{y}_i|
\end{equation}
where $\hat{y}_i$ is an estimated AC value and $y_i$ is a true coefficient value.
We used CNN-r to evaluate the method's performance depending on the size of the RF input data.
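A plain-NumPy sketch of the forward pass in Fig. \ref{fig:nn} follows (untrained random weights; biases, batch normalization and the exact fully-connected layer sizes are omitted or assumed):

```python
import numpy as np

rng = np.random.default_rng(42)

def conv1d(x, kernels):
    """'Valid' stride-1 cross-correlation: x (n,), kernels (n_filters, k)
    -> feature map of shape (n - k + 1, n_filters)."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)
    return windows @ kernels.T

def avg_pool(y, size=10):
    """Non-overlapping average pooling (pool size == stride == 10)."""
    m = (y.shape[0] // size) * size
    return y[:m].reshape(-1, size, y.shape[1]).mean(axis=1)

def cnn_r_forward(x, w_conv, w_fc1, w_fc2):
    h = np.maximum(conv1d(x, w_conv), 0.0)  # 1-D conv + ReLU
    h = avg_pool(h).ravel()                 # average pooling, then flatten
    h = np.maximum(h @ w_fc1, 0.0)          # first FC layer (CNN-r has two)
    return max(float(h @ w_fc2), 0.0)       # ReLU output: AC estimate >= 0

n_in, k, n_filt = 649, 64, 32               # ~10 mm patch at 50 MHz
n_pooled = ((n_in - k + 1) // 10) * n_filt  # flattened size after pooling
w_conv = 0.05 * rng.standard_normal((n_filt, k))
w_fc1 = 0.05 * rng.standard_normal((n_pooled, 64))
w_fc2 = 0.05 * rng.standard_normal(64)
ac_estimate = cnn_r_forward(rng.standard_normal(n_in), w_conv, w_fc1, w_fc2)
```

The filter count, kernel size and pooling parameters follow Fig. \ref{fig:nn}; the hidden FC width of 64 is an arbitrary illustrative choice.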
We used a cross-validation procedure to assess the method's performance. We randomly split the dataset into train, validation and test subsets. We used 50\% of all RF lines for training, 20\% for validation and hyper-parameter tuning and 30\% for the final testing. We used the Adam optimization algorithm with a learning rate equal to $10^{-3}$ to minimize the loss functions. We assessed the model's performance using the mean absolute error (Eq. \ref{eq:mae}) and the standard deviation of the absolute errors $e = |y_i-\hat{y}_i|$.
\section{Results and Discussion}
\subsection{Interpretability}
Visualization of the CNN-c parameters is presented in Fig. \ref{fig:cnn-r}. First, several kernels of the 1-D convolutional layer resembled bandpass filters (both in terms of waveform and spectrum). Similar after-training observations were reported for auditory-like filters in related publications \cite{sainath2015convolutional, golik2015convolutional}. Each kernel had a mean frequency located in the range [0, 10] (MHz), and most were close to 5 (MHz). This agrees with the dataset generation parameters: the center frequency of the ultrasound impulse was equal to $f_0$ = 5 (MHz).
Moreover, the filters with the highest mean frequency (indices 22-31, $f_m \approx$ 7 (MHz)) have a relatively narrow band. It is important to note here that the absence of higher-frequency components in the acquired ultrasound signal may be a good indicator that the signal comes from an area with sufficiently high attenuation. According to Kuc et al. \cite{kuc1980clinical}, some human tissues (like the liver) can "\emph{behave like a distributed acoustic low-pass filter}". It is thus natural to expect that this kind of information will be used by an appropriately trained model.
Our analysis is consistent with a further observation: the weights of the output fully-connected layer (which detects AC = 1.5) were negative for the output from kernels with $f_m \approx 7$ (MHz), approximately at the end of the processed 1-D patch. This means that the model performed AC detection based on the temporal changes of the spectrum. The greater the loss of high-frequency components, the greater the probability that the attenuation was high. Thus, the CNN's underlying operations behaved similarly to other state-of-the-art AC estimation methods.
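The filter sorting used for Fig. \ref{fig:cnn-r}(b) -- magnitude spectra of the kernels ordered by mean frequency -- can be reproduced with a few lines (a sketch; $f_s$ = 50 MHz as in the dataset):

```python
import numpy as np

def mean_frequency(kernel, fs=50e6):
    """Spectral centroid of a 1-D convolution kernel, in Hz."""
    mag = np.abs(np.fft.rfft(kernel))
    freqs = np.fft.rfftfreq(len(kernel), d=1.0 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

def sort_filters_by_frequency(kernels, fs=50e6):
    """Order conv-layer kernels (n_filters, k) by their mean frequency."""
    fm = np.array([mean_frequency(w, fs) for w in kernels])
    order = np.argsort(fm)
    return kernels[order], fm[order]
```

Applied to the trained first-layer weights, this reproduces the ordering of columns in the figure.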
\begin{figure*}[t!]
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/results/regr_1cm_errorbar.pdf}}
\hspace*{0.25cm}
\centerline{k = 10 (mm)}\medskip
\end{minipage}
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/results/regr_05cm_errorbar.pdf}}
\hspace*{0.25cm}
\centerline{k = 5 (mm)}\medskip
\end{minipage}
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/results/regr_02cm_errorbar.pdf}}
\hspace*{0.25cm}
\centerline{k = 2 (mm)}\medskip
\end{minipage}
\begin{minipage}[b]{.24\linewidth}
\centering
\centerline{\includegraphics[width=1\linewidth]{imgs/results/regr_01cm_errorbar.pdf}}
\hspace*{0.25cm}
\centerline{k = 1 (mm)}\medskip
\end{minipage}
\caption{Average AC estimate $\hat{y}$ computed by the CNN based on RF echoes collected from tissue-mimicking numerical phantoms with attenuation $y \in \{0.1, 0.2, ..., 1.5\}$. Each point represents the estimate's average $\hat{y}$; whiskers show the standard deviation range. The average AC estimate became more accurate and its standard deviation decreased as the sliding window size $k$ increased.}
\label{fig:res}
\end{figure*}
\subsection{Performance}
Evaluation results are presented in Table \ref{tbl:results}. We obtained the smallest error (and its standard deviation) for the largest window size k = 10 (mm). The smaller the window, the less useful information the network obtained, and the worse the quality of the estimation was. This observation is consistent with our initial assumption that the size of the input RF data can impact CNN-r's performance.
The average estimate values ($\pm$ standard deviation) for the individual attenuation levels are presented in Fig. \ref{fig:res}. The larger the input size, the closer the average estimate was to the true AC. A similar relation can be observed for the standard deviation. Moreover, the smaller the data size, the closer the average estimate was to an AC of 0.5. For example, for k = 10 (mm), the average estimate for AC = 0.1 and AC = 1.5 data was approx. 0.13 and 1.43; for k = 1 (mm) it was 0.43 and 1.1 respectively. Finally, it is important to note that the points in Fig. \ref{fig:res} are arranged approximately along a straight line -- that is, on average, the true order of AC values was retained by CNN-r.
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
Window size & Average absolute error ($\pm$ std. dev.) \\ \hline \hline
10 mm & 0.08 ($\pm$ 0.07) \\ \hline
5 mm & 0.12 ($\pm$ 0.11) \\ \hline
2 mm & 0.20 ($\pm$ 0.19) \\ \hline
1 mm & 0.25 ($\pm$ 0.22) \\ \hline
\end{tabular}
\caption{Average error $|\hat{y}_i - y_i|$ ($\pm$ standard deviation) for several selected window sizes.}
\label{tbl:results}
\end{table}
\section{Conclusions}
In this work, we positively verified the feasibility of using convolutional neural networks to estimate the ultrasound attenuation coefficient based on RF signal data. We showed, for a simple two-layer neural model, that after an appropriate number of training iterations its weights can reflect the expected loss of the signal's spectral content. In our experiments we noticed that the CNN's performance depended directly on the size of the input ultrasound data.
Our work can be extended in several ways. It would be interesting to verify how neural network models trained on simulated RF data perform in real-world scenarios. The idea of preparing a model on a large synthetic dataset and employing it to estimate AC for real data is very promising. This method may also help assess and improve ultrasound simulation software.
\bibliographystyle{IEEEbib}
\section{Introduction}
Parkinson's Disease (PD) is a neurodegenerative disorder with well-known symptoms such as slowed movement, rigidity, tremor and various non-motor symptoms (NMS). The appearance of these symptoms and the disease progression, however, differ greatly from patient to patient, and clinical documentation does not capture fine-grained objective phenotypical characteristics. Clinical documentation of motor symptoms, for instance, only describes three main PD subtypes: 1) Tremor-dominant PD, 2) Akineto-rigid PD, 3) Mixed/Equivalence type. Although there is no neuroprotective or regenerative treatment to date, early diagnosis and treatment are important in reducing the disease burden and potential treatment costs \cite{postuma2019prodromal}. Thus, there is a need for early objective biomarkers.
Various systems have already demonstrated promising diagnostic potential when analyzing data modalities like electronic questionnaires, hand movement and voice captures \cite{lee2016validation, rusz2018smartphone, carignan2015measuring}. These studies were able to differentiate between PD and healthy subjects based on digital biomarkers, an important step towards potential clinical adoption. However, to the best of our knowledge, there is still a lack of comprehensive evaluation of combinations of these biomarkers in an interactive smart-device-based assessment setting. In particular, it is important to also consider other, similar movement disorders in the analysis to improve the disease-specificity of the biomarkers.
To approach this problem, we have developed a Smart Device System (SDS) to analyze PD patients based on multi-modal data recording. In a compact assessment, the SDS was used to record self-completed electronic questionnaires and smartwatch-based sensor measures from a series of movement tasks. Given this system, we have recorded a total of 503 patient sessions in a prospective study from 2018 to 2021. Based on this study data, we have already found high diagnostic potential utilizing Machine Learning (ML) methods with movement and questionnaire data \cite{varghese2021sensor}. In a later stage of the study, further selected modalities were added to the assessments, namely speech recordings and a smartphone-based finger tapping task.
In this work, we analyze the benefits of our system's multi-modality. We therefore 1) train ML models to discriminate PD from healthy controls and other movement disorders, and 2) perform a cluster analysis within the PD group. Our research question is whether the usage of multi-modal data compared to single-modal data increases information gain and thus 1) improves diagnostic accuracy when modalities are combined, and 2) lets us discover distinguishable PD subgroups that go beyond the aforementioned three clinically established main types.
\section{Study Data}
The study has been registered (ClinicalTrials.gov ID: NCT03638479) and approved by the ethical board of the University of Münster and the physician’s chamber of Westphalia-Lippe (Reference number: 2018-328-f-S).
Three participant groups have been recorded: 1) Parkinson’s disease (PD), including a broad range of PD progression stages according to Hoehn and Yahr \cite{bhidayasiri2012parkinson}, 2) differential diagnoses (DD) and 3) healthy controls (HC). Diagnoses were based on ICD-10 codes, confirmed by neurologists and reviewed by one senior movement disorder expert.
Our analysis focuses on participants who completed an assessment that included all data modalities. From each participant we collected the following data:
\begin{enumerate}
\item \textbf{Self-completed questionnaire:} The first part includes information about age, height, weight, kinship with PD, alcohol consumption and medication. The second part consists of 30 yes/no items about NMS based on Chaudhuri et al. \cite{chaudhuri2006international}.
\item \textbf{Smartwatch-recorded movement tasks:} 11 different movement tasks of 10 to 20 seconds length were performed with one smartwatch attached to each of the participant's wrists. Acceleration and rotation data were recorded synchronously.
\item \textbf{Voice recording:} 3 types of speech tasks were recorded: (i) holding vowel tones ("a"/"i"/"o") in one breath, (ii) fast repetition of the syllables "pah"/"tah"/"kah" and (iii) sentence reading.
\item \textbf{Finger tapping:} Using 3 fingers, participants were asked to tap the smartphone screen repeatedly for 15 seconds as quickly as possible.
\end{enumerate}
Details about the individual assessment steps are described in Varghese et al. \cite{varghese2019smart}. The sample size of all participants is summarized in \autoref{tab:0}.
\begin{table}[h!]
\caption{Participant sample}
\centering
\begin{tabular}{c|c|c}
\hline
Data modalities & Disease class & Sample size \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Questionnaires, \\ movement\end{tabular}} & PD & 279 \\
& DD & 133 \\
& HC & 90 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Questionnaires,\\ movement, voice,\\ finger tapping\end{tabular}} & PD & 21 \\
& DD & 27 \\
& HC & 23 \\ \hline
\end{tabular}
\label{tab:0}
\end{table}
\subsection{Feature Extraction} \label{sec:FeatureExtraction}
Given the assessment data, we performed a feature extraction procedure in order to prepare the data for ML. The following feature sets were generated for the respective modalities and used for classification:
\begin{enumerate}
\item \textbf{Self-completed questionnaire:} All 30 NMS answers were used in binary format, other personal data was not considered.
\item \textbf{Smartwatch-recorded movement tasks:} Two representative tasks were selected, "Relaxed" and "Lift and Hold". The recorded movements consist of time series for both smart-watches in three spatial axes for acceleration and rotation sensor measures. On these time series we computed frequency powers for 2 to 12 Hz in 1 Hz steps using Welch's power spectral density (PSD).
\item \textbf{Voice recording:} We computed Jitter via autocorrelation on all vocal tasks, measuring the extent of variation of the voice range.
\item \textbf{Finger tapping:} We divided the 15-second-long record into three equal-size segments and calculated the average speed and total count of display touches in every segment.
\end{enumerate}
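Steps 2 and 4 above can be sketched as follows (illustrative only; the 50 Hz smartwatch sampling rate, the Welch segment length and the exact band edges are our assumptions):

```python
import numpy as np
from scipy.signal import welch

def band_powers(x, fs=50.0, centers=range(2, 13)):
    """Power in 1-Hz bands centered at 2..12 Hz, from Welch's PSD."""
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    return np.array([pxx[(f >= c - 0.5) & (f < c + 0.5)].sum() for c in centers])

def tapping_features(tap_times_s, duration_s=15.0, n_segments=3):
    """Tap count and average tap rate (taps/s) per equal-length segment."""
    edges = np.linspace(0.0, duration_s, n_segments + 1)
    counts = np.histogram(tap_times_s, bins=edges)[0]
    rates = counts / (duration_s / n_segments)
    return counts, rates
```

Applied to all axes, sensors and wrists, `band_powers` yields the per-task movement feature vector; `tapping_features` yields the per-segment tapping features.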
In addition, we generated a subset for the cluster analysis to account for the small sample size of the PD group with all data modalities (see \autoref{tab:0}).
Voice and finger tapping features were fully included in the subset. Questionnaire data were reduced to one feature by summing positively answered questions. Moreover, the movement features were reduced by only including the assessment "Relaxed" and summing the frequency powers from 2 to 12 Hz.
\section{Classification}
Given the previously described features, we have trained and optimized ML classifiers for the individual data modalities. We used the scikit-learn implementation of the support-vector machine (SVM) \cite{sklearn_api} and CatBoost, a gradient-boosting decision-tree-based model \cite{dorogush2018catboost}. To evaluate the potential information gain of combining features from different data modalities, we performed an adapted version of classifier stacking. In this version, a certain classifier is trained on each respective source of sensor data, e.g. the movement data is only fed to a movement classifier. In this way, we trained one classifier for each data modality and could thus utilize the additional samples for smartwatch and questionnaire data in the training process. A simple linear model was trained on top of the individual outputs to consider all data modalities in the classification process and compute the final label for the input samples. \autoref{fig:architecture} summarizes the architecture and the utilized classifiers. To account for sample size differences, we used balanced class weighting in the training process and report results based on balanced classification accuracy. For evaluation, a 3-times randomly repeated 5-fold cross-validation was used. Two classification tasks were performed: PD vs. HC and PD vs. DD.
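This stacking scheme can be sketched with scikit-learn (illustrative only: synthetic features stand in for the real data, CatBoost is replaced by scikit-learn's \texttt{GradientBoostingClassifier}, and the assignment of model types to modalities is arbitrary):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: one feature matrix per modality, shared labels.
n = 120
y = rng.integers(0, 2, n)
modalities = {
    "questionnaire": rng.random((n, 30)) + 0.3 * y[:, None],
    "movement": rng.random((n, 44)) + 0.3 * y[:, None],
    "voice": rng.random((n, 3)) + 0.3 * y[:, None],
    "tapping": rng.random((n, 6)) + 0.3 * y[:, None],
}
sub_clf = {
    "questionnaire": GradientBoostingClassifier(),  # stand-in for CatBoost
    "movement": SVC(probability=True, class_weight="balanced"),
    "voice": SVC(probability=True, class_weight="balanced"),
    "tapping": SVC(probability=True, class_weight="balanced"),
}

# Each sub-classifier sees only its own modality.
for name, clf in sub_clf.items():
    clf.fit(modalities[name], y)

# Meta-features: per-modality class-1 probabilities; linear model on top.
meta = np.column_stack([sub_clf[m].predict_proba(modalities[m])[:, 1]
                        for m in modalities])
stacker = LogisticRegression(class_weight="balanced").fit(meta, y)
pred = stacker.predict(meta)
```

In practice the meta-features should come from held-out predictions (e.g. cross-validated) rather than the training fit shown here.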
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{images/stacking.pdf}
\caption{Stacking classifier for the classification of PD samples. For each data modality, a selected classifier is trained. The outputs of each subclassifier are forwarded to a logistic regression that performs the final classification.}
\label{fig:architecture}
\end{figure}
\subsection{Results}
Classification performance has been evaluated for the individual classifiers for each respective data modality and the combination of all features using the previously described stacking approach. \autoref{tab:classification} summarizes the averaged classification scores from the cross-validation.
\begin{table}[h!]
\caption{Evaluation of questionnaire, movement, voice and finger tapping data on the sample subset with all data records (44 samples for PD vs. HC, 48 samples for PD vs. DD). Performance is measured with balanced accuracy (STD). Best results are marked in bold.}
\centering
\begin{tabular}{l|l|l}
\hline
Task & PD vs. HC & PD vs. DD \\ \hline
Quest. & 0.843 (0.098) & 0.667 (0.112) \\ \hline
Mov. & 0.825 (0.112) & 0.614 (0.102) \\ \hline
Voice & 0.702 (0.154) & 0.560 (0.198) \\ \hline
Finger Tapping & 0.638 (0.157) & 0.570 (0.084) \\ \hline
Quest. + Mov. + Voice + Finger Tapping & \textbf{0.918 (0.074)} & \textbf{0.722 (0.121)} \\ \hline
\end{tabular}
\label{tab:classification}
\end{table}
\section{Clustering}
We conducted hierarchical clustering within the PD group using the scikit-learn implementation of the agglomerative cluster algorithm \cite{sklearn_api}. To analyze information gain through multi-modality, we compared clustering results of a single data modality (movement features) with multiple data modalities (movement, voice, finger tapping and questionnaire features). The optimal number of clusters was determined from dendrograms by identifying the longest distance between joined clusters. For each cluster, we summarized the cluster composition by distinguishing between the clinically established PD types: The tremor-dominant type (T-type), the akineto-rigid type (AR-type) and the equivalence type (ART-type). These PD types were assigned to the participants by physicians in advance. Participants that could not be categorized to any of the types were documented as Unknown.
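The cluster-count heuristic -- cutting the dendrogram at the largest distance between joined clusters -- can be sketched with SciPy (Ward linkage and the exact cut rule are our assumptions):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_with_gap_heuristic(X):
    """Agglomerative clustering; choose k at the largest jump in merge distance."""
    Z = linkage(X, method="ward")
    dists = Z[:, 2]                     # merge distances, non-decreasing
    gaps = np.diff(dists)
    k = len(X) - (np.argmax(gaps) + 1)  # cut just below the largest gap
    return fcluster(Z, t=k, criterion="maxclust"), k
```

The returned labels can then be cross-tabulated against the clinically assigned PD types, as in the tables above.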
\subsection{Results}
\autoref{fig:clusteringMov} shows the dendrograms for (a) a single data modality (movement features) and (b) multiple data modalities (movement, finger tapping, voice and questionnaire features). In the single-modal analysis, clusters were labeled with the letter S (e.g. cluster S1), in the multi-modal analysis with the letter M (e.g. cluster M1). \autoref{tab:clusterMov} displays the corresponding cluster composition.
\begin{figure}[h]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{images/clustering_m_final.png}
\caption{}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{images/clustering_all_final.png}
\caption{}
\label{fig:sub2}
\end{subfigure}
\caption{Dendrograms of the hierarchical cluster analysis for (a) single-modal data (movement features) and (b) multi-modal data (movement, finger tapping, voice and questionnaire features). The gray horizontal line intersects the largest vertical distance between joined clusters.}
\label{fig:clusteringMov}
\end{figure}
\begin{table}[h]
\caption{The cluster composition by PD types corresponding to the cluster analysis in \autoref{fig:clusteringMov}. All values are given in percent and rounded to the second decimal place.}
\centering
\begin{tabular}{l cc cccc}
\toprule
& \multicolumn{2}{c}{Movement} & \multicolumn{4}{c}{Movement, Finger Tapping, Voice, Questionnaire} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-7}
Type & Cluster S1 & Cluster S2 & Cluster M1 & Cluster M2 & Cluster M3 & Cluster M4 \\
\midrule
T-Type & 33.33 & 0 & 0 & 33.33 & 0 & 0 \\
AR-Type & 0 & 44.44 & 25 & 0 & 50 & 50 \\
ART-Type & 33.33 & 27.78 & 50 & 33.33 & 16.67 & 50 \\
Unknown & 33.33 & 27.78 & 25 & 33.33 & 33.33 & 0 \\
\bottomrule
\label{tab:clusterMov}
\end{tabular}
\end{table}
\section{Discussion}
We have collected multi-modal data with the SDS to extract digital biomarkers. With these, we aimed to distinguish PD patients from other movement disorders and find subgroups within the PD patients. We therefore evaluated two ML tasks: classification and clustering.
For the classification, we reported results based on classifiers that were trained on one data modality only. Classifiers for movement and questionnaire data generally performed better than those utilizing voice or finger tapping data. One explanation for this observation is that significantly more training samples were available for questionnaire and movement data (see \autoref{tab:0}). When using a single modality, the questionnaire classifier achieved the highest balanced accuracy in both classification tasks, with 84.3\% for PD vs. HC and 66.7\% for PD vs. DD. Combining the data modalities improved performance, resulting in a balanced classification accuracy of 91.8\% for PD vs. HC and 72.2\% for PD vs. DD. These results support our hypothesis that the proposed data recordings add informational value to the system, allowing more accurate discrimination of PD from healthy controls and, in particular, from other movement disorders. Further, we observed that PD vs. DD generally yields far less accurate classification results, indicating that more research is needed to precisely characterize and distinguish PD from other similar disorders.
In clustering, we analyzed the optimal cluster number and cluster composition for single- and multi-modal data. Single-modal data resulted in an ideal cluster number of two. Cluster S2 contained mostly the AR-type, as well as the ART-type and samples labeled Unknown. It did not, however, contain the T-type. In contrast, cluster S1 did not contain the AR-type; it did, however, include the T- and ART-type, as well as samples labeled Unknown. The results indicate that smartwatch-based features capture the clinically established PD types. The cluster analysis with multiple data modalities resulted in a finer subdivision of participants. Cluster M2 included the same samples as cluster S1. The remaining samples formed three additional clusters. Cluster M4 consisted equally of the AR- and the ART-type, whereas clusters M1 and M3 consisted of the AR-type, the ART-type and samples labeled Unknown. In cluster M1, the ART-type prevailed, while cluster M3 mostly consisted of the AR-type.
The cluster analysis showed that an increase from single to multiple data modalities results in an increase in the number of clusters. Because each cluster grouped at least two different PD types, we hypothesize that clusters cannot be explained by the clinically established PD types alone.
A limitation of our analysis is the relatively small sample size for the clustering within PD patients. Therefore, to find a stable and representative set of digital biomarkers, further evaluation with more multi-modal measurements (preferably more than 200 PD participants and as many controls) is required.
\section{Conclusion}
We have conducted a study with a multi-modal recording system based on mobile smart devices to research a broad phenotypical spectrum of PD. In this work, we evaluated the information gain that results from using data from different modalities, including questionnaires, movement recordings, voice captures and smartphone-based finger tapping. Our ML analysis resulted in two main findings.
First, combining information from different sensor sources of smart devices improved classification accuracy when distinguishing PD from the HC group. More importantly, we have seen a similar improvement in the classification between PD and the DD group, which consists of other movement disorders. These results indicate that the different data modalities complement each other and in this way aid in characterizing PD more precisely when comparing it to other disorders.
The second observation is related to the cluster analysis. Our methods have shown that we were able to identify certain subgroups within the PD group when utilizing movement data. These representations are in line with medical expectation as PD is medically categorized based on movement symptoms. However, when adding additional data modalities to the clustering, we observed a finer subdivision between clusters. This observation indicates that there are potentially more PD sub-phenotypes beyond the well-known movement-based classifications.
Finding and specifying such yet unknown groups could strongly aid in more personalized PD treatment. As our system is fully based on consumer-grade devices, it could easily be integrated to support early diagnosis and disease monitoring by giving relevant indications from combinatory digital biomarkers.
\bibliographystyle{unsrt}
Elusive DM has so far evaded direct, indirect, collider and astrophysical searches. To date, the experiments PICO-2L and PICASSO have constrained the effective DM-proton spin-dependent (SD) cross section $\sigma_{\chi n}^{SD}$ to values just below $10^{-37} \, \text{cm}^{2}$ for a DM particle with a mass $m_{\chi} = 5 \, \text{GeV}$ \cite{Olive2014c}. If DM particles are thermal relics such as Weakly Interacting Massive Particles (WIMPs), then the natural scale for the (thermally averaged) annihilation cross section is $\left\langle \sigma_{A} v \right\rangle = 3 \times 10^{-26} \, \text{cm}^{3}/\text{s}$ \cite{Steigman2012}. However, the latest Fermi-LAT dwarf satellite galaxy observations have constrained the annihilation cross section to values just slightly lower than that natural scale \cite{Ahnen2016}. On the other hand, in the Asymmetric DM (ADM) scenario, present-day DM annihilation is negligible. Additionally, in general DM particles may also interact with each other without annihilating. From an analysis of colliding galaxy clusters, \citet{Harvey2015} recently set robust limits which constrain the self-interaction cross section to $\sigma_{\chi \chi} / m_{\chi} \lesssim 8.3 \times 10^{-25} \, \text{cm}^{2}/\text{GeV}$.
Meanwhile, helioseismology has been used as a complementary tool to constrain the properties of DM \cite{Turck-Chieze2012}. This is possible because DM particles accumulating in a star transport energy by conduction, affecting its stellar structure, in particular in the stellar core. The presence of WIMPs has been shown to significantly alter the local luminosity and sound speed in the Sun, allowing constraints to be set through a comparison between helioseismic data and solar models including DM \cite{Lopes2002c}. This approach has been extended by \citet{Lopes2002d} to include constraints from solar neutrinos and from solar gravity modes by \citet{Turck-Chieze2012c}. WIMPs with an annihilation cross section close to the natural scale do not accumulate in large enough numbers inside the Sun to produce an impact incompatible with the observational data. On the other hand, \citet{Frandsen2010} showed that accumulation can be greatly enhanced for self-interacting ADM, thus producing a significant impact in the Sun. Considering a WIMP annihilation cross section several orders of magnitude below the natural scale also has this effect. WIMP-like ADM, for which ADM is emulated by WIMPs with a very low annihilation rate, has been investigated, with constraints having been set by \citet{Cumberbatch2010,Taoso2010} and \citet{Lopes2012}. Furthermore, \citet{Lopes2014} studied an ADM scenario with long-range DM-baryon interactions. Recently, \citet{Vincent2015} showed that a solution including $q^{2}$ momentum-dependent ADM reasonably fits the data with $\sigma_{\chi n} = 10^{-37} \, ( q / 40 \, \text{MeV} )^{2} \, \text{cm}^{2}$ and a low $m_{\chi} = 3 \, \text{GeV}$, somewhat below the typical effective-interaction evaporation threshold \citep{Vincent2015a}.
Adding to this, the Sun is no longer the only star we can conduct these studies with. The COROT \cite{Baglin2006,Michel2008} and \textit{Kepler} \cite{Gilliland2010} missions revolutionized asteroseismology by detecting stellar oscillations with a remarkable precision for thousands of solar-like and red giant stars \citep{Chaplin2013a}. We are now in a position to take advantage of that contribution, by using those stars as laboratories for fundamental physics. \citet{Casanellas2011} suggested that diagnostics from stellar oscillations could be used to constrain the properties of DM. In a follow-up, \citet{Casanellas2013} reported the first asteroseismic constraints for WIMP-like ADM from solar-like stars.
Parallel to this situation there is a long-running predicament in astrophysics, the solar composition problem. The issue is the discrepancy between the solar structure inferred from helioseismology and that obtained in Standard Solar Models (SSMs) computed with the most up-to-date photospheric abundances as input \citep{Serenelli2009}. The solar composition problem is relevant not only for solar models, but also for any stellar model, since the abundances in the Sun, which are generally assumed to be similar to those of other solar-type stars, are a crucial input for any stellar model. The solar metallicity worked out almost three decades ago has since been revised to lower values, and more recently it was brought slightly back up to a reliable present-day $Z/X = 0.0181$ \cite{Asplund2009d,Scott2014,Scott2014a,Grevesse2014}. Yet, the problem persists, with possible solutions ranging from more accurate spectroscopic analysis and radiative opacities, to an enhanced neon abundance, to more accurate stellar modelling \citep{Bergemann2014d}. Interestingly though, the solar composition and the DM problems may not be as parallel as initially thought, with DM possibly, though improbably, explaining the difference between the solar structure inferred from helioseismology and that obtained for current SSMs.
We modelled three stars including DM in two scenarios: WIMP-like ADM with
\begin{eqnarray}
\left\langle \sigma_{A} v \right\rangle = 10^{-33} \, \text{cm}^{3}/\text{s} \nonumber
\end{eqnarray}
and ADM with
\begin{eqnarray}
\sigma_{\chi \chi} = 10^{-24} \, \text{cm}^{2}/\text{GeV} . \nonumber
\end{eqnarray}
We concern ourselves exclusively with an effective DM-proton SD interaction cross section and the region of the parameter space explored here is:
\begin{eqnarray}
4 \lesssim m_{\chi} / \text{GeV} \lesssim 15 , \quad 10^{-37} \lesssim \sigma_{\chi n}^{SD} / \text{cm}^{2} \lesssim 10^{-34} \nonumber
\end{eqnarray}
Besides the Sun we modelled the less massive KIC 7871531, with a modelled mass of $0.85 \, \text{M}_{\astrosun}$ and spectral type G5V \citep{Molenda-Zakowicz2013} and the more massive F9IV-V KIC 8379927 with a modelled mass of $1.12 \, \text{M}_{\astrosun}$.
Asteroseismology has been used before to study the effects of WIMP-like ADM in stars less massive than the Sun \citep{Casanellas2013}. However, this is the first time that asteroseismic signatures of a star less massive than the Sun are used to study self-interacting ADM, hereafter known simply as ADM. Moreover, that same less massive star is also a very old one, with a model age of $9.41 \, \text{Gyr}$, which means that DM accumulates in greater numbers, producing a significantly greater impact.
In section \ref{ADM} we discuss the differences between WIMP and ADM accumulation inside a star, then in section \ref{stars} we revisit how those trapped particles expedite the transport of energy, consequently having an impact on stellar structure during the course of stellar evolution. In section \ref{model} we elaborate on how we picked, modelled and calibrated these particularly appropriate stars. We then proceed to present our results in section \ref{results}, upon which we discuss our findings in section \ref{discussion_conclusions}.
\section{Accumulation of self-interacting Asymmetric DM inside a star} \label{ADM}
\begin{figure*}
\subfloat[$\sigma_{\chi \chi} = 10^{-24}, \, 10^{-25} \text{ and } 10^{-26} \, \text{cm}^{2}$ for the solid, dashed and dotted lines, respectively. \label{adm_wimp_xx}]
{\includegraphics[width=0.98\columnwidth]{fig1a}}
~
\subfloat[$\left\langle \sigma_{A} v \right\rangle = 10^{-33}, \, 10^{-30} \text{ and } 10^{-26} \, \text{cm}^{3}/\text{s}$ for the solid, dashed and dotted lines, respectively. \label{adm_wimp_an}]
{\includegraphics[width=0.98\columnwidth]{fig1b}}
\caption{Analytical approximations to the hypothetical present solar value of $N_{ADM}/N_{WIMP}$ as a function of the DM-nucleon interaction cross section. Figure \ref{adm_wimp_xx} is for $\left\langle \sigma_{A} v \right\rangle = 10^{-33} \, \text{cm}^{3}/\text{s}$, while figure \ref{adm_wimp_an}, which shows $\log_{10}(N_{ADM}/N_{WIMP})$, is for $\sigma_{\chi \chi} = 10^{-24} \, \text{cm}^{2}$. \label{adm_wimp}}
\end{figure*}
The fact that present-day baryonic and dark matter densities are of the same order of magnitude led to the idea of a connection between these components. This connection can be realized by an asymmetry generated in either or both sectors which is then communicated between them, giving rise not only to the baryonic asymmetry, but also to ADM models \citep{Petraki2013,Zurek2014}. In ADM scenarios, only DM particles would remain today, after DM antiparticles vanished by annihilating with the former. Hence, these scenarios contrast with the self-conjugate WIMP picture in that there is no annihilation at the present time. Thus, to correctly estimate the number of trapped particles inside a star in the ADM scenario, it is necessary to consider DM self-interaction, as the larger number of already accumulated particles may increase the self-capture rate significantly.
The number of trapped WIMPs inside a star would evolve as
\begin{equation}
N_{WIMP} (t) = \sqrt{ C_{\chi n} / \Gamma_{sa} } \tanh \left( t \sqrt{ C_{\chi n} \Gamma_{sa} } \right) ,
\end{equation}
where $C_{\chi n}$ is the DM capture rate due to DM halo particles scattering off nucleons and $\Gamma_{sa}$ is the self-annihilation rate. For the WIMP picture it is not necessary to consider self-interaction, because DM capture due to scattering off nucleons is much more relevant than self-capture when annihilation is considered. The balance is essentially between the dominant capture mechanism and annihilation.
On the other hand, the number of trapped ADM particles inside a star would evolve as
\begin{equation}
N_{ADM} (t) = C_{\chi n} / C_{\chi \chi} \left( \exp( C_{\chi \chi} t ) - 1 \right),
\end{equation}
until a geometric limit is reached when the trapped DM becomes optically thick, at which point $N_{ADM}$ grows only linearly with time. Here $C_{\chi \chi}$ is the capture rate due to DM particles in the halo scattering off other DM particles already trapped inside the Sun. The capture rates $C_{\chi n} \propto \sigma_{\chi n}$ and $C_{\chi \chi} \propto \sigma_{\chi \chi}$ are approximated as in \cite{Zentner2009}. The annihilation rate $\Gamma_{sa} \propto \left\langle \sigma_{A} v \right\rangle$ was given by \cite{Griest1987}. The geometric limit is reached early on in the life of a star for the parameter space explored in this work.
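For illustration, the two growth laws above can be evaluated numerically. The rate values in this sketch are arbitrary placeholders, chosen only to exhibit the qualitative difference between saturating WIMP accumulation and exponential ADM accumulation; they are not rates computed from the capture-rate formulas of \cite{Zentner2009}.

```python
import numpy as np

# Arbitrary illustrative rates (NOT from the actual capture calculations):
C_chi_n = 1e25     # capture rate off nucleons, particles/s
C_chi_chi = 1e-17  # self-capture rate per trapped particle, 1/s
Gamma_sa = 1e-55   # self-annihilation rate coefficient, 1/s

t = 4.57e9 * 3.156e7  # approximate solar age, in seconds

# WIMPs: capture balanced by annihilation, saturating as tanh.
N_wimp = np.sqrt(C_chi_n / Gamma_sa) * np.tanh(t * np.sqrt(C_chi_n * Gamma_sa))

# ADM: no annihilation, exponential growth boosted by self-capture
# (valid before the geometric, optically thick limit is reached).
N_adm = (C_chi_n / C_chi_chi) * np.expm1(C_chi_chi * t)
```

With these placeholder rates the ADM population exceeds the WIMP population, mirroring the ratios plotted in figure \ref{adm_wimp}.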
The capture rates are also determined by the DM halo parameters near the star, by the galactic orbital velocity $v_{*}$ of the object and by the escape velocity at the surface of the target $v_{esc}$ \cite{Zentner2009}. The two stars studied here besides the Sun are located close to it, thus local DM halo parameters are used for all models, namely a density $\rho_{\chi} = 0.38 \, \text{GeV/cm$^{3}$} $ and a velocity dispersion $\bar{v_{\chi}} = 270 \, \text{km/s}$ \cite{Read2014, Choi2014}. The escape velocity $v_{esc}$ is computed for each star, at each time step. All models are also computed with a local Milky Way orbital velocity $v_{*} = 220 \, \text{km/s}$ which can differ, within the same order of magnitude, for the other two stars. However, that difference introduces only a small error for the capture \cite{Lopes2011}.
For comparison of ADM and WIMP accumulation, figures \ref{adm_wimp_xx} and \ref{adm_wimp_an} show analytical approximations to the present solar fraction of the number of trapped DM particles in the ADM scenario relative to the WIMP picture. The number of ADM particles trapped inside the Sun is greater than that of WIMPs by a factor of a few for $\left\langle \sigma_{A} v \right\rangle \sim 10^{-33} \, \text{cm}^{3}/\text{s}$ and by $10^{4}$ relative to the natural scale of annihilation for a thermal relic. Figure \ref{adm_wimp_xx} clearly shows that, although the absence of annihilation drastically changes DM accumulation, the number of trapped ADM particles is rather insensitive to the self-capture cross section. In fact, lowering the value of the self-capture cross section considered here, $\sigma_{\chi \chi} = 10^{-24} \, \text{cm}^{2}$, to $10^{-26} \, \text{cm}^{2}$ reduces the number of trapped particles by less than a factor of $2$, and only for lower values of the DM-nucleon cross section. Moreover, the annihilation cross section considered for WIMPs, $\left\langle \sigma_{A} v \right\rangle \sim 10^{-33} \, \text{cm}^{3}/\text{s}$, is several orders of magnitude lower than the natural scale and would lead to overclosure. We merely adopted this low value for comparison with self-interacting ADM, by emulating ADM with WIMPs at this low annihilation rate (WIMP-like ADM). Greater annihilation cross sections do not allow enough WIMP accumulation in a star to produce a significant effect. In contrast, in the ADM scenario there is no annihilation, so unusually low annihilation cross sections are not required to obtain a significant impact on stars; moreover, for ADM the self-capture mechanism becomes relevant.
A trapped DM particle can evaporate by scattering off a proton and gaining enough energy to escape the gravitational potential of the star. Gould determined the minimum mass a DM particle must have in order to stay in the Sun and not evaporate \citep{Gould1987a,Gould1990}. For the parameter space explored in this work, evaporation in the Sun is essentially irrelevant for masses $m_{\chi} \gtrsim 3.7 \, \text{GeV}$. The evaporation mass for the other stars can be only slightly different \citep{Zentner2011}, so evaporation can be safely neglected as we work for $m_{\chi} > 4 \, \text{GeV}$.
\section{DM energy transport and Asteroseismic diagnostics} \label{stars}
DM provides an additional energy transport mechanism in a star. DM particles can conduct energy from the stellar interior to the outer layers, significantly affecting stellar structure. Consequently, DM impacts stellar oscillations, from which precision diagnostics can be used to explore the properties of DM.
We use \verb|dmp2k| to compute stellar structure and evolution including DM energy transport. \verb|dmp2k| integrates \verb|CESAM2k| \citep{Morel2007} and a collection of routines, some based on \verb|DarkSUSY| \citep{Gondolo2004}. \verb|CESAM2k| calculates 1D quasi-static stellar structure and evolution including diffusion. Similarly to what is described by \cite{Lopes2002c,Scott2009,Lopes2011}, to emulate the effects of DM energy conduction we included an extra transport luminosity
\begin{equation}
L_{trans}(r,t) = \mathfrak{s}(K,r,t) L_{trans, LTE}(r,t) ,
\end{equation}
where $\mathfrak{s}(K,r,t)$ is a suppression factor depending on the Knudsen number $K$ and on the radius $r$ and age $t$. To emulate the non-local energy transport regime due to an isothermal DM distribution, the suppression factor decreases the energy transported by DM particles distributed in Local Thermodynamic Equilibrium (LTE),
\begin{eqnarray}
L_{trans, LTE}(r,t) = 4 \pi r^{2} \kappa(r,t) n_{\chi, LTE}(r, t) l(r,t) \times \nonumber \\
\times \left[ \frac{ k_{B} T_{\star}(r,t) }{ m_{\chi} c^{2} } \right]^{1/2} k_{B} \frac{d T_{\star}(r,t)}{dr} ,
\end{eqnarray}
with $\kappa$ the opacity, $n_{\chi, LTE}$ the number density of DM particles in LTE, $l$ the mean free path of these particles and $T_{\star}$ the stellar temperature.
The quasi-static equilibrium problem is solved to a required precision level of $10^{-4}$ using between $500$ and $1000$ mass shells and evolution is computed in about $50$ time steps. The solar photospheric abundances of Asplund, Grevesse, Sauval and Scott, AGSS09ph \citep{Asplund2009,Serenelli2009} are adopted for the chemical composition. The mixing length theory parameter is chosen fixed at $\alpha = 1.8$ without overshoot. The OPAL 2001 tables \citep{Rogers2002} are used for the equation of state and the OPAL+Alexander tables \citep{Rogers1996,Ferguson2005} for the Rosseland mean of the opacities. The NACRE compilation \citep{Angulo1999,Morel1999} is used for the thermonuclear reaction rates.
The oscillation mode frequencies $\nu_{n,\ell}$ for the radial order $n$ and spherical degree $\ell$ are computed from the stellar models using the Aarhus adiabatic pulsation package \citep{Christensen-Dalsgaard2008}. Stellar interior diagnostics can then be determined from the oscillation mode frequencies. Here, we focus on $r_{02}$, the ratio of small to large separations proposed by \cite{Roxburgh2003a},
\begin{equation}
r_{02} (n) = \frac{ \nu_{n,0} - \nu_{n-1,2} }{ \nu_{n,1} - \nu_{n-1,1} } ,
\end{equation}
which is weighted towards the stellar core, where the effects of DM are most relevant. Since $r_{02}$ is independent of the outer layers, it is not significantly affected by inaccurate atmospheric modelling. Our analysis is then based on the statistical test
\begin{equation}
\chi^{2}_{r_{02}} = \sum_{n} \left( \frac{r_{02}^{obs}(n)-r_{02}^{mod}(n)}{\sigma_{r_{02}^{obs}}(n)} \right)^{2} .
\end{equation}
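Both quantities are straightforward to evaluate from tabulated mode frequencies. The sketch below assumes a hypothetical data layout with frequencies stored in dictionaries keyed by radial order, one per spherical degree; the numerical values are made up only to exercise the definitions.

```python
def r02(nu0, nu1, nu2, n):
    """Ratio of small to large separations at radial order n.

    nu0, nu1, nu2 map radial order n -> frequency (muHz) for
    spherical degrees l = 0, 1 and 2, respectively.
    """
    return (nu0[n] - nu2[n - 1]) / (nu1[n] - nu1[n - 1])

def chi2_r02(obs, mod, sigma, orders):
    """Sum of squared, sigma-weighted obs-model differences over the orders."""
    return sum(((obs[n] - mod[n]) / sigma[n]) ** 2 for n in orders)

# Made-up frequencies, in muHz:
nu0 = {20: 2000.0}
nu1 = {19: 1930.0, 20: 2065.0}
nu2 = {19: 1990.0}
print(r02(nu0, nu1, nu2, 20))  # (2000 - 1990) / (2065 - 1930)
```

In practice `obs` would come from the observed eigenfrequencies and `mod` from the frequencies computed with the Aarhus pulsation package.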
Because $r_{02}(n)$ ratios with consecutive orders share one eigenfrequency, the different ratios used could in principle be correlated. However, by generating samples of the ratios through the sampling of the observed normally distributed eigenfrequencies, we found that in general the correlation is very small, below $0.01$. Besides $r_{02}$, \cite{Roxburgh2003a} proposed two other ratios, $r_{01}$ and $r_{13}$, as diagnostics of the interior of solar-like stars. $r_{01}$, defined as a $5$ point separation, captures fine details that the models cannot yet reproduce accurately. $r_{13}$ is defined in an analogous way to $r_{02}$, but taking the small separation between modes with $\ell=1$ and $3$. Unfortunately, due to partial cancellation \cite{Aerts2010}, we cannot yet detect $\ell=3$ modes, except for the Sun.
Numerical tests made by \cite{Roxburgh2003a} found that $r_{02}$ and similar ratios have an accuracy of $0.5 \%$; as explained by these authors, the uncertainties in such ratios are solely due to the dependence of the inner phase shifts of the acoustic modes on the stellar structure. This high accuracy is possible because the influence of the problematic external layers of sun-like stars is suppressed in these ratios. Indeed, current stellar models give a poor description of the external layers of stars, with no inclusion of non-adiabatic convection and oscillations, and with unrealistic stellar atmospheres. Hence, such ratios are a powerful tool to probe the core of stars when high-quality data are available, like the data obtained by the space missions COROT and \textit{Kepler}. Accordingly, we choose to use such ratios in this study to probe the cores of our sun-like star targets; however, to be on the conservative side of the argument, we adopt an uncertainty of $1 \%$.
The discrepancies arising from the solar composition problem are the main source of uncertainty to our standard solar model. We captured the SSM uncertainty from the difference between a solar model computed with the AGSS09ph abundances and another computed with the GS98 composition, determining a $2 \%$ change in $r_{02}$ on average over the individual modes. We adopted this value as the reference uncertainty for all solar models. It is particularly important to have a conservative error estimate for the solar models because solar oscillation frequencies are determined very precisely. Otherwise, without the uncertainty from the solar model, we would be using a very precise diagnostic for a comparatively inaccurate model. Naturally, the model would be excluded due to the lack of detail in the analysis.
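The $2\%$ figure corresponds to an average relative difference in $r_{02}$ over the individual modes. Schematically, with placeholder ratio values standing in for the AGSS09ph and GS98 solar models (the numbers below are illustrative, not actual model output):

```python
import numpy as np

# Placeholder r02 values for two solar compositions (illustrative only):
r02_agss09 = np.array([0.070, 0.068, 0.066, 0.064])
r02_gs98 = np.array([0.0714, 0.0693, 0.0673, 0.0653])

# Average relative change over the modes; ~2% here by construction.
avg_rel_diff = np.mean(np.abs(r02_agss09 - r02_gs98) / r02_agss09)
```

The resulting average is adopted as the reference model uncertainty against which the very precise solar oscillation data are compared.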
\section{Selecting and calibrating a star} \label{model}
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1}
\begin{table*}
\begin{tabular}{rrrrrrr}
\toprule
Star & $Z$ & $T_{eff} (\text{K})$ & $Z/X$ & $\log(g/(\text{cm}/\text{s}^2))$ & $\Delta \nu (\mu \text{Hz})$ \\
\colrule
Sun & $0.0134$ & $5777$ & $0.0181$ & $4.438$ & $135$ \\
KIC 7871531 (Star A) & $0.0113 \pm 0.0054$ & $5400 \pm 100$ & $0.016 \pm 0.004$ & $4.4 \pm 0.2$ & $150 \pm 5$ \\
KIC 8379927 (Star B) & $0.0117 \pm 0.0054$ & $6000 \pm 200$ & $0.018 \pm 0.008$ & $4.4 \pm 0.2$ & $120 \pm 2$ \\
\botrule
\end{tabular}
\caption{Input observational constraints for the benchmark models. Solar values are shown for reference. \label{input}}
\end{table*}
\begin{table*}
\begin{tabular}{rrrrrrrrrr}
\toprule
Star & $M (\text{M}_{\astrosun})$ & $Z$ & $\tau (\text{Gyr})$ & $T_{eff} (\text{K})$ & $Z/X$ & $\log(g/(\text{cm}/\text{s}^2))$ & $R (\text{R}_{\astrosun})$ & $L (\text{L}_{\astrosun})$ & $\left\langle \Delta \nu \right\rangle (\mu \text{Hz})$ \\
\colrule
Sun & $1$ & $0.0134$ & $4.57$ & $5777$ & $0.0181$ & $4.438$ & $1$ & $1$ & $135$ \\
KIC 7871531 (Star A) & $0.85$ & $0.0140$ & $9.41$ & $5487$ & $0.0167$ & $4.47$ & $0.886$ & $0.639$ & $149.2$ \\
KIC 8379927 (Star B) & $1.12$ & $0.0180$ & $1.82$ & $6158$ & $0.0235$ & $4.39$ & $1.12$ & $1.62$ & $120.2$\\
\botrule
\end{tabular}
\caption{Resulting benchmark model parameters. Solar values are shown for reference. \label{output}}
\end{table*}
Undoubtedly our knowledge of the Sun renders it the best stellar laboratory for constraining DM. Nevertheless, many good laboratories are sometimes statistically more relevant than an excellent one. \textit{Kepler} observed many stars showing a large number of detected oscillation modes \citep{Appourchaux2012}. From this set, interesting candidates for modelling have stellar fundamental properties strongly constrained by photometric and spectroscopic observations. Within that subset, the best candidates are those for which astrometric observations are also available, for example stars observed by both \textit{Kepler} and \textit{Hipparcos} \citep{SilvaAguirre2012}. Thus, the ideal candidate is an object with highly constrained fundamental properties and a large number of detected oscillation modes. Binary stars are also very interesting since some of their fundamental properties can be determined to high precision \citep{Torres2009}. Additionally, models of sub-solar mass stars can evidence a greater DM impact than models of stars more massive than the Sun. This is because in less massive stars, the DM luminosity corresponds to a greater fraction of the total luminosity, making these objects of particular interest.
In this work, we study the Sun to check the robustness of our models and to set a standard for our analysis. We also study a sub-solar mass star, KIC 7871531, of spectral type G5V and a modelled mass of $0.85 \, \text{M}_{\astrosun}$. KIC 7871531 (hereafter referred to as star A) was observed by \textit{Kepler} and subsequently \cite{Appourchaux2012} identified $26$ oscillation modes for this star. Additionally, we also modelled KIC 8379927, a star of spectral type F9IV-V and a modelled mass of $1.12 \, \text{M}_{\astrosun}$. KIC 8379927 (hereafter referred to as star B) was observed by both \textit{Kepler} and \textit{Hipparcos}, and $37$ modes have been detected \citep{Appourchaux2012}.
It is possible to calibrate the solar model to precisely match the well known solar fundamental parameters. Moreover, the solar age is determined with considerable precision, for example from meteoritic analysis \citep{Connelly2012}. In contrast, for other stars, even if the mass and metallicity were known with an acceptable precision, the age would not be. Consequently, there is a degeneracy in the fundamental parameters which must also be considered, even before the effects of DM come into play.
To find a calibrated stellar model for stars other than the Sun, we proceeded by first building a set of models of those stars within a grid of masses, metallicities and ages, $(M,Z/X,\tau)$, without including the effects of DM. We then compared each model within that set with input observational constraints and kept only the subset which satisfies those restrictions. For each star, we considered input observational constraints on the effective temperature $T_{eff}$, total iron content $[Fe/H]$, surface gravity $g$ and the average of the large separation $\langle \Delta \nu \rangle$. For all stars, $\langle \Delta \nu \rangle$ was computed over all the possible $n$, for $l=0$ only. A different number of large separations was used to compute the average for each star: $22$ for the Sun ($n=7$ through $28$), $6$ for star A ($n=19$ through $24$) and $12$ for star B ($n=16$ through $27$). The $\langle \Delta \nu \rangle$ constraint was compared against the primary frequency splitting obtained from the scaling relation $\Delta \nu_{0} = (M/M_{\astrosun})^{1/2} (R/R_{\astrosun})^{-3/2} \, 134.9 \, \mu \text{Hz}$ \cite{Kjeldsen1995}, thus further constraining the models in the initial grid, even when they satisfy the other constraints, including the surface gravity $g$, which also relates the mass and radius of the star. These constraints, given in table \ref{input}, are based on \cite{Bruntt2012a} and \cite{Molenda-Zakowicz2013} for star A and on \cite{Karoff2013b} and \cite{Molenda-Zakowicz2013} for star B. We then computed the asteroseismic diagnostics for the subset of models satisfying the input observational constraints and compared those diagnostics to the results inferred from observation. The closest model, which best mimics the observed star, in particular in the core, is the benchmark model. It is against this model that we compare any model of that star including DM.
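The scaling-relation consistency check is simple to reproduce. The sketch below evaluates the \cite{Kjeldsen1995} relation for the star A benchmark values of table \ref{output} ($M = 0.85 \, \text{M}_{\astrosun}$, $R = 0.886 \, \text{R}_{\astrosun}$); the function name is ours.

```python
def delta_nu_scaling(mass_msun, radius_rsun):
    """Kjeldsen & Bedding (1995) large-separation scaling relation, in muHz."""
    return 134.9 * mass_msun**0.5 * radius_rsun**-1.5

# Benchmark values for star A (KIC 7871531):
print(round(delta_nu_scaling(0.85, 0.886), 1))  # close to the reported 149.2
```

By construction the relation returns $134.9 \, \mu\text{Hz}$ for solar mass and radius, and for star A it lands within a fraction of a $\mu\text{Hz}$ of the tabulated $\left\langle \Delta \nu \right\rangle$.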
Table \ref{output} shows the resulting parameters for the benchmarks, which are similar to those found throughout the literature, see for example \cite{Metcalfe2014a} for star A and \cite{Mathur2012a} and \cite{Karoff2013b} for star B. The DM stellar models of each star are built by taking the benchmark model and including the effects of DM.
\section{Results} \label{results}
In total, more than $1000$ CPU hours (at $2.93 \, \text{GHz}$) were used to compute the stellar models corresponding to the results presented here.
\begin{figure}
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.7cm, width=1\columnwidth]{fig2a} \label{sun_adm_tc_pct}}
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.7cm, width=1\columnwidth]{fig2b} \label{kic7871531_adm_tc}}
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.7cm, width=1\columnwidth]{fig2c} \label{kic8379927_adm_tc}}
\caption{Relative differences in the central temperature of models including ADM ($\delta T_{c}/T_{c}^{bench}$ in \%) for the Sun (top, $T_{c}^{bench} = 15.46 \, \text{MK}$), the less massive star A, KIC 7871531 (middle, $T_{c}^{bench} = 15.2 \, \text{MK}$) and the more massive star B, KIC 8379927 (bottom, $T_{c}^{bench} = 16.6 \, \text{MK}$). \label{adm_tc}}
\end{figure}
\subsection{Sun}
\begin{figure*}
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.7cm, width=1\columnwidth]{fig3a} \label{sun_wimp_tc}}
~
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.7cm, width=1\columnwidth]{fig3b} \label{sun_adm_tc}}
\caption{Squared errors ($\chi_{T_{c}}^{2}$) for the central temperatures of solar models including WIMPs (left) and ADM (right). \\ The reference SSM central temperature $T_{c} = 15.750 \pm 0.5 \, \text{MK}$ is from a seismic solar model which stabilized the neutrino flux and gravity mode frequency predictions given by solar models \cite{Turck-Chieze2011}. Also shown are the $95\%$ and $99\%$ CLs. \label{sun_dm_tc}}
\end{figure*}
ADM has a considerable impact on the Sun. Figure \ref{sun_adm_tc_pct} shows the drop in central temperature, relative to the benchmark, for models including ADM. In the low-mass, high-cross section region of the parameter space, the drop due to ADM reaches $\sim 12 \%$ of the benchmark central temperature. This is in contrast with WIMPs, for which the drop does not go beyond $\sim 7 \%$, even at a very low annihilation rate $\left\langle \sigma_{A} v \right\rangle \sim 10^{-33} \, \text{cm}^{3}/\text{s}$. ADM can thus have a considerably greater impact than WIMPs, without the need to push the annihilation rate to extremely low values.
To further illustrate this point, we compared the differences between the two scenarios taking the SSM central temperature to be $T_{c}^{SSM} = 15.750 \pm 0.5 \, \text{MK}$ \citep{Turck-Chieze2011}. This central temperature is merely a reference; it corresponds to the central temperature of a seismic solar model which stabilized the neutrino flux and gravity mode frequency predictions given by solar models. An analysis based solely on this value already disfavours some regions of the parameter space, as shown in figure \ref{sun_dm_tc}, where
\begin{equation}
\chi_{T_{c}}^{2} = \left( \frac{T_{c}^{SSM}-T_{c}^{mod}}{\sigma_{T_{c}}^{SSM}} \right)^{2} .
\end{equation}
For a particle of mass $m_{\chi} = 5 \, \text{GeV}$ for example, WIMPs with $\sigma^{SD}_{\chi n} \gtrsim 6.3 \times 10^{-36} \, \text{cm}^{2}$ are disfavoured, as well as ADM particles with $\sigma^{SD}_{\chi n} \gtrsim 3.1 \times 10^{-36} \, \text{cm}^{2}$, up to a $99\%$ confidence level (CL). This emphasizes the potential of a solar central temperature diagnostic for ADM, provided that the precision of solar neutrino measurements increases, together with the accuracy of SSM.
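The $\chi_{T_{c}}^{2}$ discriminant above is simple enough to evaluate directly; in the sketch below the model temperatures fed in are illustrative, and the $99\%$ CL threshold for one degree of freedom is the standard chi-squared value:

```python
# chi^2 discriminant for the solar central temperature, with the
# seismic-model reference Tc = 15.750 +/- 0.5 MK. The model
# temperatures fed in below are illustrative only.
TC_SSM, SIGMA_TC = 15.750, 0.5  # MK

def chi2_tc(tc_model: float) -> float:
    return ((TC_SSM - tc_model) / SIGMA_TC) ** 2

CL99_DOF1 = 6.63  # 99% CL threshold for chi-squared with 1 d.o.f.

# A DM-cooled core ~2.6 sigma below the reference is disfavoured:
print(chi2_tc(15.750 - 2.6 * SIGMA_TC) > CL99_DOF1)  # True
```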
Although a solar central temperature analysis seems encouraging for the prospect of DM searches, asteroseismology arises as even more promising. To frame the asteroseismic analysis for stars A and B, we first compared the $r_{02}$ diagnostics obtained from helioseismic observational data and those determined from the computed solar models including ADM. We found that the low-mass, high-cross section region of the parameter space shows a stark disagreement not only with the observational data, but also with the benchmark. In fact, in that region, the discrepancy is much more significant than some degree of incompatibility found between the benchmark and observational data, which was to be expected due to model inaccuracies. This confirms that the low-mass, high-cross section region of the parameter space is indeed significantly disfavoured. Again, using the example of a $5 \, \text{GeV}$ particle, ADM with $\sigma^{SD}_{\chi n} \gtrsim 6.3 \times 10^{-36} \, \text{cm}^{2}$ is incompatible with the $r_{02}$ data obtained from helioseismology at least at a $99 \%$ CL.
\subsection{KIC 7871531 (star A, $0.85 \, \text{M}_{\astrosun}$)}
\begin{figure}
\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.7cm, width=1\columnwidth]{fig4}
\caption{Sum of squared errors for the $r_{02}$ diagnostic of KIC 7871531 (star A) models including ADM particles with $\sigma_{\chi \chi} = 10^{-24} \, \text{cm}^{2}$. Also shown are the $90\%$, $95\%$ and $99\%$ CLs corresponding to these $\chi^{2}$s with the number of $d.o.f. = 3$. The empty region in the low mass, high cross section region of the parameter space corresponds to stellar models that did not converge. \label{kic7871531_adm_r02}}
\end{figure}
The benchmark model obtained for star A has a mass of $0.85 \, \text{M}_{\astrosun}$ and an age of $9.41 \, \text{Gyr}$. Thus, we expected to find a greater DM impact for this star than for the Sun, not only because this is a less massive star, for which the energy transported by DM represents a larger fraction of the total luminosity, but also because DM would have accumulated for longer inside this star. The results shown in figure \ref{kic7871531_adm_tc}, compared with those in figure \ref{sun_adm_tc_pct}, confirm this expectation for ADM. In fact, both for WIMPs and for ADM, the drop in central temperature, relative to the benchmark, is greater for star A than for the Sun, almost everywhere in the parameter space. Take, for example, an ADM particle with $m_{\chi} = 10 \, \text{GeV}$ and $\sigma^{SD}_{\chi n} = 10^{-35} \, \text{cm}^{2}$: while for the Sun the drop is just short of $6 \%$ of the benchmark central temperature, for star A the enhanced energy transport implies a reduction of almost $8 \%$.
For stars other than the Sun we do not have central temperature estimates based on neutrino observations, hence we cannot conduct a central temperature analysis as we did for the Sun. Nonetheless, asteroseismology offers a probe of the stellar core. A $\chi_{r_{02}}^{2}$ diagnostic evidences the disagreement between asteroseismic data and models including ADM, as shown in figure \ref{kic7871531_adm_r02}. The stellar models on the top left of figure \ref{kic7871531_adm_r02} did not converge because the atmosphere could not be reconstituted. In a similar trend to the Sun, the low-mass, high-cross section region of the parameter space is clearly incompatible with the asteroseismic data. For that region, there is also a significant departure from the benchmark, which is not present throughout the rest of the parameter space. An ADM particle with $m_{\chi} = 5 \, \text{GeV}$ and $\sigma^{SD}_{\chi n} = 4 \times 10^{-36} \, \text{cm}^{2}$ is strongly disfavoured by our analysis, although this limit on the cross section increases to considerably less stringent values for heavier particles, with masses $\geq 6 \, \text{GeV}$. A comparison between the $\chi_{r_{02}}^{2}$ results of star A and of solar models including ADM also confirms the expectation that this less massive, older star evidences a greater DM impact. While the Sun disfavours ADM particles with smaller cross sections below $6 \, \text{GeV}$, star A is more competitive above that mass. In fact, whereas the Sun excludes ADM with $\sigma^{SD}_{\chi n} \lesssim 10^{-34} \, \text{cm}^{2}$ for masses just shy of $8 \, \text{GeV}$ at $99 \%$ CL, this star can go up to almost $9 \, \text{GeV}$. This is interesting considering that we are matching the Sun, outfitted with $17$ individual $r_{02}$ ratios, against a star for which only $3$ of these ratios are available.
\subsection{KIC 8379927 (star B, $1.12 \, M_{\astrosun}$)}
\begin{figure*}
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.7cm, width=1\columnwidth]{fig5a} \label{kic8379927_wimp_r02}}
~
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.7cm, width=1\columnwidth]{fig5b} \label{kic8379927_adm_r02}}
\caption{Sum of squared errors for the $r_{02}$ diagnostic of star B (KIC 8379927) models including WIMPs (left) and ADM (right).
Also shown are the $90\%$, $95\%$ and $99\%$ CLs corresponding to these $\chi^{2}$s with the number of $d.o.f. = 11$. \label{kic8379927_dm_r02}}
\end{figure*}
Star B has a benchmark model slightly more massive than the Sun, with $1.12 \, M_{\astrosun}$, and it is also considerably younger, with an age of $1.82 \, \text{Gyr}$. As previously discussed, in general, we would expect DM to produce a smaller departure from standard stellar modelling for a star like this. Again, this hypothesis is supported by the comparison between the drop in central temperature for models of the more massive star B and of the Sun, figures \ref{kic8379927_adm_tc} and \ref{sun_adm_tc_pct}, respectively.
For star B, $11$ individual $r_{02}$ ratios are available. This is considerably more than the $3$ ratios accessible for star A, but still fewer than the $17$ available for the Sun. Also, on average, the precision of an $r_{02}$ ratio for star B is $\sim 15 \%$ better than for star A. We computed the $\chi_{r_{02}}^{2}$ diagnostic for models of star B with WIMPs and ADM; the results are shown in figure \ref{kic8379927_dm_r02}.
The ADM impact on this star is most significant for larger DM masses, since its higher core temperature leads to a DM mean free path $\sim 10 \%$ greater than in the less massive star A over that mass range. As a consequence, energy transport by a single DM particle is more efficient in this case. We note, however, that the total energy transported by DM depends both on the number of accumulated particles and on the efficiency of the transport. For a $5 \, \text{GeV}$ particle, WIMPs with $\sigma^{SD}_{\chi n} = 6.3 \times 10^{-36} \, \text{cm}^{2}$ and ADM with $\sigma^{SD}_{\chi n} = 4.5 \times 10^{-36} \, \text{cm}^{2}$ are incompatible with the observational data up to a $99 \%$ CL.
\section{Discussion \label{discussion_conclusions}}
\begin{figure*}
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.5cm, width=1\columnwidth]{fig6a} \label{limits_WIMP}}
~
\subfloat{\includegraphics[clip=true, trim= 0.5cm 0cm 3cm 1.5cm, width=1\columnwidth]{fig6b} \label{limits_ADM}}
\caption{$90\%$ CL limits ascertained from this work in the WIMP (left) and ADM (right) scenarios for: Sun $\chi_{T_{c}}^{2}$ (dotted {\color{red} red}), Sun $\chi_{r_{02}}^{2}$ (solid {\color{red} red}), star B (KIC 8379927) $\chi_{r_{02}}^{2}$ (solid {\color{blue} blue}), star A (KIC 7871531) $\chi_{r_{02}}^{2}$ (solid {\color{green} green}). The dashed {\color{blue} blue} line is the projected $90\%$ CL limit corresponding to a $10\%$ increase in precision for the mode frequencies. The impact due to WIMPs could only be significantly diagnosed for star B (KIC 8379927), hence the absence of constraints for the other stars. For comparison, $90\%$ CL limits from some direct detection experiments are also shown (references given in text): XENON100 (solid black line), COUPP flat efficiency model (dashed black line), PICO-2L (dash-dotted black line), PICASSO (dotted black line). \label{text}}
\end{figure*}
In an effort to understand the potential of asteroseismology as a complementary way to search for DM, we analysed the effects of WIMPs and ADM in three stars using the core-sensitive asteroseismic ratio $r_{02}$. As such, we attempted to constrain the properties of low-mass ADM with an effective spin-dependent coupling by comparing observational data with results of stellar models including DM energy transport.
The asteroseismic analysis disfavours the low-mass, high-cross section region of the ADM parameter space explored here. This incompatibility was found for all three stars. For example, at $99\%$ CL, for $m_{\chi} = 5 \, \text{GeV}$, the $r_{02}$ data is incompatible with ADM models with $\sigma_{\chi n}^{SD} \gtrsim 6 \times 10^{-36} \, \text{cm}^{2}$ for the Sun, $\sigma_{\chi n}^{SD} \gtrsim 5 \times 10^{-36} \, \text{cm}^{2}$ for star A and $\sigma_{\chi n}^{SD} \gtrsim 4 \times 10^{-36} \, \text{cm}^{2}$ for star B.
In this work, we considered only an effective SD coupling for the DM-nucleon interaction. Our results for the Sun show less dramatic differences between data and model than what \cite{Vincent2015a} found. This can be explained by the additional reference uncertainty, which we included to reflect the unsolved solar abundance problem. One way to look at the problem is to take the most recent abundances of AGSS09ph at face value. The other, which we adopted here, is not so much to question the accuracy of those abundances, but instead to capture some of the uncertainties in the SSM by considering the differences between solar models computed with the abundances by AGSS09ph and those by GS98.
Figures \ref{limits_WIMP} and \ref{limits_ADM} show the limits and exclusion regions ascertained from this work for the WIMP and ADM scenarios, respectively. For comparison, limits are also shown for some direct detection experiments: the scintillation and ionization detector XENON100 \citep{Aprile2013} and the bubble chambers COUPP \citep{Behnke2012}, PICASSO \citep{Archambault2012} and PICO-2L \citep{Amole2015}.
In the case of WIMPs, where accumulation is considerably weaker, only the more massive star B is significantly affected. Both the Sun and star A are relatively unaffected by WIMPs within the parameter range explored here, even though they are older stars. There is a trade-off between DM accumulation and stellar age in terms of DM impact.
Notice from our solar central temperature analysis that solar models with low-mass, high-cross section ADM do not compare well with current SSM. This comparison should be taken cautiously since the central temperature of SSM is very sensitive to model input physics. The SSM has been refined over the last two decades to account for the solutions put forward to solve several problems in solar physics \citep{Turck-Chieze2011}, but a definite answer is yet to be found. More accurate abundances and opacities, as well as more accurate and precise measurements of the $^{8}B$ neutrino flux could alter the prediction for the central temperature of the SSM.
For $m_{\chi} \gtrsim 5 \, \text{GeV}$, the best constraints from the $r_{02}$ analysis come from star B, a star slightly more massive and considerably hotter, more metallic and younger than the Sun. For ADM, the $90 \%$ CL limit set by this star is competitive with the $90 \%$ CL bounds set by XENON100 for masses below $\sim 7 \, \text{GeV}$. For $m_{\chi} \lesssim 5 \, \text{GeV}$, star A gives the most stringent bounds, competitive with the $90 \%$ CL limits set by COUPP. The ADM impact is so significant in this case because this is an older star, with slightly more than twice the age of the Sun. Because we are dealing with stellar evolution, the effects of DM energy transport are cumulative, which means that significantly older stars can be competitive in constraining DM. In fact, while this star would not show a significant DM impact at a younger age, it does so at an older one.
An interesting aspect of using asteroseismology to constrain DM resides in the fact that objects with different fundamental properties can be affected at distinct levels. We have already discussed the case of less massive stars, which exhibit a greater DM impact. Moreover, we have also mentioned that older stars are more affected than similar younger objects, which is only natural since DM has accumulated for longer, and in greater numbers, in those older stars. The number of ADM particles grows linearly with time after the geometrical limit is reached early in the life of a star, and the luminosity transported by DM is proportional to that number. Hence, this luminosity grows linearly with the age of the star. For an object like star A, with an age roughly twice that of the Sun, we can expect the number of accumulated particles to be greater by a factor of $2$ in comparison with the Sun. This can translate into a significant impact on the stellar structure, which explains, at least in part, why we were able to set constraints using star A, for which only $3$ individual $r_{02}$ ratios are available. In the future, stellar age should also be a relevant selection criterion when attempting to constrain DM properties with asteroseismology.
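The factor-of-two estimate above reduces to simple arithmetic; in this sketch the capture rate is an arbitrary placeholder, and the solar age ($\sim 4.57$ Gyr) is the standard value, not quoted in the text:

```python
# In the linear-accumulation regime (after the geometric limit is
# reached), N_chi(t) = C * t and L_DM is proportional to N_chi, so
# the impact scales with stellar age. C is an arbitrary placeholder.
AGE_SUN_GYR = 4.57      # standard solar age (assumption, not in text)
AGE_STAR_A_GYR = 9.41   # benchmark age of KIC 7871531 (star A)

def n_accumulated(age_gyr: float, capture_rate: float = 1.0) -> float:
    return capture_rate * age_gyr

ratio = n_accumulated(AGE_STAR_A_GYR) / n_accumulated(AGE_SUN_GYR)
print(round(ratio, 2))  # roughly a factor of 2, as quoted in the text
```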
We asked whether these ADM particles significantly alter the star's internal structure during the course of the star's evolution. We conclude that they do. Moreover, we found that ADM models can be excluded using asteroseismic data with a high statistical significance, for the regions of the parameter space presented before. However, we would like to draw attention to a couple of caveats in our analysis. First, we recall that the solar composition problem indicates some missing physics in the SSM, which most definitely affects the results. We accounted for this uncertainty when comparing solar models with observational data by considering a reference uncertainty corresponding to the difference between models computed with AGSS09ph and GS98. This is not an issue for the other two stars, for which the frequencies are determined with a worse precision, which already covers the reference uncertainty. Second, we note that the calibration processes are prone to degeneracies in the stellar parameters. We would not be able to significantly disfavour a particular DM model if a small change in the benchmark parameters of the stellar model gave a very different result by bringing the $r_{02}$ from the model closer to that of the observational data. We circumvented this issue by making sure that, for a small change in the benchmark parameters of the stellar model, there is only a small change in the diagnostic, for a DM model with maximal impact.
It is possible that a more sophisticated interaction, such as a $q^{2}$ momentum-dependent cross section, would be favoured by a similar analysis at lower cross section values. It would be very interesting to study star B considering instead a momentum-dependent coupling, as \cite{Vincent2015a} did for the Sun. For this, it will be necessary to review the mass evaporation threshold and to understand whether different stars allow for different limits.
The Transiting Exoplanet Survey Satellite (TESS) and the PLAnetary Transits and Oscillations of stars (PLATO) mission are expected to allow for the determination of mode frequencies with even better precision than \textit{Kepler}. Figure \ref{limits_ADM} displays the projected $90\%$ CL ADM limit for the asteroseismic analysis of star B with a $10 \%$ increase in the frequency precision of all modes. Furthermore, improvements of more than $10\%$ seem feasible; for example, a frequency precision of about $0.1 \, \mu \text{Hz}$ is expected to be achieved with PLATO \citep{Rauer2014}, corresponding to an average improvement of at least $\sim 30 \%$ across all the detected modes of star B.
Moreover, \textit{Gaia}'s first Intermediate Data Release is expected in a few months, and it will already provide parallaxes for common \textit{Kepler} targets. In the future, about one billion stars will be mapped, and naturally \textit{Gaia} will share quite a few targets with TESS and PLATO. This raises the possibility of having a large number of stars with frequencies determined to very high precision for which high-quality astrometric data is also available, thus setting tighter input observational constraints for modelling attempts.
As our understanding of stellar physics improves, asteroseismic diagnostics are starting to offer a complementary approach to corroborate direct detection limits. Asteroseismic studies of DM are essentially competitive for low DM masses of a few GeV, just above the evaporation mass, where DM produces a significant impact on stars. This is a very interesting result considering that present direct DM detection experiments are not able to accurately probe the parameter space of low-mass DM particles.
\acknowledgments
JC acknowledges the support of the Alexander von Humboldt-Stiftung Foundation. We thank the referee for his/her comments, which significantly improved our work.
\section{Introduction}
In the last decade, there have been extensive studies in Extended Theories
of Gravity (ETG) such as the Lovelock and $f(R)$ gravity theories. The main
motivation to study the ETG is to understand the accelerated expansion of
the universe and the issue of dark matter/energy (see \cite{1} and
references therein for a general review). One of the most attractive
branches of the ETG is the $f(R)$ gravity theory in which the standard
Einstein's gravity is extended with an arbitrary function of the Ricci
scalar $R$ instead of the linear one \cite{1}. In this model, the Ricci
scalar $R$ in the Einstein$-$Hilbert action is replaced with $f(R)=R+\alpha
g(R),$ where $g(R)$ is an arbitrary function of $R$ so that, in the limit $\alpha =0,$ one recovers the Einstein limit. Although the majority of
researchers prefer to use this ansatz, in general, finding an exact analytic
solution to the field equations is not an easy task. As far as analytic
exact solutions are concerned, static, spherically symmetric models in $f(R)$
gravity have been shown to serve for this purpose \cite{2,3,4,5,6}. In this
context of static, spherically symmetric solutions of $f(R)$ gravity, the
solutions admitting black holes have attracted much attention.
In the context of static, spherically symmetric $f(R)$ gravity, it has recently been shown \cite{7} that an exact analytic solution is also
possible if one assumes $f(R)$ to have the form of $f(R)=\xi \left(
R+R_{1}\right) +2\alpha \sqrt{R+R_{0}},$ in which $\xi ,\alpha ,R_{0}$ and $R_{1}$ are constants, chosen a priori to secure the Einstein limit by setting the
constants $R_{0}=R_{1}=\alpha =0$ and $\xi =1.$ In this model of $f(R)$
gravity, exact solutions with external electromagnetic sources (both linear
and nonlinear) were found. It was shown that the solution with a linear
electromagnetic field does not admit a black hole while the solution with a
nonlinear electromagnetic source admits a black hole solution. The physical
properties of the latter solution were investigated by calculating thermodynamic quantities, and it was shown to satisfy the first law of thermodynamics. The solution sourced by a linear electromagnetic field resulted in a naked curvature singularity at $r=0,$ which is a
typical central singularity peculiar to spherically symmetric systems. The
solution given in \cite{7} is a kind of extension of the global monopole solution \cite{8}, which represents a solution of Einstein's equations
with spherical symmetry and matter that extends to infinity. It can also be
interpreted as a cloud of cosmic strings with spherical symmetry \cite{9}.
Hence, the spacetime is conical. However, with the inclusion of a linear or
nonlinear electromagnetic field, the spacetime is no more conical in the
context of $f(R)$ gravity.
Within the framework of the ETG, black hole solutions have been widely
studied in the literature (see \cite{1,10} and references therein for a
complete review). However, solutions that result in naked singularities have not been studied in detail. In physics, naked
singularities are considered to be a threat to the cosmic censorship
hypothesis. Furthermore, as in classical general relativity, naked singularities are not as well understood as black hole solutions in the context of $f(R)$ gravity. This remains a fundamental problem to be solved in general relativity as well as in the ETG. Another important difficulty in resolving this problem is the scale on which the curvature
singularity occurs. On these small scales, it is believed that the classical
methods should be replaced with quantum techniques in resolving the
singularity problems that necessitate the use of quantum gravity. Since the
quantum theory of gravity is still ``under construction,'' an alternative method was proposed by Wald \cite{11} and further developed by Horowitz and Marolf (HM) \cite{12} to determine the character of classically singular spacetimes and to see whether quantum effects have any chance to heal or regularize the dynamics and restore predictability when the singularity is probed with quantum particles/fields.
In this paper, we investigate the occurrence of naked singularities in the
context of $f(R)$ gravity from the point of view of quantum mechanics. We
believe that this will be a unique example wherein the formation of classically naked curvature singularities in $f(R)$ gravity will be probed
with quantum fields/particles that obey the Klein$-$Gordon, Dirac and
Maxwell equations. The criterion proposed by HM will be used in this study
to investigate the occurrence of naked singularities.
This criterion has been used successfully for other spacetimes to check
whether the classically singular spacetimes are quantum mechanically regular
or not. As examples, the negative mass Schwarzschild spacetime, charged
dilatonic black hole spacetime and fundamental string spacetimes are
considered in \cite{12}. An alternative function space, namely the Sobolev space instead of the Hilbert space, has been introduced in \cite{13} for
analyzing the singularities within the framework of quantum mechanics.
Helliwell and Konkowski have studied quasiregular spacetimes \cite{14}, the Gal'tsov$-$Letelier$-$Tod spacetime \cite{15}, Levi-Civita spacetimes \cite{16,17}, and
recently, they have also considered conformally static spacetimes \cite{18}.
Pitelli and Letelier have studied spherical and cylindrical topological
defects \cite{19}, Ba\~{n}ados$-$Teitelboim$-$Zanelli (BTZ) spacetimes \cite{20}, the global monopole spacetime \cite{21} and cosmological spacetimes \cite{22}. Quantum singularities in matter-coupled $2+1$ dimensional black hole
spacetimes are considered in \cite{23}. Quantum singularities are also
considered in Lovelock theory \cite{24} and linear dilaton black hole
spacetimes \cite{25}. Recently, the occurrence of naked singularities in a $2+1$ dimensional magnetically charged solution in Einstein$-$Power$-$Maxwell theory has also been considered \cite{26}.
The main theme in these studies is to understand whether these classically
singular spacetimes turn out to be quantum mechanically regular if they are
probed with quantum fields rather than classical particles.
The solution to be investigated in this paper is a kind of $f(R)$ gravity
extension of the analysis presented in \cite{21} for the global monopole
spacetime. The inclusion of the linear Maxwell field within the context of
$f(R)$ gravity affects the topology significantly and removes the conical
nature at infinity. Furthermore, a true timelike naked curvature singularity is created at $r=0,$ which is peculiar to spherically symmetric
systems. We investigate this singularity within the framework of quantum
mechanics by employing three different quantum fields/particles obeying the Klein$-$Gordon, Dirac and Maxwell equations, with different spin structures.
The paper is organized as follows: in Sec. II, we review the solution found recently in \cite{7} and give the structure of the spacetime. In Sec. III,
first, the definition of quantum singularity for static spacetimes is
briefly introduced. Then, the quantum fields obeying the Klein$-$Gordon,
Dirac and Maxwell equations are used to probe the singularity. The paper
ends with a conclusion in Sec. IV.
\section{The Metric for $f(R)$ Gravity Coupled to Maxwell Fields and
Spacetime Structure}
Recently, an exact analytic solution for $f\left( R\right) $ gravity coupled
with linear and nonlinear Maxwell field in four dimensions has been
presented in \cite{7}. The corresponding action for $f\left( R\right) $
gravity coupled with linear Maxwell field in four dimensions is given by
\begin{equation}
S=\int d^{4}x\sqrt{-g}\left[ \frac{f\left( R\right) }{2\kappa }-\frac{1}{4\pi }F\right] ,
\end{equation}
in which $f\left( R\right) $ is a real function of the Ricci scalar $R,$ and
$F=\frac{1}{4}F_{\mu \nu }F^{\mu \nu }$ is the Maxwell invariant. The
Maxwell two-form is given by
\begin{equation}
\mathbf{F}=\frac{Q}{r^{2}}dt\wedge dr+P\sin \theta d\theta \wedge d\varphi ,
\end{equation}
in which $Q$ and $P$ are the electric and magnetic charges, respectively.
The static spherically symmetric metric ansatz is
\begin{equation}
ds^{2}=-B\left( r\right) dt^{2}+\frac{dr^{2}}{B\left( r\right) }+r^{2}\left(
d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) ,
\end{equation}
where $B\left( r\right) $ stands for the only metric function to be found.
The Maxwell equations $\left( \text{i.e. }dF=0=d^{\ast }F\right) $ are satisfied, and the field equations are given by
\begin{equation}
f_{R}R_{\mu }^{\nu }+\left( \square f_{R}-\frac{1}{2}f\right) \delta _{\mu
}^{\nu }-\nabla ^{\nu }\nabla _{\mu }f_{R}=\kappa T_{\mu }^{\nu },
\end{equation}
in which
\begin{eqnarray}
f_{R} &=&\frac{df\left( R\right) }{dR}, \\
\square f_{R} &=&\frac{1}{\sqrt{-g}}\partial _{\mu }\left( \sqrt{-g}\partial
^{\mu }\right) f_{R}, \\
\nabla ^{\nu }\nabla _{\mu }f_{R} &=&g^{\alpha \nu }\left[ \left(
f_{R}\right) _{,\mu ,\alpha }-\Gamma _{\mu \alpha }^{m}\left( f_{R}\right)
_{,m}\right] ,
\end{eqnarray}
while the energy momentum tensor is
\begin{equation}
4\pi T_{\mu }^{\nu }=-F\delta _{\mu }^{\nu }+F_{\mu \lambda }F^{\nu \lambda
}.
\end{equation}
Furthermore, the trace of the field equation (4) reads
\begin{equation}
f_{R}R+\left( d-1\right) \square f_{R}-\frac{d}{2}f=\kappa T,
\end{equation}
with $T=T_{\mu }^{\mu }.$ The non-zero energy momentum tensor components are
\begin{equation}
T_{\mu }^{\nu }=\frac{P^{2}+Q^{2}}{8\pi r^{4}}\,\mathrm{diag}\left[ -1,-1,1,1\right] ,
\end{equation}
and, with zero trace, we have
\begin{equation}
f=\frac{1}{2}f_{R}R+3\square f_{R}.
\end{equation}
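As a quick consistency check (ours, not spelled out in \cite{7}), the vanishing trace invoked here follows directly from equation (10): the linear Maxwell stress tensor is traceless in four dimensions,

```latex
% Trace of the energy-momentum tensor of equation (10):
T = T_{\mu }^{\mu } = \frac{P^{2}+Q^{2}}{8\pi r^{4}}
    \left( -1-1+1+1\right) = 0 ,
```

so the trace equation (9) with $\kappa T=0$ reduces to a relation between $f$, $f_{R}R$ and $\square f_{R}$ alone, which is equation (11).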
With reference to \cite{7}, the form of the function $f\left( R\right) $ is assumed to be
\begin{equation}
f\left( R\right) =\xi \left( R+\frac{1}{2}R_{0}\right) +2\alpha \sqrt{R+R_{0}},
\end{equation}
which leads to
\begin{equation}
R=\frac{\alpha ^{2}}{\eta ^{2}r^{2}}-R_{0},
\end{equation}
where $\alpha ,$ $\eta ,$ $R_{0}$ and $\xi $ are constants. Consequently, the metric function $B(r)$ is obtained for the free parameters $\alpha =\eta $ as
\begin{equation}
B\left( r\right) =\frac{1}{2}-\frac{m}{r}+\frac{q^{2}}{r^{2}}-\frac{\Lambda
_{eff}}{3}r^{2},
\end{equation}
where $m=\frac{-\xi }{3\eta },$ $\Lambda _{eff}=\frac{-R_{0}}{4}$ and $q^{2}=\frac{Q^{2}+P^{2}}{\xi }.$ As was explained in \cite{7}, due to the
constraints on the free parameters, this solution does not admit the Reissner$-$Nordstr\"{o}m (RN)$-$de Sitter (dS) limit. However, in the limit $\xi =1$
and $P=Q=0,$ the solution reduces to the well known global monopole solution
reported in \cite{8}, which represents a spherically symmetric,
non-asymptotically flat solution with a matter field that extends to
infinity. Furthermore, this solution can also be considered as a
spherically symmetric cloud of cosmic strings, which gives rise to a deficit angle \cite{9}. Therefore, the solution given in equation (14) is a kind
of Einstein$-$Maxwell extension of the global monopole solution in $f\left(
R\right) $ gravity. One of the striking effects of the additional fields is
the removal of the conical geometry of the global monopole spacetime. The
Kretschmann scalar, which indicates the formation of a curvature singularity, is given by
\begin{equation*}
\mathcal{K}=\frac{1}{3}\frac{8\lambda ^{2}r^{8}+4\lambda r^{6}+3r^{4}+12mr^{3}+12r^{2}\left( 3m^{2}-q^{2}\right) -144mq^{2}r+168q^{4}}{r^{8}}.
\end{equation*}
It is obvious that $r=0$ is a typical central curvature singularity. This is
a timelike naked singularity because the behavior of the new radial
coordinate defined by $r_{\ast }=\int \frac{dr}{B(r)}$ is finite when
$r\rightarrow 0.$ Hence, the new solution obtained in \cite{7} and given in
equation (14) is classically a singular spacetime.
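As an illustrative numerical check (a sketch in Python with \texttt{sympy}; the parameter values below are our own representative choices, not taken from the solution), one can confirm that the Kretschmann scalar quoted above blows up as $r\rightarrow 0$ while approaching a finite constant at large $r$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
# Representative positive parameter values (illustrative choices, not from the paper)
lam, m, q = sp.Rational(1, 4), 1, sp.Rational(1, 2)

# Kretschmann scalar as quoted in the text
K = sp.Rational(1, 3) * (8*lam**2*r**8 + 4*lam*r**6 + 3*r**4 + 12*m*r**3
                         + 12*r**2*(3*m**2 - q**2) - 144*m*q**2*r + 168*q**4) / r**8

print(sp.limit(K, r, 0, '+'))   # -> oo : r = 0 is a curvature singularity
print(sp.limit(K, r, sp.oo))    # finite constant, 8*lam**2/3
```

The divergent limit reflects the central curvature singularity at $r=0$, while the finite large-$r$ limit is set by the $\lambda ^{2}$ term.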
Our aim in the next section is to investigate this classically singular
spacetime with regard to the quantum mechanical point of view.
\section{Quantum Singularities}
One of the important predictions of Einstein's theory of general
relativity is the formation of spacetime singularities. In classical general
relativity, singularities are defined as the points in which the evolution
of timelike or null geodesics is not defined after a proper time. According
to the classification of the classical singularities devised by Ellis and
Schmidt, scalar curvature singularities are the strongest ones in the sense
that the spacetime cannot be extended and all physical quantities, such as
the gravitational field, energy density and tidal forces, diverge at the
singular point. In black hole spacetimes, the location of the curvature
singularity is at $r=0$ and is covered by horizon(s). As long as the
singularities are hidden by horizon(s), they do not constitute a threat to
the Penrose cosmic censorship hypothesis. However, there are some cases that
the singularity is not hidden and hence, it is \textit{naked}. In the case
of naked singularities, further care is required because they violate the
cosmic censorship hypothesis. The resolution of the naked singularities
stands as one of the most drastic problems in general relativity to be
solved.
Naked singularities that occur at $r=0$ are on the very small scales where
classical general relativity is expected to be replaced by quantum theory of
gravity. In this paper, the occurrence of naked singularities in $f(R)$
gravity will be analyzed through a quantum mechanical point of view. In
probing the singularity, quantum test particles/fields obeying the
Klein$-$Gordon, Dirac and Maxwell equations are used. In other words, the
singularity will be probed with spin $0$, spin $1/2$ and spin $1$ fields.
The reason for using three different types of field is to clarify whether or
not the classical singularity is sensitive to the spin of the fields.
Our analysis will be based on the pioneering work of Wald, which was further
developed by Horowitz and Marolf (HM), to probe the classical singularities with quantum test
particles obeying the Klein$-$Gordon equation in static spacetimes having
timelike singularities. According to HM, the singular character of the
spacetime is defined as the ambiguity in the evolution of the wave
functions. That is to say, the singular character is determined in terms of
the ambiguity when attempting to find a self-adjoint extension of the
operator to the entire Hilbert space. If the extension is unique, it is said
that the space is quantum mechanically regular. A brief review now follows:
Consider a static spacetime $\left( M,g_{\mu \nu }\right) $\ with a timelike
Killing vector field $\xi ^{\mu }$. Let $t$ denote the Killing parameter and
$\Sigma $\ denote a static slice. The Klein$-$Gordon equation in this space
is
\begin{equation}
\left( \nabla ^{\mu }\nabla _{\mu }-M^{2}\right) \psi =0.
\end{equation}
This equation can be written in the form
\begin{equation}
\frac{\partial ^{2}\psi }{\partial t^{2}}=\sqrt{f}D^{i}\left( \sqrt{f}D_{i}\psi \right) -fM^{2}\psi =-A\psi ,
\end{equation}
in which $f=-\xi ^{\mu }\xi _{\mu }$ and $D_{i}$ is the spatial covariant
derivative on $\Sigma $. The Hilbert space $\mathcal{H}$, $\left(
L^{2}\left( \Sigma \right) \right) $\ is the space of square integrable
functions on $\Sigma $. The domain of an operator $A,$ $D(A),$ is taken in
such a way that it does not enclose the spacetime singularities. An
appropriate set is $C_{0}^{\infty }\left( \Sigma \right) $, the set of
smooth functions with compact support on $\Sigma $. The operator $A$ is
real, positive and symmetric; therefore, its self-adjoint extensions always
exist. If it has a unique extension $A_{E},$ then $A$ is called
essentially self-adjoint \cite{27,28,29}. Accordingly, the Klein$-$Gordon
equation for a free particle satisfies
\begin{equation}
i\frac{d\psi}{dt}=\sqrt{A_{E}}\psi,
\end{equation}
with the solution
\begin{equation}
\psi \left( t\right) =\exp \left[ -it\sqrt{A_{E}}\right] \psi \left(
0\right) .
\end{equation}
If $A$ is not essentially self-adjoint, the future time evolution of the
wave function (18) is ambiguous. Then the HM criterion defines the spacetime
as quantum mechanically singular. However, if there is only a single
self-adjoint extension, the operator $A$ is said to be\ essentially
self-adjoint and the quantum evolution described by Eq.(18) is uniquely
determined by the initial conditions. According to the HM criterion, this
spacetime is said to be quantum mechanically non-singular. In order to
determine the number of self-adjoint extensions, the concept of deficiency
indices is used. The deficiency subspaces $N_{\pm }$ are defined by (see
Ref. \cite{13} for a detailed mathematical background)
\begin{align}
N_{+}& =\{\psi \in D(A^{\ast }),\text{ \ \ \ \ \ \ }A^{\ast }\psi =Z_{+}\psi
,\text{ \ \ \ \ \ }ImZ_{+}>0\}\text{ \ \ \ \ \ with dimension }n_{+} \\
N_{-}& =\{\psi \in D(A^{\ast }),\text{ \ \ \ \ \ \ }A^{\ast }\psi =Z_{-}\psi
,\text{ \ \ \ \ \ }ImZ_{-}<0\}\text{ \ \ \ \ \ with dimension }n_{-} \notag
\end{align}
The dimensions $\left( n_{+},n_{-}\right) $ are the deficiency
indices of the operator $A$. The indices $n_{+}(n_{-})$ are completely
independent of the choice of $Z_{+}(Z_{-})$ depending only on whether or not
$Z$ lies in the upper (lower) half complex plane. Generally one takes
$Z_{+}=i\lambda $ and $Z_{-}=-i\lambda $, where $\lambda $ is an arbitrary
positive constant necessary for dimensional reasons. The determination of
deficiency indices is then reduced to counting the number of solutions of
$A^{\ast }\psi =Z\psi $; (for $\lambda =1$),
\begin{equation}
A^{\ast }\psi \pm i\psi =0
\end{equation}
that belong to the Hilbert space $\mathcal{H}$. If there are no square
integrable solutions (i.e. $n_{+}=n_{-}=0$), the operator $A$ possesses a
unique self-adjoint extension and is essentially self-adjoint. Consequently,
the way to find a sufficient condition for the operator $A$ to be
essentially self-adjoint is to investigate the solutions satisfying Eq. (20)
that do not belong to the Hilbert space.
\subsection{Klein$-$Gordon Fields}
The Klein$-$Gordon equation for a scalar particle with mass $M$ is given by
\begin{equation}
\square \psi =g^{-1/2}\partial _{\mu }\left[ g^{1/2}g^{\mu \nu }\partial
_{\nu }\right] \psi =M^{2}\psi .
\end{equation}
For the metric (3), the Klein$-$Gordon equation becomes
\begin{eqnarray}
\frac{\partial ^{2}\psi }{\partial t^{2}} &=&-B\left( r\right) \left\{
B\left( r\right) \frac{\partial ^{2}\psi }{\partial r^{2}}+\frac{1}{r^{2}}
\frac{\partial ^{2}\psi }{\partial \theta ^{2}}+\frac{1}{r^{2}\sin
^{2}\theta }\frac{\partial ^{2}\psi }{\partial \varphi ^{2}}+\frac{\cot
\theta }{r^{2}}\frac{\partial \psi }{\partial \theta }+\left( \frac{2B\left(
r\right) }{r}+B^{^{\prime }}\left( r\right) \right) \frac{\partial \psi }{
\partial r}\right\} \\
&&+B\left( r\right) M^{2}\psi . \notag
\end{eqnarray}
In analogy with equation (16), the spatial operator $A$ for the massless
case is
\begin{equation}
\emph{A}=B\left( r\right) \left\{ B\left( r\right) \frac{\partial ^{2}}{
\partial r^{2}}+\frac{1}{r^{2}}\frac{\partial ^{2}}{\partial \theta ^{2}}+
\frac{1}{r^{2}\sin ^{2}\theta }\frac{\partial ^{2}}{\partial \varphi ^{2}}+
\frac{\cot \theta }{r^{2}}\frac{\partial }{\partial \theta }+\left( \frac{
2B\left( r\right) }{r}+B^{^{\prime }}\left( r\right) \right) \frac{\partial
}{\partial r}\right\} ,
\end{equation}
and the equation to be solved is $\left( \emph{A}^{\ast }\pm i\right) \psi
=0.$ Using separation of variables, $\psi =R\left( r\right) Y_{l}^{m}\left(
\theta ,\varphi \right) $, we get the radial portion of equation (20) as
\begin{equation}
\frac{d^{2}R\left( r\right) }{dr^{2}}+\frac{\left( r^{2}B\left( r\right)
\right) ^{^{\prime }}}{r^{2}B\left( r\right) }\frac{dR\left( r\right) }{dr}
+\left( \frac{-l\left( l+1\right) }{r^{2}B\left( r\right) }\pm \frac{i}{
B^{2}\left( r\right) }\right) R\left( r\right) =0,
\end{equation}
where a prime denotes the derivative with respect to $r$.
\subsubsection{The case of $r\rightarrow \infty $}
The case $r\rightarrow \infty $ is topologically different compared to the
analysis reported in \cite{21}. In the present problem the geometry is not
conical. The approximate metric when $r\rightarrow \infty $ is
\begin{equation}
ds^{2}\simeq -\left( \frac{R_{0}r^{2}}{12}\right) dt^{2}+\left( \frac{12}{R_{0}r^{2}}\right) dr^{2}+r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) .
\end{equation}
For the above metric, the radial equation (24) becomes
\begin{equation}
\frac{d^{2}R\left( r\right) }{dr^{2}}+\frac{4}{r}\frac{dR\left( r\right) }{dr}=0,
\end{equation}
whose solution is
\begin{equation*}
R\left( r\right) =C_{1}+\frac{C_{2}}{r^{3}},
\end{equation*}
where $C_{1}$ and $C_{2}$ are arbitrary integration constants. It is clearly
observed that the above solution is square integrable as $r\rightarrow
\infty $ if and only if $C_{1}=0.$ Hence, the asymptotic behavior of $R(r)$
is given by $R(r)\simeq \frac{C_{2}}{r^{3}}.$
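This asymptotic solution can be verified symbolically (a sketch with \texttt{sympy}; the variable names are ours):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
R = sp.Function('R')

# Asymptotic radial equation (26): R'' + (4/r) R' = 0
ode = sp.Eq(R(r).diff(r, 2) + (4/r)*R(r).diff(r), 0)
print(sp.dsolve(ode))  # general solution: a constant plus a C/r**3 term

# The decaying branch kept in the text, R = C2/r**3, solves the equation exactly
C2 = sp.symbols('C2')
Rdec = C2/r**3
print(sp.simplify(Rdec.diff(r, 2) + (4/r)*Rdec.diff(r)))  # -> 0
```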
\subsubsection{The case of $r\rightarrow 0$}
Near the origin there is a true timelike curvature singularity resulting
from the existence of charge. Therefore, the approximate metric near the
origin is given by
\begin{equation}
ds^{2}\simeq -(\frac{q^{2}}{r^{2}})dt^{2}+\left( \frac{r^{2}}{q^{2}}\right)
dr^{2}+r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) .
\end{equation}
The radial equation (24) for the above metric reduces to
\begin{equation}
\frac{d^{2}R\left( r\right) }{dr^{2}}-\frac{l\left( l+1\right) }{q^{2}}R\left( r\right) =0,
\end{equation}
whose solution is
\begin{eqnarray}
R\left( r\right) &=&C_{3}e^{\alpha r}+C_{4}e^{-\alpha r} \\
\alpha &=&\frac{\sqrt{l\left( l+1\right) }}{q}, \notag
\end{eqnarray}
where $C_{3}$\ and $C_{4}$ are arbitrary integration constants. The square
integrability of the above solution is checked by calculating the squared
norm of the above solution in which the function space on each $t=$ constant
hypersurface $\Sigma $ is defined as $\mathcal{H}=\{R\mid \parallel
R\parallel <\infty \}.$ The squared norm for the metric (27) is given by
\begin{equation}
\parallel R\parallel ^{2}=\int_{0}^{\text{constant}}\frac{\left\vert R\left(
r\right) \right\vert ^{2}r^{4}}{q^{2}}dr.
\end{equation}
Our calculation has revealed that the solution above is always square
integrable near $r=0,$ even if $l=0,$ which corresponds to the $S$-wave
solutions.
Consequently, the spatial operator $A$ has deficiency indices $n_{+}=n_{-}=1,
$ and it is not essentially self-adjoint. Hence, the classical singularity
at $r=0$ remains quantum mechanically singular when probed with fields
obeying the Klein$-$Gordon equation.
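The square integrability underlying this conclusion can be illustrated with a short \texttt{sympy} computation (the values of $\alpha $, $q$ and the integration cutoff below are arbitrary illustrative choices of ours):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
alpha = 2  # stands for sqrt(l(l+1))/q; the value is an illustrative choice
q = 1

# Both branches of the near-origin solution (29)
R_minus = sp.exp(-alpha*r)
R_plus = sp.exp(alpha*r)

# Squared norm (30) over a finite interval near the origin
norm_minus = sp.integrate(R_minus**2 * r**4 / q**2, (r, 0, 1))
norm_plus = sp.integrate(R_plus**2 * r**4 / q**2, (r, 0, 1))
print(sp.N(norm_minus), sp.N(norm_plus))  # both finite -> both branches survive
```

Since both independent solutions remain normalizable near $r=0$, the deficiency indices are nonzero, which is exactly the ambiguity invoked in the text.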
\subsection{Maxwell fields}
The Newman$-$Penrose formalism will be used to find the source-free Maxwell
fields propagating in the space of $f(R)$ gravity. Let us note that the
signature of the metric (3) is changed to $-2$ in order to use the
source-free Maxwell equations in Newman$-$Penrose formalism. Thus, the
metric (3) is rewritten as
\begin{equation}
ds^{2}=B\left( r\right) dt^{2}-\frac{dr^{2}}{B\left( r\right) }-r^{2}\left(
d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) .
\end{equation}
The four coupled source-free Maxwell equations for electromagnetic fields in
the Newman$-$Penrose formalism are given by
\begin{eqnarray}
D\phi _{1}-\bar{\delta}\phi _{0} &=&\left( \pi -2\alpha \right) \phi
_{0}+2\rho \phi _{1}-\kappa \phi _{2}, \\
\delta \phi _{2}-\Delta \phi _{1} &=&-\nu \phi _{0}+2\mu \phi _{1}+\left(
\tau -2\beta \right) \phi _{2}, \notag \\
\delta \phi _{1}-\Delta \phi _{0} &=&\left( \mu -2\gamma \right) \phi
_{0}+2\tau \phi _{1}-\sigma \phi _{2}, \notag \\
D\phi _{2}-\bar{\delta}\phi _{1} &=&-\lambda \phi _{0}+2\pi \phi _{1}+\left(
\rho -2\epsilon \right) \phi _{2}, \notag
\end{eqnarray}
where $B(r)$ is the metric function given in Eq.(14), $\phi _{0},$ $\phi _{1}
$ and $\phi _{2}$ are the Maxwell spinors, $\epsilon ,\rho ,\pi ,\alpha ,\mu
,\gamma ,\beta $ and $\tau $ are the spin coefficients to be found and the
bar denotes complex conjugation. The null tetrad vectors for the metric
(31) are defined by
\begin{eqnarray}
l^{a} &=&\left( \frac{1}{B(r)},1,0,0\right) , \\
n^{a} &=&\left( \frac{1}{2},-\frac{B(r)}{2},0,0\right) , \notag \\
m^{a} &=&\frac{1}{\sqrt{2}}\left( 0,0,\frac{1}{r},\frac{i}{r\sin \theta }\right) . \notag
\end{eqnarray}
The directional derivatives in the Maxwell equations are defined by
$D=l^{a}\partial _{a},\Delta =n^{a}\partial _{a}$ and $\delta =m^{a}\partial
_{a}.$ We define operators in the following way:
\begin{eqnarray}
\mathbf{D}_{0} &=&D, \notag \\
\mathbf{D}_{0}^{\dagger } &=&-\frac{2}{B\left( r\right) }\Delta , \\
\mathbf{L}_{0}^{\dagger } &=&\sqrt{2}r\text{ }\delta \text{ and }\mathbf{L}_{1}^{\dagger }=\mathbf{L}_{0}^{\dagger }+\frac{\cot \theta }{2}, \notag \\
\mathbf{L}_{0} &=&\sqrt{2}r\text{ }\bar{\delta}\text{ and }\mathbf{L}_{1}=\mathbf{L}_{0}+\frac{\cot \theta }{2}. \notag
\end{eqnarray}
The non-zero spin coefficients are
\begin{equation}
\mu =-\frac{1}{r}\frac{B(r)}{2},\text{ \ \ \ \ }\rho =-\frac{1}{r},\text{ \
\ \ }\gamma =\frac{1}{4}B^{^{\prime }}(r),\text{ \ \ \ \ }\beta =-\alpha =
\frac{1}{2\sqrt{2}}\frac{\cot \theta }{r}.
\end{equation}
The Maxwell spinors are defined by \cite{30}
\begin{eqnarray}
\phi _{0} &=&F_{13}=F_{\mu \nu }l^{\mu }m^{\nu }, \\
\phi _{1} &=&\frac{1}{2}\left( F_{12}+F_{43}\right) =\frac{1}{2}F_{\mu \nu
}\left( l^{\mu }n^{\nu }+\overline{m}^{\mu }m^{\nu }\right) , \notag \\
\phi _{2} &=&F_{42}=F_{\mu \nu }\overline{m}^{\mu }n^{\nu }, \notag
\end{eqnarray}
where $F_{ij}\left( i,j=1,2,3,4\right) $ and $F_{\mu \nu }\left( \mu ,\nu
=0,1,2,3\right) $ are the components of the Maxwell tensor in the tetrad and
tensor bases, respectively. Substituting Eq.(34) into the Maxwell
equations together with the non-zero spin coefficients, the Maxwell equations
become
\begin{gather}
\left( \mathbf{D}_{0}+\frac{2}{r}\right) \phi _{1}-\frac{1}{r\sqrt{2}}\mathbf{L}_{1}\phi _{0}=0, \\
\left( \mathbf{D}_{0}+\frac{1}{r}\right) \phi _{2}-\frac{1}{r\sqrt{2}}\mathbf{L}_{0}\phi _{1}=0, \\
\frac{B\left( r\right) }{2}\left( \mathbf{D}_{0}^{\dagger }+\frac{
B^{^{\prime }}\left( r\right) }{B\left( r\right) }+\frac{1}{r}\right) \phi
_{0}+\frac{1}{r\sqrt{2}}\mathbf{L}_{0}^{\dagger }\phi _{1}=0, \\
\frac{B\left( r\right) }{2}\left( \mathbf{D}_{0}^{\dagger }+\frac{2}{r}\right) \phi _{1}+\frac{1}{r\sqrt{2}}\mathbf{L}_{1}^{\dagger }\phi _{2}=0.
\end{gather}
The equations above will become more tractable if the variables are changed
to
\begin{equation*}
\Phi _{0}=\phi _{0}e^{ikt},\text{ \ \ }\Phi _{1}=\sqrt{2}r\phi _{1}e^{ikt},
\text{ \ \ \ \ }\Phi _{2}=2r^{2}\phi _{2}e^{ikt}.
\end{equation*}
Then we have
\begin{gather}
\left( \mathbf{D}_{0}+\frac{1}{r}\right) \Phi _{1}-\mathbf{L}_{1}\Phi _{0}=0,
\\
\left( \mathbf{D}_{0}-\frac{1}{r}\right) \Phi _{2}-\mathbf{L}_{0}\Phi _{1}=0,
\\
r^{2}B\left( r\right) \left( \mathbf{D}_{0}^{\dagger }+\frac{B^{^{\prime
}}\left( r\right) }{B\left( r\right) }+\frac{1}{r}\right) \Phi _{0}+\mathbf{L}_{0}^{\dagger }\Phi _{1}=0, \\
r^{2}B\left( r\right) \left( \mathbf{D}_{0}^{\dagger }+\frac{1}{r}\right)
\Phi _{1}+\mathbf{L}_{1}^{\dagger }\Phi _{2}=0.
\end{gather}
The commutativity of the operators $\mathbf{L}$ and $\mathbf{D}$ enables us
to eliminate each $\Phi _{i}$ from the above equations, and hence we have
\begin{gather}
\left[ \mathbf{L}_{0}^{\dagger }\mathbf{L}_{1}+r^{2}B\left( r\right) \left(
\mathbf{D}_{0}+\frac{B^{^{\prime }}\left( r\right) }{B\left( r\right) }+
\frac{3}{r}\right) \left( \mathbf{D}_{0}^{\dagger }+\frac{B^{^{\prime
}}\left( r\right) }{B\left( r\right) }+\frac{1}{r}\right) \right] \Phi
_{0}\left( r,\theta \right) =0, \\
\left[ \mathbf{L}_{0}\mathbf{L}_{1}^{\dagger }+r^{2}B\left( r\right) \left(
\mathbf{D}_{0}^{\dagger }+\frac{1}{r}\right) \left( \mathbf{D}_{0}-\frac{1}{r}\right) \right] \Phi _{2}\left( r,\theta \right) =0, \\
\left[ \mathbf{L}_{1}\mathbf{L}_{0}^{\dagger }+r^{2}B\left( r\right) \left(
\mathbf{D}_{0}^{\dagger }+\frac{B^{^{\prime }}\left( r\right) }{B\left(
r\right) }+\frac{1}{r}\right) \left( \mathbf{D}_{0}+\frac{1}{r}\right)
\right] \Phi _{1}\left( r,\theta \right) =0.
\end{gather}
The variables $r$ and $\theta $ can be separated by assuming a separable
solution in the form of
\begin{equation*}
\Phi _{0}\left( r,\theta \right) =f_{0}\left( r\right) \Theta _{0}\left(
\theta \right) ,\text{ \ \ }\Phi _{1}\left( r,\theta \right) =f_{1}\left(
r\right) \Theta _{1}\left( \theta \right) ,\text{ \ \ \ \ }\Phi _{2}\left(
r,\theta \right) =f_{2}\left( r\right) \Theta _{2}\left( \theta \right) .
\end{equation*}
The separation constants for Eq. (45) and Eq. (46) are the same, because
$\mathbf{L}_{n}=-\mathbf{L}_{n}^{\dagger }\left( \pi -\theta \right) ,$ or,
in other words, the operator $\mathbf{L}_{0}^{\dagger }\mathbf{L}_{1}$
acting on $\Theta _{0}\left( \theta \right) $ is the same as the operator
$\mathbf{L}_{0}\mathbf{L}_{1}^{\dagger }$ acting on $\Theta _{2}\left( \theta
\right) $ if we replace $\theta $ by $\pi -\theta $. However, for Eq. (47)
we will assume another separation constant. Furthermore, by defining
$R_{0}\left( r\right) =\frac{f_{0}(r)}{rB\left( r\right) }$, $R_{1}(r)=\frac{
f_{1}\left( r\right) }{r}$ and $R_{2}(r)=\frac{f_{2}\left( r\right) }{r}$,
the radial equations can be written as
\begin{gather}
f_{0}^{^{\prime \prime }}(r)+\frac{2}{r}f_{0}^{^{\prime }}(r)+ \\
\left[ -i\omega \left( \frac{2}{rB\left( r\right) }-\frac{B^{^{\prime
}}\left( r\right) }{B^{2}\left( r\right) }\right) +\frac{\omega ^{2}}{
B^{2}\left( r\right) }-\frac{\epsilon ^{2}}{r^{2}B\left( r\right) }\right]
f_{0}(r)=0, \notag
\end{gather}
\begin{gather}
f_{2}^{^{\prime \prime }}(r)-\frac{2}{r}f_{2}^{^{\prime }}(r)+ \\
\left[ i\omega \left( \frac{2}{rB\left( r\right) }-\frac{B^{^{\prime
}}\left( r\right) }{B^{2}\left( r\right) }\right) +\frac{\omega ^{2}}{
B^{2}\left( r\right) }-\frac{\epsilon ^{2}}{r^{2}B\left( r\right) }\right]
f_{2}(r)=0, \notag
\end{gather}
\begin{gather}
f_{1}^{^{\prime \prime }}(r)+\frac{B^{^{\prime }}\left( r\right) }{B\left(
r\right) }f_{1}^{^{\prime }}(r)+ \\
\left[ \frac{\omega ^{2}}{B^{2}\left( r\right) }-\frac{\eta ^{2}}{
r^{2}B\left( r\right) }\right] f_{1}(r)=0, \notag
\end{gather}
where $\epsilon $ and $\eta $ are the separability constants.
\subsubsection{The case $r\rightarrow \infty $}
For the case $r\rightarrow \infty $, the corresponding metric is given in
Eq.(25). Hence, the radial parts of the Maxwell equations, (48), (49) and
(50), become
\begin{eqnarray}
f_{j}^{^{\prime \prime }}(r)+\frac{2}{r}f_{j}^{^{\prime }}(r) &=&0,\text{ \ \ \ \ \ }j=0,1, \\
f_{2}^{^{\prime \prime }}(r)-\frac{2}{r}f_{2}^{^{\prime }}(r) &=&0.
\end{eqnarray}
Thus, the solutions in the asymptotic case are
\begin{eqnarray}
R_{j}(r) &=&C_{1}+\frac{C_{2}}{r},\text{ \ \ \ }j=0,1\text{\ \ } \\
R_{2}(r) &=&C_{3}+\frac{C_{4}}{r^{3}},
\end{eqnarray}
in which $C_{i}$ are integration constants. The solution above is square
integrable if $C_{1}=$ $C_{3}=0.$ Therefore, the asymptotic form of the
solutions behaves as $R_{j}(r)\sim \frac{C_{2}}{r},$ \ \ \ $j=0,1$ and
$R_{2}(r)\sim \frac{C_{4}}{r^{3}}.$
\subsubsection{The case $r\rightarrow 0$}
The metric near $r\rightarrow 0$ is given in Eq.(27). Hence, the radial
parts of the Maxwell equations (48), (49) and (50) for this case are given
by
\begin{eqnarray}
R_{j}^{^{\prime \prime }}(r)-\frac{2}{r}R_{j}^{^{\prime }}(r)-\frac{\alpha
^{2}}{q^{2}}R_{j}(r) &=&0,\text{ \ }j=1,2 \\
R_{0}^{^{\prime \prime }}(r)+\frac{2}{r}R_{0}^{^{\prime }}(r)-\frac{\eta ^{2}}{q^{2}}R_{0}(r) &=&0,
\end{eqnarray}
whose solutions are obtained as
\begin{eqnarray}
R_{j}(r) &=&C_{3}e^{\frac{\alpha }{q}r}\left( \alpha r-1\right) +C_{4}e^{-\frac{\alpha }{q}r}\left( \alpha r+1\right) ,\text{ \ \ \ \ \ }j=1,2, \\
R_{0}(r) &=&\frac{C_{5}}{r}\sinh \left( \frac{\eta }{q}r\right) +\frac{C_{6}}{r}\cosh \left( \frac{\eta }{q}r\right) ,
\end{eqnarray}
where $C_{i}$ are constants. The above solution is checked for square
integrability. Calculations have revealed that
\begin{equation*}
\parallel R_{i}\parallel ^{2}=\int_{0}^{\text{constant}}\frac{\left\vert
R_{i}\left( r\right) \right\vert ^{2}r^{4}}{q^{2}}dr<\infty ,
\end{equation*}
which indicates that the obtained solutions are square integrable. The
definition of the quantum singularity for Maxwell fields will be the same as
for the Klein$-$Gordon fields. Here, since we have three equations governing
the dynamics of the photon waves, the unique self-adjoint extension
condition on the spatial part of the Maxwell operator should be examined for
each of the three equations. As a result, the occurrence of the naked
singularity in $f(R)$ gravity is quantum mechanically singular if it is
probed with photon waves.
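One piece of this argument can be checked directly (a sketch with \texttt{sympy}; the values $\eta =q=1$ and the cutoff are our own illustrative choices): the $C_{5}$-branch of the near-origin solution indeed solves the radial equation, and its squared norm near the origin is finite.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
eta, q = sp.symbols('eta q', positive=True)

# The C5-branch of (58): R0 = sinh(eta*r/q)/r solves R0'' + (2/r)R0' - (eta/q)**2 R0 = 0
R0 = sp.sinh(eta*r/q)/r
residual = R0.diff(r, 2) + (2/r)*R0.diff(r) - (eta/q)**2*R0
print(sp.simplify(residual))  # -> 0

# Squared norm near the origin stays finite (eta = q = 1 and cutoff 1 are arbitrary)
norm2 = sp.integrate((R0.subs({eta: 1, q: 1}))**2 * r**4, (r, 0, 1))
print(sp.N(norm2))  # finite positive value
```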
\subsection{Dirac Fields}
The Newman$-$Penrose formalism will also be used here to find the massless
Dirac fields (fermions) propagating in the space of $f(R)$ gravity. The
Chandrasekhar-Dirac (CD) equations in the Newman$-$Penrose formalism are
given by
\begin{eqnarray}
\left( D+\epsilon -\rho \right) F_{1}+\left( \bar{\delta}+\pi -\alpha
\right) F_{2} &=&0, \\
\left( \Delta +\mu -\gamma \right) F_{2}+\left( \delta +\beta -\tau \right)
F_{1} &=&0, \notag \\
\left( D+\bar{\epsilon}-\bar{\rho}\right) G_{2}-\left( \delta +\bar{\pi}-
\bar{\alpha}\right) G_{1} &=&0, \notag \\
\left( \Delta +\bar{\mu}-\bar{\gamma}\right) G_{1}-\left( \bar{\delta}+\bar{
\beta}-\bar{\tau}\right) G_{2} &=&0, \notag
\end{eqnarray}
where $F_{1},F_{2},G_{1}$ and $G_{2}$ are the components of the wave
function, $\epsilon ,\rho ,\pi ,\alpha ,\mu ,\gamma ,\beta $ and $\tau $ are
the spin coefficients to be found. The non-zero spin coefficients are given
in Eq.(35). The directional derivatives in the CD equations are the same as
in the Maxwell equations. Substituting non-zero spin coefficients and the
definitions of the operators given in Eq.(34) into the CD equations leads to
\begin{gather}
\left( \mathbf{D}_{0}+\frac{1}{r}\right) F_{1}+\frac{1}{r\sqrt{2}}\mathbf{L}_{1}F_{2}=0, \notag \\
-\frac{B\left( r\right) }{2}\left( \mathbf{D}_{0}^{\dagger }+\frac{
B^{^{\prime }}\left( r\right) }{2B\left( r\right) }+\frac{1}{r}\right) F_{2}+
\frac{1}{r\sqrt{2}}\mathbf{L}_{1}^{\dagger }F_{1}=0, \notag \\
\left( \mathbf{D}_{0}+\frac{1}{r}\right) G_{2}-\frac{1}{r\sqrt{2}}\mathbf{L}_{1}^{\dagger }G_{1}=0, \notag \\
\frac{B\left( r\right) }{2}\left( \mathbf{D}_{0}^{\dagger }+\frac{
B^{^{\prime }}\left( r\right) }{2B\left( r\right) }+\frac{1}{r}\right) G_{1}+
\frac{1}{r\sqrt{2}}\mathbf{L}_{1}G_{2}=0.
\end{gather}
For the solution of the CD equations, we assume a separable solution in the
form of
\begin{eqnarray}
F_{1} &=&f_{1}(r)Y_{1}(\theta )e^{i\left( kt+m\varphi \right) }, \\
F_{2} &=&f_{2}(r)Y_{2}(\theta )e^{i\left( kt+m\varphi \right) }, \notag \\
G_{1} &=&g_{1}(r)Y_{3}(\theta )e^{i\left( kt+m\varphi \right) }, \notag \\
G_{2} &=&g_{2}(r)Y_{4}(\theta )e^{i\left( kt+m\varphi \right) }, \notag
\end{eqnarray}
where $m$ is the azimuthal quantum number and $k$ is the frequency of the
Dirac fields, which is assumed to be positive and real. Since $\left\{
f_{1},f_{2},g_{1},g_{2}\right\} $ and $\left\{
Y_{1},Y_{2},Y_{3},Y_{4}\right\} $ are functions of $r$ and $\theta ,$
respectively, by substituting Eq.(61) into Eq.(60) and applying the
assumptions given by
\begin{eqnarray}
f_{1}(r) &=&g_{2}(r)\text{ \ \ \ \ and \ \ \ }f_{2}(r)=g_{1}(r), \\
Y_{1}(\theta ) &=&Y_{3}(\theta )\text{ \ \ \ \ and \ \ \ }Y_{2}(\theta
)=Y_{4}(\theta ),
\end{eqnarray}
the Dirac equations transform into Eq.(64). In order to solve the radial
equations, the separation constant $\lambda $ should be defined. This is
achieved by using the angular equations. In fact, it is already known from
the literature that the separation constant can be expressed in terms of the
spin-weighted spheroidal harmonics. The radial parts of the Dirac equations
become
\begin{gather}
\left( \mathbf{D}_{0}+\frac{1}{r}\right) f_{1}\left( r\right) =\frac{\lambda
}{r\sqrt{2}}f_{2}\left( r\right) , \\
\frac{B\left( r\right) }{2}\left( \mathbf{D}_{0}^{\dagger }+\frac{
B^{^{\prime }}\left( r\right) }{2B\left( r\right) }+\frac{1}{r}\right)
f_{2}\left( r\right) =\frac{\lambda }{r\sqrt{2}}f_{1}\left( r\right) .
\notag
\end{gather}
We further assume that
\begin{eqnarray*}
f_{1}\left( r\right) &=&\frac{\Psi _{1}\left( r\right) }{r}, \\
f_{2}\left( r\right) &=&\frac{\Psi _{2}\left( r\right) }{r},
\end{eqnarray*}
then Eq.(64) transforms into,
\begin{gather}
\mathbf{D}_{0}\Psi _{1}=\frac{\lambda }{r\sqrt{2}}\Psi _{2}, \\
\frac{B\left( r\right) }{2}\left( \mathbf{D}_{0}^{\dagger }+\frac{
B^{^{\prime }}\left( r\right) }{2B\left( r\right) }\right) \Psi _{2}=\frac{
\lambda }{r\sqrt{2}}\Psi _{1}. \notag
\end{gather}
Note that $\sqrt{\frac{B\left( r\right) }{2}}\mathbf{D}_{0}^{\dagger }\sqrt{
\frac{B\left( r\right) }{2}}=\mathbf{D}_{0}^{\dagger }+\frac{B^{^{\prime
}}\left( r\right) }{2B\left( r\right) }+\frac{1}{r}$, and using this
together with the new functions
\begin{eqnarray*}
R_{1}\left( r\right) &=&\Psi _{1}\left( r\right) , \\
R_{2}\left( r\right) &=&\sqrt{\frac{B\left( r\right) }{2}}\Psi _{2}\left(
r\right) ,
\end{eqnarray*}
and defining the tortoise coordinate $r_{\ast }$ as
\begin{equation}
\frac{d}{dr_{\ast }}=B\frac{d}{dr},
\end{equation}
Eqs.(65) become
\begin{eqnarray}
\left( \frac{d}{dr_{\ast }}+ik\right) R_{1} &=&\frac{\sqrt{B}\lambda }{r}R_{2}, \\
\left( \frac{d}{dr_{\ast }}-ik\right) R_{2} &=&\frac{\sqrt{B}\lambda }{r}R_{1}. \notag
\end{eqnarray}
In order to write Eq.(67) in a more compact form, we combine the solutions
in the following way:
\begin{eqnarray*}
Z_{+} &=&R_{1}+R_{2}, \\
Z_{-} &=&R_{2}-R_{1}.
\end{eqnarray*}
After doing some calculations we end up with a pair of one-dimensional
Schr\"{o}dinger-like wave equations with effective potentials,
\begin{gather}
\left( \frac{d^{2}}{dr_{\ast }^{2}}+k^{2}\right) Z_{\pm }=V_{\pm }Z_{\pm },
\\
V_{\pm }=\left[ \frac{B\lambda ^{2}}{r^{2}}\pm \lambda \frac{d}{dr_{\ast }}\left( \frac{\sqrt{B}}{r}\right) \right] .
\end{gather}
In analogy with equation (16), the radial operator $A$ for the Dirac
equations can be written as,
\begin{equation*}
A=-\frac{d^{2}}{dr_{\ast }^{2}}+V_{\pm }.
\end{equation*}
If we write the above operator in terms of the usual coordinates $r$ by
using Eq.(66), we have
\begin{equation}
A=-\frac{d^{2}}{dr^{2}}-\frac{B^{^{\prime }}}{B}\frac{d}{dr}+\frac{1}{B^{2}}
\left[ \frac{B\lambda ^{2}}{r^{2}}\pm \lambda B\frac{d}{dr}\left( \frac{
\sqrt{B}}{r}\right) \right] .
\end{equation}
Our aim now is to show whether this radial part of the Dirac operator is
essentially self-adjoint or not. This will be achieved by considering
Eq.(20) and counting the number of solutions that do not belong to the
Hilbert space. Hence, Eq.(20) becomes
\begin{equation}
\left( \frac{d^{2}}{dr^{2}}+\frac{B^{^{\prime }}}{B}\frac{d}{dr}-\frac{1}{
B^{2}}\left[ \frac{B\lambda ^{2}}{r^{2}}\pm \lambda B\frac{d}{dr}\left(
\frac{\sqrt{B}}{r}\right) \right] \mp i\right) \psi (r)=0.
\end{equation}
For the asymptotic case, $r\rightarrow \infty $, the above equation
transforms to
\begin{equation}
\frac{d^{2}\psi }{dr^{2}}+\frac{2}{r}\frac{d\psi }{dr}=0,
\end{equation}
whose solution is
\begin{equation}
\psi \left( r\right) =C_{1}+\frac{C_{2}}{r}.
\end{equation}
Clearly the solution is square integrable if $C_{1}=0$. Hence, the solution
is asymptotically well behaved. Near $r\rightarrow 0$, Eq.(71) becomes
\begin{gather}
\frac{d^{2}\psi }{dr^{2}}-\frac{2}{r}\frac{d\psi }{dr}+\frac{\sigma }{r^{3}}\psi =0, \\
\sigma =\mp 2\lambda q, \notag
\end{gather}
whose solution is given by
\begin{equation}
\psi \left( r\right) =\left( \frac{4\sigma }{x^{2}}\right) ^{\frac{3}{2}}\left\{ C_{3}J_{3}\left( x\right) +C_{4}N_{3}\left( x\right) \right\} ,
\end{equation}
where $J_{3}\left( x\right) $ and $N_{3}\left( x\right) $ are Bessel
functions of the first and second kind, and $x=2\sqrt{\frac{\sigma }{r}}.$
As $r\rightarrow 0,$ we have $x\rightarrow \infty .$ The behavior of the
Bessel functions for real $\nu \geq 0$ as $x\rightarrow \infty $ is given by
\begin{eqnarray}
J_{\nu }\left( x\right) &\simeq &\sqrt{\frac{2}{\pi x}}\cos \left( x-\frac{\nu \pi }{2}-\frac{\pi }{4}\right) , \\
N_{\nu }\left( x\right) &\simeq &\sqrt{\frac{2}{\pi x}}\sin \left( x-\frac{\nu \pi }{2}-\frac{\pi }{4}\right) ; \notag
\end{eqnarray}
thus the Bessel functions asymptotically behave as $J_{3}\left( x\right)
\sim \sqrt{\frac{2}{\pi x}}\cos \left( x-\frac{7\pi }{4}\right) $ and
$N_{3}\left( x\right) \sim \sqrt{\frac{2}{\pi x}}\sin \left( x-\frac{7\pi }{4}\right) .$ Checking for the square integrability has revealed that both
solutions are square integrable. Hence, the radial operator of the Dirac
field fails to satisfy a unique self-adjoint extension condition. As a
result, the occurrence of the timelike naked singularity in the context of
$f(R)$ gravity remains singular from the quantum mechanical point of view if
it is probed with fermions.
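The large-argument Bessel asymptotics used in this argument can be checked numerically (a sketch with \texttt{sympy}, which names the Neumann function \texttt{bessely}; the sample point $x=200$ is an arbitrary illustrative choice):

```python
import sympy as sp

# Large-argument forms of J_3 and N_3 quoted in the text
x = sp.Float(200)
amp = sp.sqrt(2/(sp.pi*x))
phase = x - 7*sp.pi/4

J3_err = abs(sp.besselj(3, x).evalf() - (amp*sp.cos(phase)).evalf())
N3_err = abs(sp.bessely(3, x).evalf() - (amp*sp.sin(phase)).evalf())
print(J3_err, N3_err)  # both tiny: the asymptotic forms are accurate at large x
```

Since both asymptotic forms decay only like $x^{-1/2}$, and $x\rightarrow \infty $ corresponds to $r\rightarrow 0$, the square integrability claim follows from the measure, not from any decay of the solutions themselves.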
\section{Conclusion}
In this paper, the formation of the naked singularity in the context of a
model of $f(R)$ gravity is investigated within the framework of quantum
mechanics, by probing the singularity with the quantum fields obeying the
Klein$-$Gordon, Maxwell and Dirac equations. We have investigated the
essential self-adjointness of the spatial part of the wave operator $A$ in
the natural Hilbert space of quantum mechanics, which is a linear space of
square integrable functions. Our analysis has shown that the timelike
naked curvature singularity remains quantum mechanically singular against
the propagation of the aforementioned quantum fields. Another notable
outcome of our analysis is that the spin of the fields is not effective in
healing the naked singularity for the considered model of the $f(R)$
gravity spacetime.
Another alternative function space for analyzing the singularity in this
context is to use the Sobolev space instead of the natural Hilbert space
\cite{13}. The analysis in the Sobolev space entails square integrability both
of the wave function and its derivative. Although the details are not given
in this study, the analysis using the Sobolev space has revealed that,
irrespective of the spin structure of the fields used to probe the
singularity, the considered model of $f(R)$ gravity spacetime remains
quantum mechanically singular.
Hence, the generic conclusion that has emerged from our analysis is that in
the considered model of $f(R)$ gravity, the formation of a timelike naked
singularity is quantum mechanically singular.
It will be interesting for future research to extend the quantum singularity
analysis to other ETG models. Furthermore, it will be a great achievement if
the criterion proposed by HM is extended to stationary metrics. Although the
preliminary work in this direction is considered in \cite{31}, the
formulation has not been fully completed.
\section{Introduction}
Copulas are widely used and well known concepts in the realm of statistics and probability theory. The keystone of the theory is Sklar's theorem and there is a vast literature solely focussing on different proofs of this fundamental result.
Among others there are proofs based on the distributional transform in \cite{Ruschendorf2009} and \cite{Deheuvels2009} and earlier already in \cite{Moore1975}, based on mollifiers in \cite{MR2847456} or the constructive approach by the extension of subcopulas, as it was proved for the bivariate case in \cite{Schweizer1974} and for the general multivariate case in \cite{Sklar1996} or \cite{Carley2002}.
The naive transfer of the subcopula-approach to an infinite-dimensional setting appears to be challenging, since, after the extension of the subcopulas corresponding to the finite-dimensional laws of an infinite-dimensional distribution, one would also have to check that this construction meets the necessary consistency conditions.
In contrast, and besides the approach via distributional transforms (as extended to an infinite dimensional setting in \cite{Benth2020}), a nonconstructive proof based on topological arguments in
\cite{Durante2013} is naturally in tune with an infinite dimensional setting.
In this paper, we will therefore adopt this ansatz and prove Sklar's theorem in infinite dimensions by equipping the space of copulas with an inverse-limit topology that makes it compact and the operation between marginals and copulas induced by Sklar's theorem continuous.
The compactness of copulas is described as "folklore" in \cite{MR2847456} for the finite dimensional case, which is why the transfer to arbitrary dimensions is desirable.
\section{Short Primer on Topological Inverse Systems}
We will frequently use the notation $\bar{\mathbb R}$ for the extended real line $[-\infty,\infty]$.
For any measure $\mu$ on a measurable space $(B,\mathcal B)$ and a measurable function $f:(B,\mathcal B)\to (A,\mathcal A)$ into another measurable space $(A,\mathcal A)$ we denote by $f_*\mu$ the pushforward measure
with respect to $f$ given by $f_*\mu(S):=\mu( f^{-1}(S))$ for all $S\in \mathcal A$.
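For a discrete measure the pushforward is simply mass transport along $f$. The following Python sketch is our own illustration of the definition $f_*\mu(S):=\mu(f^{-1}(S))$ and is not part of the paper:

```python
# Sketch: pushforward of a discrete measure mu under a map f,
# so that (f_* mu)({y}) = mu(f^{-1}({y})).
def pushforward(mu, f):
    """mu: dict mapping points to masses; returns dict f(point) -> mass."""
    out = {}
    for x, m in mu.items():
        out[f(x)] = out.get(f(x), 0.0) + m
    return out

mu = {0: 0.25, 1: 0.25, 2: 0.5}
nu = pushforward(mu, lambda x: x % 2)   # mass of 0 and 2 merges at 0
assert nu == {0: 0.75, 1: 0.25}
```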
For $I$ an arbitrary index set, $B=\bar{\mathbb{R}}^I$ and $\mathcal B=\otimes_{i\in I} \mathcal{B}(\bar{\mathbb R})$, we use the shorter notations $\pi_{J*}\mu=:\mu_J$ for a subset $J\subseteq I$ and $\pi_{\lbrace i\rbrace*}\mu=:\mu_i$ for an element $i\in I$, where $\pi_J$ denotes the canonical projection on $\mathbb{R}^J$. If $J\subset I$ is finite, we denote the corresponding finite dimensional cumulative distribution functions by $F_{\mu_J}$ or $F_{\mu_i}$ respectively, where in the latter we used $J=\{i\}$. We use the notation $\mathcal I$ for the set consisting of all finite subsets of $I$.
Moreover, for a one-dimensional Borel measure $\mu_i$ on $\mathbb R$, we use the notation $F_{\mu_i}^{[-1]}$ for the quantile functions
\begin{equation}\label{Inverse Transform}
F_{\mu_i}^{[-1]}(u) := \inf \left\lbrace x\in(-\infty,\infty) : F_{\mu_i}(x)\geq u\right\rbrace.
\end{equation}
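As an illustration (ours, not from the paper), for a discrete distribution the generalized inverse in \eqref{Inverse Transform} is the first atom at which the cumulative distribution function reaches the level $u$:

```python
# Sketch: quantile function F^{[-1]}(u) = inf{ x : F(x) >= u }
# for a discrete distribution with atoms xs (sorted) and masses probs.
def quantile(xs, probs, u):
    cum = 0.0
    for x, p in zip(xs, probs):
        cum += p
        if cum >= u - 1e-12:   # small tolerance for float round-off
            return x
    return xs[-1]

# F(0) = 0.2, F(1) = 0.5, F(2) = 1.0, so inf{x : F(x) >= 0.5} = 1
assert quantile([0, 1, 2], [0.2, 0.3, 0.5], 0.5) == 1
```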
We will refer to the one dimensional distributions $\mu_i,i\in I$ and equivalently $F_{\mu_i},i\in I$ as marginals of the measure $\mu$. We denote the set of all probability measures on $(\bar{\mathbb{R}}^I,\otimes_{i\in I}\mathcal{B}(\bar{\mathbb{R}}))$ by $\mathcal P(\bar{\mathbb{R}}^I)$.
Moreover, for two topological spaces $X,Y$ we write $X\cong Y$ if they are homeomorphic.
The remainder of the section is mainly based on \cite{MR2599132}.
Let $X_{J}$ be a set for each $J\in \mathcal{I}$ and
\begin{equation*}
(P_{J_1,J_2}:X_{J_2}\to X_{J_1})\qquad \text{for } J_1\subseteq J_2,\text{ with } J_1,J_2\in\mathcal{I}
\end{equation*}
a family of mappings, also called projections, such that
\begin{itemize}
\item[(i)]$P_{J,J}=id_J$ is the identity mapping for all $J\in \mathcal{I}$, and
\item[(ii)] $P_{J_1,J_3}=P_{J_1,J_2}\circ P_{J_2,J_3}$ for all $J_1\subseteq J_2\subseteq J_3$ in $\mathcal{I}$.
\end{itemize}
The system $$(X_J,P_{J_1,J_2},\mathcal{I}):=\left((X_J)_{J\in \mathcal{I}},((P_{J_1,J_2}:X_{J_2}\to X_{J_1})_{\overset{J_1\subseteq J_2}{J_1,J_2\in\mathcal{I}}})\right)
$$
is called an \textit{inverse system} (over the partially ordered set $\mathcal{I}$).
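As a toy illustration (ours, purely hypothetical data), the coordinate restriction maps between finite products satisfy exactly conditions (i) and (ii) above:

```python
# Sketch: restriction maps P_{J1,J2} between coordinate spaces indexed by
# finite subsets J of I form an inverse system: P_{J,J} = id and
# P_{J1,J3} = P_{J1,J2} o P_{J2,J3} for J1 <= J2 <= J3.
def P(J1, J2, x):
    """Restrict a point x (a dict over J2) to the coordinates in J1."""
    return {i: x[i] for i in J1}

x = {1: 0.1, 2: 0.7, 3: 0.4}            # a point of X_{J3}
J1, J2, J3 = {1}, {1, 2}, {1, 2, 3}
assert P(J3, J3, x) == x                          # (i) identity
assert P(J1, J3, x) == P(J1, J2, P(J2, J3, x))    # (ii) compatibility
```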
If $(X_{J},\tau_{J})$ are topological spaces for each $J\in \mathcal{I}$ and $(P_{J_1,J_2})$ are continuous for all $J_1\subseteq J_2$ with $J_1,J_2\in\mathcal{I}$, we call
$$(X_J,\tau_J,P_{J_1,J_2},J\in \mathcal{I}):=\left((X_J,\tau_J)_{J\in \mathcal{I}},((P_{J_1,J_2}:X_{J_2}\to X_{J_1})_{\overset{J_1\subseteq J_2}{J_1,J_2\in\mathcal{I}}})\right)$$ a \textit{topological inverse system}.
A \textit{topological inverse limit} of this inverse system
is a space $X$ together with continuous mappings $P_J:X\to X_J, J\in \mathcal I$, such that $P_{J_1,J_2}P_{J_2}=P_{J_1}$ for all $J_1\subseteq J_2$ in $\mathcal{I}$
(that is, the mappings are \textit{compatible})
and the following \textit{universal property} holds:
Whenever there is a topological space $Y$, such that there are continuous mappings $(\psi_J:Y\to X_J)_{J\in \mathcal I}$ which are compatible, i.e., $P_{J_1,J_2}\psi_{J_2}=\psi_{J_1}$ for all $J_1\subseteq J_2$ in $\mathcal{I}$, then there exists a unique continuous mapping
\begin{equation}\label{universal property of inverse limit}
\Psi:Y\to X,
\end{equation}
with the property $P_J\Psi=\psi_J$ for all $J\in \mathcal I$. We have that
\begin{equation}\label{D: Projective Limit}
\left\lbrace x=(x_J)_{J\in \mathcal{I}} \in \prod_{J\in \mathcal{I}}X_J:P_{J_1,J_2}(\pi_{J_2}(x))=\pi_{J_1}(x) \text{ for }J_1\subseteq J_2\right\rbrace\subseteq \prod_{J\in \mathcal I}X_J
\end{equation}
equipped with the subspace topology with respect to the product topology is an inverse limit of the topological inverse system, induced by the canonical projections $\pi_{J'} ((x_J)_{J\in\mathcal I})=x_{J'}$.
Each topological inverse limit is homeomorphic to this space and therefore to every topological inverse limit
(See the proof of Theorem 1.1.1 in \cite{MR2599132}).
We write $\lim_{\leftarrow}X_J\subseteq \prod_{J\in\mathcal I}X_J$ for the inverse limit as a subset of the product space and we equip it
throughout with the induced subspace topology.
\begin{Lemma}\label{L: Closedness of the Inverse limit}
Let $(X_{J},\tau_{J},\pi_{J_1,J_2})$ be a topological inverse system (over the poset $\mathcal{I}$) of Hausdorff spaces. Then $\lim_{\leftarrow}X_J$ is a closed subset of $\prod_{J\in\mathcal I}X_J$ with respect to the product topology.
\end{Lemma}
\begin{proof}
See \cite[Lemma 1.1.2]{MR2599132}.
\end{proof}
\begin{Lemma}\label{L: Surjectivity of the induced mapping}
Let $X$ be a compact Hausdorff space and $(X_{J},\tau_{J},\pi_{J_1,J_2})$ be a topological inverse system of compact Hausdorff spaces. Let $\psi_J:X\to X_J,\, J\in\mathcal I$ be a family of compatible surjections and $\Psi$ the induced mapping. Then either $\lim_{\leftarrow}X_J=\emptyset$ or $\Psi(X)$ is dense in $\lim_{\leftarrow}X_J$.
\end{Lemma}
\begin{proof}
See \cite[Corollary 1.1.7]{MR2599132}.
\end{proof}
\section{Copulas and Sklar's Theorem }
As they are cumulative distribution functions, copulas in finite dimensions are in one-to-one correspondence with probability measures.
In infinite dimensions we will therefore work with the notion of copula measures as introduced in \cite{Benth2020}.
\begin{Definition}\label{T: Consistent Copulas}
A copula measure (or simply copula) on $\bar{\mathbb{R}}^I$ is a probability measure $C\in\mathcal P(\bar{\mathbb R}^I)$, such that its marginals $C_i$ are uniformly distributed on $[0,1]$.
We will denote the space of copula measures on $\bar{\mathbb R}^I$ by $\mathcal C(\bar{\mathbb R}^I)$.
\end{Definition}
Sklar's theorem as stated below was proved in \cite{Benth2020} by following the arguments for the finite dimensional assertion in \cite{Ruschendorf2009}. Here we give an alternative proof for the infinite dimensional setting using a topological argument as in \cite{Durante2013}.
\begin{Theorem}[Sklar's Theorem]\label{T: Sklar in infinite dimensions}
Let $\mu \in\mathcal P (\bar{
\mathbb R}^I)$ be a probability measure with marginal one-dimensional distributions $\mu_i, i\in I$.
There exists a copula measure $C$, such that for each $J\in \mathcal I$, we have
\begin{equation}\label{Sklar Property}
F_{C_J}\left(\left(F_{\mu_{j}}(x_{j})\right)_{j\in J}\right)=F_{\mu_J}\left((x_{j})_{j\in J}\right)
\end{equation}
for all $(x_{j})_{j\in J}\in \bar{\mathbb{R}}^{ J}$. Moreover, $C$ is unique if $F_{\mu_{i}}$ is continuous for each $i\in I$.
Vice versa, let $C$ be a copula measure on $\bar{\mathbb{R}}^I$ and let $(\mu_i)_{i\in I}$ be a collection of (one-dimensional) Borel probability measures over $\bar{\mathbb R}$.
Then there exists a unique probability measure $\mu\in\mathcal P(\bar{\mathbb R}^I)$, such that
\eqref{Sklar Property} holds.
\end{Theorem}
\section{Topological Properties of Copulas and a Proof of Sklar's Theorem }
The collection $(\mathcal P(\bar{\mathbb R}^J), J\in \mathcal I)$, where each $\mathcal P(\bar{\mathbb R}^J)$ is considered as a topological space with the topology of weak convergence, is a topological inverse system with the projections
$\pi_{J_1,J_2}(\mu_{J_2})=(\mu_{J_2})_{J_1}$ for $\mu_{J_2}\in \mathcal P(\bar{\mathbb R}^{J_2})$ and $J_1,J_2\in \mathcal I$, $J_1\subseteq J_2$. Moreover, observe that each $\mathcal P(\bar{\mathbb R}^J)$ is a Hausdorff space, since it is metrizable by the Prohorov metric (c.f. \cite[Theorem 4.2.5]{Schweitzer2006}).
The space $\lim_{\leftarrow}\mathcal P(\bar{\mathbb R}^J)\subset \prod_{J\in\mathcal I}\mathcal P(\bar{\mathbb R}^J)$ of consistent families of probability measures is a topological inverse limit, equipped with the corresponding inverse limit topology.
A probability measure on $\otimes_{i\in I}\mathcal B(\bar{\mathbb R})$ corresponds one-to-one, via its finite-dimensional distributions, to a consistent family of finite-dimensional distributions, and hence
there is a natural bijection between $\lim_{\leftarrow}\mathcal P(\bar{\mathbb R}^J)$ and $\mathcal P(\bar{\mathbb{R}}^I)$.
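As a toy illustration (ours), consistency of a family of finite-dimensional distributions means that projecting $\mu_{J_2}$ to a subset $J_1\subseteq J_2$ recovers $\mu_{J_1}$; for discrete measures this projection is a sum over the dropped coordinates:

```python
# Sketch: marginalizing a discrete finite-dimensional distribution.
# mu is a dict mapping tuples (indexed by the sorted list J_from) to masses.
def marginal(mu, J_from, J_to):
    idx = [J_from.index(j) for j in J_to]
    out = {}
    for point, m in mu.items():
        key = tuple(point[i] for i in idx)
        out[key] = out.get(key, 0.0) + m
    return out

# mu_{12} on {0,1}^2 with dependent coordinates; both marginals are fair coins
mu12 = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
assert marginal(mu12, [1, 2], [1]) == {(0,): 0.5, (1,): 0.5}
```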
We equip the space $\mathcal P(\bar{\mathbb R}^I)$ with the topology of \textit{weak convergence of the finite dimensional distributions},
which we define as follows:
\begin{Definition}
The topology of convergence of the finite dimensional distributions on $\mathcal P(\bar{\mathbb R}^I)$ is defined as the topology such that $\mathcal P(\bar{\mathbb R}^I)\cong \lim_{\leftarrow}\mathcal P(\bar{\mathbb R}^J)$.
\end{Definition}
$\mathcal P(\bar{\mathbb R}^I)$ with this topology is by definition a topological inverse limit.
Define also $\lim_{\leftarrow}\mathcal C(\bar{\mathbb R}^J):=\lim_{\leftarrow}\mathcal P (\bar{\mathbb{R}}^J)\cap \prod_{J\in\mathcal{I}}\mathcal C (\bar{\mathbb{R}}^J)$. Certainly, we have \begin{equation}
\mathcal C\left(\bar{\mathbb{R}}^I\right)\cong\lim_{\leftarrow}\mathcal C\left(\bar{\mathbb R}^J\right)
\end{equation}
with the corresponding topologies.
The following result contains among other things
the topological proof of Sklar's theorem \ref{T: Sklar in infinite dimensions}.
\begin{Theorem}\label{L: Compactness of Consistent Copula families}The following statements hold.
\begin{enumerate}
\item\label{Hausdorffness} $\mathcal{P}(\bar{\mathbb R}^I)$ with the topology of weak convergence of the finite dimensional distributions is a Hausdorff space.
\item\label{Compactness of the copulas} The space of consistent copulas $\mathcal C(\bar{\mathbb{R}}^I)$ is compact with respect to the topology of convergence of finite dimensional distributions.
\item\label{Second part of Sklar}
For a copula measure $C$ on $\bar{\mathbb{R}}^I$ and (one-dimensional) Borel probability measures $(\mu_i)_{i\in I}$ over $\bar{\mathbb R}$ the push-forward measure
\begin{equation}\label{Explicit form of seond part Sklar}
\mu:=((F_{\mu_i}^{[-1]})_{i\in I})_*C
\end{equation}
satisfies
\eqref{Sklar Property}.
\item\label{Continuity of Sklar}
If we equip $\mathcal C(\bar{\mathbb{R}}^I)\times \prod_{i\in I}\mathcal P (\bar{\mathbb R})$ with the product topology of weak convergence on each $\mathcal P (\bar{\mathbb R})$ and the topology of convergence of the finite dimensional distributions on $\mathcal C(\bar{\mathbb{R}}^I)$ and $\mathcal P(\bar{\mathbb R}^I)$, then the
mapping $\Phi:\mathcal{C} (\bar{\mathbb{R}}^{I})\times \prod_{i\in I} \mathcal{P}(\bar{\mathbb R})\to \mathcal{P}(\bar{\mathbb{R}}^I)$ given by $$\Phi(C,(\mu_i)_{i\in I}):= ((F_{\mu_i}^{[-1]})_{i\in I})_*C$$
is continuous and surjective. In particular, Sklar's theorem holds.
\end{enumerate}
\end{Theorem}
\begin{proof}
\eqref{Hausdorffness} Since products of Hausdorff spaces are Hausdorff and $\mathcal{P}(\bar{\mathbb R}^I)$ is homeomorphic to a subset of a product of Hausdorff spaces, it is Hausdorff.
\eqref{Compactness of the copulas} We know by \cite[Thm. 3.3]{MR2847456} that every $\mathcal C(\bar{\mathbb R}^J)$ is compact with respect to the topology of weak convergence on $\mathcal P(\bar{\mathbb{R}}^J)$.
Tychonoff's Theorem guarantees also that $\prod_{J\in\mathcal{I}}\mathcal C(\bar{\mathbb R}^J)$ is compact with respect to the product topology on $\prod_{J\in\mathcal{I}}\mathcal P(\bar{\mathbb{R}}^J)$. Therefore, as $\lim_{\leftarrow}\mathcal P (\bar{\mathbb{R}}^J)$ is closed by Lemma \ref{L: Closedness of the Inverse limit}, we obtain that $\mathcal C (\bar{\mathbb{R}}^I)$ is compact, since it is homeomorphic to an intersection of a closed and a compact set in the product topology.
\eqref{Second part of Sklar}
This corresponds to the second part of Sklar's theorem and the proof can be conducted analogously to the one in \cite{Benth2020}. Therefore, it is enough to see that $$\prod_{j\in J}\left[0,F_{\mu_{j}}(x_j)\right]\setminus \prod_{j\in J}\left(F_{\mu_j}^{[-1]}\right)^{-1}\left((-\infty,x_j]\right)$$
is a $C_J$-nullset for all $(x_j)_{j\in J}\in\bar{\mathbb R}^J$, $J\in \mathcal I$, since then we immediately obtain
\begin{align*}
C_J\left(\prod_{j\in J}\left(F_{\mu_j}^{[-1]}\right)^{-1}\left((-\infty,x_j]\right)\right)
= C_J\left(\prod_{j\in J}\left[0,F_{\mu_j}(x_j)\right]\right) =F_{C_J}\left(\left(F_{\mu_j}(x_j)\right)_{j\in J}\right).
\end{align*}
\eqref{Continuity of Sklar}
Define $\phi_J:\mathcal C(\bar{\mathbb R}^I)\times \prod_{i\in I}\mathcal P(\bar{\mathbb R})\to \mathcal P(\bar{\mathbb R}^J)$ by $$\phi_J(C,(\mu_i)_{i\in I}):=\Phi(C,(\mu_i)_{i\in I})_J,$$
which is well defined by \eqref{Second part of Sklar}. Since the finite-dimensional distributions of a law are consistent, $(\phi_J,J\in \mathcal I)$ forms a compatible family.
Define analogously for $J\in \mathcal I$ also $\tilde{\phi}_J:\mathcal C(\bar{\mathbb R}^J)\times \prod_{j\in J}\mathcal P(\bar{\mathbb R})\to \mathcal P(\bar{\mathbb R}^J)$ by $$\tilde{\phi}_J(C_J,(\mu_j)_{j\in J})= ((F_{\mu_j}^{[-1]})_{j\in J})_*C_J.$$
By Sklar's theorem in finite dimensions this map is surjective, and by \cite[Thm. 2]{MR2065562} it is also continuous.
Hence $\phi_J=\tilde{\phi}_J\pi_J$ is continuous and surjective, since both $\tilde{\phi}_J$ and $\pi_J$ are. By the universal property of the inverse limit, $\Phi$ must be the unique continuous mapping induced by the family $(\phi_J,J\in \mathcal I)$.
Moreover, since by \cite[Corollary 4.2.6]{Schweitzer2006} $\mathcal P (\bar{\mathbb R})$ is compact and by \eqref{Compactness of the copulas} also $\mathcal C(\bar{\mathbb R}^I)$ is compact, we have that $\mathcal C(\bar{\mathbb R}^I)\times \prod_{i\in I}\mathcal P(\bar{\mathbb R})$ is compact by Tychonoff's theorem. The continuity of $\Phi$ implies therefore that $\Phi(\mathcal C(\bar{\mathbb R}^I)\times \prod_{i\in I}\mathcal P(\bar{\mathbb R}))$ is compact, hence closed.
Since moreover Lemma \ref{L: Surjectivity of the induced mapping} implies that $\Phi(\mathcal C(\bar{\mathbb R}^I)\times \prod_{i\in I}\mathcal P(\bar{\mathbb R}))$ is dense, we obtain that $\Phi$ is surjective and therefore also the first part of Sklar's theorem holds.
The uniqueness of the copulas in the case of continuous marginals follows immediately by Sklar's theorem in finite dimensions via the uniqueness of the finite dimensional distribution of the corresponding copula measure.
\end{proof}
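As an informal numerical illustration (ours, not part of the proof), one can push a comonotone copula sample through quantile transforms as in \eqref{Explicit form of seond part Sklar} and compare the empirical joint distribution function with $F_{C_J}\left(\left(F_{\mu_j}(x_j)\right)_{j\in J}\right)$; for the comonotone copula $F_{C_J}(u,v)=\min(u,v)$:

```python
import math
import random

# Monte Carlo sketch: sample (U, U) from the comonotone copula, apply the
# Exp(1) quantile transform to both coordinates, and compare the empirical
# joint cdf with min(F1(x), F1(y)), i.e. the Sklar property for this copula.
random.seed(0)
F1 = lambda x: 1.0 - math.exp(-x) if x > 0 else 0.0   # Exp(1) cdf
F1inv = lambda u: -math.log(1.0 - u)                  # its quantile function

n = 200_000
sample = [(F1inv(v), F1inv(v)) for v in (random.random() for _ in range(n))]
x, y = 1.0, 2.0
emp = sum(1 for (a, b) in sample if a <= x and b <= y) / n
assert abs(emp - min(F1(x), F1(y))) < 0.01   # approx. F1(1.0) ~ 0.632
```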
Observe that since $\mathcal P (\bar{\mathbb R}^J)$ is a locally convex Hausdorff space with respect to the topology of weak convergence for each $J\in \mathcal I$, the inverse limit $\mathcal P(\bar{\mathbb R}^I)$ is also locally convex, as it is homeomorphic to a subset of the product $\prod_{J\in \mathcal I}\mathcal{P}(\bar{\mathbb R}^J)$ of locally convex Hausdorff spaces.
Hence, as mentioned for instance in \cite[p.30]{Sempi2015}, since $\mathcal C (\bar{\mathbb R}^I)$ is compact and convex, it is the closed convex hull of its extreme points by the Krein--Milman theorem. As mentioned in \cite{Benes1991} this implies that
$$\sup_{C\in \mathcal C(\bar{\mathbb R}^I)}g(C)=\sup_{C\in ext(\mathcal C(\bar{\mathbb R}^I))}g(C)$$
where $ext(\mathcal C(\bar{\mathbb R}^I))$ denotes the set of extreme points of $\mathcal C(\bar{\mathbb R}^I)$ and $g:\mathcal C (\bar{\mathbb R}^I)\to \mathbb R$ is a convex function.
\subsection*{Acknowledgements}
This research was funded within the project STORM: Stochastics for Time-Space Risk Models, from the Research Council of Norway (RCN). Project number: 274410.
\bibliographystyle{amsplain}
\addcontentsline{toc}{section}{\refname}
\section{Introduction}
Establishing breathing and oxygenation after birth is vital for survival and long-term health. The vast majority of newly born infants make the transition from intrauterine to extrauterine life without help. However, approximately $10\%$ of newborns require some form of assistance to breathe at birth and approximately $1\%$ require a more intensive resuscitation. Despite such care, approximately $900$ thousand newborn infants die annually worldwide due to birth asphyxia~\cite{spector2008}.
The delivery room is often a stressful environment where decisions are made quickly and resuscitators need to have good cognitive, psychomotor and communication skills. They also must have good team-management skills. However, the ``coming together'' of all these skills is often more difficult than is widely appreciated. The neonatal training paradox~\cite{peter2005} describes the infrequent occurrence of neonatal emergencies with the risk of clinicians being ``unprepared, hesitant, and highly anxious''~\cite{aron2009} due to the lack of practical learning experiences. Such high acuity, low occurrence (HALO) situations arise infrequently but still require a high level of cognitive and technical competency. Neonatal and infant resuscitation perfectly meet the description of such HALO events~\cite{lou2008}. Such events lend themselves well to simulation-based training~\cite{aron2009}. Simulation-based medical education (SBME) has been identified as ``a highly effective instructional strategy for the acquisition and retention of skills requisite to competent performance in dynamic, high-pressure, high-consequence environments'' such as delivery rooms~\cite{jodee2011}.
The \citeauthor{jc2004} \citeyear{jc2004}, reporting on preventing infant death and injury during delivery, highlighted that deficiencies in neonatal resuscitation proficiency account for over two thirds of mortality and morbidity.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{images/mannequin.png}
\end{center}
\vspace{-0.4cm}
\caption{Simulation-based training on neonatal resuscitation using a mannequin.}
\label{fig:smbe}
\end{figure}
An international consensus on resuscitation emphasizes SBME to enhance performance in real-life clinical situations~\cite{pmid20956231}. However, the current neonatal training using a mannequin (Figure~\ref{fig:smbe}) requires access to a specialized facility, which needs to be equipped and staffed with instructors trained in SBME and debriefing. Consequently, SBME in its current form is time and cost intensive and therefore not offered routinely or in all healthcare facilities~\cite{pmid25153910}. Accordingly, the current certification requirements demand only a single four-hour SBME refresher course every two years to remain certified in neonatal resuscitation. While SBME has been shown to improve performance initially after training~\cite{pmid9827341,pmid15276892}, both cognitive and technical skills significantly deteriorate within months~\cite{pmid25153910}. Although the optimal frequency for a refresher training is unclear, evidence suggests performing at least biannual training to ensure maintenance of knowledge and skills~\cite{pmid25153910}.
Using a video-game-like computer simulation to train various aspects of neonatal resuscitation has several advantages, all of which are critical when training healthcare professionals: controlled and risk-free learning, repetitive deliberate practice, customizability of training experiences to individual needs, objective assessment of and feedback on trainees' performance, easy accessibility and a less stressful environment. Although SBME can provide all of the above short of easy accessibility, there are potential disadvantages including a biased or distracted instructor or a single instructor who might be unable to observe everything during a team scenario. Using a computer-based simulation the instructor can receive a computer printout (or log) at the end of the game/simulation, which would allow him/her to identify what went well and what did not and debrief the trainee accordingly. Thus, in this paper, we adopt the approach of using a computer-based simulation game~\cite{sitzmann2011meta} to complement physical simulation-based training on neonatal resuscitation.
The primary contribution of this paper is a presentation of such a video game, called {\em REsuscitation TrAIning for Neonatal residents} (RETAIN). We describe the structure of the game and the non-playable characters driven by Artificial Intelligence (AI) that the trainee interacts with. RETAIN was developed by a team of six undergraduates as a term project for a second-year university class. Thus, the second contribution of this paper is the set of lessons learned from the production. With the proliferation of university-level courses and programs in video game development, we believe this account will be useful to other institutions.
The rest of the paper is organized as follows. We define the problem of neonatal resuscitation training with a serious game~\cite{Michael:2005:SGG:1051239} in Section~\ref{sec:problemFormulation}. Section~\ref{sec:relatedWork} reviews existing work relevant to the problem. We introduce the design of our approach (RETAIN) in Section~\ref{sec:design} and describe the production process in a video-game class in Section~\ref{sec:production}. In Section~\ref{sec:empiricalEvaluation} we present a preliminary evaluation of the approach. We then conclude the paper with a discussion of future work in Section~\ref{sec:futureWork}.
\section{Problem Formulation}
\label{sec:problemFormulation}
While, generally speaking, neonatal resuscitation training encompasses technical motor skills (e.g., bag and mask ventilation), decision-making skills (e.g., following the correct resuscitation procedure), communication and teamwork, we focus on decision-making skills. Studies have shown that not following a correct resuscitation procedure is responsible for approximately $60$-$70\%$ of all failures in the task~\cite{pmid23866717,pmid25125582}.
The current physical simulation-based training is initially effective~\cite{pmid22594362} but frequent refresher training sessions are necessary for the trainee to retain the decision-making skills~\cite{pmid9827341,pmid15276892}. With the current physical training methodologies frequent refresher sessions are cost-prohibitive. Thus, the problem we address in this paper is to create a cost-effective and easily accessible non-physical training system for neonatal resuscitation that can complement the current training regimes and enables more frequent refresher training sessions for clinical personnel.
\section{Related Work}
\label{sec:relatedWork}
Simulation, which originated from aviation and spaceflight training programs, was adopted by anesthesiologists in the 1960s which eventually led to the development of simulation-based medical education~\cite{abrahamson2004effectiveness,schwid1992anesthesiologists}.
Medical education and, in particular, surgical training has been an active area for the development of both simulations and serious games~\cite{graafland2012systematic,kapralos2014overview}.
In the following sections, we review games relevant to the current work.
{\em e-Baby}~\cite{Fonseca2015-ew} is a serious game in which players perform clinical assessment of oxygenation on preterm infants in a virtual isolette. The infants present a range of respiratory impairments from mild to serious. The players were provided with patient history and had to select appropriate tools for clinical assessment. The assessment was made by responding to a series of questions in a multiple-choice format. The questions drove the interaction and served as an assessment of the trainee's knowledge. The game was evaluated by nursing students who had free access to the simulation and was rated highly for its ease of use and overall efficacy as a learning tool. The goal of {\em e-Baby} was acquisition of procedural knowledge pertaining to the clinical assessment. Our goal is to create a game that trains medical personnel on the application of pre-existing knowledge of clinical intervention (resuscitation) in stressful conditions.
LISSA~\cite{wattanasoontorn2013serious,wattanasoontorn2013kinect,wattanasoontorn2014lissa} is a serious game to teach cardiopulmonary resuscitation (CPR) and use of an automated external defibrillator. Players must perform CPR procedures in the correct order within a specified time limit. The system supports play and authoring modes. Emergency scenarios are authored from a predefined set of elements, and can be complemented with expositional material (e.g., a demonstration of how to apply CPR). Scenarios are modeled as finite state machines corresponding to a CPR flowchart. LISSA was evaluated with $60$ learners with no background in CPR, and four CPR instructors. Although it was found to lead to lower learning outcomes compared to conventional instruction alone, LISSA was shown to have a higher efficacy when used to complement mannequin-based instruction.
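The flowchart-as-finite-state-machine idea can be sketched in a few lines of code; the states and actions below are hypothetical illustrations of the technique, not LISSA's actual scenario format:

```python
# Sketch: a resuscitation-style flowchart modeled as a finite state machine.
# Transitions map (current state, trainee action) to the next state.
TRANSITIONS = {
    ("assess", "unresponsive"): "call_help",
    ("call_help", "done"): "compressions",
    ("compressions", "30_done"): "ventilate",
    ("ventilate", "2_done"): "compressions",   # CPR compression/ventilation cycle
}

def step(state, action):
    """Advance the scenario; an incorrect action leaves the state unchanged."""
    return TRANSITIONS.get((state, action), state)

s = "assess"
for a in ["unresponsive", "done", "30_done", "2_done"]:
    s = step(s, a)
assert s == "compressions"   # one full cycle of the flowchart
```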
Although relevant to our problem, LISSA differs in a number of key aspects: it targets adult cardiopulmonary resuscitation rather than neonatal resuscitation, and is intended for a general audience rather than clinical trainees. LISSA also aims to teach motor skills via a use of Kinect~\cite{wattanasoontorn2013kinect} which is beyond the scope of our problem (decision-making skills).
\citeauthor{kalz2013design} \citeyear{kalz2013design} report on the development of a mobile game-based resuscitation training for first responders. The goal of the project was to augment, rather than to replace, face-to-face training. Specifically, the authors sought to ``increase procedural knowledge, train processes in an emergency situation and to influence willingness to help and self-efficacy''~\cite{kalz2013design}. As with LISSA the intended audience were laypeople and the domain was adult CPR.
RELIVE~\cite{loconsole2015relive} and {\em Viva!}~\cite{semeraro2014relive} were developed for use by general audiences as part of an awareness week around CPR training~\cite{ristagno2014achievements}. RELIVE was intended as a low-cost trainer for non-clinical use and employed Microsoft Kinect, a realistic 3D environment, and game-like interface to provide feedback to players about their CPR performance.
{\em Viva!} is a serious game intended to raise awareness among adults and children of CPR training. Accordingly, the game employed a 2D retro-cartoon style. A variety of rescue scenarios take place in different simulated locations. Players perform CPR by clicking on icons representing actions of interest, in order to save characters from cardiac arrest. The game can be played in two modes. In story mode, players are led through structured training and must achieve a high level of performance before they can proceed. In tournament mode, players are able to engage in ready-to-play emergency scenarios, to test the accuracy of their CPR performance. Players may also challenge friends to compete on the accuracy of their maneuvers. As with previous efforts, and unlike the problem we are solving, {\em Viva!} targeted laypeople.
{\em Pulse!!} is a virtual clinical-training environment for trauma management~\cite{johnston2005pulse,mcdonald2011pulse,breakaway2008}. Aimed at clinical professionals and learners, {\em Pulse!!} allows players to train clinical skills in numerous emergency situations. Players interact with patients in a highly realistic clinical environment, furnished with a variety of medical equipment and player-controllable staff members. The system includes a case-authoring tool, scene editor, tutoring system, and asset library of characters, environments, equipment and physiological processes. In an assessment of its effectiveness as a learning and assessment tool, {\em Pulse!!} was found to be significantly more effective than paper-based learning, and was rated highly engaging by players~\cite{mcdonald2011pulse}. While {\em Pulse!!} was designed to be a comprehensive simulation environment for adult trauma, the problem we are addressing is resuscitation in newborns.
{\em Triage Trainer}~\cite{knight2010serious} is a serious game designed to teach major incident triage to clinical professionals. Developed to be played on a desktop or a laptop computer, the game allows its players to practice triage (prioritizing which patients to treat when) in a realistic immersive 3D environment. Players navigate and interact with casualties using the mouse and keyboard. Assessment is done by clicking on a series of icons representing various examinations (e.g., breathing check, pulse rate check) and manipulations (e.g., open airway, tag a casualty with triage rating). The focus of the game is on rapid execution of process-based knowledge. The authors found that participants who played the game had significantly greater accuracy on a triage task than did participants who took part in the control activity (card sort). Although it addresses clinical decision-making under pressure, {\em Triage Trainer} deals with the domain of mass casualty triage, not neonatal resuscitation.
{\em Surgical Improvement of Clinical Knowledge Ops} (SICKO) is a web-based game designed to practice and assess clinical decision-making in surgery~\cite{lin2015validity}. The game was inspired by and developed from the original work on {\em Septris}~\cite{wykes2012game}, a web-based game to teach learners about sepsis.
The purpose of SICKO is to simulate decision-making under pressure rather than psychomotor skills. In the game, players must balance the care of multiple patients, as they would in real life. As they do so, Dr. Sicko, represented by a cartoon figure, shows his approval by smiling or frowning and adding or deducting points from the player's score. Scenarios cover a range of acuity and complexity and can include X-ray and MRI imaging. Similar to other work mentioned in this section, SICKO addresses clinical decision-making under pressure, however, the clinical environment is highly abstracted. Inspired by {\em Tetris}~\cite{tetris}, the patients are represented as faces which descend from the top of the screen in columns, and must be successfully treated before reaching the bottom. Additionally, the domain addressed by SICKO does not encompass neonatal resuscitation.
\section{RETAIN Design}
\label{sec:design}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{images/chart.png}
\end{center}
\vspace{-0.4cm}
\caption{The neonatal resuscitation algorithm by the International Liaison Committee on Resuscitation.}
\label{fig:chart}
\end{figure}
We designed RETAIN around the resuscitation flow chart shown in Figure~\ref{fig:chart}, reproduced from~\cite{pmid20956231}. To ramp up the decision-making complexity gradually, we built the game with four levels of increasing difficulty. An introductory (zeroth) level served as a tutorial of the game interface in which the trainee, controlling a medical resident, followed directions of a doctor experienced in resuscitation. The doctor was a non-playable character controlled by AI whose dialogue structure was implemented as a dialogue tree~\cite{aurora}. Recorded voice-overs were used in conjunction with written dialogue to have the doctor converse with the trainee (Figure~\ref{fig:interface}). The trainee interacted with the game using a keyboard and mouse for level navigation and menu selections.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{images/tutorial.png}
\vspace{-0.4cm}
\caption{The game interface.}
\label{fig:interface}
\end{figure}
The doctor in the tutorial level gave complete guidance on which actions to take and served as an introduction to the interface. Having passed the tutorial, the trainee would tackle three resuscitation scenarios of different complexity. While the trainee was still paired with an AI-controlled doctor in each of them, they now received only incomplete guidance and had to use their own judgment to select the right action at the right time. Furthermore, most actions now had a parameter (e.g., the frequency of chest compressions) which had to be specified by the trainee as well (Figure~\ref{fig:chestCompression}).
\begin{figure}[t]
\includegraphics[width=\columnwidth]{images/chestCompression.png}
\vspace{-0.4cm}
\caption{Chest compression parameter choices.}
\label{fig:chestCompression}
\end{figure}
An incorrect action selection (e.g., applying suction instead of performing chest compressions) or an incorrect parameter value (e.g., using chest compressions with a breath ratio of $5$:$1$ instead of $3$:$1$) counted as a mistake, and the trainee received immediate feedback in the form of an auditory cue (a bell tone) and an utterance from the doctor. Each mistake also decreased the baby's health level (shown as a vertical bar in Figure~\ref{fig:healthBar}). After four mistakes the baby died, ending the scenario. By committing fewer than four mistakes the trainee saved the baby, also ending the scenario.
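The mistake/health mechanic just described can be captured in a few lines. The sketch below is illustrative only; the action names and the "correct" sequence are hypothetical placeholders, not RETAIN's actual action set.

```python
class ResuscitationScenario:
    """Minimal sketch of the mistake/health mechanic: each incorrect action
    or parameter costs one health point; four mistakes end the scenario with
    the baby's death, while completing the action sequence with fewer than
    four mistakes saves the baby."""

    MAX_MISTAKES = 4

    def __init__(self, correct_sequence):
        self.correct = list(correct_sequence)  # list of (action, parameter)
        self.step = 0
        self.mistakes = 0
        self.outcome = None  # None while the scenario is still running

    @property
    def health(self):
        return self.MAX_MISTAKES - self.mistakes

    def act(self, action, parameter=None):
        if self.outcome is not None:
            return self.outcome
        if (action, parameter) == self.correct[self.step]:
            self.step += 1
            if self.step == len(self.correct):
                self.outcome = "saved"
        else:
            self.mistakes += 1  # in-game: bell tone + doctor's utterance
            if self.mistakes == self.MAX_MISTAKES:
                self.outcome = "died"
        return self.outcome


# Hypothetical three-action scenario:
s = ResuscitationScenario([("dry", None),
                           ("chest_compressions", "3:1"),
                           ("tag", None)])
s.act("dry")
s.act("chest_compressions", "5:1")   # wrong parameter -> mistake
print(s.health)                      # -> 3
```

The same class also covers the failure path: four wrong actions in a row set the outcome to ``died`` regardless of progress through the sequence.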
\begin{figure}[t]
\begin{center}
\includegraphics[height=2.2cm]{images/healthBar1.png}\hspace{0.2cm}
\includegraphics[height=2.2cm]{images/healthBar2.png}\hspace{0.2cm}
\includegraphics[height=2.2cm]{images/healthBar3.png}\hspace{0.2cm}
\includegraphics[height=2.2cm]{images/healthBar4.png}
\end{center}
\vspace{-0.4cm}
\caption{The baby's health represented with a health bar next to the baby's pulse.}
\label{fig:healthBar}
\end{figure}
The trainee tackled each of the three scenarios by accessing one of the three doors from a hub area representing the hospital's lobby (Figure~\ref{fig:lobby}).
The later scenarios were more difficult as in order to save the baby they required the trainee to take (i) a longer action sequence ($6$, $9$ and $13$ actions for the three scenarios respectively) and (ii) to use a wider variety of actions ($5$, $7$ and $9$ out of a total of $9$ actions RETAIN supported). Thus, the trainee was tested on a progressively larger portion of the resuscitation algorithm (Figures~\ref{fig:chart1}, \ref{fig:chart2}, \ref{fig:chart3}). Note that while RETAIN supported a total of $9$ actions, most of them took a parameter thereby greatly increasing the number of choices available to the trainee to pick from.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\columnwidth]{images/hub.png}
\end{center}
\vspace{-0.4cm}
\caption{The hospital lobby served as the game hub.}
\label{fig:lobby}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{images/chart1.png}
\end{center}
\vspace{-0.4cm}
\caption{Parts of the resuscitation algorithm covered by training scenario 1.}
\label{fig:chart1}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{images/chart2.png}
\end{center}
\vspace{-0.4cm}
\caption{Parts of the resuscitation algorithm covered by training scenario 2.}
\label{fig:chart2}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{images/chart3.png}
\end{center}
\vspace{-0.4cm}
\caption{Parts of the resuscitation algorithm covered by training scenario 3.}
\label{fig:chart3}
\end{figure}
\section{RETAIN Production}
\label{sec:production}
The entire production was carried out as a term project in a second-year undergraduate course. The production took three months and involved a team of six undergraduate students from different disciplines: computing science, psychology, art and design, and biology. The team was advised by two neonatologists on a weekly basis. Each week the developer team also met with a former graduate of the course serving as a mentor. The team additionally received a 3D baby model from an external artist and had external voice actors record voice-overs for in-game characters.
The production faced a number of challenges. First, the development toolset used for the production, the {\em Aurora Toolset}~\cite{aurora}, is better suited to dialogue-based isometric-view RPG adventures such as {\em Neverwinter Nights}~\cite{nwn}. The team spent a considerable amount of time modifying the interface for the medical simulation (e.g., a level map had to be displayed off screen).
Second, the six students were from different departments/faculties and were enrolled in four to five other courses during the term. This made scheduling meetings difficult. The team ended up meeting both on- and off-campus, often on weekends. Third, none of the team members had previously worked on a term-long project with students with other backgrounds, nor had any team member previously developed a video game. Timely communications were critical and were organized through on-line collaboration tools including Google apps, Trello, Skype and Facebook.
\section{Preliminary Evaluation}
\label{sec:empiricalEvaluation}
To date, RETAIN has gone through several evaluation phases. Early vertical slices of the game were tested by the team's mentor and the course instructor. A beta version was evaluated by the course teaching assistants (two computing science graduate students) and by classmates (thirty students). The final version was evaluated by the course instructor and by eight members of an award committee. The committee was tasked with selecting award winners for an annual game-award ceremony tied to the course and consisted of game developers (both AAA and independent) as well as academics. Not only did the committee recognize RETAIN with an award, but committee members, including experienced AAA commercial game developers, also reported unexpected stress while playing the game due to the gravity of the context (saving the baby). This was in contrast to the other nominated games, which were all made for entertainment purposes. Finally, the game was evaluated by two neonatologists who felt that, overall, the game displayed important aspects of basic neonatal resuscitation training. In particular, they deemed that the game was able to keep the player engaged and involved, with sufficient drama and stress, while providing a platform for learning by doing in a cost-effective and engaging fashion.
\section{Future Work}
\label{sec:futureWork}
While anecdotal, the evidence presented in the previous section is encouraging and opens up several directions for future work. The immediate next step is to evaluate RETAIN in a medical training environment. Preparations are underway to run user studies later this summer and in early fall. Following such an evaluation, a research-grade version of RETAIN will be developed. The future version will attempt to actively model the trainee's skills in neonatal resuscitation. This will allow an AI manager to tailor each training scenario to a given trainee. Specifically, matching the trainee's inferred skills against complexity of various scenario modules will allow the AI manager to estimate the trainee's degree of flow~\cite{flowStorytelling2015}. By keeping the trainee in a state of flow his or her learning may be improved~\cite{flow1990}. Being in a state of flow is also intrinsically rewarding and thus may encourage a trainee to practice more with the system.
With a longer development cycle and a larger development budget, the next version of RETAIN will be built with a modern toolset, support a wider range of resuscitation scenarios and achieve a greater visual and medical fidelity. Finally, we are planning to implement an AI-driven feedback system which will critique trainee’s actions and score their overall performance.
\section{Conclusions}
\label{sec:conclusions}
Neonatal resuscitation is a crucial life-saving procedure in modern hospitals. While traditional training requires access to a costly specialized facility, we conjecture that complementing it with a video-game-based, easily accessible low-cost version will add training value via more frequent training sessions. In this paper we reported on a three-month undergraduate production of the first version of such a training game carried out as a term project in a second-year undergraduate class. The initial prototype and its development appear successful and encourage the development of a research-grade follow-up.
\section{Acknowledgments}
We would like to thank other members of the RETAIN development team: Erik Estigoy, Connor Hastey Palindat, Vishruth Kajaria and Derek Kwan. Baby modeling was performed by Glenn Meyer. Voice actors were Fuad Sakkab, J.D. Macnutt, Nathan Wakeman, Gerard Capiuk and Jessica Hong. We also thank the class teaching assistants Yathirajan Brammadesam Manavalan and Sergio Poo Hernandez. We also appreciate the support from the Community Service Learning centre at the University of Alberta and the Heart and Stroke Foundation of Alberta.
\bibliographystyle{theapa}
\section{Introduction}
The {\it Herschel} Space Observatory\footnote{{\it Herschel} is an ESA space
observatory with science instruments provided by Principal
Investigator consortia. It is open for proposals for observing time
from the worldwide astronomical community. } \citep{pilbrattetal10}
provides an unprecedented view of the far-infrared and submillimetre
emission from nearby galaxies. At wavelengths of 70-160 $\mu$m, the
PACS instrument \citep{poglitschetal10} can produce images with
resolutions of $6^{\prime\prime}$-$12^{\prime\prime}$ that are
superior to what can be achieved with the {\it Spitzer} Space
Telescope. At 250-500~$\mu$m, the SPIRE instrument
\citep{getal10} produces images with unprecedented sensitivities to
diffuse and point-like submillimetre emission. We can use these data
to construct spectral energy distributions (SEDs) that sample
the peak and Rayleigh-Jeans side of thermal dust emission, thus
allowing us to probe the coldest dust components in nearby galaxies
and place superior constraints on dust temperatures and masses. As
part of the Very Nearby Galaxies Survey (VNGS), we have imaged the
spiral galaxy M81 (NGC 3031) at 70, 160, 250, 350, and 500~$\mu$m with
PACS and SPIRE. M81 is a nearby \citep[$3.63\pm0.13$~Mpc;][]{fetal01}
SA(s)ab \citep{ddcbpf91} galaxy at an inclination of $59.0^\circ$
\citep{detal08} with well defined spiral arms. The {\it Herschel} data
allow us to extract SEDs for $\sim0.7$~kpc subregions that are small
enough that we can distinguish arm and interarm regions within M81.
We use these data to explore the dust temperatures and masses
and to understand the heating sources for the dust.
\section{Observations and data reduction}
\label{s_data}
The PACS observations were performed as four pairs of orthogonal scans
covering $40^\prime\times40^\prime$ using a
20$^{\prime\prime}$~s$^{-1}$ scan rate. PACS can perform simultaneous
observations in only two wave bands; we chose the 70 and 160 $\mu$m
bands since they were expected to bracket the peak of the SED better.
The data were reduced using a combination of an adapted {\it Herschel}
Interactive Processing Environment (HIPE) 3.0 pipeline and
Scanamorphos (Roussel et al. in prep.). Starting from the raw detector
timelines, HIPE was used to mask dead and saturated pixels, convert
the signal to Jy pixel$^{-1}$, and remove cosmic rays. Scanamorphos
was then used to map the data while simultaneously removing $1/f$
drifts in the signals. Finally, we subtracted the median backgrounds
from the images. The photometric calibration has an accuracy of 10\%
at 70 $\mu$m and 20\% at 160 $\mu$m, and the full-width half-maxima
(FWHM) of the 70 and 160~$\mu$m point spread functions (PSFs) are
$6^{\prime\prime}$ and $12^{\prime\prime}$, respectively
\citep{poglitschetal10}. The RMS noise levels are 0.12 mJy
arcsec$^{-2}$ in both the 70 and 160~$\mu$m bands.
\begin{figure*}
\centering
\includegraphics[height=6cm]{14568f1.ps}
\caption[width=\textwidth]{70-500~$\mu$m images of M81 covering
$20^\prime \times 30^\prime$ with north up and east to the left.
The images are scaled logarithmically. The green circles in the
lower left corner of each image show the FWHM of the PSF. The
cyan ellipse in the 250~$\mu$m image shows the $D_{25}$ isophote
\citep[$26^{\prime}.9\times14^{\prime}.1$;][]{ddcbpf91}; the
radius is equivalent to 14 kpc. The blue squares in the
250~$\mu$m image show the $42^{\prime\prime}$ regions for which
SEDs are plotted in Fig.~\ref{f_sed}.}
\label{f_img}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=6cm]{14568f2.ps}
\caption[width=\textwidth]{Images of the 70/160, 160/250, 250/350,
and 350/500~$\mu$m surface brightness ratios in the optical disc
of M81. Each image is created using data with PSFs that match
the 500~$\mu$m PSF. Regions not detected at the $3\sigma$ level
in the two bands used for each ratio are left blank. The image
sizes and orientations are the same as for Fig.~\ref{f_img}.
The circles in the lower left corner of each image show the FWHM
of the 500~$\mu$m PSF.}
\label{f_img_ratio}
\end{figure*}
The SPIRE observations were performed as two orthogonal scans covering
$40^\prime\times40^\prime$ using a 30$^{\prime\prime}$~s$^{-1}$ scan
rate. A modified HIPE 3.0 detector timeline pipeline was used to
remove cosmic rays, flux calibrate the data, and apply temperature
drift and response corrections \citep[see][for details]{pohlenetal10}.
We then removed offsets between the detector timelines in two steps.
First, we subtracted the median signal from each bolometer observed
during the entire observation. Then we applied an iterative process
to remove residual baseline signals that appear as stripes in the
maps. In this process, we first created a map. Then, for each
bolometer timeline in each scan leg, we measured the signal in the map
that corresponded to the bolometer's position, we calculated the
median difference between the bolometer signal and the corresponding
map signal, and we subtracted this function from the bolometer signal.
These steps were repeated 40 times to completely remove stripes from
the data. Finally, we subtracted median background signals from the
images. The resulting images have flux calibration uncertainties of
15\%, and the 250, 350, and 500~$\mu$m PSFs have FWHM of
$18^{\prime\prime}$, $25^{\prime\prime}$, and $37^{\prime\prime}$,
respectively \citep{swinyardetal10}. The RMS noise levels are 0.040,
0.019, and 0.008 mJy arcsec$^{-2}$ in the 250, 350, and 500~$\mu$m
bands, respectively.
To create ratios of surface brightnesses measured in two wave bands,
we matched the PSFs, which we treated as Gaussian, to the PSF of the
500~$\mu$m data. For statistical analyses on surface brightness
ratios and for creating SEDs of subregions within the galaxies, we
then rebinned the data in all images into $42^{\prime\prime}$
($\sim0.7$~kpc) square pixels (selected because it is an integer
multiple of the 500~$\mu$m pixel size that is larger than the PSF FWHM
for the 500~$\mu$m data). For these analyses, we only used
$42^{\prime\prime}$ pixels with $3\sigma$ detections in all
bands.
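Treating the PSFs as Gaussian, the kernel needed to degrade each band to the 500~$\mu$m resolution follows from subtracting the quoted FWHM values in quadrature. A minimal sketch (the convolution itself, e.g. with an image-processing library, is omitted):

```python
import math

# PSF FWHM values quoted above (arcsec), treated as Gaussian
PSF_FWHM = {70: 6.0, 160: 12.0, 250: 18.0, 350: 25.0, 500: 37.0}

def matching_kernel_fwhm(fwhm_in, fwhm_target):
    """FWHM of the Gaussian kernel that broadens a PSF of width fwhm_in
    to fwhm_target; Gaussian FWHMs add in quadrature under convolution."""
    return math.sqrt(fwhm_target**2 - fwhm_in**2)

for band in (70, 160, 250, 350):
    k = matching_kernel_fwhm(PSF_FWHM[band], PSF_FWHM[500])
    print(f"{band} um -> 500 um: convolve with a {k:.1f}'' Gaussian")
```

Each degraded image would then be rebinned onto the common $42^{\prime\prime}$ grid before ratios are formed.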
\section{Results}
Figure~\ref{f_img} shows the structures traced by the various {\it Herschel}
wave bands, which look very similar to each other and to the 5.7-24~$\mu$m
{\it Spitzer} images \citep{getal04, wetal04}. All images trace the
same spiral structure and individual infrared sources within the disc
of the galaxy. Diffuse, extended sources detected outside the optical
disc of M81 in the {\it Herschel} data seem most likely to be associated
with dust in the Milky Way (Davies et al. in prep.).
\begin{figure}
\centering
\includegraphics{14568f3a.ps}
\includegraphics{14568f3b.ps}
\includegraphics{14568f3c.ps}
\includegraphics{14568f3d.ps}
\caption[width=\textwidth]{The 70/160, 160/250, 250/350, and
350/500~$\mu$m surface brightness ratios versus the surface
brightness (left and center) and inclination-corrected
galactocentric radius (right). The data were measured in
$42^{\prime\prime}$ subregions in images with PSFs that matched
the PSF of the 500~$\mu$m data. Best fit lines are shown for
all plotted data except for two relations involving the
70/160~$\mu$m ratio, where the fits were very poor;
corresponding slopes are given in the panels. The $R$ values
are the Pearson correlation coefficients for the plotted data.
Note that, in the left-side and center panels, the logarithm of
the surface brightnesses are used for the best fit lines and
correlation coefficients.}
\label{f_ratiovar}
\end{figure}
To understand the heating mechanism for the dust, we examined how
surface brightness ratios varied with surface brightness and with
radius. Variations with surface brightness would suggest that the
dust is heated locally and that the emission is linked to star
formation, whereas radial variations in the ratios would indicate that
the dust emission is more strongly affected by the evolved stellar
populations in the bulge and disc. Figure~\ref{f_img_ratio} shows
images of the 70/160, 160/250, 250/350, and 350/500~$\mu$m surface
brightness ratios. Additionally, Fig.~\ref{f_ratiovar} shows how
the ratios measured in $42^{\prime\prime}$ ($\sim0.7$~kpc) subregions
vary with surface brightness and galactocentric radius.
The absolute value of the correlation coefficient $R$ for the
relations between radius and either the 160/250, 250/350, or
350/500~$\mu$m ratios is generally higher than that for the relations
between surface brightness and the ratios, which shows that these
ratios are more strongly dependent on radius (although the
160/250~$\mu$m ratio may also be partly dependent on 160~$\mu$m
surface brightness based on the high value of $R$). This is
consistent with the weak or absent infrared-bright regions or spiral
structure in the images of the 160/250, 250/350, and 350/500~$\mu$m
ratios. Moreover, the $R^2$ values, which equal the fraction of the
variance in the data that can be accounted for by the best fit line,
indicate that $>$70\% of the variance in the 160/250, 250/350, and
350/500~$\mu$m ratios can be accounted for by the relation with
radius.
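The statement that $R^2$ equals the fraction of variance accounted for by the best fit line holds exactly for a least-squares line with an intercept. A quick numerical check on mock data (not the actual measurements) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
radius = rng.uniform(0.0, 12.0, 300)                     # mock radii (kpc)
ratio = -0.03 * radius + 0.8 + rng.normal(0, 0.05, 300)  # mock colour ratio

r = np.corrcoef(radius, ratio)[0, 1]                 # Pearson correlation
slope, intercept = np.polyfit(radius, ratio, 1)      # best-fit line
residuals = ratio - (slope * radius + intercept)
r2_from_variance = 1.0 - residuals.var() / ratio.var()

# For a least-squares fit with an intercept, R^2 equals r^2 exactly:
assert abs(r**2 - r2_from_variance) < 1e-9
```

The identity follows because the residuals of an ordinary least-squares fit with an intercept have zero mean, so the residual variance is exactly the unexplained part of the total variance.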
In contrast, the 70/160~$\mu$m ratio does not vary monotonically with
radius except within 2~kpc, a region in which the gradient in
the 160/250~$\mu$m ratio also increases. This could represent
enhanced dust heating within this radius that is powered by the active
galactic nucleus (AGN), by strong central star formation activity, or
by the bulge stars, which have a high central density. The
70/160~$\mu$m ratio versus 160~$\mu$m surface brightness exhibits no
obvious trend and only a statistically weak trend (with $R^2<0.3$) is
visible in the plot of the 70/160~$\mu$m ratio versus 70~$\mu$m
surface brightness, although the best fit line poorly describes the
data. Nonetheless, Fig.~\ref{f_img_ratio} shows that the ratio
increases to $\gtrsim0.3$ in infrared-bright regions in the spiral
arms.
\begin{figure}
\centering
\includegraphics{14568f4a.ps}
\includegraphics{14568f4b.ps}
\caption[width=\textwidth]{On the left, the global SED as well as
the SEDs for the three $42^{\prime\prime}$ ($\sim 0.7$~kpc)
regions shown in Fig.~\ref{f_img}. The SEDs for the subregions
were measured in data with PSFs that matched to the PSF of the
500~$\mu$m data. The grey line is the blackbody modified with a
$\lambda^{-2}$ emissivity function fit to the data. On the
right are the residuals from the fit in logarithm space.}
\label{f_sed}
\end{figure}
Figure~\ref{f_sed} shows the SED integrated across the optical disc
(with supplemental 60 and 100~$\mu$m IRAS data added from
\citet{retal88}) as well as the SEDs for the $42^{\prime\prime}$
regions centered on the nucleus, and examples of an infrared-bright
source and an interarm region. Based on visual inspection and on
fitting functions to the SEDs, these example regions are representative
of similar regions at similar radii. In the nucleus, we were able to fit
a single blackbody modified with a $\lambda^{-2}$ emissivity function
(based on the \citet{ld01} emissivity function) to the 70-350~$\mu$m
data, but the 500~$\mu$m data point could not be fit with the same
thermal component, although the mismatch between the fit and model is
only $2\sigma$. This result and the low 350/500~$\mu$m ratio for the
nucleus seen in Fig.~\ref{f_ratiovar} (which is $3\sigma$ below the
best fit line) suggest that the 500~$\mu$m nuclear emission likely
includes a non-thermal component associated with the AGN. Based on
the SED fit, we estimate that the non-thermal 500~$\mu$m emission is
$0.05 \pm 0.03$~Jy, which is $\sim2.5\times$ below a power law
extrapolated from the mm and cm data presented by \citet{metal08}.
The discrepancy could be explained by the low signal-to-noise in the
estimate from the SED fit or by variability in the AGN emission; the
870~$\mu$m flux density has been observed to vary by $3\times$
\citep{metal08}.
In the other SEDs, we found that single blackbodies modified with
$\lambda^{-2}$ emissivity functions could be fit accurately to the
$>$100~$\mu$m data without the fit overpredicting the observed
70~$\mu$m measurement, but fits that included the 70~$\mu$m data point
did not accurately replicate the peak of the SED. No evidence is
found for the excess emission at submillimetre wavelengths sometimes
attributed to dust with $<10$~K temperatures or shallow emissivities,
although prior results had indicated that this emission would be more
prominent at $>$500~$\mu$m \citep[e.g.][]{gmjwb05, betal06, zpxkl09,
oetal10}. By applying to the data for the global SED the equation
$M_{\rm dust}=f_\nu D^2/[\kappa_\nu B(\nu,T)]$ (where $D$ is the distance,
$\kappa_\nu$ is the dust opacity from \citet{ld01}, and $B(\nu,T)$ is
the best-fitting modified blackbody), we estimated the global dust
mass to be $(3.4 \pm 0.5) \times10^7$~M$_\odot$. Given that the atomic
gas mass is $(3.64 \pm 0.18)\times10^9$~M$_\odot$ \citep{wetal08} and
the molecular gas mass is negligible in comparison
\citep[][S\'anchez-Gallego et al. in prep.]{s93}, we estimate that the
gas-to-dust ratio is $107 \pm 17$, which is within the range of
$\sim100$-200 expected for solar metallicity objects based on the
depletion of metals from the gaseous phase of the interstellar medium
or comparisons of gas column density to dust extinction
\citep[e.g.][]{w03}. Hence, this simplistic modified blackbody fit may
be a fair representation of the emission from the bulk of the dust
mass in M81, although more sophisticated modeling should not only
yield more accurate masses but also describe the emission from warmer
dust components.
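The dust-mass equation can be evaluated directly. In the sketch below the function is generic: the opacity $\kappa_\nu$ must be supplied externally (e.g. from the Li \& Draine values), so only the final gas-to-dust arithmetic reproduces a number quoted in the text.

```python
import math

h = 6.62607015e-34    # Planck constant (J s)
c = 2.99792458e8      # speed of light (m/s)
k_B = 1.380649e-23    # Boltzmann constant (J/K)
MPC = 3.0857e22       # metres per Mpc
M_SUN = 1.989e30      # kg per solar mass

def planck(nu, T):
    """Planck function B(nu, T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / math.expm1(h * nu / (k_B * T))

def dust_mass(f_nu_jy, d_mpc, kappa_m2_per_kg, nu, T):
    """M_dust = f_nu D^2 / (kappa_nu B(nu, T)), returned in solar masses."""
    f_nu = f_nu_jy * 1e-26                       # Jy -> W m^-2 Hz^-1
    d = d_mpc * MPC
    return f_nu * d**2 / (kappa_m2_per_kg * planck(nu, T)) / M_SUN

# Gas-to-dust ratio from the masses quoted in the text:
m_gas, m_dust = 3.64e9, 3.4e7                    # solar masses
print(round(m_gas / m_dust))                     # -> 107
```

The ratio of the quoted central values is $3.64\times10^9 / 3.4\times10^7 \approx 107$, matching the gas-to-dust ratio stated above.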
The SED fits along with the results from Figs.~\ref{f_img_ratio} and
\ref{f_ratiovar} imply that the 70~$\mu$m band traces dust heated by a
different source than the dust that primarily emits in the
160-500~$\mu$m bands. Although the 70/160~$\mu$m ratio exhibits
considerable scatter, the enhancements of this ratio in the spiral
arms imply that the 70~$\mu$m band may be affected by star formation
on local scales. Meanwhile, the radial variations in the
160-500~$\mu$m ratios and the SED fits suggest that $\sim20$\% of the
60~$\mu$m emission, $\sim30$\% of the 70~$\mu$m emission, and
$\sim100$\% of the $>100~\mu$m emission originates from dust heated by
evolved disc and bulge stars. This is consistent with prior results
suggesting that $\sim 5$-100\% of the 60 and 100~$\mu$m emission from
nearby galaxies originates from dust heated by evolved stars
\citep[e.g.][]{st92,wg96}. If this interpretation is correct, we
anticipate that dust emitting at 160-500~$\mu$m in other galaxies with
relatively large fractions of old stars (E-Sab galaxies) will also
have 160-500~$\mu$m colours that depend upon radius, but galaxies with
relatively large fractions of young stars (Sc-Im galaxies) will have
160-500~$\mu$m colours that may depend more on infrared surface
brightness, as heating by the evolved stellar population becomes
insignificant. The results also imply that the conversion of infrared
fluxes integrated over very broad ranges (e.g. 8-1000~$\mu$m) to star
formation rates, as done by \citet{zwcl08}, \citet{retal09}, and
\citet{ketal09}, will be accurate as long as the integrals contain a
significant amount of emission shortward of 160~$\mu$m that traces
dust heated by star formation. However, it may not be possible to
derive accurate star formation rates from dust emission measured
solely at $>$160~$\mu$m.
In conclusion, these results for M81 demonstrate how {\it Herschel}
70-500~$\mu$m data can be used to not only measure more accurate dust
temperatures and masses but also determine the dust heating sources in
nearby galaxies. Further work with data from the VNGS and other
surveys will allow us to determine whether dust traced by the
160-500~$\mu$m bands in other spiral galaxies is also heated by
evolved stellar populations and whether the relative strength of dust
heating by evolved stars varies across the Hubble sequence.
\begin{acknowledgements}
We thank A. Franceschini and E. Murphy for comments on this paper.
SPIRE has been developed by a consortium of institutes led by Cardiff
Univ. (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA,
LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm
Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC,
Univ. Sussex (UK); Caltech, JPL, NHSC, Univ. Colorado (USA). This
development has been supported by national funding agencies: CSA
(Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN
(Spain); SNSB (Sweden); STFC (UK); and NASA (USA). PACS has been
developed by a consortium of institutes led by MPE (Germany) and
including UVIE (Austria); KUL, CSL, IMEC (Belgium); CEA, OAMP
(France); MPIA (Germany); IFSI, OAP/AOT, OAA/CAISMI, LENS, SISSA
(Italy); IAC (Spain). This development has been supported by the
funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES
(France), DLR (Germany), ASI (Italy), and CICT/MCT (Spain).
\end{acknowledgements}
\bibliographystyle{aa}
\section{\label{sec:level1}Introduction}
A topological insulator (TI) is a quantum state of matter \cite{a1,a2,a3,a4,a5}, characterized by an energy gap in the bulk and conductive boundary (surface) states with a linear dispersion law. These states are protected from impurity scattering and from electron-electron interactions by time-reversal symmetry \cite{a3,a4,a5,a6,a7}. This means that electrons in these states can move along the edge of an ultra-thin film or the surface of a bulk material without energy loss, which can be exploited in ultra-fast, low-power-consumption electronics. These materials may be used to create groundbreaking electronic devices \cite{a8,a9}, as well as to reveal unusual physical effects occurring at TI/superconductor and TI/ferromagnet interfaces \cite{a10,a11}.
The first two-dimensional (2D) systems in which the TI state was predicted \cite{a1} and then experimentally observed \cite{a2} were HgTe/CdHgTe quantum wells (QWs). The TI state originates from the inverted band structure of wide HgTe QWs. Specifically, as the QW thickness $d$ increases, the lowest 2D subband in the conduction band, formed by $|\Gamma_6, \pm1/2\rangle$ and light-hole $|\Gamma_8, \pm1/2\rangle$ states and denoted the $E1$ subband, crosses at $d = d_c$ the top subband in the valence band, formed by heavy-hole $|\Gamma_8, \pm3/2\rangle$ states and denoted the $H1$ subband \cite{a1}. The inverted alignment of electronic states in wide QWs ($d > d_c$) induces spin-polarized helical states at the sample edges \cite{a1}. The existence of such edge states in HgTe QWs has been confirmed experimentally \cite{a2, a12, a13}. At the critical QW thickness $d_c$, corresponding to the quantum phase transition between a conventional semiconductor and a TI, the energy spectrum in the vicinity of the band crossing mimics massless Dirac fermions at the $\Gamma$ point \cite{a14, a15, a16}.
An inherent property of inverted HgTe QWs is their characteristic behavior in an applied magnetic field $B$, i.e., the crossing of a particular pair of zero-mode Landau levels (LLs) at a critical magnetic field $B_c$ \cite{a2, a12, a13}. Below this field, the lowest zero-mode LL has an electron-like character, although it originates from the valence band. This LL tends toward high energies with increasing magnetic field. The second zero-mode LL, which arises from the conduction band at $B < B_c$, has a heavy-hole-like character and decreases with magnetic field. In this situation, counter-propagating spin-polarized states still exist \cite{a2}, although, owing to the presence of the magnetic field and the breaking of time-reversal symmetry, these states are not robustly protected. For $B > B_c$, the band structure becomes normal and only trivial quantum Hall states can be found. Recently, the critical QW thickness has been shown to increase with temperature \cite{a17}. The latter indicates that the TI state is destroyed as the temperature increases. Later, the temperature-induced phase transition between inverted and normal band structure was confirmed by magnetotransport studies \cite{a18}.
Up to now, to discriminate between trivial insulators ($d < d_c$) and topological insulators ($d > d_c$) in HgTe QWs, especially close to the critical width, one has had to perform detailed magnetotransport investigations of gated Hall bars \cite{a12, a13, a18}. In this work, we demonstrate that such a distinction can also be made by magnetooptical measurements at different temperatures on non-processed samples. We note that previous magnetooptical studies \cite{a15, a16, a19, a20, a21, a22, a26, z1} of HgTe QWs have been performed at low temperatures only.
We perform Landau level magnetospectroscopy over a wide temperature range up to 185~K on two HgTe/Cd$_x$Hg$_{1-x}$Te QWs, which are in the TI regime at low temperatures. Our samples were grown by molecular beam epitaxy (MBE) on semi-insulating GaAs(013) substrates \cite{a27}. A CdTe buffer, a $\sim$40~nm lower Cd$_x$Hg$_{1-x}$Te barrier, the HgTe QW, and a $\sim$40~nm Cd$_x$Hg$_{1-x}$Te top barrier were grown one by one. A 40~nm CdTe cap layer was also grown on top of the structure. The Cd content $x$ in the barriers and the QW width are given in Table~\ref{tab:1}. The barriers of both samples were selectively doped with indium (on both sides of the QW), which resulted in the formation of a 2D electron gas in the QW with a concentration of a few $10^{11}$~cm$^{-2}$ at low temperatures. Typical mobility values at low temperatures are about 5${\times}10^4$~cm$^2$/V$\cdot$s. Both samples have an inverted band structure at low temperatures.
\begin{table}
\caption{\label{tab:1}Parameters of HgTe/Cd$_x$Hg$_{1-x}$Te QWs at $T = 4.2$~K.}
\begin{ruledtabular}
\begin{tabular}{lccc}
Sample&QW width (nm)&$x_{bar}$ (\%)&$n_s$ ($10^{11}$ cm$^{-2}$)\\
\hline
1 (091223-1)&8&62&1.6\\
2 (091222-1)&8&70&3.2\\
\end{tabular}
\end{ruledtabular}
\end{table}
Magnetooptical experiments were performed in pulsed magnetic fields up to 45~T at the Laboratoire National des Champs Magnetiques Intenses in Toulouse (LNCMI-T) and at the Sarov State Institute of Physics and Technology (SSIPT). The pulse duration in the LNCMI-T experiments was about 800~ms, while the pulse duration at SSIPT did not exceed 25~ms. The solenoids were immersed in a liquid nitrogen Dewar. Each setup had its own peculiarities.
In LNCMI-T, the liquid helium cryostat was placed inside the solenoid. The variable-temperature probe with a sample, a quantum cascade laser (QCL) emitting at a wavelength of 14.8~$\mu$m, and blocked-impurity-band Si detectors were inserted into the cryostat. This allowed us to perform magnetooptical measurements in the temperature range from 4.2~K to 30~K. Details about this setup are given in Ref.~\onlinecite{a28}.
In SSIPT, the samples were mounted on the cold finger in the vacuum chamber of a liquid nitrogen cryostat, together with a temperature sensor and magnetic-field induction sensors. The temperature was varied in the range of 77--185~K \cite{a29}. As a radiation source, we used a CO$_2$ laser ($\lambda = 10.6$~$\mu$m), while the detector was a HgCdTe photodiode operating at liquid nitrogen temperature. Magnetooptical measurements were made in the Faraday configuration. Additionally, to determine the electron concentration and LL filling factors $\nu$ at different temperatures, we also measured the magnetoresistance in a two-terminal geometry.
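For reference, the photon energies of the two sources follow directly from $E = hc/\lambda$. The short sketch below performs this conversion; it is a trivial numerical aid, not part of the data analysis.

```python
# Photon energy E = hc/lambda for the two laser lines used in this work.
HC_MEV_NM = 1.23984e6  # h*c in meV*nm

def photon_energy_mev(wavelength_um):
    """Convert a vacuum wavelength in micrometers to a photon energy in meV."""
    return HC_MEV_NM / (wavelength_um * 1.0e3)

e_co2 = photon_energy_mev(10.6)  # CO2 laser line, ~117 meV
e_qcl = photon_energy_mev(14.8)  # QCL line, ~83.8 meV
```

The two lines thus probe transition energies of roughly 117~meV and 84~meV, respectively.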
\begin{figure*}
\includegraphics[width=0.33\textwidth]{fig1a.eps}
\includegraphics[width=0.33\textwidth]{fig1b.eps}
\includegraphics[width=0.33\textwidth]{fig1c.eps}
\caption{\label{fig:1} (Color online) Landau levels (in the axial approximation) and band structure at $B = 0$ (insets) for $k \parallel [100]$ and $k \parallel [03-1]$ for the sample~1 at different temperatures: (a) $T = 4.2$~K, the band structure is inverted with an indirect band gap; (b) $T = 113.8$~K, a gapless state (the inset shows a Dirac cone in the vicinity of the $\Gamma$ point); (c) $T = 174$~K, the band structure is normal with a direct band gap; the conduction subband $E1$ has an electron-like character, while the valence subband is formed by the 'hole-like' level $H1$.}
\end{figure*}
To interpret the experimental results, we performed temperature-dependent band structure and LL calculations based on the 8-band \textbf{k${\cdot}$p} Hamiltonian for (013)-oriented heterostructures (see, e.g., Refs.~\onlinecite{a15, u30}) with material parameters taken from Ref.~\onlinecite{a18}. In the model, we also take into account the tensile strain in the layers arising due to the mismatch of the lattice constants of the CdTe buffer, the HgTe QW and the Cd$_x$Hg$_{1-x}$Te barriers. The calculations were performed by expanding the envelope wave functions in a basis set of plane waves and by numerical solution of the eigenvalue problem. The energies of LLs were found within the so-called axial approximation \cite{a15, u30}, while for the calculations of the dispersion curves the non-axial terms were retained. Our approach takes into account the temperature dependence of the band gap, the valence band offset, and the changes in the lattice constants and the elastic constants $C_{11}$, $C_{12}$ and $C_{44}$ of the layers with temperature \cite{a18, b31}.
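The full 8-band \textbf{k${\cdot}$p} calculation is beyond the scope of a short example, but the structure of the underlying eigenvalue problem can be illustrated with a one-band toy model: the ground state of a finite square well found numerically from the even-state matching condition. All parameter values below (effective mass, well depth) are illustrative and are not those of our heterostructures.

```python
import math

# Toy one-band quantum well: ground-state energy of a finite square well
# from the even-state matching condition
#     k * tan(k * w / 2) = kappa,
# with k = sqrt(E / c), kappa = sqrt((V0 - E) / c), c = hbar^2 / (2 m*),
# and E measured from the well bottom.  Illustrative parameters only.
C = 38.1 / 0.03    # hbar^2/(2 m*) in meV*nm^2 for an assumed m* = 0.03 m_e
W = 8.0            # well width in nm
V0 = 500.0         # well depth in meV (assumed)

def match(e):
    k = math.sqrt(e / C)
    kappa = math.sqrt((V0 - e) / C)
    return k * math.tan(k * W / 2.0) - kappa

def ground_state_energy(lo=1e-6, hi=195.0, tol=1e-9):
    """Bisection: match() is negative at lo and positive just below the
    infinite-well ground-state energy C*(pi/W)**2 ~ 196 meV."""
    assert match(lo) < 0.0 < match(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if match(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

As expected, the bound-state energy lies below the corresponding infinite-well value $c(\pi/w)^2$.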
Fig.~\ref{fig:1} provides a LL fan chart and dispersion curves for the sample~1 at three different temperatures. At low temperatures, the sample is in the 2D TI phase and the conduction band is formed by the top 'hole-like' subband $H1$. The lowest LL in the conduction band, labeled by $n = -2$, has a purely heavy-hole character $(|\Gamma_8, -3/2\rangle)$ and its energy decreases linearly with magnetic field $B$. For the notations of LLs, see Refs.~\onlinecite{a15, a19}. In contrast, the top LL in the valence band, $n = 0$, goes up in energy with magnetic field. These two LLs represent the so-called zero-mode LLs mentioned above, which are identified within a simplified approach based on the Dirac-type Hamiltonian \cite{a1}. In calculations of LLs in our samples, we applied a general scheme of the 8-band \textbf{k${\cdot}$p} Hamiltonian but neglected the bulk inversion asymmetry (BIA) effect \cite{a22}. Such an approximation implies that, for any HgTe QW in the inverted regime, the two zero-mode LLs simply cross each other at a given magnetic field $B_c$. These characteristic levels and their crossing can be easily recognized in Fig.~\ref{fig:1} for the sample~1. It is seen that in magnetic fields above 6.4~T (at $T = 4.2$~K), the sample~1 has a normal band structure, and the level $n = 0$ becomes the lowest LL in the conduction band. In models with BIA \cite{a22}, the level crossing between the zero-mode LLs at $B_c$ can be avoided. The latter gives rise to a specific behaviour of magnetooptical transitions from the zero-mode LLs in the vicinity of the critical magnetic field \cite{a15, a19, a22}. If the magnetic field exceeds $B_c$ or is significantly lower than the critical field, the effect of BIA is negligibly small. In what follows, the BIA effect is neglected.
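The behavior of the two zero-mode LLs can be reproduced with the simplified Dirac-type Hamiltonian \cite{a1}, in which their energies are linear in $B$ and cross at $B_c=\hbar M/(2e\mathcal{B})$, where $M<0$ is the mass parameter of the inverted regime and $\mathcal{B}<0$ is the curvature parameter. The sketch below uses illustrative parameter values (not fitted to our samples) to check this crossing numerically.

```python
# Zero-mode Landau levels in the simplified Dirac-type model:
#   E_up(Bz)   = M - (D + Bp) * x(Bz)   (electron-like zero mode, rises with Bz)
#   E_down(Bz) = -M - (D - Bp) * x(Bz)  (heavy-hole-like zero mode, falls with Bz)
# with x(Bz) = 2 e Bz / hbar = 2 / l_B^2.  Illustrative parameters only.
HBAR_OVER_E = 658.2   # hbar/e in T*nm^2, so l_B^2 = HBAR_OVER_E / Bz

M = -10.0    # mass parameter in meV (M < 0: inverted band structure; assumed)
BP = -700.0  # curvature parameter in meV*nm^2 (assumed, HgTe-like sign)
D = -500.0   # particle-hole-asymmetry parameter in meV*nm^2 (assumed)

def x(bz):
    return 2.0 * bz / HBAR_OVER_E   # in nm^-2

def e_up(bz):
    return M - (D + BP) * x(bz)     # increases with Bz since D + BP < 0

def e_down(bz):
    return -M - (D - BP) * x(bz)    # decreases with Bz since D - BP > 0

# Crossing field of the two zero modes: E_up(Bc) = E_down(Bc)
bc = HBAR_OVER_E * M / (2.0 * BP)   # ~4.7 T for these illustrative parameters
```

Within this model $B_c$ vanishes linearly as $M\to 0$, consistent with the temperature-driven closing of the gap discussed below.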
As seen in Fig.~\ref{fig:1}, the band gap and $B_c$ decrease with temperature $T$. At the critical temperature $T_c = 113.8$~K for the sample~1, the band gap vanishes, and the band structure mimics the dispersion of massless Dirac fermions. A further increase of $T$ opens the band gap and makes the HgTe QW a conventional semiconductor with normal band ordering, in which the conduction band has an 'electron-like' character, while the valence band at the $\Gamma$ point is formed by heavy-hole states. Thus, a temperature increase results in a qualitative transformation of the inverted band structure into the normal one.
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{\label{fig:2} (Color online) Magnetoresistance and magnetoabsorption spectra for the sample~1, obtained at different temperatures (solid lines -- 4.2~K, dotted -- 20~K, dashed -- 30~K) using 14.8~$\mu$m QCL.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{\label{fig:3} (Color online) Magnetoresistance and magnetoabsorption spectra for the sample~1, obtained at different temperatures (solid lines -- 80~K, dotted -- 102~K, dashed -- 174~K) using CO$_2$ laser emitting at 10.6~$\mu$m.}
\end{figure}
Fig.~\ref{fig:2} shows magnetic field dependences of the two-terminal magnetoresistance and transmission measured in the sample~1 by using the 14.8~$\mu$m QCL at three different temperatures. The absorption spectrum exhibits three lines denoted by $\alpha$, $\beta$ and $h$. As can be seen from the magnetoresistance, the quantum Hall plateau corresponding to the LL filling factor $\nu = 1$ occurs within the magnetic-field interval from 5 to 10~T, and all the lines are observed at higher magnetic fields, for which $\nu$ is less than unity, i.e., in the ultra-quantum limit. In this case, the Fermi level lies at the zero-mode LL with $n = 0$. The selection rules for electric-dipole transitions in the axial approximation allow electron excitation between LLs whose numbers differ by one. Therefore, the observed absorption lines correspond to the $0 \rightarrow 1$ electron transition ($\alpha$ line) between the partially occupied upper zero-mode LL and the high-lying LL with $n = 1$, and also to an electron excitation from the lower zero-mode LL with $n = -2$ into the high-lying empty LL with $n = -1$ (see Fig.~\ref{fig:1}a). The latter is called the $\beta$ line \cite{a19, a20, a21, a22, z1}. In addition to the $\alpha$ and $\beta$ lines, an intra-band electron transition from the LL with $n = -1$ in the valence band into the partially filled zero-mode LL with $n = 0$ is clearly observed. This line is designated as $h$ (see Fig.~\ref{fig:1}, cf. Ref.~\onlinecite{a21}).
\begin{figure}
\includegraphics[width=\columnwidth]{fig4.eps}
\caption{\label{fig:4} (Color online) Temperature dependence of the magnetoabsorption line positions for two wavelengths: 10.6~$\mu$m (squares) and 14.8~$\mu$m (triangles) for the sample~1. The curves stand for the calculation results; symbols are experimental data, extracted from the absorption maxima. Solid curves and closed symbols correspond to the $\alpha$ line ($0 \rightarrow 1$ transition); dotted curves and open symbols are for the $\beta$ line ($-2 \rightarrow -1$ transition).}
\end{figure}
As the temperature increases, the $\alpha$ and $\beta$ lines merge (Fig.~\ref{fig:2}). However, at high temperature, the $\alpha$ and $\beta$ lines evolve differently. Fig.~\ref{fig:3} shows magnetospectroscopy results for the sample~1 obtained with the CO$_2$ laser at 10.6~$\mu$m at different temperatures $T \geq 80$~K. As in Fig.~\ref{fig:2}, all the absorption lines are observed in magnetic fields exceeding the range of the fundamental quantum Hall plateau $\nu=1$. The latter is shifted towards higher magnetic fields due to the increased electron concentration compared with the plateau at 4.2~K. It is seen that the $\alpha$ line is observed in higher fields than the $\beta$ line (cf. Fig.~\ref{fig:1}(b, c)). The temperature increase results in a divergence of the lines: the $\beta$ line rapidly tends to low magnetic fields, while the $\alpha$ line slowly shifts toward the high-field region. In order to get a detailed picture of the temperature-dependent magnetospectroscopy, we plot the resonant magnetic fields, corresponding to the absorption maxima, as a function of temperature for the two wavelengths used in our experiments (Fig.~\ref{fig:4}). It is seen that the resonant fields for the $0 \rightarrow 1$ transition ($\alpha$ line, closed symbols) depend weakly on $T$ at high temperatures. In contrast, the resonant fields for the $-2 \rightarrow -1$ transition ($\beta$ line, open symbols) depend strongly on temperature, shifting toward low fields as the temperature increases.
\begin{figure}
\includegraphics[width=\columnwidth]{fig5.eps}
\caption{\label{fig:5} (Color online) Temperature dependence of the magnetoabsorption line positions for two wavelengths: 10.6~$\mu$m (squares) and 14.8~$\mu$m (triangles) for the sample~2. Lines stand for the calculation results; symbols are experimental data. Solid lines and closed symbols correspond to the $\alpha$ line ($0 \rightarrow 1$ transition); dotted lines and open symbols correspond to the $\beta$ line ($-2 \rightarrow -1$ transition). The inset shows typical magnetoabsorption spectra, obtained at different temperatures (solid lines -- 81~K, dotted -- 134~K, dashed -- 184~K) in pulsed magnetic fields with the CO$_2$ laser ($\lambda = 10.6$~$\mu$m).}
\end{figure}
The sample~2 has the same QW width of 8~nm but a higher cadmium content in the barriers than the sample~1. Therefore, the calculated critical temperature $T_c$ and the temperature at which the $\alpha$ and $\beta$ lines merge in the sample~2 are lower than for the sample~1 (cf. Figs.~\ref{fig:4} and \ref{fig:5}, $\lambda = 14.8$~$\mu$m). Temperature-dependent measurements for the sample~2 were carried out with the CO$_2$ laser only; those with the QCL were performed at $T = 4.2$~K (Fig.~\ref{fig:5}). Just as in the sample~1, the observed line positions correspond to the quantum limit $\nu \leq 1$. One can see that at $\lambda = 14.8$~$\mu$m and $T = 4.2$~K, the $\alpha$ and $\beta$ lines are indeed somewhat closer to each other than in the sample~1. At high temperatures $T \geq 80$~K, the experimental results at the wavelength of 10.6~$\mu$m are very similar to those for the sample~1. As is easy to see from Figs.~\ref{fig:4} and \ref{fig:5}, there is a good qualitative agreement between the experimental data and the theoretical results. The slopes of the calculated temperature dependences of the line positions are close to those observed experimentally. This indicates that the selected temperature dependences of the band parameters \cite{a18} used in the 8-band \textbf{k${\cdot}$p} Hamiltonian are adequate.
\begin{figure*}
\includegraphics[width=0.5\textwidth]{fig6a.eps}
\includegraphics[width=0.5\textwidth]{fig6b.eps}
\caption{\label{fig:6} (Color online)
Phase diagrams for the HgTe/Cd$_{0.62}$Hg$_{0.38}$Te~QW at different values of temperature (a, QW width $d = 8$~nm) and QW width $d$ (b, $T = 0$). Striped regions correspond to the inverted band structure. The red dotted curves mark the magnetic fields at which the $\alpha$ and $\beta$ lines merge ($E_\alpha = E_\beta$). The inset in the left panel shows the wavelength, at which the positions of the $\alpha$ and $\beta$ lines coincide, as a function of temperature.}
\end{figure*}
The different behavior of the $\alpha$ and $\beta$ lines (Figs.~\ref{fig:4} and \ref{fig:5}) results from the temperature effect on the band structure of HgTe QWs. Roughly speaking, at low temperatures the $\alpha$ line corresponds to an inter-band transition, while at high temperatures it becomes an intra-band transition. Since the inverted gap closes as the temperature approaches $T_c$, at fixed magnetic field the energy of the $\alpha$ transition should decrease with temperature; vice versa, at fixed excitation energy the resonant magnetic field should dramatically increase with temperature. On the contrary, the $\beta$ line at low temperature corresponds to an intra-band transition, while at high temperatures it results from an inter-band excitation. Therefore, if the magnetic field is fixed, the energy of the $\beta$ line should increase with temperature due to the band gap opening above $T_c$. Accordingly, at fixed excitation energy the resonant magnetic field for the $\beta$ line should decrease as the temperature increases.
Fig.~\ref{fig:6}a provides the phase diagram for the sample~1 at different values of magnetic field and temperature. The black solid curve shows the dependence of $B_c$ on temperature. It confines the striped region, corresponding to the inverted band structure. Above this curve, the sample~1 has a normal band structure. We note that at the critical temperature $T_c = 113.8$~K, $B_c = 0$; the latter corresponds to the gapless state with a Dirac cone in the vicinity of the $\Gamma$ point, shown in Fig.~\ref{fig:1}b. We also plot in Fig.~\ref{fig:6}a the temperature dependence of the specific magnetic field at which the $\alpha$ and $\beta$ lines coincide. This is given by the red dotted curve. Below this curve, the resonant energy of the $\alpha$ line exceeds that of the $\beta$ line. If the temperature tends to $T_c$, the energies of the $\alpha$ and $\beta$ transitions coincide at $B \rightarrow 0$. As can be demonstrated analytically, for example by means of the simplified approach \cite{a1}, the latter results from the emergence of the Dirac cone in the vicinity of the $\Gamma$ point.
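The shape of the $B_c(T)$ phase boundary can be rationalized within the same simplified Dirac-type model \cite{a1}: if the mass parameter closes linearly with temperature, $M(T)=M_0(T/T_c-1)$, the crossing field $B_c=\hbar M/(2e\mathcal{B})$ shrinks linearly and vanishes at $T_c$. The numbers below are a toy parametrization chosen only to mimic $T_c = 113.8$~K, not a fit to our data.

```python
# Toy parametrization of the phase boundary B_c(T) within the simplified
# Dirac-type model: a mass M(T) that closes linearly at T_c, together with
# B_c = hbar * M / (2 e Bp).  Illustrative numbers only (not a fit).
HBAR_OVER_E = 658.2   # hbar/e in T*nm^2
TC = 113.8            # critical temperature of sample 1, in K
M0 = 10.0             # |M| at T = 0 in meV (assumed)
BP = -700.0           # curvature parameter in meV*nm^2 (assumed)

def mass(t):
    return M0 * (t / TC - 1.0)   # negative (inverted) for t < TC

def bc(t):
    m = mass(t)
    return HBAR_OVER_E * m / (2.0 * BP) if m < 0.0 else 0.0  # in T
```

Below $T_c$ the inverted (striped) region shrinks roughly linearly; above $T_c$, $B_c = 0$ and the band structure is normal at all fields.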
The crossing of the $\alpha$ and $\beta$ lines with increasing temperature is a signature of the inverted band structure at low temperatures. It is easy to verify within the Dirac-type Hamiltonian \cite{a1} that such a crossing at a given magnetic field is related to negative values of the mass parameter $M$. The crossing is absent for positive values of $M$, i.e., for the trivial insulator phase. In Fig.~\ref{fig:6}b, we also provide the phase diagram for the HgTe/Cd$_{0.62}$Hg$_{0.38}$Te QW at zero temperature as a function of the QW width $d$. One can see that the merging of the $\alpha$ and $\beta$ lines with energies $E_{\alpha} = E_{\beta}$ (with increasing magnetic field) takes place at $d > d_c$ only. The latter corresponds to the 2D TI phase in zero magnetic field.
Let us now explain the different temperature evolution of the $\alpha$ and $\beta$ absorption lines at the wavelengths of 10.6~$\mu$m and 14.8~$\mu$m. To probe the crossing of the $\alpha$ and $\beta$ transitions, in addition to the proper range of temperatures and magnetic fields, one needs also to choose a proper frequency range. The inset in Fig.~\ref{fig:6}a shows the theoretical temperature dependence of the wavelength at which the $\alpha$ and $\beta$ transitions cross in the sample~1. It is seen that, at a given temperature, there is a short-wavelength limit for probing the coincidence of the $\alpha$ and $\beta$ absorption lines. Indeed, the wavelength of the CO$_2$ laser, $\lambda = 10.6$~$\mu$m, does not allow probing the merging of the lines; the splitting between the lines increases with temperature (see Fig.~\ref{fig:3}). The picture changes drastically for $\lambda = 14.8$~$\mu$m. It is seen that the $\alpha$ and $\beta$ transitions have the same energies at $T \approx 40$~K. In the lower temperature range, the $\alpha$ and $\beta$ absorption lines merge as the temperature increases (see Fig.~\ref{fig:2}).
In summary, we have performed a temperature-dependent magnetospectroscopy study, in pulsed magnetic fields up to 45~T, of HgTe/CdHgTe QWs with an inverted (at low temperatures) band structure by means of monochromatic sources. At low excitation energies, we have discovered a temperature-induced merging of the absorption lines corresponding to the transitions from the zero-mode LLs. Realistic temperature-dependent calculations of LLs, based on the 8-band \textbf{k${\cdot}$p} Hamiltonian, allow us to interpret such behaviour of the observed transitions as a residual signature of the low-temperature TI phase, whose fingerprint persists at high temperatures and high magnetic fields. Our results demonstrate that temperature-dependent magnetospectroscopy can be used as a tool to probe the difference between the trivial and topological insulator phases in HgTe quantum wells.
This work was supported by the Russian Science Foundation (Grant No.~16-12-10317), by CNRS through the LIA TeraMIR project, and by the Languedoc-Roussillon region via the ``Terapole Gepeto Platform''. The authors thank Vladimir Aleshkin for helpful discussions and comments on this work. S.S. Krishtopenko also acknowledges the non-profit Dynasty foundation for financial support.
\nocite{*}
\section{INTRODUCTION}
In this paper we consider the critical behavior of macroscopic systems
with surfaces, walls, or interfaces on approaching the bulk
critical point \cite{bdl,hwd}. As is well known, the critical behavior
near boundaries normally differs from the bulk
behavior. The values of the bulk and surface critical exponents
characterizing thermodynamic singularities usually are different,
and the surface exponents are not generally expressible in terms of the
bulk exponents.
There are, however, a few remarkable cases in which local surface
quantities have thermal singularities of the same form $|t|^{2-\alpha}$
as the bulk free energy \cite{bm,dde,bc}. Here $t\sim T-T_c$, where
$T_c$ is the {\it bulk} critical temperature. Examples of such
quantities are (i) the surface order parameter $m_1$
at the extraordinary and normal transitions and
(ii) the surface energy density $\epsilon_1$ at the
ordinary, extraordinary, and normal transitions.
Our use of the terms \lq extraordinary\rq\ and \lq normal\rq\ requires
some explanation. The names ordinary, special, and extraordinary
were originally introduced \cite{lr} to distinguish the surface
transitions of the $d$-dimensional semi-infinite Ising model at the
bulk critical temperature {\it in the absence
of fields breaking the
$Z\!\!\!Z_2$ symmetry of the Hamiltonian.} Which surface transition
occurs depends on the enhancement $g$ of the surface interactions
\cite{nf}. The ordinary, special, and extraordinary transitions
correspond to $g<0$, $g=0$, and $g>0$, respectively. The extraordinary
transition originally referred to the transition from a
high-temperature phase with spontaneous surface order but no bulk order
to a low-temperature phase with both surface and bulk order.
Bray and Moore \cite{bm} formulated a
phenomenological theory of the
extraordinary transition based on
(a) the assumption that instead of being {\em spontaneously} ordered
in the high-temperature phase, the surface spins could just as well
be aligned by a surface magnetic field $h_1$, i.e., for $h_1\ne 0$
and arbitrary $g$ there should also be extraordinary behavior,
(b) a local free-energy scaling hypothesis.
\noindent Bray and Moore predicted that the leading non-analytic term
of the surface magnetization $m_1$ has the form $|t|^{2-\alpha}$.
Most authors accepted (somewhat
uncritically) assumption (a) and subsequently referred to both kinds of
transitions, $h_1=0,\,g>0$ and $h_1\neq 0,g<0$, as extraordinary
transitions.
The name extraordinary may be acceptable for systems with
$h_1=0,\thinspace g>0$, but it is inappropriate for systems
with $h_1\ne 0$ and $g<0$.
In contrast to magnetic systems, the Hamiltonian of fluid
systems at bulk coexistence is not normally $Z\!\!\!Z_2$ symmetric.
The surface field $h_1$ is
generically nonzero \cite{dc}. Hence the case
$h_1\ne 0$ and $g<0$ is actually quite normal for fluid
systems \cite{fu}. It applies, in particular, to the critical
adsorption of fluids \cite{fdg,lf,d,Law}. Following a suggestion by Fisher
\cite{mef}, we refer to the case $h_1\ne 0$ as {\it
normal} and reserve the name {\it extraordinary}
for $h_1=0,\,g>0$.
According to assumption (a) the normal and
extraordinary transitions should belong to the same surface
universality class. This is indeed the case. One source of evidence
is from field-theoretic
renormalization-group (RG) analysis in $4-\epsilon$
dimensions. One can show that the asymptotic critical
behavior at both transitions
is the same to any (finite) order of RG-improved perturbation
theory \cite{ds}.
In this paper we give an alternate, more direct proof of the
extraordinary-normal equivalence. This is presented in
Sec.\ II, where the semi-infinite Ising model
with supercritical surface enhancement is mapped exactly onto
a semi-infinite Ising model with $h_1\neq 0$ and subcritical surface
enhancement by tracing out the surface spins.
In Sec.\ III we show that the surface order parameter $m_1$ at the
extraordinary and normal transitions and the energy density
$\epsilon_1$ at the ordinary, extraordinary, and normal transitions
all have leading thermal singularities $B_\pm |t|^{2-\alpha}$, with
the same critical exponent and amplitude ratio
as the bulk free energy. Diehl et al.\ \cite{dde} first predicted
the $|t|^{2-\alpha}$ singularity in $\epsilon_1$ at the ordinary
transition as a consequence of a Ward identity. Later
Burkhardt and Cardy \cite{bc} presented simple,
model-independent arguments for $|t|^{2-\alpha}$ singularities
in $\epsilon_1$ at both the ordinary and normal transitions and in
$m_1$ at the normal transition. They also showed that conformal
invariance in two-dimensional semi-infinite systems implies
$|t|^{2-\alpha}$ surface singularities in a certain class of
densities. More recently it was claimed
\cite{tsallis,oo} that the leading
singularity in $m_1$ at the extraordinary transition does
not correspond to $|t|^{2-\alpha}$ but to a discontinuity
in $\partial_t m_1$. However, this has been refuted by a detailed
field-theoretic study \cite{ds}.
The derivation in Sec.\ III is a variation of the approach
in \cite{bc}. The arguments
are particularly simple and elucidate the origin of
the bulk free-energy singularities in $m_1$ and $\epsilon_1$.
While we do not aim (nor claim) to control all
steps of our derivations in
a mathematically rigorous fashion, the
conclusions should be exact.
\section{Equivalence of the extraordinary and normal transitions}
Consider a semi-infinite $d$-dimensional hypercubic lattice of
Ising spins. There are ferromagnetic interactions with
interaction constant $K=J/k_BT$ between all the nearest-neighbor
spins except the surface pairs, which have interaction
constant $K_1=J_1/k_BT$. Denoting the surface layer by ${\cal S}$,
the rest of the system by ${\cal S}_c$, and
the spins in ${\cal S}$ and ${\cal S}_c$ by $\sigma$ and $s$,
respectively, we write the Hamiltonian as
\begin{equation}
{\cal H}\{s, \sigma\}={\cal H}_d\{s\}+{\cal H}_{d-1}\{\sigma\}+{\cal
H}_{\text{int}}\{s, \sigma\}\;, \label{Ham1}
\end{equation}
\begin{equation}
{\cal H}_d\{s\}=-K \sum_{\langle {\bf i},\,{\bf j}\rangle
\not\subset {\cal S}} s_{\bf i}\, s_{\bf j}\;,\label{Ham2}
\end{equation}
\begin{equation}
{\cal H}_{d-1} \{\sigma\}=-K_1 \sum_{\langle{\bf i},\,{\bf j}\rangle
\subset {\cal S}} \sigma_{\bf i}\, \sigma_{\bf j}\;,\label{Ham3}
\end{equation}
\begin{equation}
{\cal H}_{\text{int}} \{s, \sigma\} = -K
\sum_{\langle{\bf i},\,{\bf
j}\rangle\in {\cal S}\times{\cal S}_c}\sigma_{\bf i}\, s_{\bf j}\;,
\label{Ham4}
\end{equation}
where $\langle {\bf i},\,{\bf j}\rangle$ indicates a pair of
nearest-neighbor sites.
Being interested in phases with spontaneously
broken $Z\!\!\!Z_2$
symmetry, such as the surface-ordered, bulk-disordered phase,
we must ensure that the appropriate pure state is selected in
the thermodynamic limit. Following the conventional procedure,
we include magnetic-field terms
$-h\sum\sigma_{\bf i}-h\sum s_{\bf j}$
with $h>0$
in ${\cal H}\{s,\sigma\}$ and then
take the limit $h\to 0+$ after the thermodynamic limit.
The lower critical dimension for a surface-ordered, bulk-disordered
phase is 2. For $d>2$ the extraordinary transition
occurs on crossing the line ${ E}:\,K=K_c,\, K_1>K_1^{\text{sp}}$
in the $(K,K_1)$ plane. Here $K_c=K_c(d)$ is the (bulk)
critical value of $K$, and $K_1^{\text{sp}}(d)$ is the critical
value of $K_1$ at the special or multicritical point
\cite{bdl,hwd,bm}. (Since $K$ and $K_1$ are reduced coupling
constants, a typical approach $T \to T_c$ corresponds to $K
\to K_c$ at fixed $K_1/K$.) The surface
critical behavior across all points of ${E}$ should be the same. In
analyzing the behavior below, it will be convenient to take the
enhancement $g= (K_1-K_1^{\text{sp}})/K$ to be large.
For supercritical $K_1$ the surface spins are spontaneously ordered.
The intuitive idea underlying the extraordinary-normal equivalence
is that the ordered $\sigma$ spins subject the neighboring $s$ spins
to an effective magnetic field. To put this idea on a
firmer footing, we map the composite
system defined in Eqs. (\ref{Ham1})-(\ref{Ham4}) exactly onto
a model for the $s$ spins alone by tracing out the $\sigma$ spins.
In terms of the effective Hamiltonian ${\cal H}_{\text{eff}} \{s\}$
defined by the partial trace
\begin{equation}
e^{-{\cal H}_{\text{eff}} \{s\}} = {\text{Tr}}_{\sigma}\,
e^{-{\cal H} \{s, \sigma \}}\;,
\label{Heff1}
\end{equation}
the partition function of the original system is given by
\begin{equation}
Z^{(d)} = {\text{Tr}}_s\, {\text{Tr}}_{\sigma}\, e^{-{\cal H}
\{s,\sigma \}}
= {\text{Tr}}_s\, e^{-{\cal H}_{\text{eff}} \{s\}}\;.\label{Z}
\end{equation}
Equations (\ref{Ham1})-(\ref{Ham4}) and (\ref{Z}) imply
\widetext
\begin{equation}
{\cal H}_{\text{eff}} \{s\} = F^{(d-1)}(K_1) + {\cal H}_d \{s\}
-\ln <e^{-{\cal H}_{{\rm int}} \{ s,\sigma\}} >_{{\cal H}_{d-1}}.
\label{Heff2}\end{equation}\narrowtext\noindent
The first two terms on the right-hand side are contributed by the
uncoupled subsystems of $\sigma$ and $s$ spins, respectively.
The first term is the free energy
$-\ln{\text Tr}_\sigma\exp[-{\cal H}_{d-1}\{\sigma\}]$ of
the surface subsystem, and the
second term is the same as in Eq. (\ref{Ham2}). Expanding the third
term in powers of $K$,
we obtain
\widetext
\begin{equation}
{\cal H}_{\text{eff}} \{s \} = F^{(d-1)} (K_1) +
{\cal H}_d \{s \}- h_1
\sum_{{\bf i}\in \tilde{{\cal S}}} s_{\bf i}
- \sum_{n=2}^{\infty} {1\over n!}\; K^n\,
\sum_{\{{\bf j}_{\alpha} \in\tilde{{\cal S}}\}}
G_{{\bf j}'_1 \ldots {\bf j}'_n} (K_1)\,s_{{\bf
j}_1} \ldots s_{{\bf j}_n}\label{Heff3}\;
\end{equation}
\begin{equation}
h_1=K\,m_b^{(d-1)}(K_1)=K\,G_{{\bf i}'}(K_1)\;,\qquad
G_{{\bf j}_1'\dots{\bf j}_n'}(K_1)=
\langle\sigma_{{\bf j}_1'}\dots\sigma_{{\bf j}_n'}
\rangle^C_{{\cal H}_{d-1}}\;.\label{Heff4}
\end{equation}
\narrowtext\noindent
Here $\tilde{\cal S}$ denotes the surface layer of $s$ spins
after elimination of the layer ${\cal S}$ of $\sigma$ spins. The
sites ${\bf j}'_\alpha\in {\cal S}$ and
${\bf j}_\alpha\in\tilde{\cal S}$ are nearest neighbors. The quantities
$m_b^{(d-1)}(K_1)$ and $G_{{\bf j}_1'\dots{\bf j}_n'}(K_1)$ are the
spontaneous magnetization and $n$th cumulant of the subsystem of
$\sigma$ spins with Hamiltonian ${\cal H}_{d-1}\{\sigma\}$
in Eq.(\ref{Ham3}).
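The structure of Eqs.~(\ref{Heff3}) and (\ref{Heff4}) is easy to check numerically on a minimal example. The sketch below enumerates a small periodic chain of $\sigma$ spins (with a small explicit symmetry-breaking field standing in for the limit $h\to 0+$), traces them out exactly against a fixed configuration of $s$ spins, and compares the result with the cumulant expansion truncated at $n=2$; the truncation error is of order $K^3$. The chain length, couplings, and spin configuration are all illustrative.

```python
import itertools
import math

N = 4               # sigma spins on a small periodic chain (illustrative)
K1 = 2.0            # strongly supercritical surface coupling
H_SB = 0.05         # small symmetry-breaking field, standing in for h -> 0+
K = 0.05            # weak coupling to the fixed s spins
S = [1, -1, 1, 1]   # an arbitrary fixed configuration of the s spins

def weight(sigma):
    """Boltzmann weight exp(-H_{d-1}) for one sigma configuration."""
    e = sum(K1 * sigma[i] * sigma[(i + 1) % N] + H_SB * sigma[i]
            for i in range(N))
    return math.exp(e)

states = list(itertools.product([1, -1], repeat=N))
Z = sum(weight(s) for s in states)

def avg(f):
    return sum(f(s) * weight(s) for s in states) / Z

# Exact traced-out interaction term:  -ln < exp(K sum_i sigma_i s_i) >
exact = -math.log(avg(lambda sg: math.exp(K * sum(sg[i] * S[i]
                                                  for i in range(N)))))

# Cumulant expansion truncated at n = 2:
m = [avg(lambda sg, i=i: sg[i]) for i in range(N)]          # <sigma_i>
g2 = [[avg(lambda sg, i=i, j=j: sg[i] * sg[j]) - m[i] * m[j]
       for j in range(N)] for i in range(N)]                # connected G_ij
truncated = (-K * sum(m[i] * S[i] for i in range(N))        # surface field h1
             - 0.5 * K**2 * sum(g2[i][j] * S[i] * S[j]
                                for i in range(N) for j in range(N)))
```

Halving $K$ reduces the discrepancy between the exact and truncated values by roughly a factor of eight, as expected for an $O(K^3)$ remainder.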
The original semi-infinite system is exactly equivalent to the
semi-infinite system with one less layer and the Hamiltonian
${\cal H}_{\text{eff}}\{s\}$ in Eqs. (\ref{Heff3}) and (\ref{Heff4}).
As mentioned above, the universal features of the extraordinary
transition should be independent of $K_1$ for $K_1>K_1^{\text{sp}}(d)$.
We now study the transition
for $K_1\gg K_c(d-1)>K_1^{\text{sp}}(d)$ \cite{fp}. In this limit the
$d-1$ dimensional system of $\sigma$ spins is deep in the
low-temperature phase, and $m_b^{(d-1)}(K_1)=
1+O(e^{-4(d-1)K_1})$.
That the spontaneous ordering of the $\sigma$ spins gives rise
to a surface field $h_1$ in ${\cal H}_{\text{eff}}\{s\}$ is seen
quite explicitly in Eqs. (\ref{Heff3}) and (\ref{Heff4}).
The $n=2$ term of ${\cal H}_{\text{eff}}\{s\}$ increases the
nearest-neighbor coupling of spins $\langle{\bf i},{\bf j}\rangle$
in the surface $\tilde{\cal S}$ from the original
value $K$ to the larger value
$K+K^2\,G_{{\bf i}',{\bf j}'}(K_1)$.
Since the cumulant $G_{{\bf i}',{\bf j}'}(K_1)$ vanishes exponentially
as $K_1\to \infty$, the new nearest-neighbor coupling
is clearly subcritical for
sufficiently large $K_1$. The surface interactions
between more distant pairs and the multi-spin
interactions corresponding to $n\ge 3$ also vanish exponentially
for large $K_1$. (From the familiar low-temperature expansion
one sees that these interactions are bounded by increasingly large
powers of $\exp (-K_1)$ as either the separation or the number $n$ of
interacting spins increases.) All the terms with $n\geq 2$ in
Eq. (\ref{Heff3}) should be irrelevant \cite{remark}. On neglecting them,
${\cal H}_{\text{eff}}\{s\}$ reduces
to the standard nearest-neighbor Ising Hamiltonian with subcritical
surface couplings and a surface magnetic field $h_1\neq 0$. This
establishes the equivalence of the
extraordinary and normal transitions.
\section{Derivation of $|\lowercase{t}|^{2-\alpha}$ singularities}
\subsection{Critical behavior of $m_1$ at the extraordinary transition}
We now show that $m_1$ at the normal transition has a
leading thermal singularity $B_\pm |t|^{2-\alpha}$
with the same critical exponent and amplitude ratio as the bulk free
energy. Consider a continuum model
with a one-component order parameter $\phi ({\bf x_\parallel},z)$
defined on the $d$-dimensional block ${\cal B}$:
$0\leq x_\parallel ^i\leq M $ for $i=1,2,\dots,d-1$
and $ -L_2\leq z\leq L_1$.
Periodic and free boundary conditions are applied in
the ${\bf x}_{\parallel}$ and $z$ directions, respectively.
As shown in Fig. 1, the block
${\cal B}$ consists of an upper portion
${\cal B}_+$ with $0< z \leq L_1$, a
lower portion ${\cal B}_-$ with $-L_2 \le z < 0$,
and the interface ${\cal I}$ at $z=0$. The Hamiltonian
${\cal H}$ has a $Z\!\!\!Z_2$ symmetric part ${\cal H}_{\text {sym}}$
that describes bulk critical behavior in
the thermodynamic limit $L_1,\,L_2\,, M
\to \infty$. We choose the usual $\phi^4$ form \cite{hwd}
\begin{equation}
{\cal H}_{\text{sym}} \{\phi\} = \int_{\cal B} d^d x
\left[ \case{1}/{2}
\,(\nabla
\phi)^2 + \case{1}/{2}\,\tau \, \phi^2 + \case{1}/{4!}\, u \,
\phi^4 \right]\;.\label{Hsym}
\end{equation}
Denoting the symmetric part with critical values of $\tau$ and $u$ by
${\cal H}_{\text{sym}}^c$, we consider the complete Hamiltonian
\begin{equation}
{\cal H} \{\phi\} = {\cal H}_{\text {sym}}^c \{\phi\}
+t \,\int_{\cal B} d^d x\thinspace \phi^2
- h \,\int_{{\cal B}_+} d^d x \thinspace \phi\;. \label{Hpert}
\end{equation}
The second term corresponds to a uniform
temperature-like deviation from criticality and the
third term to a magnetic field in ${\cal B}_+$ only.
In the thermodynamic limit the total free energy $F$ of the system
has the asymptotic form
\widetext
\begin{equation}
F= A\,[L_1\,f_b (t,0) + L_2\, f_b(t,h)
+f_i(t,h) + f_s (t,h) + f_s(t,0)] + \ldots \;,\label{F}
\end{equation}
\narrowtext\noindent
where $A = M^{d-1}$ is the cross-sectional area of the system.
The first two terms are bulk contributions from ${\cal B}_-$ and
${\cal B}_+$. The next three terms are the free energies associated
with the interface at $z=0$ and the surfaces at $z=L_1$ and $-L_2$.
Now let us displace the interface upwards a small distance $\Delta L$,
increasing the height of ${\cal B}_+$ by $\Delta L$
and decreasing the height of ${\cal B}_-$ by $\Delta L$.
{}From Eq.\ (\ref{F}) the change in free energy is given by
\begin{equation}
\Delta F = A \,\Delta L\, [f_b (t,0)-f_b(t,h)] +
\Delta L \,o (A)\;.\label{DF1}
\end{equation}
The change in free energy can also be written in terms of
the corresponding change
\begin{equation}
\Delta {\cal H} = h \int_0^{\Delta L} dz
\int d^{d-1} x_{\parallel} \thinspace \phi
({\bf x}_{\parallel}, z)\;,\label{DH}
\end{equation}
of the Hamiltonian. Expressed in this way,
\begin{eqnarray}
\Delta F &=& - \ln \left \langle e^{- \Delta {\cal H}} \right \rangle
\nonumber\\
&=& A\, \Big( h \int_0^{\Delta L} dz\,\langle
\phi ( {\bf x}_\parallel, z )
\rangle + O[ ( \Delta L )^2 ] \Big)\;,\label{DF2}
\end{eqnarray}
where translational invariance parallel to the interface has been used
in performing the ${\bf x}_{\parallel}$ integration.
Here $\langle \ldots
\rangle $ indicates a thermal average with the
Boltzmann factor $e^{-{\cal
H}}$ of the original (unperturbed) system. It is
understood that the ultraviolet
singularities of the theory have been
appropriately regulated (e.g.\ by a
large-momentum cutoff), so that all required
expressions are well-defined.
Since the (regularized) profile
$\langle \phi ({\bf x}_\parallel, z) \rangle$ varies
smoothly with $z$, the integral in Eq. (\ref{DF2}) can be replaced by
$\Delta L\,\langle \phi ({\bf x}_\parallel,0) \rangle$
in the limit $\Delta L
\to 0$. In the thermodynamic limit and the
limit $\Delta L \to 0$, Eqs. (\ref{DF1}) and (\ref{DF2})
imply
\begin{equation}
h\,m_1 = f_b (t, 0) - f_b (t, h)\;,\label{m1}
\end{equation}
where $m_1 = \langle \phi ({\bf x}_{\parallel}, 0) \rangle$ is the
magnetization at the interface. The term
$f_b (t, h \ne 0)$ is regular at
$t=0$, whereas $f_b (t, 0)$ has the
leading thermal singularity $B_{\pm}
|t|^{2-\alpha}$. Consequently $m_1$ has a leading
thermal singularity with the same universal critical
exponent $2-\alpha$
and amplitude ratio $B_+/ B_-$ as the bulk free energy.
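To spell out this final step, one can decompose the bulk free energy near $t=0$ into an analytic background plus its leading singular part and insert the result into Eq. (\ref{m1}) (a sketch; $f_{\rm reg}$ denotes the regular terms and is a notation introduced here for illustration only):

```latex
f_b(t,0) = f_{\rm reg}(t) + B_\pm\,|t|^{2-\alpha}\,,
\qquad
m_1 = \frac{f_b(t,0)-f_b(t,h)}{h}
    = \frac{f_{\rm reg}(t)-f_b(t,h)}{h}
      + \frac{B_\pm}{h}\,|t|^{2-\alpha}\,.
```

The first term is analytic at $t=0$ because $f_b(t,h\neq 0)$ is, so the $|t|^{2-\alpha}$ piece carries both the universal exponent $2-\alpha$ and the amplitude ratio $B_+/B_-$ unchanged from the bulk.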
Equation (\ref{m1}) determines the magnetization at
the interface of an infinite system
with a uniform temperature $t$ and a
magnetic field $h$ in the half-space $z>0$ only.
As $t$ is lowered, there is a transition
at $t=0$ from bulk disorder to bulk order
in the lower portion of the system. Since
the net effect of the upper portion
of the system is to provide an effective
magnetic field at the interface, the interface critical behavior
should belong to the same universality class as
the normal transition in the
semi-infinite geometry, which, as shown
in Sec.\,II, is equivalent to the extraordinary
transition. Since the interface critical behavior is determined
by Eq. (\ref{m1}), the surface magnetization
$m_1$ in both the normal and extraordinary transitions
should also have leading thermal singularities with the same
critical exponent and amplitude ratio as the bulk free
energy.
\subsection{Critical behavior of $\epsilon_1$
at the ordinary and extraordinary transitions}
In a similar fashion the $|t |^{2-\alpha}$
singularity of the surface
energy density $\epsilon_1$ can be related
to the thermal singularity of
$f_b(t,0)$. Instead of the Hamiltonian (\ref{Hpert}) we consider
\begin{equation}
{\cal H} \{\phi\} = {\cal H}_{\text {sym}}^c \{\phi\}
+ t\,\int_{{\cal B}_-} d^d x \thinspace \phi ^2
+t_+\,\int_{{\cal B}_+} d^d x \thinspace \phi^2\;.
\end{equation}
Now there is no magnetic field, and the temperature deviations from
criticality $t$ and $t_+$ in ${\cal B}_-$ and ${\cal B}_+$,
respectively, are different.
Proceeding as in the previous section, we obtain
\begin{equation}
(t_+ -t)\,\epsilon_1
=f_b(t_+,0)-f_b(t,0)\;, \label{e1}
\end{equation}
where $\epsilon_1=\langle\phi^2({\bf x}_\parallel,0)\rangle$.
For fixed $t_+\neq 0$, $\epsilon_1$ clearly has
a leading thermal singularity $B_\pm |t|^{2-\alpha}$ with the
same critical exponent and amplitude ratio as $f_b(t,0)$.
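Explicitly, inserting the decomposition $f_b(t,0)=f_{\rm reg}(t)+B_\pm|t|^{2-\alpha}$ (with $f_{\rm reg}$ denoting the analytic background, a notation used here for illustration only) into Eq. (\ref{e1}) gives, for fixed $t_+\neq 0$:

```latex
\epsilon_1 = \frac{f_b(t_+,0)-f_{\rm reg}(t)}{t_+-t}
           \;-\; \frac{B_\pm}{t_+-t}\,|t|^{2-\alpha}\,.
```

The first term is analytic in $t$ near $t=0$, and the prefactor of the singular term reduces to $-B_\pm/t_+$ at $t=0$, so the critical exponent $2-\alpha$ and the ratio $B_+/B_-$ are inherited intact.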
Equation (\ref{e1}) gives the energy density
at the interface of an infinite system
with different temperatures $t_+$ and $t$
in the half-spaces $z>0$ and $z<0$, respectively,
and zero magnetic field.
For fixed $t_+>0$ the upper portion of
the system is in the high-temperature, disordered phase.
As $t$ is lowered, there is a transition at
$t=0$ from disorder to bulk order in the
lower portion of the system. Only for $t<0$
are the interface spins ordered, and
this order is driven by the bulk order.
For $t<t_+$ the net effect of the upper portion of the system
on the lower portion is to suppress $\phi$ near
the interface. All this suggests that the
interface transition with fixed $t_+>0$ belongs to
the universality class of the ordinary
transition in the semi-infinite geometry.
For fixed $t_+<0$, on the other hand,
the upper portion of the system is in the
low-temperature, ordered phase. As $t$ is lowered,
there is a transition at $t=0$
from a phase with interface order but no
bulk order in the lower portion to a
phase with both interface and bulk order.
The net effect of the upper portion of
the system is to provide an effective
magnetic field at the interface. Thus the
interface transition with fixed $t_+<0$ should belong to the
same universality class as the normal and
extraordinary transitions in the semi-infinite geometry.
Since the interface critical behavior for fixed $t_+$
is determined by Eq. (\ref{e1}), the surface energy
density $\epsilon_1$ at the ordinary, extraordinary,
and normal transitions should also have leading thermal singularities
with the same universal critical exponent
and amplitude ratio as the bulk
free energy.
\section{Concluding Remarks}
In summary we have derived bulk free energy
singularities in $\epsilon_1$ at the ordinary transition
and in $m_1$ and $\epsilon_1$ at the normal and extraordinary
transitions with simple, exact
arguments, without making the assumptions (a) and (b)
(see Sec. I) of Bray and Moore.
The predictions of leading thermal singularities
of the form $B_\pm |t|^{2-\alpha}$ in $\epsilon_1$ at
the ordinary transition and in $m_1$ at the
extraordinary and normal transitions check with
field theoretic results \cite{dde,ds} for the Ising model
in $d=4-\epsilon$. Comparable field theoretic results for
the singular behavior of $\epsilon_1$ at the
extraordinary transition are not yet available. As mentioned
above, $|t|^{2-\alpha}$ surface singularities in a certain class
of densities are also
implied by conformal invariance \cite{bc} in $d=2$.
As a check on our prediction that the $B_\pm |t|^{2-\alpha}$
singularities in surface quantities have the
same amplitude ratio $B_+/B_-$ as the bulk
free energy, we calculate the amplitude ratio for
$m_1$ at the extraordinary transition of the
Ising model from results of Diehl and Smock
\cite{ds} for the order-parameter profile in $d=4-\epsilon$.
Taking the distance from the surface to be small
in comparison with the bulk correlation length and
using Eqs.\ (47d), (48e), and (48f) in \cite{ds},
we obtain
\begin{equation}
{B_+\over B_-}={1\over 4}\,[1+O(\epsilon)]\;. \label{B+B-}
\end{equation}
This does indeed agree with the amplitude ratio \cite{dl}
\begin{equation}
{B_+\over B_-}=2^{\alpha-2}\,n\,\left[
1+\epsilon+O(\epsilon^2)\right]\;,\qquad
\alpha= {4-n\over 2(n+8)}\thinspace \epsilon + O(\epsilon^2)
\end{equation}
for the bulk free energy
of the $n$-vector model in the Ising case $n=1$.
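The agreement of the two amplitude ratios can be verified with a short numerical check (an illustrative sketch that simply evaluates the two quoted $O(\epsilon)$ expansions; it is not part of the original calculation):

```python
def bulk_amplitude_ratio(eps, n=1):
    """B+/B- of the n-vector model to first order in eps = 4 - d,
    using the expansion quoted above (Ising case: n = 1)."""
    alpha = (4 - n) / (2 * (n + 8)) * eps  # alpha = eps/6 for n = 1
    return 2 ** (alpha - 2) * n * (1 + eps)

# As eps -> 0 the Ising ratio approaches 1/4, the value found for m_1
# at the extraordinary transition from the order-parameter profile.
for eps in (0.1, 0.01, 0.001):
    print(eps, bulk_amplitude_ratio(eps))
```

For $\epsilon\to 0$ the ratio tends to $1/4$, consistent with Eq. (\ref{B+B-}); the $O(\epsilon)$ corrections differ, as expected, since Eq. (\ref{B+B-}) only fixes the leading term.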
Finally we point out that Ising spin variables are
not essential for the simple arguments of this paper.
The predictions should hold for a broad class of semi-infinite
systems with short range, ferromagnetic interactions and
second-order bulk transitions. Note, however, that the proof
of the extraordinary-normal equivalence in Sec. II does not
go through for systems with continuously broken symmetries,
due to the nonexponential decay of correlations
in the ordered phase.
\acknowledgements
We thank Erich Eisenriegler for useful discussions.
T.W.B. greatly appreciates the
hospitality of H. Wagner and coworkers
at the Universit\"at M\"unchen,
where part of this work was done, and the
support of the WE-Heraeus-Stiftung. H.W.D.\ acknowledges partial
support by Deutsche Forschungsgemeinschaft through Sonderforschungsbereich
237.
The progression of sky surveys that have benefited all areas of astrophysics has, as its logical conclusion, a survey of the entire sky at as many energies as possible. Since many of the sources in the sky are changing --- in brightness, in color, in position --- multiple epochs of these maps are needed to understand fully the nature of the objects we see.
Scientifically, there are at least three basic arguments for {\it all-sky} astrophysical surveys.
\vspace{-8pt}
\begin{itemize} \itemsep -2pt
\item Both the closest sources (e.g., exoplanets and their hosts, low-mass dwarf stars) and the most distant, high-redshift sources are distributed uniformly across the entire sky. Exoplanet research and cosmology, cornerstones at opposite ends of modern astrophysics, therefore natively rely on all-sky surveys. Whenever one seeks the best, brightest, or most interesting example of such astrophysical phenomena, it could be anywhere in the sky.
\item The Milky Way Galaxy is the single best ``model organism'' that we have for studying the inner workings of galaxy formation and evolution, as well as an exquisite laboratory in which to study the physics of stars from their birth to their end. The MW is the ultimate all-sky object, where neither Northern nor Southern hemisphere alone can possibly reveal the whole story.
\item The orbital requirements and scanning geometry of multi-year space survey missions naturally enable coverage of the entire celestial globe: from microwaves ({\it COBE}, {\it WMAP}, {\it Planck}), to the infrared ({\it IRAS}, {\it WISE}), optical ({\it Gaia}, soon {\it TESS}), UV ({\it Galex}), all the way to X-rays ({\it ROSAT}, soon {\it eROSITA}) and gamma rays ({\it Swift, Fermi}). The scientific value of these costly but valuable data sets can be greatly enhanced by ground-based observations, which naturally should then be all-sky.
\end{itemize}
\vspace{-6pt}
To date, all-sky surveys --- from space or from the ground --- have been imaging
surveys. There has never been a survey providing high-quality spectra across the complete sky\footnote{The {\it Gaia} mission's RVS takes spectra for a subset of bright targets across a limited spectral range.}. {\it The SDSS-V Project now sets out to pioneer this ``panoptic\footnote{panoptic: presenting a comprehensive or encompassing view of the whole} spectroscopy'' by providing the first homogeneous survey of multi-object spectroscopy (MOS) for millions of sources spread across the entire sky.\footnote{http://www.sdss.org/future/}}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.92\linewidth]{SDSSV_schematic_v14.pdf}
\caption{\footnotesize\linespread{1.2}
{\bf A schematic representation of SDSS-V:} an all-sky, multi-epoch spectroscopic facility and its science programs. Survey operations will be carried out in two hemispheres, at Apache Point Observatory (APO) and Las Campanas Observatory (LCO). Multi-object fiber spectroscopy will be obtained with two 2.5m telescopes, each feeding a near-IR APOGEE spectrograph (300 fibers, $R \sim 22,000$) and an optical BOSS spectrograph (500 fibers, $R \sim 2,000$); this configuration enables a sky-survey rate of $\sim$40~deg$^2$~hr$^{-1}$. Ultra-wide field integral field spectroscopy will be performed mostly with smaller telescopes at these observatories, with $\sim$2,000-fiber bundles feeding three optical spectrographs in each hemisphere. This schematic also outlines the three primary science programs: the Milky Way Mapper, drawing on both APOGEE (red) and BOSS (blue) spectra; the Black Hole Mapper, acquiring BOSS spectra of fainter targets; and the Local Volume Mapper, performing IFU mapping of the ionized ISM in the MW and nearby galaxies.
}
\label{fig:SDSSV_schematic}
\end{figure}
Furthermore, many sources in the sky (perhaps most, if one is looking closely enough) are changing with time, moving through space and/or changing flux and color for a wide range of interesting physical reasons. Examples of periodic phenomena include planets that occult their hosts, stars that oscillate, and close binary components that deform each other. Transient phenomena encompass the incessant ringing and death throes of stars, magnetic flaring, and variations in black hole accretion rates.
The time domain has been recognized as one of the great astrophysical frontiers of our time; this interest is reflected in the enormous investments in the space missions {\it Kepler}, {\it Gaia}, {\it TESS}, and {\it PLATO}, and the ground-based surveys PTF, PS1, ZTF, and LSST. At heart, all of these projects are time-domain imaging surveys. There has not been a comparably coherent effort to systematically explore and exploit the time domain with spectroscopy. {\it The second key aspect of SDSS-V involves expanding its panoptic spectroscopy to include multi-epoch survey spectroscopy.}
Further, many astrophysical phenomena do not lend themselves to being parsed into sets of discrete sources to be observed by multi-object spectroscopy. These phenomena call for contiguous spectral mapping, or integral-field spectroscopy (IFS), over a wide range of angular and physical scales. Pushing IFS to the all-sky regime is even more daunting than all-sky MOS, as evidenced by the fact that the largest contiguous (optical) IFS maps of the sky cover only 0.001\% of it.
{\it SDSS-V's third central goal is to bring IFS to a regime of mapping an appreciable portion ($>$3000~deg$^2$, $\sim$8\%) of the sky with optical data-cubes.}
SDSS-V will enable high-quality, near-IR and optical MOS observations of over $6\times 10^6$ objects across the entire sky, with homogeneous multi-epoch spectroscopy for about one million of them, and it will carry out ultra-wide-field IFS mapping across more than 3000~deg$^2$ of the sky. These advances can only be achieved with a suitable combination of hardware and survey strategy. They require wide-field telescopes and the possibility to re-acquire new spectroscopic targets rapidly (e.g., every 15~minutes), with a focus on bright sources for MOS and on emission lines for IFS. SDSS-V will meet these requirements by building on the infrastructure, instrumentation, survey operational expertise, and collaboration heritage of the Sloan Digital Sky Surveys I-IV \citep[SDSS;][]{York_2000_sdss1,Eisenstein_etal_2011,Blanton_etal_2017}, as well as constructing new hardware based on proven technology (Figure~\ref{fig:SDSSV_schematic}).
This extensive survey infrastructure will enable the first panoptic spectroscopic survey, comprising three primary survey programs (Section~\ref{sect:TheThreeSurveys}). The {\it Milky Way Mapper} (MWM; Section~\ref{sec:mwm}) will provide a) a global spectroscopic map of the MW, using near-IR MOS concentrated at low Galactic latitudes; b) a multi-epoch stellar astrophysics survey, focused on interesting targets of {\it Gaia} and TESS; and c) a multi-epoch survey of young, massive stars throughout the Galaxy. The {\it Black Hole Mapper} (BHM; Section~\ref{sec:bhm}) will focus on long-term, time-domain studies of AGN, including direct measurement of black hole masses and changing-look quasars, and on the optical characterization of {\it eROSITA} X-ray sources. The {\it Local Volume Mapper} (LVM; Section~\ref{sec:lvm}) will provide the first integral field spectral map, spanning the full optical window, of the bulk of the MW disk, the Magellanic Clouds, the Andromeda Galaxy, and other galaxies in the Local Volume. Table~\ref{tab:SDSSV_program_table} presents the basic survey parameters of the three {\it Mappers}.
\begin{table}[h]
\large
\scalebox{0.75}{
\begin{tabular}{|C{3.0cm}||L{3.5cm}|m{3.5cm}|L{5cm}|m{5cm}|}
\hline
\rowcolor{SkyBlue}
{\bf Program} & {\bf Science Targets} & {\bf N$_{\rm Objects}$ and/or \newline Sky Area} & {\bf Primary Spectral Range and Hardware} & {\bf Primary Science Goals} \\
\hline
\hline
{\bf M}ilky {\bf W}ay {\bf M}apper (MWM) & Stars across the Milky Way & $>$6M stars; all-sky & IR; APOGEE ($R\sim 22,000$) with fiber-positioning system & Understanding the formation of the Milky Way and the physics of its stars \\
\hline
{\bf B}lack {\bf H}ole {\bf M}apper (BHM) & Primarily supermassive black holes & $>$400,000 sources; all-sky & Optical; e.g., BOSS ($R\sim 2000$) with fiber-positioning system & Probing black hole growth and mapping the X-ray sky \\
\hline
{\bf L}ocal {\bf V}olume {\bf M}apper (LVM) & ISM \& stellar populations in the MW, Local Group, and nearby galaxies & $>$25M contiguous spectra over 3,000 deg$^2$ & Optical; new integral field spectrographs covering 3600-10000\AA\, at $R\sim 4000$ & Exploring galaxy formation and regulation by star formation; feedback, enrichment, \& ISM physics \\
\hline
\end{tabular}
}
\caption{\footnotesize\linespread{1.2}{
A summary of the SDSS-V Mapper programs: Milky Way Mapper, Black Hole Mapper, and Local Volume Mapper.
}}
\label{tab:SDSSV_program_table}
\end{table}
In Section~\ref{sect:SurveyImplementation} we describe the survey implementation, while the upcoming preparatory developments and the collaboration building is discussed in Section~\ref{sect:DevelopmentCollaboration}. We provide an outlook of the survey prospects in Section~\ref{sec:future}.
\section{The SDSS-V Mapper Components}\label{sect:TheThreeSurveys}
\subsection{Milky Way Mapper}
\label{sec:mwm}
The ecosystem of stars, gas, dust, and dark matter in large galaxies like the MW has been shaped over billions of years by a variety of physical processes that operate across an enormous range of physical scales. Despite this complexity, the galaxy population we observe today is remarkably ordered, even though galaxy masses span from hundreds of stars to a hundred billion stars.
Explaining how such regularity emerges in a cosmological context from such complex and varied physics is a central challenge of modern astrophysics. The Milky Way Mapper (MWM) survey will exploit our unique perspective within our Galaxy to address this issue by creating a unique global Galactic map that encompasses the evolutionary record contained in its stars and interstellar material (ISM).
\begin{figure}[htbp]
\centering
\includegraphics[trim=0in 3.25in 0in 3.75in, clip, width=\textwidth]{sdss4_vs_sdss5_target-density.png}
\caption{\footnotesize\linespread{1.2}
{\bf Evolution of SDSS on-sky target density:} Left: SDSS-IV field map, in Galactic coordinates. The colored dots show regions of the sky targeted by SDSS-IV's MaNGA, eBOSS, and APOGEE-2 (orange) and APOGEE-2 only (yellow, green, and blue). There are no data where the background image is visible. Right: Density map of SDSS-V's spectroscopic targets (objects per square degree). The analysis of the data from the sparse but deep sampling provided by earlier generations of SDSS allows us to exploit new technologies and analysis techniques to cover the entire sky contiguously with spectra in SDSS-V.
}
\label{fig:SDSSV_programs}
\end{figure}
To truly utilize our MW as a galaxy model organism, surveys must trace the entire hierarchy of structure within the Galaxy with maps that are global, contiguous, and densely sampled throughout the (largely dust-obscured) disk- and bulge-dominated regions of the stellar MW. High quality spectra of stars are rich with information about their basic physical parameters, including their ages, chemistry, and kinematics. Interstellar spectral lines probe the composition and dynamics of the MW's gas and dust, from which new stars are forming. These properties provide the best approaches to quantitatively test models of the most uncertain galaxy formation physics \citep[][]{RixBovy2013,Gerhard2016}.
Within the MWM program, the Galactic Genesis Survey (GGS) will produce the first spectroscopic stellar map that is contiguous, densely sampled, and all-sky but focused on the low Galactic latitudes where most stars lie, and which includes detailed information on {\it each} star and its foreground ISM. With its near-IR, multi-epoch spectroscopy through the entire Galactic plane, SDSS-V will also significantly expand the spectroscopic census of young stars in the MW, characterizing their masses, ages, multiplicity, etc., thus painting a global picture of the ``recent Galaxy.''
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{apogee_vs_mwm_sm.png}
\caption{\footnotesize\linespread{1.2}
{\bf Evolution of SDSS in-plane Galactic target density:} Midplane target surface density of the recent APOGEE DR14 catalog (left) and MWM's Galactic Genesis Survey (GGS; right). The maps show a face-on schematic of the Milky Way ({\it credit: NASA/JPL-Caltech/R. Hurt}) beneath target density contours. The Sun is located 8~kpc from the center of the Galaxy, at ($X, Y = -8.0, 0.0$). Light gray contours show areas with observed/anticipated stars at surface densities $<$10 per (100 pc)$^2$; colored contours follow the colorbar. These contours only contain stars within 500~pc of the midplane, summing to $1.5 \times 10^5$ in APOGEE DR14 and $3.6 \times 10^6$ stars in GGS. For APOGEE, we show stars with distances reported in the APOGEE DR14 Distance Value Added Catalog, which represent $\sim$95\% of all main survey targets. We note that ongoing APOGEE-2 observations will fill in the fourth quadrant of the Galaxy. Distance distributions for SDSS-V targets were calculated using a mock GGS observation of the Galaxia model of the MW \citep{Sharma2011} and a 3D extinction map \citep{Bovy2016}.
}
\label{fig:midplane_GG}
\end{figure}
MWM will take advantage of several factors to produce this remarkable data set: {\it Gaia} photometry and astrometry, the all-sky coverage of SDSS-V, the rapid target allocation enabled by the robotic fiber positioner (Section~\ref{sec:infrastruct_instrument}), the APOGEE spectrographs' IR wavelength coverage and resolution, the large FOV of the APO and LCO telescopes (Section~\ref{sec:infrastruct_instrument}), and novel spectral analysis techniques. GGS's rapid, wide-angle survey mode is enabled by its focus on bright ($H<11$), yet intrinsically luminous (and thus distant) sources. These include variable star distance indicators such as Cepheids and Mira variables \citep[cataloged by, e.g., VVV;][]{Minniti_2010_VVV}, which fall within GGS's magnitude limits even when in the disk beyond the bulge. GGS's immediate product will be a Galactic census of stellar orbits, ages, and detailed abundances as a function of three-dimensional position across the entire Milky Way disk and bulge. GGS will collect spectra from more than 5 million stars across the full sky, most of them from a contiguous area of $\gtrsim$3,000 deg$^2$ in the Galactic midplane (Figures~\ref{fig:SDSSV_programs} and \ref{fig:midplane_GG}; Table~\ref{tab:ggsa_classes}). These data will provide the means to address numerous long-standing questions, including the dominant formation mechanisms of the MW, hierarchical accretion, radial migration, and the place of the MW in a cosmological context.
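The ``bright yet intrinsically luminous'' targeting logic can be made concrete with a distance-modulus estimate (a sketch; the absolute magnitudes and the extinction value below are illustrative assumptions, not survey parameters):

```python
def limiting_distance_kpc(abs_mag_H, mag_limit=11.0, extinction_H=0.0):
    """Distance (kpc) at which a star of absolute H-band magnitude
    abs_mag_H fades to the survey limit H = mag_limit, seen through
    extinction_H magnitudes of dust."""
    mu = mag_limit - abs_mag_H - extinction_H  # available distance modulus
    return 10 ** (mu / 5 + 1) / 1000.0

# A luminous M giant (M_H ~ -5) vs. a red-clump giant (M_H ~ -1.5),
# each viewed through one magnitude of in-plane H-band extinction:
print(limiting_distance_kpc(-5.0, extinction_H=1.0))   # ~10 kpc
print(limiting_distance_kpc(-1.5, extinction_H=1.0))   # ~2 kpc
```

Even through substantial dust, luminous giants at $H<11$ remain visible across the disk and into the bulge, which is what makes the contiguous midplane map of Figure~\ref{fig:midplane_GG} feasible.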
\begin{figure}[!hptb]
\centering
\includegraphics[width=\linewidth]{hr.png}
\vspace{-1cm}
\caption{\footnotesize\linespread{1.2}
{\bf Stellar astrophysical targets in the MWM:} The $(J-K)$ color and absolute $J$ magnitude of 0.001\% of the $\sim$1 billion stars that {\it Gaia} will observe, color-coded by their expected ages based on a Besan\c{c}on Galaxy model \citep{robinetal2003}. The wide range of ages of the red giants provides a perfectly-suited exploration space for the MWM (Section~\ref{sec:mwm}), given that we can determine their asteroseismic-calibrated ages. The luminous hot stars in the upper left ionize the gas seen by the LVM in the Milky Way (Section~\ref{sec:lvm}), and the cool dwarfs on the lower right yield prime hunting ground for rocky planets in the habitable zone, whose host stars must be carefully characterized. The stars marked in bright colors represent those that are within 100 pc of the Sun and therefore part of the MWM's solar neighborhood census. The lowest-mass stars will be a major component of this census, especially since subsequent {\it Gaia} catalogs will have distances for much fainter stars than Data Release 1 does. The gray points with $M_J>10$ mark white dwarfs with {\it Gaia} DR1 distances; the number of these ``cinders'' of low-mass stars will increase by a factor of 10$^5$ as {\it Gaia} continues. With knowledge of the white dwarf initial mass-final mass relation, ages can be determined for these as well.}
\label{fig:hr}
\end{figure}
The rigorous interpretation of GGS's data will ultimately rely on understanding the complete lifecycle of stars from birth to death, including multi-star systems \citep[e.g.,][]{Raghavan2010,DK2013}. However, many questions remain about these lifecycles: What determines the mass and multiplicity of stellar systems? How does multiplicity affect stellar evolution? What is the relationship between stellar and planetary properties \citep[e.g.,][]{brugamyeretal2011,Adibekyan2013}? How does nucleosynthesis proceed throughout the lifetimes and death throes of different kinds of stars?
\begin{figure}[!hptb]
\centering
\includegraphics[trim=0.1in 0.4in 0.3in 0.6in, clip, width=0.9\textwidth]{binary.png}
\caption{\footnotesize\linespread{1.2}
{\bf SDSS-V's stellar companion mass sensitivity:} The minimum companion mass SDSS-V can detect in systems with a range of periods and primary masses, in the context of several scientifically interesting regimes. For example, the left-most region has a high minimum mass because systems there have undergone common envelope evolution, and there must be at least one white dwarf to survive.
We have indicated the area where stars becoming red giants swallow their closest companions. Faint white dwarfs will need to have $\sim$15~km~s$^{-1}$ RV variability to be detected by the optical BOSS spectrographs. This precision is well matched to the RV amplitudes of several hundred km~s$^{-1}$ in double-white dwarf binaries. The ``WD Line'' shows the minimum mass of an unseen companion around a WD, and we can clearly detect neutron star and black hole companions out to a period of many months. The ``{\it Gaia} Line'' shows the minimum secondary mass detectable by {\it Gaia} around a 0.2 M$_{\odot}$ star at a distance of 250 pc. This illustrates how {\it Gaia} and SDSS-V complement each other:
astrometry becomes increasingly powerful at long periods, spectra at short periods.
Gray areas will be explored statistically, but we will not have full orbital information for systems in these regions.}
\label{fig:binary}
\end{figure}
The key data to understanding such outstanding questions of stellar astrophysics are long duration, high precision, time-series photometry and spectroscopy of large, carefully-selected samples of single- and multiple-star systems. In particular, asteroseismology and {\it absolute} flux measurements \citep[e.g., from {\it Kepler} and {\it TESS}, and {\it Gaia}, respectively; see][]{Campante_2016_TESSseismology,Huber_2017_TGASseismology,Stevens_2017_GaiaAbsFlux} have recently emerged as game-changers in terms of deepening our understanding of stellar astrophysics---literally, by letting us peer past the previous limit of stellar photospheres. SDSS-V will use its panoptic spectral capabilities for a comprehensive investigation of Stellar Astrophysics (SA) and of Stellar System Architecture (SSA) over a range of 10$^4$ in the masses of stars that belong to binaries, 0.5 hours to $>$12 years in orbital period, and a few pc to $>$15 kpc in distance from the Sun. The overarching goal of these programs is to consistently and comprehensively measure mass, age, chemical composition, internal structure, rotation, and the presence of companions for vast samples of stars across the color-magnitude diagram (Figure~\ref{fig:hr}).
The SA program will include observations necessary for precise age measurements of giant stars with asteroseismic detections \citep[e.g.,][]{Martig2016}; observations of massive stars to constrain the true relationships between masses, radii, rotation, and internal mixing; observations of several thousands of white dwarfs to improve our understanding of their evolution and mass return to the ISM; observations of deeply embedded stellar clusters, caught in the act of forming stars at numerous stages; and observations of a volume-limited sample of stars within $\sim$100~pc. The SSA program will target tens of thousands of multiple-body systems across a diverse range of Galactic environments, with a wide range of periods and primary masses (Figure~\ref{fig:binary}). SSA seeks to explain the dependence of multiplicity on stellar mass and environment, the frequency and properties of binary systems with compact objects that give rise to explosive events and gravitational waves \citep[e.g.,][]{LIGO_2017_GW170817}, and the effect of host system composition on exoplanet frequency and habitability. See Table~\ref{tab:ggsa_classes} for more details on targeting for the SA and SSA programs.
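The detection limits sketched in Figure~\ref{fig:binary} follow from the Keplerian radial-velocity semi-amplitude; below is a minimal sketch of that calculation (the $0.6\,M_\odot$ primary, one-day period, and 15~km~s$^{-1}$ floor are illustrative choices, not survey specifications):

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
DAY = 86400.0      # seconds per day

def rv_semi_amplitude(m1_msun, m2_msun, period_days, inc_deg=90.0, ecc=0.0):
    """Keplerian RV semi-amplitude K of the primary, in m/s."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    p = period_days * DAY
    return ((2 * math.pi * G / p) ** (1 / 3)
            * m2 * math.sin(math.radians(inc_deg))
            / (m1 + m2) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

def min_companion_mass(m1_msun, period_days, k_limit_ms):
    """Smallest edge-on companion mass (Msun) with K >= k_limit, by bisection."""
    lo, hi = 1e-6, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rv_semi_amplitude(m1_msun, mid, period_days) < k_limit_ms:
            lo = mid
        else:
            hi = mid
    return hi

# A 0.6 Msun white-dwarf primary in a 1-day orbit, with a 15 km/s RV floor:
print(min_companion_mass(0.6, 1.0, 15e3))  # ~0.05 Msun, brown-dwarf regime
```

Because $K\propto P^{-1/3}$, the minimum detectable mass grows with period, which is why astrometry ({\it Gaia}) takes over at long periods while spectroscopy dominates at short ones.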
\begin{table}[!h]
\scalebox{0.8}{
\begin{tabular}{ |p{1.9cm}|p{5.3cm}|p{1.5cm}|p{1.2cm}|p{8.0cm}| }
\hline
\multicolumn{5}{|c|}{{\bf Galactic Genesis \& Stellar Astrophysics Targeting Classes}} \\
\hline
{\bf Instrument} & {\bf Selection} & {\bf N$_{\rm Targets}$} & {\bf N$_{\rm Epochs}$} & {\bf Comments} \\
\hline
\multicolumn{5}{|l|}{\textcolor{violet}{Galactic Genesis Survey:} mapping the dusty disk}\\
\hline
APOGEE& $H$$<$11, $G-H>3.5$ & 4,800,000 & 1 & dust-extinguished disk \\
APOGEE & $|z| < 200$ pc, $H$$<$11, d$<$5~kpc & 125,000 & 1 & to complete high-res ISM map \\
\hline
\multicolumn{5}{|l|}{\textcolor{violet}{Binaries with Compact Objects:} enumerating the populations of binaries with white dwarfs, neutron stars, or black holes,} \\
\multicolumn{5}{|l|}{selected by variability} \\
\hline
BOSS & PTF, ZTF, {\it Gaia} variability & 30,000 & 3 & binaries with WDs, NSs, and BHs \\
BOSS & {\it Gaia} parallaxes& 30,000 & 1 & wide WD+MS/RGB binaries \\
\hline
\multicolumn{5}{|l|}{\textcolor{violet}{Solar Neighborhood Census:} observing all stars within 100 pc, giving the best probe of low-mass stars, whether in single or} \\
\multicolumn{5}{|l|}{binary systems}\\
\hline
APOGEE, BOSS & \multirow{2}{*}{d$<$100 pc, $G<20$, $H<12$} & \multirow{2}{*}{400,000} & \multirow{2}{*}{2} & \multirow{2}{*}{1000$\times$ increase in volume \& stars} \\
\hline
\multicolumn{5}{|l|}{\textcolor{violet}{White Dwarf Chronicle:} using white dwarfs and their evolved companions to measure the SFH and age-metallicity relation } \\
\hline
BOSS & $G<20$ & 300,000 & 3 & 15$\times$ increase in sample size\\
\hline
\multicolumn{5}{|l|}{ \textcolor{violet}{TESS Exoplanet Host Candidates:} observing all TESS short-cadence targets in the CVZs } \\
\hline
APOGEE & $H \leq 13.3$ & 300,000 & 1--8 & all short-cadence targets \& planet hosts\\
\hline
\multicolumn{5}{|l|}{ \textcolor{violet}{Binaries Across the Galaxy:} measuring environmental dependence of binary fraction in the disk, bulge, halo, and stellar} \\
\multicolumn{5}{|l|}{clusters; probing the brown-dwarf desert beyond solar-type stars }\\
\hline
\multirow{2}{*}{APOGEE} & $H$$<$13.4, N$_{\rm Epoch} \geq 6$ by the start of SDSS-V & \multirow{2}{*}{60,000} & \multirow{2}{*}{6--18} & gives orbits with 24--40 epochs for all targets with long APOGEE baselines \\
\hline
\multicolumn{5}{|l|}{\textcolor{violet}{{\it Gaia} Astrometric Binaries:} characterizing rare systems that have good astrometric orbits but limited other information, } \\
\multicolumn{5}{|l|}{from {\it Gaia}'s sample of $>10$ million stars} \\
\hline
APOGEE, BOSS & $d<3$ kpc & 200,000 & 1 & rare types of systems \\
\hline
\multicolumn{5}{|l|}{\textcolor{violet}{TESS Red Giant Variability:} measuring spectroscopic properties for red giants in TESS that have seismic and/or granulation} \\
\multicolumn{5}{|l|}{lightcurve signatures} \\
\hline
APOGEE & $H<12.5$ & 250,000 & 1 & stars with at least 80 days of TESS observation \\
\hline
\multicolumn{5}{|l|}{\textcolor{violet}{Massive, Convective Core Stars:} combining dynamic and asteroseismic measurements of binary OBAF stars in the TESS CVZs} \\
\multicolumn{5}{|l|}{and characterizing their multiplicity} \\
\hline
APOGEE & \multirow{2}{*}{$H<12$} & 200,000 & 2 & detection of single vs. binary systems\\
APOGEE & & 500 & 25 & $>$10$\times$ increase in current sample size \\
\hline
\multicolumn{5}{|l|}{\textcolor{violet}{Young Stellar Objects:} quantifying the stellar populations in star-forming regions, including identifying sources of ionizing} \\
\multicolumn{5}{|l|}{radiation and characterizing the binary frequency} \\
\hline
APOGEE & $H<12$, $d<1$ kpc & 20,000 & 12 & nearby star-formation regions \\
APOGEE & $H<12$ & 3,500 & 8 & high-mass star-formation regions\\
APOGEE & $H<12$, $|b|<2^\circ$ & 10,000 & 2 & massive young stars in the Galactic Plane \\
APOGEE & $H<13$ & 10,000 & 2 & Central Molecular Zone \\
\hline
\end{tabular}
}
\caption{\footnotesize\linespread{1.2}{Current targeting strategy for the subsamples in the GG, SA, and SSA components of the MWM (Section~\ref{sec:mwm}). Note that the targeting details are still being optimized.}}
\label{tab:ggsa_classes}
\end{table}
\subsection{Black Hole Mapper}
\label{sec:bhm}
Quasars/AGN\footnote{Historically, ``quasars'' and ``Active Galactic Nuclei (AGN)'' describe different classes of objects, but here we use them interchangeably in recognition of the fact that they both describe accreting supermassive black holes \citep[e.g.,][]{Merloni_2016}.} are among the most luminous objects in the Universe. Powered by accretion onto supermassive black holes (SMBHs) at the centers of galaxies, quasars are beacons marking and tracing the growth of black holes across cosmic distance and time. The tight correlations between the masses of these central SMBHs and the properties of their hosts \citep[e.g.,][]{Kormendy_Ho_2013} demonstrate a clear connection between the formation of the stellar component of a galaxy and the growth of its central BH. This connection means that quasar studies are not only critical for understanding SMBHs and their accretion physics, but are also closely linked to galaxy formation and evolution being explored by SDSS-V's MWM (Section~\ref{sec:mwm}) and LVM (Section~\ref{sec:lvm}).
\begin{figure}[hb!]
\centering
\includegraphics[width=1.0\linewidth]{agn_schematic_gz.png}
\caption{\footnotesize\linespread{1.2}
{\bf Schematic of the innermost regions around a quasar's
central supermassive black hole (BH):} the X-ray corona, accretion disk, and broad-line region (BLR). SDSS-V will explore the physics of supermassive BH accretion and dynamics with three parallel approaches: reverberation mapping, {\it eROSITA} follow-up, and multi-epoch spectroscopy (top three panels). Top left: an example of time delays \citep{Pei_etal_2017} between UV/optical continua from the accretion disk and emission line flux from the BLR, which yields the BH mass. Top center: the {\it eROSITA} X-ray telescope that, combined with SDSS-V spectra, will conduct a census of X-ray/optical properties for $>10^5$ quasars. Top right: an example of multi-epoch spectra \citep{Liu_etal_2014} that reveal a marked change in the broad-line profile for a quasar over a rest-frame time baseline of 8.3 years. This change, similar to those the BHM will provide for large numbers of quasars, is a probe of dynamical processes within the BLR.}
\label{fig:agn_schematic}
\end{figure}
Quasar variability, which can occur across the entire electromagnetic spectrum and on time scales of hours to decades, encodes information on the structure, dynamics, and evolution of emitting regions, even in areas too small to resolve spatially with any telescope for the foreseeable future. Nearly all quasars also produce energetic X-ray emission, largely from these inner regions close to the SMBH, that is generally less affected by intervening obscuration and thus provides diagnostics complementary to those at longer wavelengths. Observational tests of BH/quasar theory therefore require three primary measurements: precise mass constraints, multi-wavelength SEDs, and a detailed characterization of variability (Figure~\ref{fig:agn_schematic}). SDSS-V's Black Hole Mapper (BHM) will provide these measurements for a large sample of quasars by adding wide-area, multi-epoch optical spectroscopy (Table~\ref{tab:bhm_target_classes}) to the current and upcoming time-domain optical imaging projects and to the next generation of X-ray surveys (e.g., ZTF, LSST, and {\it eROSITA}).
\begin{table}[h]
\scalebox{0.9}{
\begin{tabular}{ |m{6.0cm}|m{3.8cm}|m{3.0cm}|m{1.4cm}|m{1.4cm}| }
\hline
\multicolumn{5}{|c|}{{\bf SDSS-V Black Hole Mapper Targeting}} \\
\hline
{\bf Science Goals} & {\bf Primary Selection} & {\bf Density [deg$^{-2}$]} & {\bf N$_{\rm targets}$} & {\bf N$_{\rm epochs}$}\\
\hline
\hline
Reverberation mapping, \newline BH masses & Optical QSOs, $i<20$ & 30--50 & 1,500 & 174 \\
\hline
BH accretion and outflow astrophysics, changing-look quasars & Optical QSOs, $i<19$ & 10 & 25,000 & 3--13 \\
\hline
{\it eROSITA} follow-up, AGN, X-ray binaries, galaxy clusters & $f_{\rm X-ray} \geq 2.5\times 10^{-14}$ erg~s$^{-1}$~cm$^{-2}$, $i<21.5$ & 20--50 & 400,000 & 1--3 \\
\hline
\end{tabular}
}
\caption{\footnotesize\linespread{1.2}
Scope of the BHM program, to be carried out primarily with the optical BOSS spectrographs in both hemispheres.
}
\label{tab:bhm_target_classes}
\end{table}
The past few decades have witnessed the success of using quasar time-domain variability to constrain basic models of quasars. A prime example is reverberation mapping\footnote{In RM, we measure the time delay between variability in the ionizing continuum from the accretion disk and its ``echo'' from the BLR \citep[e.g.,][]{Blandford_McKee_1982,Peterson_1993}.} (RM) of the broad emission line region (BLR). RM delays measure the typical sizes of the BLR and, when combined with the velocity width of the broad emission lines, allow a virial estimate of the BH mass, the most fundamental of all BH parameters. Large, representative samples of quasars with robust spectroscopic variability studies have not yet been assembled \citep[e.g.,][]{Vestergaard_2011}, and the power of {\it spectral} variability to constrain quasar models has been insufficiently explored.
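Written out, the virial estimate has a simple form (a schematic expression, standard in the RM literature rather than specific to SDSS-V): the RM delay $\tau$ gives the BLR radius $R_{\rm BLR}\approx c\tau$, which combines with the broad-line velocity width $\Delta v$ as
\[
M_{\rm BH} \;=\; f\,\frac{c\,\tau\,(\Delta v)^2}{G},
\]
where $f$ is a dimensionless virial factor of order unity that absorbs the unresolved geometry and kinematics of the BLR and dominates the systematic uncertainty of RM-based masses.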
SDSS-III and -IV demonstrated the potential of this approach \citep[e.g.,][]{Shen_etal_2015a,Grier_2017_RMsdss4}, and SDSS-V's BHM will exploit this potential on ``industrial scales.'' Spectroscopic RM, sampling hundreds of epochs to determine precise BH masses, will be performed for $\sim$1,000--1,500 quasars with a range of redshifts ($0.1<z<4.5$) and luminosities ($L_{\rm bol}\sim 10^{45}-10^{47}\,{\rm erg\,s^{-1}}$), a $\sim$25$\times$ increase over the historical sample of nearby, low-luminosity AGN with RM-determined masses. With a more modest number of epochs (a few to a dozen per target, yielding final baselines spanning months to a decade), BHM will also characterize the optical spectral variability of approximately 25,000 quasars, illuminating the astrophysics of SMBH accretion disks, dynamical changes in the BLRs, signatures of binary BHs, and the properties of quasar outflows. In addition, studies of ``changing-look'' AGN (in which the broad lines around the AGN either appear or disappear; Figure~\ref{fig:clagn}) comprise a burgeoning field that challenges standard accretion disk theory \citep[e.g.,][]{LaMassa_2017_changinglookAGN}; many of these intriguing sources will also be discovered in the BHM's repeat spectroscopy program.
\begin{figure}[!ht]
\centering
\includegraphics[trim=0cm 1cm 0cm 0cm, clip, width=0.95\linewidth]{RunnoeAS4figs_gz.png}
\vspace{-0.2in}
\caption{\footnotesize
{\bf Spectral comparison of two epochs for a ``changing look'' quasar
(CLQ):}
(a) SDSS~J101152.98+544206.4, identified in the TDSS \citep[][]{Runnoe_etal_2016}, is
a $z=0.246$ quasar that dimmed by $\sim$0.5~mag over about 500 days in the rest frame (MJDs 52652--57073), especially in the blue part of the spectrum. In the lower panels, spectral decomposition is shown for the bright and dim epochs around the Balmer jump region (panels (b) and (c), respectively) and for the H$\beta$/[O\,III]/Fe\,II region (panels (d) and (e)).
Colors denote data (black), uncertainties (gray), total best-fitting model (red), old and young stellar templates (yellow, orange), power-law continuum (green), and emission lines (blue). In the dim state, the host stellar populations are highly prominent, while the broad emission lines have
nearly disappeared. SDSS-V quasar time-domain spectroscopy will systematically probe for such surprising accretion-state transitions in a far larger survey of quasars.
}
\label{fig:clagn}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{contours_l_z_log_chandra_xmm_dr14_sdss5.pdf}
\vspace{-0.4in}
\caption{\footnotesize {\bf Comparison of the X-ray luminosity and redshift coverage of AGN surveys:} The larger and denser the coverage in this parameter space, the more stringent the constraints that can be placed on the history of accretion onto SMBHs in the Universe. The central plot compares SDSS-V's {\it eROSITA} follow-up program (orange contours; $\sim$316,000 targets) and current state-of-the-art AGN surveys: SDSS-IV/SPIDERS (blue; $\sim$4600 targets) and a compilation of deep {\it Chandra} and {\it XMM-Newton} fields (black; CDFS, CDFN, COSMOS, Lockman Hole, XMM-XXL; $\sim$4000 targets). The top and right histograms show the number of AGN expected in each of these samples.
Note the logarithmic y-axes in both histograms: the {\it eROSITA} X-ray and SDSS-V BHM sample will be about 100$\times$ larger than any existing sample, spanning a wide range of redshifts and luminosities.
The pink shading at $z>4$ highlights the $\sim$10$\times$ improvement in X-ray selected AGN sample sizes at high redshift.}
\label{fig:SPIDERS-Lz}
\end{figure}
In addition, despite the high X-ray luminosity of nearly all AGN, we do not fully understand the physical origin of the tight coupling between the hot X-ray corona and the ``cold'' accretion disk. This is mostly due to the limited size of X-ray AGN samples compiled with the X-ray telescopes that have the necessary sensitivity ({\it Chandra}, {\it XMM-Newton}) but relatively small FOVs. However, the imminent {\it eROSITA} instrument \citep[extended ROentgen Survey with an Imaging Telescope Array;][]{Predehl_etal_2014} aboard the Spektr-RG mission (set to launch in 2018) combines high sensitivity with a large FOV; it will discover as many new X-ray sources in its first twelve months as are known today, after more than 50 years of X-ray astronomy. SDSS-V will provide optical spectroscopic measurements (to about $i_{AB}<21.5$), including identifications and redshifts, of $\sim$400,000 {\it eROSITA} X-ray sources detected in the first 1.5 years of the all-sky survey \citep[i.e., those to a 0.5--2 keV flux limit of $\sim$$2.5 \times 10^{-14}$ erg~s$^{-1}$~cm$^{-2}$; see Figure~\ref{fig:SPIDERS-Lz} and][]{Merloni_etal_2012}. This sample will comprise mainly quasars/AGN at high Galactic latitude, but it will also contain X-ray-emitting galaxy clusters and X-ray-bright stars (such as compact binaries and flaring late-type stars) in the MW and nearby galaxies. In addition, SDSS-V's BHM will characterize numerous serendipitous discoveries, extreme and rare objects, transients, and other peculiar variables found in the {\it eROSITA} survey \citep{Merloni_etal_2012}, and expand an optical$+$X-ray quasar sample with implications for observational cosmological constraints \citep[e.g.,][]{Risaliti_2015_quasarcosmology}.
This combination of X-ray discovery and optical characterization will provide a great leap forward in our description of the X-ray sky, and it will reveal the connections between large, statistical populations of X-ray sources and the cosmic structures in which they are embedded.
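The depth of this follow-up program can be illustrated with a short calculation: converting the quoted 0.5--2~keV flux limit into a limiting X-ray luminosity at several redshifts via $L_X = 4\pi d_L^2 f_X$. The sketch below is not SDSS-V code; it assumes a flat $\Lambda$CDM cosmology with $H_0=70$~km~s$^{-1}$~Mpc$^{-1}$ and $\Omega_m=0.3$, and ignores the spectrum-dependent K-correction:

```python
import math

def luminosity_distance_cm(z, H0=70.0, Om=0.3, n=10000):
    """Luminosity distance for a flat LCDM cosmology (trapezoid-rule integral)."""
    c_km_s = 299792.458

    def E(zp):  # dimensionless Hubble parameter H(z)/H0
        return math.sqrt(Om * (1.0 + zp)**3 + (1.0 - Om))

    dz = z / n
    integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
                   for i in range(n))
    d_c_mpc = (c_km_s / H0) * integral   # comoving distance [Mpc]
    d_l_mpc = (1.0 + z) * d_c_mpc        # luminosity distance [Mpc]
    return d_l_mpc * 3.0857e24           # Mpc -> cm

def limiting_luminosity(flux_limit, z):
    """L_X = 4 pi d_L^2 f_X, with no K-correction applied."""
    d_l = luminosity_distance_cm(z)
    return 4.0 * math.pi * d_l**2 * flux_limit

f_lim = 2.5e-14  # erg/s/cm^2: the 0.5-2 keV limit quoted in the text
for z in (0.1, 0.5, 1.0, 2.0):
    print(f"z = {z}: L_X >~ {limiting_luminosity(f_lim, z):.1e} erg/s")
```

At $z\sim1$ this gives $L_X \gtrsim 10^{44}$~erg~s$^{-1}$, consistent with the luminosity coverage shown in Figure~\ref{fig:SPIDERS-Lz}.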
\subsection{Local Volume Mapper}
\label{sec:lvm}
We have come to understand that galaxy formation must be a self-regulated process, with energy exchanged between stars and interstellar gas at many points in space and time. For instance, we know that the rate of star formation (SF) scales with the density of the gas on kiloparsec scales, but feedback and the enrichment of the ISM occur on the scale of individual stars. However, many of the observed stellar/gaseous correlations remain basically empirical --- their existence is well-established, but their physical origins are unclear.
SDSS-V's Local Volume Mapper (LVM) will take on this problem by making global ISM maps of Local Group galaxies with a resolution down to the physical scales from which the global correlations arise. Figure~\ref{fig:LVM_zoom} visualizes how the visible structure in the ISM qualitatively changes at a resolution of 25~pc, below which 50--100~pc-sized ``clouds'' can be separated into individual resolved SF knots and the filamentary structures and shock networks between them. This resolution allows diffuse gas to be cleanly separated from ionization fronts and HII regions. IFU studies of MW regions, like Orion, have resolved these structures, but only across arcminute-scale areas corresponding to a few parsecs \citep{Sanchez+2007,Weilbacher+2015,McLeod+2015}. Connecting studies across the pc (sub-GMC) and kpc (galaxy-wide) scales is fundamental to understanding the physics governing star formation, the structure and energetics of the ISM, the baryon cycle, and ultimately, the evolution of galaxies. SDSS-V's LVM will provide this high-resolution spectral mapping over large regions of multiple galactic disks, sampling the ISM across a wide range of local galactic environments.
\begin{figure}[tbh!]
\centering
\includegraphics[width=\textwidth]{lmc_scales_v2-bright.png}
\caption{
\footnotesize {\bf The dynamic range of spatial scales sampled by LVM:}
Optical (V-band) and narrow band ([O III], H$\alpha$, [S II]) imaging of the LMC \citep{Smith+2000}. The sequence starts on the left at the 100~pc resolution typically achieved by the best IFU data available today for external galaxies, and ends on the right with the resolution LVM will achieve in the Milky Way. Note the qualitative change around 25~pc, when networks of shocks and ionization fronts start to appear and then become resolved at $<$10~pc.
\label{fig:LVM_zoom}
}
\end{figure}
The LVM will provide optical IFU data able to resolve SF structures, giant molecular clouds, HII regions, and young stellar clusters. These data will span the bulk of the MW disk at 0.1--1~pc resolution, the whole LMC and SMC at 10~pc resolution, M31 and M33 at 20~pc resolution, and Local Volume galaxies at $\sim$20--100~pc resolution,\footnote{We note that the exact sample of more distant objects is not yet finalized and will depend on further tuning of the science case and survey strategy and priorities.} out to a distance of $\sim5$~Mpc (Figure~\ref{fig:lvm-overview}). This coverage will equal about one full steradian ($\sim$3,300~deg$^2$) of the sky; for comparison, the SDSS-IV MaNGA survey \citep{Bundy+2015} spans about 0.5~deg$^2$ across 10$^4$ low-redshift galaxies, and {\it all} of the single-fiber SDSS spectroscopy to date sums to only a few deg$^2$ of sky. LVM will provide as many spectra on the LMC as are contained in all of MaNGA's objects taken together ($\sim$10$^6$). The LVM spectrographs will span 3600--10000\AA\ at a resolving power of $R\sim4000$. This sprawling sky coverage will overlap with datasets providing complementary information at matched spatial resolution. Stellar spectroscopy with accurate typing and abundances from previous APOGEE observations and from SDSS-V itself (Figure~\ref{fig:lvm-overview} and Section~\ref{sec:mwm}) as well as resolved stellar photometry and CMDs (in the Magellanic Clouds, M31 \& M33) will allow us to connect the structures in the ISM to the radiation field and to {\em individual} sources of feedback. Far-IR, sub-mm, and radio surveys probing the dust and H$_2$/H~\textsc{i}\ phases of the ISM connect our observations to molecular clouds and cold gas. X-ray catalogs (e.g., from {\it eROSITA}; Section~\ref{sec:bhm}) indicate the location of X-ray binaries and other sources of interstellar ionization.
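The physical resolutions quoted above follow from small-angle geometry: the scale sampled by one spaxel is the target distance times the spaxel's angular size in radians. A quick check, using illustrative distances and spaxel sizes drawn from the instrument's 2.7$''$--37$''$ range (the exact pairing of spaxel size to target is still being optimized):

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

def physical_scale_pc(distance_pc, angle_arcsec):
    """Small-angle physical size subtended by one spaxel."""
    return distance_pc * angle_arcsec * ARCSEC_TO_RAD

# Illustrative (distance [pc], spaxel size [arcsec]) pairs:
targets = {
    'LMC (50 kpc, 37 arcsec)':       (5.0e4, 37.0),
    'M31 (785 kpc, 5 arcsec)':       (7.85e5, 5.0),
    '5 Mpc galaxy (2.7 arcsec)':     (5.0e6, 2.7),
}
for name, (d, a) in targets.items():
    print(f"{name}: {physical_scale_pc(d, a):.1f} pc per spaxel")
```

With these assumed numbers, a 37$''$ spaxel at the LMC distance samples $\sim$9~pc, and a 5$''$ spaxel at M31 samples $\sim$19~pc, matching the $\sim$10 and 20~pc resolutions stated above.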
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{LVM_overview.jpg}
\caption{\footnotesize
{\bf The LVM footprint and sampling:} In the center, the LVM survey footprint on the MW (blue) is shown on top of the SDSS-V MWM target density map (Section~\ref{sec:mwm}). Zooming into the Orion region (lower right), we see an image of ionized emission sampled at the LVM spaxel size ($<$0.1~pc) and individual stars with high-resolution SDSS-III, -IV, and/or -V spectroscopy with APOGEE in yellow. For the LMC (left/center bottom), we see a continuum $+$ ionized emission image of the 30 Doradus star forming region, sampled at the 10~pc spaxel size LVM will deliver on the Magellanic Clouds. The top panels show the LVM IFU field-of-view over a continuum image of M31. Statistical samples of H~\textsc{ii}\ regions (green) observed at 20~pc resolution across M31, and at $\sim 50$~pc resolution across other nearby galaxies, connect small scale physics and large scale galaxy evolution.}
\label{fig:lvm-overview}
\end{figure}
LVM's extremely wide-field optical spectral mapping adds critical information about the ionized ISM and integrated stellar populations to these complementary datasets, enabling a synthesis of the local $\leftrightarrow$ global physics in galaxy disks. The key science themes that will be addressed include the connection between ionized gas, star formation, and feedback on multiple physical scales; the extraction of maximum information from the union of resolved and integrated stellar population data; and the geometric structure of the ionized and dusty ISM to better understand chemical abundances and enrichment.
Within these themes, there are numerous outstanding science questions that can only be answered by wide-area, wide-wavelength surveys. These include understanding the energetics, sinks, and sources of stellar feedback; the dependence of star formation on the local galactic environment (e.g., gas density and local shear); the life cycle of GMCs and star-forming complexes; the co-evolution of stellar populations and the surrounding ISM; and the distribution of interstellar metals at high spatial resolution, including the mixing of metals produced in supernovae and AGB stars. LVM's uniquely well-sampled 2D maps of the gas-phase properties of nearby galaxies (including our own MW) will provide empirical constraints on the ISM at the ``energy injection scale''---the physical scale where stars return energy to their surroundings. These measurements are critical for constraining theoretical models of star formation and feedback, as well as radiative transfer modeling of ISM structures.
\section{The Implementation of SDSS-V}
\label{sect:SurveyImplementation}
\subsection{Dual-Hemisphere Infrastructure and Instrumentation}
\label{sec:infrastruct_instrument}
SDSS-V will operate multiple telescopes and spectrographs in two hemispheres as a single unified survey across the entire sky. At Apache Point Observatory (APO) in New Mexico sits the 2.5m Sloan Foundation Telescope \citep{Gunn_2006_sloantelescope}, the workhorse of past SDSS generations that will remain dedicated to SDSS-V. At Las Campanas Observatory (LCO) in Chile, the Carnegie Observatories' 2.5m du~Pont telescope \citep{Bowen_1973_duPontTelescope} will be dedicated to SDSS-V for $\gtrsim$300 nights/year. To enable the ultra-wide field IFS mapping, SDSS-V will also use
a suite of smaller telescopes with apertures ranging from 1~m to 16~cm, to be set up at APO and LCO.
APO and LCO will each house a set of well-established survey instruments for SDSS-V: a near-IR APOGEE spectrograph \citep[300 fibers, R$\sim$22,000, $\lambda=1.5-1.7\mu$m;][]{Wilson_2012_apogee}; a large optical spectrograph \citep[500 fibers, R$\sim$2,000, $\lambda=360-1000$nm;][]{Smee_2013_SdssBossSpectrographs}; and three medium-resolution optical spectrographs feeding a large IFU (2000 fibers, R$\sim$4,000, $\lambda=360-1000$nm). See Figure~\ref{fig:SDSSV_schematic} for a schematic layout of these instruments. For the multi-object spectroscopy used by the MWM and BHM, each 2.5m telescope will be able to observe up to 500 targets at a time, drawing on both the optical and IR multi-fiber spectrographs simultaneously. SDSS-V will rely heavily on the proven SDSS observatory infrastructure, instrumentation, and operations model, including the use of the near-IR APOGEE and optical BOSS spectrographs for the MWM and BHM. Among the infrastructure innovations, however, are a new robotic fiber positioning (RFP) system for these MOSs and an ultra-wide field IFS.
To meet the requirements for SDSS-V's rapid exposure sequences and high target densities, we will replace the current SDSS plug-plate fiber system on both telescopes with robotic positioners with 500 arms, 300 of which will hold both an optical and an IR fiber (the remaining 200 will hold just an optical fiber). This system reduces the target reconfiguration time from 20 minutes (by changing plug-plates, with the current system) to under 2 minutes. The fiber positioners can account for atmospheric refraction more easily than drilled plates can, greatly increasing the available observing window for each field and boosting the survey efficiency. Target lists can also be modified on short timescales to allow observations of transients and other targets of opportunity.
The LVM will use a new system that enables IFS over unprecedentedly large areas. It will expand upon SDSS-IV's existing MaNGA technology, producing an IFS unit based around lenslet-array coupled, tightly packed, abuttable bundles of fibers ($\sim$2000 at each site). These bundles will feed a cluster of three multi-channel spectrographs at each observatory. The LVM instrument could be fed from the Sloan 2.5m, du Pont 2.5m, or the small 1m--16cm telescopes. This flexibility leads to a cascade of possible field sizes, between 1.6$'$ and 25$'$ in diameter, with each fiber subtending between 2.7$''$ and 37$''$ on the sky. Trade studies are ongoing to optimize the exact cascade of apertures and spaxel sizes for all different LVM targets. All of these hardware developments use proven technology for rapid implementation by 2020.
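The cascade of spaxel sizes follows from plate-scale geometry: for a fixed fiber core diameter, the on-sky angle a fiber subtends is (fiber core)/(focal length), so it grows as the telescope aperture shrinks. The sketch below illustrates this scaling; the fiber core diameter and focal ratios here are placeholders chosen to bracket the quoted 2.7$''$--37$''$ range, not the final SDSS-V instrument parameters:

```python
RAD_TO_ARCSEC = 206265.0

def fiber_angle_arcsec(aperture_m, focal_ratio, fiber_core_um=180.0):
    """On-sky angle subtended by a fiber core at the telescope focal plane."""
    focal_length_m = aperture_m * focal_ratio
    return RAD_TO_ARCSEC * (fiber_core_um * 1e-6) / focal_length_m

# Same (assumed) fiber fed by telescopes of decreasing aperture;
# focal ratios are illustrative placeholders, not the actual designs:
for D, fr in [(2.5, 5.5), (1.0, 4.0), (0.16, 6.2)]:
    print(f"D = {D:5.2f} m: {fiber_angle_arcsec(D, fr):5.1f} arcsec/fiber")
```

Under these assumptions, the 2.5m telescopes deliver $\sim$2.7$''$ fibers while a 16cm feed delivers $\sim$37$''$ fibers, reproducing the sensitivity-versus-resolution trade that motivates the telescope cascade.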
\subsection{Dual-Hemisphere Observations}
The implementation of SDSS-V observations can be divided into two regimes: MOS on the 2.5-meter telescopes (MWM and BHM) and IFS, mostly on smaller telescopes (LVM). There will be small differences in the MOS observations between LCO and APO, such as plate scale \citep[e.g.,][]{Zasowski_2017_apogee2targeting}, but significant effort will be put towards making data from the two hemispheres as homogeneous as possible for the end user. Observations in both hemispheres, for the three Mappers, will run almost entirely in parallel for the duration of the whole survey.
The survey planning for MOS observations is currently based on an exposure quantum of $t_{\rm exp} \approx 15$~min (in contrast to the $\approx$1~hour exposures of SDSS-I through -IV). The details of $t_{\rm exp}$ are still undergoing study and optimization, but the motivation for rapid, short exposures is clear: they enable significantly higher survey efficiency over a wide range of target brightnesses. The RFP system will support the rapid and flexible fiber (re-)allocations necessary for this increased efficiency (see Figure~\ref{fig:SDSSV_programs} for the projected density of SDSS-V targets). Any given set of 500 MOS targets can be scheduled for a combination of optical spectroscopy (up to 500 fibers) and/or IR spectroscopy (up to 300 fibers). Repeat ``visits'' to the same field, where each ``visit'' is $t_{\rm exp}$ long, will be used to a) increase the final density of targets by observing new sources, b) boost the S/N of source spectra by adding exposure time, regardless of the observing epoch, and c) obtain time-resolved spectroscopy, where the temporal cadence matters (e.g., binary stars and quasar variability). The joint availability of these options, enabled by the RFP and the fact that both MOS instruments will share the focal plane at all times, opens up many paths to survey optimization.
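The efficiency argument for short exposures plus fast reconfiguration can be made concrete with the numbers above ($t_{\rm exp}\approx 15$~min, and reconfiguration time dropping from $\sim$20 to under 2 minutes with the RFP; Section~\ref{sec:infrastruct_instrument}). A back-of-the-envelope sketch that ignores slewing, readout, and calibration overheads:

```python
def open_shutter_fraction(t_exp_min, t_config_min):
    """Fraction of wall-clock time spent exposing, per visit cycle."""
    return t_exp_min / (t_exp_min + t_config_min)

t_exp = 15.0  # exposure quantum [min], per the text
for label, t_cfg in [("plug plates (~20 min)", 20.0), ("RFP (<2 min)", 2.0)]:
    frac = open_shutter_fraction(t_exp, t_cfg)
    print(f"{label}: open-shutter fraction = {frac:.0%}")
```

With plug plates, a 15-minute visit cycle would spend only $\sim$43\% of the time exposing; the RFP raises this to $\sim$88\%, roughly doubling the on-sky efficiency for short visits.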
As described in Section~\ref{sec:infrastruct_instrument}, the IFS observations of LVM will span a range of field sizes and plate scales (up to $37^{\prime\prime}$ per spaxel), allowing flexible optimization of sensitivity versus spatial resolution. Since these observations depend largely on the smaller SDSS-V telescopes, and use a separate cluster of spectrographs, LVM will generally operate concurrently with BHM and MWM in both hemispheres. The large-area components, such as the mapping of the MW disk and Magellanic Clouds, will be on the smallest telescopes and will be as automated as possible. The calibration scheme for the MW survey will be different from other IFS surveys, including the other LVM components, because the large spatial extent of the targets covers the whole telescope FOV and prevents simultaneous observations of sky and spectrophotometric standards. A novel calibration strategy using simultaneous observations of calibration fields with a dedicated calibration telescope will be adopted, the details of which are still being finalized.
\subsection{What sets SDSS-V Apart? The 2020 Landscape of Spectroscopic Surveys}
SDSS-V's potential for multiplying the science to come from the {\it Kepler}, {\it Gaia}, {\it TESS}, and {\it eROSITA} datasets is apparent from these missions' summaries in Table~\ref{tab:space_missions}. The magnitude ranges of {\it Kepler} and {\it TESS} are very well matched to SDSS-V's IR/APOGEE spectroscopy, with the MWM providing precise stellar parameters, abundances, and multi-epoch velocities (to $\sim$30~m~s$^{-1}$) for their targets.
SDSS-V's optical/BOSS spectroscopy is well matched to the magnitude range of unextinguished {\it Gaia} sources, providing basic spectroscopic characterization at the faint end and excellent characterization at the bright end. The high optical extinction for most sources in the GGS program at low Galactic latitudes means that most of the spectroscopic GGS targets ($H<11$) span the same optical magnitude range ($14<G<19$) as most of {\it Gaia}'s sources with usefully precise astrometry.
While a good fraction of {\it eROSITA}'s detections made later in its multi-epoch X-ray survey will be followed up by 4MOST, SDSS-V will perform the first uniform optical MOS follow-up of hundreds of thousands of X-ray sources discovered in the first 1.5 years of {\it eROSITA}.
\begin{table}[bh!]
\scalebox{0.9}{
\begin{tabular}{ |p{2cm}||p{7.9cm}|p{2cm}|p{4.4cm}| }
\hline
\multicolumn{4}{|c|}{{\bf Wide-field Space Missions Enhanced by SDSS-V}} \\
\hline
Mission & Science Goals / Data Products & Timeframe & Primary Mag Range\\
\hline
\hline
\multirow{2}{*}{{\it Kepler/K2}} & (transiting) exoplanets \& stellar astrophysics
\newline seismology from precision lightcurves & \multirow{2}{*}{2009--2018} & \multirow{2}{*}{$m_{V}\sim $ 7--17; selected fields} \\
\hline
\multirow{2}{*}{{\it Gaia}} & positions, distances, motions from astrometry; \newline basic stellar parameters & \multirow{2}{*}{2013--2020} & \multirow{2}{*}{$m_{G}\sim $7--19; all-sky} \\
\hline
\multirow{2}{*}{{\it TESS}} & (transiting) exoplanets \& stellar astrophysics
\newline seismology from precision lightcurves & \multirow{2}{*}{2018--2022} & \multirow{2}{*}{$m_{i}\sim $ 8--14; $\sim$all-sky} \\
\hline
\multirow{2}{*}{{\it eROSITA}} & \multirow{2}{*}{X-ray fluxes \& spectra} & \multirow{2}{*}{2018--2022} & $f_{\rm 0.5-2keV} >10^{-14}$ \newline erg~s$^{-1}$~cm$^{-2}$; $\sim$all-sky \\
\hline
\end{tabular}
}
\caption{\footnotesize\linespread{1.2}{Surveys by (mostly imaging) space missions in the 2020--2025 timeframe}}
\label{tab:space_missions}
\end{table}
SDSS-V is different from the array of current or imminent ground-based MOS surveys (Table~\ref{tab:spectroscopic_surveys}), enhancing them by opening up a unique part of ``survey'' parameter space.
This bold claim requires quantification and discussion. First, we note that SDSS-V is not the optimal survey facility for science with sources that are faint, are not very red (intrinsically or because of low extinction), and do not call for multi-epoch or all-sky observations.
Such science includes much of cosmology, large-scale structure, high-redshift galaxy evolution, and detailed studies of substructure in the Galactic halo. This science is at the heart of many of the surveys in
Table~\ref{tab:spectroscopic_surveys}, whose outstanding design makes them better able to serve these science goals than SDSS-V can.
But when it comes to the scientific heart of SDSS-V's multi-object spectroscopy --- all-sky, multi-epoch, dust-penetrating near-IR spectroscopy of many millions of targets --- Table~\ref{tab:spectroscopic_surveys} shows that SDSS-V has a clearly unique place: it is the only survey to come close to full 4$\pi$ sky coverage; only it will cover the entire inner Galaxy at low latitudes, where most Galactic stars live;
and only it will have comprehensive multi-epoch observations across the sky. For many surveys, the science goals
argue against ``covering the sky'' with short exposures.
While {\it Gaia} has a spectroscopic survey component (with its RVS instrument), these data have relatively small spectral coverage (only 10\% of the spectral resolution elements of each of SDSS-V's spectrographs), and its flux limit for high-quality spectra will be a factor of 100 brighter than that of SDSS-V's optical stellar spectra. The WEAVE survey, starting a couple of years before SDSS-V, will come closest to SDSS-V's ambition and will form a crucial benchmark for SDSS-V to build upon by, e.g., providing a well-established ``ground truth'' in small portions of the Galactic plane. Starting toward the anticipated end of SDSS-V in 2024, 4MOST will provide important complementary data in the form of deeper optical spectra, including a focus on the inner Galactic disk and bulge. These facility comparisons show that SDSS-V will neither compete with nor follow these surveys: it will focus on exploring a new and different regime.
\begin{table}[h]
\scalebox{0.74}{
\begin{tabular}{|p{2.8cm}||p{1.7cm}|p{1.6cm}|p{1.2cm}|p{2.3cm}|p{2.9cm}|p{1.75cm}|p{2cm}|p{1.8cm}|}
\hline
\multicolumn{9}{|c|}{{\bf Spectroscopic Survey Facilities by 2020--2025}} \\
\hline
Survey (facility)&$N_{\rm target}$ & $R_{\rm spectra}$ & $N_{\rm multi}$ & $\lambda[\mu m]$ & $\Omega_{\rm sky}$ & $N_{\rm epoch}$ & Timeframe & $m_{\rm primary}$\\
\hline
\hline
\multirow{2}{*}{{\bf SDSS-V}} & \multirow{2}{*}{$7\times 10^6$} & {\bf 22,000\newline 2,000} & \multirow{2}{*}{{\bf 500}} & {\bf 1.51--1.7 \newline 0.37--1.0} & \multirow{2}{*}{{\bf $4\pi$}} & \multirow{2}{*}{{\bf 1--174}} & \multirow{2}{*}{{\bf 2020--2024}} & {\bf $m_H \lesssim 13.4$ \newline $m_i \lesssim 20$} \\
\hline
\hline
Gaia (RVS) & $8\times 10^6$ & 11,000 & --- & 0.85--0.87 & $4\pi$ & $\sim$60 & 2013--2020 &$m_G \lesssim 12$ \\
\hline
Gaia-ESO & $0.1\times 10^6$ & $17,000$ & 140 & 0.55 \& 0.85 & $0.02\pi$ & $\sim$1 & 2013--2018 & $m_G\lesssim 17$ \\
\hline
GALAH & $0.8\times 10^6$ & $28,000$ & 400 & 0.40--0.85 & $\pi$, $|b|\ge 10$ & $\sim$1 & 2015--2020 & $m_G \lesssim 13$\\
\hline
\multirow{2}{*}{WEAVE} & \multirow{2}{*}{$0.8\times 10^6$} & $5,000 \newline 20,000$ & \multirow{2}{*}{1000} & \multirow{2}{*}{0.37--0.9} & \multirow{2}{*}{$\sim$$\pi$} & \multirow{2}{*}{$\sim$1--2} & \multirow{2}{*}{2018--2023} & \multirow{2}{*}{$m_G \lesssim 19$} \\
\hline
DESI & $4\times 10^7$ & 3,000 & 5000 & 0.36--0.98 & 1.35$\pi$, $|b|\ge 25^\circ$ & 1--4 & 2019--2024 & $m_r \lesssim 23$ \\
\hline
LAMOST & $8\times 10^6$ & 1,800 & 4000 & 0.4--0.9 & $0.5\pi$ & $\sim$1 & 2010--2020 & $m_G \lesssim 16$ \\
\hline
\multirow{2}{*}{4MOST} & \multirow{2}{*}{$10\times 10^6$} & 5,000 \newline 20,000 & 1600 \newline 800 & \multirow{2}{*}{0.4--0.9} & \multirow{2}{*}{$1.5\pi$} & \multirow{2}{*}{1--2}& \multirow{2}{*}{2023--2028} & $m_r \lesssim 22$ \newline $m_V \lesssim 16$ \\
\hline
APOGEE-1\& 2 & $5\times 10^5$ & $22,000$ & 300 & 1.51--1.7 & $0.5\pi$ & $\sim$1--30 & 2011--2019 & $m_H \lesssim 12.2$ \\
\hline
PFS & $1\times 10^6$ & 3,000 & 2400 & 0.4--1.6 & $0.05\pi$ & $1$ & 2018--2021 & $m_i \lesssim 23$ \\
\hline
\multirow{2}{*}{MOONS} & \multirow{2}{*}{$2\times 10^6$} & $5,000 \newline 20,000$ & \multirow{2}{*}{1000} & \multirow{2}{*}{0.6--1.8} & \multirow{2}{*}{$0.05\pi$} & \multirow{2}{*}{1} & \multirow{2}{*}{2020--2025} & $m_g \lesssim 22$\newline $m_H \lesssim 17$\\
\hline
\hline
\end{tabular}
}
\caption{\footnotesize\linespread{1.2}{Surveys by spectroscopic facilities in the 2020--2025 timeframe. For each survey, the columns show the anticipated number of unique targets ($N_{\rm target}$), spectral resolution ($R_{\rm spectra}$), multiplex number of the MOS ($N_{\rm multi}$), wavelength range ($\lambda[\mu m]$), sky coverage ($\Omega_{\rm sky}$), number of observing epochs ($N_{\rm epoch}$), timescale for the survey, and approximate target magnitude range ($m_{\rm primary}$). We note that these parameters, especially $m_{\rm primary}$, for many of the surveys are approximate, based on current public documentation and comparable survey subprograms.}}
\label{tab:spectroscopic_surveys}
\end{table}
\subsection{Data, Collaboration and Science: the SDSS Legacy}
SDSS-V aims to be the successor to the highly successful SDSS I--IV.\footnote{\href{https://www.sdss.org}{www.sdss.org}} The main science programs that we have outlined here reflect the result of a nearly two-year process of open discussion to identify science directions where 2m-class (or smaller) spectroscopic telescopes can produce transformational astrophysics. The science foci that this process identified offer enormous promise, even as they signal a re-weighting of the cosmology and large-scale structure programs that have heavily influenced the SDSS framework for the better part of 20 years. This evolution makes sense: the frontier in those particular fields has moved to larger telescopes and other wavelengths, while stellar and galactic astrophysics have been rejuvenated, following, e.g., the {\it Kepler} and (anticipated) {\it TESS} space photometry revolution.
While in its astrophysical core goals the transition to SDSS-V constitutes a larger change than the gradual science evolution of SDSS I$\rightarrow$IV, SDSS-V will continue in the scientific and collaborative spirit that has made SDSS one of the leading international astronomical collaborations. These basic operational tenets of our consortium of foundations, institutions, and individuals include: fully open collaboration within the consortium, with no ``reserved'' data or topics for individuals or sub-groups; inclusive co-authorship policies; high priority given to creating well-calibrated and well-documented data sets, along with their regular release to the global science community; an inclusive, collaborative climate that allows junior researchers to carry out high-impact SDSS science and build their careers; a dedication to student research, education, and public outreach; and an organically evolving consortium, with continual opportunities to evolve the science and survey strategy.
SDSS's success and influence in the astronomical community
reflect both the high quality and broad diversity of its data and the exemplary spirit and functioning of its collaboration. SDSS-V is committed to continuing these traditions while making strides to further improve upon the supportive environment and the diversity and inclusiveness of leadership roles within its organizational structure.
\section{The Path to SDSS-V: Project Development and Collaboration Building}
\label{sect:DevelopmentCollaboration}
In the tradition of its predecessors, SDSS-V is an ambitious survey with an ambitious timeline for construction and implementation. Our goal is to begin operations at both APO and LCO with our new RFP system and IFS instrumentation (Figure~\ref{fig:SDSSV_schematic}) at the end of SDSS-IV, currently planned for mid-2020. We are planning a five-year survey, pending fund-raising success. In practical terms, this requires: 1) construction of two RFP systems to feed the BOSS and APOGEE spectrographs; 2) construction of two large IFS bundles and their marriage to six spectrograph banks; and 3) refining and finalizing our survey strategy and associated pipeline enhancements. To meet this demanding schedule, we focus on known technological solutions (rather than designing new systems) for much of the hardware described above.
Equally critical to the success of SDSS-V is the building of the SDSS-V scientific collaboration --- a world-wide network of institutions working together towards the goals described here. The extraordinary science outlined above {\it simply does not happen without the participation and support of institutional partners}. There are over 50 current partners in SDSS-IV, ranging from associate institutions (with a restricted number of individual participant slots) to full institutional members. As of the writing of this document, SDSS-V has early participation from 19 institutional partners, including full membership from The Carnegie Observatories, MPIA, MPE, and the University of Utah. Full membership affords every member of an institution full data rights to the survey data and grants the institution a vote on the Advisory Council, the high-level body responsible for decision-making in the survey. The cost of full membership in SDSS-V is \$1.15M~USD. SDSS-V members (at all levels of membership) enjoy proprietary access to all survey data for a period of two years. In addition to this access, members are crucial in the survey planning and have access to the planet-wide collaboration network.
\section{SDSS-V: A Pathfinder for Panoptic Spectroscopy}
\label{sec:future}
Several aspects of SDSS-V position it to pioneer high-quality panoptic spectroscopy: industrial-scale, all-sky, multi-epoch spectroscopy, with $\Delta t_{\rm epoch}$ ranging from 20 minutes to 20 years; contiguous wide-field IR spectroscopy to map the obscured Galaxy; optical IFS that covers an appreciable fraction of the sky, etc. SDSS-V is also an important step on the path towards a spectroscopic counterpart to LSST. Such a facility has been suggested and requested in numerous reports \citep[e.g.,][]{Najita_2016_maximizeLSST,Ellis_2017_spectroLSST}. This future system would have the light-gathering power of 8m wide-field telescopes to reach many magnitudes deeper, thousands of fibers to survey the fainter sources revealed by large telescopes, and a surveying speed and operations model that allows systematic, all-sky time-domain spectroscopy not only of variable sources, but also of transient ones. While SDSS-V is clearly focused on bright sources, we will work --- both in the preparation and the implementation of SDSS-V --- towards enabling target-of-opportunity spectroscopy of transient phenomena. The lessons learned from this pioneering survey will prove invaluable in planning for the scientific merits and requirements for a spectroscopic counterpart to LSST.
\bibliographystyle{yahapj}
\section{Introduction}
Relatively pseudocomplemented lattices, often called Heyting algebras (see e.g.\ \cite I and \cite{K80}) or Brouwerian lattices (see e.g.\ \cite{K81}), arise from intuitionistic logic and were first investigated by T.~Skolem about 1920, see also \cite F and \cite{Ba}. For a detailed development see e.g.\ \cite{Cu}. Within this context, the relative pseudocomplement $x*y$ of $x$ with respect to $y$ is usually considered as intuitionistic implication, see e.g.\ \cite N or \cite{Cu}.
Hence, in relatively pseudocomplemented lattices we define
\[
x\rightarrow y:= x*y.
\]
It is well-known that every relatively pseudocomplemented lattice is distributive. To extend investigations of intuitionistic logic to the non-distributive case, the first author introduced so-called {\em sectionally pseudocomplemented lattices}, see \cite{Ch} and \cite{CR}. These are lattices with a top element where for every element $y$ and every element $x$ in the interval (so-called {\em section}) $[y,1]$ there exists a pseudocomplement $x^y$ of $x$ in the section $[y,1]$. Putting
\begin{equation}\label{equ1}
x\rightarrow y:=(x\vee y)^y
\end{equation}
the situation becomes formally analogous to the case of relatively pseudocomplemented lattices. For the typical case, consider the lattice depicted in Figure~1:
\vspace*{-2mm}
\begin{center}
\setlength{\unitlength}{7mm}
\begin{picture}(6,8)
\put(3,1){\circle*{.3}}
\put(5,3){\circle*{.3}}
\put(1,4){\circle*{.3}}
\put(5,5){\circle*{.3}}
\put(3,7){\circle*{.3}}
\put(3,1){\line(-2,3)2}
\put(3,1){\line(1,1)2}
\put(3,7){\line(-2,-3)2}
\put(3,7){\line(1,-1)2}
\put(5,3){\line(0,1)2}
\put(2.85,.25){$0$}
\put(5.4,2.85){$a$}
\put(.3,3.85){$b$}
\put(5.4,4.85){$c$}
\put(2.85,7.4){$1$}
\put(2.2,-.75){{\rm Fig.~1}}
\end{picture}
\end{center}
\vspace*{4mm}
It is evident that this lattice has pseudocomplemented sections, but the lattice is neither relatively pseudocomplemented (since the relative pseudocomplement of $c$ with respect to $a$ does not exist) nor distributive.
The operation tables for $x^y$ and $\rightarrow$ look as follows:
\[
\begin{array}{c|ccccc}
x^y & 0 & a & b & c & 1 \\
\hline
0 & 1 & - & - & - & - \\
a & b & 1 & - & - & - \\
b & c & - & 1 & - & - \\
c & b & a & - & 1 & - \\
1 & 0 & a & b & c & 1
\end{array}
\quad\quad\quad
\begin{array}{c|ccccc}
\rightarrow & 0 & a & b & c & 1 \\
\hline
0 & 1 & 1 & 1 & 1 & 1 \\
a & b & 1 & b & 1 & 1 \\
b & c & a & 1 & c & 1 \\
c & b & a & b & 1 & 1 \\
1 & 0 & a & b & c & 1.
\end{array}
\]
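These tables can be reproduced mechanically. The following Python sketch (our illustration, not part of the formal development; the encoding by down-sets and all helper names are ours) computes $x^y$ and $x\rightarrow y=(x\vee y)^y$ for the lattice of Figure~1.

```python
# Pentagon of Fig. 1, encoded by down-sets: down[x] = {z : z <= x}.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'b'},
        'c': {'0', 'a', 'c'}, '1': {'0', 'a', 'b', 'c', '1'}}
P = set(down)

def L(A):   # lower cone of a subset A
    return set.intersection(*(down[a] for a in A))

def U(A):   # upper cone of a subset A
    return {x for x in P if A <= down[x]}

def Max(S): # maximal elements of S
    return {x for x in S if all(y == x or x not in down[y] for y in S)}

def sec_pc(x, y):
    """Pseudocomplement x^y of x in the section [y, 1] (assumed to exist)."""
    section = U({y})
    greatest = Max({z for z in section if L({x, z}) & section == {y}})
    assert len(greatest) == 1   # the greatest candidate exists
    return next(iter(greatest))

def join(x, y):  # x v y (assumed to exist)
    return next(z for z in U({x, y}) if U({x, y}) <= U({z}))

def arrow(x, y):  # x -> y := (x v y)^y
    return sec_pc(join(x, y), y)
```

For instance, `sec_pc('c', '0')` yields `'b'` and `arrow('b', 'c')` yields `'c'`, matching the corresponding table entries.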
The notion of relatively pseudocomplemented lattices was extended to posets, see e.g.\ \cite{CLP}. It is useful when a reduct of intuitionistic logic is considered where one studies only the connective implication but not other connectives like disjunction or conjunction. Let us note that in intuitionistic logic, the connectives implication, conjunction and disjunction are independent.
\section{Posets with pseudocomplemented sections}
To extend our study also to (not necessarily relatively pseudocomplemented) posets with pseudocomplemented sections, let us introduce several necessary concepts.
Let $(P,\leq)$ be a poset, $a,b\in P$ and $A,B\subseteq P$. We say $A\leq B$ if $x\leq y$ for all $x\in A$ and $y\in B$. Instead of $\{a\}\leq\{b\}$, $\{a\}\leq B$ and $A\leq\{b\}$ we simply write $a\leq b$, $a\leq B$ and $A\leq b$, respectively. Analogously we proceed with the relational symbols $<$, $>$ and $\geq$. Denote by
\[
L(A):=\{x\in P\mid x\leq A\}\text{ and }U(A):=\{x\in P\mid A\leq x\}
\]
the so-called {\em lower} and {\em upper cone} of $A$, respectively. Instead of $L(\{a\})$, $L(\{a,b\})$, $L(A\cup\{a\})$, $L(A\cup B)$ and $L\big(U(A)\big)$ we simply write $L(a)$, $L(a,b)$, $L(A,a)$, $L(A,B)$ and $LU(A)$, respectively. Analogously, we proceed in similar cases. Denote the set of all minimal and maximal elements of $A$ by $\Min A$ and $\Max A$, respectively.
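For a finite poset these cones and the operators $\Min$ and $\Max$ are directly computable. The following minimal Python sketch (ours, purely illustrative) encodes the lattice of Figure~1 by its down-sets:

```python
# Cones and Min/Max on the lattice of Fig. 1; down[x] = {z : z <= x}.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'b'},
        'c': {'0', 'a', 'c'}, '1': {'0', 'a', 'b', 'c', '1'}}
P = set(down)

def L(A):   # L(A): elements below every member of A
    return set.intersection(*(down[a] for a in A)) if A else set(P)

def U(A):   # U(A): elements above every member of A
    return {x for x in P if A <= down[x]}

def Min(S): # minimal elements of S
    return {x for x in S if all(y == x or y not in down[x] for y in S)}

def Max(S): # maximal elements of S
    return {x for x in S if all(y == x or x not in down[y] for y in S)}
```

Here `L({'b', 'c'})` is `{'0'}`, `U({'a', 'b'})` is `{'1'}` and `Min({'a', 'b', 'c'})` is `{'a', 'b'}`.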
Recall that a {\em pseudocomplemented poset} is an ordered quadruple $(P,\leq,{}^*,0)$ where $(P,\leq,0)$ is a poset with bottom element $0$ and $^*$ is a unary operation on $P$ such that for all $x\in P$, $x^*$ is the greatest element of $(P,\leq)$ satisfying $L(x,x^*)=0$. (Here and in the following, we often identify singletons with their unique element.) This means that $x\wedge x^*$ exists for each $x\in P$ and $x\wedge x^*=0$.
Let us mention that in every logic, classical or non-classical, the logical connective implication plays a prominent role. The reason is that implication enables logical deduction, i.e.\ the derivation of new propositions from given ones. When studying a logic based on a poset, one cannot expect the result of implication to be uniquely determined. This means that the result of the implication $x\rightarrow y$ for given elements $x$ and $y$ of a given poset $P$ will be a subset of $P$, not necessarily a singleton. This is the reason why we call such an implication ``unsharp''. On the other hand, we require such an unsharp implication to satisfy the rules and properties usually asked of an implication and, moreover, the results of our implication should be as high as possible. We introduce such an unsharp implication in the next section. In Proposition~\ref{prop1} we show that our implication satisfies properties similar to those satisfied by the standard implication. We also show that the values of the results of our implication are usually higher than those of the implication of intuitionistic logic based on relative pseudocomplementation. In the last section we introduce an unsharp connective conjunction which is connected with our implication via a certain kind of adjointness.
\begin{definition}
A {\em finite poset with pseudocomplemented sections} is an ordered quadruple $\big(P,\leq,(^y;y\in P),1\big)$ where $(P,\leq,1)$ is a finite poset with top element $1$ and for every $y\in P$, $([y,1],\leq,{}^y,y)$ is a pseudocomplemented poset. For every $y\in P$ and every subset $B$ of $[y,1]$ put $B^y:=\{b^y\mid b\in B\}$. Finally, for all $x,y\in P$ define the implication $x\rightarrow y$ as follows:
\[
x\rightarrow y:=\big(\Min U(x,y)\big)^y.
\]
A {\em finite poset with $0$ and pseudocomplemented sections} is an ordered quintuple $\big(P,\leq,(^y;y\in P),0,1\big)$ where $\big(P,\leq,(^y;y\in P),1\big)$ is a finite poset with pseudocomplemented sections and $0$ is the bottom element of $(P,\leq)$.
\end{definition}
Observe that because of $1\in U(x,y)$ we have $\Min U(x,y)\neq\emptyset$.
\begin{remark}\label{rem1}
If $\big(P,\leq,(^y;y\in P),1\big)$ is a finite poset with pseudocomplemented sections, $b\in P$ and $a\in[b,1]$ then
\[
a^b=\max\{x\in P\mid L(a,x)\cap[b,1]=b\}.
\]
\end{remark}
Hence, in general, $\rightarrow$ is not a binary operation on $P$ but an operator assigning to each element of $P^2$ a non-empty subset of $P$. The almost obvious relationship between the sectional pseudocomplementation and the operator $\rightarrow$ is as follows.
\begin{lemma}\label{lem1}
Let $\big(P,\leq,(^y;y\in P),1\big)$ be a finite poset with pseudocomplemented sections and $a,b\in P$. Then the following hold:
\begin{enumerate}[{\rm(i)}]
\item If $a\vee b$ exists in $(P,\leq)$ then $a\rightarrow b=(a\vee b)^b$,
\item if $b\leq a$ then $a\rightarrow b=a^b$.
\end{enumerate}
\end{lemma}
\begin{proof}
\
\begin{enumerate}[(i)]
\item If $a\vee b$ exists in $(P,\leq)$ then
\[
a\rightarrow b=\big(\Min U(a,b)\big)^b=\big(\Min U(a\vee b)\big)^b=(a\vee b)^b.
\]
\item if $b\leq a$ then because of (i) we have $a\rightarrow b=(a\vee b)^b=a^b$.
\end{enumerate}
\end{proof}
In what follows we list some elementary but important properties of this implication. These are analogous to known properties of the implication in classical and non-classical propositional calculus.
\begin{proposition}\label{prop1}
Let $\big(P,\leq,(^y;y\in P),1\big)$ be a finite poset with pseudocomplemented sections and $a,b\in P$. Then the following hold:
\begin{enumerate}[{\rm(i)}]
\item $a\leq b$ if and only if $a\rightarrow b=1$,
\item if $a\vee b$ exists then $(a\vee b)\rightarrow b=a\rightarrow b$,
\item $1\rightarrow a=a$,
\item $a\leq b\rightarrow a$,
\item $a\rightarrow(b\rightarrow a)=1$.
\end{enumerate}
\end{proposition}
\begin{proof}
\
\begin{enumerate}[(i)]
\item The following are equivalent:
\begin{align*}
a & \leq b, \\
\Min U(a,b) & =b, \\
x & =b\text{ for all }x\in\Min U(a,b), \\
x^b & =1\text{ for all }x\in\Min U(a,b), \\
\big(\Min U(a,b)\big)^b & =1, \\
a\rightarrow b & =1,
\end{align*}
\item if $a\vee b$ exists then
\[
(a\vee b)\rightarrow b=\big(\Min U(a\vee b,b)\big)^b=\big(\Min U(a\vee b)\big)^b=\big(\Min U(a,b)\big)^b=a\rightarrow b,
\]
\item
\[
1\rightarrow a=\big(\Min U(1,a)\big)^a=\big(\Min U(1)\big)^a=1^a=a,
\]
\item $a\leq\big(\Min U(b,a)\big)^a=b\rightarrow a$,
\item this follows from (iii) and from (i) of Lemma~\ref{lem1}.
\end{enumerate}
\end{proof}
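Properties (i), (iii) and (iv) can also be confirmed by brute force on a concrete example. The Python sketch below (ours; it presupposes nothing beyond the definitions above) checks them over all pairs of elements of the lattice of Figure~1.

```python
# Brute-force check of (i), (iii), (iv) on the pentagon of Fig. 1.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'b'},
        'c': {'0', 'a', 'c'}, '1': {'0', 'a', 'b', 'c', '1'}}
P = set(down)

def leq(x, y):
    return x in down[y]

def L(A):
    return set.intersection(*(down[a] for a in A))

def U(A):
    return {x for x in P if A <= down[x]}

def Min(S):
    return {x for x in S if all(y == x or y not in down[x] for y in S)}

def Max(S):
    return {x for x in S if all(y == x or x not in down[y] for y in S)}

def sec_pc(x, y):  # pseudocomplement of x in the section [y, 1]
    section = U({y})
    greatest = Max({z for z in section if L({x, z}) & section == {y}})
    assert len(greatest) == 1
    return next(iter(greatest))

def arrow(x, y):   # x -> y := (Min U(x, y))^y, a subset of P
    return {sec_pc(m, y) for m in Min(U({x, y}))}

def check_proposition():
    for a in P:
        assert arrow('1', a) == {a}                         # (iii)
        for b in P:
            assert leq(a, b) == (arrow(a, b) == {'1'})      # (i)
            assert all(leq(a, t) for t in arrow(b, a))      # (iv)
    return True
```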
The next result shows that under appropriate assumptions our unsharp implication satisfies important properties already known from standard implication.
\begin{proposition}
Let $\big(P,\leq,(^y;y\in P),1\big)$ be a finite poset with pseudocomplemented sections and $a,b,c\in P$. Then the following hold:
\begin{enumerate}[{\rm(i)}]
\item If $a\leq b$ and $a\vee c$ exists in $(P,\leq)$ then $b\rightarrow c\leq a\rightarrow c$,
\item if $a\vee b$ exists in $(P,\leq)$ then $a\leq(a\rightarrow b)\rightarrow b$,
\item if $a\vee b$ exists in $(P,\leq)$ then $a\rightarrow b=\big((a\rightarrow b)\rightarrow b\big)\rightarrow b$.
\end{enumerate}
\end{proposition}
\begin{proof}
\
\begin{enumerate}[(i)]
\item Since $a\vee c$ exists in $(P,\leq)$, we have $a\rightarrow c=(a\vee c)^c$ according to (i) of Lemma~\ref{lem1}. Now, by (P2), every one of the following assertions implies the next one:
\begin{align*}
a & \leq b, \\
U(b,c) & \subseteq U(a,c), \\
\Min U(b,c) & \subseteq U(a\vee c), \\
a\vee c & \leq x\text{ for all }x\in\Min U(b,c), \\
x^c & \leq(a\vee c)^c=a\rightarrow c\text{ for all }x\in\Min U(b,c), \\
b\rightarrow c & =\big(\Min U(b,c)\big)^c\leq a\rightarrow c
\end{align*}
\item Because of (P3) and (i) and (ii) of Lemma~\ref{lem1} we have
\[
a\leq a\vee b\leq\big((a\vee b)^b\big)^b=(a\rightarrow b)\rightarrow b.
\]
\item Because of (i) of Lemma~\ref{lem1}, (P4) and (ii) of Lemma~\ref{lem1} we have
\[
a\rightarrow b=(a\vee b)^b=\Big(\big((a\vee b)^b\big)^b\Big)^b=\big((a\rightarrow b)\rightarrow b\big)\rightarrow b.
\]
\end{enumerate}
\end{proof}
Let $\mathbf P=(P,\leq)$ be a poset and $a,b\in P$. Recall the following definitions.
\begin{itemize}
\item The greatest element $x$ of $P$ satisfying $L(a,x)\subseteq L(b)$ is called the {\em relative pseudocomplement} $a*b$ of $a$ with respect to $b$. The poset $\mathbf P$ is called {\em relatively pseudocomplemented} if any two elements of $P$ have a relative pseudocomplement, see \cite{CLP} and \cite V.
\item The greatest element $x$ of $P$ satisfying $L(U(a,b),x)=L(b)$ is called the {\em sectional pseudocomplement} $a\circ b$ of $a$ with respect to $b$. The poset $\mathbf P$ is called {\em sectionally pseudocomplemented} if any two elements of $P$ have a sectional pseudocomplement.
\end{itemize}
\begin{remark}
Let $(P,\leq,1)$ be a poset with top element $1$ and $a,b\in P$ with $b\leq a$. Further assume that the sectional pseudocomplement $a\circ b$ of $a$ with respect to $b$ and the pseudocomplement $a^b$ of $a$ in $[b,1]$ exist. Then $a\circ b\leq a^b$.
\end{remark}
\begin{proof}
Since $b\in L(b)=L\big(U(a,b),a\circ b\big)$, we have $b\leq a\circ b$. Moreover,
\[
L(a,a\circ b)\cap[b,1]=L\big(U(a),a\circ b\big)\cap[b,1]=L\big(U(a,b),a\circ b\big)\cap[b,1]=L(b)\cap[b,1]=\{b\}.
\]
Hence $a\circ b\leq a^b$.
\end{proof}
Let us note that the sectional pseudocomplement is not the same as the pseudocomplement in the corresponding section. For example, consider the poset depicted in Fig.~2. Then $a\notin[b,1]$. Thus the pseudocomplement of $a$ in the section $[b,1]$, i.e.\ $a^b$, does not exist. On the other hand, the sectional pseudocomplement $a\circ b$ of $a$ with respect to $b$ exists and is equal to $b$ because $b$ is the greatest element $x$ satisfying $L\big(U(a,b),x\big)=L(b)$ since $U(a,b)=\{c,d,1\}$. It is worth noticing that $a\circ b$ differs from our unsharp implication $a\rightarrow b$ because $a\rightarrow b=\{c,d\}$.
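The difference between $a\circ b$ and $a\rightarrow b$ on this poset can be verified mechanically. In the following Python sketch (ours, not part of the paper) both operators are computed directly from their definitions, using the poset of Figure~2 encoded by down-sets.

```python
# Poset of Fig. 2: 0 < a, b with a, b both below c and d, and c, d < 1.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'b'},
        'c': {'0', 'a', 'b', 'c'}, 'd': {'0', 'a', 'b', 'd'},
        '1': {'0', 'a', 'b', 'c', 'd', '1'}}
P = set(down)

def L(A):
    return set.intersection(*(down[a] for a in A))

def U(A):
    return {x for x in P if A <= down[x]}

def Min(S):
    return {x for x in S if all(y == x or y not in down[x] for y in S)}

def Max(S):
    return {x for x in S if all(y == x or x not in down[y] for y in S)}

def circ(x, y):
    """Sectional pseudocomplement x o y: greatest z with L(U(x, y), z) = L(y)."""
    greatest = Max({z for z in P if L(U({x, y})) & down[z] == down[y]})
    assert len(greatest) == 1
    return next(iter(greatest))

def sec_pc(x, y):  # pseudocomplement of x in the section [y, 1]
    section = U({y})
    greatest = Max({z for z in section if L({x, z}) & section == {y}})
    assert len(greatest) == 1
    return next(iter(greatest))

def arrow(x, y):   # unsharp implication (Min U(x, y))^y
    return {sec_pc(m, y) for m in Min(U({x, y}))}
```

Here `circ('a', 'b')` returns `'b'`, while `arrow('a', 'b')` returns `{'c', 'd'}`, as claimed.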
\begin{example}\label{ex2}
The poset shown in Figure~2:
\vspace*{-2mm}
\begin{center}
\setlength{\unitlength}{7mm}
\begin{picture}(6,10)
\put(3,1){\circle*{.3}}
\put(1,3){\circle*{.3}}
\put(5,3){\circle*{.3}}
\put(1,7){\circle*{.3}}
\put(5,7){\circle*{.3}}
\put(3,9){\circle*{.3}}
\put(1,3){\line(1,-1)2}
\put(1,3){\line(1,1)4}
\put(1,3){\line(0,1)4}
\put(5,3){\line(-1,-1)2}
\put(5,3){\line(-1,1)4}
\put(5,3){\line(0,1)4}
\put(3,9){\line(-1,-1)2}
\put(3,9){\line(1,-1)2}
\put(2.85,.25){$0$}
\put(.3,2.85){$a$}
\put(5.4,2.85){$b$}
\put(.3,6.85){$c$}
\put(5.4,6.85){$d$}
\put(2.85,9.4){$1$}
\put(2.2,-.75){{\rm Fig.~2}}
\end{picture}
\end{center}
\vspace*{4mm}
has pseudocomplemented sections and is simultaneously relatively pseudocomplemented. The tables for $x^y$, $\rightarrow$ and $*$ look as follows:
\[
\begin{array}{c|cccccc}
x^y & 0 & a & b & c & d & 1 \\
\hline
0 & 1 & - & - & - & - & - \\
a & b & 1 & - & - & - & - \\
b & a & - & 1 & - & - & - \\
c & 0 & d & d & 1 & - & - \\
d & 0 & c & c & - & 1 & - \\
1 & 0 & a & b & c & d & 1
\end{array}
\quad
\begin{array}{c|cccccc}
\rightarrow & 0 & a & b & c & d & 1 \\
\hline
0 & 1 & 1 & 1 & 1 & 1 & 1 \\
a & b & 1 & \{c,d\} & 1 & 1 & 1 \\
b & a & \{c,d\} & 1 & 1 & 1 & 1 \\
c & 0 & d & d & 1 & d & 1 \\
d & 0 & c & c & c & 1 & 1 \\
1 & 0 & a & b & c & d & 1
\end{array}
\quad
\begin{array}{c|cccccc}
* & 0 & a & b & c & d & 1 \\
\hline
0 & 1 & 1 & 1 & 1 & 1 & 1 \\
a & b & 1 & b & 1 & 1 & 1 \\
b & a & a & 1 & 1 & 1 & 1 \\
c & 0 & a & b & 1 & d & 1 \\
d & 0 & a & b & c & 1 & 1 \\
1 & 0 & a & b & c & d & 1.
\end{array}
\]
The intuitionistic implication, i.e.\ the relative pseudocomplement $*$, differs from our ``unsharp'' implication $\rightarrow$; e.g.\ $a*b=b$ whereas $a\rightarrow b=\{c,d\}$. Hence, although $a\rightarrow b$ is an ``unsharp'' implication because its result is a two-element subset of $P$, its values $c$ and $d$ are greater than the value of the intuitionistic implication $a*b$.
\end{example}
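The table for $*$ can likewise be checked by machine. In the Python sketch below (ours) the relative pseudocomplement on the poset of Figure~2 is computed as the greatest $z$ with $L(x,z)\subseteq L(y)$.

```python
# Poset of Fig. 2 (Example with relative pseudocomplements), by down-sets.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'b'},
        'c': {'0', 'a', 'b', 'c'}, 'd': {'0', 'a', 'b', 'd'},
        '1': {'0', 'a', 'b', 'c', 'd', '1'}}
P = set(down)

def L(A):
    return set.intersection(*(down[a] for a in A))

def Max(S):
    return {x for x in S if all(y == x or x not in down[y] for y in S)}

def star(x, y):
    """Relative pseudocomplement x * y: greatest z with L(x, z) <= L(y)."""
    greatest = Max({z for z in P if L({x, z}) <= down[y]})
    assert len(greatest) == 1
    return next(iter(greatest))
```

Here `star('a', 'b')` yields `'b'`, and both values $c$ and $d$ of $a\rightarrow b$ lie strictly above it.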
\begin{example}\label{ex1}
The poset shown in Figure~3:
\vspace*{-2mm}
\begin{center}
\setlength{\unitlength}{7mm}
\begin{picture}(6,10)
\put(3,1){\circle*{.3}}
\put(2,2){\circle*{.3}}
\put(1,3){\circle*{.3}}
\put(5,3){\circle*{.3}}
\put(1,7){\circle*{.3}}
\put(5,7){\circle*{.3}}
\put(3,9){\circle*{.3}}
\put(1,3){\line(1,-1)2}
\put(1,3){\line(1,1)4}
\put(1,3){\line(0,1)4}
\put(5,3){\line(-1,-1)2}
\put(5,3){\line(-1,1)4}
\put(5,3){\line(0,1)4}
\put(3,9){\line(-1,-1)2}
\put(3,9){\line(1,-1)2}
\put(2.85,.25){$0$}
\put(1.3,1.85){$a$}
\put(.3,2.85){$b$}
\put(5.4,2.85){$c$}
\put(.3,6.85){$d$}
\put(5.4,6.85){$e$}
\put(2.85,9.4){$1$}
\put(2.2,-.75){{\rm Fig.~3}}
\end{picture}
\end{center}
\vspace*{5mm}
has pseudocomplemented sections, but is not relatively pseudocomplemented since the relative pseudocomplement of $b$ with respect to $a$ does not exist. The tables for $x^y$ and $\rightarrow$ look as follows:
\[
\begin{array}{c|ccccccc}
x^y & 0 & a & b & c & d & e & 1 \\
\hline
0 & 1 & - & - & - & - & - & - \\
a & c & 1 & - & - & - & - & - \\
b & c & a & 1 & - & - & - & - \\
c & b & - & - & 1 & - & - & - \\
d & 0 & a & e & e & 1 & - & - \\
e & 0 & a & d & d & - & 1 & - \\
1 & 0 & a & b & c & d & e & 1.
\end{array}
\quad\quad\quad
\begin{array}{c|ccccccc}
\rightarrow & 0 & a & b & c & d & e & 1 \\
\hline
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
a & c & 1 & 1 & \{d,e\} & 1 & 1 & 1 \\
b & c & a & 1 & \{d,e\} & 1 & 1 & 1 \\
c & b & a & \{d,e\} & 1 & 1 & 1 & 1 \\
d & 0 & a & e & e & 1 & e & 1 \\
e & 0 & a & d & d & d & 1 & 1 \\
1 & 0 & a & b & c & d & e & 1.
\end{array}
\]
\end{example}
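The failure of relative pseudocomplementation here can be confirmed computationally: the candidate set $\{z\in P\mid L(b,z)\subseteq L(a)\}$ has two incomparable maximal elements and hence no greatest one. A Python sketch (ours) for the poset of Figure~3:

```python
# Poset of Fig. 3: 0 < a < b; 0 < c; b, c < d, e; d, e < 1.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'a', 'b'}, 'c': {'0', 'c'},
        'd': {'0', 'a', 'b', 'c', 'd'}, 'e': {'0', 'a', 'b', 'c', 'e'},
        '1': {'0', 'a', 'b', 'c', 'd', 'e', '1'}}
P = set(down)

def L(A):
    return set.intersection(*(down[a] for a in A))

def Max(S):
    return {x for x in S if all(y == x or x not in down[y] for y in S)}

# Candidates for the relative pseudocomplement b * a:
cand = {z for z in P if L({'b', z}) <= down['a']}
maximal = Max(cand)   # two incomparable maximal candidates => b * a does not exist
```

The candidate set is $\{0,a,c\}$ with the two maximal elements $a$ and $c$, so no greatest candidate exists.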
It is natural to ask whether, given an operator $\rightarrow$ on a finite set $A$, this structure can be converted into a poset with pseudocomplemented sections. For this, we introduce the following structure.
\section{Implication algebras}
Our next goal is to show that this unsharp implication in fact determines the given finite poset with pseudocomplemented sections. For this purpose we define the following concept.
\begin{definition}\label{def1}
A {\em finite {\rm I}-algebra} is an ordered triple $(A,\rightarrow,1)$ with a finite set $A$, an operator $\rightarrow:A^2\rightarrow2^A\setminus\{\emptyset\}$ and $1\in A$ satisfying the following conditions:
\begin{enumerate}
\item[{\rm(I1)}] $x\rightarrow x\approx x\rightarrow1\approx1$,
\item[{\rm(I2)}] $x\rightarrow y=y\rightarrow x=1\Rightarrow x=y$,
\item[{\rm(I3)}] $x\rightarrow y=y\rightarrow z=1\Rightarrow x\rightarrow z=1$,
\item[{\rm(I4)}] $y\rightarrow z=z\rightarrow x=z\rightarrow(x\rightarrow y)=1\Rightarrow z=y$,
\item[{\rm(I5)}] $\big(y\rightarrow x=y\rightarrow u=1\text{ and }(y\rightarrow z=z\rightarrow x=z\rightarrow u=1\Rightarrow z=y)\big)\Rightarrow u\rightarrow(x\rightarrow y)=1$,
\item[{\rm(I6)}] $x\rightarrow y=\{z\rightarrow y\mid x\rightarrow z=y\rightarrow z=1,\text{ and }x\rightarrow u=y\rightarrow u=u\rightarrow z=1\Rightarrow u=z\}$.
\end{enumerate}
\end{definition}
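For a concrete finite example, axioms (I1)--(I4) can be tested exhaustively. The Python sketch below (ours; it checks only (I1)--(I4), since (I5) and (I6) are more laborious to transcribe) verifies them for the operator $\rightarrow$ arising from the lattice of Figure~1, with singletons identified with their unique elements as one-element sets.

```python
# Exhaustive check of (I1)-(I4) for the operator -> of the lattice of Fig. 1.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'b'},
        'c': {'0', 'a', 'c'}, '1': {'0', 'a', 'b', 'c', '1'}}
P = set(down)
ONE = {'1'}   # the singleton {1}, written 1 in the text

def L(A):
    return set.intersection(*(down[a] for a in A))

def U(A):
    return {x for x in P if A <= down[x]}

def Min(S):
    return {x for x in S if all(y == x or y not in down[x] for y in S)}

def Max(S):
    return {x for x in S if all(y == x or x not in down[y] for y in S)}

def sec_pc(x, y):  # pseudocomplement of x in the section [y, 1]
    section = U({y})
    return next(iter(Max({z for z in section if L({x, z}) & section == {y}})))

def arrow(x, y):   # x -> y := (Min U(x, y))^y
    return {sec_pc(m, y) for m in Min(U({x, y}))}

def check_axioms():
    for x in P:
        assert arrow(x, x) == ONE and arrow(x, '1') == ONE        # (I1)
        for y in P:
            if arrow(x, y) == ONE and arrow(y, x) == ONE:         # (I2)
                assert x == y
            xy = arrow(x, y)
            for z in P:
                if arrow(x, y) == ONE and arrow(y, z) == ONE:     # (I3)
                    assert arrow(x, z) == ONE
                if (len(xy) == 1 and arrow(y, z) == ONE
                        and arrow(z, x) == ONE
                        and arrow(z, next(iter(xy))) == ONE):     # (I4)
                    assert z == y
    return True
```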
Now we can state and prove the following result.
\begin{theorem}\label{th1}
Let $\mathbf P=\big(P,\leq,(^y;y\in P),1\big)$ be a finite poset with pseudocomplemented sections and put $x\rightarrow y:=\big(\Min U(x,y)\big)^y$ for all $x,y\in P$. Then $\mathbb I(\mathbf P):=(P,\rightarrow,1)$ is a finite {\rm I}-algebra.
\end{theorem}
\begin{proof}
Let $a,b\in P$. According to (i) of Proposition~\ref{prop1}, $a\leq b$ if and only if $a\rightarrow b=1$, and according to (ii) of Lemma~\ref{lem1}, $b\leq a$ implies $a\rightarrow b=a^b$. Now (I1) follows since $\leq$ is reflexive and $1$ is the top element of $(P,\leq)$, (I2) and (I3) follow by antisymmetry and transitivity of $\leq$, respectively. Let $x,y,z,u\in P$. If $y\leq z\leq x$ and $z\leq x^y$ then $z\in L(x,x^y)\cap[y,1]=\{y\}$, i.e.\ $z=y$ which shows that (I4) holds. Now for $x,u\in[y,1]$ the following are equivalent:
\begin{align*}
y\rightarrow z=z\rightarrow x=z\rightarrow u=1 & \Rightarrow z=y, \\
z\in L(x,u)\cap[y,1] & \Rightarrow z=y, \\
L(x,u)\cap[y,1] & \subseteq\{y\}, \\
L(x,u)\cap[y,1] & =\{y\}.
\end{align*}
Since for $x,u\in[y,1]$, $L(x,u)\cap[y,1]=\{y\}$ implies $u\leq x^y$, we have (I5). Finally, (I6) follows from the definition of $\rightarrow$.
\end{proof}
The converse of Theorem~\ref{th1} is also true, as the following result shows.
\begin{theorem}\label{th2}
Let $\mathbf A=(A,\rightarrow,1)$ be a finite {\rm I}-algebra and define
\begin{align*}
x\leq y & :\Leftrightarrow x\rightarrow y=1, \\
x^y & :=x\rightarrow y\text{ whenever }y\leq x
\end{align*}
{\rm(}$x,y\in A${\rm)}. Then $\mathbb P(\mathbf A):=\big(A,\leq,(^y;y\in A),1\big)$ is a finite poset with pseudocomplemented sections.
\end{theorem}
\begin{proof}
Because of (I1) -- (I3), $(A,\leq,1)$ is a finite poset with top element $1$. Because of (I4), $L(x,x^y)\cap[y,1]\subseteq\{y\}$ for all $x,y\in A$ with $y\leq x$ and hence $L(x,x^y)\cap[y,1]=\{y\}$ for all $x,y\in A$ with $y\leq x$. Because of (I5), $y\in A$, $x,u\in[y,1]$ and $L(x,u)\cap[y,1]=\{y\}$ imply $u\leq x^y$. Hence for all $y\in A$, $([y,1],\leq,{}^y,y)$ is a pseudocomplemented poset.
\end{proof}
\begin{remark}
In the above proof, condition {\rm(I6)} of Definition~\ref{def1} is not needed. We need this condition in order to prove that the above described correspondence is one-to-one.
\end{remark}
Now we show that the assignments from Theorems~\ref{th1} and \ref{th2} are mutually inverse.
\begin{theorem}
The correspondence described in Theorems~\ref{th1} and \ref{th2} is one-to-one.
\end{theorem}
\begin{proof}
Let $\mathbf P=\big(P,\leq,(^y;y\in P),1\big)$ be a finite poset with pseudocomplemented sections, put
\begin{align*}
\mathbb I(\mathbf P) & =(P,\rightarrow,1), \\
\mathbb P\big(\mathbb I(\mathbf P)\big) & =\big(P,\leq',(_y;y\in P),1\big)
\end{align*}
and let $a,b\in P$. Then because of the definition of $\leq'$ and (i) of Proposition~\ref{prop1} the following are equivalent:
\begin{align*}
a & \leq'b, \\
a\rightarrow b & =1, \\
a & \leq b.
\end{align*}
If $b\leq a$ then because of the definition of $a_b$ and (ii) of Lemma~\ref{lem1} we have $a_b=a\rightarrow b=a^b$. This shows $\mathbb P\big(\mathbb I(\mathbf P)\big)=\mathbf P$. Now let $\mathbf A=(A,\rightarrow,1)$ be a finite {\rm I}-algebra, put
\begin{align*}
\mathbb P(\mathbf A) & =\big(A,\leq,(^y;y\in A),1\big), \\
\mathbb I\big(\mathbb P(\mathbf A)\big) & =(A,\Rightarrow,1)
\end{align*}
and let $a,b\in A$. Then
\[
a\Rightarrow b=\big(\Min U(a,b)\big)^b=\{x^b\mid a,b\leq x,\text{ and }a,b\leq y\leq x\text{ implies }y=x\}=a\rightarrow b
\]
because of the definition of $\leq$ and (I6). This shows $\mathbb I\big(\mathbb P(\mathbf A)\big)=\mathbf A$.
\end{proof}
In every finite poset $\big(P,\leq,(^y;y\in P),0,1\big)$ with $0$ and pseudocomplemented sections one can define $\neg x:=x\rightarrow0$ for all $x\in P$. Observe that $\neg x=\max\{y\in P\mid L(x,y)=0\}$ for all $x\in P$ and hence $\neg x=x^0$ for all $x\in P$. Due to the fact that $\neg x$ is the pseudocomplement as usually defined (see e.g.\ \cite{Ba} or \cite V), it satisfies the following known properties:
\begin{enumerate}[(P1)]
\item $\neg0=1$ and $\neg1=0$,
\item $x\leq y$ implies $\neg y\leq\neg x$,
\item $x\leq\neg\neg x$,
\item $\neg\neg\neg x=\neg x$.
\end{enumerate}
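These four properties are easily confirmed on a concrete example. The following Python sketch (ours) checks (P1)--(P4) for $\neg x=x^0$ on the poset of Figure~3.

```python
# Check (P1)-(P4) for the negation neg(x) = x^0 on the poset of Fig. 3.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'a', 'b'}, 'c': {'0', 'c'},
        'd': {'0', 'a', 'b', 'c', 'd'}, 'e': {'0', 'a', 'b', 'c', 'e'},
        '1': {'0', 'a', 'b', 'c', 'd', 'e', '1'}}
P = set(down)

def L(A):
    return set.intersection(*(down[a] for a in A))

def Max(S):
    return {x for x in S if all(y == x or x not in down[y] for y in S)}

def neg(x):
    """Pseudocomplement x^0: the greatest z with L(x, z) = {0}."""
    greatest = Max({z for z in P if L({x, z}) == {'0'}})
    assert len(greatest) == 1
    return next(iter(greatest))

def check_P1_P4():
    assert neg('0') == '1' and neg('1') == '0'        # (P1)
    for x in P:
        assert x in down[neg(neg(x))]                 # (P3): x <= neg(neg(x))
        assert neg(neg(neg(x))) == neg(x)             # (P4)
        for y in P:
            if x in down[y]:                          # x <= y ...
                assert neg(y) in down[neg(x)]         # (P2): ... => neg(y) <= neg(x)
    return True
```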
\begin{remark}
Condition {\rm(P2)} expresses the fact that our negation and implication satisfy the contraposition law, i.e.
\[
\text{if }x\rightarrow y=1\text{ then also }\neg y\rightarrow\neg x=1.
\]
\end{remark}
At the end of this section we show that every bounded pseudocomplemented poset contains a subposet where the unary negation $'$ is a complementation. This is in fact analogous to the Glivenko Theorem (see e.g.\ \cite{Bi}) for pseudocomplemented lattices.
\begin{proposition}
Let $\mathbf P=(P,\leq,{}',0,1)$ be a bounded pseudocomplemented poset. Then $(P',\leq,{}',0,1)$ with $P':=\{x'\mid x\in P\}$ is a complemented poset.
\end{proposition}
\begin{proof}
Clearly, $P'=\{x\in P\mid x''=x\}$. Let $a,b\in P'$. Then $a'\in P'$. Moreover, $L(a,a')=0$. If $b\in U(a,a')$ then $b'\in L(a',a'')=L(a,a')=0$ and hence $b=b''=0'=1$. This shows $U(a,a')=1$, i.e.\ $a'$ is a complement of $a$.
\end{proof}
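For the poset of Example~\ref{ex1} the construction can be carried out explicitly. The Python sketch below (ours) computes $P'$ and checks the complementation property inside the subposet.

```python
# Glivenko-style check on the poset of Fig. 3: P' = {x' : x in P}.
down = {'0': {'0'}, 'a': {'0', 'a'}, 'b': {'0', 'a', 'b'}, 'c': {'0', 'c'},
        'd': {'0', 'a', 'b', 'c', 'd'}, 'e': {'0', 'a', 'b', 'c', 'e'},
        '1': {'0', 'a', 'b', 'c', 'd', 'e', '1'}}
P = set(down)

def L(A):
    return set.intersection(*(down[a] for a in A))

def U(A):
    return {x for x in P if A <= down[x]}

def Max(S):
    return {x for x in S if all(y == x or x not in down[y] for y in S)}

def pc(x):  # pseudocomplement x': greatest z with L(x, z) = {0}
    greatest = Max({z for z in P if L({x, z}) == {'0'}})
    assert len(greatest) == 1
    return next(iter(greatest))

P_prime = {pc(x) for x in P}

def complemented():
    # within the subposet P', pc(a) is a complement of a
    return all(L({a, pc(a)}) & P_prime == {'0'}
               and U({a, pc(a)}) & P_prime == {'1'}
               for a in P_prime)
```

Here `P_prime` is $\{0,b,c,1\}$, the four-element complemented poset of Figure~4.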
\begin{example}
If $(P,\leq,{}',0,1)$ is the bounded pseudocomplemented poset of Example~\ref{ex1} then the complemented poset $(P',\leq,{}',0,1)$ is depicted in Figure~4:
\vspace*{-2mm}
\begin{center}
\setlength{\unitlength}{7mm}
\begin{picture}(6,6)
\put(3,1){\circle*{.3}}
\put(1,3){\circle*{.3}}
\put(5,3){\circle*{.3}}
\put(3,5){\circle*{.3}}
\put(1,3){\line(1,-1)2}
\put(1,3){\line(1,1)2}
\put(5,3){\line(-1,-1)2}
\put(5,3){\line(-1,1)2}
\put(2.85,.25){$0$}
\put(.3,2.85){$b$}
\put(5.4,2.85){$c$}
\put(2.85,5.4){$1$}
\put(2.2,-.75){{\rm Fig.~4}}
\end{picture}
\end{center}
\vspace*{4mm}
\end{example}
\section{Adjointness of implication with unsharp conjunction}
It is known that every relatively pseudocomplemented lattice is residuated; in fact it is a ``prototype'' of a residuated lattice in which multiplication is the lattice meet and the residuum is the relative pseudocomplement. As mentioned in the introduction, sectionally pseudocomplemented lattices were introduced in order to extend the concept of relative pseudocomplementation to non-distributive lattices. The question concerning residuation in sectionally pseudocomplemented lattices was answered by the authors and J.~K\"uhr (\cite{CKL}) as follows.
A lattice $\mathbf L=(L,\vee,\wedge,\odot,\rightarrow,1)$ with top element $1$ and with two binary operations $\odot$ and $\rightarrow$ is called {\em relatively residuated} if
\begin{enumerate}[{\rm(i)}]
\item $(L,\odot,1)$ is a commutative groupoid with $1$,
\item $x\leq y$ implies $x\odot z\leq y\odot z$,
\item $(x\vee z)\odot(y\vee z)\leq z$ if and only if $x\vee z\leq y\rightarrow z$.
\end{enumerate}
It is worth noticing that the class of relatively residuated lattices forms a variety, see \cite{CKL}. Namely, under condition (i), conditions (ii) and (iii) are equivalent to the identities
\begin{enumerate}
\item[(iv)] $x\odot z\leq(x\vee y)\odot z$,
\item[(v)] $z\vee y\leq x\rightarrow\Big(\big((x\vee y)\odot(z\vee y)\big)\vee y\Big)$,
\item[(vi)] $(x\rightarrow y)\odot(x\vee y)\leq y$.
\end{enumerate}
Unfortunately, we cannot adopt this definition for posets $(P,\leq)$ because we cannot use the lattice operations and, moreover, our implication is not an operation but an operator, i.e.\ its result need not be a singleton. However, we can proceed as follows. Bearing in mind that $\rightarrow$ is unsharp, we can introduce an unsharp connective of conjunction as follows:
\[
x\odot y:=\Max L(x,y)
\]
and for non-singleton subsets $A,B$ of $P$ we define $A\odot B:=\Max L(A,B)$. Note that this conjunction attains the maximal possible values for given entries $x$ and $y$. Moreover, the operator $\odot$ is idempotent since for every $x\in P$ we have
\[
x\odot x=\Max L(x,x)=\Max L(x)=x.
\]
Now we can define the following concept.
\begin{definition}\label{def2}
A poset $\mathbf P=(P,\leq,\odot,\rightarrow,1)$ with top element $1$ and two operators $\odot$ and $\rightarrow$, both mappings from $P^2$ to $2^P$, such that
\begin{enumerate}[{\rm(i)}]
\item $\odot$ is commutative and associative and $x\odot1\approx x$,
\item if $x\leq y$ and $z\in P$ then there exists some $t\in y\odot z$ with $x\odot z\leq t$,
\item $z\in x\odot y$ if and only if $(z\leq x,y$ and $x\leq y\rightarrow z)$
\end{enumerate}
will be called {\em unsharply residuated}. Condition {\rm(iii)} will be called {\em unsharp adjointness}. We call an unsharply residuated poset $\mathbf P$ {\em divisible} if for all $x,y\in P$ with $x\geq y$ we have that $x\rightarrow y$ is a singleton and $\big(x\odot(x\rightarrow y)\big)\cap[y,1]=\{y\}$.
\end{definition}
We are going to show that finite posets with pseudocomplemented sections are unsharply residuated and divisible.
\begin{theorem}\label{th3}
Let $\big(P,\leq,(^y;y\in P),1\big)$ be a finite poset with pseudocomplemented sections and for $x,y\in P$ define
\begin{align*}
x\odot y & :=\Max L(x,y), \\
x\rightarrow y & :=\big(\Min U(x,y)\big)^y.
\end{align*}
Then $(P,\leq,\odot,\rightarrow,1)$ is unsharply residuated and divisible.
\end{theorem}
\begin{proof}
Let $a,b,c\in P$. Then
\[
a\odot1=\Max L(a,1)=\Max L(a)=a
\]
and, clearly, $\odot$ is commutative. Moreover,
\begin{align*}
(a\odot b)\odot c & =\Max L\big(\Max L(a,b),c\big)=\Max\Big(L\big(\Max L(a,b)\big)\cap L(c)\Big)= \\
& =\Max\big(L(a,b)\cap L(c)\big)=\Max L(a,b,c)=\Max\big(L(a)\cap L(b,c)\big)= \\
& =\Max\Big(L(a)\cap L\big(\Max L(b,c)\big)\Big)=\Max L\big(a,\Max L(b,c)\big)=a\odot(b\odot c).
\end{align*}
Thus $\odot$ satisfies (i) of Definition~\ref{def2}. If $a\leq b$ then
\[
a\odot c=\Max L(a,c)\subseteq L(a,c)\subseteq L(b,c)
\]
and hence there exists some $d\in\Max L(b,c)$ with $a\odot c\leq d$. This shows (ii) of Definition~\ref{def2}. Now unsharp adjointness remains to be proved. Because of Lemma~\ref{lem1} (ii) the following are equivalent:
\begin{align*}
& c\in a\odot b, \\
& c\in\Max L(a,b), \\
& L(a,b)\cap[c,1]=\{c\}, \\
& c\leq a,b\text{ and }a\leq b^c, \\
& c\leq a,b\text{ and }a\leq b\rightarrow c.
\end{align*}
Now assume $a\geq b$. Then $\Min U(a,b)=\{a\}$, hence $a\rightarrow b=a^b$ is a singleton, and
\[
\big(a\odot(a\rightarrow b)\big)\cap[b,1]=\big(\Max L(a,a^b)\big)\cap[b,1]\subseteq L(a,a^b)\cap[b,1]=\{b\}.
\]
On the other hand, $b\in L(a,a^b)$ and if $b\leq c\in L(a,a^b)$ then $c\in L(a,a^b)\cap[b,1]=\{b\}$, i.e.\ $c=b$. This shows that $b\in\Max L(a,a^b)$ and hence $b\in\big(\Max L(a,a^b)\big)\cap[b,1]$, thus
\[
\big(\Max L(a,a^b)\big)\cap[b,1]=\{b\},
\]
proving divisibility of $(P,\leq,\odot,\rightarrow,1)$.
\end{proof}
The divisibility has an essential influence on the logic for which the considered unsharply residuated poset is an algebraic semantics. Namely, if we know the truth values of $x$ and $x\rightarrow y$ and we know that $y\leq x$ then the truth value of $y$ is exactly the conjunction of $x$ and $x\rightarrow y$, which is just the derivation rule {\em Modus Ponens}.
If an unsharply residuated poset is a lattice then clearly we have
\[
x\odot y=\Max L(x,y)=\Max L(x\wedge y)=x\wedge y
\]
and the fact that $z\leq x,y$ can be expressed by $x\vee z=x$ and $y\vee z=y$. Then unsharp adjointness can be formulated as follows:
\[
(x\vee z)\odot(y\vee z)=z\text{ if and only if }x\vee z\leq(y\vee z)\rightarrow z.
\]
However, by (ii) of Proposition~\ref{prop1} we know that
\[
(y\vee z)\rightarrow z=y\rightarrow z,
\]
and
\[
(x\vee z)\odot(y\vee z)\geq z
\]
automatically holds. Hence the left-hand side of (iii) is equivalent to $(x\vee z)\odot(y\vee z)\leq z$. Altogether, we obtain
\[
(x\vee z)\odot(y\vee z)\leq z\text{ if and only if }x\vee z\leq y\rightarrow z
\]
which is just relative adjointness as defined in \cite{CKL} and mentioned above. This means that Definition~\ref{def2} is compatible with the corresponding definition for lattices.
\begin{example}
Let us consider the poset from Example~\ref{ex1}. The table for $\odot$ looks as follows:
\[
\begin{array}{c|ccccccc}
\odot & 0 & a & b & c & d & e & 1 \\
\hline
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
a & 0 & a & a & 0 & a & a & a \\
b & 0 & a & b & 0 & b & b & b \\
c & 0 & 0 & 0 & c & c & c & c \\
d & 0 & a & b & c & d & \{b,c\} & d \\
e & 0 & a & b & c & \{b,c\} & e & e \\
1 & 0 & a & b & c & d & e & 1.
\end{array}
\]
We can see that $d\odot e=\{b,c\}$ is not a singleton, and $b\in d\odot e$ implies $b\leq d,e$ and $d\leq d=e\rightarrow b$; also, conversely, $c\leq e,d$ and $e\leq e=d\rightarrow c$ imply $c\in\{b,c\}=e\odot d$.
\end{example}
\section{Conclusion}
We constructed a binary operator on a finite poset with pseudocomplemented sections which can serve as an unsharp implication. It satisfies important properties required for implication in various sorts of propositional logics. Moreover, a negation derived by means of this implication satisfies the properties of negation in intuitionistic logic; thus our poset equipped with this unsharp implication can be recognized as an algebraic semantics of a general case of intuitionistic logic. Furthermore, an unsharp conjunction is introduced having properties similar to those satisfied by the connective conjunction in propositional calculus. This unsharp conjunction together with the mentioned unsharp implication forms an adjoint pair. Hence, the logic based on such a poset can be considered as a fairly general kind of substructural logic.
{\bf Declaration of competing interest}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Let $\Omega$ be a smooth bounded domain in $\R^N$. In the study of the nonlinear equation
\begin{equation}
\label{equation-intro}
-\dvg( j_\xi(x,u,\nabla u)) +j_s(x,u,\nabla u)=g(x,u)\quad\,\,\, \text{in $\Omega$},
\end{equation}
an important r\^ole is played by the coerciveness feature of $j$, namely the fact
that there exists a constant $\sigma>0$ such that
\begin{equation}
\label{coerciv}
j(x,s,\xi)\geq \sigma |\xi|^2,\quad\,\,\text{for a.e. $x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$}.
\end{equation}
Under condition \eqref{coerciv} and other suitable assumptions, including the
boundedness of the map $s\mapsto j(x,s,\xi)$,
equation \eqref{equation-intro} has been deeply investigated in the last twenty
years by means of variational methods and tools of non-smooth critical point theory, essentially via
two different approaches (see e.g.~\cite{Ab} and \cite{candeg} and references therein).
More recently, the case where the map $s\mapsto j(x,s,\xi)$ is unbounded was also covered
(see e.g.~\cite{Ab1} and \cite{pelsqu}, again via different strategies). The situation is by far
more delicate under the assumption of degenerate coerciveness, namely for some function $\sigma:\R\to\R^+$
with $\sigma(s)\to 0$ as $s\to\infty$,
\begin{equation}
\label{noncoerciv}
j(x,s,\xi)\geq \sigma(s)|\xi|^2,\quad\,\,\text{for a.e. $x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$}.
\end{equation}
To the authors' knowledge, in this setting, for $j$ of the form $(b(x)+|s|)^{-2\beta}|\xi|^2/2$,
the first contribution to minimization problems is \cite{BO}, while for
existence of mountain pass type solutions we refer to \cite{AbO}, the main point
being the fact that cluster points of arbitrary Palais-Smale sequences are bounded.
See \cite{albofeortr} for more general existence statements and
\cite{BDO,BoBr} for regularity results.
Relying upon a solid background for the treatment of \eqref{equation-intro} in the coercive case,
the main goal of this paper is that of building
a bridge between the theory for non-degenerate coerciveness problems
and that for problems with degenerate coerciveness. Roughly speaking, we see a solution to a degenerate problem as related
to a solution of a corresponding non-degenerate problem, preserving at the same time the main structural assumptions
typically assumed for these classes of equations. To this aim, we introduce a suitable class of diffeomorphisms
$\varphi\in C^2(\R)$ and consider the functions $j^\sharp:\Omega\times\R\times\R^N\to\R$ and $g^\sharp:\Omega\times\R\to\R$,
defined as
$$
j^\sharp(x,s,\xi)=j(x,\varphi(s),\varphi'(s)\xi), \qquad
g^\sharp(x,s)=g(x,\varphi(s))\varphi'(s),
$$
for a.e. $x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$. Then, if \eqref{noncoerciv} holds,
we can find $\sigma^\sharp>0$ such that
\begin{equation*}
j^\sharp(x,s,\xi)\geq \sigma^\sharp |\xi|^2,
\end{equation*}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$, thus recovering the non-degenerate coerciveness
from the original degenerate framework.
We shall write the corresponding Euler's equation as
\begin{equation}
\label{equation-intro-sharp}
-\dvg( j_\xi^\sharp(x,v,\nabla v)) +j_s^\sharp(x,v,\nabla v)=g^\sharp(x,v) \quad\,\,\, \text{in $\Omega$}.
\end{equation}
A first natural issue is the correspondence between the solutions of \eqref{equation-intro}
and the solutions of \eqref{equation-intro-sharp} through the diffeomorphism $\varphi$. Roughly speaking, the expected link
is that $u=\varphi(v)$ is a solution of \eqref{equation-intro} whenever
$v$ is a solution to \eqref{equation-intro-sharp}, in a suitable sense.
On the other hand, in general, $\varphi(v)\not\in H^1_0(\Omega)$ although $v\in H^1_0(\Omega)$. Hence, the notion of solution
for functions in the Sobolev space $H^1_0(\Omega)$ cannot remain invariant under the action of $\varphi$, unless
$v\in L^\infty(\Omega)$. In fact, we provide a new
definition of generalized solution which is partly based
upon the notion of renormalized solution introduced in \cite{DMOP}
in the study of elliptic equations with general measure data and partly on the variational formulation adopted in \cite{pelsqu}.
The new notion turns out to be invariant under diffeomorphisms (Proposition~\ref{soluzdisrt}) as well as conveniently related
to the machinery developed in \cite{pelsqu}.
Moreover, we detect two relevant invariant conditions.
The first (Proposition~\ref{invar1}) is a modification of the standard (non-invariant)
sign condition
\begin{equation}
\label{classicsign}
j_s(x,s,\xi)s\geq 0,\quad \text{for all $|s|\geq R$ and some $R\geq 0$},
\end{equation}
namely there exist $\eps\in (0,1)$ and $R\geq 0$ such that
\begin{equation}
\label{generalsign}
(1-\eps) j_\xi(x,s,\xi)\cdot\xi+j_s(x,s,\xi)s\geq 0,
\end{equation}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ such that $|s|\geq R$. Condition~\eqref{classicsign}
is well known \cite{Ab,Ab1,AbO,candeg,pelsqu} and plays an important r\^ole in the study of both
existence and summability issues for \eqref{equation-intro}.
The second one (Proposition~\ref{invar2}) is the generalized Ambrosetti-Rabinowitz \cite{AR} condition:
there exist $\delta>0$, $\nu>2$ and $R\geq 0$ such that
\begin{equation}
\label{genARcondition}
\nu j(x,s,\xi)-(1+\delta)j_\xi(x,s,\xi)\cdot\xi-j_s(x,s,\xi)s-\nu G(x,s)+g(x,s)s\geq 0,
\end{equation}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with $|s|\geq R$.
Typically, this condition guarantees that an arbitrary Palais-Smale sequence is bounded
\cite{Ab,Ab1,candeg,pelsqu}. The invariance properties for growth conditions are stated in
Propositions~\ref{rm1}, \ref{menoinf} and \ref{newgrow}. In the situations where
\begin{equation*}
j_s^\sharp(x,s,\xi)s\geq 0,\quad \text{for all $|s|\geq R^\sharp$ and some $R^\sharp\geq 0$},
\end{equation*}
the results of our paper allow one to obtain existence and multiplicity
of solutions for problems with degenerate coercivity by a {\em direct}
application of the results of \cite{pelsqu} (see Theorem~\ref{existthmm}). This is new
compared with the results of \cite{AbO}, since the technique adopted therein does not yield
multiplicity results. In addition, contrary to \cite{AbO}, under certain assumptions on the nonlinearity
$g$, the solutions need not be bounded.
A further development of the ideas in this paper is related to strengthening some of the results
of \cite{pelsqu}, in order to allow the weaker sign condition \eqref{generalsign}
to replace the standard sign condition \eqref{classicsign}.
Then existence and multiplicity theorems for coercive equations with unbounded coefficients
automatically recover existence and multiplicity theorems for equations
with degenerate coercivity. This will be the subject of a further investigation.
\vskip4pt
\noindent
The plan of the paper is as follows.
\newline In Section~\ref{generalizedsol} we introduce
a new notion of generalized solution for \eqref{equation-intro} and prove that
it is invariant under the action of $\varphi$. In Section~\ref{growthsect} we show how
$\varphi$ affects some useful growth conditions.
In Section~\ref{signsect} we study the invariance of the sign condition \eqref{generalsign}
and get some related summability results. In Section~\ref{ambrrabsect}, we consider the invariance of an Ambrosetti-Rabinowitz
(AR, in brief) type inequality \eqref{genARcondition}. Finally, in Section~\ref{existence-sect}, we obtain new existence results for multiple,
possibly unbounded, generalized solutions of \eqref{equation-intro}.
\section{Invariant properties}
Let $\Omega$ be a smooth bounded domain in $\R^N$. We consider
$j:\Omega\times\R\times\R^N\to\R$ with
$j(\cdot,s,\xi)$ measurable in $\Omega$ for all $s\in\R$ and
$\xi\in\R^N$ and $j(x,\cdot,\cdot)$ of class $C^1$ for
a.e.~$x\in\Omega$. Moreover, we assume that the map
$\xi\mapsto j(x,s,\xi)$ is strictly convex and
there exist $\alpha,\gamma,\mu:\R^+\to\R^+$ continuous
with $\alpha(s)\geq 1$ for all $s\in\R^+$ and such that
\begin{gather}
\label{originalgrowths1}
\frac{1}{\alpha(|s|)} |\xi|^2 \leq j(x,s,\xi)\leq \alpha(|s|)|\xi|^2, \\
\label{originalgrowths2}
|j_s(x,s,\xi)|\leq \gamma(|s|)|\xi|^2, \quad\quad
|j_\xi(x,s,\xi)|\leq\mu(|s|)|\xi|,
\end{gather}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$. Actually, the second inequality of \eqref{originalgrowths2}
can be deduced from the strict convexity of $\xi\mapsto j(x,s,\xi)$ and the right-hand inequality of \eqref{originalgrowths1}.
Furthermore, again by the strict convexity of $\xi\mapsto j(x,s,\xi)$ and the left-hand inequality of \eqref{originalgrowths1}, it holds that
\begin{equation}\label{jxicoerc}
j_\xi(x,s,\xi)\cdot \xi\geq \frac{1}{\alpha(|s|)} |\xi|^2,
\end{equation}
see \cite[Remarks 4.1 and 4.3]{pelsqu}.
Without loss of generality, one may assume that
$\alpha,\gamma,\mu:\R^+\to\R^+$ appearing in the
growth conditions of $j,j_s,j_\xi$ are monotonically
increasing. Indeed, we can always replace them
by the increasing functions $\alpha_0,\gamma_0,\mu_0:\R^+\to\R^+$ defined by
\begin{equation*}
\alpha_0(r)=\sup\limits_{s\in [-r,r]} \alpha(|s|), \quad\,\,
\gamma_0(r)=\sup\limits_{s\in [-r,r]} \gamma(|s|),\quad\,\,
\mu_0(r)=\sup\limits_{s\in [-r,r]} \mu(|s|).
\end{equation*}
We shall also assume that $g:\Omega\times\R\to\R$ is a Carath\'eodory function such that
\begin{equation}
\label{sumbgcond}
\sup_{|t|\leq s}|g(\cdot,t)|\in L^1(\Omega),\quad\,\,\text{for every $s\in\R^+$,}
\end{equation}
and we set $G(x,s)=\int_0^s g(x,t)dt$, for every $s\in\R$.
\begin{definition}
\label{diffeoclass}
For an odd diffeomorphism $\varphi:\R\to\R$ of class $C^2$ such that $\varphi(0)=0$,
we consider the following properties
\begin{align}
\label{one}
\varphi'(s)\geq \sigma\sqrt{\alpha(|\varphi(s)|)},\qquad\text{for all $s\in\R$ and some $\sigma>0$}. \\
\label{two}
\lim_{s\to + \infty}
\frac{s\varphi'(s)}{\varphi(s)}= 1+\lim_{s\to + \infty}
\frac{s\varphi''(s)}{\varphi'(s)}=\frac{1}{1-\beta},\qquad\text{for some $\beta\in [0,1)$.}
\end{align}
\end{definition}
\noindent
A simple model satisfying the requirements of Definition~\ref{diffeoclass} is the function
\begin{equation}
\label{fimodel}
\varphi(s)=s{(1+s^2)}^{\frac{\beta}{2(1-\beta)}},\qquad\text{for
all $s\in\R$}, \qquad 0\leq \beta<1,
\end{equation}
in the case when $\alpha(t)=C(1+t)^{2\beta}$, for some $C>0$.
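Indeed, writing $\gamma=\frac{\beta}{2(1-\beta)}$, one can check directly that the model \eqref{fimodel} satisfies the requirements. A direct computation gives
\[
\varphi'(s)=(1+s^2)^{\gamma-1}\big[(1+s^2)+2\gamma s^2\big]
=(1+s^2)^{\gamma-1}\Big(1+\frac{s^2}{1-\beta}\Big),
\]
so that
\[
\frac{s\varphi'(s)}{\varphi(s)}=\frac{1+\frac{s^2}{1-\beta}}{1+s^2}\to\frac{1}{1-\beta},
\quad\text{as $s\to+\infty$,}
\]
and a similar computation yields $\frac{s\varphi''(s)}{\varphi'(s)}\to\frac{\beta}{1-\beta}$, so that \eqref{two} holds. Moreover, as $s\to+\infty$,
\[
\varphi'(s)\sim\frac{1}{1-\beta}\,s^{\frac{\beta}{1-\beta}},\qquad
\sqrt{\alpha(|\varphi(s)|)}=\sqrt{C}\,(1+|\varphi(s)|)^{\beta}\sim\sqrt{C}\,s^{\frac{\beta}{1-\beta}},
\]
and, both sides being continuous and positive, condition \eqref{one} holds for a suitable $\sigma>0$.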
\begin{definition}
Consider the functions
$$
j:\Omega\times\R\times\R^N\to\R,\quad
g:\Omega\times\R\to\R, \quad
G:\Omega\times\R\to\R,
$$
and let
$\varphi\in C^2(\R)$ be a diffeomorphism according to Definition~\ref{diffeoclass}. We define
$$
j^\sharp:\Omega\times\R\times\R^N\to\R,\quad
g^\sharp:\Omega\times\R\to\R, \quad
G^\sharp:\Omega\times\R\to\R,
$$
by setting
$$
j^\sharp(x,s,\xi)=j(x,\varphi(s),\varphi'(s)\xi),
$$
for a.e. $x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ and
$$
g^\sharp(x,s)=g(x,\varphi(s))\varphi'(s), \qquad
G^\sharp(x,s)=\int_0^{s} g^\sharp(x,t)dt=G(x,\varphi(s)),
$$
for a.e. $x\in\Omega$ and all $s\in\R$.
\end{definition}
\noindent
Now we see that $\varphi$ turns a degenerate problem associated with $j$ into a non-degenerate one,
associated with $j^\sharp$ and that $j^\sharp,j_s^\sharp$ and $j_\xi^\sharp$ satisfy
growths analogous to those of $j,j_s$ and $j_\xi$.
\begin{proposition}
\label{rm1}
Let $\varphi\in C^2(\R)$ be a diffeomorphism which satisfies
the properties of Definition~\ref{diffeoclass}.
Assume that $\alpha,\gamma,\mu:\R^+\to\R^+$ satisfy the growth conditions~\eqref{originalgrowths1}-\eqref{originalgrowths2}.
Then there exist continuous functions $\alpha^\sharp,\gamma^\sharp,\mu^\sharp:\R^+\to\R^+$ and $\sigma^\sharp>0$ such that
\begin{gather*}
\sigma^\sharp |\xi|^2 \leq j^\sharp(x,s,\xi)\leq \alpha^\sharp(|s|)|\xi|^2, \\
\noalign{\vskip2pt}
|j^\sharp_s(x,s,\xi)|\leq \gamma^\sharp(|s|)|\xi|^2, \quad\,\,\,
|j^\sharp_\xi(x,s,\xi)|\leq \mu^\sharp(|s|)|\xi|,
\end{gather*}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$.
\end{proposition}
\begin{proof}
In light of \eqref{originalgrowths1} and of \eqref{one} of Definition~\ref{diffeoclass}, for $\sigma^\sharp=\sigma^2$, we have
\begin{equation*}
\sigma^\sharp|\xi|^2\leq \frac{\varphi'(s)^2}{\alpha(|\varphi(s)|)}|\xi|^2\leq j(x,\varphi(s),\varphi'(s)\xi)
\leq \alpha(|\varphi(s)|)\varphi'(s)^{2}|\xi|^2
\end{equation*}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$. Furthermore,
by virtue of \eqref{originalgrowths2}, we have
$$
|j^\sharp_\xi(x,s,\xi)|\leq (\varphi'(s))^2\mu(|\varphi(s)|)|\xi|,
$$
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$, as well as
\begin{equation*}
|j^\sharp_s(x,s,\xi)| \leq [|\varphi''(s)| \mu(|\varphi(s)|)\varphi'(s)
+(\varphi'(s))^3\gamma(|\varphi(s)|)]|\xi|^2,
\end{equation*}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$. The
assertions follow with $\alpha^\sharp,\gamma^\sharp,\mu^\sharp:\R\to\R^+$,
\begin{align*}
\alpha^\sharp(s) & =\alpha(|\varphi(s)|)\varphi'(s)^2, \\
\gamma^\sharp(s) & =|\varphi''(s)|\mu(|\varphi(s)|)\varphi'(s)+(\varphi'(s))^3\gamma(|\varphi(s)|), \\
\mu^\sharp (s) & =(\varphi'(s))^2\mu(|\varphi(s)|),
\end{align*}
for all $s\in\R$. Of course, without loss of generality, one can then replace $\alpha^\sharp,\gamma^\sharp,\mu^\sharp$ by
even functions satisfying the same growth controls.
\end{proof}
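\noindent
For instance, in the model case $\alpha(t)=C(1+t)^{2\beta}$ with $\varphi$ as in \eqref{fimodel}, the formulas of the previous proof give
\[
\alpha^\sharp(s)=C(1+|\varphi(s)|)^{2\beta}\varphi'(s)^2\sim C'\,|s|^{\frac{4\beta}{1-\beta}},
\quad\text{as $|s|\to\infty$,}
\]
for some constant $C'>0$: hence $\alpha^\sharp$ has polynomial growth, while the problem associated with $j^\sharp$ is uniformly coercive.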
\subsection{Generalized solutions}
\label{generalizedsol}
For any $k>0$, consider the truncation $T_k:\R\to\R$,
$$
T_k(s)
=
\begin{cases}
s & \text{for $|s|\leq k$}, \\
k\,{\rm sign}(s) & \text{for $|s|\geq k$}.
\end{cases}
$$
Moreover, as in \cite{pelsqu}, for a measurable function $u:\Omega\to\R$,
let us consider the space
\begin{equation}
\label{sottopiccolo}
V_u=\big\{v\in H^1_0(\Omega)\cap L^\infty(\Omega): u\in L^\infty(\{v\neq 0\})\big\}.
\end{equation}
This functional space was originally introduced by Degiovanni and Zani
for functions $u$ of $H^1_0(\Omega)$, in which case $V_u$ turns out to be a dense subspace
of $H^1_0(\Omega)$ (cf.\ \cite{DZ}). Observe that, in view of conditions \eqref{originalgrowths2}
and \eqref{sumbgcond}, it follows
$$
\jxi{u}\cdot \nabla v \in L^1(\Omega),\quad\,\,
\js{u}v\in L^1(\Omega),\quad\,\, g(x,u)v\in L^1(\Omega),
$$
for every $v\in V_u$ and any measurable $u:\Omega\to\R$ with
$T_k(u) \in H^1_0(\Omega)$ for every $k>0$. For such functions, the meaning of $\nabla u$, in the sense of \cite{DMOP},
will be made clear in the proof of Proposition~\ref{soluzdisrt}.
\vskip2pt
\noindent
In the spirit of \cite{DMOP}, where the notion of renormalized solution
is introduced, and \cite{pelsqu}, where the notion of generalized solution
is given, based upon $V_u$, we now introduce the following
\begin{definition}
\label{defsol-bis}
We say that $u$ is a generalized solution to
\begin{equation}\label{problema-bis}
\begin{cases}
-\,\dvg( j_\xi(x,u,\nabla u)) +j_s(x,u,\nabla u)=g(x,u), & \text{in $\Omega$}, \\
\quad u=0, & \text{on $\partial\Omega$},
\end{cases}
\end{equation}
if $u$ is a measurable function finite almost everywhere, such that
\begin{equation}
\label{trunk-sol}
T_k(u) \in H^1_0(\Omega),\quad\text{for all $k>0$},
\end{equation}
and, furthermore,
\begin{equation}
\label{summab-solger}
\jxi{u}\cdot\nabla u\in\elle1,\qquad \js{u}u\in \elle1,
\end{equation}
and
\begin{equation}
\label{condizioneVu}
\into \jxi{u}\cdot \nabla w +\into
\js{u}w=\into g(x,u)w, \quad\forall w\in V_u.
\end{equation}
\end{definition}
\begin{remark}\rm
We point out that, in \cite[Definition 1.1]{pelsqu}, a different notion of generalized
solution of problem \eqref{problema-bis} is introduced
when $u$ belongs to the Sobolev space $H^1_0(\Omega)$. However,
by \cite[Theorem 4.8]{pelsqu}, the two notions agree whenever $u\in H^1_0(\Omega)$.
Also, the variational formulation \eqref{condizioneVu} with test functions in $V_u$ is
conveniently related to the weak slope \cite{DM,CDM} of the functional associated with \eqref{problema-bis}, see
\cite[Proposition 4.5]{pelsqu} (see also Proposition \ref{sommab}).
\end{remark}
\noindent
The following proposition establishes a link between the generalized solutions
of the problem under the change of variable procedure.
\begin{proposition}\label{soluzdisrt}
Let $\varphi\in C^2(\R)$ be a diffeomorphism which satisfies
the properties of Definition~\ref{diffeoclass}.
Assume that $v$ is a generalized solution to
\begin{equation}
\label{probmoddd}
\begin{cases}
-\dvg(j^\sharp_\xi(x,v,\nabla v)) +j^\sharp_s(x,v,\nabla v)=g^\sharp(x,v) & \text{in $\Omega$}, \\
\quad v=0, & \text{on $\partial\Omega$}.
\end{cases}
\end{equation}
Then $u=\varphi(v)$ is a generalized solution to
\begin{equation}
\label{ooorigg}
\begin{cases}
-\dvg( j_\xi(x,u,\nabla u)) +j_s(x,u,\nabla u)=g(x,u), & \text{in $\Omega$}, \\
\quad u=0, & \text{on $\partial\Omega$}.
\end{cases}
\end{equation}
If in addition $v\in H^1_0(\Omega)\cap L^\infty(\Omega)$,
then $u\in H^1_0(\Omega)\cap L^\infty(\Omega)$ is a distributional solution to~\eqref{ooorigg}.
\end{proposition}
\begin{proof}
As proved in \cite{DMOP}, for a measurable function $u$ on $\Omega$, finite almost everywhere,
with $T_k(u)\in H^1_0(\Omega)$ for any $k>0$, there exists
a unique $\omega:\Omega\to\R^N$, measurable and such that
\begin{equation}
\label{gradientcharacteriz}
\nabla T_k(u)=\omega\chi_{\{|u|\leq k\}},\quad \text{almost everywhere in $\Omega$ and for all $k>0$.}
\end{equation}
Then, the gradient $\nabla u$ of $u$ is naturally defined by setting $\nabla u=\omega$.
Assume that $\varphi:\R\to\R$ is a diffeomorphism with $\varphi(0)=0$ and that
for a measurable function $v$ on $\Omega$ it holds $T_k(v)\in H^1_0(\Omega)$ for every $k>0$. Then, setting $u=\varphi(v)$,
it follows $T_k(u)\in H^1_0(\Omega)$ for every $k>0$. In fact, given $k>0$, there exists $h>0$ such that
$T_k(u) = (T_k\circ\varphi)\circ T_{h}(v)$. Since $T_k\circ\varphi:\R\to\R$ is a globally
Lipschitz continuous function which is zero at zero, it follows that
$T_k(u)\in H^1_0(\Omega)$ for all $k>0$.
Moreover, if $\nabla u$ and $\nabla v$ denote
the gradients of $u$ and $v$ respectively, in the sense pointed out above, we get the following chain rule
\begin{equation}
\label{chainrule}
\nabla u=\varphi'(v)\nabla v,\quad \text{almost everywhere in $\Omega$}.
\end{equation}
In fact, for all $k>0$, since $T_k(u),T_h(v)\in H^1_0(\Omega)$, from $T_k(u) = (T_k\circ\varphi)\circ T_{h}(v)$
we can write
$$
\nabla T_k(u)=(T_k\circ\varphi)'(T_{h}(v))\nabla T_h(v),
$$
for every $k>0$, namely, by \eqref{gradientcharacteriz},
\begin{equation}
\label{gradprimaff}
\nabla u\chi_{\{|\varphi(v)|\leq k\}}=(T_k\circ\varphi)'(T_{h}(v))\nabla v\chi_{\{|v|\leq h\}},
\quad \text{almost everywhere in $\Omega$}.
\end{equation}
Let now $x\in\Omega$ be an arbitrary point with $|v(x)|\leq h$. In turn, by construction, $|\varphi(v(x))|\leq k$,
and formula~\eqref{gradprimaff} yields directly
\begin{equation}
\label{gradprimaff-2}
\nabla u=(T_k\circ\varphi)'(v)\nabla v,
\quad \text{almost everywhere in $\{|v|\leq h\}$}.
\end{equation}
Formula \eqref{chainrule} then follows by taking into account that $(T_k\circ\varphi)'(v(x))=\varphi'(v(x))$
almost everywhere in $\{|v|\leq h\}$ and by the arbitrariness of $h>0$.
Let now $v$ be a generalized solution to~\eqref{probmoddd}, so that $T_k(v)\in H^1_0(\Omega)$ for all $k>0$.
As pointed out above, it follows that $T_k(u)\in H^1_0(\Omega)$ too, for every $k>0$
and the chain rule $\nabla u=\varphi'(v)\nabla v$ holds, almost everywhere in $\Omega$.
From the definition of generalized solution we learn that
\begin{equation}
\label{startingsum}
j^\sharp_{\xi}(x,v,\nabla v)\cdot\nabla v\in\elle1, \qquad
j^\sharp_s(x,v,\nabla v)v\in \elle1,
\end{equation}
as well as
\begin{equation}
\label{solut1}
\into j^\sharp_\xi(x,v,\nabla v)\cdot \nabla w+\into
j^\sharp_s(x,v,\nabla v) w=\into g^\sharp(x,v)\,w, \quad \forall
w\in V_v.
\end{equation}
Notice that, for any $w\in V_v$, the integrands in \eqref{solut1} are in $L^1(\Omega)$, by
Proposition \ref{rm1}, the definition of $V_v$ and $\nabla v=\nabla T_k(v)
\in L^2(\{w\neq 0\})$ for any $k>\|v\|_{L^\infty(\{w\neq 0\})}$. In light of
\eqref{chainrule} and \eqref{startingsum}, it follows that
$$
j_{\xi}(x,u,\nabla u)\cdot \nabla u =j^\sharp_{\xi}(x,v,\nabla
v)\cdot\nabla v\in\elle1.
$$
Moreover, a simple computation yields
$$
j^\sharp_s(x,v,\nabla
v)v=\Big[\frac{v\varphi'(v)}{\varphi(v)}\chi_{\{v\neq 0\}}\Big]j_s(x,u,\nabla u)u
+\Big[\frac{v\varphi''(v)}{\varphi'(v)}\Big]j_\xi(x,u,\nabla
u)\cdot\nabla u.
$$
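Indeed, differentiating $j^\sharp(x,s,\xi)=j(x,\varphi(s),\varphi'(s)\xi)$ with respect to $s$ gives
\[
j^\sharp_s(x,s,\xi)=\varphi'(s)\,j_s\big(x,\varphi(s),\varphi'(s)\xi\big)
+\varphi''(s)\,j_\xi\big(x,\varphi(s),\varphi'(s)\xi\big)\cdot\xi,
\]
and the previous identity follows by evaluating at $(s,\xi)=(v,\nabla v)$, multiplying by $v$ and using $u=\varphi(v)$ and $\nabla u=\varphi'(v)\nabla v$.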
Hence, in view of \eqref{two}, it follows that
$j_s(x,u,\nabla u)u\in\elle1$,
being $j_{\xi}(x,u,\nabla u)\cdot \nabla u\in\elle1$ and $
j^\sharp_s(x,v,\nabla v)v\in\elle1$. This yields the desired
summability conditions. For any $w \in V_v$, consider now $\hat
w=\varphi'(v)w$. We have $\hat w\in V_u$. In fact, since
$v\in L^\infty(\{w\neq 0\})$, we obtain
$\hat w\in H^1_0(\Omega)\cap L^\infty(\Omega)$ and
$u=\varphi(v)\in L^\infty(\{w\neq 0\})=L^\infty(\{\hat w\neq 0\})$,
since $\varphi'$ is positive by virtue of \eqref{one}. Of course, we have
$\hat w=\varphi'(T_k(v))w$, for all $k>\|v\|_{L^\infty(\{w\neq 0\})}$. Hence, recalling \eqref{gradientcharacteriz}, from
$$
\nabla (\varphi'(T_k(v))w)=w\varphi''(T_k(v))\nabla v\chi_{\{|v|\leq k\}}+\varphi'(T_k(v))\nabla w,\quad\text{for any $k>0$,}
$$
by choosing $k>\|v\|_{L^\infty(\{w\neq 0\})}$, we conclude that
$$
\nabla \hat w=w\varphi''(v)\nabla v+\varphi'(v)\nabla w,\quad \text{almost everywhere in $\Omega$.}
$$
Therefore, by easy computations, we get
\begin{align}
\label{primaeqqq}
j_\xi(x,u,\nabla u)\cdot\nabla \hat w &= j^\sharp_{\xi}(x,v,\nabla
v)\cdot\nabla w+
\frac{\varphi''(v)w}{\varphi'(v)} j_\xi(x,u,\nabla u)\cdot\nabla u, \\
\label{secondaeqqq} j_s(x,u,\nabla u)\hat w &=
j^\sharp_s(x,v,\nabla v) w
-\frac{\varphi''(v)w}{\varphi'(v)} j_\xi(x,u,\nabla
u)\cdot\nabla u,
\end{align}
yielding
\begin{equation*}
j_{\xi}(x,u,\nabla u)\cdot\nabla \hat w\in\elle1,\qquad
j_s(x,u,\nabla u)\hat w\in \elle1,
\end{equation*}
since $j^\sharp_{\xi}(x,v,\nabla v)\cdot\nabla w\in\elle1$,
$j^\sharp_s(x,v,\nabla v)w\in \elle1$ and
$$
\int_\Omega \big|\frac{\varphi''(v)w}{\varphi'(v)}\, j_\xi(x,u,\nabla u)\cdot\nabla u\big|
=\int_{\{w\neq 0\}} \big|\frac{\varphi''(v)w}{\varphi'(v)}\, j_\xi(x,u,\nabla u)\cdot\nabla u\big|
\leq C\int_\Omega \big|j_\xi(x,u,\nabla u)\cdot\nabla u\big|.
$$
By adding identities \eqref{primaeqqq}-\eqref{secondaeqqq} and
recalling the definition of $g^\sharp(x,v)$, we get
from~\eqref{solut1}
\begin{equation*}
\into j_\xi(x,u,\nabla u)\cdot \nabla \hat w+\into j_s(x,u,\nabla u)
\hat w=\into g(x,u)\hat w, \quad \text{$\hat w=\varphi'(v)w\in V_u$.}
\end{equation*}
Given any $z\in V_u$, we have $w=\frac{z}{\varphi'(v)}=
\frac{z}{\varphi'(T_k(v))}\in V_v$ for $k>\|v\|_{L^\infty(\{z\neq 0\})}$. In turn,
\begin{equation*}
\into j_\xi(x,u,\nabla u)\cdot \nabla z+\into
j_s(x,u,\nabla u) z=\into g(x,u)z,\qquad\text{for every $z \in V_u$,}
\end{equation*}
yielding the assertion. Finally, if $v$ is a bounded generalized
solution to~\eqref{probmoddd}, then $u=\varphi(v)$ belongs to $H^1_0(\Omega)\cap L^\infty(\Omega)$
and is a distributional solution to~\eqref{ooorigg}.
\end{proof}
\begin{remark}\rm
The gradient $\nabla u=\omega$ does not agree, in general, with the one in the sense of distributions, since
it could be either $u\not\in L^1_{{\rm loc}}(\Omega)$ or $\omega\not\in L^1_{{\rm loc}}(\Omega,\R^N)$.
If $\omega\in L^1_{{\rm loc}}(\Omega,\R^N)$, then $u\in W^{1,1}_{{\rm loc}}(\Omega)$
and $\omega$ agrees with the distributional gradient \cite[Remark 2.10]{DMOP}.
\end{remark}
\noindent
Under natural regularity assumptions, a generalized solution is, actually, distributional.
\begin{proposition}
\label{soldistr}
Assume that $u$ is a generalized solution to problem~\eqref{problema-bis} and that, in addition
\begin{equation}
\label{summab-extra}
\jxi{u}\in L^1_{{\rm loc}}(\Omega;\R^N),\qquad \js{u}\in L^1_{{\rm loc}}(\Omega),\qquad g(x,u)\in L^1_{{\rm loc}}(\Omega).
\end{equation}
Then $u$ solves problem~\eqref{problema-bis} in the sense of distributions.
\end{proposition}
\begin{proof}
Let $H:\R\to\R$ be a smooth cut-off function such that
$0\leq H\leq 1$, $H(s)=1$ for $|s|\leq 1$ and $H(s)=0$ for $|s|\geq 2$.
Given $k>0$ and $\psi\in C^\infty_c(\Omega)$, consider
in formula \eqref{condizioneVu} the admissible test functions $w=w_k=H(T_{2k+1}(u)/k)\psi\in V_u$.
Whence, for every $k>0$, it holds that
\begin{align}
\label{quasi-distr}
&\into \jxi{u}\cdot H(T_{2k+1}(u)/k)\nabla \psi+\frac{1}{k}\into \jxi{u}\cdot H'(T_{2k+1}(u)/k) \nabla T_{2k+1}(u)\,\psi \notag \\
&+\into \js{u}H(T_{2k+1}(u)/k)\psi=\into g(x,u)H(T_{2k+1}(u)/k)\psi.
\end{align}
Taking into account that $\jxi{u}\cdot \nabla u \in L^1(\Omega)$ and
by \eqref{gradientcharacteriz}, for all $k\geq 1$ we have
$$
\Big|\frac{1}{k}\,\jxi{u}\cdot H'(T_{2k+1}(u)/k)\nabla T_{2k+1}(u)\,\psi\Big|\leq C|\jxi{u}\cdot \nabla u| \in L^1(\Omega),
$$
yielding, by the Dominated Convergence Theorem,
$$
\lim_k \frac{1}{k}\into \jxi{u}\cdot H'(T_{2k+1}(u)/k) \nabla T_{2k+1}(u)\,\psi=0.
$$
On account of assumptions \eqref{summab-extra}, the assertion follows by letting $k\to\infty$ in \eqref{quasi-distr},
again in light of the Dominated Convergence Theorem.
\end{proof}
\subsection{Further growth conditions}
\label{growthsect}
The next proposition is useful for the study of the mountain pass geometry of the functional
associated with problem \eqref{equation-intro}.
\begin{proposition}
\label{menoinf}
Let $\varphi\in C^2(\R)$ be a diffeomorphism satisfying
the properties of Definition~\ref{diffeoclass} and such that
\begin{equation}
\label{diffeoasymptoticbb}
0<\lim_{s\to+\infty}\frac{\varphi(s)}{s^{\frac{1}{1-\beta}}}<+\infty,
\end{equation}
and let $\alpha^\sharp: \R^+\to\R^+$ be the function
introduced in Proposition~\ref{rm1}. Let $\nu>2(1-\beta)$,
$k_1\in L^{\infty}(\Omega)$ with $k_1>0$, $k_2\in \elle1$, $k_3\in \elle{2N/(N+2)}$. Assume that
\begin{equation}
\label{cdpdmm}
\lim_{s\to\infty}\frac{\alpha(|s|)}{|s|^{\nu-2}}=0
\quad
\text{and}
\quad
G(x,s)\geq k_1(x)|s|^{\nu}-k_2(x)-k_3(x)|s|^{1-\beta},
\end{equation}
for a.e.~$x\in\Omega$ and all $s\in\R$. Then there exists $\nu^\sharp>2$ such that
$$
\lim_{s\to\infty}\frac{\alpha^\sharp(|s|)}{|s|^{\nu^\sharp-2}}=0
\quad \text{and} \quad G^\sharp (x,s)\geq k^\sharp_1(x)|s|^{
\nu^\sharp}-k_2^\sharp(x)- k_3^\sharp(x)|s|,
$$
for a.e.~$x\in\Omega$ and all $s\in\R$, for some
$k_1^\sharp\in L^{\infty}(\Omega)$, $k_1^\sharp>0$,
$k_2^\sharp\in L^1(\Omega)$ and
$k^\sharp_3\in \elle{\frac{2N}{N+2}}$.
\end{proposition}
\begin{proof}
By assumption \eqref{diffeoasymptoticbb} and \eqref{two}, for $\nu^\sharp=\frac{\nu}{1-\beta}$, we have
\begin{equation*}
\lim_{s\to+\infty}\frac{\alpha^\sharp(s)}{s^{\nu^\sharp-2}}
=
\lim_{s\to\infty}\frac{\alpha(\varphi(s))}{\varphi(s)^{\nu-2}}\cdot
\lim_{s\to\infty}\frac{\varphi(s)^{\nu-2}\varphi'(s)^2}{s^{\nu^\sharp-2}}
=0.
\end{equation*}
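For the reader's convenience, let us verify that the second limit above is finite, using the normalization $\varphi(s)/s^{\frac{1}{1-\beta}}\to c$ for some $c>0$ from \eqref{diffeoasymptoticbb} and the fact that, by \eqref{two}, $\varphi'(s)s/\varphi(s)\to(1-\beta)^{-1}$ as $s\to+\infty$. Indeed,
\begin{equation*}
\frac{\varphi(s)^{\nu-2}\varphi'(s)^2}{s^{\nu^\sharp-2}}
=\Big(\frac{\varphi'(s)s}{\varphi(s)}\Big)^{2}\,\frac{\varphi(s)^{\nu}}{s^{\nu^\sharp}}
\to \frac{c^{\nu}}{(1-\beta)^{2}},
\end{equation*}
since $\nu^\sharp=\frac{\nu}{1-\beta}$.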
Finally, if $G(x,s)\geq k_1(x)|s|^{\nu}-k_2(x)-k_3(x)|s|^{1-\beta}$,
condition \eqref{diffeoasymptoticbb} yields
\begin{equation*}
G^\sharp (x,s)\geq k_1(x)|\varphi(s)|^{\nu}-k_2(x)-k_3(x)|\varphi(s)|^{1-\beta}
\geq k^\sharp_1(x)|s|^{\nu^\sharp}-k^\sharp_2(x)- k^\sharp_3(x)|s|,
\end{equation*}
for a.e.~$x\in\Omega$ and all $s\in\R$, for suitable
$k_j^\sharp:\Omega\to\R$, $j=1,2,3$, with the stated summability.
\end{proof}
\noindent
Now, we see how the nonlinearity $g$ gets modified under the action of a diffeomorphism.
\begin{proposition}
\label{newgrow}
Let $\varphi\in C^2(\R)$ be a diffeomorphism which satisfies
the properties of Definition~\ref{diffeoclass} with $0\leq\beta<2/N$, $N\geq 3$ and such that
\eqref{diffeoasymptoticbb} holds. Let $g:\Omega\times\R\to\R$ satisfy
\begin{equation}
\label{growthassty}
|g(x,s)|\leq a(x)+b|s|^{p-1}\qquad\text{for a.e. $x\in\Omega$ and
all $s\in\R$},
\end{equation}
for some $a\in L^{q+\beta q(p-1)^{-1}}(\Omega)$, $q\geq \frac{2N}{N+2}$, $b\geq 0$ with $2<p\leq 2^*(1-\beta)$.
Then, we have
\begin{equation*}
|g^\sharp(x,s)|\leq
a^\sharp(x)+b|s|^{p^\sharp-1}\quad\text{for a.e.
$x\in\Omega$ and all $s\in\R$},
\end{equation*}
for some $2<p^\sharp \leq 2^*$ and $a^\sharp\in L^{q}(\Omega)$.
\end{proposition}
\begin{proof}
Taking into account~\eqref{diffeoasymptoticbb} and \eqref{two}, for a.e.~$x\in\Omega$ and all $s\in\R$ we have
\begin{equation*}
|g^\sharp(x,s)| \leq a(x)\varphi'(s)+b|\varphi(s)|^{p-1}\varphi'(s)
\leq Ca(x)+C+Ca(x)^{\frac{p+\beta-1}{p-1}}+C|s|^{\frac{p}{1-\beta}-1},
\end{equation*}
yielding the assertion with $p^\sharp=\frac{p}{1-\beta}$ and $a^\sharp=Ca+C+Ca^{\frac{p+\beta-1}{p-1}}$.
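Indeed, $p\leq 2^*(1-\beta)$ directly gives $p^\sharp=\frac{p}{1-\beta}\leq 2^*$ (and $p^\sharp>2$, since $p>2$ and $0<1-\beta\leq 1$), while the stated summability of $a^\sharp$ follows from
\begin{equation*}
a^{\frac{p+\beta-1}{p-1}}\in L^{q}(\Omega)
\quad\Longleftrightarrow\quad
a\in L^{q\frac{p+\beta-1}{p-1}}(\Omega)=L^{q+\beta q(p-1)^{-1}}(\Omega).
\end{equation*}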
\end{proof}
\subsection{Sign conditions}
\label{signsect}
The classical sign condition \eqref{classicsign} is {\em not} invariant under diffeomorphism
as Proposition \ref{invspec} shows. The next proposition introduces a different kind of sign condition
that remains invariant under the effect of $\varphi$.
\begin{proposition}
\label{invar1}
Let $\varphi\in C^2(\R)$ be a diffeomorphism which satisfies
the properties of Definition~\ref{diffeoclass}.
Assume that there exist $\eps\in (0,1-\beta]$ and $R\geq 0$ such that
\begin{equation}
\label{generalsignn}
(1-\eps) j_\xi(x,s,\xi)\cdot\xi+j_s(x,s,\xi)s\geq 0,
\end{equation}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with
$|s|\geq R$. \vskip4pt \noindent Then there exist $\eps^\sharp\in
(0,1]$ and $R^\sharp>0$ such that
$$
(1-\eps^\sharp) j
^\sharp_\xi(x,s,\xi)\cdot\xi+j^\sharp_s(x,s,\xi)s\geq 0,
$$
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with
$|s|\geq R^\sharp$.
\end{proposition}
\begin{proof}
Let us write $\eps=\eps_0(1-\beta)$, for some $\eps_0\in (0,1]$.
By taking into account~\eqref{two}, there exist $0<\delta<\eps_0 (1+\eps_0(1-\beta))^{-1}$
and $R^\sharp>0$ sufficiently large such that
$$
1+\frac{\varphi''(s)s}{\varphi'(s)}\geq \frac{\varphi'(s)s}{\varphi(s)}-\delta,
\qquad
\frac{\varphi'(s)s}{\varphi(s)}\geq \frac{1}{1-\beta}-\delta,
$$
and $|\varphi(s)|\geq R$ for all $s\in\R$ such that $|s|\geq R^\sharp $. Then, in turn, we get
\begin{align*}
& j^\sharp_\xi(x,s,\xi)\cdot\xi+j^\sharp_s(x,s,\xi)s \\
&=\Big(1+\frac{\varphi''(s)s}{\varphi'(s)}\Big)j_\xi(x,\varphi(s),\varphi'(s)\xi)\cdot \varphi'(s)\xi
+\frac{\varphi'(s)s}{\varphi(s)} j_s(x,\varphi(s),\varphi'(s)\xi)\varphi(s) \\
&\geq \frac{\varphi'(s)s}{\varphi(s)}\big(j_\xi(x,\varphi(s),\varphi'(s)\xi)\cdot \varphi'(s)\xi
+j_s(x,\varphi(s),\varphi'(s)\xi)\varphi(s)\big) \\
\noalign{\vskip4pt}
&\quad -\delta j_\xi(x,\varphi(s),\varphi'(s)\xi)\cdot \varphi'(s)\xi,
\end{align*}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with
$|s|\geq R^\sharp$. Setting
$$
\eps^\sharp =\eps_0-\delta(1+\eps_0(1-\beta))\in (0,1],
$$
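Note that the choice of $\delta$ guarantees $\eps^\sharp>0$ and, recalling that $\varphi'(s)s/\varphi(s)\geq \frac{1}{1-\beta}-\delta$ and $\eps=\eps_0(1-\beta)$, for all $|s|\geq R^\sharp$ we have
\begin{equation*}
\eps\frac{\varphi'(s)s}{\varphi(s)}-\delta
\geq \eps_0(1-\beta)\Big(\frac{1}{1-\beta}-\delta\Big)-\delta
=\eps_0-\delta(1+\eps_0(1-\beta))=\eps^\sharp.
\end{equation*}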
it follows by assumption that
\begin{equation*}
j^\sharp_\xi(x,s,\xi)\cdot\xi+j^\sharp_s(x,s,\xi)s
\geq \Big(\eps\frac{\varphi'(s)s}{\varphi(s)}-\delta\Big) j_\xi(x,\varphi(s),\varphi'(s)\xi)\cdot \varphi'(s)\xi
\geq \eps^\sharp j^\sharp_\xi(x,s,\xi)\cdot\xi ,
\end{equation*}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with
$|s|\geq R^\sharp$. This concludes the proof.
\end{proof}
\begin{remark}\rm
In the literature on quasi-linear problems like \eqref{equation-intro}, the (say, positive) sign condition $j_s(x,s,\xi)s\geq 0$
is a classical assumption (cf.\ \cite{Ab,candeg} and references therein), helping to achieve both existence and summability of
the solutions. On the other hand, in \cite{pellacci-nos}, when $j(x,s,\xi)=A(x,s)\xi\cdot\xi$,
the existence of solutions is obtained either with the opposite sign
condition or even without any sign hypothesis at all. To handle this situation, alternative conditions such as \cite[Assumption 1.5]{pellacci-nos}
are assumed, which imply \eqref{generalsignn} (at least for $s\geq R$) for suitable $\eps$, as can be easily verified.
\end{remark}
Under the generalized sign condition \eqref{generalsignn}, we get a summability result
which improves \cite[Lemma 4.6]{pelsqu}.
This also shows that condition \eqref{summab-solger} in Definition \ref{defsol-bis} is natural.
For a function $f$, the notation $|df|(u)$ stands for the weak slope of $f$ at $u$ (cf.~e.g.\ \cite{CDM,DM}).
\begin{proposition}\label{sommab}
Assume that \eqref{originalgrowths2} holds and that
there exist $\eps\in (0,1)$ and $R\geq 0$ with
\begin{equation}
\label{gensign}
(1-\eps) j_\xi(x,s,\xi)\cdot\xi+j_s(x,s,\xi)s\geq 0,
\end{equation}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with $|s|\geq R$. Let us set
$$
I(u)=\int_\Omega j(x,u,\nabla u),\quad u\in H^1_0(\Omega).
$$
Then, for every $u\in {\rm dom}(I)$ with $|dI|(u)<+\infty$, we have
\begin{align}
\label{stimaslope} \int_\Omega j_\xi(x,u,\nabla u)\cdot\nabla u +
j_s(x,u,\nabla u)u \leq |dI|(u)\|u\|_{1,2}.
\end{align}
In particular, there holds
$$
j_\xi(x,u,\nabla u)\cdot \nabla u\in L^1(\Omega), \qquad
j_s(x,u,\nabla u)u\in L^1(\Omega),
$$
and there exists $\Psi\in H^{-1}(\Omega)$ with $\|\Psi\|_{H^{-1}}\leq |dI|(u)$ such that
\begin{equation*}
\into \jxi{u}\cdot \nabla w +\into
\js{u}w=\langle \Psi,w\rangle, \quad\forall w\in V_u.
\end{equation*}
\end{proposition}
\begin{proof}
Let $b\in\R$ be such that $b>I(u)$. Notice first that if $u$ is such that
$$
\int_\Omega j_\xi(x,u,\nabla u)\cdot \nabla u+j_s(x,u,\nabla u)u\leq 0,
$$
then the conclusion holds. Otherwise, let $\sigma$ be an arbitrary positive number such that
$$
\int_\Omega\,j_\xi(x,u,\nabla u)\cdot\nabla u\,+ j_s(x,u,\nabla u)u>\sigma\|u\|_{1,2}.
$$
Having fixed $\eta>0$, we set $\alpha^{-1}=\|u\|_{1,2}(1+\eta)$.
Let us prove that there exists $\delta>0$ such that, for all $v\in B(u,\delta)$ and for any
$\tau \in L^{\infty}(\Omega)$ with $\|\tau\|_{\infty}<\delta$, it follows that
\begin{equation}
\label{qdist1}
\int_\Omega [j_s(x,w,(1-\alpha \tau)\nabla v)v+ j_\xi(x,w,(1-\alpha \tau)\nabla v)\cdot\nabla v]>\sigma \|u\|_{1,2},
\end{equation}
where $w=(1-\alpha \tau)v$. In fact, assume by contradiction that this is not the case. Then, we
find a sequence $(v_n) \subset \hsob$ with $\|v_n-u\|_{1,2}\to 0$ as $n\to\infty$ and a sequence
$(\tau_n) \subset L^{\infty}(\Omega)$ with $\|\tau_n\|_{\infty}\to 0$ as $n\to\infty$
such that, denoting $w_n=(1-\alpha\tau_n)v_n$ for all $n\geq 1$, it holds
\begin{equation}
\label{contradarg}
\int_\Omega [j_s(x,w_n,(1-\alpha \tau_n)\nabla v_n)v_n+ j_\xi(x,w_n,(1-\alpha \tau_n)\nabla v_n)\cdot\nabla v_n]\leq\sigma \|u\|_{1,2}.
\end{equation}
Since $v_n \to u$ in $\hsob$ and $\tau_n\to 0$ in $L^{\infty}(\Omega)$ as $n\to\infty$, a.e.~in $\Omega$
we have that
$$
j_s(x,w_n,(1-\alpha \tau_n)\nabla v_n)v_n+ j_\xi(x,w_n,(1-\alpha \tau_n)\nabla v_n)\cdot \nabla v_n
\to
j_s(x,u,\nabla u)u+j_\xi(x,u,\nabla u)\cdot \nabla u.
$$
Moreover there exists a positive constant $C(R)$ such that, for every $n\geq 1$,
\begin{equation}
\label{4fatouprep}
j_s(x,w_n,(1-\alpha \tau_n)\nabla v_n)v_n+ j_\xi(x,w_n,(1-\alpha \tau_n)\nabla v_n)\cdot\nabla v_n\geq -C(R)|\nabla v_n|^2.
\end{equation}
In fact, if $|w_n(x)|\geq R$, from condition~\eqref{gensign} the left hand side is nonnegative.
If instead $|w_n(x)|\leq R$, we can assume $|v_n(x)|\leq 2R$, and by \eqref{originalgrowths2} we get
\begin{align*}
|j_s(x,w_n,&(1-\alpha \tau_n)\nabla v_n)v_n+ j_\xi(x,w_n,(1-\alpha \tau_n)\nabla v_n)\cdot\nabla v_n| \\
& \leq \gamma(|w_n|)|v_n||\nabla v_n|^2+\mu(|w_n|)|\nabla v_n|^2\leq (2\gamma(R)R+\mu(R))|\nabla v_n|^2.
\end{align*}
Then, we are allowed to apply Fatou's Lemma, yielding
\begin{gather*}
\liminf_{n\to \infty}
\int_\Omega [j_s(x,w_n,(1-\alpha \tau_n)\nabla v_n)v_n+ j_\xi(x,w_n,(1-\alpha \tau_n)\nabla v_n)\cdot \nabla v_n]
\\
\geq\int_\Omega j_s(x,u,\nabla u)u \,+j_\xi(x,u,\nabla
u)\cdot\nabla u >\sigma \|u\|_{1,2},
\end{gather*}
which immediately yields a contradiction with \eqref{contradarg}. Hence~\eqref{qdist1} holds,
for some $\delta>0$. Observe that,
since $j(x,\cdot,\cdot)$ is of class $C^1$ for a.e.\ $x \in \Omega$, for any $t \in [0,1]$ and
every $v\in {\rm dom}(I)$ there exists $0\leq \tau(x,t)\leq t$ such that
\begin{align}
\label{identlagrange}
&j(x,(1-\alpha t)v,(1-\alpha t)\nabla v)- j(x,v,\nabla v)=\\
&-\alpha t [j_s(x,(1-\alpha \tau)v,(1-\alpha \tau)\nabla v)v+ j_\xi(x,(1-\alpha \tau)v,(1-\alpha \tau)\nabla v)\cdot \nabla v].
\notag
\end{align}
As for the inequality \eqref{4fatouprep}, for some $C(R)>0$, for $t$ small enough it holds
\begin{equation*}
j_s(x,(1-\alpha \tau)v,(1-\alpha \tau)\nabla v)v+ j_\xi(x,(1-\alpha \tau)v,(1-\alpha \tau)\nabla v)\cdot\nabla v\geq -C(R)|\nabla v|^2.
\end{equation*}
Whence, if $v\in {\rm dom}(I)$, by \eqref{identlagrange} it follows that $(1-\alpha t)v\in {\rm dom}(I)$ for all $t\in [0,\delta]$ and
\begin{equation}
\label{summbbbb}
j_s(x,(1-\alpha \tau)v,(1-\alpha \tau)\nabla v)v+ j_\xi(x,(1-\alpha \tau)v,(1-\alpha \tau)\nabla v)\cdot\nabla v\in L^1(\Omega).
\end{equation}
Up to reducing $\delta$, we may assume that $\delta<\eta
\|u\|_{1,2}$. Then, for all $v \in B(u,\delta)$, we have $\|v\|_{1,2} \leq (1+\eta)\|u\|_{1,2}=\alpha^{-1}$.
Consider the continuous map ${\mathcal H}:B(u,\delta)\cap I^b\times [0,\delta]\to \hsob$ defined as
${\mathcal H}(v,t)=(1-\alpha t)v$, where $I^b=\{v\in H^1_0(\Omega):I(v)\leq b\}$.
From \eqref{qdist1} (applied, for each $t\in [0,\delta]$, with the function
$\tau(\cdot,t)\in L^\infty(\Omega,[0,\delta])$ for which identity~\eqref{identlagrange} holds)
and identity \eqref{identlagrange}, for every $t\in [0,\delta]$ and $v\in B(u,\delta)\cap I^b$ we have
\begin{equation*}
\|{\mathcal H}(v,t)-v\|_{1,2}\leq t,
\qquad
I({\mathcal H}(v,t))\leq I(v) - \frac{ \sigma }{1+ \eta}t.
\end{equation*}
Then, by means of \cite[Proposition 2.5]{DM} and exploiting the arbitrariness of $\eta$, we get
$|dI|(u)\geq\sigma$. In turn, \eqref{stimaslope} follows from the
arbitrariness of $\sigma$. Concerning the second part of the statement, since $|dI|(u)<+\infty$,
from~\eqref{gensign} and~\eqref{stimaslope},
\begin{equation}
\label{summbconclus}
j_\xi(x,u,\nabla u)\cdot\nabla u +
j_s(x,u,\nabla u)u\in L^1(\Omega).
\end{equation}
In turn, using again~\eqref{gensign}, it follows $j_\xi(x,u,\nabla u)\cdot\nabla u\in L^1(\Omega)$, since
\begin{align*}
\eps j_\xi(x,u,\nabla u)\cdot\nabla u &\leq
\eps\mu(R)|\nabla u|^2+\eps j_\xi(x,u,\nabla u)\cdot\nabla u\chi_{\{|u|\geq R\}} \\
& \leq \eps\mu(R)|\nabla u|^2+|j_s(x,u,\nabla u)u+j_\xi(x,u,\nabla u)\cdot\nabla u|.
\end{align*}
Then, by exploiting \eqref{summbconclus} again, $j_s(x,u,\nabla u)u\in L^1(\Omega)$.
The final assertion does not rely upon any sign condition and follows directly
from \cite[Proposition 4.5]{pelsqu}. This concludes the proof.
\end{proof}
In the next result we show that it is possible to enlarge the
class of admissible test functions. In order to do this, suppose
we have a function $u\in\hsob$ such that
\begin{equation}\label{eqb}
\int_{\Omega}j_\xi(x,u,\nabla u)\cdot \nabla z+
\into j_s(x,u,\nabla u) z=\langle w,z\rangle, \qquad \forall z\in
V_u,
\end{equation}
for $w\in \dhsob$. Under suitable assumptions, if \eqref{gensign} holds true,
we can use $\zeta u\in\hsob$ with $\zeta\in L^\infty(\Omega)$
as an admissible test function in~\eqref{eqb}, generalizing \cite[Theorem 4.8]{pelsqu}.
\begin{proposition}
\label{BB}
Assume that \eqref{originalgrowths2} and \eqref{gensign} hold.
Let $w\in H^{-1}(\Omega)$, and let $u\in
H^1_0(\Omega)$ be such that~\eqref{eqb} is satisfied. Moreover,
suppose that $j_\xi(x,u,\nabla u)\cdot\nabla u\in L^1(\Omega)$
and that there exist $v\in H^1_0(\Omega)$ and $\eta\in
L^1(\Omega)$ such that
\begin{equation}\label{control2}
j_s(x,u,\nabla u)v\geq\eta \qquad \text{and} \qquad
j_\xi(x,u,\nabla u)\cdot\nabla v\geq\eta.
\end{equation}
Then $j_s(x,u,\nabla u)v\in L^1(\Omega)$, $j_\xi(x,u,\nabla
u)\cdot\nabla v\in L^1(\Omega)$ and
\begin{equation}\label{eqb2}
\int_{\Omega} j_\xi(x,u,\nabla u)\cdot\nabla v +\into
j_s(x,u,\nabla u)v =\langle w,v\rangle.
\end{equation}
In particular, if $\zeta\in L^\infty(\Omega)$, $\zeta\geq 0$,
$\zeta u\in H^1_0(\Omega)$ and $j_\xi(x,u,\nabla u)\cdot\nabla (\zeta u)\in L^1(\Omega)$
then it follows that $j_s(x,u,\nabla u)\zeta u\in L^1(\Omega)$ and
\begin{equation}
\label{testu}
\int_{\Omega} j_\xi(x,u,\nabla u)\cdot\nabla (\zeta u) +\into
j_s(x,u,\nabla u)\zeta u =\langle w,\zeta u\rangle.
\end{equation}
\end{proposition}
\begin{proof}
The first part of the statement follows by means of \cite[Theorem 4.8]{pelsqu}. By assumption \eqref{gensign}
and since $\zeta$ is nonnegative and bounded, we have
\begin{align*}
j_s(x,u,\nabla u)\zeta u &=\zeta j_s(x,u,\nabla u)u\chi_{\{|u|\leq R\}}+\zeta j_s(x,u,\nabla u)u\chi_{\{|u|\geq R\}} \\
& \geq -R\gamma(R)\|\zeta\|_{L^\infty(\Omega)}|\nabla u|^2-(1-\eps) \zeta j_\xi(x,u,\nabla u)\cdot\nabla u \in L^1(\Omega).
\end{align*}
The last assertion of the statement then follows from the first one.
\end{proof}
\subsection{AR type conditions}
\label{ambrrabsect}
Some Ambrosetti-Rabinowitz type conditions, typically used in order
to guarantee the boundedness of Palais-Smale sequences, remain invariant.
\begin{proposition}
\label{invar2}
Let $\varphi\in C^2(\R)$ be a diffeomorphism which satisfies
the properties of Definition~\ref{diffeoclass}.
Assume that there exist $\delta>0$, $\nu>2(1-\beta)$ and $R\geq 0$ such that
$$
\nu j(x,s,\xi)-(1+\delta)j_\xi(x,s,\xi)\cdot\xi-j_s(x,s,\xi)s-\nu G(x,s)+g(x,s)s\geq 0,
$$
and $G(x,s)\geq 0$ for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with $|s|\geq R$.
\vskip4pt
\noindent
Then there exist $\delta^\sharp>0$, $\nu^\sharp>2$ and $R^\sharp>0$ such that
$$
\nu^\sharp j^\sharp(x,s,\xi)-(1+\delta^\sharp )
j^\sharp_\xi(x,s,\xi)\cdot\xi-j^\sharp_s(x,s,\xi)s- \nu^\sharp
G^\sharp(x,s)+g^\sharp (x,s)s \geq 0,
$$
and $G^\sharp(x,s)\geq 0$ for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with $|s|\geq R^\sharp$.
\end{proposition}
\begin{proof}
A direct calculation yields
\begin{align*}
& \frac{\nu}{1-\beta}
j^\sharp(x,s,\xi)-j^\sharp_\xi(x,s,\xi)\cdot\xi
-j^\sharp_s(x,s,\xi)s-\frac{\nu}{1-\beta} G^\sharp(x,s)+ g^\sharp(x,s)s \\
&=\frac{\nu}{1-\beta} j(x,\varphi(s),\varphi'(s)\xi)-
\Big(1+\frac{\varphi''(s)s}{\varphi'(s)}\Big)j_\xi(x,\varphi(s),\varphi'(s)\xi)\cdot \varphi'(s)\xi \\
& -\frac{\varphi'(s)s}{\varphi(s)} j_s(x,\varphi(s),\varphi'(s)\xi)\varphi(s)-
\frac{\nu}{1-\beta} G(x,\varphi(s))+\frac{\varphi'(s)s}{\varphi(s)} g(x,\varphi(s))\varphi(s) \\
&= \frac{\varphi'(s)s}{\varphi(s)}\Big( \frac{\varphi(s)}{\varphi'(s)s}\frac{\nu}{1-\beta} j(x,\varphi(s),\varphi'(s)\xi) \\
\noalign{\vskip3pt}
& -\frac{\varphi(s)}{\varphi'(s)s}\Big(1+\frac{\varphi''(s)s}{\varphi'(s)}\Big)j_\xi(x,\varphi(s),\varphi'(s)\xi)\cdot \varphi'(s)\xi \\
& -j_s(x,\varphi(s),\varphi'(s)\xi)\varphi(s)-
\frac{\nu}{1-\beta}\frac{\varphi(s)}{\varphi'(s)s} G(x,\varphi(s))+g(x,\varphi(s))\varphi(s) \Big),
\end{align*}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ such that $s\neq 0$.
We recall that $j(x,\tau,\zeta)\geq 0$, $j_\xi(x,\tau,\zeta)\cdot \zeta\geq 0$ and that the map $s\mapsto s\varphi(s)$ is nonnegative.
Therefore, on account of condition \eqref{two}, for all $\eta>0$ small enough there exists
$R^\sharp>0$ large enough that $|\varphi(s)|\geq R$ for all $s\in\R$ with $|s|\geq R^\sharp$ and
\begin{align*}
& \frac{\nu}{1-\beta}
j^\sharp(x,s,\xi)-j^\sharp_\xi(x,s,\xi)\cdot\xi
-j^\sharp_s(x,s,\xi)s-\frac{\nu}{1-\beta} G^\sharp (x,s)+g^\sharp (x,s)s \\
&\geq \frac{\varphi'(s)s}{\varphi(s)} \Big(\nu j(x,\varphi(s),\varphi'(s)\xi)-\eta(1-\beta)j(x,\varphi(s),\varphi'(s)\xi) \\
\noalign{\vskip2.5pt}
&- j_\xi(x,\varphi(s),\varphi'(s)\xi)\cdot \varphi'(s)\xi
-\eta(1-\beta) j_\xi(x,\varphi(s),\varphi'(s)\xi)\cdot \varphi'(s)\xi \\
\noalign{\vskip2.5pt}
& -j_s(x,\varphi(s),\varphi'(s)\xi)\varphi(s)-
\nu G(x,\varphi(s))-\eta(1-\beta) G(x,\varphi(s)) +g(x,\varphi(s))\varphi(s)\Big)\\
\noalign{\vskip2.5pt}
& \geq ((1-\beta)^{-1}-\eta)(\delta-\eta(1-\beta)) j^\sharp_\xi(x,s,\xi)\cdot \xi \\
\noalign{\vskip2.5pt} &
-\frac{\varphi'(s)s}{\varphi(s)}(1-\beta)\eta j^\sharp(x,s,\xi)
-\frac{\varphi'(s)s}{\varphi(s)}(1-\beta)\eta G^\sharp (x,s) \\
\noalign{\vskip2pt} &\geq
((1-\beta)^{-1}-\eta)(\delta-\eta(1-\beta))
j^\sharp_\xi(x,s,\xi)\cdot \xi -2\eta j^\sharp(x,s,\xi)
-2\eta G^\sharp(x,s),
\end{align*}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ such that $|s|\geq R^\sharp$.
Finally, since by convexity of $j^\sharp$ and $j^\sharp(x,s,0)=0$ we have
$j^\sharp_\xi(x,s,\xi)\cdot\xi\geq j^\sharp(x,s,\xi)$, we get
\begin{gather*}
\frac{\nu}{1-\beta}
j^\sharp(x,s,\xi)-j^\sharp_\xi(x,s,\xi)\cdot\xi
-j^\sharp_s(x,s,\xi)s-\frac{\nu}{1-\beta} G^\sharp(x,s)+g^\sharp(x,s)s \\
\geq \delta^\sharp j^\sharp_\xi(x,s,\xi)\cdot \xi+2\eta
j^\sharp(x,s,\xi) -2\eta G^\sharp(x,s).
\end{gather*}
In turn, choosing $\eta$ small enough and setting
$$
\delta^\sharp=(1-\beta)^{-1}\delta-\eta(5+\delta)+\eta^2(1-\beta)>0,\qquad
\nu^\sharp =\nu(1-\beta)^{-1}-2\eta>2,
$$
the assertion follows.
\end{proof}
\begin{corollary}
\label{invar2-cor}
Let $\varphi\in C^2(\R)$ be a diffeomorphism satisfying
the properties of Definition \ref{diffeoclass}.
Assume that $\xi\mapsto j(x,s,\xi)$ is homogeneous of degree two and that
there are $\nu>2$ and $R>0$ with
\begin{equation}
\label{separatasss}
j_s(x,s,\xi)s\leq 0,\qquad 0\leq \nu G(x,s)\leq g(x,s)s,
\end{equation}
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with $|s|\geq R$. Then
$$
\nu^\sharp j^\sharp(x,s,\xi)-(1+
\delta^\sharp)j^\sharp_\xi(x,s,\xi)\cdot\xi-j^\sharp_s(x,s,\xi)s-
\nu^\sharp G^\sharp(x,s)+g^\sharp(x,s)s \geq 0,
$$
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with
$|s|\geq R^\sharp$, for some $\delta^\sharp>0$, $R^\sharp>0$ and $\nu^\sharp>2$.
\end{corollary}
\begin{proof}
Since $\xi\mapsto j(x,s,\xi)$ is $2$-homogeneous and $\nu>2$, there exists $\delta>0$ with
$$
\nu j(x,s,\xi)-(1+\delta)j_\xi(x,s,\xi)\cdot\xi=(\nu-2-2\delta)j(x,s,\xi)\geq 0,
$$
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$. Hence, by assumptions~\eqref{separatasss}, we get
$$
\nu j(x,s,\xi)-(1+\delta)j_\xi(x,s,\xi)\cdot\xi-j_s(x,s,\xi)s-\nu G(x,s)+g(x,s)s\geq 0,
$$
for a.e.~$x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ with $|s|\geq R$.
Proposition~\ref{invar2} yields the assertion.
\end{proof}
\section{Multiplicity of solutions}
\label{existence-sect}
\noindent
As a by-product of the previous results, we obtain the following existence result.
Compared with the results of \cite{AbO}, here we obtain infinitely many solutions, not necessarily
bounded.
\begin{theorem}
\label{existthmm}
Assume that $\varphi\in C^2(\R)$ satisfies the properties of Definition~\ref{diffeoclass},
\eqref{diffeoasymptoticbb} and let $N\geq 3$. Moreover,
let $j:\Omega\times\R\times\R^N\to\R$ satisfy \eqref{originalgrowths1}-\eqref{originalgrowths2},
$\xi\mapsto j(x,s,\xi)$ be strictly convex, and
\begin{align}
\label{parita}
j(x,-s,-\xi)=j(x,s,\xi),\quad\text{for a.e.\ $x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$,} \\
\label{signostrongg}
j_s^\sharp(x,s,\xi)s\geq 0,\quad \text{for all $|s|\geq R^\sharp$ and some $R^\sharp\geq 0$}.
\end{align}
Let $g:\Omega\times\R\to\R$ be continuous, satisfying \eqref{growthassty} with $2<p<2^*(1-\beta)$,
\begin{equation}
\label{g-disparit}
g(x,-s)=-g(x,s),\quad\text{for a.e.\ $x\in\Omega$ and all $s\in\R$,}
\end{equation}
$G(x,s)\geq 0$ for $|s|\geq R$ and the
joint conditions \eqref{genARcondition} and \eqref{cdpdmm}, for some $R\geq 0$. Then,
\begin{equation*}
\begin{cases}
-\dvg( j_\xi(x,u,\nabla u)) +j_s(x,u,\nabla u)=g(x,u), & \text{in $\Omega$}, \\
\quad u=0, & \text{on $\partial\Omega$}
\end{cases}
\end{equation*}
admits a sequence $(u_n)$ of generalized solutions in the sense of Definition~\ref{defsol-bis}. Furthermore,
\begin{align*}
\frac{2N}{N+2}<q<\frac{N}{2}
&\quad\Longrightarrow\quad
u_n\in L^{\frac{Nq(1-\beta)}{N-2q}}(\Omega), \\
q>\frac{N}{2} &\quad\Longrightarrow\quad u_n\in
L^{\infty}(\Omega),
\end{align*}
in the notations of assumptions \eqref{growthassty}. In particular, if $q>N/2$,
it follows that $u_n\in H^1_0(\Omega)\cap L^{\infty}(\Omega)$ are solutions in the distributional sense.
\end{theorem}
\begin{proof}
Of course, $\xi\mapsto j^\sharp(x,s,\xi)$ is strictly convex.
By assumptions \eqref{originalgrowths1}-\eqref{originalgrowths2}, \eqref{growthassty}, \eqref{genARcondition} and \eqref{cdpdmm},
in light of Propositions~\ref{rm1}, \ref{menoinf}, \ref{newgrow} and \ref{invar2} and taking into account the sign condition
\eqref{signostrongg} for $j^\sharp$, \cite[assumptions (1.1)-(1.4), (1.7), (2.2), (2.4) and the variant
\eqref{genARcondition} for $j^\sharp$ of conditions (1.9) and (2.3)
joined together, which still guarantees the boundedness of Palais-Smale sequences]{pelsqu}
are satisfied for $j^\sharp$ and $g^\sharp$ for some $R^\sharp$.
Also, since $\varphi$ is odd, \eqref{parita} yields
$$
j^\sharp(x,-s,-\xi)=j(x,\varphi(-s),-\varphi'(-s)\xi)=j(x,-\varphi(s),-\varphi'(s)\xi)=j^\sharp(x,s,\xi),
$$
for a.e.\ $x\in\Omega$ and all $(s,\xi)\in\R\times\R^N$ and, analogously, \eqref{g-disparit} yields
$$
g^\sharp(x,-s)=g(x,\varphi(-s))\varphi'(-s)=g(x,-\varphi(s))\varphi'(s)=-g^\sharp(x,s),
$$
for a.e.\ $x\in\Omega$ and all $s\in\R$.
Then, we are allowed to apply \cite[Theorem 2.1]{pelsqu} and obtain a sequence $(v_n)\subset H^1_0(\Omega)$
of generalized solutions of \eqref{probmoddd} in the sense of \cite{pelsqu}, namely
\begin{equation*}
j_\xi^\sharp(x,v_n,\nabla v_n)\cdot\nabla v_n\in\elle1,\qquad j_s^\sharp(x,v_n,\nabla v_n)v_n \in \elle1,
\end{equation*}
and
\begin{equation*}
\into j_\xi^\sharp(x,v_n,\nabla v_n)\cdot\nabla \psi +\into
j_s^\sharp(x,v_n,\nabla v_n)\psi=\into g^\sharp(x,v_n)\psi, \quad\forall \psi\in V_{v_n}.
\end{equation*}
In particular, $(v_n)$ is a sequence of $H^1_0(\Omega)$ generalized solutions of problem \eqref{probmoddd}
in the sense of Definition~\ref{defsol-bis}.
The desired existence assertion now follows from Proposition~\ref{soluzdisrt} for $u_n=\varphi(v_n)$.
Concerning the summability, if $a^\sharp\in L^r(\Omega)$ and
$|g^\sharp(x,s)|\leq a^\sharp(x)+b|s|^{(N+2)/(N-2)}$ for a.e. $x\in\Omega$ and all $s\in\R$,
then, by \cite[Theorem 7.1]{pelsqu}, a generalized solution $v\in H^1_0(\Omega)$ of problem \eqref{probmoddd} belongs to
$L^{Nr/(N-2r)}(\Omega)$ for any $2N/(N+2)<r<N/2$ and to $L^\infty(\Omega)$, for all $r>N/2$.
Since $g$ is subjected to \eqref{growthassty}, by Proposition~\ref{newgrow}, we also get the final conclusions.
\end{proof}
\begin{remark}\rm
We believe that Theorem~\ref{existthmm} remains true if \eqref{signostrongg} is substituted by \eqref{generalsign}.
\end{remark}
\begin{remark}\rm
For $\beta=0$, the summability of the solutions coincides with the standard one.
\end{remark}
\noindent
The next proposition yields a class of $j$, which is the one studied in \cite{AbO} (condition \eqref{Ab-13}
below is precisely condition (1.3) in \cite{AbO}),
satisfying the assumptions of Theorem~\ref{existthmm}.
\begin{proposition}
\label{invspec}
Assume that $j:\Omega\times\R\times\R^N\to\R$ is of the form
$$
j(x,s,\xi)=\frac{1}{2}a(x,s)|\xi|^2,
$$
where $a(x,\cdot)\in C^1(\R,\R^+)$ for a.e.\ $x\in \Omega$. Assume furthermore that there exists $R\geq 0$ such that
\begin{equation}
\label{Ab-13}
-2\beta a(x,s) \leq D_sa(x,s)(1+|s|){\rm sign}(s) \leq 0,
\end{equation}
for a.e.\ $x \in \Omega$ and all $s\in\R$ with $|s|\geq R$.
Let $\varphi\in C^2(\R)$ be a diffeomorphism according to Definition~\ref{diffeoclass}
which in addition satisfies
\begin{equation}
\label{four}
\varphi''(s)-\frac{\beta\varphi'(s)^2}{1+\varphi(s)}\geq 0,\qquad\text{for all
$s\in\R$ with $s\geq 1$}.
\end{equation}
Then there exist $\nu^\sharp>2$, $\delta^\sharp>0$ and $R^\sharp>0$ such that
\begin{equation*}
sj^\sharp_s(x,s,\xi)\geq 0,\qquad
\nu^\sharp j^\sharp(x,s,\xi)-(1+\delta^\sharp)j^\sharp_\xi(x,s,\xi)\cdot\xi
-j^\sharp_s(x,s,\xi)s\geq 0
\end{equation*}
for a.e.~$x \in \Omega$, all $\xi \in {\R}^N$, and every $s \in
\R$ with $|s|\geq R^\sharp$.
\end{proposition}
\begin{proof}
Let $R^\sharp\geq 1$ be such that
$|\varphi(s)|\geq R$ for all $s\in\R$ with $|s|\geq R^\sharp$. Then, by \eqref{Ab-13},
for all $s\geq R^\sharp$ we have $\varphi(s)\geq R$ and
\begin{equation*}
\begin{aligned}
j^\sharp_s(x,s,\xi) &= [D_sa(x,\varphi(s))
(\varphi'(s))^3+2\varphi'(s)\varphi''(s) a(x,\varphi(s))]|\xi|^2/2 \\
&\geq a(x,\varphi(s)) \varphi'(s)\Big[\frac{-\beta
\varphi'(s)^2}{1+\varphi(s)}+\varphi''(s)\Big]|\xi|^2.
\end{aligned}
\end{equation*}
Recalling that $a(x,\varphi(s))$ and $\varphi'(s)$ are positive
and by \eqref{four}, one gets $j^\sharp_s(x,s,\xi)\geq 0$.
Similarly, if $s\leq-R^\sharp$, again by \eqref{Ab-13}, we have $\varphi(s)\leq -R$ and
\begin{equation*}
j^\sharp_s(x,s,\xi)\leq a(x,\varphi(s))
\varphi'(s)\Big[\frac{\beta \varphi'(s)^2}{1+|\varphi(s)|}+\varphi''(s)\Big]|\xi|^2,
\end{equation*}
and so $j^\sharp_s(x,s,\xi)\leq 0$, again due to \eqref{four}: indeed, since
$\varphi$ and $\varphi''$ are odd and $\varphi'$ is even, \eqref{four} yields
\begin{equation*}
\varphi''(s)+\frac{\beta\varphi'(s)^2}{1+|\varphi(s)|}\leq 0,\qquad\text{for all
$s\in\R$ with $s\leq -1$}.
\end{equation*}
The second inequality in the assertion follows from
Corollary \ref{invar2-cor} (applied with $g=0$), since $\xi\mapsto j(x,s,\xi)$ is $2$-homogeneous
and $j_s(x,s,\xi)s\leq 0$ for a.e. $x \in \Omega$, all $\xi \in \R^N$
and any $|s|\geq R$.
\end{proof}
\begin{remark}\rm
\label{penultimo}
In the statement of Proposition~\ref{invspec}, in place of condition~\eqref{Ab-13}, one could consider
the following slightly more general assumption: there exists $R\geq 0$ such that
\begin{equation}
\label{Ab-13-bis}
-2\beta |s|a(x,s) \leq D_sa(x,s)(b(x)+s^2){\rm sign}(s) \leq 0,
\end{equation}
for a.e.\ $x \in \Omega$ and all $s\in\R$ with $|s|\geq R$, for some
measurable function $b:\Omega\to\R$ such that $\nu^{-1}\leq b(x)\leq \nu$, for some $\nu>0$.
This condition is satisfied for instance by $a(x,s)=(b(x)+s^2)^{-\beta}$ with $b$ measurable
and bounded between positive constants.
\end{remark}
\begin{remark}\rm
When the maps $s\mapsto j^\sharp(x,s,\xi),j_s^\sharp(x,s,\xi),j_\xi^\sharp(x,s,\xi)$
are bounded, the variational formulation of
\eqref{probmoddd} can be meant in the sense of distributions (see Proposition~\ref{soldistr}).
For instance, as can be easily verified, this occurs for the $a$ mentioned in Remark~\ref{penultimo},
$a(x,s)=(b(x)+s^2)^{-\beta}$.
\end{remark}
\bigskip
\noindent
{\bf Acknowledgments.} The second author wishes to thank Marco Degiovanni for useful discussions and Luigi Orsina
for some feedback on a preliminary version of the manuscript.
\bigskip
\section{Introduction}
Interconnects are widely used in 2.5D/3D packages and integrated circuits (ICs) [\citen{PackagingICs_1}][\citen{PackagingICs_2}]. Signal integrity and power integrity (SI/PI) must be carefully considered in the design of interconnects in the high-frequency region to guarantee their performance [\citen{PackagingICs_1}]-[\citen{PackagingICs_4}]. Circuit parameters, such as parasitic resistance and inductance, are essential to SI/PI design [\citen{PackagingICs_2}][\citen{SIPI_1}].
To accurately and efficiently extract those parameters of complex interconnects in packages, many efforts have been made in the last few decades. Several volumetric integral equation (VIE) formulations have been proposed to extract the wideband resistance and inductance. FastHenry is a popular open-source solver based on the magneto-quasi-static (MQS) VIE formulation in conjunction with a mesh analysis, in which currents are assumed to flow mainly in the longitudinal direction [\citen{FastHenry}]. However, since electric currents crowd significantly towards the surfaces of conductors in the high-frequency region, extremely fine meshes are usually required to accurately model the skin effect in highly lossy interconnects. Several full-wave VIE formulations have then been proposed to model interconnects under exterior electromagnetic waves or in complex inhomogeneous media [\citen{FullWaveVIE_1}]-[\citen{FullWaveVIE_3}].
To improve the computational efficiency, surface integral equation (SIE) formulations have been proposed to extract parameters of interconnects. Those formulations usually outperform their volumetric counterparts, since unknowns reside only on the surfaces of interconnects. For example, several full-wave methods based on SIE formulations are developed in [\citen{SIEFullwave1}]-[\citen{SIEFullwave6}], in which the tangential electric and magnetic fields are calculated on the surfaces of objects. Many formulations based on SIEs and circuit theory, namely the partial element equivalent circuit (PEEC) methods, have been proposed to model circuit structures [\citen{SIEPEEC_0}]-[\citen{SIEPEEC_8}]. In those formulations, electromagnetic quantities are interpreted as circuit elements, and external excitations are then applied for impedance extraction. In addition, an open-source solver, namely FastImp [\citen{FastImp}], based on the mixed potential integral equation (MPIE) formulation was developed to model arbitrarily shaped conductors [\citen{MPIE1}]-[\citen{MPIE3}]. In [\citen{MQSSIE}], an MQS SIE formulation is proposed to model interconnects with rectangular panels. Compared with other formulations, it is more efficient since a surface impedance boundary condition is used to model the skin effect. However, currents are still assumed to flow in the longitudinal direction, so arbitrary current distributions in complex interconnects cannot be easily modeled. The Ansys Q3D extractor, based on the MQS SIE formulation, was developed to model interconnects in industrial applications [\citen{AnsysQ3D}].
In this article, an SIE formulation under the MQS assumption is proposed to efficiently and accurately model arbitrarily shaped interconnects in packages. To conveniently apply the charge conservation condition and external voltage sources, all electromagnetic quantities are systematically transformed into circuit elements. For practical complex interconnects, the equivalent circuits may be nonplanar. Therefore, a loop analysis, rather than the traditional mesh analysis, is developed to carefully construct matrix equations with an independent and complete set of unknowns based on graph theory. In addition, the pre-corrected fast Fourier transform (pFFT) [\citen{PFFT2}][\citen{PFFT}] is implemented to solve the matrix equation for large-scale interconnects, and an efficient preconditioner is developed to accelerate the convergence. We carried out four practical examples, ranging from simple to complex structures, including a rectangular metallic interconnect, bonding wire arrays, interconnects in a real-life circuit, and the power distribution networks (PDNs) used in packages, to validate its accuracy, efficiency and scalability.
Compared with other existing techniques, the contributions of this article are mainly threefold.
\begin{enumerate}
\item An SIE formulation under the MQS assumption with triangular discretization is used to model arbitrarily shaped interconnects in packages. By carefully interpreting the SIE formulation as an equivalent circuit, the loop analysis is successfully developed to apply the charge conservation condition and exterior excitations.
\item An efficient preconditioner is introduced to accelerate the convergence of the proposed SIE formulation, and the pFFT algorithm is specially tailored to rapidly calculate the matrix-vector product for large-scale problems. Therefore, it can model practical complex interconnects.
\item Four practical complex structures in packages are used to verify the accuracy, efficiency and flexibility of the proposed formulation. Results show that it can efficiently model large-scale interconnects and solve practical problems in real-life circuits with the same level of accuracy as the industrial solver.
\end{enumerate}
The article is organized as follows. In Section II, the proposed SIE formulation with triangular discretization and the modified Rao-Wilton-Glisson (RWG) basis functions is presented in detail. Then, the equivalent circuit corresponding to the SIE formulation is introduced and explained. In Section III, the loop analysis with graph theory is used to apply the charge conservation condition and voltage sources. An effective preconditioner is proposed to accelerate the convergence, and the pFFT algorithm is detailed in Section IV. Then, four numerical examples are carried out to validate the accuracy and efficiency of the proposed formulation in Section V. Finally, we draw some conclusions in Section VI.
\section{The SIE Formulation for Lossy Interconnects}
\subsection{The MQS SIE formulation}
Without loss of generality, external excitations are not included inside interconnects. The electric field in the exterior space can be expressed as
\begin{equation}\label{MPIE1}
\mathbf{E} = -j\omega\mathbf{A}-\nabla\Phi,
\end{equation}
where $\mathbf{E}$, $\mathbf{A}$, $\Phi$ are the electric field, the vector potential and the scalar potential, respectively. $\omega$ denotes the angular frequency. $\mathbf{A}$ can be expressed in terms of the electric current density and the Green's function as
\begin{equation}\label{VectorPotential}
\mathbf{A} = \mu_{0}\int G_{0}\left(\mathbf{r},\mathbf{r}'\right)\mathbf{J}\left(\mathbf{r}'\right)d\mathbf{r}'.
\end{equation}
By substituting (\ref{VectorPotential}) into (\ref{MPIE1}), we have
\begin{equation}\label{MPIE2}
\mathbf{E} = -j\omega\mu_{0}\int G_{0}\left(\mathbf{r},\mathbf{r}'\right)\mathbf{J}\left(\mathbf{r}'\right)d\mathbf{r}'-\nabla\Phi,
\end{equation}
which is the well-known MPIE. In this article, we consider that the skin effect is well developed in the high frequency region. Therefore, electric currents crowd towards the surfaces of interconnects. The surface impedance operator can be used to relate the tangential electric field to the surface current density, which is given by
\begin{equation}\label{ImpedanceOp}
Z_{s} = \sqrt{j\omega\mu/\sigma},
\end{equation}
where $\sigma$ is the conductivity of the interconnect. It should be noted that (\ref{ImpedanceOp}) is only valid in the high frequency region, where the skin effect is well-developed. When parameters in the low frequency region are considered, other impedance operators, such as the generalized impedance boundary condition [\citen{GIBC_1}]-[\citen{GIBC_3}], should be used.
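As a rough numerical illustration of (\ref{ImpedanceOp}) (not part of the original derivation; the copper conductivity $\sigma \approx 5.8\times 10^{7}$ S/m is an assumed value), the surface impedance and the skin depth $\delta=\sqrt{2/(\omega\mu\sigma)}$ can be evaluated directly:

```python
import cmath
import math

MU0 = 4 * math.pi * 1e-7    # vacuum permeability (H/m)
SIGMA_CU = 5.8e7            # conductivity of copper (S/m), assumed value

def surface_impedance(freq_hz, sigma=SIGMA_CU, mu=MU0):
    """Z_s = sqrt(j*omega*mu/sigma), valid when the skin effect is well developed."""
    omega = 2 * math.pi * freq_hz
    return cmath.sqrt(1j * omega * mu / sigma)

def skin_depth(freq_hz, sigma=SIGMA_CU, mu=MU0):
    """delta = sqrt(2/(omega*mu*sigma))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu * sigma))

zs = surface_impedance(100e9)
delta = skin_depth(100e9)
# For copper at 100 GHz the skin depth is roughly 0.21 um, so the surface
# impedance model is adequate for conductors several times thicker than delta.
```

Note that $\Re(Z_s)=\Im(Z_s)=1/(\sigma\delta)$, which is consistent with the validity condition on the skin depth discussed above.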
By considering lossy effects imposed by (\ref{ImpedanceOp}), (\ref{MPIE2}) can be modified as
\begin{equation}\label{MPIE3}
Z_{s}\mathbf{J}\left(\mathbf{r}\right)+j\omega\mu_{0}\int G_{0}\left(\mathbf{r},\mathbf{r}'\right)\mathbf{J}\left(\mathbf{r}'\right)d\mathbf{r}' = -\nabla\Phi.
\end{equation}
(\ref{MPIE3}) is the SIE formulation to model lossy interconnects.
Since the interconnects in packages considered in this article have typical sizes of only several millimeters, the MQS assumption can be safely used. Therefore, the static Green's function in free space is used in (\ref{MPIE3}), which is given by
\begin{equation}\label{StaticGF}
G_{0}\left(\mathbf{r},\mathbf{r}'\right) = \frac{1}{4\pi\lvert\mathbf{r}-\mathbf{r}'\rvert}.
\end{equation}
The MQS assumption also implies the electric charge conservation condition, namely $\nabla\cdot\mathbf{J} = 0$. In Section III, the loop analysis will be used to enforce it.
\subsection{Surface Discretization Through Triangles}
Generally, volumetric or surface elements, such as tetrahedra, filaments, triangles and panels, are used to divide interconnects into small elements for numerical calculation. Volumetric elements are usually applied in VIEs, such as filaments in FastHenry [\citen{FastHenry}], voxels in VoxHenry [\citen{VoxHenry}], and tetrahedra in [\citen{FullWaveVIE_1}], which lead to prohibitively large modeling errors for complex structures or high computational cost in terms of runtime and memory when modeling the well-developed skin effect. Among surface elements, rectangular panels with pulse basis functions are developed in [\citen{MQSSIE}], in which currents are assumed to flow in the longitudinal direction; currents in arbitrary directions cannot be easily supported.
In our implementation, triangles are used to discretize the surfaces of interconnects. Therefore, arbitrarily shaped interconnects can be easily discretized with considerably small modeling errors. In addition, the RWG functions can be used to support currents in arbitrary directions. Therefore, triangles are preferred for complex interconnects.
\subsection{Construction of Matrix Equations Through Modified RWG Basis Functions}
The method of moments (MoM) is chosen to solve the electric current density in (\ref{MPIE3}). Once triangle meshes on the surface of interconnects are constructed, the modified RWG functions are used to discretize surface current density $\mathbf{J}$, which can be expressed as
\begin{equation}\label{RWG}
{{\bf{f}}_n}({\bf{r}}) = \left\{ {\begin{array}{*{2}{cc}}
\frac{1}{{2A_n^ + }}{\bm{\rho }}_n^ + ({\bf{r}}) &{\bf{r}}\,{\rm{ in }}\,T_n^ + \\[0.7em]
\frac{1}{{2A_n^ - }}{\bm{\rho }}_n^ - ({\bf{r}}) &{\bf{r}}\,{\rm{ in }}\,T_n^ - \\[0.7em]
0 &{\rm{otherwise}}
\end{array}} \right.
\end{equation}
Compared with the traditional RWG basis function in [\citen{RWGFunction}], it should be noted that the edge-length factor is removed from (\ref{RWG}) in our implementation. The current density $\mathbf J$ can then be expanded using the modified RWG functions as
\begin{equation}\label{ExpandJ}
{\bf{J}}({\bf{r}}) = \sum\limits_{i = 1}^{{N_e}} {{I_{{b_i}}}{{\bf{f}}_i}({\bf{r}})},
\end{equation}
where $N_e$ is the total number of edges. The expansion coefficient $I_{b_{i}}$ denotes the current flowing across edge $i$ from triangle $T_{n}^{+}$ to $T_{n}^{-}$. Since the edge-length factor is removed from $\mathbf{f}_n$, $I_{b_{i}}$ denotes the electric current rather than the current density flowing through edge $i$, as with the original RWG basis function. This modification makes the loop analysis easy to apply, as shown in Section III.
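The statement that the expansion coefficient equals the total edge current can be checked numerically: with the edge-length factor removed, the flux of $\mathbf{f}_n$ across the shared edge is exactly one for any triangle shape. A minimal sketch (the triangle vertices are chosen arbitrarily for illustration):

```python
import math

# Triangle T+ with free vertex v_plus; the shared edge runs from e0 to e1.
v_plus = (0.0, 0.0)
e0, e1 = (2.0, 1.0), (1.0, 3.0)

def tri_area(a, b, c):
    return 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))

A = tri_area(v_plus, e0, e1)

# Unit normal of the shared edge, oriented away from the free vertex.
ex, ey = e1[0] - e0[0], e1[1] - e0[1]
L = math.hypot(ex, ey)
n = (ey / L, -ex / L)
mid = ((e0[0]+e1[0])/2 - v_plus[0], (e0[1]+e1[1])/2 - v_plus[1])
if n[0]*mid[0] + n[1]*mid[1] < 0:
    n = (-n[0], -n[1])

def f_normal(t):
    """Normal component of f(r) = rho/(2A) at the edge point e0 + t*(e1-e0)."""
    r = (e0[0] + t*ex - v_plus[0], e0[1] + t*ey - v_plus[1])
    return (r[0]*n[0] + r[1]*n[1]) / (2*A)

# Midpoint-rule integration of the normal flux across the shared edge.
steps = 1000
flux = sum(f_normal((k + 0.5) / steps) for k in range(steps)) * L / steps
# flux == 1 regardless of the triangle shape, so I_b_i is the edge current.
```

The normal component $(\mathbf{r}-\mathbf{v}^{+})\cdot\hat{\mathbf{n}}/(2A)$ is constant along the edge and equals $1/l$, so the integral over the edge of length $l$ is one.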
In addition, the pulse functions are used to expand $\Phi$, which is expressed as
\begin{equation}\label{ExpandPotential}
\Phi ({\bf{r}}) = \sum\limits_{i = 1}^{{N_t}} {{\varphi _i}{\Pi _i}({\bf{r}})},
\end{equation}
where $N_t$ is the total number of triangles, and $\Pi _i({\bf{r}})$ equals $1$ on the $i$th triangle and $0$ outside it.
We substitute (\ref{ExpandJ}) and (\ref{ExpandPotential}) into (\ref{MPIE3}), and the Galerkin scheme is selected to test (\ref{MPIE3}). The following matrix equation can be obtained
\begin{equation}\label{BranchVandI}
{{\bf{Z}}_b}{{\bf{I}}_b} = {{\bf{V}}_b},
\end{equation}
where
\begin{align}
{\left[ {{{\bf{Z}}_b}} \right]_{ij}} &= \frac{{j\omega \mu }}{{4\pi }}\int\limits_{{S_i}} {{{\bf{f}}_i}({\bf{r}})} \int\limits_{{S_j}} {\frac{{{{\bf{f}}_j}({\bf{r'}})}}{{\left| {{\bf{r}} - {\bf{r'}}} \right|}}d{\bf{r}}d{\bf{r'}}}\notag\\
\label{Zb} & \quad\quad\quad\quad\quad\quad\quad\quad+{Z_s}\int\limits_{{S_i}} {{{\bf{f}}_i}({\bf{r}}){{\bf{f}}_j}({\bf{r}})d{\bf{r}}} ,\\
\label{Vb} {\left[ {{{\bf{V}}_b}} \right]_i}\, &= \varphi _i^ + - \varphi _i^ - .
\end{align}
$\mathbf{V}_b$ collects the potential differences between the two triangles sharing each edge, and $\mathbf{I}_b$ collects all unknown currents in a column vector. To accurately solve (\ref{BranchVandI}), the modified nodal analysis (MNA) [\citen{NodaA}] or the mesh analysis [\citen{FastHenry}] is required to enforce the charge conservation condition. This part will be introduced in Section III.
\subsection{Physical Interpretation of (\ref{BranchVandI}) in the View of Circuit Theory}
In fact, under the MQS assumption, (\ref{BranchVandI}) can be completely interpreted through circuit theory, which makes it easy and convenient to enforce the charge conservation condition and external excitations.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{EquCircuit.jpg}
\caption{The triangle discretization for a triangular prism interconnect, and the equivalent circuit for a triangle pair.}
\label{CDinTp}
\end{figure}
To make the derivation clear, we introduce two domains: the electromagnetic domain and the circuit domain. As shown in Fig. \ref{CDinTp}, each triangle is treated as one circuit node in the circuit domain, and one edge between two triangles represents a circuit branch between two nodes. Under these intuitive interpretations, $\mathbf{I}_b$ and $\mathbf{V}_b$ can be regarded as branch currents and voltages in the circuit domain.
In (\ref{Zb}), $\mathbf{Z}_b$ can be separated into two parts: the inductance $\mathbf{L}_b$ and the resistance $\mathbf{R}_b$. $\mathbf{L}_b$ represents the first term of $\mathbf{Z}_b$, and $\mathbf{R}_b$ is the second term. The equivalent circuits for Row\#$i$ of $\mathbf{Z}_b$ are shown in Fig. \ref{CDinTp}. In the circuit domain, $\left[\mathbf{L}_b\right]_{ii}$ represents the self-inductance, while $\left[\mathbf{L}_b\right]_{ij}\left(j=1,2,\ldots,n,i\neq j\right)$ denotes voltage controlled voltage sources (VCVSs). Similarly, $\left[\mathbf{R}_b\right]_{ii}$ can be represented by a resistor $R_{ii}$, and the other four terms $\left[\mathbf{R}_b\right]_{ij}$ are treated as four current controlled voltage sources (CCVSs) [\citen{CircuitEq}], which are serially connected with $R_{ii}$. Table \ref{EMDomain2CCDomain} summarizes the relationship between quantities in the electromagnetic domain and their corresponding physical concepts or elements in the circuit domain.
Fig. \ref{CDinTp} gives a simple example, a prism-shaped interconnect. Eight triangular patches are used to construct its surface, marked by black numbers in circles. There are 12 edges in total, marked by red numbers in squares. The equivalence in Table \ref{EMDomain2CCDomain} is applied to this interconnect, and a planar circuit is obtained as shown in Fig. \ref{EqC}. Therefore, to this point, the interconnect is fully interpreted as a circuit. In Section III, we will use graph theory to select an independent set of unknowns and then solve for it.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Eq_Circuit_Rot_cut.jpg}
\caption{The circuit corresponds to the interconnect in Fig. \ref{CDinTp} by applying the relationship between electromagnetic quantities and circuit elements in Table \ref{EMDomain2CCDomain}.}
\label{EqC}
\end{figure}
\renewcommand\arraystretch{1.2}
\begin{table}
\centering
\caption{Relationship between quantities in the electromagnetic domain and elements in the circuit domain}\label{EMDomain2CCDomain}
\begin{tabular}{|l|c|}
\hline
\textbf{EM domain} &\textbf{Circuit domain} \\
\hline
Triangle &Node \\
\hline
Edge &Branch \\
\hline
$\left[\mathbf{I}_{b}\right]$ &Branch current\\
\hline
$\left[\mathbf{V}_{b}\right]$ &Branch voltage\\
\hline
$\left[\mathbf{L}_{b}\right]_{ii}$ &Inductor \\
\hline
$\left[\mathbf{L}_{b}\right]_{ij}\left(i\neq j\right)$ &VCVS\\
\hline
$\left[\mathbf{R}_{b}\right]_{ii}$ &Resistor \\
\hline
$\left[\mathbf{R}_{b}\right]_{ij}\left(i\neq j\right)$ &CCVS \\
\hline
\end{tabular}
\end{table}
\section{Loop Analysis}
In Section II, (\ref{BranchVandI}) is derived to describe the relationship between the branch currents and voltages. To solve it, additional constraints are required in practical applications, such as the charge conservation law and external excitations. The MNA [\citen{NodaA}], which enforces Kirchhoff's current law, is widely used in many applications [\citen{CircuitEq}]. For each node (triangle), the sum of currents flowing into and out of the node should be zero, which leads to an additional matrix equation upon the branch currents. Therefore, the unknowns in the MNA include branch currents and node voltages, and the dimension of the final matrix equation is the sum of the numbers of edges and triangles. Another option is the mesh analysis, which enforces Kirchhoff's voltage law and leads to a much smaller matrix equation than the MNA. However, the mesh analysis is only applicable to planar circuits, which is not the case for practical complex interconnects. To overcome this issue, we propose to use the loop analysis to enforce those additional conditions in general applications.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Circuit_GraphACurrent_cut.jpg}
\caption{The graph for the circuit in Fig. \ref{EqC} and the direction of branch currents.}
\label{TopoGraph}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.499\textwidth]{Circuit_Tree_cut.jpg}
\caption{A tree for the graph in Fig. \ref{TopoGraph} and the corresponding twigs and links. The right panel shows the tree connection.}
\label{TreeDefinition}
\end{figure}
\subsection{Graph Theory in the Circuit Analysis}
This subsection briefly introduces some essential concepts used in our analysis, and then uses them to solve (\ref{BranchVandI}). When all elements of the equivalent circuit in Fig. \ref{EqC} are removed, a graph of its topological connection is obtained as shown in Fig. \ref{TopoGraph}. Our following analysis is based on this graph. According to graph theory [\citen{GraphT1}], every connected graph has a {\it tree}, which connects all nodes through branches without forming any closed loop. Once a tree for the circuit is generated, all branches in the graph are divided into two groups: those in the tree and those not in it. Branches in the tree are called {\it twigs}, and the other branches are called {\it links}. It should be noted that a graph may have many possible trees. Fig. \ref{TreeDefinition} shows one possible tree for the graph in Fig. \ref{TopoGraph}, in which twigs are drawn as red solid lines and links as black dashed lines.
Obviously, if a graph with one degree of separation has $n$ nodes and $b$ branches, the overall count of twigs is $(n-1)$ and the overall count of links is $(b-n+1)$. The {\it degree of separation} is the number of completely separated parts, i.e., the number of fully separated conductors without any physical connection. Therefore, the overall counts of twigs and links are $(n-s)$ and $(b-n+s)$, respectively, for a graph with $s$ degrees of separation.
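The counting rule above can be verified on small graphs. A minimal sketch in plain Python (the connectivity list below is an assumed example with the same counts as the prism of Fig. \ref{CDinTp}: $n=8$ nodes and $b=12$ branches):

```python
def twig_link_counts(num_nodes, branches):
    """Return (#twigs, #links) = (n - s, b - n + s) for a branch list.

    branches: list of (node_i, node_j) pairs; s is the number of
    connected components (the 'degree of separation').
    """
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, j in branches:
        parent[find(i)] = find(j)
    s = len({find(k) for k in range(num_nodes)})
    n, b = num_nodes, len(branches)
    return n - s, b - n + s

# An assumed connectivity with n = 8 nodes and b = 12 branches, the same
# counts as the triangulated prism (8 triangles, 12 edges).
prism = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4),
         (0, 4), (1, 5), (2, 6), (3, 7)]
twigs, links = twig_link_counts(8, prism)   # -> (7, 5)
```

For two fully separated conductors the component count $s$ rises and the link count drops accordingly, exactly as the $(n-s)$ and $(b-n+s)$ formulas predict.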
In the loop analysis, loop currents rather than branch currents are solved. The loops cannot be arbitrarily chosen; the set of loops should be independent and complete. Generally, one possible choice is to generate the {\it fundamental loops}, each of which consists of exactly one link and several connected twigs (see Fig. \ref{LoopIllu}). Therefore, the overall count of fundamental loops equals that of links. If loop currents are defined on the fundamental loops, they are independent of each other, and the set of all fundamental loops is complete. Therefore, we can easily construct such a set from a tree of the graph. In Fig. \ref{LoopIllu}, the black solid circles constitute such a set.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{GTreeWithLoop_cut.jpg}
\caption{The fundamental loops for the selected tree without additional source, and the corresponding transfer matrix $\mathbf A$.}
\label{LoopIllu}
\end{figure}
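The construction of fundamental loops from a tree can be sketched as follows (a minimal BFS-based version for a connected graph; the square circuit at the end is an assumed toy example, not the graph of Fig. \ref{TopoGraph}):

```python
from collections import deque

def fundamental_loops(num_nodes, branches):
    """Build a BFS spanning tree and return one fundamental loop per link.

    Each loop is a list of branch indices: the link itself plus the twigs
    on the unique tree path between the link's endpoints. The graph is
    assumed connected.
    """
    adj = [[] for _ in range(num_nodes)]
    for idx, (i, j) in enumerate(branches):
        adj[i].append((j, idx))
        adj[j].append((i, idx))

    parent = {0: (None, None)}          # node -> (parent node, branch index)
    order = deque([0])
    while order:
        u = order.popleft()
        for v, idx in adj[u]:
            if v not in parent:
                parent[v] = (u, idx)
                order.append(v)
    twigs = {idx for _, idx in parent.values() if idx is not None}

    def path_to_root(u):
        chain = []
        while parent[u][0] is not None:
            u, idx = parent[u][0], parent[u][1]
            chain.append(idx)
        return chain

    loops = []
    for idx in range(len(branches)):
        if idx in twigs:
            continue                    # a twig closes no loop by itself
        i, j = branches[idx]
        pi, pj = path_to_root(i), path_to_root(j)
        # The tree path between i and j is the symmetric difference of
        # their two root paths.
        loops.append([idx] + [e for e in pi + pj if (e in pi) != (e in pj)])
    return loops

# One fundamental loop for a single square circuit (assumed 4-node example):
loops = fundamental_loops(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The number of returned loops equals the number of links, $b-n+s$, matching the counting rule of the previous paragraphs.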
\subsection{Transfer Branch Quantities to Loop Counterparts}
To obtain the loop currents and voltages, Kirchhoff's voltage law is applied in each fundamental loop. For each loop, the summation of branch voltages must equal the loop voltage, which is expressed as
\begin{equation}\label{VolT}
\mathbf{A} \mathbf{V}_b=\mathbf{V}_l,
\end{equation}
where $\mathbf{A}$ transfers the branch voltages $\mathbf{V}_b$ to the loop counterparts $\mathbf{V}_l$. Generally, a loop voltage is zero if there is no additional voltage source in the corresponding loop. The dimension of $\mathbf{A}$ is $l \times b$, where $b$ is the overall count of branches and $l = (b-n+s)$ is the number of links. Its nonzero entries are $-1$ or $1$, depending on the relative directions of the loops and the branch voltage drops. Fig. \ref{LoopIllu} gives the elements of $\mathbf{A}$ for the corresponding loops. $\mathbf{A}$ is a sparse matrix and can be efficiently stored in the compressed column storage (CCS) or compressed row storage (CRS) format.
The transfer matrix from loop currents to branch counterparts is the transpose of $\mathbf{A}$, which is given by
\begin{equation}\label{CurT}
\mathbf{A}^T \mathbf{I}_l=\mathbf{I}_b,
\end{equation}
where $\mathbf{I}_l$ is a column vector including all loop currents. By substituting (\ref{VolT}) and (\ref{CurT}) into (\ref{BranchVandI}), we have
\begin{equation}\label{LoopVandI}
\mathbf{Z}_l \mathbf{I}_l=\mathbf{V}_l,
\end{equation}
where $\mathbf{Z}_l=\mathbf{A Z}_b \mathbf{A}^T$.
By using $\mathbf{A}$ and $\mathbf{A}^T$, branch quantities are replaced by their corresponding loop counterparts, and the dimension of the matrix equation is significantly reduced from $(b+n)$ to $(b-n+s)$. For a closed surface, the number of edges is always one and a half times the number of triangles. Therefore, the dimension of the matrix equation generated by the loop analysis is about 20\% of that generated by the MNA.
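The change of variables from branch to loop quantities can be illustrated on a toy circuit (the values of $\mathbf{Z}_b$ and the single loop below are assumed for illustration; numpy is used for brevity):

```python
import numpy as np

# Toy circuit: a square of b = 4 branches and n = 4 nodes (s = 1), so there
# is a single fundamental loop, l = b - n + s = 1.
Zb = np.array([[2.0, 0.3, 0.1, 0.2],
               [0.3, 2.0, 0.3, 0.1],
               [0.1, 0.3, 2.0, 0.3],
               [0.2, 0.1, 0.3, 2.0]])   # assumed symmetric branch impedances

# One loop traversing all four branches; signs follow the branch orientations.
A = np.array([[1.0, 1.0, -1.0, 1.0]])   # shape (l, b)

Zl = A @ Zb @ A.T                        # loop impedance matrix, shape (1, 1)
Vl = np.array([1.0])                     # a 1 V source inserted in the loop
Il = np.linalg.solve(Zl, Vl)             # loop current
Ib = A.T @ Il                            # branch currents recovered via A^T
```

Solving the $1\times 1$ loop system and mapping back through $\mathbf{A}^T$ reproduces branch currents that satisfy the original branch equation projected onto the loop.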
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{MatrixwithLoopAndSource_cut.jpg}
\caption{The loops with a voltage source between Node\#1 and Node\#2 and the modified matrix $\mathbf{A}$.}
\label{LoopWithEx}
\end{figure}
\subsection{Apply Voltage Sources to the Circuit}
Generally, a voltage source is attached between two circuit nodes. The tree of the circuit does not need to be modified, and neither do the twigs, because no additional circuit nodes are added. The only difference is that an additional link is generated. Therefore, a new fundamental loop should be considered, as shown in Fig. \ref{LoopWithEx}. The dimension of the matrix in (\ref{LoopVandI}) becomes $(b-n+s+p)$ if external voltage sources are considered, where $p$ is the overall number of exterior voltage sources. The modified matrix $\mathbf A$ is shown in Fig. \ref{LoopWithEx}.
Once the loop analysis is carried out, and the external voltage source is attached to the circuit, the final matrix equation is expressed as
\begin{equation}\label{FinalME}
{\bf{A}}{{\bf{Z}}_b}{{\bf{A}}^T}{{\bf{I}}_l} = {{\bf{V}}_l}.
\end{equation}
\section{Preconditioning and PFFT Acceleration}
When large-scale interconnects are considered, (\ref{FinalME}) has to be solved with iterative algorithms. The generalized minimal residual method (GMRES) [\citen{GMRES}] is used to solve (\ref{FinalME}) in our implementation. To speed up its convergence, a preconditioner is carefully designed and the pFFT is implemented to accelerate the matrix-vector multiplication. This section introduces the preconditioning matrix and the pFFT acceleration algorithm.
\subsection{Preconditioning}
To reduce the overall iteration number and accelerate the convergence, an efficient preconditioner is required. In this article, the preconditioner is defined as
\begin{equation}\label{Precond}
{\bf{P}} = {\mathbf{AZ}}_b^N{{\bf{A}}^T},
\end{equation}
where $\mathbf{Z}_b^N$ is a diagonal matrix whose entries are obtained from $\mathbf{Z}_b$. Therefore, $\mathbf{P}$ is a symmetric matrix. The left preconditioning technique is applied to (\ref{LoopVandI}), and (\ref{LoopWithP}) is obtained as
\begin{equation}\label{LoopWithP}
{{\bf{P}}^{ - 1}}{{\bf{Z}}_l}{{\bf{I}}_l} = {{\bf{P}}^{ - 1}}{{\bf{V}}_l}.
\end{equation}
It is obvious that the inverse of $\mathbf P$ should be applied efficiently when the GMRES is used to solve (\ref{FinalME}). Fortunately, since $\mathbf{A}$ and $\mathbf{A}^T$ are sparse matrices and $\mathbf{Z}_b^N$ is a diagonal matrix, $\mathbf{P}$ is a sparse matrix. A direct factorization algorithm, such as the PARDISO solver in MKL [\citen{MKL}], can be used to efficiently apply the inverse of $\mathbf{P}$.
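The behavior of the preconditioner can be sketched on a small assumed system, taking $\mathbf{Z}_b^N$ as the diagonal of $\mathbf{Z}_b$ (one plausible reading of ``entries obtained from $\mathbf{Z}_b$''): when the off-diagonal coupling is weak, the spectrum of $\mathbf{P}^{-1}\mathbf{Z}_l$ clusters near one, which is what accelerates GMRES.

```python
import numpy as np

# An assumed small system: A maps l = 3 loops onto b = 6 branches.
A = np.array([[1., -1.,  0., 1., 0., 0.],
              [0.,  1., -1., 0., 1., 0.],
              [1.,  0., -1., 0., 0., 1.]])
# Branch impedances: dominant diagonal plus weak dense coupling (assumed values).
Zb = np.diag([2.0, 3.0, 2.5, 1.5, 2.0, 3.0]) + 0.02 * np.ones((6, 6))

ZbN = np.diag(np.diag(Zb))    # keep only the diagonal of Z_b
P   = A @ ZbN @ A.T           # cheap in practice: A, A^T sparse, Z_b^N diagonal
Zl  = A @ Zb  @ A.T

# Without preconditioning the eigenvalues of Z_l are spread far from one;
# with it, the eigenvalues of P^{-1} Z_l cluster tightly around one.
eig_plain = np.linalg.eigvals(Zl)
eig_prec  = np.linalg.eigvals(np.linalg.solve(P, Zl))
```

In the full solver $\mathbf{P}$ is factorized once per frequency point and reused at every GMRES iteration.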
Three matrix-vector products are required for (\ref{LoopWithP}). It should be noted that ${\widetilde {\bf{I}}_l} = {{\bf{Z}}_l}{{\bf{I}}_l}$ is very time-consuming, since $\mathbf{Z}_l$ is a full matrix. However, it is easy to calculate ${{\bf{P}}^{ - 1}}{{\bf{V}}_l}$ and ${{\bf{P}}^{ - 1}}{\widetilde {\bf{I}}_l}$ because $\mathbf{P}^{-1}$ can be applied efficiently. As mentioned in Section II, $\mathbf{Z}_b$ can be separated into two parts, which is expressed as
\begin{equation}\label{SepZb}
{{\bf{Z}}_b} = {\bf{R}}_b + {\bf{L}}_b,
\end{equation}
where ${\bf{L}}_b$ is the first term of (\ref{Zb}), which is a full matrix, and ${\bf{R}}_b$ is the second term, which is a sparse matrix. Therefore, we only need to accelerate ${\bf{AL}}_b{{\bf{A}}^T}{{\bf{I}}_l}$.
\subsection{PFFT Acceleration}
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{pfftGridSF.jpg}
\caption{Flowchart through the pFFT algorithm to accelerate the matrix-vector multiplication.}
\label{pFFTGrid}
\end{figure}
For a Toeplitz matrix, the matrix-vector multiplication can be efficiently computed using the FFT, which reduces the time complexity from $O\left(N^2\right)$ to $O\left(N\log N\right)$ and the memory cost to $O\left(N\right)$. By assuming uniformly distributed grids in three-dimensional space as shown in Fig. \ref{pFFTGrid}, the matrix discretized from $G_0\left(\mathbf{r},\mathbf{r}'\right)$ is a three-level Toeplitz matrix because the Green's function is shift-invariant in three-dimensional space. Therefore, the FFT can be used to efficiently accelerate the matrix-vector multiplication.
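The core of the acceleration, a Toeplitz matrix-vector product computed via circulant embedding and the FFT, can be sketched in one dimension (the kernel and the 1 um grid spacing below are assumed stand-ins for the gridded Green's function; numpy is used):

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """O(N log N) Toeplitz matrix-vector product via circulant embedding + FFT.

    col is the first column and row the first row of the Toeplitz matrix
    (col[0] == row[0]). The matrix is embedded in a 2N-point circulant
    matrix, whose action is diagonalized by the FFT.
    """
    n = len(x)
    c = np.concatenate([col, [0.0], row[:0:-1]])   # circulant first column
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# A 1D stand-in for the gridded kernel 1/(4*pi*|r - r'|): assumed 1 um grid
# spacing, with the singular self-term regularized by a 1 um offset.
n = 64
d = np.arange(n) * 1e-6 + 1e-6
g = 1.0 / (4.0 * np.pi * d)

x = np.random.default_rng(1).standard_normal(n)
fast = toeplitz_matvec(g, g, x)      # symmetric kernel: first row == first column
direct = np.array([sum(g[abs(i - j)] * x[j] for j in range(n)) for i in range(n)])
```

The three-dimensional case in the solver applies the same idea level by level, with a three-level circulant embedding of the gridded Green's function.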
Assuming ${\bf{\bar \alpha }} = {{\bf{A}}^T}{{\bf{I}}_l}$, ${\bf{AL}}_b{{\bf{A}}^T}{{\bf{I}}_l}$ can be rewritten as ${\bf{AL}}_b{\bf{\bar \alpha }}$.
In the pFFT algorithm, the whole computation domain is categorized into two parts: the near- and far-field regions. The pFFT algorithm consists of the following four steps.
\begin{enumerate}
\item Construction of the projection matrix from a triangle pair to its nearby grids, which can be expressed as
\begin{equation}\label{Projection}
{\overline {\bf{Q}} _g} = {\bf{B\bar \alpha }},
\end{equation}
where $\mathbf{B}$ is the projection matrix, and ${\overline {\bf{Q}} _g}$ represents the equivalent point currents on the uniform grids. The collection of these grids is defined as a stencil, and the order of the stencil $O_s$ is determined by the number of grids $N_g$. In general, the $O_s$ grids along each of the $x$, $y$ and $z$ directions nearest to the center of the target triangle are selected to form an $O_s$-order stencil.
\item Construction of the convolution matrix to efficiently calculate the potentials between grid points through the FFT algorithm, which is expressed as
\begin{equation}\label{FFTCal}
{\overline {\bf{\varphi }} _g} = {\bf{H}}{\overline {\bf{Q}} _g},
\end{equation}
where $\mathbf{H}$ is a Toeplitz matrix. The FFT can be used to efficiently calculate the matrix-vector multiplication ${\bf{H}}{\overline {\bf{Q}} _g}$ in $O\left(N\log N\right)$, where $N$ is the overall count of grid points.
\item Construction of the interpolation matrix from the nearby grids to the desired triangle, which is expressed as
\begin{equation}\label{interpolation }
{{\bf{\Phi }}_g} = {\bf{I}}{\overline {\bf{\varphi }} _g},
\end{equation}
where the interpolation matrix $\mathbf{I}$ is the transpose of $\mathbf{B}$ when the Galerkin scheme is applied.
\item For the near-field region, entries of $\mathbf{L}_b$ are directly calculated and collected in $\mathbf{L}_d$. Therefore, $\mathbf{L}_b$ can be approximated as
\begin{equation}\label{Lb}
{\bf{L}}_b \approx {{\bf{L}}_d} + {\bf{IHB}},
\end{equation}
where $\mathbf{L}_d$, $\mathbf{I}$ and $\mathbf{B}$ are all sparse matrices. Finally, the matrix-vector multiplication ${\bf{AL}}_b{{\bf{A}}^T}{{\bf{I}}_l}$ is calculated as ${\bf{A}}\left[{{\bf{L}}_d} + {\bf{IHB}}\right]{{\bf{A}}^T}{{\bf{I}}_l}$.
\end{enumerate}
\section{Numerical Results And Discussion}
In this section, four numerical examples are computed using the proposed formulation and the industrial solver Ansys Q3D, which is based on an SIE formulation for complex structures. The resistance and inductance are calculated, and the results are compared with those from Ansys Q3D. In addition, the effectiveness of the pFFT algorithm and the proposed preconditioner is discussed and verified. Our in-house solver is developed in C++ without any parallelization, and all simulations are carried out on a workstation with a 3.2 GHz CPU and 1 TB of memory.
\subsection{One Rectangular Interconnect}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{case1_model.jpg}
\caption{A rectangular copper interconnect and the excitation configuration. The magnitude of surface current and electric potential at 100 GHz are illustrated.}
\label{Case1-Model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Case1_RLP.jpg}
\caption{The resistance and inductance obtained from the Ansys Q3D and the proposed formulation.}
\label{Case1_RL}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.39\textwidth]{Case1_TMP.jpg}
\caption{The time and memory consumption when different meshes are used to discretize the interconnect.}
\label{Case1_TM}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{case1_Iterations.jpg}
\caption{The number of iterations for the GMRES with and without the preconditioner versus the frequency and meshes.}
\label{Case1_It}
\end{figure}
The first structure is a rectangular copper interconnect with a size of 5 um$\times$7.5 um$\times$200 um, as shown in Fig. \ref{Case1-Model}. A voltage source is added between the two ends of the interconnect. The surface with red arrows is the source, which has a higher potential than the far end of the interconnect; the arrows indicate that currents flow into the interconnect. The corresponding surface with the blue arrow is the sink, which has a lower potential than the source and represents currents flowing out of the interconnect. As mentioned in Section II-A, the surface impedance operator should be used in a limited frequency range; in general it works well when the skin depth is less than one third of the thickness of the interconnect. Therefore, its wideband properties from 10 GHz to 150 GHz were studied, where the surface impedance operator can well represent the relationship between the surface current density and the tangential electric field. We used 428 triangles to discretize the surface of this interconnect, and 230 loops were found to construct (\ref{LoopVandI}). Fig. \ref{Case1_RL} shows the resistance and inductance calculated by Ansys Q3D and the proposed formulation. For the resistance, the two results are in good agreement over the whole frequency range. It can be noticed that the inductance does not vary with frequency for Ansys Q3D. In fact, the formulation used in Ansys Q3D is also the MPIE; the difference is that the objects are treated as perfect electric conductors (PECs), which implies that the left hand side of (\ref{MPIE2}) is zero. Therefore, the inductance is independent of frequency for Ansys Q3D, and the resistance obtained from the proposed formulation shows a slight difference compared with that from Ansys Q3D at very high frequencies.
The effectiveness of the pFFT algorithm is assessed through the runtime and memory consumption shown in Fig. \ref{Case1_TM}. We discretized the interconnect using meshes of different sizes and recorded the runtime and memory consumption at 100 GHz, one of the challenging scenarios in this example. The runtime and memory increase as the number of loops increases, and the corresponding curves approximately follow $O(N\log N)$ and $O(N)$ when the number of unknowns is large enough. When the number of unknowns is very small, some additional operations, such as I/O, take up a large portion of the runtime.
In addition, the convergence of the GMRES solver is very important for efficient parameter extraction. Therefore, we investigated the number of iterations with and without the preconditioner $\mathbf{P}$ in (\ref{Precond}) for a convergence tolerance of $10^{-3}$. The top subfigure of Fig. \ref{Case1_It} shows the number of iterations when the frequency is set to 100 GHz. As the dimension of the system matrix increases, the number of iterations also increases with or without the preconditioner. However, the preconditioner reduces the number of iterations by approximately one order of magnitude when the number of unknowns is large enough. In the bottom subfigure of Fig. \ref{Case1_It}, we used 428 triangles to discretize the interconnect and recorded the number of iterations at different frequencies. Similarly, the proposed preconditioner significantly reduces the overall iteration count and hence accelerates the convergence.
In Fig. \ref{Case1-Model}, the distributions of the electric potential and the magnitude of the surface current are illustrated. The branch currents are calculated by (\ref{CurT}) once the loop currents have been obtained. Similarly, the branch voltages are obtained through (\ref{VolT}). In our simulation, the electric potential on the surface of the sink is set to zero, and then the electric potential on each triangular panel is calculated from the branch voltages.
\subsection{The Bonding Wire Array}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{case2_model_La.jpg}
\caption{The geometric details and the port definitions used in our simulation. The top subfigure shows the magnitude of the surface current when Port\#2 is excited, and the bottom subfigure shows the electric potential.}
\label{Case2_Model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Case2_RLP.jpg}
\caption{The self- and mutual resistance and inductance obtained from Ansys Q3D and the proposed formulation.}
\label{Case2_RL}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.39\textwidth]{Case2_TMP.jpg}
\caption{The runtime and memory consumption when different meshes are used to discretize the bounding wire.}
\label{Case2_TM}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{case2_Iteration.jpg}
\caption{The number of iterations for GMRES with and without the preconditioner versus the frequency and meshes.}
\label{Case2_It}
\end{figure}
A bounding wire array with 52 copper interconnects is considered here. The dimension of this structure is 1.2 mm $\times$ 1.2 mm $\times$ 0.15 mm, and the thickness of each interconnect is 10 um. The ports are defined on the ends of two adjacent bounding wires as shown in Fig. \ref{Case2_Model}. The frequency is set to 25 GHz, and different numbers of unknowns are used to solve this problem. To demonstrate the accuracy, we swept the frequency from 0.5 GHz to 30 GHz and calculated the resistance and inductance, in which 13,041 loops are found to construct the matrix equations. In Fig. \ref{Case2_RL}, the results calculated by the proposed formulation and Ansys Q3D are compared. For both the self-inductance and the mutual inductance, the two solvers show excellent agreement at all sampling frequencies, with a relative error of less than 2\%. The self-resistance also agrees well between the two solvers, but a larger difference, of about 25\%, occurs for the mutual resistance. In fact, the mutual resistance contributes little to the loop resistance due to its small value. The resource consumption, including the runtime and memory, is illustrated in Fig. \ref{Case2_TM}. As the number of loops increases, the runtime and memory consumption again approximately match $O(N\log N)$ and $O(N)$, respectively.
To demonstrate the effectiveness of the preconditioner, the number of iterations as a function of the number of loops and the frequency is recorded and illustrated in Fig. \ref{Case2_It}. As the number of loops increases, the number of iterations also increases with and without the preconditioner. However, the number of iterations grows very slowly when the preconditioner is used: for example, 9 iterations are required for 3,470 loops and 13 iterations for 967,909 loops. In general, the frequency has very little effect on the number of iterations; as the frequency increases, the overall iteration count slightly decreases without the preconditioner and slightly increases with it. In addition, as shown in Fig. \ref{Case2_Model}, the distribution of surface current and electric potential is illustrated when Port\#2 is excited by a voltage source. The accumulation of current on the edges can be observed.
\subsection{Interconnects in A Real-life Circuit}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{case3_model.jpg}
\caption{The bottom view and top view of a real-life circuit with several vias and the excitation configuration.}
\label{Case3_Model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Case3_RLP.jpg}
\caption{The self- and mutual- resistance and inductance obtained from the Ansys Q3D and the proposed formulation.}
\label{Case3_RL}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.39\textwidth]{Case3_TMP.jpg}
\caption{The time and memory consumption when different meshes are used to discretize the circuit.}
\label{Case3_TM}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{case3_Iteration.jpg}
\caption{The number of iterations for the GMRES with and without the preconditioner versus the frequency and meshes.}
\label{Case3_It}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{PotentialAndCurrent_case2.jpg}
\caption{The magnitude of the surface current and the distribution of electric potential when a voltage source is applied on Port\#2.}
\label{Case3_PC}
\end{figure*}
In this example, a real-life circuit with complex interconnects is considered and two ports are defined. The sources are defined on facets at the ends of bounding wires extending from the circuit: Source\#1 consists of the $1^{\text{st}}$, $4^{\text{th}}$, $5^{\text{th}}$, $7^{\text{th}}$, and $10^{\text{th}}$ facets, while Source\#2 is composed of the others. The sinks are set on the ubumps at the bottom of the circuit as shown in Fig. \ref{Case3_Model}. We excited the two ports in turn and calculated the self- and mutual resistance and inductance in the frequency range of 0.2 GHz to 10 GHz. 77,352 triangular facets are used to discretize the surface, and 39,220 loops are required to construct the matrix equations. Fig. \ref{Case3_RL} shows the results obtained from the proposed formulation and Ansys Q3D. For the self- and mutual resistance and inductance, the results calculated by the two solvers show excellent agreement.
Fig. \ref{Case3_TM} shows the runtime and memory consumption when different meshes were used and the working frequency was fixed at 10 GHz. The runtime approximately matches $O(N\log N)$, but grows slightly faster as the number of unknowns increases. The main reason is that, although the runtime of each GMRES iteration matches the complexity of $O(N\log N)$, the number of iterations increases with the number of unknowns, which leads to a slightly larger overall runtime. As for the memory consumption, further optimization of our in-house solver is still required to match $O(N)$ well for large-scale problems.
Similar to the previous two examples, the effectiveness of the preconditioner is investigated through the number of iterations, with the convergence tolerance set to $10^{-3}$. The top subfigure of Fig. \ref{Case3_It} shows the number of iterations over meshes when the frequency is fixed at 10 GHz, and the bottom subfigure relates the number of iterations to the frequency, in which 39,220 loops are used to construct the matrix. The preconditioner significantly accelerates the convergence of the GMRES. In particular, the number of iterations increases very slowly with the number of unknowns when the preconditioner is applied: for 26,256 unknowns, 38 iterations are required with the preconditioner and 123 without it, while for 1,938,754 unknowns the preconditioned case requires 48 iterations compared with 310 without it. In addition, as shown in Fig. \ref{Case3_PC}, we calculated the magnitude of surface current and the distribution of electric potential when a voltage source is applied on Port\#2. The current accumulates on the slender bounding wires, and therefore the electric potential drops very fast along them.
\subsection{Large-scale PDN Modeling}
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{case4_Geodetials.jpg}
\caption{Geometrical details of the PDN with the size of 160 um $\times$ 160 um $\times$ 82 um.}
\label{Case4_Model}
\end{figure}
\begin{figure}
\begin{minipage}[h]{0.45\linewidth}
\centerline{\includegraphics[scale=0.3]{unit_top.jpg}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}[h]{0.45\linewidth}
\centerline{\includegraphics[scale=0.3]{unit_bottom.jpg}}
\centerline{(b)}
\end{minipage}
\caption{The top and bottom views of the PDN and the definition of ports.}
\label{Case4_Port}
\end{figure}
In this example, a power distribution network (PDN) is considered to verify the scalability of the proposed formulation. As shown in Fig. \ref{Case4_Model}, crossed rectangular interconnects with four ubumps on the top and one pad on the bottom are considered, and the structure has a dimension of 16 um $\times$ 16 um $\times$ 8.2 um. The interconnects of different layers are connected through a number of vias, and the pad is coupled to the interconnects through a slender via. The structure in Fig. \ref{Case4_Model} is considered as a unit, which we extend to 2$\times$2, 4$\times$4, and 6$\times$6 arrays. The sources are defined on the ubumps of the two contactless parts, and the sinks are set on the bottoms of the pads as shown in Fig. \ref{Case4_Port}. The resistance and inductance were calculated at 30 GHz. The simulation configuration, results, and resource consumption are presented in Table \ref{table2}.
The first two rows record the overall numbers of triangles and loops used to construct the PDN. Next, the runtime, memory, and number of iterations with and without the preconditioner are presented for a convergence tolerance of $10^{-3}$, in which we summed the numbers of iterations of the two solutions for Port\#1 and Port\#2. The preconditioner still works very well for this example, reducing the number of iterations by one order of magnitude. In addition, the loop resistance and inductance, defined as $R_o = R_{11} - 2R_{12} + R_{22}$ and $L_o = L_{11} - 2L_{12} + L_{22}$, are calculated by the proposed formulation and Ansys Q3D. For the loop inductance, the proposed formulation shows excellent agreement with Ansys Q3D, with a relative error of less than 4\% for the four cases. The loop resistance has a slightly larger error, but still only 7\%. Fig. \ref{PDN66} illustrates the PDNs of different dimensions and the distribution of electric potential, in which the 1$\times$1 and 2$\times$2 arrays are excited on Port\#1 and the 4$\times$4 and 6$\times$6 arrays are excited on Port\#2. There is little change in electric potential in the upper parts of the PDNs, and the electric potential drops very fast on the vias connected to the pads.
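As a quick sanity check, the relative errors between the two solvers can be recomputed from the rounded entries of Table \ref{table2} (note that rounding slightly inflates some of the inductance errors relative to the figures quoted above):

```python
# Loop resistance (ohm) and inductance (nH) as rounded in Table 2,
# for the 1x1, 2x2, 4x4, 6x6 arrays
R_prop = [0.68, 0.17, 0.043, 0.020]
R_q3d = [0.70, 0.18, 0.045, 0.021]
L_prop = [0.083, 0.020, 0.0050, 0.0022]
L_q3d = [0.083, 0.021, 0.0052, 0.0022]

def rel_errors(proposed, reference):
    """Elementwise relative error against the reference solver."""
    return [abs(p - r) / abs(r) for p, r in zip(proposed, reference)]

# Largest resistance error stays below 7%, largest inductance error below 5%
print(["%.1f%%" % (100 * e) for e in rel_errors(R_prop, R_q3d)])
print(["%.1f%%" % (100 * e) for e in rel_errors(L_prop, L_q3d)])
```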
\renewcommand\arraystretch{1.2}
\begin{table}
\begin{center}
\caption{Simulation configuration, resource consumption, loop resistance and inductance obtained from the proposed formulation and Ansys Q3D}\label{table2}
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{$f=30$ GHz, tol = $10^{-3}$}\\
\hline
\hline
\, & 1$\times$1 & 2$\times$2 & 4$\times$4 & 6$\times$6 \\
\cline{1-5}
\rule{-3pt}{9pt}
Num. triangles & 62,590 & 261,928 & 1,051,950 & 1,933,368 \\
\cline{1-5}
\rule{-3pt}{9pt}
Num. Loops & 33,047 & 138,735 & 558,084 & 1,025,824\\
\cline{1-5}
\rule{-3pt}{15pt}
\makecell[l]{Num. iterations \\ with preconditioner} & 21 & 31 & 44 & 54\\
\cline{1-5}
\rule{-3pt}{15pt}
\makecell[l]{Num. iterations \\ w$\backslash$o preconditioner} & 230 & 423 & 804 & 1,811\\
\cline{1-5}
\rule{-3pt}{9pt}
\makecell[l]{Runtime (s)} & 485 & 3,566 & 18,401 & 47,220\\
\cline{1-5}
\rule{-3pt}{9pt}
\makecell[l]{Memory (Mb)} & 1,307 & 9,948 & 104,186 & 222,494\\
\hline
\hline
\rule{-3pt}{9pt}
\makecell[l]{$R_o$-proposed (ohm)} & 0.68 & 0.17 & 0.043 & 0.020\\
\cline{1-5}
\rule{-3pt}{9pt}
\makecell[l]{$L_o$-proposed (nH)} & 0.083 & 0.020 & 0.0050 & 0.0022\\
\cline{1-5}
\rule{-3pt}{9pt}
\makecell[l]{$R_o$-Ansys Q3D (ohm)} & 0.70 & 0.18 & 0.045 & 0.021\\
\cline{1-5}
\rule{-3pt}{9pt}
\makecell[l]{$L_o$-Ansys Q3D (nH)} & 0.083 & 0.021 & 0.0052 & 0.0022\\
\cline{1-5}
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\begin{minipage}[h]{0.1\linewidth}
\centerline{\includegraphics[scale=0.061]{1_cut.jpg}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}[h]{0.2\linewidth}
\centerline{\includegraphics[scale=0.061]{2_cut.jpg}}
\centerline{(b)}
\end{minipage}
\hfill
\begin{minipage}[h]{0.22\linewidth}
\centerline{\includegraphics[scale=0.061]{3_cut.jpg}}
\centerline{(c)}
\end{minipage}
\hfill
\begin{minipage}[h]{0.4\linewidth}
\centerline{\includegraphics[scale=0.074]{4_cut.jpg}}
\centerline{(d)}
\end{minipage}
\caption{The illustration of the 1$\times$1, 2$\times$2, 4$\times$4, and 6$\times$6 PDNs. The electric potential is presented when Port\#1 is excited for the 1$\times$1 and 2$\times$2 PDNs and Port\#2 is excited for the 4$\times$4 and 6$\times$6 PDNs.}
\label{PDN66}
\end{figure*}
\section{Conclusion}
In this paper, an MQS-SIE formulation with loop analysis is proposed to model interconnects in packages at high frequencies. Triangle discretization is used to support surface current flowing in any direction. Graph theory from circuit analysis is introduced to construct independent and complete loop equations. By transferring the branch quantities to loop quantities, the dimension of the system matrix is reduced by about 80\%. In addition, the pFFT algorithm is successfully applied to the proposed formulation, and an efficient preconditioner is developed to speed up the convergence of the GMRES. The numerical examples verify the scalability and effectiveness of the proposed formulation, and accurate results are obtained compared with those from the industrial solver Ansys Q3D.
However, the uniform grid required by the traditional pFFT algorithm wastes computational resources, especially for multiscale structures. Therefore, the optimization of the pFFT algorithm, for example through hierarchical algorithms, will be our future work.
\section{Walled Brauer Algebra and port-based like teleportation protocols}
\section{(Multi) Port-based teleportation protocols and their importance}
Quantum teleportation is one of the most important primitives in quantum information science. It performs the transmission of an unknown quantum state between two spatially separated systems. It requires a pre-shared entangled resource state and consists of three elements: a joint measurement, classical communication, and a \textit{correction operation} depending on the result of the measurement.
Besides the quantum teleportation protocol presented by Bennett et al. in~\cite{bennett_teleporting_1993}, we distinguish the Knill-Laflamme-Milburn (KLM) scheme~\cite{knill_scheme_2001}, based solely on linear optical tools, and the so-called Port-based Teleportation (PBT) protocols, introduced in~\cite{ishizaka_asymptotic_2008}. Although standard teleportation and the KLM scheme are of great importance and have fundamental meaning for the field, with a range of important applications~\cite{boschi_experimental_1998,gottesman_demonstrating_1999,gross_novel_2007, jozsa_introduction_2005, pirandola_advances_2015, raussendorf_one-way_2001, zukowski_event-ready-detectors_1993}, here we focus on PBT schemes. One of the main reasons is that PBT is the only scheme where, in the last step, the \textit{unitary correction is absent}.
The lack of correction in the last step allows for entirely new applications in modern quantum information science, and the high degree of symmetry of the scheme makes it tempting for analysis by representation-theoretic methods. For instance, PBT has found its place in non-local quantum computation and position-based cryptography~\cite{beigi_konig}, resulting in new attacks on cryptographic primitives that reduce the amount of consumable entanglement from doubly exponential to exponential; in communication complexity~\cite{buhrman_quantum_2016}, connecting the field of communication complexity and Bell inequality violations; in the theory of universal programmable quantum processors performing computation by teleportation~\cite{ishizaka_asymptotic_2008}; and in universal simulators for qubit channels~\cite{sim}, improving simulations of the amplitude damping channel and allowing one to obtain limitations of a fundamental nature for quantum channel discrimination~\cite{limit}.
Some aspects of PBT also play a role in the general theory of constructing universal quantum circuits for inverting general unitary operations~\cite{PhysRevLett.123.210502}, as well as in the theory of storage and retrieval of unitary quantum channels~\cite{Stroing}.
In the original formulation of the PBT scheme, see Figure~\ref{FPBT}, two parties share a resource state consisting of $N$ copies of the maximally entangled state $|\psi^+\>$, each of them called a port.
\begin{figure}[h]
\begin{centering}
\includegraphics[width=0.8\textwidth]{p2v3a2v1.png}
\caption{On the left-hand side we present the vanilla scheme of standard PBT. Two parties share $N$ copies of the EPR pair $\Phi_d^+=|\psi^+_d\>\<\psi^+_d|$, where $|\psi^+_d\>=(1/\sqrt{d})\sum_i |ii\>$. To teleport an unknown state $\psi_C$, Alice applies a joint measurement (the blue trapeze) to the state to be teleported and her half $A_1\cdots A_N$ of the resource state, getting a classical outcome $i$, which is transmitted to Bob by a classical channel. The index $i$ indicates the port on Bob's side (red star) on which the teleported state appears. On the right-hand side we present the basic scheme of multi-port teleportation. Again, two parties share $N$ copies of the EPR pair $\Phi_d^+=|\psi^+_d\>\<\psi^+_d|$, where $|\psi^+_d\>=(1/\sqrt{d})\sum_i |ii\>$. To teleport an unknown joint state $\psi_C=\psi_{C_1C_2\cdots C_k}$, where $k\leq \floor{N/2}$, to Bob, Alice performs a global measurement (the blue trapeze) on the systems $C_1\cdots C_kA_1\cdots A_N$, getting a classical outcome $\mathbf{i}=(i_1,i_2,\ldots,i_k)$. She transmits the outcome $\mathbf{i}$ to Bob via classical communication. The index $\mathbf{i}$ indicates on which $k$ ports on Bob's side the teleported state arrives (red stars). To recover the teleported state, Bob has to pick up the ports indicated by $\mathbf{i}$ in the right order.}
\label{FPBT}%
\end{centering}
\end{figure}
To teleport an unknown state $\psi_C$ to Bob, Alice performs a joint measurement on it and her half of the resource state, communicating the outcome through a classical channel to Bob. It turns out that the outcome received by Bob points to the system of the resource state to which the state has been teleported. We distinguish two versions of the PBT protocol: \textit{deterministic} (dPBT) and \textit{probabilistic} (pPBT). In the first case, after the measurement Alice obtains a classical outcome $i\in \{1,\ldots, N\}$. In this scenario the unknown state is always teleported, but it decoheres during the process. To quantify the efficiency we compute the entanglement fidelity, checking how well we are able to transmit half of a maximally entangled state. From the no-go theorem~\cite{bennett_teleporting_1993} for the deterministic universal processor, we know that we can achieve perfect teleportation only in the asymptotic limit $N\rightarrow \infty$. In the second, probabilistic case, Alice obtains a classical outcome $i\in \{0,1,\ldots,N\}$, where the index 0 corresponds to an additional measurement operator $\Pi_0^{AC}$ indicating the failure of the teleportation process. In all other cases of pPBT, when $i\in \{1,\ldots,N\}$, the parties proceed with the procedure and the state is teleported perfectly. To quantify the efficiency, we compute the average probability of success of such a process. Similarly to the deterministic case, the probability equals 1 only in the asymptotic limit $N\rightarrow \infty$. In each case, we can also consider \textit{optimised PBT}, where Alice optimises jointly over the shared state and the measurements before she runs the protocol to increase its efficiency; see~\cite{ishizaka_quantum_2009} for further details.
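The deterministic protocol can be simulated directly for small $N$. The sketch below is our own illustration (the operator conventions are ours): it builds the signal states $\sigma_i=\phi^+_{CA_i}\otimes \mathbf{1}_{\bar{A}_i}/d^{N-1}$, forms the square-root (pretty good) measurement used in the original protocol, and evaluates the entanglement fidelity $F=\frac{1}{d^2}\sum_i \operatorname{Tr}[\Pi_i\sigma_i]$, which grows towards 1 with the number of ports:

```python
import numpy as np

def pbt_fidelity(N, d=2):
    """Entanglement fidelity of (non-optimised) deterministic PBT with N ports,
    local dimension d, and the square-root measurement on the signal states."""
    dim = d ** (N + 1)                       # tensor factors: C, A_1, ..., A_N
    phi = np.zeros(d * d)
    phi[::d + 1] = 1.0 / np.sqrt(d)          # |phi+> = (1/sqrt d) sum_k |kk>
    phi_proj = np.outer(phi, phi)

    def sigma(i):
        # phi+ on (C, A_1), maximally mixed on the remaining ports, then
        # swap the tensor factors A_1 <-> A_i by an index permutation
        s = np.kron(phi_proj, np.eye(d ** (N - 1))) / d ** (N - 1)
        if i == 1:
            return s
        t = s.reshape([d] * (2 * (N + 1)))
        p = list(range(2 * (N + 1)))
        p[1], p[i] = p[i], p[1]                              # row indices
        p[N + 2], p[N + 1 + i] = p[N + 1 + i], p[N + 2]      # column indices
        return t.transpose(p).reshape(dim, dim)

    sigmas = [sigma(i) for i in range(1, N + 1)]
    rho = sum(sigmas)
    w, v = np.linalg.eigh(rho)
    inv_sqrt = np.array([1 / np.sqrt(x) if x > 1e-12 else 0.0 for x in w])
    rho_is = v @ np.diag(inv_sqrt) @ v.T     # pseudo-inverse square root of rho
    # POVM elements Pi_i = rho^{-1/2} sigma_i rho^{-1/2} (square-root measurement)
    return sum(np.trace(rho_is @ s @ rho_is @ s) for s in sigmas) / d ** 2

for N in (1, 2, 3, 4):
    print(N, round(pbt_fidelity(N), 4))
```

For $N=1$ the measurement carries no information and the fidelity equals $1/d^2$, matching the value obtained analytically; the classical threshold for qubits is $F=1/2$.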
Effective evaluation of the performance of both variants of PBT requires determining all the symmetries that occur in the problem and a spectral analysis of certain operators. For qubits this was done in~\cite{ishizaka_asymptotic_2008,ishizaka_quantum_2009} by exploiting the representation theory of $SU(2)^{\ot N}$, in particular properties of Clebsch-Gordan (CG) coefficients, together with semidefinite programming. Unfortunately, such methods do not work effectively in higher dimensions, $d>2$, because in the case of $SU(d)^{\ot N}$ there is no closed form for the CG coefficients, and computing them requires an overhead exponential in $N$ and $d$.
The first attempt to describe the efficiency of PBT in higher dimensions was made in~\cite{wang_higher-dimensional_2016} by exploiting elements of Temperley-Lieb algebra theory, mostly in its graphical representation. The authors presented closed expressions for the entanglement fidelity as well as the probability of success for arbitrary $d$ and $N=2,3,4$.
Next, in the papers~\cite{Studzinski2017,StuNJP,MozJPA}, the authors developed new mathematical tools allowing for studies of PBT for arbitrary $N$ and $d$. From a technical point of view, the crucial role is played by the algebra of partially transposed permutation operators and its irreducible components or, in other words, by the irreducible representations of the commutant of $U^{\otimes (n-1)}\otimes \overline{U}$, where the bar denotes complex conjugation, and $U$ is an element of the unitary group $\mathcal{U}(d)$. It turns out that the basic objects describing all variants of PBT belong to the mentioned commutant. Knowing the full description of the irreducible spaces, we can reduce the analysis to each block separately and express the entanglement fidelity and the probability of success in terms of the parameters describing the respective irreducible blocks, such as multiplicities and dimensions.
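The defining property of this commutant is easy to verify numerically for small $n$ and $d$: after partially transposing the last subsystem, every permutation operator commutes with $U^{\otimes(n-1)}\otimes\overline{U}$, while without the transposition this generally fails. A small sketch of our own, for $n=3$, $d=2$, and a random unitary:

```python
import itertools
import numpy as np

def perm_operator(sigma, d):
    """V_sigma on (C^d)^{(x) n}, permuting the tensor factors according to sigma."""
    n = len(sigma)
    dims = [d] * n
    V = np.zeros((d ** n, d ** n))
    for idx in itertools.product(range(d), repeat=n):
        out = tuple(idx[sigma[k]] for k in range(n))
        V[np.ravel_multi_index(out, dims), np.ravel_multi_index(idx, dims)] = 1.0
    return V

def partial_transpose_last(M, n, d):
    """Partial transposition over the last tensor factor."""
    t = M.reshape([d] * (2 * n))
    p = list(range(2 * n))
    p[n - 1], p[2 * n - 1] = p[2 * n - 1], p[n - 1]  # swap row/col index of factor n
    return t.transpose(p).reshape(d ** n, d ** n)

n, d = 3, 2
rng = np.random.default_rng(7)
z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
U = np.linalg.qr(z)[0]                      # a random unitary
W = np.kron(np.kron(U, U), U.conj())        # U (x) U (x) conj(U)

worst = 0.0
for sigma in itertools.permutations(range(n)):
    Vt = partial_transpose_last(perm_operator(sigma, d), n, d)
    worst = max(worst, np.linalg.norm(W @ Vt - Vt @ W))
print(worst)  # numerically zero: every V_sigma^{T_3} lies in the commutant

V23 = perm_operator((0, 2, 1), d)           # swap of the last two factors, no T_3
print(np.linalg.norm(W @ V23 - V23 @ W))    # nonzero: plain V_sigma need not commute
```

The same check extends to $U^{\otimes N}\otimes\overline{U}^{\otimes k}$ by transposing the last $k$ factors.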
Finally, in the paper~\cite{majenz} the authors investigated the asymptotic behaviour of PBT schemes, which had not been covered in the previous works. Their results required advanced tools coming from connections between representation-theoretic formulas and random matrix theory.
Despite all the results presented above, many important questions in the field of PBT protocols remain to be answered.
Here we focus on the following problem:
\textit{What is the most effective way to teleport, using PBT-like protocols, a state of a composite system or of several systems, say $k$?} Possible answers include the following:
\begin{itemize}
\item The most obvious one is to run the original PBT with the dimension of the port equal to $d^k$; however, the performance of PBT protocols degrades with growing local dimension~\cite{majenz,StuNJP}.
\item We could also keep the dimensions of the ports, split the resource state into $k$ packages, and then run $k$ separate PBT procedures independently. Such an analysis, together with some aspects of the asymptotic discussion of the teleportation protocols analysed here, is presented in \cite{kopszak2021multiport}.
\end{itemize}
In the next sections of this paper, we show that by allowing Bob a mild correction in the form of a permutation of ports, we can find a class of multi-port teleportation protocols (see the right panel of Figure~\ref{FPBT}) with high performance, measured in terms of entanglement fidelity or probability of success. This class of protocols allows us to transfer the state with higher performance than the respective PBT schemes mentioned above.
To obtain the final answers we deliver {\it novel mathematical tools} concerning both the standard Schur-Weyl duality based on the $n$-fold tensor product of unitary transformations, $U^{\ot n}$, as well as its ``skew'' version based on a product of the type $U^{\otimes N} \otimes \overline{U}^{\otimes k}$ (where the bar denotes complex conjugation). By considering irreducible representations of the commutant of $U^{\otimes N} \otimes \overline{U}^{\otimes k}$ (with $n=N+k$), we show its connection with the \textit{algebra of partially transposed permutation operators} $\mathcal{A}_n^{(k)}(d)$, composed of all linear combinations of standard permutation operators deformed by the operation of partial transposition over the last $k$ subsystems. In fact, our work covers the previously unexplored field of finding irreducible matrix representations of the Walled Brauer Algebra~\cite{Cox1}.
The toolkit presented here is not tailored only to the effective description of port-based-like teleportation protocols; the aforementioned kinds of symmetries appear in many problems of modern physics and mathematics.
From the perspective of physics, studying quantum systems with such symmetries plays an important role in antiferromagnetic systems~\cite{Candu2011}. In that paper the author considers the spectrum of an integrable antiferromagnetic Hamiltonian of the $gl(M|N)$ spin chain of alternating fundamental and dual representations. In particular, to reduce the complexity of the numerical diagonalisation of the considered Hamiltonian, the author applies non-trivial tools emerging from the theory of the Walled Brauer Algebra. Here, our new tools possibly enable a more analytical approach to the problem, or at least further numerical simplifications.
Similar kinds of symmetries have found their place even in some aspects of gravity theories~\cite{1126-6708-2007-11-078,2008PhRvD..78l6003K} and particle physics~\cite{Kimura2010}. There, by applying elements of the representation theory of (Walled) Brauer Algebras and Schur-Weyl duality, the authors focus on the diagonalisation of two-point functions of gauge-invariant multi-matrix operators. In particular, they describe how the labels appearing in the diagonal bases are related to the respective Casimir operators and irreducible components of the Brauer and Walled Brauer Algebras. It turns out that a deeper understanding of the spectrum of states from the point of view of conformal field theory (CFT) yields information about space-time physics via the AdS/CFT duality~\cite{Maldacena1999}.
Next, our analysis could be applied in the study of the theory of entanglement and positive maps. The first such approach was made by Werner and Eggeling in the seminal paper~\cite{PhysRevA.63.042111}, where a full analysis of tripartite $U \otimes U \otimes U$ invariant states was made, concentrating on their positive partial transpose (PPT) property, which is equivalent to considering $U\otimes U\otimes \overline{U}$ invariant operators. Our tools can in principle be used for the characterisation of multipartite states after having previously chosen the systems to be transposed. This field, although old, is still under exploration, especially in the context of positive and $k$-positive maps. To support this claim, let us point to recent papers by Collins and co-authors~\cite{Collins_2018,Bardet_2020}.
Furthermore, motivated by the recent results on Temperley-Lieb quantum channels~\cite{Brannan_2020}, one can apply the methods developed here, together with the above investigations, to the problem of constructing examples of new quantum channels for which the minimum output R{\'e}nyi entropy is not additive.
The tools described in this paper are also sufficient for the full description of universal $M\rightarrow N$ quantum cloning machines (where $M<N$) in a group-theoretic manner. Such an approach has been successfully applied to universal $1\rightarrow N$ quantum cloning machines in~\cite{PhysRevA.89.052322}.
To address at least a part of the problems described above, we need to diagonalise and investigate the properties of operators representing certain physical quantities. To do so, we have to construct an analogue of the celebrated Young-Yamanouchi basis for the symmetric group $S(n)$. This is the standard way of investigating operators which are $U^{\otimes n}$ invariant, and thus have a non-trivial component only in the symmetric part of the Schur-Weyl duality. However, in our case the symmetry is deformed - we have $k$ complex conjugations - so the straightforward approach suggested by the Schur-Weyl duality is not enough. Combining it with the approach based on considering the dual representation to $\overline{U}$, as was done in~\cite{majenz}, is also insufficient, since we need full information about the matrix entries of the respective operators on the non-trivial sectors, just as for $U^{\otimes n}$ invariant operators, and pre-existing methods simply do not have access to these sectors of the space.
From the perspective of pure mathematics, we deliver tools for studying and understanding the Walled Brauer Algebra~\cite{Cox1}, which is a subalgebra of the Brauer Algebra~\cite{Brauer1}, at the friendliest level for potential applications: irreducible matrix representations. Namely, the algebra of partially transposed permutation operators studied here is a representation of the Walled Brauer Algebra on the space $(\mathbb{C}^d)^{\otimes n}$. To our knowledge, it is the first result of this kind at this level of generality. We can go even further and build a bridge between our tools, the above-mentioned physical applications, and the transposed Jucys-Murphy elements~\cite{Mu,Ju}, which in their undeformed form generate a commutative subalgebra of $\mathbb{C}[S(n)]$. This opens a new path: the opportunity to study deformations of the permutation group $S(N)$ within the novel approach to representation theory put forward in~\cite{Okunkov}.
The structure of this paper is as follows. In Section~\ref{summary} we give a summary of all the findings presented in the manuscript. In Section~\ref{interest} we rigorously introduce the multi-port-based teleportation schemes and discuss the quantities of interest, which are the entanglement fidelity and the probability of success. Next, in Section~\ref{sym} we briefly discuss the occurring symmetries. We explain the connection with the algebra of partially transposed permutation operators and the necessity of finding its irreducible components. In Section~\ref{defs} we introduce the basic notions of the representation theory of the permutation group. We explain how to compute the basic quantities describing irreducible representations, such as dimensions and multiplicities, and we show how to construct an operator basis in every irreducible component. Schur-Weyl duality and the notion of Young's lattice are also briefly explained. Most of this material is taken from~\cite{vershik}. In Section~\ref{preliminary} we prove a few results concerning partially transposed permutation operators. The notion of a partially reduced irreducible representation (PRIR) is introduced in a version generalising previous results. Using these two ingredients, we prove a certain summation rule for matrix elements of irreducible representations of permutations, which is, to our best knowledge, not known in the literature. Finally, we present results on partial traces of the operator basis in every irreducible space of the permutation group. In Section~\ref{comm_structure} we present the main mathematical results of our paper. We construct an operator basis in every irreducible representation of the algebra of partially transposed permutation operators. Next, using this result, we compute the matrix elements of the port-based teleportation operator determining the performance of the teleportation schemes. We show that this object is diagonal in our basis, allowing us to determine its spectral decomposition. Having all the mathematical results, in Section~\ref{detkPBT} and Section~\ref{probkPBT} we describe the deterministic and probabilistic MPBT schemes and derive expressions describing their performance. We end with Section~\ref{diss}, where we discuss our results and present possible ways of further exploring the idea of multi-port-based teleportation schemes, for example by simultaneous optimisation of the resource state and Alice's measurements.
\section{Summary of the main results}
\label{summary}
In this paper we present results of a twofold nature. Firstly, we introduce tools relating the characterisation of the structure of the algebra $\mathcal{A}_n^{(k)}(d)$ to new technical results for practical calculations in the symmetric group $S(n)$. Secondly, we apply our tools to characterise a class of multi-port-based teleportation protocols (MPBT).
\\
{\bf Results concerning the symmetric group $S(n)$ and the algebra $\mathcal{A}_n^{(k)}(d)$:}
\begin{enumerate}[1)]
\item In Proposition~\ref{summation0} we deliver a new summation (orthogonality) rule for irreducible representations of the symmetric group $S(n)$, motivated by the celebrated Schur orthogonality relations~\cite{Curtis}. This summation rule allows us to effectively compute and simplify quantities regarding MPBT protocols, especially when computing matrix elements of the MPBT operator describing the properties of the deterministic scheme. It is also important by itself, giving a deeper understanding of the connection between matrix elements of a subgroup $H \subset S(n)$ and those of the whole group $S(n)$.
\item We present effective tools for computing partial traces over an arbitrary number of systems of the irreducible operator basis in every irrep of $S(n)$ emerging from the Schur-Weyl duality. This is contained in Lemma~\ref{L3} and Corollary~\ref{corL3}. To the best of our knowledge these results are new at this level of generality, extending the results of~\cite{Aud,Christandl_2007}. Since these tools allow for effective calculation of partial traces in the group algebra of $S(n)$, which is often needed in quantum information science, they are of independent interest.
\item We show that the algebra $\mathcal{A}_n^{(k)}(d)$ of partially transposed permutation operators is in fact a matrix representation of the Walled Brauer Algebra on the space $(\mathbb{C}^d)^{\otimes n}$. This connection, due to~\cite{Cox1}, gives us all the ideals of the considered algebra and shows how they are nested. In particular we identify the maximal ideal $\mathcal{M}$ (see Figure~\ref{structure_M}), which is the main object for the further understanding of multi-port based teleportation schemes. This identification is implied by the symmetries exhibited by our new teleportation protocols.
\item We construct an orthonormal irreducible operator basis in the maximal ideal (Theorem~\ref{tmbas}). We show what the structure of the irreducible blocks looks like and explain their connection with the irreps of the symmetric groups $S(n)$ and $S(n-2k)$. In fact, this result gives us a way of constructing irreducible matrix representations of the Walled Brauer Algebra in the maximal ideal $\mathcal{M}$ on the space $(\mathbb{C}^d)^{\otimes n}$, which is the first result of its kind in the literature.
It is an analogue of the following basic result regarding representations of $S(n)$ on $(\mathbb{C}^d)^{\otimes n}$:
\begin{align}
E_{ij}^\lambda=
\frac{d_{\lambda}}{n!}\sum_{\sigma\in S(n)}
\phi_{ji}^{\lambda}(\sigma^{-1}) V_{\sigma},
\end{align}
where $\lambda$ labels irreps of $S(n)$ of dimension $d_{\lambda}$, $V_{\sigma}$ is the permutation operator that permutes subsystems in
$(\mathbb{C}^d)^{\otimes n}$ according to the permutation $\sigma\in S(n)$, and finally the numbers $\phi_{ji}^{\lambda}(\sigma^{-1})$ are
the matrix elements of the irreducible representation of $\sigma$.
The above formula is actually a general formula that works for any representation of a finite group. However, in our case we have a representation of an algebra which is not a group algebra, and there is no such general formula.
We also construct a set of projectors onto the irreducible blocks of the algebra $\mathcal{A}_n^{(k)}(d)$ in the maximal ideal $\mathcal{M}$, see Definition~\ref{efy} and Lemma~\ref{efy2}.
\item
In the considered basis
we find {\it matrix elements} of the basic objects of our study, namely the permutation operators partially transposed on $k$ systems
belonging to the maximal ideal $\mathcal{M}$, as well as those permutation operators that are not affected by the partial transpose (Lemma~\ref{Vel}).
Our matrix elements are analogues of the matrix elements
$\phi_{ij}^\lambda(\sigma)$ of irreps of $S(n)$ in the Young-Yamanouchi basis.
They are connected with the parameters describing irreps of the symmetric groups $S(n)$ and $S(n-2k)$.
This is a non-trivial extension of the tools used in the Schur-Weyl duality to the case when one has to deal with a symmetry of a different type (partial symmetry), $U^{\otimes(n-k)}\otimes \overline{U}^{\otimes k}$, where the existing tools cannot be applied straightforwardly. This was made possible by introducing the notion of partially reduced irreducible representations, involving the concept of induced representations and properties of subgroups. These tools allow for effective calculation of compositions and partial traces of operators exhibiting partial symmetries, see for example Lemma~\ref{A1}, Lemma~\ref{A2} or Lemma~\ref{simple}, and they will surely find applications far beyond MPBT protocols.
\end{enumerate}
{\bf Results concerning multi-port based teleportation:}
\begin{enumerate}[1)]
\item We investigate multi-port based teleportation schemes by identifying all their symmetries and present their connection with the algebra $\mathcal{A}_n^{(k)}(d)$, and thus with matrix representations of the Walled Brauer Algebra. We describe two variants, a deterministic and a probabilistic one. In particular, we show explicitly how the operators encoding the performance of MPBT, like signal states and measurements, decompose in terms of partially transposed permutation operators (Sections~\ref{interest},~\ref{sym}).
\item Next, having constructed the irreducible basis in the maximal ideal $\mathcal{M}$ of the algebra $\mathcal{A}_n^{(k)}(d)$, we prove Theorem~\ref{kPBTmat} and Theorem~\ref{eig_dec_rho}. In particular, these results show that the MPBT operator encoding the properties of our protocols is diagonal in the projectors onto irreps of the algebra $\mathcal{A}_n^{(k)}(d)$, which are known thanks to the first part of the paper. It is important to stress here that adaptation of pre-existing tools like the Schur-Weyl duality and the representation dual to $\overline{U}$, which led to the re-derivation of some known results in PBT~\cite{majenz} for $k=1$, is not enough here. This is because to obtain all the results one must have an orthogonal irreducible operator basis in every irreducible sector of the underlying algebra, which was not known previously.
\item In the deterministic case we prove Theorem~\ref{Fthm}, in which we present an explicit expression for the entanglement fidelity $F$ of the protocol when the parties share $N=n-k$ maximally entangled states of local dimension $d$ each and use square-root measurements:
\be
F=\frac{1}{d^{N+2k}}\sum_{\alpha \vdash N-k}\left(\sum_{\mu\in\alpha}m_{\mu/\alpha} \sqrt{m_{\mu}d_{\mu}}\right)^2,
\ee
where $m_{\mu},d_{\mu}$ denote the multiplicity and dimension of the irreducible representation $\mu$ of $S(N)$ in the Schur-Weyl duality, and $m_{\mu/\alpha}$ denotes the number of paths on the reduced Young's lattice by which the diagram $\mu$ can be obtained from the diagram $\alpha$ by adding $k$ boxes. The efficiency of the new deterministic protocol compared with deterministic PBT when teleporting a composite system is depicted in Figure~\ref{fig:test2}. In this case, we perform significantly better even than the optimal PBT.
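The closed formula above can be evaluated directly for small parameters. The sketch below is our own minimal numerical illustration, not part of the protocol: the helper names (\verb|partitions|, \verb|dim_irrep|, \verb|mult_irrep|, \verb|num_paths|) are ours and implement the standard hook-length, Weyl-dimension and lattice-path definitions recalled in Section~\ref{defs}. For instance, for $N=2$, $k=1$, $d=2$ the sum reduces to a single term $\alpha=(1)$ with $\mu\in\{(2),(1,1)\}$, giving $F=(\sqrt{3}+1)^2/16=(2+\sqrt{3})/8$.

```python
from math import factorial, sqrt

def partitions(n, max_rows, max_first=None):
    """Young diagrams of n boxes with at most max_rows rows."""
    if max_first is None:
        max_first = n
    if n == 0:
        yield ()
        return
    if max_rows == 0:
        return
    for first in range(min(n, max_first), 0, -1):
        for rest in partitions(n - first, max_rows - 1, first):
            yield (first,) + rest

def dim_irrep(mu):
    """Dimension d_mu of the S(n) irrep mu, by the hook length formula."""
    hooks = 1
    for i, row in enumerate(mu):
        for j in range(row):
            arm = row - j - 1                          # boxes to the right of (i,j)
            leg = sum(1 for r in mu[i + 1:] if r > j)  # boxes below (i,j)
            hooks *= arm + leg + 1
    return factorial(sum(mu)) // hooks

def mult_irrep(mu, d):
    """Multiplicity m_mu in Schur-Weyl duality (Weyl formula, mu padded to d rows)."""
    m = list(mu) + [0] * (d - len(mu))
    num = den = 1
    for i in range(d):
        for j in range(i + 1, d):
            num *= m[i] - m[j] + j - i
            den *= j - i
    return num // den

def add_one_box(alpha, max_rows):
    """Diagrams covering alpha in the reduced Young's lattice."""
    a, out = list(alpha), []
    for r in range(min(len(a) + 1, max_rows)):
        cur = a[r] if r < len(a) else 0
        prev = a[r - 1] if r > 0 else float("inf")
        if cur < prev:
            b = a + [0] * (r + 1 - len(a))
            b[r] += 1
            out.append(tuple(x for x in b if x > 0))
    return out

def num_paths(alpha, mu, max_rows):
    """m_{mu/alpha}: number of lattice paths from alpha up to mu."""
    if alpha == mu:
        return 1
    inside = lambda nu: all(x <= (mu[i] if i < len(mu) else 0) for i, x in enumerate(nu))
    return sum(num_paths(nu, mu, max_rows) for nu in add_one_box(alpha, max_rows) if inside(nu))

def fidelity(N, k, d):
    """Entanglement fidelity of the deterministic MPBT scheme from the formula above."""
    total = 0.0
    for alpha in partitions(N - k, d):
        s = 0.0
        for mu in partitions(N, d):
            p = num_paths(alpha, mu, d)
            if p:
                s += p * sqrt(mult_irrep(mu, d) * dim_irrep(mu))
        total += s * s
    return total / d ** (N + 2 * k)
```

For $k=1$ every path count $m_{\mu/\alpha}$ equals one whenever $\mu$ covers $\alpha$, consistent with the $k=1$ reduction to standard PBT discussed later in the text.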
\begin{figure}[h!]
\includegraphics[width=.55\linewidth]{entfidelity_standard_pbt_vs_irrep_ver2.png}
\caption{The performance of the deterministic version of our protocol, measured in entanglement fidelity $F$, for various choices of the initial parameters, which are the local dimension $d$, the number of ports $N$ and the number of teleported particles $k$. One can see that we achieve better performance in teleporting a state of two qubits ($d=2,k=2$) than the standard PBT scheme with the appropriate port dimension ($d=4,k=1$), as well as the optimal one (OPT). }
\label{fig:test2}
\end{figure}
\item In the probabilistic case we prove Theorem~\ref{thm_p}, in which we connect the probability of success $p$ with quantities describing the symmetric groups $S(N)$ and $S(N-k)$:
\be
p=\frac{k!\binom{N}{k}}{d^{2N}}\sum_{\alpha \vdash N-k}\mathop{\operatorname{min}}\limits_{\mu\in\alpha}\frac{m_{\alpha}d_{\alpha}}{\lambda_{\mu}(\alpha)},
\ee
The numbers $\lambda_{\mu}(\alpha)$ are eigenvalues of the MPBT operator and $m_{\alpha}, d_{\alpha}$ denote the multiplicity and dimension of the irrep labelled by $\alpha$ in the Schur-Weyl duality. The optimal measurements are also derived in the same theorem. The efficiency of the new probabilistic protocol compared with probabilistic PBT when teleporting a composite system is depicted in Figure~\ref{fig:test3}. In this case we outperform the optimal PBT scheme for $k\geq 3$. We obtain these results by solving both the primal and the dual problem and showing that their solutions coincide, giving us the exact value of the probability and the form of the optimal measurements. Exploiting the symmetries of the protocol with the mathematical tools developed in this paper, we were able to solve the optimisation problem analytically, which is not generally possible in optimisation theory.
\begin{figure}[h!]
\includegraphics[width=.55\linewidth]{p_succ_standard_pbt_vs_irrep_ver2.png}
\caption{The performance of the probabilistic version of our protocol, measured in success probability $p$, for various choices of the initial parameters, which are the local dimension $d$, the number of ports $N$ and the number of teleported particles $k$. One can see that we start achieving better performance than the corresponding optimal PBT scheme with the appropriate port dimension for a state of three qubits ($d=2,k=3$). }
\label{fig:test3}
\end{figure}
\end{enumerate}
\section{Quantities of interest - entanglement fidelity and probability of success}
\label{interest}
In multi-port based teleportation protocols Alice wishes to send to Bob an unknown composite qudit quantum state $\psi_C=\psi_{C_1C_2\cdots C_k}$, for $k\leq \floor{N/2}$, through $N$ ports, each given as a maximally entangled qudit state $|\psi^+\>=(1/\sqrt{d})\sum_{i=1}^d|ii\>$, where $d$ stands for the dimension of the underlying local Hilbert space. Both parties share a so-called resource state of the form $|\Psi\>=\bigotimes_{i=1}^N|\psi^+\>_{A_iB_i}$, see Figure~\ref{FPBT}.
We define the set
\be
\mathcal{I}:=\left\lbrace (i_1,i_2,\ldots,i_k) \ : \ \forall 1\leq l\leq k \ i_l=1,\ldots,N \ \text{and} \ i_1\neq i_2\neq \cdots \neq i_k \right\rbrace
\ee
consisting of $k-$tuples (not necessarily ordered) of pairwise distinct indices denoting the ports through which the subsystems of the composite state $\psi_{C}$ are teleported. For example, for $N=5, k=2$ and ${\bf i}=(5,3)$, the particle $\psi_{C_1}$ is teleported through the fifth port and $\psi_{C_2}$ through the third port.
In the next step Alice performs a joint measurement with outcomes $\mathbf{i}$ from the set $\mathcal{I}$. The effects form a positive operator valued measure (POVM) satisfying $\sum_{\mathbf{i}\in\mathcal{I}}\Pi_{\mathbf{i}}^{AC}=\mathbf{1}_{AC}$.
Having that, we are in a position to describe the teleportation channel $\mathcal{N}$, which maps density operators acting on $\mathcal{H}_C=\bigotimes_{i=1}^k\mathcal{H}_{C_i}$ to those acting on Bob's side:
\be
\label{ch1}
\begin{split}
\mathcal{N}\left(\psi_{C} \right)&=\sum_{\mathbf{i}\in \mathcal{I}}\tr_{A\bar{B}_{\mathbf{i}}C}\left[ \sqrt{\Pi_{\mathbf{i}}^{AC}}\left(|\Psi\>\<\Psi|_{AB}\ot \psi_{C} \right)\sqrt{\Pi_{\mathbf{i}}^{AC}}^{\dagger}\right]=\sum_{\mathbf{i}\in \mathcal{I}}\tr_{AC}\left[\Pi_{\mathbf{i}}^{AC}\tr_{\bar{B}_{\mathbf{i}}}\bigotimes_{j=1}^N |\psi\>\<\psi|_{A_jB_j} \ot \psi_{C} \right]_{B_{\mathbf{i}}\rightarrow \widetilde{B}}\\
&=\sum_{\mathbf{i}\in \mathcal{I}} \tr_{AC}\left[\Pi_{\mathbf{i}}^{AC} \sigma_{\mathbf{i}}^{A\widetilde{B}} \ot \psi_{C}\right],
\end{split}
\ee
where $\bar{B}_{\mathbf{i}}=\bar{B}_{i_1}\bar{B}_{i_2}\cdots \bar{B}_{i_k}$ denotes the discarded subsystems, i.e. all $B$ systems except those at positions $i_1,i_2,\ldots,i_k$, and the operation $B_{\mathbf{i}}\rightarrow \widetilde{B}$ relabels the systems $B_{i_1}\cdots B_{i_k}$ as $B_{N+1}\cdots B_{N+k}$ on Bob's side; it is introduced for mathematical convenience. The states (signals) $\sigma_{\mathbf{i}}^{AB}$, or shortly $\sigma_{\mathbf{i}}$, for $\mathbf{i}\in \mathcal{I}$, from~\eqref{ch1}, are given as
\be
\label{sigma}
\sigma_{\mathbf{i}}^{A\widetilde{B}}:= \tr_{\bar{B}_{\mathbf{i}}}\left(P^+_{A_1B_1}\ot P^+_{A_2B_2}\ot \cdots \ot P^+_{A_NB_N} \right)_{B_{\mathbf{i}}\rightarrow \widetilde{B}}=\frac{1}{d^{N-k}}\mathbf{1}_{\bar{A}_{\mathbf{i}}}\ot P^+_{A_{\mathbf{i}}\widetilde{B}}.
\ee
In the above, $\bar{A}_{\mathbf{i}}$ has the same meaning as $\bar{B}_{\mathbf{i}}$, and
$P^+_{A_{\mathbf{i}}\widetilde{B}}$ is a tensor product of projectors onto maximally entangled states with respect to the subsystems defined by the index $\mathbf{i}$ and the prescription $B_{\mathbf{i}}\rightarrow \widetilde{B}$. For example, when ${\bf i}=(5,3)$, the notation $P^+_{A_{\mathbf{i}}\widetilde{B}}$ means $P^+_{A_5B_6}\ot P^+_{A_3B_7}$.
For further use, we introduce the following multi-port-based teleportation operator:
\be
\label{PBT_standard}
\rho:=\sum_{\mathbf{i}\in \mathcal{I}}\sigma_{\mathbf{i}}.
\ee
In the general case the above sum contains $k!\binom{N}{k}=\frac{N!}{(N-k)!}=|\mathcal{I}|$ elements. One can see that for $k=1$ we recover the $|\mathcal{I}|=N$ signals of the original PBT scheme. For $k=2$ we have $|\mathcal{I}|=N(N-1)$, for $k=3$ it is $|\mathcal{I}|=N(N-1)(N-2)$, and so on.
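As a sanity check, the size of the index set $\mathcal{I}$ can be verified by direct enumeration; the short sketch below (with a hypothetical helper \verb|index_set| of our own) confirms the counting for a few small cases.

```python
from itertools import permutations
from math import comb, factorial

def index_set(N, k):
    """All ordered k-tuples of pairwise distinct ports, as in the set I."""
    return list(permutations(range(1, N + 1), k))

# |I| = k! * C(N, k) = N!/(N - k)!; for k = 1 this gives the N signals of PBT
for N, k in [(5, 1), (5, 2), (6, 3)]:
    size = len(index_set(N, k))
    assert size == factorial(k) * comb(N, k) == factorial(N) // factorial(N - k)
```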
{\bf Deterministic version} In this version of the protocol the receiver always accepts the state of $k$ out of $N$ ports as the teleported state. Since ideal transmission is impossible, we would like to know how well we are able to perform the scheme, possibly as a function of global parameters like the number of ports or the local dimension. We investigate this by checking how well the teleportation channel $\mathcal{N}$ transmits quantum correlations. To do so we compute its entanglement fidelity $F$, teleporting halves of maximally entangled states
\be
\label{ent_fid}
\begin{split}
F&=\tr\left[P^+_{\widetilde{B}C}(\mathcal{N}\ot \mathbf{1}_{D})P^+_{CD} \right]=\sum_{\mathbf{i}\in\mathcal{I}}\tr\left[P^+_{\widetilde{B}C} \Pi_{\mathbf{i}}^{AC}\left( \sigma_{\mathbf{i}}^{A\widetilde{B}} \ot P^+_{CD}\right) \right]=\frac{1}{d^{2k}}\sum_{\mathbf{i}\in\mathcal{I}}\tr\left[\Pi_{\mathbf{i}}^{A\widetilde{B}} \sigma_{\mathbf{i}}^{A\widetilde{B}} \right]
\end{split}
\ee
where $D=D_1D_2\cdots D_k$.
To obtain an explicit expression for $F$ we need to choose a specific form of the POVM operators $\Pi_{\mathbf{i}}^{AC}$.
As explained in previous papers~\cite{ishizaka_asymptotic_2008, ishizaka_quantum_2009}, the PBT scheme is equivalent to a state discrimination problem, where the authors use $N$ square-root measurements for distinguishing the ensemble $\{1/N,\sigma_{i}^{AC}\}$. In our case the situation is similar, and the corresponding ensemble is of the form $\{1/|\mathcal{I}|,\sigma_{\mathbf{i}}^{AC}\}$ with the corresponding POVMs:
\be
\label{povm}
\forall \ \mathbf{i}\in \mathcal{I}\qquad \Pi_{\mathbf{i}}^{AC}=\frac{1}{\sqrt{\rho}}\sigma_{\mathbf{i}}^{AC}\frac{1}{\sqrt{\rho}}+\Delta,
\ee
where the states $\sigma_{\mathbf{i}}^{AC}$ are given in~\eqref{sigma} and $\rho$ is the port-based teleportation operator from~\eqref{PBT1}. It can easily be seen that the operator $\rho$ is not of full rank, so inversion on its support is required. Due to this, to every component in~\eqref{povm} an additional term $\Delta$ of the form
\be
\Delta=\frac{1}{|\mathcal{S}_{n,k}|}\left(\mathbf{1}_{(\mathbb{C}^d)^{\ot n}}^{AC}-\sum_{\mathbf{i}\in\mathcal{I}}\Pi_{\mathbf{i}}^{AC} \right)
\ee
is added. This addition ensures that all effects sum up to the identity operator on the whole space $(\mathbb{C}^d)^{\ot n}$. This procedure does not change the entanglement fidelity $F$ in~\eqref{ent_fid}. Our goal is to evaluate the entanglement fidelity from~\eqref{ent_fid} with the measurements given in~\eqref{povm}. The solution, given in terms of group-theoretic parameters, is presented in Theorem~\ref{Fthm} in Section~\ref{detkPBT}.
{\bf Probabilistic version} In this scenario the transmission sometimes fails, but whenever it succeeds the fidelity of the teleported state is maximal, $F=1$. The teleportation channel in the probabilistic version looks exactly the same as in the deterministic protocol, but is not trace preserving. This is because Alice now has access to $1+|\mathcal{I}|$ POVM elements, where the additional element $\Pi_0^{AC}$ corresponds to failure.
To evaluate the performance of the scheme we need to calculate the average success probability of teleportation $p$, where we average over all possible input states. This leads to the following expression:
\be
\label{p_sini}
p=\frac{1}{d^{N+k}}\sum_{\mathbf{i}\in \mathcal{I}}\tr\left(\Pi_{\mathbf{i}}^{AC} \right)
\ee
The requirement of unit fidelity imposes a strong condition on the form of the measurements applied by Alice. Namely, using the argumentation presented in~\cite{ishizaka_asymptotic_2008,ishizaka_quantum_2009,Studzinski2017}, all the POVMs corresponding to successful teleportation are of the form:
\be
\label{mess}
\forall \ \mathbf{i}\in\mathcal{I}\qquad \Pi_{\mathbf{i}}^{AC}=P^+_{A_{\mathbf{i}}C}\ot \Theta_{\overline{A}_{\mathbf{i}}}.
\ee
Given the expression~\eqref{p_sini} for the probability of success, we ask what the maximal possible value of $p$ is and what the optimal form of the operators $\Theta_{\overline{A}_{\mathbf{i}}}$ in~\eqref{mess} attaining this maximum is. It turns out that this problem can be written as a semidefinite program (SDP). We write down a primal problem whose solution $p^*$ upper bounds the real value of $p$, and then a dual problem whose solution $p_*$ provides the respective lower bound. In the boxes below we write down the primal and dual problems explicitly.
\begin{tcolorbox}
{\bf The primal problem for probabilistic scheme} The goal is to maximise the quantity:
\be
\label{primal}
p^*=\frac{1}{d^{N+k}}\sum_{\mathbf{i}\in \mathcal{I}}\tr \Theta_{\overline{A}_{\mathbf{i}}},
\ee
with respect to the following constraints:
\be
\label{con1}
(1) \ \Theta_{\overline{A}_{\mathbf{i}}}\geq 0,\qquad (2) \ \sum_{\mathbf{i}\in \mathcal{I}}P^+_{A_{\mathbf{i}}C}\ot \Theta_{\overline{A}_{\mathbf{i}}}\leq \mathbf{1}_{AC}.
\ee
\end{tcolorbox}
\begin{tcolorbox}
{\bf The dual problem for probabilistic scheme} The dual problem is to minimise the quantity
\be
\label{dual0}
p_*=\frac{1}{d^{N+k}}\tr \Omega
\ee
with respect to the following constraints
\be
\label{dual}
(1) \ \Omega \geq 0, \qquad (2) \ \forall \mathbf{i}\in\mathcal{I} \ \tr_{A_{\mathbf{i}}C}\left(P^+_{A_{\mathbf{i}}C}\Omega\right)\geq \mathbf{1}_{N-k},
\ee
where the operator $\Omega$ acts on $N+k$ systems and the identity $\mathbf{1}_{N-k}$ acts on all systems except $A_{\mathbf{i}}$ and $C$.
\end{tcolorbox}
The solutions of the above primal and dual problems, in terms of group-theoretic parameters, are presented in Theorem~\ref{thm_p} in Section~\ref{probkPBT}.
\section{Symmetries in multi port-based teleportation}
\label{sym}
In every variant of (multi) port-based teleportation protocols we distinguish two types of symmetries. One is connected with covariance and invariance with respect to the symmetric group $S(n-k)$, while the second one with invariance with respect to the action of $U^{\otimes (n-k)}\otimes \overline{U}^{\otimes k}$. We shall now briefly describe the connection of these two types of symmetries with the operators describing the analysed teleportation schemes.
Let us take the index $\mathbf{i}_0=(N-2k+1,N-2k+2,\ldots,N-k)$; then, with $n=N+k$, the signal $\sigma_{\mathbf{i}_0}$ can be written as
\be
\begin{split}
\sigma_{\mathbf{i}_0}&=\frac{1}{d^{n-2k}}\mathbf{1}_{\bar{A}_{\mathbf{i}_0}}\ot P^+_{n-2k+1,n}\ot P^+_{n-2k+2,n-1}\ot \cdots \ot P^+_{n-k,n-k+1} \\
&=\frac{1}{d^{n-k}}\mathbf{1}_{\bar{A}_{\mathbf{i}_0}}\ot V^{t_n}_{n-2k+1,n}\ot V^{t_{n-1}}_{n-2k+2,n-1}\ot \cdots \ot V^{t_{n-k+1}}_{n-k,n-k+1},
\end{split}
\ee
where by $t_n,t_{n-1}$, etc. we denote partial transposition with respect to the particular subsystem, and by $V_{r,s}$ the permutation operator swapping systems $r$ and $s$ (from now on we drop the indices $A,B$ unless they are necessary); the operator $V_{r,s}^{t_s}$ is proportional to $P^+_{r,s}$. Further, we assume whenever necessary that permutation operators are properly embedded in the whole space $(\mathbb{C}^d)^{\ot n}$, so we write just $V_{r,s}$ instead of $V_{r,s}\ot \mathbf{1}_{\bar{r},\bar{s}}$. Moreover, for the signal $\sigma_{\mathbf{i}_0}$ we introduce the simpler notation
\be
\label{signal2}
\begin{split}
\sigma_{\mathbf{i}_0}=\frac{1}{d^{n-k}}\mathbf{1}\ot V^{t_n}_{n-2k+1,n}\ot V^{t_{n-1}}_{n-2k+2,n-1}\ot \cdots \ot V^{t_{n-k+1}}_{n-k,n-k+1}\equiv \frac{1}{d^{N}} V^{(k)},
\end{split}
\ee
where
\be
\begin{split}
&V^{(k)}\equiv \mathbf{1}\ot V^{t_n}_{n-2k+1,n}\ot V^{t_{n-1}}_{n-2k+2,n-1}\ot \cdots \ot V^{t_{n-k+1}}_{n-k,n-k+1},\\
&(k)\equiv t_n \circ t_{n-1}\circ \cdots \circ t_{n-k+1},
\end{split}
\ee
$\circ$ denotes composition of maps, and $\mathbf{1}$ denotes the identity acting on the space untouched by the tensor product of projectors onto maximally entangled states. From the definition of the signals $\sigma_{\mathbf{i}}$ in~\eqref{sigma} and the form of $\sigma_{\mathbf{i}_0}$ in~\eqref{signal2} we deduce that the MPBT operator $\rho$ can be written as
\be
\label{PBT1}
\rho=\sum_{\mathbf{i}\in \mathcal{I}}\sigma_{\mathbf{i}}=\frac{1}{d^N}\sum_{\tau \in \mathcal{S}_{n,k}}V_{\tau^{-1}}V^{(k)}V_{\tau},
\ee
where the sum runs over all permutations $\tau$ from the coset $\mathcal{S}_{n,k}:= \frac{S(n-k)}{S(n-2k)}$, and $V_{\tau}$ is the permutation operator corresponding to the permutation $\tau$. By construction we have $|\mathcal{I}|=\left|\frac{S(n-k)}{S(n-2k)}\right|$. Moreover, it is easy to notice that any signal state satisfies:
\begin{equation}
\label{rel1}
\forall \ \tau\in \mathcal{S}_{n,k} \qquad V_{\tau}\sigma_{\mathbf{i}}V_{\tau^{-1}}=\sigma_{\tau(\mathbf{i})}.
\end{equation}
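Both the decomposition~\eqref{PBT1} and the covariance relation~\eqref{rel1} can be verified numerically in the smallest non-trivial case $N=2$, $k=1$, $d=2$ (so $n=3$, with systems ordered $A_1A_2\widetilde{B}$). The sketch below is our own check with ad hoc helper names, not part of the protocol:

```python
import numpy as np

d, n = 2, 3  # N = 2 ports, k = 1 teleported system
dim = d ** n

def swap(i, j):
    """Permutation operator V_{i,j} on (C^d)^{(x)3}, systems indexed from 0."""
    V = np.zeros((dim, dim))
    for idx in range(dim):
        digits = [(idx // d ** (n - 1 - s)) % d for s in range(n)]
        digits[i], digits[j] = digits[j], digits[i]
        V[sum(x * d ** (n - 1 - s) for s, x in enumerate(digits)), idx] = 1
    return V

# P^+ on systems (1,2), embedded as 1_0 (x) P^+_{12}; then moved to (0,2)
phi = np.zeros(d * d)
phi[:: d + 1] = 1 / np.sqrt(d)
P12 = np.kron(np.eye(d), np.outer(phi, phi))
P02 = swap(0, 1) @ P12 @ swap(0, 1)

# signals sigma_i = (1/d^{N-k}) 1 (x) P^+_{A_i B~}; MPBT operator rho = sum_i sigma_i
sigma1, sigma2 = P02 / d, P12 / d
rho = sigma1 + sigma2

# decomposition (PBT1): rho = (1/d^N) sum_tau V_tau^{-1} V^{(1)} V_tau,
# with V^{(1)} = V^{t_3}_{2,3} = d P^+_{23} and coset representatives {e, (12)}
V1 = d * P12
rho_alg = (V1 + swap(0, 1) @ V1 @ swap(0, 1)) / d ** 2
assert np.allclose(rho, rho_alg)

# covariance (rel1) and invariance (rel2) under the transposition (12)
assert np.allclose(swap(0, 1) @ sigma1 @ swap(0, 1), sigma2)
assert np.allclose(swap(0, 1) @ rho @ swap(0, 1), rho)
```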
Let us notice that the operator $\rho$ is invariant with respect to the action of any permutation from $S(n-k)$ acting on the first $n-k$ systems:
\be
\label{rel2}
\forall \ \tau\in S(n-k) \qquad V_{\tau}\rho V_{\tau^{-1}}=\rho.
\ee
In particular, relations~\eqref{rel1} and~\eqref{rel2} imply covariance of the square-root measurements given in~\eqref{povm} with respect to the coset $\mathcal{S}_{n,k}$. We require the same type of covariance for the POVMs in the probabilistic scheme from~\eqref{mess}.
We also have a second kind of symmetry. Notice that all the operators $\sigma_{\mathbf{i}}$, as well as the MPBT operator from~\eqref{PBT_standard}, are invariant with respect to the action of $U^{\ot (n-k)}\ot \overline{U}^{\ot k}$, where the bar denotes complex conjugation and $U$ is an element of the unitary group $\mathcal{U}(d)$. This observation follows from the structure of the signal states and the fact that every bipartite maximally entangled state $P^+_{ij}$, between systems $i$ and $j$, is $U\otimes \overline{U}$ invariant.
This property, together with expression~\eqref{signal2}, means that the basic elements describing the performance of the presented teleportation protocol belong to the algebra $\mathcal{A}^{(k)}_n(d)$ of permutation operators partially transposed with respect to the $k$ last subsystems.
For $k=1$ we recover the known case of standard $d$-dimensional port-based teleportation introduced in~\cite{Studzinski2017} and the algebra $\mathcal{A}^{(1)}_n(d)$ discussed in~\cite{Stu1,Moz1}. In this particular case we have $\mathcal{S}_{n,1}=S(n-1)/S(n-2)$, all permutations $\tau$ are transpositions $(a,n)$ for $a=1,\ldots,n-1$, and $|S(n-1)/S(n-2)|=n-1$.
The operation of partial transposition significantly changes the properties of the operators under consideration, making the resulting set of operators no longer the group algebra of the symmetric group $S(n)$. To see this explicitly, let us consider a swap operator $V_{i,j}$ interchanging the systems at positions $i$ and $j$. Obviously, applying the swap operator twice we end up with the identity operator. However, applying the partial transposition to $V_{i,j}$, the swap operator is mapped to the operator $dP^+_{ij}$, which is proportional to the projector onto the maximally entangled state between the respective systems. Applying the partially transposed swap operator twice, we end up with $d^2P^+_{ij}$, since $P^+_{ij}P^+_{ij}=P^+_{ij}$. This property makes our further analysis more complex, and a direct application of the standard methods from the representation theory of the group algebra of the symmetric group $S(n)$ is insufficient here.
In the next sections we introduce notation and definitions, construct an irreducible orthonormal operator basis of the algebra $\mathcal{A}^{(k)}_n(d)$, and formulate the auxiliary lemmas required for the spectral analysis of the operator $\rho$ describing the performance of the protocol.
\section{Notations and Definitions}
\label{defs}
For a given natural number $n$ we define a partition $\mu$ of $n$ in the following way:
\be
\mu=(\mu^{(1)},\mu^{(2)},\ldots,\mu^{(l)}) \quad \text{such that} \quad \forall_{1\leq i\leq l} \ \mu^{(i)}\in \mathbb{N}, \quad \mu^{(1)}\geq \mu^{(2)} \geq \cdots \geq \mu^{(l)}\geq 0, \quad \sum_{i=1}^l\mu^{(i)}=n.
\ee
The Young frame associated with the partition $\mu$ is the array formed by $n$ boxes arranged in $l$ left-justified rows, where the $i$-th row contains exactly $\mu^{(i)}$ boxes for all $i=1,2,\ldots,l$.
Further, we denote Young diagrams by Greek letters. The set of all Young diagrams with up to $n\in\mathbb{N}$ boxes is denoted by $\mathbb{Y}_n$. Its restriction to the set of Young diagrams with no more than $d$ rows is denoted by $\mathbb{Y}_{n,d}$. We endow $\mathbb{Y}_n,\mathbb{Y}_{n,d}$ with the structure of a partially ordered set by setting, for $\mu=(\mu^{(1)},\mu^{(2)},\ldots,\mu^{(l)})\vdash n$ and $\alpha=(\alpha^{(1)},\alpha^{(2)},\ldots,\alpha^{(s)})\vdash n-k$,
\be
\alpha \preceq \mu,
\ee
if $\mu^{(i)}\geq \alpha^{(i)}$ for all $i=1,2,\ldots,l$ (with $\alpha^{(i)}=0$ for $i>s$). If $\alpha \preceq \mu$ we denote by $\mu/\alpha$ the array, also called a skew shape, obtained by removing from the Young frame of $\mu$ the boxes of the Young frame of $\alpha$. We illustrate this procedure with the example presented in Figure~\ref{skewshape}.
\begin{figure}[h]
\includegraphics[width=.6\linewidth]{skew_shape.png}
\caption{Construction of the skew shape $\mu/\alpha$ for the Young frames $\mu=(6,3,3,1)$ and $\alpha=(3,2,1)$. }
\label{skewshape}
\end{figure}
For any $\alpha, \mu \in \mathbb{Y}_n$ we say that $\mu$ covers $\alpha$, or that $\alpha$ is covered by $\mu$, if $\alpha \preceq \mu$ and
\be
\alpha \preceq \nu \preceq \mu, \quad \nu \in \mathbb{Y}_n \Rightarrow \nu=\alpha \ \text{or} \ \nu=\mu.
\ee
In other words, $\mu$ covers $\alpha$ if and only if $\alpha \preceq \mu$ and $\mu/\alpha$ consists of a single box. Later we use the symbol $\mu \in \alpha$ to denote Young diagrams $\mu \vdash n$ obtained from a Young diagram $\alpha \vdash n-k$ by adding $k$ boxes, while by the symbol $\alpha \in \mu$ we denote Young diagrams $\alpha \vdash n-k$ obtained from a Young diagram $\mu \vdash n$ by removing $k$ boxes. Informally, this means that a Young diagram with $n-k$ boxes is contained in a Young diagram with $n$ boxes.
Having the concept of a Young diagram and the sets $\mathbb{Y}_n, \mathbb{Y}_{n,d}$, we define Young's lattice and its reduced version (see Figure~\ref{bratteli}).
\begin{figure}[h]
\includegraphics[width=.7\linewidth]{bratteli4.png}
\caption{The Young's lattice $\mathbb{Y}_6$, i.e. with six consecutive layers labelled by the permutation groups from $S(1)$ to $S(6)$. By orange and black dashed lines we depict two possible paths from the irrep $\lambda=(1)$ of $S(1)$ to the irrep $\lambda'=(2,1,1,1,1)$ of $S(6)$. The reduced Young's lattice $\mathbb{Y}_{6,2}$, i.e. for $d=2$, is formed by all diagrams on the right-hand side of the red line.}
\label{bratteli}
\end{figure}
The Young's lattice arises when we construct subsequent Young diagrams by adding boxes one by one. In this way we obtain subsequent layers of Young diagrams for growing $n$.
We connect a diagram with a subsequent diagram by an edge whenever the latter is obtained by
adding a box. More formally, the Young's lattice of $\mathbb{Y}_n$ is the non-oriented graph with vertex set $\mathbb{Y}_n$ and an edge from $\lambda$ to $\mu$ if and only if $\lambda$ covers $\mu$. The same definition applies to the Young's lattice of $\mathbb{Y}_{n,d}$, but we remove all Young diagrams with more than $d$ rows. A path $r_{\mu/\alpha}$ in the Young's lattice is a sequence $r_{\mu/\alpha}=(\mu\equiv\mu_n \vdash n \rightarrow \mu_{n-1} \vdash n-1 \rightarrow \cdots \rightarrow \alpha \equiv \mu_{n-k} \vdash n-k)$, for some $k\in\mathbb{N}$ with $k<n$.
The integer $m_{\mu/\alpha}$ is the total number of paths between $\mu$ and $\alpha$.
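Since every path from $\alpha$ up to $\mu$ must pass through some diagram covering $\alpha$, the number $m_{\mu/\alpha}$ obeys a simple recursion. A minimal sketch (the helper names are ours), which in particular counts all $5$ paths from $\lambda=(1)$ to $\lambda'=(2,1,1,1,1)$, two of which are drawn in Figure~\ref{bratteli}:

```python
def add_one_box(alpha, max_rows):
    """All diagrams covering alpha in the (reduced) Young's lattice."""
    a, out = list(alpha), []
    for r in range(min(len(a) + 1, max_rows)):
        cur = a[r] if r < len(a) else 0
        prev = a[r - 1] if r > 0 else float("inf")
        if cur < prev:  # adding a box at row r keeps the rows weakly decreasing
            b = a + [0] * (r + 1 - len(a))
            b[r] += 1
            out.append(tuple(x for x in b if x > 0))
    return out

def num_paths(alpha, mu, max_rows):
    """m_{mu/alpha}: number of paths from alpha up to mu, adding one box at a time."""
    if alpha == mu:
        return 1
    inside = lambda nu: all(x <= (mu[i] if i < len(mu) else 0) for i, x in enumerate(nu))
    return sum(num_paths(nu, mu, max_rows) for nu in add_one_box(alpha, max_rows) if inside(nu))
```

In the reduced lattice $\mathbb{Y}_{6,2}$ the same pair of diagrams is disconnected, since any intermediate diagram would need more than two rows.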
All irreducible representations (irreps) of $S(n)$ are labelled by Young diagrams with $n$ boxes, denoted as $\mu \vdash_d n$, where $d$ is a natural parameter meaning that we take into account diagrams with at most $d$ rows.
The dimension $d_{\mu}$ of the irrep $\mu$ is given by the hook length formula
\be
d_{\mu}=\frac{n!}{\mathop{\prod}\limits_{(i,j)\in \mu}h_{\mu}(i,j)},
\ee
where $h_{\mu}(i,j)$ is the so-called hook length of the hook with corner at the box $(i,j)$, given as one plus the number of boxes below $(i,j)$ plus the number of boxes to the right of $(i,j)$.
The multiplicity $m_{\mu, d}$ of every irrep $\mu \vdash_d n$ is given by the Weyl dimension formula:
\be
m_{\mu, d}=\mathop{\prod}\limits_{1\leq i< j\leq d}\frac{\mu_i-\mu_j+j-i}{j-i},
\ee
where $\mu$ is padded with zeros to exactly $d$ rows.
Later on we shorten the notation to $\mu \vdash n$ and $m_{\mu}$, keeping in mind the dependence on the natural parameter $d$, which later plays the role of the local dimension of the space $\mathbb{C}^d$.
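Both formulas are straightforward to implement, and a useful consistency check is that the dimensions in the Schur-Weyl duality add up, $\sum_{\mu \vdash_d n} m_{\mu}d_{\mu}=d^n$. A short sketch with our own helper names:

```python
from math import factorial

def partitions(n, max_rows, max_first=None):
    """Young diagrams of n boxes with at most max_rows rows."""
    if max_first is None:
        max_first = n
    if n == 0:
        yield ()
        return
    if max_rows == 0:
        return
    for first in range(min(n, max_first), 0, -1):
        for rest in partitions(n - first, max_rows - 1, first):
            yield (first,) + rest

def dim_irrep(mu):
    """d_mu via the hook length formula."""
    hooks = 1
    for i, row in enumerate(mu):
        for j in range(row):
            arm = row - j - 1                          # boxes to the right of (i,j)
            leg = sum(1 for r in mu[i + 1:] if r > j)  # boxes below (i,j)
            hooks *= arm + leg + 1
    return factorial(sum(mu)) // hooks

def mult_irrep(mu, d):
    """m_mu via the Weyl dimension formula, mu padded with zeros to d rows."""
    m = list(mu) + [0] * (d - len(mu))
    num = den = 1
    for i in range(d):
        for j in range(i + 1, d):
            num *= m[i] - m[j] + j - i
            den *= j - i
    return num // den

# dimensions of both sides of the Schur-Weyl decomposition must match
for d, n in [(2, 4), (3, 3), (2, 6)]:
    assert sum(mult_irrep(mu, d) * dim_irrep(mu) for mu in partitions(n, d)) == d ** n
```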
Having commuting representations of $S(n)$ and $\mathcal{U}(d)$ on $(\mathbb{C}^d)^{\ot n}$, acting by permutation of the tensor factors and by multiplication by $U^{\ot n}$ respectively, we can decompose the space $(\mathbb{C}^d)^{\ot n}$, using Schur-Weyl duality~\cite{Wallach}, into a direct sum of irreducible subspaces as follows:
\be
\label{SW}
(\mathbb{C}^d)^{\ot n}\cong \bigoplus_{\alpha \vdash_d n} \mathcal{U}_{\alpha}\ot \mathcal{S}_{\alpha}.
\ee
In the above, $\mathcal{S}_{\alpha}$ are representation spaces of the permutation group $S(n)$, while $\mathcal{U}_{\alpha}$ are representation spaces of $\mathcal{U}(d)$.
In the Schur basis realising the decomposition~\eqref{SW} we can define in every space $\mathcal{S}_{\alpha}$ an orthonormal operator basis $E_{ij}^{\alpha}$, for $i,j=1,\ldots,d_{\alpha}$, acting trivially on the multiplicity space and non-trivially on the representation space of permutations. Namely, we have
\be
\label{schur_Eij}
E_{ij}^{\alpha}=\mathbf{1}_{\alpha}^{\mathcal{U}}\otimes |\alpha,i\>\<\alpha,j|\equiv \mathbf{1}_{\alpha}^{\mathcal{U}}\otimes |i\>\<j|_{\alpha}.
\ee
We can also use the representation of $E^{\alpha}_{ij}$ on the space $(\mathbb{C}^d)^{\ot n}$, which is of the form
\be
\label{Eij}
E_{ij}^{\alpha}=\frac{d_{\alpha}}{n!}\sum_{\tau \in S(n)}\phi_{ji}^{\alpha}(\tau^{-1})V_{\tau},
\ee
where $\phi_{ji}^{\alpha}(\tau^{-1})$ denotes a matrix element of the irreducible representation $\alpha$ of the permutation $\tau^{-1}\in S(n)$.
The operators from~\eqref{schur_Eij} have the following properties:
\be
\label{tr_prop}
E_{ij}^{\alpha}E_{kl}^{\beta}=\delta^{\alpha \beta}\delta_{jk}E^{\alpha}_{il},\quad \tr E^{\alpha}_{ij}=m_{\alpha}\delta_{ij}.
\ee
Let us observe that the operators $E^{\alpha}_{ii}$ are projectors. The action of the operators $E_{ij}^{\alpha}$ on an arbitrary permutation operator $V_{\sigma}$, from the left and from the right, for $\sigma\in S(n)$, is given by
\be
\label{actionE}
E_{ij}^{\alpha}V_{\sigma}=\sum_k \varphi_{jk}^{\alpha}(\sigma)E_{ik}^{\alpha},\quad V_{\sigma}E_{ij}^{\alpha}=\sum_k\varphi_{ki}^{\alpha}(\sigma)E_{kj}^{\alpha}.
\ee
Using this basis we can write the matrix representation of a given permutation $\tau^{-1}\in S(n)$ on every irreducible space labelled by $\alpha \vdash n$ as
\be
\label{blaa}
\phi^{\alpha}(\tau^{-1})=\sum_{ij}\phi_{ij}^{\alpha}(\tau^{-1})E_{ij}^{\alpha}.
\ee
Moreover, using these operators we construct the Young projectors, i.e. the projectors onto the components $\mathcal{U}_{\alpha}\ot \mathcal{S}_{\alpha}$ from~\eqref{SW}:
\be
\label{def_P}
P_{\alpha}=\sum_i E^{\alpha}_{ii}=\frac{d_{\alpha}}{n!}\sum_{\tau \in S(n)}\chi^{\alpha}(\tau^{-1})V_{\tau}.
\ee
The numbers $\chi^{\alpha}(\tau^{-1})=\sum_i \phi_{ii}^{\alpha}(\tau^{-1})$ are the irreducible characters.
Sometimes instead of $|\alpha,i\>$ we write $|\alpha,i_{\alpha}\>$ or just $|i_{\alpha}\>$. Defining $c=\sum_{\alpha}m_{\alpha}d_{\alpha}$, any operator $X\in M(c\times c, \mathbb{C})$ can be written in terms of the elements $\{E_{ij}^{\alpha}\}$ as $X=\sum_{\alpha}\sum_{i,j=1}^{d_{\alpha}}x_{ij}^{\alpha}E_{ij}^{\alpha}$.
Considering an $n-$particle system, by writing $V_{n-1,n}$ we understand $\mathbf{1}_{1\ldots n-2}\otimes V_{n-1,n}$, and similarly for other operators; the operator $\mathbf{1}_{1\ldots n-2}$ is the identity on the first $n-2$ particles.
\section{Preliminary Mathematical Results}
\label{preliminary}
\subsection{Partial trace over Young projectors}
In this section we present a set of auxiliary lemmas which are crucial for the presentation in the following sections.
Introducing the notation $\tr_{(k)}=\tr_{n-2k+1,\ldots,n-k}$, denoting the partial trace over the set $\{n-2k+1,\ldots,n-k\}$ of particles, and recalling the notation
\be
\label{parV}
V^{(k)}:=V^{t_{n}}_{n-2k+1,n}V^{t_{n-1}}_{n-2k+2,n-1}\cdots V^{t_{n-k+1}}_{n-k,n-k+1}
\ee
we start with the formulation of the following:
\begin{fact}
\label{kpartr}
For any operator $X$ acting on the first $n-k$ systems we have the following equality
\be
V^{(k)}\left(X\ot \mathbf{1}_{n-k+1\ldots n}\right)V^{(k)}=\tr_{(k)}(X)V^{(k)},
\ee
where $\mathbf{1}_{n-k+1\ldots n}$ is the identity operator on the last $k$ subsystems, while the operator $X$ acts on the first $n-k$.
\end{fact}
In the particular cases $k=1,2$ we have, respectively,
\be
\label{kpareq}
V^{(1)}\left(X\otimes \mathbf{1}_n\right)V^{(1)}=\tr_{n-1}(X)V^{(1)}, \qquad V^{(2)}\left(X\ot \mathbf{1}_{n-1,n}\right)V^{(2)}=\tr_{n-3,n-2}(X)V^{(2)}.
\ee
Now we prove Fact~\ref{kpartr}:
\begin{proof}
It is enough to show that expression~\eqref{kpareq} holds for $k=1$ and then apply the argument below iteratively. Using the identity $V^{(1)}=dP_+$, where $P_+$ is the projector onto the maximally entangled state and $d$ is the dimension of the local Hilbert space, we write
\be
\begin{split}
V^{(1)}(X\ot \mathbf{1}_n)V^{(1)}=d^2(\mathbf{1}_{1,\dots,n-2}\ot P_+)\left(\sum_{ij}X_{ij}^{1,\dots,n-2}\ot e_{ij}^{(n-1)}\ot \mathbf{1}_n \right)(\mathbf{1}_{1,\dots,n-2}\ot P_+),
\end{split}
\ee
where $\{e_{ij}^{(n-1)}\}$ is the standard operator basis on subsystem $n-1$ and $X_{ij}^{1,\dots,n-2}$ are operators on subsystems from $1$ to $n-2$. Using the explicit form of $P_+=(1/d)\sum_{k,l}e_{kl}^{(n-1)}\ot e_{kl}^{(n)}$ and of $X_{ij}^{1,\dots,n-2}$ we get
\begin{align}
V^{(1)}(X\ot \mathbf{1}_n)V^{(1)}&=\sum_{kl}\left(\mathbf{1}_{1,\dots,n-2}\otimes e_{kl}^{(n-1)} \otimes e_{kl}^{(n)}\right) \left(\sum_{ij}X_{ij}^{1,\dots,n-2}\ot e_{ij}^{(n-1)}\ot \mathbf{1}_n \right)\sum_{pq} \left(\mathbf{1}_{1,\dots,n-2}\otimes e_{pq}^{(n-1)} \otimes e_{pq}^{(n)}\right)\\
&=\sum_{kq} \left(\sum_{\substack{i_1,\dots,i_{n-2},p,\\ j_1, \dots, j_{n-2}, p}} x_{i_1j_1,\dots,i_{n-2}j_{n-2},pp} e_{i_1j_1}^{(1)}\otimes\dots\otimes e_{i_{n-2}j_{n-2}}^{(n-2)}\right)\otimes e_{kq}^{(n-1)}\otimes e_{kq}^{(n)}\\
&=\sum_{kq}\left(\sum_p X_{pp}^{1,\dots,n-2}\right)\otimes e_{kq}^{(n-1)}\otimes e_{kq}^{(n)}\\
&=\tr_{n-1}(X)V^{(1)},
\end{align}
where $x_{i_1j_1,\dots,i_{n-1}j_{n-1}}$ denotes the matrix elements of $X$.
\end{proof}
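The $k=1$ case of Fact~\ref{kpartr} can also be verified numerically for small dimensions. The sketch below (Python/NumPy, our own construction, for $n=3$ and $d=2$) builds $V^{(1)}=\mathbf{1}\otimes dP_+$ explicitly and checks the identity on a random operator $X$:

```python
import numpy as np

d = 2
rng = np.random.default_rng(0)

def e(i, j):
    # standard matrix unit e_{ij} on C^d
    m = np.zeros((d, d))
    m[i, j] = 1.0
    return m

# d * P_+ = sum_ij e_ij (x) e_ij on two systems (partial transpose of SWAP)
dPplus = sum(np.kron(e(i, j), e(i, j)) for i in range(d) for j in range(d))

# n = 3: V^{(1)} acts as the identity on system 1 and as d P_+ on systems 2, 3
V1 = np.kron(np.eye(d), dPplus)

# random operator X on the first n - 1 = 2 systems
X = rng.standard_normal((d * d, d * d))
X_ext = np.kron(X, np.eye(d))                        # X (x) 1_3

# tr_{n-1}(X): partial trace over system 2
trX = np.einsum('iaja->ij', X.reshape(d, d, d, d))

lhs = V1 @ X_ext @ V1
rhs = np.kron(trX, dPplus)                           # tr_2(X) V^{(1)}
assert np.allclose(lhs, rhs)
```

The same check works for any $d$ by changing the value of `d` at the top.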
\begin{fact}
\label{f1a}
Let $\{|i\>_{\alpha}\}_{i=1}^{d_{\alpha}}$ denote a basis in an irrep $\alpha \vdash n$ of dimension $d_{\alpha}$. Then for any operator $X^{\alpha}=\sum_{ij}x^{\alpha}_{ij}|i\>\<j|_{\alpha}$ acting on the space $\mathbb{C}^{d_{\alpha}}$, we have
\be
\label{ef1}
\sum_{\tau\in S(n)}\tr \left(X^{\alpha} \phi^{\alpha}(\tau^{-1}) \right)V_{\tau}=\frac{n!}{d_{\alpha}}\mathbf{1}_{\alpha}^U\otimes X^{\alpha}.
\ee
\end{fact}
\begin{proof}
Inserting expression~\eqref{blaa} into~\eqref{ef1} we obtain
\be
\begin{split}
&\sum_{\tau\in S(n)}\tr \left(X^{\alpha} \phi^{\alpha}(\tau^{-1}) \right)V_{\tau}=\sum_{\tau\in S(n)}\tr \left(X^{\alpha}\sum_{ij}\phi_{ij}^{\alpha}(\tau^{-1})E_{ij}^{\alpha}\right)V_{\tau}=\sum_{ij}\tr\left(X^{\alpha} E_{ij}^{\alpha}\right)\sum_{\tau\in S(n)}\phi_{ij}^{\alpha}(\tau^{-1})V_{\tau}\\
&=\frac{n!}{d_{\alpha}}\sum_{ij}\tr\left(X^{\alpha} E_{ij}^{\alpha}\right)E_{ji}^{\alpha}.
\end{split}
\ee
In the Schur basis every operator $E_{ij}^{\alpha}$ has the form $\mathbf{1}^U_{\alpha}\otimes |i\>\<j|_{\alpha}$. This allows us to write
\be
\begin{split}
&\frac{n!}{d_{\alpha}}\sum_{ij}\tr\left( X^{\alpha} E_{ij}^{\alpha}\right)E_{ji}^{\alpha}=\frac{n!}{m_{\alpha}d_{\alpha}}\sum_{ij}\tr\left[ \left( \mathbf{1}^{U}_{\alpha}\otimes X^{\alpha}\right) \left( \mathbf{1}^{U}_{\alpha}\otimes |i\>\<j|_{\alpha}\right) \right] \left(\mathbf{1}^{U}_{\alpha}\otimes |j\>\<i|_{\alpha} \right)\\
&=\frac{n!}{d_{\alpha}}\mathbf{1}^U_{\alpha}\ot \sum_{ij}\tr\left(X^{\alpha}|i\>\<j|_{\alpha}\right) |j\>\<i|_{\alpha}=\frac{n!}{d_{\alpha}}\mathbf{1}^U_{\alpha}\ot X^{\alpha}.
\end{split}
\ee
\end{proof}
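Fact~\ref{f1a} can be illustrated in the simplest case $n=2$, where both irreps of $S(2)$ are one-dimensional ($d_{\alpha}=1$), $\phi^{\alpha}(\tau)=\pm 1$, and $\mathbf{1}^U_{\alpha}\otimes X^{\alpha}$ embeds on the full space as $x\,P_{\alpha}$ with $P_{\alpha}$ the (anti)symmetric projector. A minimal Python/NumPy sketch (the embedding and names are ours, for illustration only):

```python
import numpy as np

d = 2
# SWAP operator on (C^d)^{(x)2}
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1.0

x = 1.7  # X^alpha is a 1 x 1 matrix with entry x
for sign, P_alpha in [(+1, (np.eye(d * d) + SWAP) / 2),
                      (-1, (np.eye(d * d) - SWAP) / 2)]:
    # LHS: sum over tau in S(2) of tr(X phi^alpha(tau^{-1})) V_tau
    lhs = x * np.eye(d * d) + x * sign * SWAP
    # RHS: (n!/d_alpha) 1^U_alpha (x) X^alpha, embedded as (2/1) * x * P_alpha
    rhs = 2 * x * P_alpha
    assert np.allclose(lhs, rhs)
```

Here the check reduces to the familiar identities $\mathbf{1}\pm\mathrm{SWAP}=2P_{\pm}$.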
Now, let us introduce the following objects:
\be
\label{obj}
\widetilde{\phi}^{\mu}(a,n):= d^{\delta_{a,n}}\phi^{\mu}(a,n),\quad \text{and} \quad \widetilde{V}_{a,n}:= d^{\delta_{a,n}}V_{a,n},
\ee
where $\phi^{\mu}(a,n)$ is the matrix representation of the transposition $(a,n)$ in the irrep $\mu \vdash n$.
Having that, we can formulate the following:
\begin{lemma}
\label{object}
Let us denote by $\mathbf{1}^{\mu}_{\gamma}$ the identity on an irrep $\gamma$ of $S(n-1)$ contained in the irrep $\mu$ of $S(n)$. Then the restriction of $\sum_{a}\widetilde{\phi}^{\mu}(a,n)$ to the irrep $\beta$ of $S(n-1)$ reads
\be
\left[ \sum_{a=1}^{n}\widetilde{\phi}^{\mu}(a,n)\right]_{\beta}=x^{\mu}_{\beta}\quad \text{with} \quad x_{\beta}^{\mu}=n\frac{m_{\mu}d_{\beta}}{m_{\beta}d_{\mu}}.
\ee
\ee
\end{lemma}
\begin{proof}
Consider
\be
\sum_{a=1}^{n}\widetilde{\phi}^{\mu}(a,n)=\sum_{a=1}^{n-1}\phi^{\mu}(a,n)+d\mathbf{1}_{\mu}
\ee
which is clearly invariant with respect to $S(n-1)$. Hence it admits the decomposition
\be
\sum_{a=1}^{n}\widetilde{\phi}^{\mu}(a,n)=\sum_{\gamma \in \mu}x_{\gamma}^{\mu}\mathbf{1}^{\mu}_{\gamma}
\ee
for some $x_{\gamma}^{\mu}\in \mathbb{C}$. The restriction to a chosen irrep $\beta \in \mu$ reduces the above to
\be
\label{x3}
\left[ \sum_{a=1}^{n}\widetilde{\phi}^{\mu}(a,n)\right]_{\beta}=x^{\mu}_{\beta}\mathbf{1}^{\mu}_{\beta}.
\ee
Now our goal is to compute the unknown coefficients $x_{\gamma}^{\mu}$. To do so, let us first observe that we can write every projector $P_{\mu}$ in terms of the coset representatives $(a, n)$ and permutations from $S(n-1)$. Indeed, we have
\be
\label{P1}
\begin{split}
P_{\mu}&=\frac{d_{\mu}}{n!}\sum_{\sigma \in S(n)}\tr\left[ \phi^{\mu}(\sigma^{-1})\right] V_{\sigma}=\frac{d_{\mu}}{n!}\sum_{a=1}^{n}\sum_{\tau \in S(n-1)}\tr\left[\phi^{\mu}((a, n)\circ \tau^{-1}) \right]V_{a,n}V_{\tau}.
\end{split}
\ee
Since every representation is a homomorphism, we have $\phi^{\mu}((a, n)\circ \tau^{-1}) =\phi^{\mu}((a, n))\phi^{\mu}(\tau^{-1})$. Moreover, because $\tau \in S(n-1)$ and $\mu \vdash n$, the representation $\phi^{\mu}(\tau^{-1})$ has to be block diagonal in $\alpha \vdash n-1$:
\be
\label{phi}
\phi^{\mu}(\tau^{-1})=\bigoplus_{\alpha \in \mu}\phi^{\alpha}(\tau^{-1}),
\ee
where the symbol $\alpha \in \mu$ denotes all Young frames obtained from $\mu$ by removing a single box. Denoting by $\mathbf{1}_{\mu},\mathbf{1}_{\alpha}$ the identities on the irreps $\mu \vdash n$ and $\alpha \vdash n-1$ respectively, for which $\alpha \in \mu$ holds, we write $\mathbf{1}_{\mu}=\bigoplus_{\alpha \in \mu}\mathbf{1}_{\alpha}$. Applying this identity together with~\eqref{phi} to equation~\eqref{P1}, we rewrite it as
\be
\label{P2}
\begin{split}
P_{\mu}&=\frac{d_{\mu}}{n!}\sum_{a=1}^{n}V_{a,n}\sum_{\alpha \in \mu}\sum_{\tau \in S(n-1)}\tr\left(\left[\phi^{\mu}(a,n) \right]_{\alpha}\phi^{\alpha}(\tau^{-1}) \right) V_{\tau},
\end{split}
\ee
where $\left[\phi^{\mu}(a,n) \right]_{\alpha}\equiv \mathbf{1}_{\alpha}\phi^{\mu}(a,n)\mathbf{1}_{\alpha}$. Applying Fact~\ref{f1a} to expression~\eqref{P2} we have
\be
\label{Pan}
\begin{split}
P_{\mu}&=\frac{1}{n}\sum_{a=1}^{n}V_{a,n}\sum_{\alpha \in \mu}\frac{d_{\mu}}{d_{\alpha}}\left(\mathbf{1}_{\alpha}^U\ot \left[\phi^{\mu}(a,n) \right]_{\alpha} \right).
\end{split}
\ee
Having~\eqref{Pan} and definitions~\eqref{obj}, together with~\eqref{x3} and $\tr_n V_{a,n}=d^{\delta_{a,n}}\mathbf{1}_{1\ldots n-1}$, we write
\be
\label{x2}
\tr_{n}P_{\mu}=\frac{1}{n}\sum_{\beta \in \mu}\frac{d_{\mu}}{d_{\beta}}\mathbf{1}_{\beta}^{U}\ot \left[\sum_{a=1}^{n}\widetilde{\phi}^{\mu}(a,n) \right]_{\beta}=\frac{1}{n}\sum_{\beta \in \mu}\frac{d_{\mu}}{d_{\beta}}x_{\beta}^{\mu}\mathbf{1}_{\beta}^U\ot \mathbf{1}_{\beta}^S=\frac{1}{n}\sum_{\beta \in \mu}\frac{d_{\mu}}{d_{\beta}}x_{\beta}^{\mu}P_{\beta}.
\ee
Using~\eqref{x2}, property $\tr(P_{\beta}P_{\mu})=m_{\mu}d_{\beta}$, for $\mu \vdash n, \beta \vdash n-1$, and
\be
\tr\left( P_{\beta}P_{\mu}\right)=\frac{1}{n}\sum_{\gamma \in \mu}\frac{d_{\mu}}{d_{\gamma}}x_{\gamma}^{\mu}\tr\left(P_{\gamma}P_{\beta} \right)=\frac{1}{n}\frac{d_{\mu}}{d_{\beta}}x^{\mu}_{\beta}d_{\beta}m_{\beta}=\frac{1}{n}d_{\mu}m_{\beta}x^{\mu}_{\beta}
\ee
we deduce that
\be
\label{coeff}
x_{\beta}^{\mu}=n\frac{m_{\mu}d_{\beta}}{m_{\beta}d_{\mu}}.
\ee
This finishes the proof.
\end{proof}
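The coefficients $x^{\mu}_{\beta}$ admit an independent cross-check: $\sum_{a=1}^{n-1}\phi^{\mu}((a,n))$ is the Jucys--Murphy element, well known to act on the branch $\beta\in\mu$ with eigenvalue equal to the content $j-i$ of the removed box $\mu/\beta$, so the lemma predicts $n\,m_{\mu}d_{\beta}/(m_{\beta}d_{\mu})=d+(j-i)$. The sketch below (plain Python; all helper names are ours) verifies this identity numerically via the hook length and hook content formulas:

```python
from math import factorial

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    return [(p,) + rest
            for p in range(min(n, max_part), 0, -1)
            for rest in partitions(n - p, p)]

def hooks(lam):
    cols = [sum(1 for r in lam if r > j) for j in range(lam[0])] if lam else []
    return [lam[i] - j + cols[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def dim_sym(lam):
    # d_lam: hook length formula
    prod = 1
    for h in hooks(lam):
        prod *= h
    return factorial(sum(lam)) // prod

def mult_sw(lam, d):
    # m_lam: hook content formula (dimension of the dual GL(d) irrep)
    if len(lam) > d:
        return 0
    num, den = 1, 1
    for i in range(len(lam)):
        for j in range(lam[i]):
            num *= d + j - i
    for h in hooks(lam):
        den *= h
    return num // den

def corners(lam):
    # removable boxes (i, j) of lam together with the resulting diagram beta
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            beta = tuple(r for r in lam[:i] + (lam[i] - 1,) + lam[i + 1:] if r > 0)
            out.append(((i, lam[i] - 1), beta))
    return out

# check x_beta^mu = n m_mu d_beta / (m_beta d_mu) = d + content(mu/beta)
for n in range(2, 6):
    for d in (2, 3, 4):
        for mu in partitions(n):
            for (i, j), beta in corners(mu):
                if mult_sw(beta, d) == 0:
                    continue
                x = n * mult_sw(mu, d) * dim_sym(beta) / (mult_sw(beta, d) * dim_sym(mu))
                assert abs(x - (d + j - i)) < 1e-9
```

For example, for $\mu=(2,1)$, $\beta=(2)$ and $d=2$ one finds $x^{\mu}_{\beta}=1=d-1$, matching the content $-1$ of the removed box.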
\subsection{A new summation rule for irreducible representations and PRIR (Partially Reduced Irreducible Representation) notation}
\label{prir}
Let $H\subset S(n)$ be an arbitrary subgroup of $S(n)$ with transversal $%
T=\{\tau _{k}:k=1,\ldots,\frac{n!}{|H|}\}$, i.e. we have
\be
S(n)=\bigcup _{k=1}^{\frac{n!}{|H|}}\tau _{k}H.
\ee
For further purposes, we also introduce a simplified notation:
\begin{notation}
\label{not0}
Let us take $\mu \vdash n$ and $\alpha \vdash n-k$, for $k<n$.
By the index $r_{\mu/\alpha}$ we denote a path on Young's lattice from the diagram $\mu$ to $\alpha$. This path is uniquely determined by choosing a chain of covered Young frames from $\mu$ to $\alpha$, differing by one box in each step:
\be
r_{\mu/\alpha}=(\mu,\mu_{n-1},\ldots,\mu_{n-k+1},\alpha)
\ee
and
\be
\mu \ni \mu_{n-1} \ni \cdots \ni \mu_{n-k+1} \ni \alpha.
\ee
\end{notation}
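The path index $r_{\mu/\alpha}$ can be made concrete with a small path-counting routine: the number of full paths from $\mu$ down to the single-box diagram equals the number of standard Young tableaux of shape $\mu$, i.e. the dimension $d_{\mu}$. A Python sketch (the helper names are ours; the hook length formula supplies the reference value):

```python
from math import factorial
from functools import lru_cache

def dim_sym(lam):
    # d_lam via the hook length formula
    cols = [sum(1 for r in lam if r > j) for j in range(lam[0])]
    prod = 1
    for i in range(len(lam)):
        for j in range(lam[i]):
            prod *= lam[i] - j + cols[j] - i - 1
    return factorial(sum(lam)) // prod

def covers(lam):
    # diagrams obtained from lam by removing one (corner) box
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            out.append(tuple(r for r in lam[:i] + (lam[i] - 1,) + lam[i + 1:] if r > 0))
    return out

@lru_cache(maxsize=None)
def num_paths(mu, alpha):
    # number of paths r_{mu/alpha} on Young's lattice, one box removed per step
    if mu == alpha:
        return 1
    if sum(mu) <= sum(alpha):
        return 0
    return sum(num_paths(beta, alpha) for beta in covers(mu))

# paths down to the single-box diagram count standard Young tableaux of shape mu
for mu in [(3,), (2, 1), (3, 2), (2, 2, 1)]:
    assert num_paths(mu, (1,)) == dim_sym(mu)
```

The same routine computes the multiplicities $m_{\mu/\alpha}=|\mathcal{R}_{\mu/\alpha}|$ used below.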
Consider an arbitrary unitary irrep $\phi ^{\mu }$ of $S(n)$. It
can always be unitarily transformed to the $PRIR$ form $\phi _{R}^{\mu },$ such that
\be
\label{M1}
\forall \varkappa \in H\quad \phi _{R}^{\mu }(\varkappa )=\bigoplus _{\alpha
\in \mu ,r_{\mu/\alpha}}\varphi ^{\alpha ,r_{\mu/\alpha}}(\varkappa )\equiv \bigoplus _{r_{\mu/\alpha}}\varphi ^{r_{\mu/\alpha}}(\varkappa ),
\ee
where $\alpha$ labels the type of an irrep of $H$ and $r_{\mu/\alpha}$ denotes a path on Young's lattice from $\mu$ to $\alpha$. It means that the element $\varphi ^{\alpha ,r_{\mu/\alpha}}(\varkappa )$ is repeated $|\mathcal{R}_{\mu/\alpha}|=m_{\mu/\alpha}$ times, where $\mathcal{R}_{\mu/\alpha}$ is the set composed of all paths $r_{\mu/\alpha}$ from $\mu$ to $\alpha$. Whenever it is clear from the context we write just $\varphi ^{\alpha }(\varkappa )$ instead of $\varphi ^{\alpha ,r_{\mu/\alpha}}(\varkappa )$. The diagonal blocks in the decomposition~\eqref{M1} are labelled, and in fact ordered, by the two indices $\alpha ,r_{\mu/\alpha}$. The $PRIR$ representation of $S(n)$, reduced to the subgroup $H$, has the block-diagonal form of a completely reduced representation, which in matrix notation takes the form
\be
\label{M2}
\forall \varkappa \in H\quad (\phi _{R}^{\mu })_{i_{\alpha }\quad
j_{\beta }}^{r_{\mu/\alpha}, \widetilde{r}_{\mu/\beta}}(\varkappa )=\delta
^{r_{\mu/\alpha} \widetilde{r}_{\mu/\beta}}\varphi _{i_{\alpha }j_{\alpha }}^{\alpha}(\varkappa),
\ee
where the indices $i_{\alpha},j_{\alpha}$ run from 1 to the dimension of the irrep $\alpha$, and $\delta
^{r_{\mu/\alpha} \widetilde{r}_{\mu/\beta}}=\delta^{\mu\nu}\delta^{\mu_{n-1}\nu_{n-1}}\cdots \delta^{\alpha\beta}$. The above considerations allow us to introduce the following
\begin{notation}
\label{not16}
Every basis index $i_{\mu}$, where $\mu \vdash n$, can be written uniquely using a path on Young's lattice as
\be
\label{vv}
i_{\mu}\equiv(r_{\mu/\alpha}, l_{\alpha}), \qquad \alpha\in\mu,
\ee
and $l_{\alpha}$ now denotes an index running only within the range of the irrep $\alpha$. The indices $i_{\mu},l_{\alpha}$ are of the same type as $r_{\mu/\alpha}$, but with trivial last element, i.e. a single-box Young diagram. Equation~\eqref{vv} defines the division of the chosen path on Young's lattice from the diagram $\mu$ to the single-box diagram, through a diagram $\alpha$. By writing $\delta_{i_{\mu}j_{\nu}}$, where $\mu,\nu \vdash n$ and $\alpha,\beta \vdash n-k$, we understand the following
\be
\delta_{i_{\mu}j_{\nu}}=\delta^{r_{\mu/\alpha}\widetilde{r}_{\nu/\beta}}\delta_{l_{\alpha}l'_{\beta}}=\delta_{\mu\nu}\delta_{\mu_{n-1}\nu_{n-1}}\cdots \delta_{\mu_{n-k+1}\nu_{n-k+1}}\delta_{\alpha\beta}\delta_{l_{\alpha}l'_{\beta}}.
\ee
\end{notation}
Similarly as in~\cite{Studzinski2017,MozJPA}, the block structure of this reduced representation
allows us to introduce such a block indexation for the $PRIR$ $\phi_{R}^{\mu }$
of $S(n)$, which gives
\be
\forall \sigma \in S(n)\quad \phi _{R}^{\mu }(\sigma )=\left( (\phi _{R}^{\mu })_{i_{\alpha }\quad j_{\beta }}^{r_{\mu/\alpha}, \widetilde{r}_{\mu/\beta}}(\sigma)\right) ,
\ee
where the matrices on the diagonal $(\phi _{R}^{\mu })_{i_{\alpha }\quad j_{\beta }}^{r_{\mu/\alpha}, \widetilde{r}_{\mu/\beta}}(\sigma)$ are of the dimension of the corresponding irrep $
\varphi ^{\alpha }$ of $S(n-1)$. The off-diagonal blocks need not be square.
Now we formulate the main result of this subsection, a generalised version of the $PRIR$ orthogonality relation. The following proposition plays a central role in investigating the matrix elements of the MPBT operator in the irreducible orthonormal operator basis presented later in Section~\ref{comm_structure}.
\begin{proposition}
\label{summation0}
Let $H\subset S(n)$ be an arbitrary subgroup of $S(n)$ with transversal $T=\{\tau _{k}:k=1,\ldots,\frac{n!}{|H|}\}$. Then in the $PRIR$ notation the $\phi _{R}^{\mu }$ of $S(n)$ satisfy the following bilinear sum rule
\be
\begin{split}
\forall r_{\mu/\alpha},\widetilde{r}_{\mu/\beta}, r_{\nu/\beta},\widetilde{r}_{\nu/\gamma}
\qquad \sum_{k=1}^{\frac{n!}{|H|}}\sum_{k_{\beta }=1}^{d_{\beta}}(\phi
_{R}^{\mu })_{i_{\alpha }\quad k_{\beta }}^{r_{\mu/\alpha},\widetilde{r}_{\mu/\beta}}(\tau _{k}^{-1})(\phi _{R}^{\nu })_{k_{\beta }\quad
j_{\gamma }}^{r_{\nu/\beta}, \widetilde{r}_{\nu/\gamma}}(\tau _{k})
=\frac{n!}{|H|}\frac{d_{\beta }}{d_{\mu }}\delta ^{r_{\mu/\alpha}\widetilde{r}_{\nu/\gamma}}\delta _{i_{\alpha}j_{\gamma}},
\end{split}
\ee
where $\alpha ,\beta ,\gamma $ are irreps of $H$ contained in
the irrep $\mu $ of $S(n)$, and $|H|$ denotes the cardinality of the subgroup $H$.
\end{proposition}
\begin{proof}
The proof is based on the classical orthogonality relations for irreps, which in the PRIR notation take the form
\be
\sum_{g\in G}(\phi _{R}^{\mu })_{i_{\alpha }\quad k_{\beta }}^{r_{\mu/\alpha}\widetilde{r}_{\mu/\beta}}(g^{-1})(\phi _{R}^{\nu })_{k_{\beta }\quad
j_{\gamma }}^{r_{\nu/\beta}\widetilde{r}_{\nu/\gamma}}(g)=\frac{|G|}{%
d_{\mu}}\delta ^{\widetilde{r}_{\mu/\beta}r_{\nu/\beta}}\delta _{i_{\alpha }j_{\gamma }},
\ee
where $|G|$ denotes the cardinality of the group $G$.
This means that even if $\alpha =\gamma$, i.e. these representations
are of the same type, but $\widetilde{r}_{\mu/\beta}\neq r_{\nu/\beta}$, the $RHS$ of the
above equation is equal to zero. The remaining part of the proof follows the proof of Proposition 29 in~\cite{MozJPA}.
\end{proof}
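The classical orthogonality relation used above can be checked numerically in the smallest non-abelian case: the two-dimensional standard irrep of $S(3)$, obtained by restricting permutation matrices to the subspace orthogonal to $(1,1,1)$. A Python/NumPy sketch (the construction and names are ours, for illustration only):

```python
import numpy as np
from itertools import permutations

# orthonormal basis of the subspace of C^3 orthogonal to (1,1,1)
B = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -2.0]]) / np.sqrt([2.0, 6.0])

def perm_matrix(p):
    m = np.zeros((3, 3))
    for i, pi in enumerate(p):
        m[pi, i] = 1.0
    return m

# 2-dim standard irrep of S(3): phi(g) = B^T P_g B (real orthogonal matrices)
reps = [B.T @ perm_matrix(p) @ B for p in permutations(range(3))]
d_mu, G = 2, len(reps)
for r in reps:
    assert np.allclose(r @ r.T, np.eye(2))   # unitarity (here: orthogonality)

# Schur orthogonality: sum_g phi_ik(g^{-1}) phi_lj(g) = (|G|/d_mu) delta_kl delta_ij
S = sum(np.einsum('ik,lj->iklj', r.T, r) for r in reps)   # r.T = phi(g^{-1})
target = (G / d_mu) * np.einsum('kl,ij->iklj', np.eye(2), np.eye(2))
assert np.allclose(S, target)
```

Here the transversal of $H=\{e\}$ is the whole group, so the sum rule reduces to the classical relation verified above.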
\subsection{Properties of irreducible operator basis and Young projectors under partial trace}
\label{parEij}
For further purposes, namely for the effective computation of the performance of our teleportation schemes, we prove here how the irreducible operator basis given in~\eqref{Eij} or~\eqref{schur_Eij} and the Young projectors from~\eqref{def_P} behave under the partial trace over the last $k$ systems. Our formulas generalise the approach to a similar problem made in~\cite{Aud}. We start by calculating the partial trace of the operators~\eqref{Eij} over the last system. In all lemmas presented below we use the PRIR representation described in Subsection~\ref{prir}.
\begin{lemma}
\label{L3a}
For the irreducible operator basis $E^{\mu}_{kl}$, where $\mu \vdash n$, introduced in~\eqref{Eij}, the partial trace over the last system equals
\be
\tr_{n}E_{i_{\beta}j_{\beta'}}^{\beta \beta'}(\mu)=\sum_{\alpha \in\mu}\frac{m_{\mu}}{m_{\alpha}}E_{i_{\alpha}j_{\alpha}}^{\alpha}\delta_{\alpha \beta}\delta_{\alpha \beta'}=\frac{m_{\mu}}{m_{\beta}}E^{\beta}_{i_{\beta}j_{\beta}}\delta_{\beta \beta'}.
\ee
\end{lemma}
\begin{proof}
Similarly as for the Young projectors $P_{\mu}$ in~\eqref{P1}, we can rewrite $E^{\mu}_{kl}$ as
\be
\begin{split}
E^{\mu}_{kl}&=\frac{d_{\mu}}{n!}\sum_{\sigma \in S(n)} \phi^{\mu}_{lk}(\sigma^{-1}) V_{\sigma}=\frac{d_{\mu}}{n!}\sum_{a=1}^{n}\sum_{\tau \in S(n-1)}\phi^{\mu}_{lk}((a, n)\circ \tau^{-1})V_{a,n}V_{\tau}.
\end{split}
\ee
Observing that $\phi^{\mu}_{lk}(\sigma^{-1})=\tr\left(|k\>\<l|\phi^{\mu}(\sigma^{-1}) \right)$, where $|k\>,|l\>$ are basis vectors in the irrep $\mu$, we can write in the PRIR notation $k=k_{\mu}=(\beta,i_{\beta})$ and $l=l_{\mu}=(\beta',j_{\beta'})$, obtaining
\be
E_{i_{\beta}j_{\beta'}}^{\beta \beta'}(\mu)=\frac{d_{\mu}}{n!}\sum_{a=1}^nV_{a,n}\sum_{\tau\in S(n-1)}\tr\left[|\beta,i_{\beta}\>\<\beta',j_{\beta'}|\phi^{\mu}(a,n)\phi^{\mu}(\tau^{-1}) \right]V_{\tau}.
\ee
Since $\tau \in S(n-1)$ and $\mu \vdash n$, we can apply directly decomposition from~\eqref{M1} writing
\be
E_{i_{\beta}j_{\beta'}}^{\beta \beta'}(\mu)=\frac{d_{\mu}}{n!}\sum_{a=1}^nV_{a,n}\sum_{\alpha\in\mu}\sum_{\tau\in S(n-1)}\tr\left( \left[|\beta,i_{\beta}\>\<\beta',j_{\beta'}|\phi^{\mu}(a,n)\right]_{\alpha}\phi^{\alpha}(\tau^{-1})\right) V_{\tau}.
\ee
In the above, by $\left[|\beta,i_{\beta}\>\<\beta',j_{\beta'}|\phi^{\mu}(a,n)\right]_{\alpha}$ we denote the restriction to irrep $\alpha$. Applying Fact~\ref{f1a} to the above expression we write
\be
\begin{split}
&E_{i_{\beta}j_{\beta'}}^{\beta \beta'}(\mu)=\frac{d_{\mu}}{n!}\sum_{a=1}^{n}V_{a,n}\sum_{\alpha \in \mu}\frac{(n-1)!}{d_{\alpha}}\mathbf{1}^U_{\alpha}\ot \left[|\beta,i_{\beta}\>\<\beta',j_{\beta'}|\phi^{\mu}(a,n) \right]_{\alpha}.
\end{split}
\ee
Taking the partial trace over the last system, and having in mind the definition of $\widetilde{\phi}^{\mu}(a,n)$ from~\eqref{obj}, we have:
\be
\begin{split}
&\tr_{n}E_{i_{\beta}j_{\beta'}}^{\beta \beta'}(\mu)=\frac{d_{\mu}}{n!}\tr_{n}\left(\sum_{a=1}^{n}V_{a,n}\right)\sum_{\alpha \in \mu}\frac{(n-1)!}{d_{\alpha}}\mathbf{1}^U_{\alpha}\ot \left[|\beta,i_{\beta}\>\<\beta',j_{\beta'}|\phi^{\mu}(a,n) \right]_{\alpha} \\
&=\frac{d_{\mu}}{n!}\sum_{a=1}^{n}\left(\sum_{\alpha \in \mu}\frac{(n-1)!}{d_{\alpha}}\mathbf{1}^U_{\alpha}\ot \left[|\beta,i_{\beta}\>\<\beta',j_{\beta'}|\widetilde{\phi}^{\mu}(a,n) \right]_{\alpha} \right).
\end{split}
\ee
From the proof of Lemma~\ref{object} we know that the object $\left[ \sum_{a=1}^{n}\widetilde{\phi}^{\mu}(a,n)\right]_{\beta}$ is invariant with respect to $S(n-1)$. Together with the property $\mathbf{1}^{\alpha}_{\mu}|\beta,i_{\beta}\>\<\beta',j_{\beta'}|\mathbf{1}^{\alpha}_{\mu}=\delta_{\alpha\beta}\delta_{\alpha\beta'}|\alpha,i_{\alpha}\>\<\alpha,j_{\alpha}|$, we have
\be
\begin{split}
&\tr_{n}E_{i_{\beta}j_{\beta'}}^{\beta \beta'}(\mu)=\frac{d_{\mu}}{n!}\sum_{\alpha\in\mu}\frac{(n-1)!}{d_{\alpha}}x^{\mu}_{\alpha}\mathbf{1}^U_{\alpha}\ot |\alpha,i_{\alpha}\>\<\alpha,j_{\alpha}|\delta_{\alpha\beta}\delta_{\alpha\beta'}=\frac{1}{n}\frac{d_{\mu}}{d_{\beta}}x^{\mu}_{\beta}\mathbf{1}^U_{\beta}\ot |\beta,i_{\beta}\>\<\beta,j_{\beta}|\delta_{\beta\beta'}=\frac{m_{\mu}}{m_{\beta}}E^{\beta}_{i_{\beta}j_{\beta}}\delta_{\beta\beta'}.
\end{split}
\ee
In the last step we used the explicit form of the coefficients $x^{\mu}_{\beta}$ given in Lemma~\ref{object} and expression~\eqref{schur_Eij}.
\end{proof}
\begin{corollary}
From Lemma~\ref{L3a} we see that taking the partial trace over the $n$-th subsystem destroys all the coherences between blocks labelled by different $\beta\vdash n-1$.
\end{corollary}
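The prefactor $m_{\mu}/m_{\beta}$ in Lemma~\ref{L3a} can be observed directly in the smallest case $n=2$, where $E^{(2)}_{11}=P_{sym}$ and $E^{(1,1)}_{11}=P_{asym}$, and the lemma predicts $\tr_2 P_{sym}=\frac{m_{(2)}}{m_{(1)}}\mathbf{1}=\frac{d+1}{2}\mathbf{1}$ and $\tr_2 P_{asym}=\frac{d-1}{2}\mathbf{1}$. A Python/NumPy sketch of this check (our own construction):

```python
import numpy as np

for d in (2, 3, 4):
    SWAP = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            SWAP[i * d + j, j * d + i] = 1.0
    P_sym = (np.eye(d * d) + SWAP) / 2
    P_asym = (np.eye(d * d) - SWAP) / 2

    def ptrace2(A):
        # partial trace over the second system
        return np.einsum('iaja->ij', A.reshape(d, d, d, d))

    # Schur-Weyl multiplicities for n = 2 and n = 1
    m_sym, m_asym, m_single = d * (d + 1) // 2, d * (d - 1) // 2, d
    assert np.allclose(ptrace2(P_sym), (m_sym / m_single) * np.eye(d))
    assert np.allclose(ptrace2(P_asym), (m_asym / m_single) * np.eye(d))
```

Both ratios reproduce the well-known partial traces of the (anti)symmetric projectors.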
For the further purpose of having an explicit connection with the structure of the multi-port teleportation scheme, let us now assume that $\mu \vdash n-k$, with $2k<n$. Having that and the extended notion of $PRIR$, we are in a position to present the second main result of this section.
\begin{lemma}
\label{L3}
For basis operators $E^{\mu}_{kl}$ in the irreducible representation labelled by $\mu\vdash n-k$, we have the following equality:
\be
\tr_{(k)}E_{i_{\beta}\quad j_{\beta'}}^{r_{\mu/\beta} \ \widetilde{r}_{\mu/\beta'}}=\frac{m_{\mu}}{m_{\beta}}E^{\beta}_{i_{\beta}j_{\beta}}\delta_{r_{\mu/\beta} \widetilde{r}_{\mu/\beta'}}
\ee
where we use simplified notation $\tr_{(k)}=\tr_{n-2k+1,\ldots,n-k}$.
\end{lemma}
\begin{proof}
To prove the above statement we use Lemma~\ref{L3a} iteratively. Let us write explicitly the indices $k=k_{\mu}, l=l_{\mu}$ in the PRIR notation:
\be
\label{paths}
\begin{split}
&k_{\mu}=(\mu_{n-k-1},r_{\mu_{n-k-1}})=(\mu_{n-k-1},\ldots,\mu_{n-2k+1},\mu_{n-2k},i_{\mu_{n-2k}})=(\mu_{n-k-1},\ldots,\mu_{n-2k+1},\beta,i_{\beta}),\\
&l_{\mu}=(\mu'_{n-k-1},s_{\mu'_{n-k-1}})=(\mu'_{n-k-1},\ldots,\mu'_{n-2k+1},\mu'_{n-2k},i_{\mu'_{n-2k}})=(\mu'_{n-k-1},\ldots,\mu'_{n-2k+1},\beta',j_{\beta'}),
\end{split}
\ee
where we put $\beta=\mu_{n-2k},\beta'=\mu'_{n-2k}$ for simpler notation. Each lower index denotes a proper layer on the reduced Young's lattice, starting from the highest layer labelled by the number $n-k$. In the first step we compute the partial trace over the $(n-k)$-th system, getting
\be
\label{n-k}
\tr_{n-k}E_{r_{\mu_{n-k-1}}s_{\mu'_{n-k-1}}}^{\mu_{n-k-1} \ \mu'_{n-k-1}}(\mu_{n-k})=\frac{m_{\mu_{n-k}}}{m_{\mu_{n-k-1}}}E^{\mu_{n-k-1}}_{r_{\mu_{n-k-1}}s_{\mu_{n-k-1}}}\delta_{\mu_{n-k-1}\mu'_{n-k-1}}.
\ee
This procedure reduces the paths in~\eqref{paths} to
\be
\begin{split}
&r_{\mu_{n-k-1}}=(\mu_{n-k-2},q_{\mu_{n-k-2}})=(\mu_{n-k-2},\mu_{n-k-3},\ldots,\mu_{n-2k+1},\beta,i_{\beta}),\\ &s_{\mu_{n-k-1}}=(\mu'_{n-k-2},p_{\mu'_{n-k-2}})=(\mu'_{n-k-2},\mu'_{n-k-3},\ldots,\mu'_{n-2k+1},\beta',j_{\beta'}),
\end{split}
\ee
where $\beta=\mu_{n-2k},\beta'=\mu'_{n-2k}$.
Now, computing the trace from~\eqref{n-k} over the $(n-k-1)$-th particle, we write
\be
\begin{split}
&\delta_{\mu_{n-k-1}\mu'_{n-k-1}}\frac{m_{\mu_{n-k}}}{m_{\mu_{n-k-1}}}\tr_{n-k-1}E^{\mu_{n-k-2}\mu'_{n-k-2}}_{q_{\mu_{n-k-2}}p_{\mu'_{n-k-2}}}(\mu_{n-k-1})\\
&=\delta_{\mu_{n-k-1}\mu'_{n-k-1}}\delta_{\mu_{n-k-2}\mu'_{n-k-2}}\frac{m_{\mu_{n-k}}}{m_{\mu_{n-k-1}}}\frac{m_{\mu_{n-k-1}}}{m_{\mu_{n-k-2}}}E^{\mu_{n-k-2}}_{q_{\mu_{n-k-2}}p_{\mu_{n-k-2}}}.
\end{split}
\ee
Continuing the above procedure up to the last system in $\tr_{(k)}$, we obtain the expression
\be
\begin{split}
&\delta_{\mu_{n-k-1}\mu'_{n-k-1}}\delta_{\mu_{n-k-2}\mu'_{n-k-2}}\times \cdots \times \delta_{\mu_{n-2k+1}\mu'_{n-2k+1}}\delta_{\mu_{n-2k}\mu'_{n-2k}}\frac{m_{\mu_{n-k}}}{m_{\mu_{n-k-1}}}\frac{m_{\mu_{n-k-1}}}{m_{\mu_{n-k-2}}}\times \cdots \times \frac{m_{\mu_{n-2k+1}}}{m_{\mu_{n-2k}}}E^{\mu_{n-2k}}_{i_{\mu_{n-2k}}j_{\mu_{n-2k}}}\\
&=\delta_{\mu_{n-k-1}\mu'_{n-k-1}}\delta_{\mu_{n-k-2}\mu'_{n-k-2}}\times \cdots \times \delta_{\mu_{n-2k+1}\mu'_{n-2k+1}}\delta_{\mu_{n-2k}\mu'_{n-2k}}\frac{m_{\mu_{n-k}}}{m_{\mu_{n-2k}}}E^{\mu_{n-2k}}_{i_{\mu_{n-2k}}j_{\mu_{n-2k}}}=\delta^{r_{\mu/\beta}\widetilde{r}_{\mu/\beta'}}\frac{m_{\mu}}{m_{\beta}}E^{\beta}_{i_{\beta}j_{\beta}},
\end{split}
\ee
In the last line we used the definition of $r_{\mu/\alpha}$ and suppressed the indices labelling layers on the reduced Bratteli diagram. This finishes the proof.
\end{proof}
Then Lemma~\ref{L3} implies the following statement about the Young projector:
\begin{corollary}
\label{corL3}
Let $P_{\mu}$ be a Young projector onto the irrep labelled by $\mu \vdash n-k$; then
\be
\tr_{(k)}P_{\mu}=\sum_{\beta \in \mu}m_{\mu/\beta}\frac{m_{\mu}}{m_{\beta}}P_{\beta}
\ee
where we use simplified notation $\tr_{(k)}=\tr_{n-2k+1,\ldots,n-k}$.
\end{corollary}
Indeed, knowing that $P_{\mu}=\sum_{i}E^{\mu}_{ii}$, we write in the PRIR basis
\be
\begin{split}
\tr_{(k)}P_{\mu}&=\sum_{k_{\mu}}\tr_{(k)}E^{\mu}_{k_{\mu}k_{\mu}}=\sum_{\beta \in \mu}\sum_{r_{\mu/\beta}}\sum_{i_{\beta}}\tr_{(k)}E_{i_{\beta}\quad i_{\beta}}^{r_{\mu/\beta} r_{\mu/\beta}}=\sum_{\beta \in \mu}\sum_{r_{\mu/\beta}}\sum_{i_{\beta}}\frac{m_{\mu}}{m_{\beta}}E^{\beta}_{i_{\beta}i_{\beta}}\\
&=\sum_{\beta \in \mu}\sum_{r_{\mu/\beta}}\frac{m_{\mu}}{m_{\beta}}P_{\beta}=\sum_{\beta \in \mu}m_{\mu/\beta}\frac{m_{\mu}}{m_{\beta}}P_{\beta}.
\end{split}
\ee
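Taking the full trace of both sides of Corollary~\ref{corL3} gives a quick consistency check: since $\tr(\tr_{(k)}P_{\mu})=\tr P_{\mu}$, the formula forces the iterated branching rule $\sum_{\beta\vdash n-2k}m_{\mu/\beta}d_{\beta}=d_{\mu}$, where $m_{\mu/\beta}$ counts $k$-step paths on Young's lattice. The Python sketch below (helper names are ours) verifies this rule numerically:

```python
from math import factorial
from functools import lru_cache

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    return [(p,) + rest
            for p in range(min(n, max_part), 0, -1)
            for rest in partitions(n - p, p)]

def dim_sym(lam):
    # hook length formula for d_lam
    cols = [sum(1 for r in lam if r > j) for j in range(lam[0])]
    prod = 1
    for i in range(len(lam)):
        for j in range(lam[i]):
            prod *= lam[i] - j + cols[j] - i - 1
    return factorial(sum(lam)) // prod

def covers(lam):
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            out.append(tuple(r for r in lam[:i] + (lam[i] - 1,) + lam[i + 1:] if r > 0))
    return out

@lru_cache(maxsize=None)
def num_paths(mu, alpha):
    # multiplicity m_{mu/alpha}: number of paths on Young's lattice
    if mu == alpha:
        return 1
    if sum(mu) <= sum(alpha):
        return 0
    return sum(num_paths(beta, alpha) for beta in covers(mu))

# iterated branching rule: sum_beta m_{mu/beta} d_beta = d_mu, for k = 1, 2
for mu in [(3, 1), (2, 2), (4, 2, 1)]:
    for k in (1, 2):
        total = sum(num_paths(mu, b) * dim_sym(b) for b in partitions(sum(mu) - k))
        assert total == dim_sym(mu)
```

For $\mu=(3,1)$ and $k=2$, for instance, $m_{(3,1)/(2)}d_{(2)}+m_{(3,1)/(1,1)}d_{(1,1)}=2+1=3=d_{(3,1)}$.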
\section{The Commutant Structure of $U^{\ot (n-k)}\ot \overline{U}^{\ot k}$ Transformations and MPBT operator}
\label{comm_structure}
In this section we deliver an orthonormal basis for the commutant of $U^{\ot (n-k)}\ot \overline{U}^{\ot k}$, or equivalently for the algebra $\mathcal{A}^{(k)}_n(d)$. More precisely, we introduce an irreducible basis for the two-sided ideal $\mathcal{M}$ generated by the element $V^{(k)}$ and the elements of the algebra $\mathcal{A}^{(k)}_n(d)$:
\be
\label{idealM}
\mathcal{M}=\{V_{\tau}V^{(k)}V^{\dagger}_{\tau'} \ | \ \tau,\tau'\in S(n-k)\}.
\ee
For our problem a full description of $\mathcal{M}$, together with its irreducible representation, is enough, since all basic objects describing the MPBT scheme belong to this ideal; see for example the definition of the MPBT operator in~\eqref{PBT1}. In the most general case the algebra $\mathcal{A}^{(k)}_n(d)$ also contains two-sided ideals generated by the elements $V^{(k')}$, for $k'<k$, and the elements of the algebra $\mathcal{A}^{(k)}_n(d)$. We have the following chain of inclusions
\be
\mathcal{M}\equiv \mathcal{M}^{(k)}\subset \mathcal{M}^{(k-1)}\subset \cdots \subset \mathcal{M}^{(1)}\subset \mathcal{M}^{(0)}\equiv \mathcal{A}^{(k)}_n(d).
\ee
The irreducible basis for the ideals with $k'< k$ will be studied elsewhere, since we do not use objects from outside the ideal $\mathcal{M}$. In Figure~\ref{structure_M} we present the nested structure of $\mathcal{A}^{(2)}_6(d)$ for $d\geq 4$, together with the labelling of the subsequent blocks within it.
\begin{figure}[h]
\includegraphics[width=.8\linewidth]{rys_ver2.png}
\caption{The graphic presents the interior structure of the algebra $\mathcal{A}^{(2)}_6(d)$, with the nested structure of the ideals $\mathcal{M}^{(0)}, \mathcal{M}^{(1)}, \mathcal{M}^{(2)}$, for $d\geq4$ (only with this requirement do all the Young frames occur in the decomposition). In particular, we focus on the interior block structure of the ideal $\mathcal{M}^{(2)}$, on which the objects describing multi-port teleportation schemes are defined. The middle figure represents the process of induction by adding two boxes to the two allowed starting Young frames, which are $(2)$ and $(1,1)$. In this case we have a nested structure of three layers. The rightmost figure presents the process of reduction from irreps labelled by Young frames of 4 boxes to irreps labelled by Young frames of two boxes. We present the process of reduction for the two upper-left blocks.}
\label{structure_M}
\end{figure}
Having expressions for the partial trace over an arbitrary number of particles of the irreducible basis operators of the symmetric group, we are in a position to formulate the main result, namely we have:
\begin{theorem}
\label{tmbas}
The orthonormal operator basis of the commutant of $U^{\ot (n-k)}\ot \overline{U}^{\ot k}$ in the maximal ideal $\mathcal{M}$ is given by the following set of operators
\be
\label{tmbas2}
F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}=\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\nu}}}E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E^{r_{\nu/\alpha}}_{1_{\alpha}\quad j_{\nu}}
\ee
satisfying the following composition rule
\be
\label{orto}
F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}F^{r_{\mu'/\beta} r_{\nu'/\beta}}_{k_{\mu'}\quad l_{\nu'}}=\delta^{r_{\nu/\alpha}r_{\mu'/\beta}}\delta_{j_{\nu}k_{\mu'}}F^{r_{\mu/\alpha} r_{\nu'/\alpha}}_{i_{\mu}\quad l_{\nu'}}
\ee
where $m_{\mu},m_{\nu}$ and $m_{\alpha}$ are the multiplicities of the respective irreps of $S(n-k)$ and $S(n-2k)$ in the Schur-Weyl duality.
\end{theorem}
\begin{proof}
The proof contains two main steps:
\begin{itemize}
\item Showing that the operators are orthonormal, i.e.
\be
\label{orto2}
F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}F^{r_{\mu'/\beta} r_{\nu'/\beta}}_{k_{\mu'}\quad l_{\nu'}}=\delta^{r_{\nu/\alpha}r_{\mu'/\beta}}\delta_{j_{\nu}k_{\mu'}}F^{r_{\mu/\alpha} r_{\nu'/\alpha}}_{i_{\mu}\quad l_{\nu'}}.
\ee
Indeed, writing the above composition explicitly and using the orthogonality relation for the operators $E^{\mu}_{i_{\mu}j_{\mu}}$, we have
\be
\begin{split}
F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}F^{r_{\mu'/\beta} r_{\nu'/\beta}}_{k_{\mu'}\quad l_{\nu'}}&=\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\nu}}}\frac{m_{\beta}}{\sqrt{m_{\mu'}m_{\nu'}}}E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E^{r_{\nu/\alpha}}_{1_{\alpha}\quad j_{\nu}}E_{k_{\mu'} \ 1_{\beta}}^{\quad r_{\mu'/\beta}}V^{(k)}E^{r_{\nu'/\beta}}_{1_{\beta}\quad l_{\nu'}}\\
&=\delta^{\nu\mu'}\delta_{j_{\nu}k_{\mu'}}\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\nu}}}\frac{m_{\beta}}{\sqrt{m_{\mu'}m_{\nu'}}}E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E_{1_{\alpha}\quad1_{\beta}}^{r_{\nu/\alpha}r_{\mu'/\beta}}V^{(k)}E^{r_{\nu'/\beta}}_{1_{\beta}\quad l_{\nu'}}.
\end{split}
\ee
Now, applying Fact~\ref{kpartr} to the operator $E_{1_{\alpha}\quad1_{\beta}}^{r_{\nu/\alpha}r_{\mu'/\beta}}$, together with Lemma~\ref{L3}, we reduce it to
\be
\begin{split}
F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}F^{r_{\mu'/\beta} r_{\nu'/\beta}}_{k_{\mu'}\quad l_{\nu'}}&=\delta^{\nu\mu'}\delta^{\alpha \beta}\delta^{r_{\nu/\alpha}r_{\mu'/\beta}}\delta_{j_{\nu}k_{\mu'}}\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\mu'}}}\frac{m_{\alpha}}{\sqrt{m_{\mu'}m_{\nu'}}}\frac{m_{\mu'}}{m_{\alpha}}E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}E^{\alpha}_{1_{\alpha}1_{\alpha}}V^{(k)}E^{r_{\nu'/\alpha}}_{1_{\alpha}\quad l_{\nu'}}\\
&=\delta^{r_{\nu/\alpha}r_{\mu'/\beta}}\delta_{j_{\nu}k_{\mu'}}\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\nu'}}}E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}E^{\alpha}_{1_{\alpha}1_{\alpha}}V^{(k)}E^{r_{\nu'/\alpha}}_{1_{\alpha}\quad l_{\nu'}}.
\end{split}
\ee
Finally, observing that $E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}E^{\alpha}_{1_{\alpha}1_{\alpha}}=E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}$, we obtain expression~\eqref{orto}.
\item Showing that the element $V^{(k)}$ generating the ideal $\mathcal{M}$, see~\eqref{idealM}, can be expressed as a linear combination of the basis elements $F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}$. Indeed, we have
\be
\begin{split}
V^{(k)}=\sum_{\mu,\nu \vdash n-k} P_{\mu}V^{(k)}P_{\nu}=\sum_{\mu,\nu \vdash n-k} \ \sum_{i_{\mu},j_{\nu}}E^{\mu}_{i_{\mu}i_{\mu}}V^{(k)}E^{\nu}_{j_{\nu}j_{\nu}},
\end{split}
\ee
since $\mathbf{1}=\sum_{\mu}P_{\mu}$, together with~\eqref{def_P}. Writing the indices $i_{\mu},j_{\nu}$ in the PRIR notation, according to Notation~\ref{not16}, we get
\be
\label{to}
\begin{split}
V^{(k)}=\sum_{\mu,\nu}\sum_{r_{\mu/\alpha},\widetilde{r}_{\nu/\beta}}\sum_{l_{\alpha},l'_{\beta}}E^{r_{\mu/\alpha}r_{\mu/\alpha}}_{l_{\alpha}\quad l_{\alpha}}V^{(k)}E^{\widetilde{r}_{\nu/\beta}\widetilde{r}_{\nu/\beta}}_{l'_{\beta}\quad l'_{\beta}}=\sum_{\mu,\nu}\sum_{r_{\mu/\alpha},\widetilde{r}_{\nu/\beta}}\sum_{l_{\alpha},l'_{\beta}}E^{r_{\mu/\alpha}r_{\mu/\alpha}}_{l_{\alpha}\quad 1_{\alpha}}E^{\alpha}_{1_{\alpha}l_{\alpha}}V^{(k)}E^{\beta}_{l'_{\beta}1_{\beta}}E^{\widetilde{r}_{\nu/\beta}\widetilde{r}_{\nu/\beta}}_{1_{\beta}\quad l'_{\beta}}.
\end{split}
\ee
Having $[E^{\alpha}_{1_{\alpha}l_{\alpha}},V^{(k)}]=[E^{\beta}_{l'_{\beta}1_{\beta}},V^{(k)}]=0$ and the orthogonality relation $E^{\alpha}_{1_{\alpha}l_{\alpha}}E^{\beta}_{l'_{\beta}1_{\beta}}=\delta^{\alpha\beta}\delta_{l_{\alpha}l'_{\beta}}E^{\alpha}_{1_{\alpha}1_{\alpha}}$, we reduce~\eqref{to} to
\be
\begin{split}
V^{(k)}&=\sum_{\mu,\nu} \ \sum_{r_{\mu/\alpha},\widetilde{r}_{\nu/\alpha}}\sum_{l_{\alpha}}E^{r_{\mu/\alpha}r_{\mu/\alpha}}_{l_{\alpha}\quad 1_{\alpha}}V^{(k)}E^{\widetilde{r}_{\nu/\alpha}\widetilde{r}_{\nu/\alpha}}_{1_{\alpha}\quad l_{\alpha}}=\sum_{\mu,\nu} \ \sum_{r_{\mu/\alpha},\widetilde{r}_{\nu/\alpha}}\sum_{l_{\alpha}}\frac{\sqrt{m_{\mu}m_{\nu}}}{m_{\alpha}}\left(\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\nu}}} E^{r_{\mu/\alpha}r_{\mu/\alpha}}_{l_{\alpha}\quad 1_{\alpha}}V^{(k)}E^{\widetilde{r}_{\nu/\alpha}\widetilde{r}_{\nu/\alpha}}_{1_{\alpha}\quad l_{\alpha}}\right)\\
&=\sum_{\mu,\nu} \ \sum_{r_{\mu/\alpha},\widetilde{r}_{\nu/\alpha}}\sum_{l_{\alpha}}\frac{\sqrt{m_{\mu}m_{\nu}}}{m_{\alpha}} F^{\quad r_{\mu/\alpha} \ \widetilde{r}_{\nu/\alpha}}_{l_{\alpha} \ r_{\mu/\alpha} \ \widetilde{r}_{\nu/\alpha} \ l_{\alpha}}.
\end{split}
\ee
In the above we used the representation of $F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}$ in the full PRIR basis:
\be
\label{full}
F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad j_{\nu}} \rightarrow F^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ k'_{\gamma}}
\ee
since $i_{\mu}=(r_{\mu/\beta},k_{\beta})$ and $j_{\nu}=(r_{\nu/\gamma},k'_{\gamma})$. This finishes the proof.
\end{itemize}
\end{proof}
Next we focus on the relations analogous to~\eqref{actionE} for the basis elements $F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad j_{\nu}}$ and the operators $V^{(k)}$, $V_{\tau}$, where $\tau \in \mathcal{S}_{n,k}\equiv \frac{S(n-k)}{S(n-2k)}$. To have all the required tools, let us first rewrite the expressions from~\eqref{actionE} in PRIR notation, for a specific choice of indices and partitions $\mu \vdash n-k$ and $\alpha\vdash n-2k$:
\be
\label{actionEE}
\begin{split}
&\forall \tau\in S(n-k)\qquad V_{\tau} E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}=\sum_{l_{\mu}}\phi^{\mu}_{l_{\mu}i_{\mu}}(\tau)E_{l_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}\\
&\forall \tau\in S(n-k)\qquad E^{r_{\nu/\alpha}}_{1_{\alpha}\quad j_{\nu}}V_{\tau^{-1}}=\sum_{k_{\nu}}\phi_{j_{\nu}k_{\nu}}^{\nu}(\tau^{-1})E_{1_{\alpha}\quad k_{\nu}}^{r_{\nu/\alpha}}
\end{split}
\ee
where $\phi^{\mu}_{l_{\mu}i_{\mu}}(\tau), \phi_{j_{\nu}k_{\nu}}^{\nu}(\tau^{-1})$ are the matrix elements of $V_{\tau}, V_{\tau^{-1}}$ in irreducible basis expressed in the PRIR notation, see~\eqref{blaa} and Section~\ref{preliminary}.
Having the above, we are in a position to prove the following
\begin{lemma}
\label{actionE'}
Let us take basis operators for the ideal $\mathcal{M}$ given through Theorem~\ref{tmbas}, together with~\eqref{full}. Then for the operator $V^{(k)}$ defined in~\eqref{parV} and an arbitrary permutation operator $V_{\tau}$, for $\tau \in S(n-k)$, the following relations hold:
\be
\label{eqactionE'}
F^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}V^{(k)}=\sum_{\mu'}\sum_{r_{\mu'/\gamma}} \frac{\sqrt{m_{\nu}m_{\mu'}}}{m_{\gamma}} F^{\quad r_{\mu/\gamma} \ r_{\mu'/\gamma}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\mu'/\gamma} \ l_{\gamma}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}
\ee
and
\be
\label{eqactionE'1}
F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad j_{\nu}}V_{\tau}=\sum_{k_{\nu}}\phi^{\nu}_{j_{\nu}k_{\nu}}(\tau) F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad k_{\nu}},
\ee
where $\phi^{\nu}_{j_{\nu}k_{\nu}}(\tau) $ are the matrix elements of $V_{\tau}$ in the irreducible basis expressed in the PRIR notation introduced in Section~\ref{preliminary}.
\end{lemma}
\begin{proof}
First let us calculate the action of $V^{(k)}$ from the right on the basis element $F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}$. Using expression~\eqref{full} we have
\be
\label{x1}
\begin{split}
F^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}V^{(k)}=\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\nu}}}E^{r_{\mu/\beta}r_{\mu/\alpha}}_{k_{\beta}\quad 1_{\alpha}}V^{(k)}E^{r_{\nu/\alpha}r_{\nu/\gamma}}_{1_{\alpha}\quad l_{\gamma}}V^{(k)}=\sqrt{\frac{m_{\nu}}{m_{\mu}}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}E^{r_{\mu/\beta}r_{\mu/\gamma}}_{k_{\beta}\quad 1_{\gamma}}E^{\gamma}_{1_{\gamma}l_{\gamma}}V^{(k)},
\end{split}
\ee
where in the second equality we used Fact~\ref{kpartr} and Lemma~\ref{L3}. Now, decomposing the identity acting on $n-k$ systems in the PRIR basis
\be
\mathbf{1}=\sum_{\mu'\vdash n-k}P_{\mu'}=\sum_{\mu'}\sum_{r_{\mu'/\alpha'}}\sum_{s_{\alpha'}}E_{s_{\alpha'}\quad s_{\alpha'}}^{r_{\mu'/\alpha'} r_{\mu'/\alpha'}}\qquad \alpha'\vdash n-2k,
\ee
and multiplying the right-hand side of~\eqref{x1} by it, we obtain
\be
\label{x2}
\begin{split}
F^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}V^{(k)}=\sqrt{\frac{m_{\nu}}{m_{\mu}}}\sum_{\mu'}\sum_{r_{\mu'/\alpha'}}\sum_{s_{\alpha'}}E^{r_{\mu/\beta}r_{\mu/\gamma}}_{k_{\beta}\quad 1_{\gamma}}V^{(k)}E^{\gamma}_{1_{\gamma}l_{\gamma}}E_{s_{\alpha'}\quad s_{\alpha'}}^{r_{\mu'/\alpha'} r_{\mu'/\alpha'}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}
\end{split}
\ee
since $\left[E^{\gamma}_{1_{\gamma}l_{\gamma}},V^{(k)}\right]=0$. Moreover, we have $E^{\gamma}_{1_{\gamma}l_{\gamma}}E_{s_{\alpha'}\quad s_{\alpha'}}^{r_{\mu'/\alpha'} r_{\mu'/\alpha'}}=\delta^{\alpha'\gamma}\delta_{s_{\alpha'}l_{\gamma}}E_{1_{\gamma}\quad l_{\gamma}}^{r_{\mu'/\gamma} r_{\mu'/\gamma}}$. Substituting this into~\eqref{x2} we write:
\be
\begin{split}
F^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}V^{(k)}&=\sqrt{\frac{m_{\nu}}{m_{\mu}}}\sum_{\mu'}\sum_{r_{\mu'/\gamma}}E^{r_{\mu/\beta}r_{\mu/\gamma}}_{k_{\beta}\quad 1_{\gamma}}V^{(k)}E^{r_{\mu'/\gamma}r_{\mu'/\gamma}}_{1_{\gamma}\quad l_{\gamma}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}=\sum_{\mu'}\sum_{r_{\mu'/\gamma}} \frac{\sqrt{m_{\nu}m_{\mu'}}}{m_{\gamma}} F^{\quad r_{\mu/\gamma} \ r_{\mu'/\gamma}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\mu'/\gamma} \ l_{\gamma}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}.
\end{split}
\ee
This proves expression~\eqref{eqactionE'}. To prove equation~\eqref{eqactionE'1} we use~\eqref{actionEE} directly, together with~\eqref{tmbas2}:
\be
\begin{split}
F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad j_{\nu}}V_{\tau}&=\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\nu}}}E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E^{r_{\nu/\alpha}}_{1_{\alpha}\quad j_{\nu}}V_{\tau}=\sum_{k_{\nu}}\phi_{j_{\nu}k_{\nu}}^{\nu}(\tau)\frac{m_{\alpha}}{\sqrt{m_{\mu}m_{\nu}}}E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E^{r_{\nu/\alpha}}_{1_{\alpha}\quad k_{\nu}}\\
&=\sum_{k_{\nu}}\phi_{j_{\nu}k_{\nu}}^{\nu}(\tau)F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad k_{\nu}}.
\end{split}
\ee
This finishes the proof.
\end{proof}
Analogously, we can evaluate the expressions~\eqref{eqactionE'}, \eqref{eqactionE'1} for the action from the left-hand side. For further purposes we write such an action of $V_{\tau}$ explicitly, for $\tau \in S(n-k)$:
\be
\label{rhs_action}
V_{\tau}F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad j_{\nu}}=\sum_{k_{\mu}}\phi_{k_{\mu}i_{\mu}}^{\mu}(\tau)F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{k_{\mu} \quad j_{\nu}}.
\ee
Using the second part of the proof of Theorem~\ref{tmbas} we can formulate the following
\begin{lemma}
\label{Vel}
The operator $V^{(k)}$ defined in~\eqref{parV} and an arbitrary permutation operator $V_{\tau}$, for $\tau \in S(n-k)$, have the following matrix elements in the operator basis from Theorem~\ref{tmbas}:
\be
\label{eqVel}
\left(V^{(k)}\right)^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}=\delta_{k_{\beta}l_{\gamma}}\delta^{r_{\mu/\alpha}r_{\mu/\beta}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}\frac{\sqrt{m_{\mu}m_{\nu}}}{m_{\alpha}},
\ee
and
\be
\label{eqVel2}
\left( V_{\tau}\right)^{r_{\mu/\alpha}r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}=\delta^{r_{\mu/\alpha}r_{\nu/\alpha}}\sqrt{\frac{m_{\mu}}{m_{\nu}}}\,\phi^{\mu}_{j_{\nu}i_{\mu}}(\tau),
\ee
where $m_{\mu},m_{\nu},m_{\alpha}$ are the multiplicities of the respective irreducible representations in the Schur-Weyl duality, and $\phi^{\mu}_{j_{\nu}i_{\mu}}(\tau)$ are the matrix elements of $V_{\tau}$ in the irreducible basis expressed in the PRIR notation introduced in Section~\ref{preliminary}.
\end{lemma}
\begin{proof}
To prove the statement of the lemma we have to compute the overlap of $V^{(k)}$ with $F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad j_{\nu}}$ written in the PRIR basis:
\be
\begin{split}
\left(V^{(k)}\right)^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}=\frac{1}{m_{\alpha}}\tr\left[V^{(k)} F^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}\right]=\frac{1}{\sqrt{m_{\mu}m_{\nu}}}\tr\left[V^{(k)} E^{r_{\mu/\beta}r_{\mu/\alpha}}_{k_{\beta}\quad 1_{\alpha}}V^{(k)}E^{r_{\nu/\alpha}r_{\nu/\gamma}}_{1_{\alpha}\quad l_{\gamma}}\right].
\end{split}
\ee
Applying Fact~\ref{kpartr} and Lemma~\ref{L3} we reduce this to
\be
\begin{split}
\left(V^{(k)}\right)^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}&=\delta^{r_{\mu/\alpha}r_{\mu/\beta}}\frac{1}{\sqrt{m_{\mu}m_{\nu}}}\frac{m_{\mu}}{m_{\alpha}}\tr\left[E_{k_{\alpha}1_{\alpha}}^{\alpha} V^{(k)}E^{r_{\nu/\alpha}r_{\nu/\gamma}}_{1_{\alpha}\quad l_{\gamma}}\right]\\
&=\delta^{r_{\mu/\alpha}r_{\mu/\beta}}\frac{1}{m_{\alpha}}\sqrt{\frac{m_{\mu}}{m_{\nu}}}\tr\left[E_{k_{\alpha}1_{\alpha}}^{\alpha} E^{r_{\nu/\alpha}r_{\nu/\gamma}}_{1_{\alpha}\quad l_{\gamma}} \right],
\end{split}
\ee
since only the operator $V^{(k)}$ acts non-trivially on the last $k$ systems. Now, let us observe that the operator $E_{k_{\alpha}1_{\alpha}}^{\alpha}$ acts on the first $n-2k$ systems, while the operator $E^{r_{\nu/\alpha}r_{\nu/\gamma}}_{1_{\alpha}\quad l_{\gamma}}$ acts on $n-k$ systems, so
\be
\begin{split}
\left(V^{(k)}\right)^{\quad r_{\mu/\alpha} \ r_{\nu/\alpha}}_{k_{\beta} \ r_{\mu/\beta} \ r_{\nu/\gamma} \ l_{\gamma}}&=\delta^{r_{\mu/\alpha}r_{\mu/\beta}}\frac{1}{m_{\alpha}}\sqrt{\frac{m_{\mu}}{m_{\nu}}}\tr\left[E_{k_{\alpha}1_{\alpha}}^{\alpha}\tr_{(k)}\left( E^{r_{\nu/\alpha}r_{\nu/\gamma}}_{1_{\alpha}\quad l_{\gamma}} \right) \right]\\
&=\delta^{r_{\mu/\alpha}r_{\mu/\beta}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}\frac{\sqrt{m_{\mu}m_{\nu}}}{m_{\alpha}m_{\gamma}}\tr\left[E_{k_{\alpha}1_{\alpha}}^{\alpha}E_{1_{\gamma}l_{\gamma}}^{\gamma} \right]\\
&=\delta^{r_{\mu/\alpha}r_{\mu/\beta}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}\frac{\sqrt{m_{\mu}m_{\nu}}}{m_{\alpha}^2}\tr E_{k_{\alpha}l_{\alpha}}^{\alpha}\\
&=\delta^{r_{\mu/\alpha}r_{\mu/\beta}}\delta^{r_{\nu/\alpha}r_{\nu/\gamma}}\delta_{k_{\alpha}l_{\gamma}}\frac{\sqrt{m_{\mu}m_{\nu}}}{m_{\alpha}}.
\end{split}
\ee
In the second equality we applied Lemma~\ref{L3}, while in the fourth we used the property from~\eqref{tr_prop}. Now we evaluate the matrix elements of $V_{\tau}$. Using expression~\eqref{rhs_action} we write
\be
\begin{split}
\left( V_{\tau}\right)^{r_{\mu/\alpha}r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}&=\frac{1}{m_{\alpha}}\tr\left[V_{\tau} F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu} \quad j_{\nu}}\right]=\frac{1}{m_{\alpha}}\sum_{k_{\mu}}\phi_{k_{\mu}i_{\mu}}^{\mu}(\tau)\tr\left[F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{k_{\mu} \quad j_{\nu}}\right]\\
&=\frac{1}{\sqrt{m_{\mu}m_{\nu}}}\sum_{k_{\mu}}\phi_{k_{\mu}i_{\mu}}^{\mu}(\tau)\tr\left[E_{k_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E^{r_{\nu/\alpha}}_{1_{\alpha}\quad j_{\nu}}\right]\\
&=\frac{1}{\sqrt{m_{\mu}m_{\nu}}}\sum_{k_{\mu}}\phi_{k_{\mu}i_{\mu}}^{\mu}(\tau)\tr\left[E_{k_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}E^{r_{\nu/\alpha}}_{1_{\alpha}\quad j_{\nu}}\right]\\
&=\delta^{r_{\mu/\alpha}r_{\nu/\alpha}}\frac{1}{\sqrt{m_{\mu}m_{\nu}}}\sum_{k_{\mu}}\phi_{k_{\mu}i_{\mu}}^{\mu}(\tau)\tr\left(E^{\mu}_{k_{\mu}j_{\mu}} \right).
\end{split}
\ee
Knowing that $\tr\left(E^{\mu}_{k_{\mu}j_{\mu}} \right)=\delta_{k_{\mu}j_{\mu}}m_{\mu}=\delta^{\mu\nu}\delta_{k_{\mu}j_{\nu}}m_{\mu}$, and performing the remaining sum over $k_{\mu}$, we simplify to
\be
\left( V_{\tau}\right)^{r_{\mu/\alpha}r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}}=\delta^{r_{\mu/\alpha}r_{\nu/\alpha}}\sqrt{\frac{m_{\mu}}{m_{\nu}}}\,\phi^{\mu}_{j_{\nu}i_{\mu}}(\tau).
\ee
This finishes the proof.
\end{proof}
Having the description of the basis elements of the ideal $\mathcal{M}$ and their action properties, we are ready to calculate the matrix elements of the multi-port teleportation operator~\eqref{PBT1}.
\begin{theorem}
\label{kPBTmat}
The matrix elements of the MPBT operator~\eqref{PBT1}, with number of ports $N$ and local dimension $d$, in the operator basis from Theorem~\ref{tmbas} are of the form
\be
\label{kPBTmateq}
(\rho)_{i_{\mu}\quad j_{\nu}}^{r_{\mu/\alpha}r_{\nu/\beta}}=\frac{k!\binom{N}{k}}{d^N}\frac{m_{\mu}}{m_{\alpha}}\frac{d_{\alpha}}{d_{\mu}}\delta^{r_{\mu/\alpha}r_{\nu/\beta}}\delta_{i_{\mu}j_{\nu}}.
\ee
The numbers $m_{\mu},m_{\alpha}$ and $d_{\mu},d_{\alpha}$ denote the respective multiplicities and dimensions of the irreps in the Schur-Weyl duality, labelled by $\alpha \vdash n-2k$ and $\mu \vdash n-k$, such that $\mu\in\alpha$.
\end{theorem}
\begin{proof}
The proof proceeds similarly as the proof of Lemma~\ref{Vel}, namely we compute
\be
\label{act0}
(\rho)_{i_{\mu}\quad j_{\nu}}^{r_{\mu/\alpha}r_{\nu/\beta}}=\frac{1}{m_{\alpha}}\tr\left[\rho F^{r_{\mu/\alpha} r_{\nu/\alpha}}_{i_{\mu}\quad j_{\nu}} \right]=\frac{1}{d^N}\frac{1}{\sqrt{m_{\mu}m_{\nu}}}\sum_{\tau \in \mathcal{S}_{n,k}}\tr\left[V^{(k)}V_{\tau} E_{i_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E^{r_{\nu/\alpha}}_{1_{\alpha} \quad j_{\nu}}V_{\tau^{-1}}\right],
\ee
where the sum runs over all permutations $\tau$ from the coset $\mathcal{S}_{n,k}\equiv \frac{S(n-k)}{S(n-2k)}$. Substituting~\eqref{actionEE} into~\eqref{act0} we have
\be
\label{rho-sro}
(\rho)_{i_{\mu}\quad j_{\nu}}^{r_{\mu/\alpha}r_{\nu/\beta}}=\frac{1}{d^N}\frac{1}{\sqrt{m_{\mu}m_{\nu}}}\sum_{\tau \in \mathcal{S}_{n,k}}\sum_{l_{\mu}}\sum_{k_{\nu}}\phi^{\mu}_{l_{\mu}i_{\mu}}(\tau)\phi_{j_{\nu}k_{\nu}}^{\nu}(\tau^{-1})\tr\left[V^{(k)}E_{l_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E_{1_{\alpha}\quad k_{\nu}}^{r_{\nu/\alpha}}\right].
\ee
Using Fact~\ref{kpartr} we write the following chain of equalities:
\be
\begin{split}
\tr\left[V^{(k)}E_{l_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E_{1_{\alpha}\quad k_{\nu}}^{r_{\nu/\alpha}}\right]&=\tr\left[\tr_{(k)}\left(E_{l_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}} \right) V^{(k)}E_{1_{\alpha}\quad k_{\nu}}^{r_{\nu/\alpha}} \right]=\tr\left[\tr_{(k)}\left(E_{l_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}} \right) E_{1_{\alpha}\quad k_{\nu}}^{r_{\nu/\alpha}} \right]\\
&=\tr\left[\tr_{(k)}\left(E_{l_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}} \right) \tr_{(k)}\left(E_{1_{\alpha}\quad k_{\nu}}^{r_{\nu/\alpha}}\right) \right],
\end{split}
\ee
where $\tr_{(k)}=\tr_{n-2k+1,\ldots,n-k}$. Expanding the rest of the indices in PRIR notation, i.e. $l_{\mu}=(s_{\mu/\beta},p_{\beta}), \ k_{\nu}=(s_{\nu/\beta'},q_{\beta'})$, and applying Lemma~\ref{L3} we have
\be
\begin{split}
\tr_{(k)}\left(E_{p_{\beta} \quad1_{\alpha}}^{s_{\mu/\beta} r_{\mu/\alpha}} \right)=\delta^{s_{\mu/\beta}r_{\mu/\alpha}}\frac{m_{\mu}}{m_{\alpha}}E^{\alpha}_{p_{\alpha}1_{\alpha}} \quad \tr_{(k)}\left(E_{1_{\alpha} \quad q_{\beta'}}^{r_{\nu/\alpha} s_{\nu/\beta'}}\right)=\delta^{r_{\nu/\alpha}s_{\nu/\beta'}}\frac{m_{\nu}}{m_{\alpha}}E^{\alpha}_{1_{\alpha}q_{\alpha}}.
\end{split}
\ee
Now we substitute the above into~\eqref{rho-sro}, obtaining
\be
\begin{split}
(\rho)_{i_{\mu}\quad j_{\nu}}^{r_{\mu/\alpha}r_{\nu/\beta}}&=\frac{1}{d^N}\frac{\sqrt{m_{\mu}m_{\nu}}}{m^2_{\alpha}}\sum_{\tau \in \mathcal{S}_{n,k}}\sum_{r_{\mu/\alpha},p_{\alpha}}\sum_{r_{\nu/\alpha},q_{\alpha}}\left( \phi^{\nu}\right) _{j_{\nu} \ q_{\alpha}}^{ \quad r_{\nu/\alpha}}(\tau^{-1})\left( \phi^{\mu}\right)_{p_{\alpha} \ \ i_{\mu}}^{r_{\mu/\alpha}}(\tau)\tr\left(E^{\alpha}_{p_{\alpha}1_{\alpha}} E^{\alpha}_{1_{\alpha}q_{\alpha}}\right)\\
&=\frac{1}{d^N}\frac{\sqrt{m_{\mu}m_{\nu}}}{m_{\alpha}}\sum_{\tau \in \mathcal{S}_{n,k}}\sum_{r_{\mu/\alpha},p_{\alpha}}\sum_{r_{\nu/\alpha},q_{\alpha}}\delta_{p_{\alpha}q_{\alpha}}\left( \phi^{\nu}\right) _{j_{\nu} \ q_{\alpha}}^{ \quad r_{\nu/\alpha}}(\tau^{-1})\left( \phi^{\mu}\right)_{p_{\alpha} \ \ i_{\mu}}^{r_{\mu/\alpha}}(\tau)\\
&=\frac{1}{d^N}\frac{\sqrt{m_{\mu}m_{\nu}}}{m_{\alpha}}\sum_{\tau \in \mathcal{S}_{n,k}}\sum_{r_{\mu/\alpha},r_{\nu/\alpha}}\sum_{q_{\alpha}}\left( \phi^{\nu}\right) _{j_{\nu} \ q_{\alpha}}^{ \quad r_{\nu/\alpha}}(\tau^{-1})\left( \phi^{\mu}\right)_{q_{\alpha} \ \ i_{\mu}}^{r_{\mu/\alpha}}(\tau).
\end{split}
\ee
In the above we used the orthonormality relation, together with the trace property~\eqref{tr_prop}, so $\tr\left(E^{\alpha}_{p_{\alpha}1_{\alpha}} E^{\alpha}_{1_{\alpha}q_{\alpha}}\right)=\tr E^{\alpha}_{p_{\alpha}q_{\alpha}}=m_{\alpha}\delta_{p_{\alpha}q_{\alpha}}$. Finally, applying the summation rule from Proposition~\ref{summation0} we arrive at
\be
(\rho)_{i_{\mu}\quad j_{\nu}}^{r_{\mu/\alpha}r_{\nu/\beta}}=\frac{1}{d^N}|\mathcal{S}_{n,k}|\frac{m_{\mu}}{m_{\alpha}}\frac{d_{\alpha}}{d_{\mu}}\delta^{r_{\mu/\alpha}r_{\nu/\beta}}\delta_{i_{\mu}j_{\nu}}=\frac{k!\binom{N}{k}}{d^N}\frac{m_{\mu}}{m_{\alpha}}\frac{d_{\alpha}}{d_{\mu}}\delta^{r_{\mu/\alpha}r_{\nu/\beta}}\delta_{i_{\mu}j_{\nu}}.
\ee
This finishes the proof.
\end{proof}
Let us check the consequences of Theorem~\ref{kPBTmat}. Expression~\eqref{kPBTmateq} tells us that the multi-port teleportation operator $\rho$ is diagonal in the operator basis given in Theorem~\ref{tmbas}. This means that $\rho$ can be expressed as
\be
\label{rhodec0}
\rho=\frac{k!\binom{N}{k}}{d^N}\sum_{\alpha}\sum_{\mu\in\alpha}\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}\frac{m_{\mu}}{m_{\alpha}}\frac{d_{\alpha}}{d_{\mu}}F^{r_{\mu/\alpha}r_{\mu/\alpha}}_{k_{\mu} \quad k_{\mu}}=\sum_{\alpha}\sum_{\mu\in\alpha}\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}\lambda_{\mu}(\alpha)F^{r_{\mu/\alpha}r_{\mu/\alpha}}_{k_{\mu} \quad k_{\mu}},
\ee
where we introduced the quantity
\be
\label{rhodec0a}
\lambda_{\mu}(\alpha)\equiv \frac{k!\binom{N}{k}}{d^N}\frac{m_{\mu}}{m_{\alpha}}\frac{d_{\alpha}}{d_{\mu}}.
\ee
Now we can formulate the following
\begin{definition}
\label{efy}
Having basis elements from~\eqref{tmbas2} of Theorem~\ref{tmbas}, we define the following operators
\be
\forall \alpha \ \forall \mu\in\alpha \quad F_{\mu}(\alpha)\equiv \sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}F^{r_{\mu/\alpha}r_{\mu/\alpha}}_{k_{\mu} \quad k_{\mu}}.
\ee
\end{definition}
Having the above definition we prove:
\begin{lemma}
\label{efy2}
The operators $F_{\mu}(\alpha)$ for $\alpha\vdash N-k$ and $\mu\in\alpha$ are mutually orthogonal projectors and sum up to the identity $\mathbf{1}_{\mathcal{M}}$ on the ideal $\mathcal{M}$.
\end{lemma}
\begin{proof}
First, let us check that the operators $F_{\mu}(\alpha)$ given through Definition~\ref{efy} are indeed mutually orthogonal projectors. Indeed, using~\eqref{orto} we have
\be
\begin{split}
F_{\mu}(\alpha)F_{\nu}(\beta)&=\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}\sum_{r_{\nu/\beta}}\sum_{l_{\nu}}F^{r_{\mu/\alpha}r_{\mu/\alpha}}_{k_{\mu} \quad k_{\mu}}F^{r_{\nu/\beta}r_{\nu/\beta}}_{l_{\nu} \quad l_{\nu}}=\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}\sum_{r_{\nu/\beta}}\sum_{l_{\nu}}\delta^{r_{\mu/\alpha}r_{\nu/\beta}}\delta_{k_{\mu}l_{\nu}}F^{r_{\mu/\alpha}r_{\nu/\beta}}_{k_{\mu} \quad l_{\nu}}\\
&=\delta^{\mu\nu}\delta^{\alpha\beta}\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}\sum_{r_{\nu/\beta}}\sum_{l_{\nu}}\delta^{r_{\mu/\alpha}r_{\nu/\beta}}\delta_{k_{\mu}l_{\nu}}F^{r_{\mu/\alpha}r_{\nu/\beta}}_{k_{\mu} \quad l_{\nu}}=\delta^{\mu\nu}\delta^{\alpha\beta}\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}F^{r_{\mu/\alpha}r_{\mu/\alpha}}_{k_{\mu} \quad k_{\mu}}\\
&=\delta^{\mu\nu}\delta^{\alpha\beta}F_{\mu}(\alpha),
\end{split}
\ee
since for fixed $\mu,\nu$ and $\alpha,\beta$ we use the property $\delta^{r_{\mu/\alpha}r_{\nu/\beta}}\equiv \delta^{\mu\nu}\delta^{\alpha\beta}\delta^{r_{\mu/\alpha}r_{\nu/\beta}}$, see Notation~\ref{not0}.
To prove $\sum_{\alpha}\sum_{\mu\in\alpha}F_{\mu}(\alpha)=\mathbf{1}_{\mathcal{M}}$ we must show that $\forall x\in\mathcal{M}$ we have $x\sum_{\alpha}\sum_{\mu\in\alpha}F_{\mu}(\alpha)=\sum_{\alpha}\sum_{\mu\in\alpha}F_{\mu}(\alpha)x=x$. Expanding $x$ in the operator basis from Theorem~\ref{tmbas}
\be
x=\sum_{\alpha',\beta'}\sum_{\mu'\in\alpha'}\sum_{\nu'\in\beta'}\sum_{i_{\mu'} \ j_{\nu'}}x_{i_{\mu'}\quad j_{\nu'}}^{r_{\mu'/\alpha'} r_{\nu'/\beta'}}F_{i_{\mu'}\quad j_{\nu'}}^{r_{\mu'/\alpha'} r_{\nu'/\beta'}},\qquad x_{i_{\mu'}\quad j_{\nu'}}^{r_{\mu'/\alpha'} r_{\nu'/\beta'}}\in \mathbb{C},
\ee
and using expression~\eqref{orto} we get the statement.
\end{proof}
Finally, thanks to Lemma~\ref{efy2} and the decomposition~\eqref{rhodec0}, together with~\eqref{rhodec0a}, we formulate the spectral theorem for the multi-port teleportation operator (the multiplicities given below come from Lemma~\ref{A3}):
\begin{theorem}
\label{eig_dec_rho}
The MPBT operator given through~\eqref{PBT1} has the following spectral decomposition
\be
\rho=\sum_{\alpha}\sum_{\mu\in\alpha}\lambda_{\mu}(\alpha)F_{\mu}(\alpha),
\ee
where eigenprojectors $F_{\mu}(\alpha)$ are given in Definition~\ref{efy} with corresponding eigenvalues $\lambda_{\mu}(\alpha)$ from~\eqref{rhodec0a} with multiplicities $m_{\mu/\alpha}m_{\alpha}d_{\mu}$.
\end{theorem}
\noindent
That indeed $\rho F_{\mu}(\alpha)=\lambda_{\mu}(\alpha)F_{\mu}(\alpha)$ follows directly from the orthogonality property of the operators $F_{\mu}(\alpha)$ proven in Lemma~\ref{efy2}.
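As a quick numerical consistency check of Theorem~\ref{eig_dec_rho}, note that, since every signal state has unit trace, one expects $\tr\rho=\sum_{\alpha}\sum_{\mu\in\alpha}\lambda_{\mu}(\alpha)\,m_{\mu/\alpha}m_{\alpha}d_{\mu}=k!\binom{N}{k}$, the number of port tuples. The Python sketch below (our own illustrative helper functions, not part of the protocol; here we parametrize $\mu\vdash N$ and $\alpha\vdash N-k$, i.e. $n=N+k$) verifies this identity for small parameters, computing $d_{\mu}$ via the hook-length formula, $m_{\mu}$ via the Weyl dimension formula for $GL(d)$, and $m_{\mu/\alpha}$ by counting paths on the Young lattice.

```python
import math
from fractions import Fraction

def partitions(n, cap=None):
    """Generate partitions of n as non-increasing tuples."""
    cap = n if cap is None else cap
    if n == 0:
        yield ()
        return
    for first in range(min(n, cap), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def boxes(shape):
    """(row, col, hook length) for every box of a Young diagram."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])] if shape else []
    return [(i, j, shape[i] - j + conj[j] - i - 1)
            for i in range(len(shape)) for j in range(shape[i])]

def d_irrep(shape):
    """Dimension d_mu of the S(n) irrep: hook-length formula."""
    den = 1
    for _, _, h in boxes(shape):
        den *= h
    return math.factorial(sum(shape)) // den

def m_irrep(shape, d):
    """Multiplicity m_mu in Schur-Weyl duality: Weyl dimension formula for GL(d)."""
    if len(shape) > d:
        return 0
    val = Fraction(1)
    for i, j, h in boxes(shape):
        val *= Fraction(d + j - i, h)
    return int(val)

def num_paths(alpha, mu, k):
    """m_{mu/alpha}: number of k-step paths from alpha to mu on the Young lattice."""
    if k == 0:
        return 1 if alpha == mu else 0
    rows = list(alpha) + [0]
    total = 0
    for i in range(len(rows)):
        if i == 0 or rows[i] < rows[i - 1]:   # a box can be added in row i
            grown = rows.copy()
            grown[i] += 1
            total += num_paths(tuple(r for r in grown if r), mu, k - 1)
    return total

def trace_rho(N, d, k):
    """Sum of lambda_mu(alpha) weighted by the multiplicity m_{mu/alpha} m_alpha d_mu."""
    pref = math.factorial(k) * math.comb(N, k) / d**N
    total = 0.0
    for alpha in partitions(N - k):
        m_a, d_a = m_irrep(alpha, d), d_irrep(alpha)
        if m_a == 0:
            continue
        for mu in partitions(N):
            m_m, d_m = m_irrep(mu, d), d_irrep(mu)
            p = num_paths(alpha, mu, k)
            if m_m == 0 or p == 0:
                continue
            lam = pref * (m_m / m_a) * (d_a / d_m)   # eigenvalue lambda_mu(alpha)
            total += lam * p * m_a * d_m             # times its multiplicity
    return total
```

For instance, $N=3$, $d=2$, $k=2$ gives the weighted sum $6=2!\binom{3}{2}$, and the result is independent of $d$, as expected.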
At the end of this section we prove a few additional lemmas on the projectors $F_{\mu}(\alpha)$ given in Definition~\ref{efy}. Defining the symbol $\tr_{(2k)}\equiv \tr_{n-2k+1,\ldots,n}$, which is the partial trace over the last $2k$ systems, we have the following
\begin{lemma}
\label{A1}
For a partially transposed permutation operator $V^{(k)}$ from~\eqref{parV} and operators $F_{\mu}(\alpha)$ given through Definition~\ref{efy} the following holds:
\be
\forall \alpha\vdash n-2k \quad \forall \mu\in \alpha \quad \tr_{(2k)}\left[V^{(k)} F_{\mu}(\alpha)\right]=m_{\mu/\alpha}\frac{m_{\mu}}{m_{\alpha}}P_{\alpha},
\ee
where the numbers $m_{\mu},m_{\alpha}$ denote respective multiplicities in the Schur-Weyl duality, while $P_{\alpha}$ is a Young projector on $n-2k$ particles.
\end{lemma}
\begin{proof}
Using definition of the operator $F_{\mu}(\alpha)$ and expression~\eqref{tmbas2} we write
\be
\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}\tr_{(2k)}\left[V^{(k)} F^{r_{\mu/\alpha}r_{\mu/\alpha}}_{k_{\mu} \quad k_{\mu}}\right]=\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}\frac{m_{\alpha}}{m_{\mu}}\tr_{(2k)}\left[V^{(k)}E_{k_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E^{r_{\mu/\alpha}}_{1_{\alpha} \quad k_{\mu}}\right].
\ee
Applying Fact~\ref{kpartr}, Lemma~\ref{L3} and the decomposition $k_{\mu}=(r_{\mu/\beta},k_{\beta})$ to the operator $E_{k_{\mu} \quad 1_{\alpha}}^{\quad r_{\mu/\alpha}}=E_{k_{\beta} \ \ 1_{\alpha}}^{r_{\mu/\beta} \ r_{\mu/\alpha}}$, we simplify the above equation to
\be
\begin{split}
\sum_{r_{\mu/\alpha}}\sum_{k_{\alpha}}\tr_{(2k)}\left[E^{\alpha}_{k_{\alpha}1_{\alpha}}V^{(k)}E^{r_{\mu/\alpha} r_{\mu/\alpha}}_{1_{\alpha} \quad k_{\alpha}} \right]&=\sum_{r_{\mu/\alpha}}\sum_{k_{\alpha}}\tr_{(k)}\left[E^{\alpha}_{k_{\alpha}1_{\alpha}}E^{r_{\mu/\alpha} r_{\mu/\alpha}}_{1_{\alpha} \quad k_{\alpha}} \right]=\sum_{r_{\mu/\alpha}} \sum_{k_{\alpha}}\tr_{(k)}\left[E^{r_{\mu/\alpha} r_{\mu/\alpha}}_{k_{\alpha} \quad k_{\alpha}} \right]\\
&=m_{\mu/\alpha}\frac{m_{\mu}}{m_{\alpha}}\sum_{k_{\alpha}}E^{\alpha}_{k_{\alpha}k_{\alpha}}=m_{\mu/\alpha}\frac{m_{\mu}}{m_{\alpha}}P_{\alpha},
\end{split}
\ee
where in the last equality we used the definition of projectors $P_{\alpha}$ given in~\eqref{def_P}.
\end{proof}
Further, below the proof of Lemma~\ref{simple}, we discuss an alternative method of proving the above lemma.
\begin{lemma}
\label{A2}
For operators $F_{\mu}(\alpha)$ given through Definition~\ref{efy} the following holds:
\be
\label{A2eq}
\forall \alpha\vdash n-2k \quad \forall \mu\in \alpha \quad \tr_{(k)}\left( F_{\mu}(\alpha)\right)=m_{\mu/\alpha}\frac{m_{\alpha}}{m_{\mu}}P_{\mu},
\ee
where the numbers $m_{\mu},m_{\alpha}$ denote the respective multiplicities of the irreps in the Schur-Weyl duality, $m_{\mu/\alpha}$ denotes the number of paths on the reduced Young lattice by which the diagram $\mu$ can be obtained from the diagram $\alpha$, while $P_{\mu}$ is a Young projector on $n-k$ particles.
\end{lemma}
\begin{proof}
The proof is based on the straightforward calculations and observations made in the proof of Lemma~\ref{A1}. Using Definition~\ref{efy} we have
\be
\begin{split}
\tr_{(k)}\left( F_{\mu}(\alpha)\right)&=\frac{m_{\alpha}}{m_{\mu}}\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}\tr_{(k)}\left(E_{k_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}V^{(k)}E^{r_{\mu/\alpha}}_{1_{\alpha} \quad k_{\mu}} \right)=\frac{m_{\alpha}}{m_{\mu}}\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}} E_{k_{\mu} \ 1_{\alpha}}^{\quad r_{\mu/\alpha}}E^{r_{\mu/\alpha}}_{1_{\alpha} \quad k_{\mu}}\\
&=m_{\mu/\alpha}\frac{m_{\alpha}}{m_{\mu}}\sum_{k_{\mu}}E^{\mu}_{k_{\mu}k_{\mu}}=m_{\mu/\alpha}\frac{m_{\alpha}}{m_{\mu}}P_{\mu},
\end{split}
\ee
where in the last equality we used the definition of projectors $P_{\mu}$ given in~\eqref{def_P}.
\end{proof}
\begin{lemma}
\label{A3}
For operators $F_{\mu}(\alpha)$ given through Definition~\ref{efy} the following holds:
\be
\forall \alpha\vdash n-2k \quad \forall \mu\in \alpha \quad \tr\left( F_{\mu}(\alpha)\right)=m_{\mu/\alpha}m_{\alpha}d_{\mu},
\ee
where the numbers $m_{\mu},m_{\alpha}$ denote the respective multiplicities of irreps in the Schur-Weyl duality, $d_{\mu}$ stands for the dimension of the irrep $\mu$, and $m_{\mu/\alpha}$ denotes the number of paths on the reduced Young lattice by which the diagram $\mu$ can be obtained from the diagram $\alpha$.
\end{lemma}
\begin{proof}
To compute the trace of $F_{\mu}(\alpha)$ it is enough to take the trace of the right-hand side of~\eqref{A2eq} of Lemma~\ref{A2}, knowing that $\tr P_{\mu}=m_{\mu}d_{\mu}$.
\end{proof}
\begin{lemma}
\label{simple}
For operators $F_{\mu}(\alpha)$ given through Definition~\ref{efy} and operator $V^{(k)}$ defined in~\eqref{parV}, the following holds:
\be
\label{cos}
V^{(k)}F_{\mu}(\alpha)=V^{(k)}P_{\alpha}P_{\mu}.
\ee
\end{lemma}
\begin{proof}
First let us write explicitly the left-hand side of~\eqref{cos} using Definition~\ref{efy} and Lemma~\ref{L3}:
\be
\label{p1}
\begin{split}
V^{(k)}F_{\mu}(\alpha)&=V^{(k)}\sum_{r_{\mu/\alpha}}\sum_{k_{\mu}}F^{r_{\mu/\alpha}r_{\mu/\alpha}}_{k_{\mu} \quad k_{\mu}}=\frac{m_{\alpha}}{m_{\mu}}\sum_{r_{\mu/\alpha}}\sum_{r_{\mu/\beta}}\sum_{i_{\beta}}V^{(k)}E^{r_{\mu/\beta}r_{\mu/\alpha}}_{i_{\beta}\quad 1_{\alpha}}V^{(k)}E^{r_{\mu/\alpha}r_{\mu/\beta}}_{1_{\alpha}\quad i_{\beta}}\\
&=V^{(k)}\sum_{r_{\mu/\alpha}}\sum_{i_{\alpha}}E^{\alpha}_{i_{\alpha}1_{\alpha}}E^{r_{\mu/\alpha}r_{\mu/\alpha}}_{1_{\alpha}\quad i_{\alpha}}=V^{(k)}\sum_{r_{\mu/\alpha}}\sum_{i_{\alpha}}E^{r_{\mu/\alpha}r_{\mu/\alpha}}_{i_{\alpha}\quad i_{\alpha}}.
\end{split}
\ee
Now, writing the composition $P_{\alpha}P_{\mu}$ in the PRIR basis we get:
\be
\label{p2}
\begin{split}
V^{(k)}P_{\alpha}P_{\mu}=V^{(k)}\sum_{i_{\alpha}}E^{\alpha}_{i_{\alpha}i_{\alpha}}\sum_{r_{\mu/\beta}}\sum_{j_{\beta}}E^{r_{\mu/\beta}r_{\mu/\beta}}_{j_{\beta}\quad j_{\beta}}=V^{(k)}\sum_{r_{\mu/\alpha}}\sum_{i_{\alpha}}E^{r_{\mu/\alpha}r_{\mu/\alpha}}_{i_{\alpha}\quad i_{\alpha}}
\end{split}
\ee
since $E^{\alpha}_{i_{\alpha}i_{\alpha}}E^{r_{\mu/\beta}r_{\mu/\beta}}_{j_{\beta}\quad j_{\beta}}=\delta^{\alpha\beta}\delta_{i_{\alpha}j_{\beta}}E^{r_{\mu/\alpha}r_{\mu/\alpha}}_{i_{\alpha}\quad i_{\alpha}}$. Now, observing that the right-hand sides of~\eqref{p1} and~\eqref{p2} coincide, we finish the proof.
\end{proof}
One can observe that, having~\eqref{cos}, we can prove the statement of Lemma~\ref{A1} by applying Corollary~\ref{corL3} directly to the projector $P_{\mu}$. Indeed, we have
\be
\tr_{(2k)}\left( V^{(k)}F_{\mu}(\alpha)\right) =\tr_{(2k)}\left(V^{(k)}P_{\alpha}P_{\mu}\right)=\tr_{(k)}\left(P_{\alpha}P_{\mu} \right)=m_{\mu/\alpha}\frac{m_{\mu}}{m_{\alpha}}P_{\alpha},
\ee
where $\tr_{(2k)}\equiv \tr_{n-2k+1,\ldots,n}$ and $\tr_{(k)}=\tr_{n-2k+1,\ldots,n-k}$.
\section{Entanglement fidelity in Deterministic version of the protocol}
\label{detkPBT}
Having the description of the deterministic version of MPBT from Section~\ref{interest} and the mathematical tools developed in Section~\ref{comm_structure}, especially the spectral decomposition of the operator $\rho$ given in Theorem~\ref{eig_dec_rho}, we can formulate the following:
\begin{theorem}
\label{Fthm}
The entanglement fidelity in the deterministic multi-port teleportation with $N$ ports and local dimension $d$ is given as
\be
\label{Feq1}
F=\frac{1}{d^{N+2k}}\sum_{\alpha \vdash N-k}\left(\sum_{\mu\in\alpha}m_{\mu/\alpha} \sqrt{m_{\mu}d_{\mu}}\right)^2,
\ee
where $m_{\mu},d_{\mu}$ denote the multiplicity and dimension of irreducible representations of $S(N)$ respectively, and $m_{\mu/\alpha}$ denotes the number of paths on the reduced Young lattice by which the diagram $\mu$ can be obtained from the diagram $\alpha$ by adding $k$ boxes.
\end{theorem}
\begin{proof}
In the first step of the proof we apply the covariance property~\eqref{rel1} and~\eqref{rel2} to equation~\eqref{ent_fid} describing the entanglement fidelity and obtain the following expression:
\be
\label{ef}
F=\frac{1}{d^{2k}}\sum_{\mathbf{i}\in\mathcal{I}}\tr\left(\Pi_{\mathbf{i}}^{A\widetilde{B}}\sigma_{\mathbf{i}}^{A\widetilde{B}} \right)=\frac{|\mathcal{S}_{n,k}|}{d^{2k}}\tr\left(\Pi_{\mathbf{i}_0}^{A\widetilde{B}}\sigma_{\mathbf{i}_0}^{A\widetilde{B}} \right)=\frac{k!\binom{N}{k}}{d^{2k}}\tr\left(\frac{1}{\sqrt{\rho}}\sigma_{\mathbf{i}_0}^{A\widetilde{B}}\frac{1}{\sqrt{\rho}}\sigma_{\mathbf{i}_0}^{A\widetilde{B}} \right),
\ee
where $\sigma_{\mathbf{i}_0}^{A\widetilde{B}}$ is defined in~\eqref{signal2}. In the second equality we used the covariance property of signals $\sigma_{\mathbf{i}}^{A\widetilde{B}}$ and invariance of $\rho$ with respect to the coset $\mathcal{S}_{n,k}$.
Using spectral decomposition of the operator $\rho$ presented in Theorem~\ref{eig_dec_rho} we expand equation~\eqref{ef} to:
\be
\label{fak}
\begin{split}
F&=\frac{k!}{d^{2k}}\binom{N}{k}\tr\left(\Pi_{\mathbf{i}_0}^{A\widetilde{B}}\sigma_{\mathbf{i}_0}^{A\widetilde{B}} \right)=\frac{k!}{d^{2N+2k}}\binom{N}{k}\sum_{\alpha,\beta \vdash N-k}\sum_{\mu\in\alpha}\sum_{\nu\in\beta}\frac{1}{\sqrt{\lambda_{\mu}(\alpha)}}\frac{1}{\sqrt{\lambda_{\nu}(\beta)}}\tr\left(F_{\mu}(\alpha)V^{(k)}F_{\nu}(\beta)V^{(k)} \right).
\end{split}
\ee
Now, applying Lemma~\ref{simple} we can get rid of the operators $F_{\mu}(\alpha)$:
\be
\label{fak2}
\begin{split}
F&=\frac{k!}{d^{2N+2k}}\binom{N}{k}\sum_{\alpha,\beta \vdash N-k}\sum_{\mu\in\alpha}\sum_{\nu\in\beta}\frac{1}{\sqrt{\lambda_{\mu}(\alpha)}}\frac{1}{\sqrt{\lambda_{\nu}(\beta)}}\tr\left(F_{\mu}(\alpha)V^{(k)}F_{\nu}(\beta)V^{(k)} \right)\\
&=\frac{k!}{d^{2N+2k}}\binom{N}{k}\sum_{\alpha,\beta \vdash N-k}\sum_{\mu\in\alpha}\sum_{\nu\in\beta}\frac{1}{\sqrt{\lambda_{\mu}(\alpha)}}\frac{1}{\sqrt{\lambda_{\nu}(\beta)}}\tr\left(P_{\mu}P_{\alpha}V^{(k)}P_{\nu}P_{\beta}V^{(k)} \right).
\end{split}
\ee
Observing $\left[ P_{\beta},V^{(k)}\right] =0$, we can apply Fact~\ref{kpartr} together with Corollary~\ref{corL3} to $V^{(k)}P_{\nu}V^{(k)}$, getting
\be
\begin{split}
F&=\frac{k!}{d^{2N+2k}}\binom{N}{k}\sum_{\alpha,\beta \vdash N-k}\sum_{\mu\in\alpha}\sum_{\nu\in\beta}\frac{1}{\sqrt{\lambda_{\mu}(\alpha)}}\frac{1}{\sqrt{\lambda_{\nu}(\beta)}}\sum_{\beta'\in\nu}m_{\nu/\beta'}\frac{m_{\nu}}{m_{\beta'}}\tr\left(P_{\mu}P_{\alpha}P_{\beta}P_{\beta'} V^{(k)}\right)\\
&=\frac{k!}{d^{2N+2k}}\binom{N}{k}\sum_{\alpha \vdash N-k}\sum_{\mu,\nu\in\alpha}\frac{1}{\sqrt{\lambda_{\mu}(\alpha)}}\frac{1}{\sqrt{\lambda_{\nu}(\alpha)}}m_{\nu/\alpha}\frac{m_{\nu}}{m_{\alpha}}\tr\left(P_{\mu}P_{\alpha}\tr_{(k)}V^{(k)}\right)\\
&=\frac{k!}{d^{2N+2k}}\binom{N}{k}\sum_{\alpha \vdash N-k}\sum_{\mu,\nu\in\alpha}\frac{1}{\sqrt{\lambda_{\mu}(\alpha)}}\frac{1}{\sqrt{\lambda_{\nu}(\alpha)}}m_{\nu/\alpha}\frac{m_{\nu}}{m_{\alpha}}\tr\left(P_{\mu}P_{\alpha}\right).
\end{split}
\ee
Again applying Corollary~\ref{corL3}, this time to projector $P_{\mu}$, together with $\tr P_{\alpha}=m_{\alpha}d_{\alpha}$, we have
\be
F=\frac{k!}{d^{2N+2k}}\binom{N}{k}\sum_{\alpha \vdash N-k}\sum_{\mu,\nu\in\alpha}\frac{1}{\sqrt{\lambda_{\mu}(\alpha)}}\frac{1}{\sqrt{\lambda_{\nu}(\alpha)}}m_{\nu/\alpha}m_{\mu/\alpha}m_{\mu}m_{\nu}\frac{d_{\alpha}}{m_{\alpha}}.
\ee
Using explicit expression for eigenvalues $\lambda_{\mu}(\alpha),\lambda_{\nu}(\alpha)$ given in~\eqref{rhodec0a} we have
\be
\begin{split}
F&=\frac{k!}{d^{2N+2k}}\binom{N}{k}\frac{d^{N}}{k!\binom{N}{k}}\sum_{\alpha \vdash N-k}\sum_{\mu,\nu\in\alpha}m_{\mu/\alpha}m_{\nu/\alpha}\sqrt{\frac{m_{\alpha}d_{\mu}}{m_{\mu}d_{\alpha}}}\sqrt{\frac{m_{\alpha}d_{\nu}}{m_{\nu}d_{\alpha}}}\frac{m_{\mu}m_{\nu}}{m_{\alpha}}d_{\alpha}\\
&=\frac{1}{d^{N+2k}}\sum_{\alpha \vdash N-k}\sum_{\mu,\nu\in\alpha}m_{\mu/\alpha}\sqrt{m_{\mu}d_{\mu}}m_{\nu/\alpha}\sqrt{m_{\nu}d_{\nu}}\\
&=\frac{1}{d^{N+2k}}\sum_{\alpha \vdash N-k}\left(\sum_{\mu\in\alpha}m_{\mu/\alpha} \sqrt{m_{\mu}d_{\mu}}\right)^2.
\end{split}
\ee
This finishes the proof.
\end{proof}
An alternative proof of Theorem~\ref{Fthm} is presented in Appendix~\ref{AppA}. One can see that by setting $k=1$ in~\eqref{Feq1} we reproduce the known expression for the entanglement fidelity in ordinary port-based teleportation~\cite{Studzinski2017}. Indeed, in this case $m_{\mu/\alpha}=1$ for any $\mu\in\alpha$, since we can move only by one layer on the reduced Young lattice.
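Explicitly, substituting $k=1$ (so that every $m_{\mu/\alpha}=1$) into~\eqref{Feq1} gives
\be
F=\frac{1}{d^{N+2}}\sum_{\alpha \vdash N-1}\left(\sum_{\mu\in\alpha}\sqrt{m_{\mu}d_{\mu}}\right)^2,
\ee
which is the entanglement fidelity of the standard PBT protocol with a maximally entangled resource state~\cite{Studzinski2017}.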
The expression from~\eqref{Feq1} is plotted in Figure~\ref{fig:test2} for different numbers of ports $N$, as well as different local dimensions $d$ and numbers of teleported states $k$. We see that our deterministic scheme performs significantly better than the standard PBT protocol, even in its optimal variant, for the respective port dimension.
\section{Probability of success in Probabilistic version of the protocol}
\label{probkPBT}
Having the description of the probabilistic version of the MPBT scheme from Section~\ref{interest}, we are in a position to solve the SDP programs and evaluate the optimal probability of success $p$ when the parties share maximally entangled states. Namely, we have the following:
\begin{theorem}
\label{thm_p}
The average probability of success in the probabilistic multi-port teleportation with $N$ ports and local dimension $d$ is given as
\be
\label{exact}
p=\frac{k!\binom{N}{k}}{d^{2N}}\sum_{\alpha \vdash N-k}\mathop{\operatorname{min}}\limits_{\mu\in\alpha}\frac{m_{\alpha}d_{\alpha}}{\lambda_{\mu}(\alpha)},
\ee
with optimal measurements of the form
\be
\label{ex_measurements}
\forall \ \mathbf{i}\in\mathcal{I}\qquad \Pi_{\mathbf{i}}^{AC}=\frac{k!\binom{N}{k}}{d^{2N}}P^+_{A_{\mathbf{i}}C}\ot \sum_{\alpha \vdash N-k}P_{\alpha}\mathop{\operatorname{min}}\limits_{\mu\in\alpha}\frac{1}{\lambda_{\mu}(\alpha)}.
\ee
The numbers $\lambda_{\mu}(\alpha)$ are the eigenvalues of $\rho$, given in~\eqref{rhodec0a}, and $m_{\alpha}, d_{\alpha}$ denote the multiplicity and dimension of the irrep labelled by $\alpha$.
\end{theorem}
\begin{proof}
The solution of the optimisation tasks, and hence the proof of the above theorem, is based solely on the methods and tools delivered in Section~\ref{preliminary} and Section~\ref{comm_structure}. We start by solving the primal problem. Due to the symmetry in our scheme we assume that $\forall \mathbf{i}\in\mathcal{I} \quad \Theta_{\overline{A}_{\mathbf{i}}}=\sum_{\alpha \vdash N-k}x_{\alpha}P_{\alpha}$ with $x_{\alpha}\geq 0$, which satisfies constraint (1) from~\eqref{con1}. The operators $P_{\alpha}$ are Young projectors acting on the subsystems defined by the symbol $\overline{A}_{\mathbf{i}}$. To satisfy constraint (2) from~\eqref{con1} we write for every irreducible block $\alpha$:
\be
\label{ineq}
\sum_{\mathbf{i}\in \mathcal{I}}P^+_{A_{\mathbf{i}}C}\ot \Theta_{\overline{A}_{\mathbf{i}}}(\alpha)=\frac{x_{\alpha}}{d^k}\sum_{\tau\in\mathcal{S}_{n,k}}V_{\tau^{-1}}V^{(k)}\ot P_{\alpha}V_{\tau}=d^{N-k}x_{\alpha}\rho(\alpha)\leq P_{\alpha}.
\ee
In the above expression we use the fact that for the operator $\rho$ from~\eqref{PBT1} and the projection $P_{\alpha}$ we have $\rho(\alpha)=P_{\alpha}\rho P_{\alpha}$. Now, to satisfy inequality~\eqref{ineq} it is enough to require:
\be
\label{border}
\forall \alpha \quad x_{\alpha}\leq d^{k-N}\mathop{\operatorname{min}}\limits_{\mu\in\alpha}\frac{1}{\lambda_{\mu}(\alpha)},
\ee
where the numbers $\lambda_{\mu}(\alpha)$ are the eigenvalues of $\rho$ given in~\eqref{rhodec0a}.
Using the assumed covariance of the measurements, $\forall \ \tau\in \mathcal{S}_{n,k} \quad V_{\tau}\Pi_{\mathbf{i}}V_{\tau^{-1}}=\Pi_{\tau(\mathbf{i})}$, it is enough to work with the index $\mathbf{i}_0$ only. With this and the border solution for $x_{\alpha}$ from~\eqref{border}, we calculate the quantity $p^*$ from~\eqref{primal}:
\be
\label{dualpp}
p^*=\frac{1}{d^{N+k}}\sum_{\mathbf{i}\in\mathcal{I}}\tr\left(\sum_{\alpha \vdash N-k}x_{\alpha}P_{\alpha} \right)=\frac{k!\binom{N}{k}}{d^{N+k}}\sum_{\alpha}x_{\alpha}\tr P_{\alpha}=\frac{k!\binom{N}{k}}{d^{2N}}\sum_{\alpha \vdash N-k}\mathop{\operatorname{min}}\limits_{\mu\in\alpha}\frac{m_{\alpha}d_{\alpha}}{\lambda_{\mu}(\alpha)},
\ee
since $\tr P_{\alpha}=m_{\alpha}d_{\alpha}$. To show the optimality of $p^*$ we need to solve the dual problem from~\eqref{dual0} and~\eqref{dual}. We assume the following form of the operator $\Omega$ in~\eqref{dual0}:
\be
\label{omega}
\Omega=\sum_{\alpha\vdash N-k}x_{\mu^*}(\alpha)F_{\mu^*}(\alpha),\qquad x_{\mu^*}(\alpha)=d^k\frac{1}{m_{\mu^*/\alpha}}\frac{m_{\alpha}}{m_{\mu^*}}.
\ee
The symbol $\mu^*$ means that we choose the $\mu\in\alpha$ which minimizes the quantity $p_*$ from~\eqref{dual0}. The operators $F_{\mu^*}(\alpha)$ are eigenprojectors of $\rho$ given through Definition~\ref{efy} and Theorem~\ref{eig_dec_rho}, the symbol $m_{\mu^*/\alpha}$ denotes the number of paths on the reduced Young lattice along which the diagram $\mu^*$ can be obtained from the diagram $\alpha$, and finally $m_{\mu^*},m_{\alpha}$ denote the respective multiplicities of the irreps. Since we are looking for any feasible solution bounding the exact average probability of success $p$ from below, we are free to make such an assumption. The first constraint from~\eqref{dual} is automatically satisfied by the assumed form of $\Omega$ in~\eqref{omega}. To check the second condition we need to compute
\be
\tr_{(2k)}\left(P^+_{A_{\mathbf{i}}C}\Omega \right)= \tr_{(2k)}\left(P^+_{A_{\mathbf{i}_0}C}\Omega \right)=\frac{1}{d^k}\tr_{(2k)}\left(V^{(k)}\Omega \right),
\ee
where we used the covariance property of $P^+_{A_{\mathbf{i}}C}$ and the covariance of $\Omega$ with respect to the elements of the coset $\mathcal{S}_{n,k}$. Writing $\Omega$ explicitly and using Lemma~\ref{A1} we have
\be
\frac{1}{d^k}\tr_{(2k)}\left(V^{(k)}\Omega \right)=\sum_{\alpha}\frac{1}{m_{\mu^*/\alpha}}\frac{m_{\alpha}}{m_{\mu^*}}\tr_{(2k)}\left(V^{(k)}F_{\mu^*}(\alpha) \right)=\sum_{\alpha}P_{\alpha}=\mathbf{1},
\ee
so the second constraint from~\eqref{dual} is satisfied with equality. Now we are in a position to compute $p_*$ from~\eqref{dual}:
\be
\label{dualp}
p_*=\frac{1}{d^{N+k}}\tr\Omega=\frac{1}{d^N}\sum_{\alpha}\frac{1}{m_{\mu^*/\alpha}}\frac{m_{\alpha}}{m_{\mu^*}}\tr\left(F_{\mu^*}(\alpha) \right)=\frac{1}{d^N}\sum_{\alpha}\frac{m_{\alpha}^2d_{\mu^*}}{m_{\mu^*}}=\frac{k!\binom{N}{k}}{d^{2N}}\sum_{\alpha \vdash N-k}\mathop{\operatorname{min}}\limits_{\mu\in\alpha}\frac{m_{\alpha}d_{\alpha}}{\lambda_{\mu}(\alpha)}.
\ee
In the third equality we use Lemma~\ref{A3}; in the fourth we use the definition of the symbol $\mu^*$ and the form of $\lambda_{\mu}(\alpha)$ from~\eqref{rhodec0a}. From expressions~\eqref{dualpp} and~\eqref{dualp} we see that $p^*=p_*$. We conclude that the exact value of the average success probability is indeed given by expression~\eqref{exact}, with the corresponding measurements~\eqref{ex_measurements} presented in Theorem~\ref{thm_p}.
\end{proof}
\section{Discussion}
\label{diss}
In this paper, we deliver an analysis of the multi-port-based teleportation schemes, which are a non-trivial generalisation of the famous port-based teleportation protocol. These schemes allow for teleporting several unknown quantum states (or a composite quantum state) in one go, so that the states end up in the respective number of ports on Bob's side. This protocol offers much better performance than the original PBT, at the price of requiring corrections on the receiver's side, which are permutations of the ports where the teleported states arrive.
We discuss the deterministic protocol, where the transmission always happens but the teleported state is distorted, and the probabilistic case, where we have to accept a probability of failure but, whenever the protocol succeeds, the teleportation is perfect. In both cases, we calculate the parameters describing the performance of the discussed schemes, such as the entanglement fidelity (see Theorem~\ref{Fthm}) and the probability of success (see Theorem~\ref{thm_p}). Apart from global parameters such as the number of ports $N$ and the local dimension $d$, these expressions depend on purely group-theoretical quantities, for example the dimensions and multiplicities of irreducible representations of the permutation group. The whole analysis is possible due to the rigorous description of the algebra of partially transposed permutation operators provided in this paper. In particular, we deliver the matrix operator basis in the irreducible spaces on which the respective operators describing the teleportation protocol are supported (see Theorem~\ref{kPBTmat} and Theorem~\ref{eig_dec_rho}). The developed formalism allows one to reduce the calculations from the natural representation space to every irreducible block separately, simplifying them significantly. Moreover, the symmetries occurring in the protocol allow us to solve the semidefinite programming problems analytically, which is not granted in general for SDP problems; see Section~\ref{probkPBT}.
The methods presented in this paper may be applied to solve several related problems, but this requires further development of the formalism. The first is the construction of an optimized version of the multi-port schemes. In this case, we have to find the operation $O_A$ which Alice applies to her part of the resource state before she runs the protocol; clearly, the resource state is then no longer a product of maximally entangled pairs. The second problem is to understand the scaling of the entanglement fidelity and the probability of success with the number of ports $N$, the number of teleported particles $k$, and the local dimension $d$. To answer this question one needs to adapt the analysis presented in~\cite{majenz} and examine the asymptotic behavior of the quantity $m_{\mu/\alpha}$ appearing in our analysis (see for example Theorem~\ref{Fthm}). The third problem is to understand multi-port recycling schemes as a generalization of the ideas in~\cite{Strelchuk}. We would like to know how much the resource state degrades after the teleportation procedure and whether there is, in principle, the possibility of exploiting the resource state again.
\section*{Acknowledgements}
MS, MM are supported through grant Sonatina 2, UMO-2018/28/C/ST2/00004 from the Polish National Science Centre.
Moreover, MH and MM thank the Foundation for Polish
Science through IRAP project co-financed by the EU within
the Smart Growth Operational Programme (contract no.
2018/MAB/5).
M.H. also acknowledges support from the National Science Centre, Poland, through grant OPUS 9, 2015/17/B/ST2/01945.
MM and PK would like to thank ICTQT Centre (University of Gda{\'n}sk) for hospitality where part of this work has been done.
\section{\label{Sec:LONoise} The effect of local oscillator noise on frequency standard stability}
\begin{figure} [tp]
\includegraphics[width=0.48\textwidth]{Fig1R_v1.pdf}
\caption{\label{fig:traces}Effect of LO noise on the performance of a locked oscillator. Simulated evolution for a noisy LO, unlocked (black) and locked with traditional feedback (red). The dotted horizontal bars indicate the measurement outcomes (\emph{samples}) over each cycle, $\y{k}$, which are applied as correction at the end of the cycle, indicated by the bent arrow in the first cycle. Measurement period of duration $T_{R}$ (white background) is followed by dead time with duration $T_{D}$ (grey background). Total cycle time $T_{c}=T_D + T_{R}$, and here we represent a 50$\%$ duty factor, $d$. Undetected evolution of the LO during the dead time leads corrections to incompletely cancel frequency offsets at the time of correction. The arrows on the far right schematically indicate how locking reduces the variance of $y(t)$ though it does not eliminate it.}
\end{figure}
Our primary objective is to suppress the impact of LO noise on the ultimate performance of the \emph{locked} LO, which is stabilized to an (in general atomic) reference. Accordingly, throughout this analysis we do not consider systematic shifts or uncertainties in the reference and explicitly assume that the reference is perfect.
We represent the fractional frequency offset of the LO relative to this perfect reference as $y(t) \equiv (\nu(t) - \nu_0)/\nu_0$, where $\nu_0$ is the reference frequency and $\nu(t)$ is the LO frequency. This limit provides a reasonable approximation to the performance of many deployable frequency standards, where LO stability is far worse than that of the associated atomic reference.
\subsection{Time-domain description of Ramsey measurements and feedback stabilization }
In such a setting, Ramsey spectroscopy provides a means to determine the fractional frequency offset of the LO relative to the reference over a period $T_{R}{}$. Point-like realisations of the stochastic process $y(t)$ cannot be obtained experimentally; instead, the LO frequency error produces integrated \emph{samples}, denoted $\bar{y}_k$ and indexed in time by $k$:
\begin{align}\label{eq:yk}
\y{k} &\equiv \frac{1}{\ramsey{k}} \int_{t^s_k}^{t^e_k} y(t) g(t - t^s_k) dt
\end{align}
\noindent where $\ramsey{k} \equiv t^e_k - t^s_k$, $[t^s_k,t^e_k]$ is the time interval over which the $k$th sample is taken, and $g(t)$ is a \emph{sensitivity function} capturing the extent to which LO fluctuations at some instant $t$ contribute to the measured outcome for that sample~\cite{rutman1978}. The range of $g(t)$ is $[0,1]$ and its domain is $t\in[0,\ramsey{k}]$. The ideal case is the rectangular window case, where
\begin{align}
g(t) = \begin{cases}
1 & \text{ for } t \in[0,\ramsey{k}] \\
0 & \text{ otherwise }
\end{cases}
\end{align}
\noindent in which case $\y{k}$ reduces to the time-average of $y(t)$ over the interval $[t^s_k,t^e_k]$.
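As a numerical sketch (a hypothetical discretisation, with function and variable names of our choosing), the sample of Eq.~\eqref{eq:yk} for a rectangular window reduces to the time average of a discretised noise trace:

```python
import numpy as np

def sample_yk(y, t, t_s, t_e):
    """Eq. (yk) with a rectangular sensitivity function g = 1 on [t_s, t_e]:
    on a uniform grid the sample is the time average of y(t) over the window."""
    mask = (t >= t_s) & (t <= t_e)
    return y[mask].mean()

# For a linear frequency drift y(t) = t, the sample over [0, 1] is 0.5.
t = np.linspace(0.0, 2.0, 20001)
ybar = sample_yk(t, t, 0.0, 1.0)
```
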
In traditional feedback stabilization, the samples $\y{k}$ are used to determine corrections to be applied to the LO in order to reduce frequency differences from the reference (Fig.~\ref{fig:traces}). Consider the trajectory of the same frequency noise realisation $y(t)$ in the cases of no correction, $y^{LO}(t)$, and correction, $y^{LLO}(t)$. The relation between these two cases is
\begin{align}\label{eq:diff}
y^{LLO}(t) &= y^{LO}(t) + \sum_{k=1}^{n} C_k
\end{align}
\noindent where $C_k$ refers to the value of the $k$th frequency correction applied to the LO, $n$ of which have occurred before time $t$.
Under traditional feedback stabilization, each correction is directly proportional to the immediately preceding measurement outcome: $C_k = w_k \yllo{k}$, where $w_k$ is the correction gain. Since $\yllo{k}$ is calculated by convolving $y^{LLO}(t)$ with a sensitivity function determined by the measurement parameters, (\ref{eq:diff}) is recursive in general. All but one of the recursive terms can be cancelled by setting the correction gain equal to the inverse of the average sensitivity $\gb{k} \equiv \int_0^{\ramsey{k}} g(t)/\ramsey{k} dt$ of the preceding measurement, i.e. $w_k = -\gb{k}^{-1}$, where the minus sign indicates negative feedback. With this constraint we can write
\begin{align}\label{eq:nonrec}
\yllo{k} &= \ylo{k} - \frac{\gb{k}}{\gb{k-1}}\ylo{k-1}
\end{align}
\noindent and for a Ramsey interrogation and measurement with negligibly short pulses, $\gb{k} = 1$. Applying feedback corrections sequentially after measurements is able to effectively reduce $y(t)$ over many cycles, improving long-term stability.
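This cancellation can be checked in a toy simulation (hypothetical noise parameters, $\gb{k}=1$ throughout): applying $C_k = -\y{k}$ at the end of each cycle makes the locked samples equal to the first differences of the free-running samples, exactly as in Eq.~\eqref{eq:nonrec}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Free-running LO samples: a random walk, i.e. strongly correlated noise.
y_lo = np.cumsum(0.01 * rng.standard_normal(500))

# Traditional feedback with unit gain: C_k = -ybar_k, applied cumulatively.
y_llo = np.empty_like(y_lo)
accumulated = 0.0
for k in range(len(y_lo)):
    y_llo[k] = y_lo[k] + accumulated  # locked sample, Eq. (diff)
    accumulated -= y_llo[k]           # apply correction C_k = -ybar_k

# The locked samples reduce to first differences of the free-running ones,
# pinning the locked variance at the single-step level.
```
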
In the limit of a static offset, a single (perfect) correction will set the frequency offset error of the LLO to zero; however, such perfect correction is in general not achieved. The primary reason for this in the limit of perfect measurements and corrections is \emph{dynamic evolution} of the LO on timescales rapid compared to the measurements which cannot be fully compensated by the feedback loop.
In Fig.~\ref{fig:traces} we demonstrate how evolution of the LO frequency during $T_{R}{}$ leads the feedback protocol to incompletely correct the offset $y(t)$. From the formalism presented above we see that incomplete feedback arises because the corrections are based only on the \emph{average} value of the frequency offset as measured over the $k$th period, $\y{k}$ (horizontal solid lines in Fig.~\ref{fig:traces}), rather than the instantaneous value of the LO frequency offset at the time of correction (here the end of a cycle) \emph{which cannot be known}. The difference between these two values leads to incomplete compensation of time-varying frequency offsets, and hence residual fractional instability in the quantity $y^{(LLO)}(t)$.
The impact of these effects on the ultimate stability of the LLO is exacerbated in circumstances where there is nonzero \emph{dead time}, $T_{D}$, during which the LO may evolve, but this evolution is not captured by a measurement. Dead time arises due to e.g. the need to reinitialize the reference between measurements, or perform classical processing of the measurement outcome before a correction can be applied.
The net impact of this uncompensated evolution is a reduction in the long-term stability of the locked local oscillator. We now move on to describe the relevant quantitative metrics for LLO \emph{variance} in both free-running and feedback-locked settings.
\subsection{Measures of frequency standard stability for unlocked and locked LOs}
The performance of the frequency standard is statistically characterized by various time-domain measures capturing the evolution of LO frequency as a function of time.
The variance of $\y{k}$, denoted $\mvar{k}{}$ and often called \emph{true variance} \cite{rutman1978} is,
\begin{align}
\mvar{k}{} &=\mrm{Var}[\y{k}]=\left( \langle \y{k}^2\rangle - \langle \y{k}\rangle^2\right)\to\ev{\y{k}^2} \\
&= \evbig{\bigg(\frac{1}{\ramsey{k}} \int_{t^s_k}^{t^e_k} y(t) g(t - t^s_k) dt \bigg)^2}
\end{align}
\noindent where in the first line we assume that the true variance is simply equal to the expected value of $\y{k}^2$, since $y(t)$ is assumed to be a zero-mean process. The true variance captures the spread of measurement outcomes due to different noise realizations in a single timestep. However, in a measurement context one does not have immediate access to an infinite ensemble of noise realizations, but rather a single series of measurement outcomes conducted sequentially over a single noise realization. As a result we rely on a measure more conducive to this setting, the \emph{sample variance}
\begin{align}
\svar{N}{} = \frac{1}{N-1} \sum_{k=1}^{N} (\y{k} - \frac{1}{N}\sum_{l=1}^{N} \y{l})^2
\end{align}
\noindent for $N$ sequential finite-duration measurements $\{\y{k}\}$~\cite{rutman1978}.
In this work we will rely on such measures of frequency stability, rather than the more commonly employed Allan variance, in line with recent experiments~\cite{Bergquist_Hg}. The Allan variance is calculated by finding the variance of the difference between consecutive pairs of measurement outcomes:
\begin{align}\label{eq:allan}
\avar{y}{}&= \frac{1}{2}\langle (\y{k+1} - \y{k})^2 \rangle
\end{align}
\noindent where $\y{k}$ is the $k$th measurement outcome and $\langle\cdots\rangle$ may indicate a time average or an ensemble average, depending on whether $y(t)$ is assumed to be ergodic. Our decision to avoid the Allan variance is deliberate, as its form -- effectively a moving average -- specifically \emph{masks} the effect of LO noise components with long correlation times. In fact the Allan variance is employed by the community in part because it does not diverge at long integration times $\tau$ due to LO drifts, as would the sample or true variance \cite{nist1990,barnes1971,rutman1978, Greenhall1999}. In the limit where the stability of a frequency reference is dominated by LO noise (and the reference can be treated as perfect) this approach gives physically meaningful results.
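The masking effect is easy to exhibit numerically (a hypothetical random-walk noise model, seeded for reproducibility): for frequency samples that random-walk, the Allan variance of Eq.~\eqref{eq:allan} stays at the single-step level, while the sample variance grows with the record length.

```python
import numpy as np

def sample_variance(y):
    """Unbiased sample variance of a series of measurement outcomes."""
    return np.var(y, ddof=1)

def allan_variance(y):
    """Allan variance, 0.5 <(y_{k+1} - y_k)^2>, from adjacent samples."""
    return 0.5 * np.mean(np.diff(y) ** 2)

rng = np.random.default_rng(1)
y_walk = np.cumsum(rng.standard_normal(10_000))  # random-walk frequency samples

svar = sample_variance(y_walk)  # grows with the record length
avar = allan_variance(y_walk)   # stays near 0.5 for unit-variance steps
```
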
The standard measures of oscillator performance either consider a free-running LO or provide only a means to statistically characterize measurement outcomes under black-box conditions. We may derive explicit analytic forms for the different measures of variance in the presence of feedback locking, in order to provide insight into opportunities to improve net LLO performance through modification of the stabilization protocol.
We write time-domain expressions for variance using the relevant definitions provided above and the link between corrections in feedback and the history of the LLO's evolution. For the true variance we substitute Eq.~\ref{eq:nonrec} to find
\begin{widetext}
\begin{align}
\mvar{k}{LLO} &= \mrm{Var}[\yllo{k}] \\
&= \mvar{k}{LO} + \bigg(\frac{\gb{k}}{\gb{k-1}}\bigg)^2 \mvar{k-1}{LO} - \frac{2\gb{k}}{\gb{k-1}}\cov{\ylo{k-1}}{\ylo{k}}
\end{align}
\noindent and calculate the expected value of the LLO sample variance in a similar manner using Eq.~\ref{eq:diff}
\begin{align}
\ev{\svar{N}{LLO}} &= \frac{1}{N-1} \sum_{k=1}^{N}\bigg\{ \bigg(\mvar{k}{LO} + \gb{k}^2 \sum_{r=1}^{k-1}\sum_{s=1}^{k-1} \cov{C_r}{C_s} - 2\gb{k}\sum_{u=1}^{k-1} \cov{\ylo{k}}{C_u}\bigg) \nonumber\\
&+ \frac{1}{N^2} \sum_{p=1}^{N}\sum_{q=1}^{N} \bigg( \cov{\ylo{p}}{\ylo{q}} + \gb{p}\gb{q}\sum_{w=1}^{p-1}\sum_{x=1}^{q-1} \cov{C_w}{C_x} \bigg)
- \frac{2}{N} \sum_{l=1}^{N} \bigg(\cov{\ylo{k}}{\ylo{l}} + \gb{k}\gb{l}\sum_{y=1}^{k-1}\sum_{z=1}^{l-1} \cov{C_y}{C_z} \bigg)\bigg\}
\end{align}
\end{widetext}
We see that the characteristics of the locked LO can be expressed in terms of the unlocked LO and the \emph{covariance} between two quantities, $\cov{x}{y}$, capturing correlations between them. This may include the covariance of different measurement outcomes on the LO, or of different corrections applied to the LO. It is this observation -- that we may express relevant statistical quantities surrounding the performance of locked local oscillators in terms of measurement covariances -- that will provide a path towards the development of new stabilization routines exploiting temporal correlations in the LO noise (and hence measurement outcomes).
\section{\label{Sec:Fourier}Performance measures for frequency standards in the fourier domain}
We require an efficient theoretical framework in which to capture these effects, and hence transition to the frequency domain, making use of the power spectral density of the LO, $\psd{y}$, in order to characterize average performance over a hypothetical statistical ensemble. In this description residual LLO instability persists because the feedback is insensitive to LO noise at high frequencies relative to the inverse measurement time. Additional instability due to the Dick effect comes from aliasing of noise at harmonics of the loop bandwidth.
We may analytically calculate the effects of measurement, dead time, and the feedback protocol itself on frequency standard performance in the frequency domain as follows. Defining a normalised, time-reversed sensitivity function $\bar{g}(t^m_k - t) = g(t - t^s_k)/\ramsey{k}$, where $g(t)$ is assumed to be time-reversal symmetric about $t^m_k$, the midpoint of $[t^s_k,t^e_k]$, we can express, for instance, the true variance as a convolution $\mvar{k}{} = \evbig{\bigg( \int_{-\infty}^{\infty} y(t) \bar{g}(t^m_k - t) dt \bigg)^2}$. Expanding this expression gives
\begin{align}
\mvar{k}{}&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \ev{y(t) y(t')} \bar{g}(t^m_k - t)\bar{g}(t^m_k - t') dt' dt \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \eacts{y} \bar{g}(t^m_k - t)\bar{g}(t^m_k - t') dt' dt
\end{align}
\noindent where $\eacts{y}$ is the two-sided autocorrelation function and $\Delta t \equiv t' - t$. Using the Wiener-Khinchin theorem we write $\eacts{y} = \ift{\psdts{y}}$, relating the autocorrelation function to the Fourier transform of the power spectral density of the LO noise. Defining the Fourier transform of $\bar{g}(t^m_k - t)$:
\begin{align}\label{eq:tf}
G_k(\omega) &\equiv \int_{-\infty}^{\infty}\bar{g}(t^m_k - t) e^{i\omega t} dt
\end{align}
\noindent We may then express the true variance
\begin{align}
\mvar{k}{}&= \frac{1}{2\pi} \int_{0}^{\infty} \psd{y} \abs{G_k(\omega)}^2d\omega \label{eq:overlap}
\end{align}
\noindent where the substitution of the one-sided PSD $\psd{y}$ is possible because $\abs{G_k(\omega)}^2$ is even. This result is similar to the convolution theorem, which states that $\ft{f \star g} = \ft{f} \cdot \ft{g}$, where $\star$ denotes a convolution and $f$ and $g$ are Fourier-invertible functions.
Here $\tf{k}$ is called the \emph{transfer function} for the $k$th sample, describing the spectral properties of the measurement protocol itself. For measurements performed using Ramsey interrogation with $\pi/2$ pulses of negligible duration and zero dead time, the transfer function has a sinc-squared analytic form $\tf{k} = (\sin{(\omega \ramsey{k}/2)}/(\omega \ramsey{k} /2))^2$. This framework has recently seen broad adoption in the quantum information community where time-varying dephasing noise is a major concern for the stability of quantum bits~\cite{KurizkiPRL2001, UhrigPRL2007, CywinskiPRB2008, BiercukNature2009, BiercukJPB2011, GreenPRL2012, GreenNJP2013, SoareNatPhys2014}.
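As a sanity check on Eq.~\eqref{eq:overlap} (a numerical sketch with parameters of our choosing), for white noise with one-sided PSD $\psd{y} = h_0$, the overlap with the sinc-squared Ramsey transfer function integrates to $h_0/2T_R$:

```python
import numpy as np

def ramsey_tf(omega, T):
    """|G(omega)|^2 for a flat-top Ramsey window of duration T."""
    # np.sinc(u) = sin(pi u)/(pi u), so this is (sin(wT/2)/(wT/2))^2
    return np.sinc(omega * T / (2.0 * np.pi)) ** 2

T, h0 = 2.0, 3.0
omega = np.linspace(1e-8, 2000.0, 2_000_000)  # cutoff far past the main lobe
integrand = h0 * ramsey_tf(omega, T)
dw = omega[1] - omega[0]
# trapezoidal rule for (1/2pi) * integral of S_y |G|^2 d omega
true_var = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dw / (2 * np.pi)

# analytic result for white noise: h0 / (2 T)
```
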
Recalling that statistical measures of LLO variance rely not only on expressions for the true variance over noise ensembles, but also of covariances between measurements or corrections, we must equivalently express the covariance in terms of transfer functions. Using the identity $\sigma^2(A\pm B) = \sigma^2(A) + \sigma^2(B) \pm 2\sigma(A,B)$, we define a sum and a difference sensitivity function: $g^+_{k,l}(t)$ and $g^-_{k,l}(t)$, with respect to two measurements indexed $k$ and $l$. These expressions are general functions of time with two regions of high sensitivity corresponding to the individual measurement periods.
\begin{align}
g^{\pm}_{k,l}(t) &\equiv \begin{cases} g(t-t^s_k) \text{, for } t \in [t_{k}^s,t_k^e] \\
\pm g(t-t^s_l) \text{, for } t \in [t_l^s,t_l^e] \\
0 \text{, otherwise}
\end{cases}
\end{align}
\noindent These time-domain sum and difference sensitivity functions have their corresponding frequency-domain transfer functions, defined as their Fourier transforms normalised by $\ramsey{k,l}$:
\begin{align}\label{Eq:AppPairTF}
G_{k,l}^{\pm}(\omega) \equiv \int_{-\infty}^{\infty} \bigg(\frac{g(t^m_k - t)}{\ramsey{k}} \pm \frac{g(t^m_l - t)}{\ramsey{l}}\bigg) e^{i\omega t} dt
\end{align}
\noindent Substituting this and the form of the true variance (\ref{eq:overlap}) into the variance identity above and rearranging terms gives the covariance of the two measurement outcomes
\begin{align}
\cov{\y{k}}{\y{l}} &= \frac{1}{2\pi}\int_{0}^{\infty} \frac{\psd{y}}{4}\bigg(\abs{G_{k,l}^{+}(\omega)}^2 - \abs{G_{k,l}^{-}(\omega)}^2\bigg) d\omega\\
&\equiv \frac{1}{2\pi}\int_{0}^{\infty} \psd{y} \pairtf{k}{l} d\omega
\end{align}
\noindent where $\pairtf{k}{l}$ is defined to be the \emph{pair covariance transfer function}. For flat-top Ramsey measurements over the intervals $[t^{s}_{k,l},t^{e}_{k,l}]$ this term takes the form
\begin{align}\label{eq:pairtf}
\pairtf{k}{l} &= (\omega^2 \ramsey{k} \ramsey{l})^{-1}\Big[\cos{(\omega(t^s_l - t^s_k))} + \cos{(\omega(t^e_l - t^e_k))} \nonumber\\
&~~~ - \cos{(\omega(t^e_l - t^s_k))} - \cos{(\omega(t^s_l - t^e_k))} \Big].
\end{align}
\noindent This is a generalization of the transfer function previously derived for the special case of periodic, equal-duration Ramsey interrogations~\cite{rutman1978,barnes1971}, and allows effective estimation of $y(t)$ for any $t$ and for any set of measured samples $\mb{\y{\mathnormal{k}}}$.
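A quick numeric check of Eq.~\eqref{eq:pairtf} (hypothetical parameters): for coincident intervals, $k=l$, the pair covariance transfer function must collapse to the single-measurement $\mathrm{sinc}^2$ form, since $\cov{\y{k}}{\y{k}} = \mvar{k}{}$.

```python
import numpy as np

def pair_cov_tf(omega, ts_k, te_k, ts_l, te_l):
    """Pair covariance transfer function, Eq. (pairtf), for two flat-top
    Ramsey windows [ts_k, te_k] and [ts_l, te_l]."""
    Tk, Tl = te_k - ts_k, te_l - ts_l
    return (np.cos(omega * (ts_l - ts_k)) + np.cos(omega * (te_l - te_k))
            - np.cos(omega * (te_l - ts_k)) - np.cos(omega * (ts_l - te_k))
            ) / (omega ** 2 * Tk * Tl)

omega = np.linspace(0.1, 50.0, 1000)
T = 1.3
coincident = pair_cov_tf(omega, 0.0, T, 0.0, T)
sinc_sq = (np.sin(omega * T / 2.0) / (omega * T / 2.0)) ** 2
```

The identity $2 - 2\cos\omega T_R = 4\sin^2(\omega T_R/2)$ makes the reduction exact.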
We thus see that this approach allows expression of time-domain LO variances as overlap integrals between $\psd{y}$ and the transfer functions capturing the effects of the measurement and feedback protocol, including correlations between measurements or corrections in time. Through this formalism we may incorporate arbitrary measurement protocols (e.g. arbitrary and dynamic Ramsey periods and dead times): the underlying physics of e.g. changing linewidth of the measurement is explicitly captured through the form and implicit time-dependence of the transfer function used to characterize the measurement protocol
\section{\label{Sec:HFF}Exploiting noise correlations to improve feedback stabilization}
Recasting variance metrics for the stability of LOs in terms of transfer functions is particularly powerful because it provides a path to craft new measurement feedback protocols designed to reduce residual variance measures for the LLO by modifying the protocol's spectral response. Our key insight is that the non-Markovianity of dominant noise processes in typical LOs -- captured through the low-frequency bias in $\psd{y}$~\cite{barnes1971,rutman1978} -- implies the presence of temporal correlations in $y(t)$ that may be exploited to improve feedback stabilization. These correlations are captured in the set of measurement outcomes $\mb{\y{k}}$; accordingly \emph{future evolution} of $y(t)$ may be predicted based on a past set of measurements within $\mb{\y{k}}$, so long as the past measurements and point of prediction fall within the characteristic \emph{correlation time} for the LO noise given by $\psd{y}$. This approach provides a direct means to account for LO evolution that is normally not compensated during \emph{dead time} in the measurement process.
\subsection{Optimal estimator for corrections}
The formal basis of our analytic approach, in summary, is to calculate a covariance matrix in the frequency domain via transfer functions to capture the relative correlations between sequential measurement outcomes of an LLO, and use this matrix to derive a linear \emph{predictor} of the LLO frequency offset at the moment of correction. Under appropriate conditions this predictor provides a correction with higher accuracy than that derived from a single measurement, allowing us to improve the ultimate performance of the LLO. Since the predictor is found using information from previous measurements (feedback) and a priori statistical knowledge of the LO noise to \emph{predict} the evolution of the LO (feedforward), we call the scheme \emph{hybrid feedforward}.
This approach shares common objectives with application of optimal control techniques such as Kalman filtering in the production of composite frequency standards from an ensemble of physical clocks~\cite{Greenhall2003}, or in compensating for deterministic frequency shifts due to e.g. aging or changes in the ambient temperature of a clock~\cite{Penrod, Kalman_Clock}. The primary advance of this work is the insight that \emph{stochastic} evolution of the LO can be predicted and compensated using optimal control protocols \emph{inside the feedback loop}.
In hybrid feedforward, results from a set of $n$ past measurements are linearly combined with weighting coefficients $\mb{c}_k$, optimized such that the $k$th correction, $C_k$, has maximum correlation with $y(t^c_k)$ at the instant of correction $t^c_k$ (Fig.~\ref{fig:traces}). Assuming that the LO noise is Gaussian, the optimal minimum mean-square error (MMSE) estimator is linear, and the optimal value of the correction is given by $C_k = \mb{c}_k \cdot\mb{\y{\mathnormal{k}}}$: the dot product of a set of correlation coefficients $\mb{c}_k$, derived from knowledge of $\psd{y}$, and a set of $n$ past measured samples, $\mb{\y{\mathnormal{k}}} = \{\y{k,1}, \cdots,\y{k,n} \}$. We define an $(n+1)\times(n+1)$ covariance matrix, where the $(n+1)$th term represents an ideal zero-duration sample at $t^c_k$; in the second line we write the covariance matrix in block form:
\begin{align}
\Sigma_k &\equiv \begin{bmatrix} \cov{\y{k,1}}{\y{k,1}} & \cdots & \cov{\y{k,1}}{y(t^c_k)} \\[1em]
\cov{\y{k,2}}{\y{k,1}} & \cdots & \cov{\y{k,2}}{y(t^c_k)} \\[1em]
\cdots & \cdots & \cdots \\[1em]
\cov{y(t^c_k)}{\y{k,1}} & \cdots & \cov{y(t^c_k)}{y(t^c_k)}
\end{bmatrix}\\
&\equiv \begin{bmatrix} \mb{M}_k & \mb{F}_k \\
\mb{F_\mathnormal{k}^T} & \cov{y(t^c_k)}{y(t^c_k)} \\
\end{bmatrix}.
\end{align}
\noindent In this form the matrix $\mb{M}_k$ describes correlations between measurement outcomes while the vector $\mb{F}_k$ describes correlations between each measurement and the LLO at the time of correction. The MMSE optimality condition is then fulfilled for
\begin{align}
\mb{c}_k &= \frac{\mb{F}_k }{\sqrt{\mb{F_\mathnormal{k}^T}\mb{M}_k\mb{F}_k}} \frac{w_k}{2\pi} \int_0^{\infty} \psd{y} d\omega
\end{align}
\noindent where $w_k$ is an overall correction gain. The covariance matrix elements are calculated as defined above in terms of the LO noise power spectrum.
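As an illustration, the correction vector above can be evaluated numerically once $\mb{M}_k$ and $\mb{F}_k$ have been computed from the noise power spectrum. The following sketch (the function name and the scalar stand-in for the integral over $\psd{y}$ are ours, not part of any published implementation) transcribes the expression for $\mb{c}_k$ directly:

```python
import numpy as np

def correction_coefficients(M, F, w, noise_power):
    """Weighting coefficients c_k for the hybrid-feedforward correction.

    M           -- n x n covariance matrix of past measurement outcomes (M_k)
    F           -- length-n covariance vector between measurements and y(t_c) (F_k)
    w           -- overall correction gain w_k
    noise_power -- stand-in for (1/2 pi) * integral of S_y(omega) d omega
    """
    F = np.asarray(F, dtype=float)
    M = np.asarray(M, dtype=float)
    return F / np.sqrt(F @ M @ F) * w * noise_power

# With uncorrelated measurements (M = identity) the coefficients reduce to
# the normalized correlation vector scaled by the gain and noise power.
c = correction_coefficients(np.eye(2), [3.0, 4.0], 1.0, 1.0)
```

The correction itself is then the single dot product $C_k = \mb{c}_k \cdot\mb{\y{\mathnormal{k}}}$ per cycle.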
In the practical setting of a frequency standard experiment, we wish to improve both the accuracy of each correction, by maximizing the correlation between $C_k$ and $y(t^c_k)$, and the long-term stability of the LLO output, captured by the metrics of frequency variance, sample variance, and Allan variance.
\begin{figure} [bp]
\includegraphics[width=0.48\textwidth]{Fig2R_v1.pdf}
\caption{\label{Fig:HFF} Schematic diagram of hybrid feedforward with an example protocol using $n=3$. Start and end times of measurements are defined arbitrarily permitting non-uniform-duration measurements, although measurements are illustrated as uniform for clarity. Corrections $C^{(n=3)}_k$ are applied in either non-overlapping blocks of three measurements or as a moving average (depicted here). In the latter case, the covariance matrix must be recalculated to correctly account for any variations in measurement duration. Dashed red arrows indicate the first corrections performed without full calculation of the covariance matrix. This effect vanishes for $k>n$.}
\end{figure}
Although the LLO frequency variance under hybrid feedforward for more than a single cycle cannot be expressed in a closed non-recursive form, a consideration of a single cycle can provide a value for $\langle y^{LLO}(t^c_k)^2 \rangle$ in terms of covariance matrix elements. This in turn provides a metric for the \emph{correction accuracy} for hybrid feedforward, defined as the extent to which a correction brings $y^{LLO}(t)\to0$ at the instant of correction, $t=t^c_k$:
\begin{align}\label{Eq:Accuracy}
A_k &\equiv \frac{\langle y^{LO}(t^c_k)^2 \rangle}{\langle y^{LLO}(t^c_k)^2 \rangle} \\
&= \bigg(1 + w_k^2 - w_k \frac{\abs{\mb{F_k}}^2}{\sqrt{\mb{F^T_k M_k F_k}}} \bigg)^{-1}
\end{align}
We can gain insights into the performance of the correction protocol by considering limiting cases. For instance, in the limit of white noise with negligible correlations, $\mb{M_k}\to\mathbb{I}$, the identity matrix. In this limit the rightmost term in Eq.~\ref{Eq:Accuracy} reduces to $w_{k}\abs{\mb{F_k}}$, which is small (there are negligible correlations between measurement outcomes and $y(t^{c}_{k})$). The accuracy then tends to $A_k \to 1/(1+ w_k)$, and is maximized by setting $w_{k}=0$ (not performing feedback at all), as corrections are uncorrelated with $y(t^{c}_{k})$. By contrast, with perfect correlations all elements of the covariance matrix take value unity. Standard feedback works perfectly by selecting unity gain and setting the number of measurements to be combined to $n=1$, correcting based on a single measurement.
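These limiting cases can be checked numerically with a direct transcription of Eq.~\ref{Eq:Accuracy} (the function name is ours; this is an illustrative sketch, not the authors' code):

```python
import numpy as np

def correction_accuracy(M, F, w):
    # Eq. (Accuracy): A_k = (1 + w^2 - w * |F|^2 / sqrt(F^T M F))^(-1)
    F = np.asarray(F, dtype=float)
    M = np.asarray(M, dtype=float)
    term = w * (F @ F) / np.sqrt(F @ M @ F)
    return 1.0 / (1.0 + w**2 - term)

# Perfect correlations, n = 1, unity gain: the parenthesized term is unity,
# so the accuracy evaluates to 1.
a_perfect = correction_accuracy([[1.0]], [1.0], 1.0)
```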
In intermediate regimes the ensemble-averaged accuracy of the hybrid feedforward correction is determined by a balance between the covariance among elements of $\mb{\y{k}}$ and the covariance between $\mb{\y{k}}$ and $y(t^{c}_{k})$, the LO noise at the time of correction. Achieving a correction that improves the LLO variance requires setting the term in parentheses in Eq.~\ref{Eq:Accuracy} to less than unity. This in turn places a condition on the correlations in the system:
\begin{align}
\sqrt{\mb{F^T_k M_k F_k}}<\frac{\abs{\mb{F_k}}^2}{w_{k}}
\end{align}
\noindent We can interpret the effect of $\mb{M_k}$ as an effective rotation matrix, reducing the magnitude of the left-hand side of the expression above by effectively maximizing the ``angle'' between $\mb{M_{k}F_{k}}$ and $\mb{F_{k}}$. While it is unphysical to reduce this to zero based on the limiting cases discussed above, it is possible to appropriately select $k$, based on characteristics of $\psd{y}$ in order to improve correction accuracy.
In all slaved frequency standards we rely on repeated measurements and corrections to provide long-term \emph{stability}, a measure of how the output frequency of the LLO deviates from its mean value over time. We study this by calculating the sample variance of a time-sequence of measurement outcomes averaged over an ensemble of noise realizations, $\langle\svar{N}{}\rangle$. A ``moving average'' style of hybrid feedforward provides improved long-term stability, as the correction $C_k$ will depend on the set of measurement outcomes $\mb{\y{\mathnormal{k}}} = \{\y{k-n+1}, \cdots, \y{k} \}$, among which previous corrections have been interleaved, as illustrated in Fig.~\ref{Fig:HFF}. In this case the covariance matrix must be updated to reflect the action of each correction. See Appendix for a detailed form of the Sample Variance in the case of this form of stabilization.
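A minimal sketch of the moving-average bookkeeping is shown below (our own illustrative code; it omits the covariance-matrix updates that account for previously applied corrections):

```python
import numpy as np

def moving_corrections(outcomes, c):
    """Corrections C_k = c . {y_{k-n+1}, ..., y_k} over a sliding window.

    outcomes -- sequence of measurement outcomes y_k
    c        -- length-n weighting coefficients (assumed fixed here; in
                practice they are recomputed as the covariances change)
    """
    n = len(c)
    return [float(np.dot(c, outcomes[k - n:k]))
            for k in range(n, len(outcomes) + 1)]
```

For example, with $n=2$ and equal weights, each correction is simply the average of the two most recent outcomes.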
\subsection{Numerical Simulations}
In order to test the general performance of hybrid feedforward in different regimes we perform numerical simulations of noisy LOs with user-defined statistical properties, characterized by $\psd{y}$. We produce a fixed number of LO realizations in the time domain and then use these to calculate measures such as the sample variance over a sequence of ``measurement'' outcomes with user-defined Ramsey measurement times, dead times, and the like. In these calculations we may assume that the LO is free running, experiencing standard feedback, or employing hybrid feedforward, and then take an ensemble average over LO noise realizations. Our calculations include various noise power spectra with tunable high-frequency cutoffs, including common `flicker frequency' ($\psd{y} \propto 1/\omega$) and `random walk frequency' ($\psd{y} \propto 1/\omega^{2}$) noise, as appropriate for experiments incorporating realistic LOs.
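One simple way to generate such realizations (a sketch under the assumption of Gaussian noise; normalization constants are omitted and the function name is ours) is to shape white Gaussian noise in the frequency domain according to $\psd{y}$:

```python
import numpy as np

def lo_noise_realization(n_samples, dt, psd, rng):
    """One time-domain realization y(t) whose spectrum follows psd(omega)."""
    freqs = np.fft.rfftfreq(n_samples, d=dt)
    omega = 2.0 * np.pi * freqs
    amp = np.zeros_like(omega)
    nz = omega > 0.0                        # leave the DC bin at zero
    amp[nz] = np.sqrt(psd(omega[nz]))
    # complex white noise, shaped by the square root of the PSD
    spec = amp * (rng.standard_normal(amp.size)
                  + 1j * rng.standard_normal(amp.size))
    return np.fft.irfft(spec, n=n_samples)

rng = np.random.default_rng(1234)
y = lo_noise_realization(1024, 1.0, lambda w: 1.0 / w, rng)  # flicker-frequency noise
```

Stability metrics are then computed by averaging over many such realizations.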
Tunability in the hybrid feedforward protocol comes from the selection of $n$ in determining $\{C_k\}$, as well as the selected Ramsey periods, permitting an operator to sample different parts of $\psd{y}$. As an example, we fix our predictor to consider $n=2$ sequential measurements and permit the Ramsey durations to be varied as optimization parameters. A Nelder-Mead simplex optimization over the measurement durations finds that a hybrid feedforward protocol consisting of a long measurement period followed by a short period maximizes correction accuracy (Fig.~\ref{Fig:Accuracy}). This structure ensures that low-frequency components of $\psd{y}$ are sampled while the measurement sampling the highest-frequency noise contributions is maximally correlated with $y(t^{c}_{k})$. With $\psd{y}\propto 1/\omega$ and $\psd{y}\propto 1/\omega^{2}$ we observe increased accuracy under hybrid feedforward, while the rapid fluctuations in $y(t)$ arising from a white power spectrum mitigate the benefits of hybrid feedforward, as expected. In the parameter ranges we have studied numerically we find that correction accuracy is maximized for $n=2$ to $3$, with diminishing performance for larger $n$. Again, this is determined by the relevant correlation time of the LO noise.
\begin{figure}
\includegraphics[width=0.48\textwidth]{Fig3R_v1.pdf}
\caption{\label{Fig:Accuracy}Calculated correction accuracy of the first correction for hybrid feedforward normalized to feedback (accuracy = 1), under different forms of $\psd{y}$ as a function of the ratio of Ramsey periods between the two measurements employed in constructing $C^{(2)}_{k}$. Correction accuracy for feedback is calculated assuming the minimum Ramsey time; thus for the ratio of Ramsey measurements taking value unity on the $x$-axis, the hybrid feedforward scheme takes twice as long as feedback. Inset: depiction of the form of $C^{(2)}_{k}$ used in hybrid feedforward, depicting the ``slower'' measurement being performed first.}
\end{figure}
In Fig.~\ref{fig:svar}b we demonstrate the resulting \emph{normalized improvement} in $\langle\svar{N}{}\rangle$ up to $N=100$ measurements, calculated using feedback and hybrid feedforward with $n=2$, and assuming uniform $T_{R}$. We observe clear improvement (reduction) in $\langle\svar{N}{}\rangle$ through the hybrid feedforward approach, with relative improvements of order $5$--$25\%$ in $\langle\svar{N}{}\rangle$ over standard measurement feedback. We present data for different functional forms of $\psd{y}$, including low-frequency-dominated flicker noise ($\propto 1/\omega$) and power spectra ($\propto 1/\omega^{1/2}$) with more significant noise near $T^{-1}_{c}$. The benefits of our approach are most significant in the long term, when high-frequency noise reduces the efficacy of standard feedback. Notably, because of well-known relationships between LO \emph{phase noise} and LO \emph{frequency noise}~\cite{nist1990}, significant high-frequency weight in $\psd{y}$ is commonly encountered.
\begin{figure}
\includegraphics[width=0.48\textwidth]{Fig4R_v1.pdf}
\caption{\label{fig:svar}(a, b) Calculated sample variance for an unlocked LO, feedback, and hybrid feedforward, as a function of measurement number $N$, for different power spectra (indicated on graphs). Calculations assume $\psd{y}\propto 1/\omega$, with a high-frequency cutoff $\omega_{c}/2\pi=100/T_{c}$, and $\psd{y}\propto 1/\omega^{1/2}$ with a cutoff frequency $\omega_{c}/2\pi=1/T_{c}$, demonstrating the importance of high-frequency noise near $\omega/2\pi=T^{-1}_{c}$. PSDs with different $\omega$-dependences are normalized to have the same value at $\omega_{low} = 1/100T_{c}$. (c) Normalized sample variance data from panels (a) and (b) presented as the ratio $\langle\svar{N}{}\rangle^{(HFF)}/\langle\svar{N}{}\rangle^{(FB)}$ in order to demonstrate improvement due to hybrid feedforward (numbers less than unity indicate smaller sample variance under hybrid feedforward). (d) Calculated $\langle\svar{N}{}\rangle$ for $N=20$ as a function of duty factor, normalized to the sample variance for the free-running LO. Data above the red dashed line indicate that the standard feedback approach produces instability \emph{larger} than that for the free-running oscillator. Both data sets assume $\psd{y}\propto 1/\omega$, with $\omega_{c}/2\pi=100/T_{c}$. Crosses represent data with ten noise spurs superimposed on $\psd{y}$, starting at $\omega/2\pi=1.15T^{-1}_{c}$ and increasing linearly with step size $0.15T^{-1}_{c}$.}
\end{figure}
In Fig.~\ref{fig:svar}c we calculate the expectation value of the sample variance at a fixed value of $N=20$ for an LLO stabilized using either traditional feedback or hybrid feedforward. The sample variances are normalized by that for the free-running LO, meaning that values of this metric less than unity demonstrate improvement due to stabilization, and smaller values indicate better stabilization. On the horizontal axis we vary the duty factor $d$, defined as the ratio of the interrogation time to total cycle time, $d\equiv T_{R}/T_c$, from $1\%$ to unity (no dead time), and we compare $\psd{y}\propto 1/\omega$ and $\psd{y}\propto 1/\omega^{1/2}$. These power spectra are conservative but inspired by typical LO phase noise specifications, weighted to enhanced high-frequency content due to the conversion between phase and frequency instability~\cite{nist1990}.
The improvement provided by hybrid feedforward is most marked for low duty factor $d$. As $d\to1$ the performance of traditional feedback and hybrid feedforward converges, as standard feedback corrections become most effective when dead time is shortest. However, as the dead time increases, and in the presence of $\psd{y}$ with frequency weight near $T_c$, feedback efficacy diminishes due to uncompensated evolution of the LO during the dead time.
In this regime knowledge of correlations in the noise allows hybrid feedforward to provide metrologically significant gains in stability relative to traditional feedback. In Fig.~\ref{fig:svar}d we further demonstrate that in the presence of a typical $1/\omega$ power spectrum, the inclusion of noise spurs near $\omega/2\pi=T^{-1}_{c}$ results in certain regimes where standard feedback makes long-term stability \emph{worse} than applying no feedback at all, while feedforward provides useful stabilization. This significant difference arises because even though the noise processes are random, knowledge of the statistical properties of the noise provides a means to effectively model the average dynamical evolution of the system, and accurately predict how the system will evolve in the future. Exact performance depends sensitively on the form and magnitude of $\psd{y}$, but results demonstrate that systems with high-frequency noise content around $\omega/2\pi \approx T^{-1}_{c}$ benefit significantly from hybrid feedforward.
\section{\label{Sec:Conclusion}Conclusion}
In summary, we have presented a set of analytical tools describing LLO performance in the frequency domain for arbitrary measurement times, durations, and duty cycles. We have employed these generalized transfer functions to develop a new software approach to LO feedback stabilization in slaved passive frequency standards, bringing optimal estimation techniques inside the feedback loop. This technique leverages a series of past measurements and statistical knowledge of the noise to improve the accuracy of feedback corrections and ultimately improve the stability of the slaved LO. We have validated these theoretical insights using numerical simulations of noisy local oscillators and calculations of relevant stability metrics.
The results we have presented have not by any means exhausted the space of modifications to clock protocols available using this framework. For instance we have numerically demonstrated improved correction accuracy using nonuniform-duration $T_{R}$ over a cycle, as well as long-term stability improvement using only the simplest case of uniform $T_{R}$. These approaches may be combined to produce LLOs with improved accuracy relative to the reference at the time of correction and improved long-term stability. In cases where the penalty associated with increasing $T_{R}$ is modest (lower high-frequency cutoff), such composite schemes can provide substantial benefits as well, improving both accuracy of correction to the LLO and overall frequency standard stability. Other expansions may leverage the basic analytic formalism we have introduced; we have introduced the transfer functions, $\tf{}$ and $\pairtf{k}{l}$, but have assumed only the simplest form for the time-domain sensitivity function and fixed overall gain. However, it is possible to craft a measurement protocol to yield $\tf{}$ that suppresses the dominant spectral features of the LO noise. We have observed that through such an approach one may reduce the impact of aliasing on clock stabilization, indicating a path for future work on reducing of the so-called Dick limit in precision frequency references.
In the parameter regimes we have studied, the relative performance benefits of the hybrid feedforward approach are of metrological significance, especially considering that they may be gained using only ``software'' modification, without the need for wholesale changes to the clock hardware. We believe the approach may find special significance in tight-SWAP (size, weight, and power) applications such as space-based clocks, where significantly augmenting LO quality is generally impossible due to system-level limitations. Overall, we believe that this work indicates clear potential to improve passive frequency standards by incorporation of optimal estimation techniques in the feedback loop itself. \emph{Note:} While preparing this manuscript we became aware of related work seeking to employ covariance techniques to improve measurements of quantum clocks~\cite{mullan2014}.
\emph{Acknowledgements}: The authors thank H. Ball, D. Hayes, and J. Bergquist for useful discussions. This work was partially supported by Australian Research Council Discovery Project DP130103823, the US Army Research Office under Contract Number W911NF-11-1-0068, and the Lockheed Martin Corporation.
\section*{Appendix}
\subsection*{\label{llovariances}Variances for locked local oscillators with hybrid feedforward}
The standard measures of oscillator performance either consider a free-running LO or provide a means only to statistically characterize measurement outcomes under black-box conditions. Here we present explicit analytic forms for different measures of variance in the presence of feedback locking.
The expected value of the LLO sample variance can be found by substituting (\ref{eq:diff}) into the definition of the sample variance, producing a generic expression for traditional feedback (one measurement per correction cycle) and hybrid feedforward (multiple measurements per cycle):
\begin{widetext}
\begin{align}
\ev{\svar{N}{LLO}} &= \frac{1}{N-1} \sum_{k'=1}^{N} \bigg\{\mvar{k'}{LLO} + \frac{1}{N^2} \sum_{p'=1}^N\sum_{q'=1}^N \cov{\yllo{p'}}{\yllo{q'}} - \frac{2}{N} \sum_{l'=1}^N \cov{\yllo{k'}}{\yllo{l'}} \bigg\} \\
&= \frac{1}{N-1} \sum_{k'=1}^{N}\bigg\{ \bigg(\mvar{k'}{LO} + \gb{k'}^2 \sum_{r=1}^{\floor{k'/n}}\sum_{s=1}^{\floor{k'/n}} \cov{C_r}{C_s} - 2\gb{k'}\sum_{u=1}^{\floor{k'/n}} \cov{\ylo{k'}}{C_u}\bigg) \nonumber\\
&+ \frac{1}{N^2} \sum_{p'=1}^{N}\sum_{q'=1}^{N} \cov{\ylo{p'} + \gb{p'}\sum_{p=1}^{\floor{p'/n}}C_p}{\ylo{q'} + \gb{q'}\sum_{q=1}^{\floor{q'/n}}C_q} - \frac{2}{N} \sum_{l'=1}^{N} \cov{\ylo{k'} + \gb{k'}\sum_{u=1}^{\floor{k'/n}}C_u}{\ylo{l'} + \gb{l'}\sum_{v=1}^{\floor{l'/n}}C_v}\bigg\} \\
&= \frac{1}{N-1} \sum_{k'=1}^{N}\bigg\{ \bigg(\mvar{k'}{LO} + \gb{k'}^2 \sum_{r=1}^{\floor{k'/n}}\sum_{s=1}^{\floor{k'/n}} \cov{C_r}{C_s} - 2\gb{k'}\sum_{u=1}^{\floor{k'/n}} \cov{\ylo{k'}}{C_u}\bigg) \nonumber\\
&+ \frac{1}{N^2} \sum_{p'=1}^{N}\sum_{q'=1}^{N} \bigg( \cov{\ylo{p'}}{\ylo{q'}} + \gb{p'}\gb{q'}\sum_{p=1}^{\floor{p'/n}}\sum_{q=1}^{\floor{q'/n}} \cov{C_p}{C_q} \bigg) \nonumber\\
&- \frac{2}{N} \sum_{l'=1}^{N} \bigg(\cov{\ylo{k'}}{\ylo{l'}} + \gb{k'}\gb{l'}\sum_{k=1}^{\floor{k'/n}}\sum_{l=1}^{\floor{l'/n}} \cov{C_k}{C_l} \bigg)\bigg\}
\end{align}
\end{widetext}
\noindent where in the case of hybrid feedforward, $N$ is defined to be the total number of measurements and $n$ is the number of measurements per cycle. The summation signs with unprimed indices are sums over whole cycles (of which there are $\floor{N/n}$) and the primed indices are sums over all $N$ measurements. In general, $\ev{\svar{N}{LLO}}$ contains recursive terms that cannot be concisely expressed in terms of the LO PSD $\psd{y}$ and covariance transfer function $G^2(\omega)$.
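For reference, the underlying per-realization estimator in these expressions is the ordinary sample variance of the $N$ measurement outcomes (the ensemble average over noise realizations is then applied on top of it); a minimal sketch:

```python
import numpy as np

def sample_variance(y):
    # unbiased sample variance of a sequence of measurement outcomes
    y = np.asarray(y, dtype=float)
    return float(np.sum((y - y.mean()) ** 2) / (y.size - 1))
```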
The Allan variance, the conventional measure of frequency standard instability, can be expressed analogously
\begin{align}
\avar{y}{} &= \frac{1}{2\pi} \int_{0}^{\infty} \psd{y} \atf{} d\omega
\end{align}
\noindent where the transfer function, for ideal Ramsey interrogation, is
\begin{align}
\atf{} &= \frac{2 \sin^4{(\omega T_{R}/2)}}{(\omega T_{R}/2)^2}
\end{align}
\noindent where $T_{R}$ lacks an index because the definition of the Allan variance assumes equal-duration interrogation bins \cite{rutman1978}. The Allan variance calculated via this frequency-domain approach can be compared to its value via the time-domain approach, which consists of finding the variance of the difference between consecutive pairs of measurement outcomes:
\begin{align}\label{eq:allan}
\avar{y}{}&= \frac{1}{2}\langle (\y{k+1} - \y{k})^2 \rangle
\end{align}
\noindent where $\y{k}$ is the $k$th measurement outcome and $\langle\cdots\rangle$ may indicate a time average or an ensemble average, depending on whether $y(t)$ is assumed to be ergodic.
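The time-domain estimator in Eq.~\ref{eq:allan} is straightforward to compute from a record of measurement outcomes; a minimal sketch:

```python
import numpy as np

def allan_variance(y):
    # Eq. (allan): half the mean squared difference of consecutive outcomes
    y = np.asarray(y, dtype=float)
    return float(0.5 * np.mean(np.diff(y) ** 2))
```

For a linearly drifting record every consecutive difference is identical, so the estimator returns half the squared per-step drift.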
The LLO Allan variance can be found by substituting (\ref{eq:nonrec}) into the definition of the Allan variance (\ref{eq:allan}):
\begin{widetext}
\begin{align}
\avar{k}{LLO} &= \frac{1}{2}\ev{(\yllo{k+1} - \yllo{k})^{2}} \\
&= \frac{1}{2}\evbig{\bigg(\ylo{k+1} - \frac{\gb{k+1}}{\gb{k}} \ylo{k} - \ylo{k} + \frac{\gb{k}}{\gb{k-1}} \ylo{k-1}\bigg)^{2}} \\
&= \frac{1}{2}\bigg( \mvar{k+1}{LO} + \bigg(1+\frac{\gb{k+1}}{\gb{k}}\bigg)^{2}\mvar{k}{LO} + \bigg(\frac{\gb{k}}{\gb{k-1}}\bigg)^{2} \mvar{k-1}{LO}\nonumber\\
&+ \frac{2\gb{k}}{\gb{k-1}} \cov{\ylo{k+1}}{\ylo{k-1}} - 2\bigg(1+\frac{\gb{k+1}}{\gb{k}}\bigg) \cov{\ylo{k}}{\ylo{k+1}} - \frac{2(\gb{k} + \gb{k+1})}{\gb{k-1}}\cov{\ylo{k}}{\ylo{k-1}}\bigg)
\end{align}
\end{widetext}
\section{Introduction \label{introduction}}
Laminar-turbulent transition of boundary-layer flows has a strong impact on the performance of flight vehicles across multiple flow regimes due to its effect on surface skin friction and aerodynamic heating. Predicting the transition location in computational fluid dynamics (CFD) simulations of viscous flows remains a challenging area~\citep{slotnick2014}. Transition to turbulence in a benign disturbance environment is typically initiated by the amplification of modal instabilities of the laminar boundary layer.
Depending on the flow configuration, these instabilities can be of several types, e.g., Tollmien-Schlichting (TS) waves, oblique first-mode instabilities, and planar waves of second-mode (or Mack-mode) type.
A description of transition prediction methods based on stability correlations can be found in a variety of references~\citep{ingen1956, ingen2008, smith1956}, but we provide a brief description here to make the paper self-contained. A somewhat expanded description may also be found in \cite{zafar2020}. The transition process begins with the excitation of linear instability waves that undergo a slow amplification in the region preceding the onset of transition. The linear amplification phase is followed by nonlinear processes that ultimately lead to turbulence. Since these nonlinear processes are relatively rapid, it becomes possible to predict the transition location based on the evolution of the most amplified instability mode. The linear amplification ratio, $e^N$, is generally computed using the classical linear stability theory~\citep{mack1987, reed1996, juniper2014, taira17, reshotko1976}. The local streamwise amplification rates ($\sigma$) along the aerodynamic surface can be determined by solving an eigenvalue problem for the wall-normal velocity perturbation, which is governed by the Orr-Sommerfeld (OS) equation \citep{drazin1981}. These local streamwise amplification rates corresponding to each frequency ($\omega$) are then integrated along the body curvature to obtain the logarithmic amplification of the disturbance amplitude (N-factor) for each disturbance frequency ($\omega$). The $e^N$ method assumes that the occurrence of transition correlates very well with the N-factor of the most amplified instability wave reaching a critical value $N_\text{tr}$. For subsonic and supersonic flows, the critical N-factor (denoted herein as $N_\text{tr}$) has been empirically found to lie in the range between 9 and 11 \citep{ingen2008,bushnell1989}. Such a prediction process may be schematically illustrated as shown in Fig.~\ref{fig:methodology_schematic}(a).
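The integration step of the $e^N$ method, for a single disturbance frequency, amounts to a cumulative integral of $\sigma$ along the surface. A minimal sketch follows (the function name and discretization are ours, and for simplicity the growth rate is integrated from the first station rather than from the neutral point):

```python
import numpy as np

def n_factor(s, sigma):
    """Cumulative trapezoidal integral of the growth rate sigma(s)."""
    s = np.asarray(s, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    increments = 0.5 * (sigma[1:] + sigma[:-1]) * np.diff(s)
    return np.concatenate(([0.0], np.cumsum(increments)))
```

Repeating this integration for each frequency yields the family of N-factor curves from which the envelope and the transition location are determined.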
\begin{figure}
\centering
\subfloat[Linear Stability Theory (LST): Growth rates are computed for each instability wave characterized by frequency ($\omega$) and wave number ($\alpha$). Growth rates are integrated ($\int$) along the airfoil contour to obtain corresponding N-factor values, from which the N-factor envelope and transition location ($x/c_\text{tr}$) are determined for a given value of correlating N-factor ($\text{N}=\text{N}_\text{tr}$). The most amplified Tollmien-Schlichting (TS) waves in two-dimensional boundary layers correspond to two-dimensional instability waves ($\beta=0$) \citep{rajnarayan2013}. ]{\includegraphics[width=0.57\textwidth]{schematic_LST.pdf}} \\
\subfloat[Convolutional Neural Network-based model (CNN): Instead of using eigenvalue analysis, local growth rates corresponding to each frequency ($\omega$) are predicted using a neural network
\citep{zafar2020}. ]{\includegraphics[width=0.57\textwidth]{schematic_CNN.pdf}} \\
\subfloat[Recurrent Neural Network (RNN) model for N-factor envelope modelling: Growth rate of the N-factor envelope ($dN/ds$) is directly predicted, which is then integrated ($\int$) along the airfoil contour to obtain the N-factor envelope and estimated transition location ($x/c_\text{tr}$) for a given value of correlating N-factor ($\text{N}=\text{N}_\text{tr}$). ]{\includegraphics[width=0.57\textwidth]{schematic_RNN.pdf}}
\caption{Comparison of transition prediction methodologies}
\label{fig:methodology_schematic}
\end{figure}
Linear stability computations rely on highly accurate computations of the mean boundary-layer flow. Solution of the linear stability equations is also computationally expensive and often leads to the contamination of the unstable part of the spectrum by spurious eigenvalues. The nonrobust nature of the stability computations requires a significant degree of expertise in stability theory on the user's part, making such computations ill-suited for nonexpert users. For these reasons, transition prediction based on stability computations has been difficult to automate, which renders its direct integration in CFD solvers rather impractical. Several aerodynamic applications involving flow separation also entail viscous-inviscid interactions. Such interactions lead to a strong coupling between transition and the overall flow field, which requires an iterative prediction approach. Hence, the integration of the transition prediction method in the overall aerodynamic prediction method remains an important area of research \citep{slotnick2014}.
Several methods have been proposed as simplifications or surrogate models of the $e^N$ method, including database query techniques \citep{ingen2008, drela1987, perraud2016} and data fitting techniques \citep{dagenhart1981, stock1989, gaster1995, langlois2012, krumbein2008, rajnarayan2013, begou2017, pinna2018}. These methods are generally based on a small set of scalar input parameters representing the mean flow parameters and relevant disturbance characteristics. However, these methods do not scale well with larger sets of parameters, which tends to limit the expressive power of the transition model based on these traditional methods \citep{crouch2002}. In particular, the shape factor is a commonly used scalar parameter to correlate the disturbance amplification rates to the mean flow of the boundary layer. However, the shape factor cannot be easily computed for many practical flows, such as high-speed flows over blunt leading edges, which results in poor predictive performance of the database methods \citep{paredes2020_journal}.
Neural networks provide a more generalized way of predicting the instability characteristics, while also accounting for their dependency on high-dimensional input features in a computationally efficient and robust manner. \cite{fuller1997} applied neural network methods to instability problems in predicting the instability growth rates for a jet flow. \cite{crouch2002} used scalar parameters and the wall-normal gradient of the laminar velocity profile as an input of neural networks to predict the maximum instability growth rates. They demonstrated the generalizability of the neural network method for both Tollmien–Schlichting waves and stationary crossflow instabilities. The data for the gradient of the laminar velocity profile were coarsely defined at six equidistant points across the boundary layer. A fully connected neural network was used, which assumes no spatial structure on the input data. Such treatment of boundary-layer profiles may not be well suited for other instability mechanisms involving, for instance, Mack-mode instabilities in high-speed boundary layers that require input profiles of thermodynamics quantities along with the velocity profiles~\citep{paredes2020_journal} or the secondary instabilities of boundary-layer flows with finite-amplitude stationary crossflow vortices that include rapid variations along both wall-normal and spanwise coordinates.
By utilizing recent developments in machine learning, \cite{zafar2020} proposed a transition model based on convolutional neural networks (CNNs), which has the ability to generalize across multiple instability mechanisms in an efficient and robust manner. CNNs were used to extract a set of latent features from the boundary-layer profiles, and the extracted features were used along with other scalar quantities as input to a fully connected network. The hybrid architecture was used to predict the instability growth rates for Tollmien-Schlichting instabilities in two-dimensional incompressible boundary layers. The extracted latent feature showed a strong, nearly linear correlation with the analytically defined shape factor ($H$) of the boundary-layer velocity profile. The model was trained using a database of the Falkner--Skan family of self-similar boundary-layer profiles.
This CNN-based method is applicable to various instability mechanisms with higher-dimensional input features, since the boundary-layer profiles are treated in a physically consistent manner (i.e., as discrete representation of the profiles accounting for their spatial structures). It has been applied to predict the instability growth rates of Mack-mode instabilities in hypersonic flows over a moderately blunt-nosed body~\citep{paredes2020_journal}. This particular application requires additional input features in the form of boundary-layer profiles of thermodynamic quantities such as temperature and/or density, where the CNN-based model demonstrated highly accurate predictions of the instability growth rates.
Despite clear advantages over earlier neural network-based models, the CNN-based transition model shares a few significant shortcomings with the direct integration of linear stability theory toward transition prediction.
Similar to the stability theory, the CNN-based transition model is based on the predictions of the instability growth rates corresponding to each selected disturbance frequency $\omega$ (and/or the spanwise wavenumber $\beta$ in the case of three-dimensional instabilities) and every station from the input set of boundary-layer profiles. The instability growth rates for each individual disturbance are integrated along the aerodynamic surface to predict the growth in disturbance amplitude, or equivalently, the N-factor curve for each combination of frequency and spanwise wavenumber. Finally, one must determine the envelope of the N-factor curves to predict the logarithmic amplification ratio for the most amplified disturbance at each station (denoted as $N_{env}$ herein), which can then be used in conjunction with the critical N-factor ($N_\text{tr}$) based on previous experimental measurements to predict the transition location (see Fig.~\ref{fig:methodology_schematic}b for a summary). This overall workflow not only extends over several steps, but also requires the user to estimate the range of frequencies (and/or spanwise wavenumbers) that would include the most amplified instability waves corresponding to the envelope of the N-factor curves. The selection of disturbance parameters for a given flow configuration can be somewhat challenging for nonexpert users. More importantly, however, the above workflow requires several redundant growth rate computations involving subdominant disturbances that do not contribute to the N-factor envelope used to apply the transition criterion, namely, $N = N_\text{tr}$.
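The final steps of this workflow (forming the envelope over the family of N-factor curves and locating its crossing with the critical N-factor) reduce to a pointwise maximum and an interpolated threshold crossing; a hedged sketch with hypothetical array shapes:

```python
import numpy as np

def transition_x(x, n_curves, n_cr):
    """Interpolated x where the N-factor envelope first reaches n_cr.

    x        -- surface stations, shape (m,)
    n_curves -- N-factor curves, one row per frequency, shape (n_freq, m)
    n_cr     -- empirical critical N-factor
    Returns None if the envelope never reaches n_cr.
    """
    env = np.max(n_curves, axis=0)              # envelope over all frequencies
    above = np.nonzero(env >= n_cr)[0]
    if above.size == 0:
        return None
    i = above[0]
    if i == 0:
        return float(x[0])
    frac = (n_cr - env[i - 1]) / (env[i] - env[i - 1])
    return float(x[i - 1] + frac * (x[i] - x[i - 1]))
```

This makes explicit why subdominant curves are redundant: only the curve that forms the envelope near the crossing influences the predicted location.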
Finally, and similar to the earlier neural network models~\citep{crouch2002}, the growth rate prediction during the all important first step of the above workflow only uses the local boundary-layer profiles, and hence, does not utilize any information about the prior history of a given disturbance, e.g., any previously estimated instability growth rates at the upstream locations. Since the boundary-layer profiles evolve in a continuous manner, the spatial variation in the disturbance growth rate represents an analytic continuation along the aerodynamic surface. Thus, embedding the upstream history of boundary-layer profiles and/or the disturbance growth rates should lead to more accurate, robust, and computationally efficient models for the onset of transition.
A recurrent neural network (RNN) is a promising approach for modeling the history effects. The RNN is a general-purpose architecture for modeling sequence transduction by using an internal state (memory) that selectively keeps track of the information at the preceding steps of the sequence \citep{graves2012}. RNNs provide a combination of a multivariate internal state and nonlinear state-to-state dynamics, which makes them particularly well-suited for dynamic system modeling. \cite{faller1997} exploited these attributes of RNNs to predict unsteady boundary-layer development and separation over a wing surface. RNN architectures have also been used for the modeling of several other complex dynamic systems, ranging from near-wall turbulence \citep{guastoni2019} to the detection of extreme weather events \citep{wan2018} and the spatiotemporal dynamics of chaotic systems \citep{vlachas2020}, among others.
The feature extraction capability of the CNN and the sequence-to-sequence mapping enabled by the RNN provide a direct correlation with the underlying physics of transition, exemplifying machine learning models motivated by the physics of the problem, e.g., in the modeling of turbulence~\citep{wang2017-physics, wu2018-physics, duraisamy2019turbulence}. Transport-equation-based models, such as the well-known Langtry-Menter 4-equation model~\citep{menter2006}, rely on empirical transition correlations expressed in terms of local mean flow parameters; their connection with the underlying physics of the transition process is therefore significantly weaker than that of stability-based correlations, whether the latter involve direct computations of linear stability characteristics or a proxy thereto, as represented within the proposed RNN model. This paper is aimed at exploiting the sequential dependency of mean boundary-layer flow properties to directly predict the maximum growth rates among all unstable modes at a sequence of stations along the airfoil surface. Such sequential growth rates can then be integrated along the airfoil surface to determine the N-factor envelope and the corresponding transition location, as schematically illustrated in Fig.~\ref{fig:methodology_schematic}(c). To this end, an extensive airfoil database has been used that documents mean flow features and linear stability characteristics for a large set of airfoils at a range of flow conditions (Reynolds numbers and angles of attack). Furthermore, we provide insight on the similarity of stability characteristics among different families of airfoils and how a neural network trained on one set of airfoils can generalize to other ones, possibly at different flow conditions.
The rest of the manuscript is organized as follows. The proposed RNN model is introduced in \S\ref{rnn} along with the input and output features. Section~\ref{database} presents the airfoil database used to develop and evaluate the proposed transition model. Section~\ref{results} presents the results and discussion for different training and testing cases, which provide insight toward subsampling of training datasets for achieving a reasonable predictive performance from the RNN model. Section~\ref{conclusion} concludes the paper.
\section{Recurrent Neural Network \label{rnn}}
A neural network consists of successive composition of linear mapping and nonlinear squashing functions, which aims to learn the underlying relationship between an input vector ($\mathbf{q}$) and an output vector ($\mathbf{y}$) from a given set of training data. The series of functions are organized as a sequence of layers, each containing several neurons that represent specific mathematical functions. The mathematical functions in each layer are parameterized by the weight ($\mathbf{W}$) and bias ($\mathbf{b}$). Intermediate layers between the input layer ($\mathbf{q}$) and output layer ($\mathbf{y}$) are known as hidden layers. The functional mapping of a neural network with a single hidden layer can be expressed as:
\begin{equation}
\boldsymbol{y}=\boldsymbol{W}^{(2)}\left(f\left[\boldsymbol{W}^{(1)} \boldsymbol{q}+\boldsymbol{b}^{(1)}\right]\right)+\boldsymbol{b}^{(2)}
\end{equation}
where $\mathbf{W}^{(l)}$ and $\mathbf{b}^{(l)}$ represent the weight matrix and bias vector for the $l^{th}$ layer, respectively, and $f$ is an activation function. Activation functions enable the representation of complex functional mapping by introducing nonlinearity in the composite functions. The training of a neural network is a process of learning the weights and biases with the objective of fitting the training data.
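The single-hidden-layer mapping of the equation above can be sketched in a few lines of NumPy; the dimensions and random weights below are illustrative only and are not taken from the model described in this paper:

```python
import numpy as np

def relu(x):
    """Activation function f; ReLU, as also used later in the proposed model."""
    return np.maximum(0.0, x)

def forward(q, W1, b1, W2, b2):
    """Single-hidden-layer mapping: y = W2 f(W1 q + b1) + b2."""
    return W2 @ relu(W1 @ q + b1) + b2

# Illustrative dimensions only: 4 inputs, 8 hidden neurons, 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)
y = forward(rng.standard_normal(4), W1, b1, W2, b2)
```

Training then amounts to adjusting $\mathbf{W}^{(l)}$ and $\mathbf{b}^{(l)}$ by gradient descent on a loss measuring the misfit over the training data.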
Recurrent neural networks (RNNs) are architectures with internal memory (known as the \emph{hidden states}), which make them particularly suitable for sequential data such as time series, spatial sequences, and words in a text. The RNN processes the sequence of inputs in a step-by-step manner while selectively passing the information across a sequence of steps encoded in a hidden state. At any given step $i$, the RNN operates on the current input vector ($\mathbf{q}_i$) in the sequence and the hidden state $\mathbf{h}_{i-1}$ passed on from the previous step, to produce an updated hidden state $\mathbf{h}_{i}$ and an output $\mathbf{y}_i$. Figure~\ref{fig:rnn_cell} shows the schematic of a recurrent neural network.
Multiple RNNs can be stacked over each other, as shown in Fig.~\ref{fig:rnn_architecture}, to provide a deep RNN. The functional mapping for an architecture with $L$ layers of RNN stacked over each other can be expressed as:
\begin{subequations}
\label{eq:rnn}
\begin{align}
\boldsymbol{h}_{i}^{l}&=f\left[\boldsymbol{W}_{hh}^{l} \cdot \boldsymbol{h}_{i-1}^{l}+\boldsymbol{W}_{qh}^{l} \cdot \boldsymbol{h}_{i}^{l-1}\right] \label{eq:rnn_A} \\
\boldsymbol{y}_{i}&=\boldsymbol{W}_{hy} \cdot \boldsymbol{h}_{i}^{L} \label{eq:rnn_B}
\end{align}
\end{subequations}
where $\boldsymbol{W}_{\text{hh}}^{l}$, $\boldsymbol{W}_{\text{qh}}^{l}$, and $\boldsymbol{W}_{\text{hy}}$ are the model parameters corresponding to the mapping from a previous hidden state to the subsequent hidden state, from an input vector to a hidden state, and from a hidden state to an output vector, respectively. The model parameters ($\boldsymbol{W}_{\text{hh}}^{l}$ and $\boldsymbol{W}_{\text{qh}}^{l}$) have sequential invariance across each layer, i.e., the input vector and hidden state at each step along the sequence are processed by the same parameters within a given layer of the RNN architecture. For the first layer, $\boldsymbol{h}_{i}^{l-1}$ is equivalent to the input vector $\boldsymbol{q}_{i}$, while for subsequent layers, $\boldsymbol{h}_{i}^{l-1}$ denotes the hidden state from the previous layer at the current step. In this manner, a multilayer RNN transmits the information encoded in the hidden state to the next step in the current layer and to the current step of the next layer by implementing Eq.~\ref{eq:rnn_A}. For the sake of brevity, the bias terms have been omitted from these equations.
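A minimal NumPy sketch of the stacked-RNN forward pass of Eq.~\ref{eq:rnn} follows; the biases are omitted as in the equations, and the choice of $\tanh$ as the activation, together with the dimensions and random weights, are illustrative assumptions:

```python
import numpy as np

def deep_rnn(q_seq, Whh, Wqh, Why):
    """Forward pass of an L-layer stacked RNN per Eq. (2), biases omitted.
    q_seq: sequence of input vectors; Whh[l], Wqh[l]: per-layer weights;
    Why: hidden-to-output matrix applied to the top layer."""
    L = len(Whh)
    h = [np.zeros(W.shape[0]) for W in Whh]    # hidden states, one per layer
    outputs = []
    for q in q_seq:
        x = q                                  # h_i^{l-1} for the first layer
        for l in range(L):
            h[l] = np.tanh(Whh[l] @ h[l] + Wqh[l] @ x)
            x = h[l]                           # hidden state feeds the next layer
        outputs.append(Why @ h[-1])            # y_i from the top hidden state
    return outputs

# Illustrative dimensions: 2 layers, hidden size 5, 3 inputs, scalar output
rng = np.random.default_rng(0)
Whh = [rng.standard_normal((5, 5)) * 0.1 for _ in range(2)]
Wqh = [rng.standard_normal((5, 3)) * 0.1, rng.standard_normal((5, 5)) * 0.1]
Why = rng.standard_normal((1, 5))
ys = deep_rnn([rng.standard_normal(3) for _ in range(7)], Whh, Wqh, Why)
```

Note that the same weight matrices are reused at every step of the sequence, which is the sequential invariance mentioned above.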
\begin{figure}
\centering
\includegraphics[width=0.67\textwidth]{rnn_cell.pdf}
\caption{Schematic of the RNN cell shown as a blue box on the left. Within each RNN cell, the arrangement of the weight matrices is shown on the right. At any step $i$ of the sequence, the RNN cell takes input $q_i$ and previous hidden state $h_{i-1}$ and provides updated hidden state $h_i$ and output $y_i$}
\label{fig:rnn_cell}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.63\textwidth]{rnn_architecture.pdf}
\caption{Sequences of input features and output for deep RNN architecture have been illustrated with respect to stations along the airfoil surface}
\label{fig:rnn_architecture}
\end{figure}
Like deep feed-forward neural networks, deep RNNs can lead to more expressive models. However, the depth of an RNN can have multiple interpretations. In general, RNN architectures with multiple RNN layers stacked over each other are considered \emph{deep RNNs}, as shown in Fig.~\ref{fig:rnn_architecture}. Such deep RNN architectures have multiple internal memories (hidden states), one in each RNN layer. RNN architectures with multiple recurrent hidden states, stacked over each other, can model varying levels of dependencies (i.e., from short-term to long-term) in each hidden state~\citep{hermans2013}. These stacked-RNN architectures can still be considered shallow networks with limited expressivity, as all model parameters ($\boldsymbol{W}_{\text{hh}}$, $\boldsymbol{W}_{\text{qh}}$, and $\boldsymbol{W}_{\text{hy}}$) are generally represented by single linear layers. To allow for more complex functional representation, these single linear layers can be replaced by multiple nonlinear layers. \cite{pascanu2014} have shown that introducing depth via multiple nonlinear layers to represent $\boldsymbol{W}_{\text{hh}}$, $\boldsymbol{W}_{\text{qh}}$, and $\boldsymbol{W}_{\text{hy}}$ can lead to better expressivity of the RNN model. For the transition modeling problem addressed in this paper, multiple nonlinear layers are used within each RNN cell to express the complex physical mapping between the input features and the output, as shown in Fig.~\ref{fig:rnn_cell}. Such an architecture resulted in better learning and predictive capability of the RNN model.
\begin{table}[tbp]
\centering
\caption{Input features and output for the RNN model. \label{features}}
\begin{tabular}{>{\centering\arraybackslash}m{2cm} >{\centering\arraybackslash}m{8cm} >{\centering\arraybackslash}p{3.2cm}} \hline
\textbf{Feature/Output} & \textbf{Description} & \textbf{Definition} \\ \hline \hline
$q_1$ & Reynolds number based on edge velocity and momentum thickness & $Re_{\theta}$ \\
$q_2$ & Velocity profile as a function of wall normal coordinate y & $ u_j$, $j=1, 2, \dots, 41$ \\
$q_3$ & First-order derivative of velocity profile & $\left. \frac{du}{dy}\right\vert_j$, $j=1, 2, \dots, 41$ \\
$q_4$ & Second-order derivative of velocity profile & $\left. \frac{d^2u}{dy^2}\right\vert_j$, $j=1, 2, \dots, 41$ \\ \hline
$y_1$ & Slope of N-factor envelope, corresponding to local growth rate of the most amplified disturbance at that location & $dN_\text{env}/ds$ \\
\hline
\end{tabular}
\end{table}
The underlying physics of transition does not require long-term memory, unlike natural language processing (NLP), for which more involved models such as long short-term memory (LSTM) networks and transformers have proved very effective. For the transition problem, keeping track of the last one or two stations has proved to be sufficient. An informal investigation at the start of this research showed no advantage of LSTMs over RNNs, despite their added complexity and higher training cost.
With this perspective, the RNN model being proposed in this paper maps the sequential dependency of mean boundary-layer flow properties as input features to instability growth rates corresponding to the N-factor envelope as output features. Such input and output features, summarized in Table~\ref{features}, have been taken at a sequence of stations along the airfoil surface as shown in Fig.~\ref{fig:rnn_architecture}. Mean boundary-layer flow properties have been introduced in terms of the Reynolds number ($Re_{\theta}$) based on the local momentum thickness of the boundary layer, the velocity boundary-layer profile ($u$), and its derivatives ($du/dy$ and $d^2u/dy^2$). \cite{zafar2020} proposed a convolutional neural network model that encodes the information from boundary-layer profiles to a vector of latent features while accounting for the spatial patterns in the input profiles~\citep{wu2018seeing,carlos2018}. Such a treatment of boundary-layer profiles allows the trained neural network models to generalize to all practical flows with different instability mechanisms~\citep{paredes2020_journal}.
The RNN model presented in this paper builds upon this idea and further combines CNN with RNN to account for nonlocal physics in both streamwise and wall-normal directions. This is shown in Fig.~\ref{fig:rnn_model}.
The hyperparameters of the proposed neural network have been empirically tuned to yield adequate complexity for learning all the required information, without causing an overfitting of the training data. After such hyperparameter tuning, early stopping of the training process was not required. With the boundary-layer profiles defined by using 41 equidistant points in the wall-normal direction, the CNN architecture contains 3 convolutional layers with 6, 8, and 4 channels, respectively, in those layers. A kernel size of 3$\times$1 has been used in each convolutional layer. The rectified linear unit (ReLU) is used as the activation function.
The CNN encodes the spatial information in the boundary-layer profiles along each station to a vector of latent features, $\Psi$. The results are not significantly sensitive to the number of latent features in the vector $\Psi$. However, following a sensitivity study on the size of the latent vector, the number of elements in $\Psi$ has been set to 8. The latent features $\Psi$ extracted from the boundary-layer profiles are then concatenated with $Re_{\theta}$ at each station, which provides the sequential input features for the RNN architecture to predict the local growth rate of the most amplified instability mode, or equivalently, the slope ($dN/ds$) of the N-factor envelope at a sequence of stations along the airfoil surface. Each RNN cell (Fig.~\ref{fig:rnn_cell}) consists of nonlinear mappings from the input to the hidden state ($\boldsymbol{W}_{\text{qh}}$) and from the hidden state to the output ($\boldsymbol{W}_{\text{hy}}$), with each mapping involving two hidden layers with 72 neurons each. The rectified linear activation function (ReLU) is used to introduce nonlinearity in these layers. The hidden state is represented by a vector of length 9, and a linear layer is used for the mapping $\boldsymbol{W}_{\text{hh}}$ between the hidden states. The RNN architecture consists of three RNN layers stacked over each other, with three corresponding internal memories (hidden states), each representing a varying level of dependency (short-term to long-term) between the output at the current station and the input at the current as well as the preceding stations.
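As a rough illustration, the CNN encoder and the assembly of the RNN input at one station can be sketched in NumPy as follows. Only the channel counts (6, 8, 4), the kernel size (3$\times$1), and the latent dimension (8) come from the text above; the zero padding, the final flattening layer, and the random weights are assumptions made purely for illustration:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def conv1d(x, kernels):
    """Zero-padded 1-D convolution. x: (C_in, N); kernels: (C_out, C_in, 3)."""
    C_out, C_in, k = kernels.shape
    xp = np.pad(x, ((0, 0), (1, 1)))            # pad so the length N is kept
    out = np.zeros((C_out, x.shape[1]))
    for n in range(x.shape[1]):
        out[:, n] = np.tensordot(kernels, xp[:, n:n + k], axes=([1, 2], [0, 1]))
    return out

def encode_profiles(profiles, conv_stack, W_lat):
    """Encode (3, 41) boundary-layer profiles (u, du/dy, d2u/dy2) into a
    latent vector Psi of length 8 via three conv layers (6, 8, 4 channels)."""
    x = profiles
    for kernels in conv_stack:
        x = relu(conv1d(x, kernels))
    return W_lat @ x.ravel()                    # flatten -> 8 latent features

rng = np.random.default_rng(1)
conv_stack = [rng.standard_normal(s) * 0.1
              for s in [(6, 3, 3), (8, 6, 3), (4, 8, 3)]]
W_lat = rng.standard_normal((8, 4 * 41)) * 0.1  # assumed flattening layer
psi = encode_profiles(rng.standard_normal((3, 41)), conv_stack, W_lat)
q = np.concatenate(([1.0], psi))                # prepend Re_theta -> RNN input
```

The length-9 vector $q$ is then what each RNN cell receives at a given station, per Fig.~\ref{fig:rnn_model}.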
\begin{figure}
\centering
\includegraphics[width=0.99\textwidth]{rnn_model.pdf}
\caption{Proposed neural network for transition modeling. Convolutional neural network encodes the information from boundary-layer profiles ($u, u_y, u_{yy}$) into latent features ($\Psi$) at each station. RNN processes the input features ($Re_{\theta}$ and $\Psi$) in sequential manner to predict the growth rate ($dN/ds$) of the N-factor envelope}
\label{fig:rnn_model}
\end{figure}
As the CNN architecture is intrinsically dependent on the shape of the training data, the architecture can be tuned to different shapes of training data, and the proposed model is expected to maintain its efficiency and accuracy. Future work will explore the vector-cloud neural network~\citep{zhou2021frame}, which can deal with any number of arbitrarily arranged grid points across the boundary-layer profiles. Since empirical tuning of the hyperparameters provided good results, we did not undertake an extensive optimization of the whole model. In a related unpublished work, more extensive hyperparameter optimization resulted in minor adjustments of the CNN architecture with comparable results.
\section{Database of Linear Amplification Characteristics for Airfoil Boundary Layers \label{database}}
A large database of the linear stability characteristics of two-dimensional incompressible boundary-layer flows over a broad set of airfoils was generated for the training and evaluation of the proposed model.
These boundary-layer flows can support the amplification of Tollmien-Schlichting (TS) waves and the most amplified TS waves at any location along the airfoil correspond to two-dimensional disturbances (i.e., spanwise wavenumber $\beta = 0$). This database documents the amplification characteristics of unstable TS waves under the quasiparallel, no-curvature approximation. A value of $N_\text{tr}=9$ has been empirically found to correlate with the onset of laminar-turbulent transition in benign freestream disturbance environments characteristic of external flows at flight altitudes. The airfoil contours were obtained from public domain sources, such as the UIUC Airfoil Coordinates Database \citep{uiuc}.
Linear stability characteristics for laminar boundary-layer flows were computed using a combination of potential flow solutions~\citep{drela1989} and a boundary-layer solver~\citep{wie1992}. The computational codes used are industry standard and have been used in a number of research works over the years. Inviscid computations using a panel method have been performed with 721 points around the airfoil. For the boundary-layer solver, 300--400 grid points have been used in the wall-normal direction with a second-order finite-difference scheme. We note that the focus of this database is on transition due to TS waves in attached boundary layers, and therefore, flows involving a separation bubble (which cannot be computed with the viscous-inviscid-interactive procedure adopted herein) are not considered in the present work.
Airfoil contours included in the database belong to different categories and were selected randomly to cover a range of practical applications. These categories include different series of NACA airfoils, natural laminar flow airfoils, low Reynolds number airfoils, rotorcraft airfoils, and airfoil contours designed for transonic flows, among others. Selected airfoils from three of these categories have been plotted in Fig.~\ref{fig:airfoils}, which illustrates the markedly different airfoil contours included in the database. Data corresponding to both upper and lower surfaces of the airfoils have been considered, except for the symmetrical airfoil sections, for which the lower-surface data have been excluded to avoid duplication, and hence, a bias in the data sampling. For reference, all 53 airfoils from the database are listed as well as plotted in Appendix~\ref{airfoils_list}. The range of chord Reynolds numbers ($Re_c$) included in the database extends over nearly four orders of magnitude ($[5\times 10^4, 2\times 10^8]$), and a broad range of angles of attack (AOA) $[-6^\circ, 8^\circ]$ has been considered for each of these airfoils. However, some of the boundary layers within the above range of Reynolds numbers and angles of attack are either stable or only weakly unstable (i.e., corresponding to a rather small peak N-factor, $N < 3$). Those flows were excluded from the database, still yielding a total of 31,247 flow cases corresponding to the 53 airfoils in this database. Although the computational cost of generating such a database is only a few hours, the associated human hours are significantly higher, since the process requires manual interventions and expert judgements to avoid spurious modes in linear stability computations. Furthermore, pre-processing of the geometrical data to ensure smooth surface curvature also added significant human cost to the database generation.
\begin{figure}
\centering
\includegraphics[width=0.97\textwidth]{airfoils_summary.pdf}
\caption{Airfoil sections for three airfoil families in the database. A complete list of airfoils along with their geometries is given in Appendix~\ref{airfoils_list}}
\label{fig:airfoils}
\end{figure}
The database documents the mean flow features and the relevant linear stability characteristics in a sequential manner along 121 streamwise stations, starting from a station close to the onset of instability
and extending up to either the point where the N-factor envelope reaches $N_\text{env}=25$ or to the end of the chordwise domain (which can be upstream of the trailing edge if the boundary-layer solution terminates due to an incipient flow separation).
We keep the sequence length fixed at 121 for all airfoils and flow conditions, which makes the data handling during the training and testing of the RNN model more efficient. The location of each station is defined in terms of the arc length along the airfoil surface ($s$).
Besides the parameters given in Table~\ref{features}, sequential information for several other relevant parameters has also been included in the database, such as the chordwise location of each station ($x/c$),
local edge velocity ($U_e$), local boundary-layer edge density ($\rho_e$) and viscosity ($\mu_e$), boundary-layer momentum thickness ($\theta$), etc.
The present work is aimed at developing an RNN model, which is trained over a subset of the complete dataset, and has the ability to predict the transition location for any boundary-layer flow from the complete dataset with reasonable accuracy. Computational constraints limit the size of the training dataset for RNNs, since they are more expensive to train as compared to simple feedforward neural networks. Reducing the size of the training data via subsampling of the overall database would require the sampling process to avoid any bias toward any specific subset of the database. Such bias can have a dominant effect on the efficacy of the loss function used for training, resulting in a potential overfitting across certain parts of the training data, and worse predictive performance for other subsets of the database. Hence, a large database of this type requires adequate sampling procedures for the selection of the training data so that the resulting model can provide a balanced representation of the entire database.
\section{Results \label{results}}
The proposed RNN model predicts the sequence of growth rates of the most amplified disturbances, i.e., the slope values of the N-factor envelope as a function of distance along the airfoil contour. The N-factor envelope can then be determined as the cumulative integral over this sequence, with the lower limit of integration corresponding to the airfoil location where the boundary-layer flow first becomes unstable (or, equivalently, the station across which the slope first changes in sign from a negative value to a positive one). The transition onset location can then be estimated as the location where the envelope reaches the critical N-factor determined via correlation with a relevant set of measurements. The sequential data are defined at a fixed number of stations for each flow, but the physical domain length can vary from case to case due to the potential onset of flow separation in a laminar boundary-layer computation.
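The post-processing steps just described can be sketched as follows (a minimal NumPy implementation; the use of trapezoidal integration and linear interpolation to the critical N-factor are illustrative assumptions, since the exact numerical details are not specified here):

```python
import numpy as np

def transition_location(s, dN_ds, N_cr=9.0):
    """Integrate predicted slopes dN/ds along the surface arc length s to
    obtain the N-factor envelope, then locate the first crossing of N_cr.
    Integration starts at the first station where the slope turns positive."""
    s = np.asarray(s, dtype=float)
    dN = np.asarray(dN_ds, dtype=float)
    i0 = np.argmax(dN > 0.0)                    # first unstable station
    N = np.zeros_like(dN)
    for i in range(i0 + 1, len(s)):             # cumulative trapezoidal rule
        N[i] = N[i - 1] + 0.5 * (dN[i] + dN[i - 1]) * (s[i] - s[i - 1])
    above = np.nonzero(N >= N_cr)[0]
    if above.size == 0:
        return N, None                          # no transition predicted
    i = above[0]                                # interpolate linearly to N_cr
    s_tr = s[i - 1] + (N_cr - N[i - 1]) / (N[i] - N[i - 1]) * (s[i] - s[i - 1])
    return N, s_tr
```

With the slopes predicted by the RNN at the 60 stations used herein, this reduces the transition prediction to a single cumulative integral and a root-finding step.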
The loss function used for the training process includes a weighting function corresponding to the cell size ($d\text{s}$) in the vicinity of each station. Specifically, the loss function used for training the neural network is defined in terms of a weighted sum over the square of the local error:
\begin{equation}
\mathcal{L} = \sum_{j=1}^{m}\left(l_j \right) \quad \quad \text{where} \quad \quad l_j= \sum_{i=1}^{n}\left((Y_{i}-\hat{Y}_{i})^{2}\cdot ds_i\right)
\end{equation}
where $m$ denotes the number of sequences in the dataset, $n$ denotes the number of stations in a sequence, and $Y$ and $\hat{Y}$ represent the true and predicted values, respectively, corresponding to the output quantity. The weighting function serves to reduce the bias due to a nonuniform streamwise grid used in the boundary-layer calculation.
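Assuming the targets, predictions, and cell sizes are arranged as $(m \times n)$ arrays, the weighted loss above amounts to a few lines:

```python
import numpy as np

def weighted_loss(Y_true, Y_pred, ds):
    """Weighted loss: squared error at each station weighted by the local
    cell size ds, summed over the n stations and the m sequences.
    All arguments are arrays of shape (m, n)."""
    return float(np.sum((Y_true - Y_pred) ** 2 * ds))
```

The cell size $ds$ plays the role of a quadrature weight, so sequences discretized on finer streamwise grids do not dominate the loss merely by contributing more stations.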
The primary performance indicator of the proposed model is the prediction of the chordwise location of the laminar-turbulent transition. However, besides its obvious dependence on the stability characteristics of the airfoil boundary layer, the transition location also depends on the disturbance environment via the correlating N-factor value, $N_\text{tr}$. The measured transition locations in a broad range of flows typically correlate with a finite band of N-factor values in the vicinity of $N_\text{tr} = 9$ under a benign disturbance environment, such as those encountered by external flows at typical flight altitudes. For this reason, the predictive performance has been assessed considering the flow cases for which the N-factor envelope reaches values of up to $N_\text{env} = 13$. To help provide a meaningful assessment of the model accuracy across a broad range of flows and correlating N-factors, three different error metrics have been defined separately from the loss function, as indicated below.
\begin{equation}
E_\text{env} = 100 \times \frac{1}{\tilde{m}} \sum_{j=1}^{\tilde{m}}\left( \frac{||N_\text{env} - \hat{N}_\text{env}||_{ds}}{||N_\text{env}||_{ds}} \right)_j
\label{eqn:env_error}
\end{equation}
\begin{equation}
E_\text{tr} = 100 \times \frac{1}{\tilde{m}} \sum_{j=1}^{\tilde{m}}\left\vert \frac{x/c_\text{tr} - \tilde{x}/c_\text{tr}}{x/c_\text{tr}} \right\vert_j
\label{eqn:rel_tr_error}
\end{equation}
\begin{equation}
E_{x/c} = 100 \times \frac{1}{\tilde{m}} \sum_{j=1}^{\tilde{m}}\left\vert x/c_\text{tr} - \tilde{x}/c_\text{tr} \right\vert_j
\label{eqn:abs_tr_error}
\end{equation}
where $\tilde{m}$ denotes the number of sequences in the dataset for which the N-factor envelope reaches values of up to $N_\text{env} = 13$. The first error metric ($E_\text{env}$) is based on the $L2$ norm to evaluate the accuracy of the predicted N-factor envelope ($N_\text{env}$), determined by integrating the predicted slope values $dN/ds$. To emphasize a finite band of N-factor values in the vicinity of $N_\text{tr} = 9$, only the range of $5<N_\text{env}<13$ has been considered for each flow case.
The second error metric ($E_\text{tr}$) corresponds to the relative discrepancy between the true and predicted chordwise locations of transition onset for the case of $N_\text{tr}=9$.
The third error metric ($E_{x/c}$) relates to the absolute error in the predicted transition location, scaled by the airfoil chord length.
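The three error metrics can be computed as sketched below; the per-case data layout and the interpretation of $\|\cdot\|_{ds}$ as a cell-size-weighted $L2$ norm are assumptions made for illustration:

```python
import numpy as np

def error_metrics(cases, N_tr=9.0):
    """Compute E_env, E_tr, and E_{x/c} averaged over a list of cases.
    Each case is a dict holding the true and predicted N-factor envelopes on a
    common grid, the cell sizes ds, and the true and predicted transition x/c."""
    e_env, e_tr, e_xc = [], [], []
    for c in cases:
        N, N_hat, ds = c["N_env"], c["N_env_pred"], c["ds"]
        mask = (N > 5.0) & (N < 13.0)          # band around N_tr = 9
        norm = lambda v: np.sqrt(np.sum(v[mask] ** 2 * ds[mask]))
        e_env.append(100.0 * norm(N - N_hat) / norm(N))
        x, x_hat = c["xc_tr"], c["xc_tr_pred"]
        e_tr.append(100.0 * abs((x - x_hat) / x))
        e_xc.append(100.0 * abs(x - x_hat))
    return np.mean(e_env), np.mean(e_tr), np.mean(e_xc)
```

Keeping the three metrics separate from the training loss allows the envelope accuracy and the transition-location accuracy to be assessed independently.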
\subsection{Demonstration of predictive performance}
Selection of training data for the development of a general purpose model for the airfoil universe requires a balance between multiple requirements that may conflict with each other. It is clearly desirable for the size of the training data to be moderate enough to minimize the training cost. However, the training data must also be large enough in scope to represent the broad application space and must be designed to avoid an unfair bias toward specific subregions from the parameter space of latent features. Translating these requirements into a practical procedure is not a straightforward task. Given the availability of the large database of stability characteristics as described in the previous section, we have evaluated several different strategies for the selection of an appropriate subset of that database for the training process.
We begin by using a smaller portion of the available database for training purposes. This baseline case is representative of less ambitious efforts at database generation, as well as being better suited for the case involving a broader application space that includes additional flow parameters such as, for instance, nonzero Mach numbers, nonzero surface transpiration, and surface heating/cooling, etc. The baseline training set consists of five out of the total 53 airfoils, with each of these five airfoils representing a different subgroup of airfoils from Table~\ref{tab:list}, namely, the
$$ \text{NACA 0012, \ ONERA M6, \ NACA 2412, \ NACA 63-415, \ NLF(1)-0416} $$ airfoils. These airfoils correspond to airfoil indices of 2, 44, 6, 15, and 25, respectively. Here, the first two airfoils have symmetric contours whereas the latter three correspond to asymmetrical airfoil sections. Table~\ref{aoa_re} lists the various flow conditions at which boundary-layer solutions are available for each airfoil in the database. To assess the RNN model for interpolation and extrapolation with respect to the angle of attack and chord Reynolds number, respectively, flow conditions marked in red have been included in the testing dataset, while the remaining flow cases constitute the training dataset. Such an arrangement of the total available cases from these five airfoils results in a 60\%--40\% split between the training and testing data.
\begin{table}[tbp]
\centering
\caption{Flow conditions for all the cases in the airfoil database. For evaluation of the RNN model, flow conditions used for model testing are marked in red color whereas the flow conditions used for training are indicated in black color. \label{aoa_re}}
\begin{tabular}{|>{\centering\arraybackslash}m{0.34\textwidth} | >{\centering\arraybackslash}p{0.5\textwidth}|} \hline
\textbf{Angles of Attack} (deg) & \textbf{Reynolds Numbers} \\ \hline
{\small \textcolor{red}{$-6^\circ$}, $-5^\circ$, $-4.5^\circ$, $-4^\circ$, $-3.5^\circ$, \textcolor{red}{$-3^\circ$},} & {\small \textcolor{red}{$3.5\times10^4$}, $5.0\times10^4$, $7.0\times10^4$, $1.0\times10^5$, $1.4\times10^5$,} \\
{\small $-2.5^\circ$, $-2^\circ$, $-1.5^\circ$, $-1^\circ$, $-0.5^\circ$, \textcolor{red}{$0^\circ$},} & {\small \textcolor{red}{$2.8\times10^5$}, $4.0\times10^5$, $5.6\times10^5$, $8.0\times10^5$, $1.1\times10^6$,} \\
{\small $0.5^\circ$, $1^\circ$, $1.5^\circ$, $2^\circ$, $2.5^\circ$, \textcolor{red}{$3^\circ$}, $3.5^\circ$, $4^\circ$,} & {\small $1.6\times10^6$, $2.3\times10^6$, $3.2\times10^6$, \textcolor{red}{$4.5\times10^6$}, $6.4\times10^6$,} \\
{\small $5^\circ$, $6^\circ$, $7^\circ$, \textcolor{red}{$8^\circ$}} & {\small $9.0\times10^6$, $1.3\times10^7$, $1.8\times10^7$, \textcolor{red}{$3.6\times10^7$}, $5.1\times10^7$,} \\
{\small } & {\small $7.2\times10^7$, $1.0\times10^8$, \textcolor{red}{$1.4\times10^8$}} \\ \hline
\end{tabular}
\end{table}
The sequential information corresponding to the evolution of mean boundary-layer profiles along the airfoil surface has been documented in the database, with a uniform sequence length of 121 stations for all flow cases. An initial assessment was conducted to ascertain the effect of different sequence lengths. The results of this assessment are shown in Fig.~\ref{fig:seq_len}, wherein the error metrics defined in Eqs.~\ref{eqn:env_error} and \ref{eqn:rel_tr_error} have been plotted for different sequence lengths. Significant improvements can be observed by reducing the sequence length from 121 to 60; the reason for this is not entirely clear and could be one of the areas for future investigations of such models. In general, however, this trend may be related to the fact that additional information can sometimes dilute the relevant signal and, consequently, lead to worse results.
Further shortening of the sequence length yields relatively little benefit in terms of model accuracy for either training or test data. In fact, the accuracy of predicting the transition location worsens when the sequence length is reduced beyond 60. This observation is likely to be related to a poorer resolution of the shape of the N-factor envelope when fewer stations are used across the same physical domain length along the airfoil surface. In view of the similar trends for both error metrics, a sequence length of 60 stations was deemed an optimal choice for all of the results to be presented in this paper. Figure~\ref{fig:seq_len} shows that the predictive performance for the testing dataset is comparable to that for the training dataset, demonstrating the interpolating and extrapolating capability of the proposed RNN model with respect to both AOA and $\text{Re}_c$.
\begin{figure}
\centering
\includegraphics[width=0.77\textwidth]{seq_length.pdf}
\caption{Comparison of prediction error percentage for training and testing datasets with different sequence lengths. Training and testing datasets have been defined based on flow conditions as given in Table~\ref{aoa_re}}
\label{fig:seq_len}
\end{figure}
Next, we assess the effect of the size of the RNN model on the prediction error. Figure~\ref{fig:parameters} displays the variation in error percentage as a function of the number of learnable parameters in the RNN model. While the error metric $E_\text{env}$ decreases as the number of learnable parameters is increased up to 5500, the error remains nearly constant with a further increase in the number of parameters. One may deduce from these results that an RNN model with 5500 learnable parameters provides near-optimal learning capability without causing overfitting. Consequently, this model size will be maintained for all the results presented for the current dataset. We note that the training dataset for this baseline case is of much smaller size in comparison to the complete airfoil database and that the use of a larger training dataset will most likely require a larger number of learnable parameters to enhance the learning capacity of the RNN model. Thus, the selection of model size will be discussed again when we work with somewhat larger training datasets in the following subsections.
\begin{figure}
\centering
\includegraphics[width=0.42\textwidth]{parameters.pdf}
\caption{Training and testing errors for a range of sizes of the RNN model indicated by the number of learnable parameters. The number of layers is kept the same while the parameters are varied proportionately in all three mappings, $\boldsymbol{W}_{\text{hh}}$, $\boldsymbol{W}_{\text{qh}}$, and $\boldsymbol{W}_{\text{hy}}$}
\label{fig:parameters}
\end{figure}
The architecture of the proposed neural network model has a direct correlation with the underlying physics of flow transition. Previously proposed transition models based on fully connected neural networks~\citep{fuller1997, crouch2002} did not distinguish between the evolution of the flow in the wall-normal and streamwise directions. At any station along the airfoil surface, the propagation of boundary-layer information in the wall-normal direction is instantaneous (analogous to the elliptic behavior of the diffusion equation). Hence, \cite{zafar2020} used a CNN to encode the information from the boundary-layer profiles in the wall-normal direction into a vector of latent features. Such treatment of the boundary-layer profiles provides a stronger correlation with the underlying physics of the flow and also allows the application of the CNN-based transition model to various instability mechanisms~\citep{paredes2020_journal}. These characteristics were clearly lacking in previously proposed neural network based transition models.
The current work uses both a CNN and an RNN in tandem, where the RNN encapsulates the underlying physics of the streamwise evolution of the flow instability, while the CNN processes the boundary-layer profiles in the wall-normal direction. Along the streamwise direction, the elliptic behavior has already been accounted for in the stability theory; however, the hyperbolic nature of the instability amplification from upstream to downstream requires sequence-to-sequence modeling, for which the RNN has been used. To assess the benefit of sequence transduction in the RNN model, its predictive performance has been compared with that of a previously proposed model which does not account for the sequence information among the inputs at different stations~\citep{zafar2020}.
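The hyperbolic, upstream-to-downstream character that motivates the RNN can be seen directly in a vanilla recurrent cell: the output at a given station depends only on the inputs at that station and upstream of it. A minimal NumPy sketch (not the trained model itself; the weights and sizes here are arbitrary) illustrates this:

```python
import numpy as np

def rnn_forward(q_seq, W_qh, W_hh, W_hy):
    """Sequence-to-sequence forward pass of a vanilla RNN cell. The
    hidden state h carries information from upstream stations to
    downstream ones, so output i depends on inputs 0..i only."""
    h = np.zeros(W_hh.shape[0])
    outputs = []
    for q in q_seq:                       # march station by station
        h = np.tanh(q @ W_qh + h @ W_hh)  # update hidden state
        outputs.append(h @ W_hy)          # per-station output
    return np.stack(outputs)

rng = np.random.default_rng(0)
T, n_q, n_h = 10, 4, 8
W_qh = 0.3 * rng.standard_normal((n_q, n_h))
W_hh = 0.3 * rng.standard_normal((n_h, n_h))
W_hy = rng.standard_normal((n_h, 1))

q = rng.standard_normal((T, n_q))
y_base = rnn_forward(q, W_qh, W_hh, W_hy)

q_pert = q.copy()
q_pert[5] += 1.0                          # perturb the input at station 5
y_pert = rnn_forward(q_pert, W_qh, W_hh, W_hy)
# outputs upstream of the perturbation are unchanged; downstream ones differ
```

A fully connected network applied station by station has no such hidden state, which is the structural distinction assessed in the comparison below.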
For a fair comparison, an almost equal number of learnable parameters was used for both networks. The comparison is presented in Fig.~\ref{fig:rnn_vs_fc}, wherein we include the error percentages for both the training and testing datasets. The testing cases have been further categorised in terms of interpolation and extrapolation with respect to the flow conditions. The results corresponding to each dataset show a clearly superior predictive performance for the RNN model vis-\`a-vis the fully connected neural network.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{rnn_vs_fc.pdf}
\caption{Comparison of results for the recurrent neural network and fully connected network. Testing cases have been sub-categorised as: Interpolation with respect to both AOA and Re ($\mathcal{I}_\text{AOA,Re}$), Extrapolation with respect to AOA ($\mathcal{E}_\text{AOA}$), Extrapolation with respect to Re ($\mathcal{E}_\text{Re}$), Extrapolation with respect to both AOA and Re ($\mathcal{E}_\text{AOA,Re}$)}
\label{fig:rnn_vs_fc}
\end{figure}
We also note that, in comparison to the transition model based on a fully connected neural network, the RNN model has a moderately higher training cost associated with it. For the given training data (comprising the five-airfoil dataset with a sequence length of 60) and a given model size ($\sim$5500 learnable parameters), the RNN model took 29 GPU hours to train, as compared to 19 GPU hours for the fully connected neural network. A single NVIDIA V100 GPU was used for training purposes.
In comparison to direct stability computations, the RNN model can estimate the transition location three to four orders of magnitude faster. Actual times for direct computations of linear stability can vary depending on various factors, including the specific flow and the level of resolution (in terms of the number of stations in a sequence and the number of frequencies used to determine the N-factor envelope). In that regard, the above figure is believed to provide a reasonable, if somewhat conservative, estimate of the speed-up due to the RNN.
\subsection{Evaluation of predictive performance for complete database}
We now perform a comparative assessment of the accuracy of the RNN models based on different selections of training datasets. These training datasets have been summarised in Table~\ref{tab:cases}.
The rationale behind the selection of each training dataset from Table~\ref{tab:cases} will be outlined in the course of the discussion of the results, especially as the results for Case I provide the baseline for the selection of training data for the subsequent cases. Since the analysis is focused on the performance of the RNN model in predicting the transition location over an arbitrary airfoil, no distinction has been made between the airfoils and flow conditions used for training and testing, and all the flow cases included in the training dataset are considered alongside the cases that were not used during training. Because the sizes of the training sets from Table~\ref{tab:cases} are comparable to each other and always less than 24 percent of the total available data, the error metric based on the entire database was deemed to be a meaningful measure of the model's predictive accuracy.
\begin{table}[tbp]
\centering
\caption{Summary of training dataset cases. Flow cases corresponding to the mentioned airfoils are included in the training dataset. \label{tab:cases}}
\begin{tabular}{|>{\centering\arraybackslash}m{1cm} | p{2.1cm} | p{7.5cm} | >{\centering\arraybackslash}m{1.7cm}|} \hline
\textbf{Index} & \textbf{\quad \ Label} & \textbf{\qquad \qquad \qquad Training Dataset} & \textbf{Number of flow cases} \\ \hline
I & Five airfoils & {\small NACA-0012, NACA-2412, NACA-63-415, NLF(1)-0416, ONERA-M6} & 2624 \\ \hline
II & Random augmentation & Case I + 100 random flow cases from each of the other airfoils & 7026 \\ \hline
III & Augmented airfoils set & Case I + Five more airfoils with largest mean error ({\small LRN(1)-1007, NACA-6712, NLF(1)-1015, NLF(2)-0415, CLARK-Y}) & 4455 \\ \hline
IV & Error-based augmentation & Case I + Specific flow cases of other airfoils with $E_{\text{env}} \% > 3$ in Case I & 5024 \\ \hline
V-A & Random selection (\%) & Randomly selected 20\% flow cases of each airfoil & 6233 \\ \hline
V-B & Random selection (\#) & Randomly selected 100 flow cases of each airfoil & 5300 \\ \hline
\end{tabular}
\end{table}
Case I (denoted as ``Five airfoils'') from Table~\ref{tab:cases} involves a training dataset that is comprised of the same five airfoils that were used in the earlier assessments of the RNN model size, the comparison with the fully connected network, etc. Results for the RNN model trained using the Case I dataset are shown in Figs.~\ref{fig:error_db_1} and \ref{fig:aoa_re_db_1}. Figure~\ref{fig:error_db_1} presents the mean error percentages for the predicted N-factor envelope and transition location corresponding to all airfoils from the overall database. The figures have been shaded to distinguish between the different groups of airfoils belonging to the airfoil families included within the overall database. Airfoil names corresponding to the indices from Fig.~\ref{fig:error_db_1} are given in Table~\ref{tab:list}. In general, the mean error percentage for most of the airfoils is below about 3\%, which demonstrates the general capability of the RNN model based on the Case I training data. Even though the model has been trained with a significantly smaller subsample of airfoils from the overall database, it is still able to predict the $N_\text{env}$ and transition location for the entire set of airfoils, including different categories, with reasonable accuracy. Laminar to turbulent transition due to TS amplification within the attached flow region is achieved at varying numbers of flow conditions across the different airfoils, and the markers for each airfoil in the figure have been colored on the basis of the dataset size of that airfoil. This feature will be used to gain additional insights during the subsequent discussion as we describe the results for the remaining cases from Table~\ref{tab:cases}.
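For concreteness, the error measures quoted throughout this section can be computed along the following lines; the exact normalisations are defined earlier in the paper, and the ones assumed in this sketch (envelope error normalised by the peak true N-factor, transition-location errors relative to the true $x_\text{tr}/c$ and to the chord, respectively) are our own reading, not the paper's definitions:

```python
import numpy as np

def error_metrics(n_true, n_pred, xtr_true, xtr_pred):
    """Sketch of the three error measures discussed in the text.
    Assumed normalisations (an illustration, not the paper's definitions):
      E_env : mean absolute envelope error, relative to the peak true N-factor
      E_tr  : relative transition-location error, w.r.t. the true x_tr/c
      E_x/c : absolute transition-location error, in percent chord
    """
    e_env = 100.0 * np.mean(np.abs(n_pred - n_true)) / np.max(n_true)
    e_tr = 100.0 * abs(xtr_pred - xtr_true) / xtr_true
    e_xc = 100.0 * abs(xtr_pred - xtr_true)
    return e_env, e_tr, e_xc

# toy envelope with a 0.4 mis-prediction at one station and a 1%-chord
# offset in the predicted transition location
e_env, e_tr, e_xc = error_metrics(
    np.array([0.0, 4.0, 8.0]), np.array([0.0, 4.4, 8.0]), 0.5, 0.51)
```

Note that $E_\text{tr}$ inflates small absolute offsets when transition occurs near the leading edge, which is why the chord-normalised $E_{x/c}$ is also reported.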
\begin{figure}
\centering
\subfloat[Mean error ($E_\text{env}$) percentage for N-factor envelope ]{\includegraphics[width=0.9\textwidth]{env_error_db_1.pdf}} \\
\subfloat[Mean relative error ($E_\text{tr}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{tr_error_db_1.pdf}} \\
\subfloat[Mean absolute error ($E_{x/c}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{abs_tr_error_db_1.pdf}}
\caption{Mean error percentage for each airfoil in the database, corresponding to the training dataset of Case I (five airfoils) given in Table~\ref{tab:cases}. Airfoils belonging to the training dataset have been encircled in red. Marker colors represent the dataset size (number of flow cases) of each airfoil in the database}
\label{fig:error_db_1}
\end{figure}
Predictive performance as a function of the angle of attack and chord Reynolds number is shown in Fig.~\ref{fig:aoa_re_db_1}, which indicates the distribution of error percentage across the overall database via a color map of the kernel density estimate. In addition, 1\% of randomly sampled data points from the overall database have also been included as green dots within the figure. No bias in predictive errors toward specific flow conditions is observed within the figure, indicating that the model is able to yield comparable accuracy across the entire range of flow conditions. The transition locations for most of the flow cases are predicted with a relative error percentage of $E_\text{tr} < 2\%$, as shown by the higher-density region with a darker color in the color map from Fig.~\ref{fig:aoa_re_db_1}.
\begin{figure}
\centering
\subfloat[Relative error ($E_\text{tr}$) percentage of flow cases at different angles of attack]{\includegraphics[width=0.77\textwidth]{aoa_tr_db_1.png}} \\
\subfloat[Relative error ($E_\text{tr}$) percentage of flow cases at different Reynolds number]{\includegraphics[width=0.77\textwidth]{re_tr_db_1.png}}
\caption{Relative error ($E_\text{tr}$) percentage for all flow cases, corresponding to the training dataset of Case I (five airfoils) given in Table~\ref{tab:cases}. Green markers (filled circles) show only 1\% of the randomly sampled flow cases. The contour shows the kernel density estimated from all the flow cases; darker regions indicate higher probability density. The horizontal lines appearing in the contour plots, such as that near an error of 0.2\%, are due to a technical reason (bins have been defined in linear scale while the vertical axis of the plot is in log scale) and do not depict any real discontinuity}
\label{fig:aoa_re_db_1}
\end{figure}
Results for Case I, shown in Fig.~\ref{fig:error_db_1}, indicate a few outlying airfoils corresponding to a high error percentage in the predictions of the RNN model. In particular, the average error in the prediction of the N-factor envelope for the LRN(1)-1007 airfoil is significantly higher ($E_\text{env}>30\%$) than that for most of the other airfoils. The LRN(1)-1007, designed for high lift and low drag at $\text{Re} = [5\times 10^4, 1.5\times 10^5]$, has a peculiar airfoil contour and aerodynamic behavior, which is markedly different from the other airfoils in the training dataset. This may explain why the predictive error percentage for the LRN(1)-1007 airfoil is higher by almost an order of magnitude. Similarly, the NACA 6712 airfoil, with a highly aft-cambered airfoil section, also has a significantly higher error percentage ($E_\text{env}>20\%$). In comparison, the predictive performance for the other NACA airfoils is reasonably good, with the average absolute error in predicting the transition location below 1\% for most of those airfoils.
Augmenting the training dataset from Case I with additional data provides the most obvious way of improving the predictive performance of the above RNN model. Several different strategies for data augmentation were evaluated in the course of this work, and they are denoted as Cases II, III, and IV in Table~\ref{tab:cases}. For Case II, 100 randomly selected flow cases from every other airfoil have been added to the training dataset from Case I.
Even though this data augmentation causes the size of the training dataset to increase almost threefold with respect to that in Case I, the inclusion of flow cases for every airfoil within the training dataset leads to significantly improved predictive performance of the RNN model. The results for Case II are shown in Fig.~\ref{fig:error_db_2}. The overall prediction error percentages have decreased significantly in comparison with Case I, and the maximum absolute error in predicting the transition location ($E_{x/c}$) for any airfoil has been reduced from $6.5\%$ in Case I to approximately 1\% in Case II.
\begin{figure}
\centering
\subfloat[Mean error ($E_\text{env}$) percentage for N-factor envelope ]{\includegraphics[width=0.9\textwidth]{env_error_db_2.pdf}} \\
\subfloat[Mean relative error ($E_\text{tr}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{tr_error_db_2.pdf}} \\
\subfloat[Mean absolute error ($E_{x/c}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{abs_tr_error_db_2.pdf}}
\caption{Mean error percentage for each airfoil in the database, corresponding to the training dataset of Case II (random augmentation) given in Table~\ref{tab:cases}. Airfoils belonging to the training dataset have been encircled in red. Marker colors represent the dataset size (number of flow cases) of each airfoil in the database}
\label{fig:error_db_2}
\end{figure}
For Case III, the training dataset has been augmented by including an additional set of airfoils for which the average error ($E_\text{env}$) is greater than 3\%. Five such airfoils, mentioned in Table~\ref{tab:cases}, along with the original five airfoils from Case I, constitute the training dataset for Case III. Results for this training case are plotted in Fig.~\ref{fig:error_db_3}. The figure shows that, despite the larger size of the training dataset with respect to that in Case I, the predictive performance for Case III has worsened. The overall trend can be summarised by looking at the group of natural laminar flow airfoils in Fig.~\ref{fig:error_db_3}, for which the model predictions are now significantly worse ($7\% \lesssim E_\text{tr} \lesssim 13\%$), except for those airfoils that have been included within the training dataset ($0.7\% \lesssim E_\text{tr} \lesssim 2\%$). This observation points to a possible overfitting of the data by the RNN model under consideration. Hence, one may conclude that the augmentation of the original set by five additional airfoils does not provide a good representation of the overall database.
\begin{figure}
\centering
\subfloat[Mean error ($E_\text{env}$) percentage for N-factor envelope ]{\includegraphics[width=0.9\textwidth]{env_error_db_3.pdf}} \\
\subfloat[Mean relative error ($E_\text{tr}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{tr_error_db_3.pdf}} \\
\subfloat[Mean absolute error ($E_{x/c}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{abs_tr_error_db_3.pdf}}
\caption{Mean error percentage for each airfoil in the database, corresponding to the training dataset of Case III (augmented airfoils set) given in Table~\ref{tab:cases}. Airfoils belonging to the training dataset have been encircled: airfoils already in the training dataset from Case I are encircled in red, while the augmented set of airfoils is encircled in blue. Marker colors represent the dataset size (number of flow cases) of each airfoil in the database}
\label{fig:error_db_3}
\end{figure}
For Case IV, the training dataset from Case I has been augmented with the flow cases from the overall database that correspond to the highest predictive error percentage ($E_\text{env} > 3\%$) associated with the RNN model from Case I. The results for this case are shown in Fig.~\ref{fig:error_db_4}. A significant improvement can be observed in the overall predictive performance of the RNN model as compared to all of the previous cases. Based on this error-based data augmentation, the absolute error percentage in the predicted transition location ($E_{x/c}$) over any airfoil from the database is less than approximately 0.7\%. Fig.~\ref{fig:aoa_re_db_4} shows the distribution of the error percentage as a function of the flow conditions. Due to the kind of data augmentation used for this case, one finds that most flow cases tend towards smaller error values in comparison to those in Case I.
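The Case IV selection rule amounts to taking the union of the baseline training set with every flow case the baseline model predicts poorly. A minimal sketch (the function name and case identifiers are illustrative, not from the paper) is:

```python
def error_based_augmentation(base_ids, errors, threshold=3.0):
    """Case-IV-style augmentation sketch: keep the baseline training cases
    and add every flow case whose baseline-model error exceeds `threshold`
    (in percent). Identifiers and the function name are illustrative."""
    hard_cases = {cid for cid, e in errors.items() if e > threshold}
    return set(base_ids) | hard_cases

# toy baseline errors (E_env %) for four flow cases, one already in training
errs = {"a": 0.8, "b": 3.4, "c": 12.0, "d": 2.9}
train = error_based_augmentation({"a"}, errs)  # {"a", "b", "c"}
```

This targets precisely the regions of the database where the baseline model is weakest, which is consistent with the improvement reported for Case IV.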
\begin{figure}
\centering
\subfloat[Mean error ($E_\text{env}$) percentage for N-factor envelope ]{\includegraphics[width=0.9\textwidth]{env_error_db_4.pdf}} \\
\subfloat[Mean relative error ($E_\text{tr}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{tr_error_db_4.pdf}} \\
\subfloat[Mean absolute error ($E_{x/c}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{abs_tr_error_db_4.pdf}}
\caption{Mean error percentage for each airfoil in the database, corresponding to the training dataset of Case IV (error-based augmentation) given in Table~\ref{tab:cases}. Airfoils belonging to the training dataset have been encircled in red. Marker colors represent the dataset size (number of flow cases) of each airfoil in the database}
\label{fig:error_db_4}
\end{figure}
\begin{figure}
\centering
\subfloat[Relative error ($E_\text{tr}$) percentage of flow cases at different angles of attack]{\includegraphics[width=0.77\textwidth]{aoa_tr_db_4.png}} \\
\subfloat[Relative error ($E_\text{tr}$) percentage of flow cases at different Reynolds number]{\includegraphics[width=0.77\textwidth]{re_tr_db_4.png}}
\caption{Relative error ($E_\text{tr}$) percentage for all flow cases, corresponding to the training dataset of Case IV given in Table~\ref{tab:cases}. Green markers (filled circles) show only 1\% of the randomly sampled flow cases. The contour shows the kernel density estimated from all the flow cases; darker regions indicate higher probability density. The horizontal lines appearing in the contour plots, such as that near an error of 0.2\%, are due to a technical reason (bins have been defined in linear scale while the vertical axis of the plot is in log scale) and do not depict any real discontinuity}
\label{fig:aoa_re_db_4}
\end{figure}
The selection of training data for Cases II, III, and IV was based on the results of Case I, which consisted of five airfoils chosen somewhat arbitrarily (except for the attempt to include some representation from five different groups of airfoils). Although Case IV provides quite good results, such that the absolute error percentage associated with the prediction of the transition location ($E_{x/c}$) over any airfoil is 0.7\% or less, the selection of the training dataset has been made in an indirect manner on the basis of the results obtained in Case I.
A more direct strategy for subsampling a training dataset from the entire database has been assessed in Case V, where completely random subsets of varying magnitudes have been selected from the overall database. Two subcases (V-A and V-B) have been assessed in this regard, as summarised in Table~\ref{tab:cases}. Case V-A involves the selection of a fixed percentage of flow cases corresponding to each airfoil, which results in a different number of flow cases for each airfoil in the training dataset. As mentioned earlier, this uneven number of flow cases in the database results naturally from the fact that different airfoils achieve laminar to turbulent transition at different flow conditions and the fact that only upper-surface boundary layers are retained in the case of airfoils with symmetric contours. For Case V-B, a fixed number of flow cases corresponding to each airfoil has been selected as training data in order to provide a uniform weighting to each of the airfoils. With this arrangement to define the two subcases, different sizes of training datasets have been used to analyze the variation of error percentage with respect to the size of the training dataset. The results of this study are shown in Fig.~\ref{fig:case_5_comp}. The figure shows that, in both cases (V-A and V-B), there is an optimal size of the training dataset that leads to a minimum prediction error. For Case V-A, a training dataset size of $\sim$6200 (20\% of the flow cases from each airfoil) provides the best predictive performance. Similarly, for Case V-B, a training dataset size of $\sim$5300 (100 flow cases of each airfoil) provides the best predictive performance.
Comparing the results for both subcases in Fig.~\ref{fig:case_5_comp} shows that Case V-B provides a better prediction accuracy, which can be explained based on the error percentages of three airfoils: NLF(1)-1015, NLF(2)-0415, and LRN(1)-1007.
These three airfoils, with the highest percentage of error in Fig.~\ref{fig:error_db_5}(a), correspond to a relatively smaller number of flow cases in the database, as seen from the colors of their respective markers. Because the training set in Case V-A includes a fixed percentage of flow cases for each airfoil, the above three airfoils remain relatively underweighted with respect to the other airfoils with a larger number of flow cases.
On the other hand, using a fixed number of flow cases for each airfoil provides a more balanced representation of the various airfoils within the training dataset, which results in a better overall predictive performance.
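The two subsampling strategies can be sketched in a few lines; the handling of airfoils with fewer cases than the fixed count, the random seed, and the function name are assumptions of this illustration:

```python
import random

def subsample_per_airfoil(cases_by_airfoil, fraction=None, count=None, seed=0):
    """Random subsampling of a flow-case database, per airfoil.

    fraction : Case V-A style -- a fixed percentage of each airfoil's cases
    count    : Case V-B style -- a fixed number of cases per airfoil
               (all cases if an airfoil has fewer than `count`; an assumption)
    """
    rng = random.Random(seed)
    selected = {}
    for airfoil, cases in cases_by_airfoil.items():
        if fraction is not None:
            k = max(1, round(fraction * len(cases)))
        else:
            k = min(count, len(cases))
        selected[airfoil] = rng.sample(cases, k)
    return selected

# toy database: one well-populated airfoil and one sparsely populated one
db = {"NACA-0012": list(range(900)), "LRN(1)-1007": list(range(120))}
va = subsample_per_airfoil(db, fraction=0.20)  # 180 vs 24 cases
vb = subsample_per_airfoil(db, count=100)      # 100 vs 100 cases
```

The fixed-count variant gives the sparsely populated airfoils the same weight as the well-populated ones, which is the balancing effect credited to Case V-B above.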
\begin{figure}
\centering
\subfloat[Maximum error percentage for any airfoil dataset]{\includegraphics[width=0.4\textwidth]{max_env.pdf} \quad \quad \includegraphics[width=0.4\textwidth]{max_tr.pdf}} \\
\subfloat[Average error percentage for whole airfoil database ]{\includegraphics[width=0.4\textwidth]{ave_env.pdf} \quad \quad \includegraphics[width=0.4\textwidth]{ave_tr.pdf}} \\
\caption{Case V: Variation of error percentage with respect to training dataset size (number of flow cases)}
\label{fig:case_5_comp}
\end{figure}
\begin{figure}
\centering
\subfloat[Case V-A with randomly selected 20\% flow cases from each airfoil ]{\includegraphics[width=0.9\textwidth]{env_error_db_5_20p.pdf}} \\
\subfloat[Case V-B with randomly selected 100 flow cases from each airfoil ]{\includegraphics[width=0.9\textwidth]{env_error_db_5_random_100.pdf}} \\
\caption{Comparison of mean error ($E_\text{env}$) percentage for N-factor envelopes in Cases V-A and V-B. Marker colors represent the dataset size (number of flow cases) of each airfoil in the database\label{fig:error_db_5}}
\end{figure}
Results for all of the cases discussed in this section are summarised in Table~\ref{tab:cases_results}. It is interesting to note that the results of Case V-B, which provides a more direct approach for selecting the training dataset, are very comparable in terms of prediction accuracy to the results from Case IV, which uses an indirect approach to select the training dataset and provides the best results among all of the cases discussed herein. Moreover, the training datasets for both of these cases are of almost equal size. Hence, Case V-B provides a direct and convenient approach for selecting a subsample from a large database as the training data, while also yielding a good predictive performance over the entire database. Sample plots of the N-factor envelope for arbitrary combinations of airfoil contours and flow conditions are shown in Fig.~\ref{fig:N_plots}. These plots illustrate a qualitative comparison of the N-factor predictions based on the different training cases. One may clearly see that the predictions for certain flows in Cases IV and V-B are accurate even when the corresponding predictions for Cases I--III include significantly larger errors.
\begin{table}[tbp]
\centering
\caption{Results for different training dataset cases. \label{tab:cases_results}}
\begin{tabular}{|>{\centering\arraybackslash}m{0.9cm} | p{2.1cm} | >{\centering\arraybackslash}m{1.7cm} | >{\centering\arraybackslash}m{0.9cm} | >{\centering\arraybackslash}m{0.9cm} | >{\centering\arraybackslash}m{0.9cm} | >{\centering\arraybackslash}m{1cm} | >{\centering\arraybackslash}m{1cm} | >{\centering\arraybackslash}m{1cm} |} \hline
& & & \multicolumn{3}{c|}{
\begin{tabular}{>{\centering\arraybackslash}m{3cm}}
\textbf{Maximum error} \\ \hline
\end{tabular}
} & \multicolumn{3}{c|}{
\begin{tabular}{>{\centering\arraybackslash}m{3.3cm}}
\textbf{Average error} \\ \hline
\end{tabular}
} \\
\textbf{Index} & \textbf{Label} & \textbf{Number of flow cases} & $\mathbf{E_\text{env}}$ & $\mathbf{E_\text{tr}}$ & $\mathbf{E_{x/c}}$ & $\mathbf{E_\text{env}}$ & $\mathbf{E_\text{tr}}$ & $\mathbf{E_{x/c}}$ \\ \hline
I & Five airfoils & 2624 & 39.2\% & 30.2\% & 6.52\% & 2.95\% & 2.24\% & 0.42\% \\ \hline
II & Random augmentation & 7026 & 8.95\% & 5.66\% & 1.40\% & 1.92\% & 1.64\% & 0.26\% \\ \hline
III & Augmented airfoils set & 4455 & 14.5\% & 37.4\% & 5.15\% & 2.71\% & 4.14\% & 0.43\% \\ \hline
IV & Error-based augmentation & 5024 & 5.38\% & 3.82\% & 0.70\% & 1.53\% & 1.17\% & 0.15\% \\ \hline
V-A & Random selection (\%) & 6233 & 10.1\% & 35.0\% & 1.51\% & 1.66\% & 2.04\% & 0.22\% \\ \hline
V-B & Random selection (\#) & 5300 & 5.49\% & 4.22\% & 0.70\% & 1.60\% & 1.38\% & 0.19\% \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\subfloat[NACA-4418 ($-0.5^\circ$, $1\times10^8$)]{\includegraphics[width=0.31\textwidth]{naca4418_AoAm0p5deg_Rec1p024e8_lower_Nenv_LST_QPNC.bin.pdf}} \quad
\subfloat[NACA-6712 ($-2.5^\circ$, $2\times10^8$)]{\includegraphics[width=0.31\textwidth]{naca6712_AoAm2p5deg_Rec2p048e8_upper_Nenv_LST_QPNC.bin.pdf}} \quad
\subfloat[HSNLF(1)-0213 ($-2.0^\circ$, $1\times10^8$)]{\includegraphics[width=0.31\textwidth]{hsnlf10213_AoAm2p0deg_Rec1p024e8_upper_Nenv_LST_QPNC.bin.pdf}} \\
\subfloat[LRN(1)-1007 ($-1^\circ$, $1.4\times10^8$)]{\includegraphics[width=0.31\textwidth]{lrn1007b_AoAm1p0deg_Rec1p44e8_upper_Nenv_LST_QPNC.bin.pdf}} \quad
\subfloat[NACA 63(2)-615 ($2^\circ$, $1.4\times10^8$)]{\includegraphics[width=0.31\textwidth]{naca632615_AoAp2p0deg_Rec1p44e8_upper_Nenv_LST_QPNC.bin.pdf}} \quad
\subfloat[GAW-1 ($3.5^\circ$, $2.6\times10^7$)]{\includegraphics[width=0.31\textwidth]{gaw1_AoAp3p5deg_Rec2p56e7_lower_Nenv_LST_QPNC.bin.pdf}} \\
\caption{N-factor envelope plots for arbitrarily chosen flow cases to illustrate the comparison of predictions by the different training cases. The corresponding airfoil name and flow conditions (AOA, $\text{Re}_c$) are given for each plot\label{fig:N_plots}}
\end{figure}
\subsection{Working with limited database}
The database of airfoil flows generated during the present effort is relatively extensive in comparison to what may be generally available in a majority of practical situations. For this reason, assessments have been made to understand the predictive performance of the RNN model in selected possible scenarios. Such assessments are based on interpolation and extrapolation with respect to the airfoil contours, under the assumption that a relatively smaller training dataset based on just a few NACA 4-digit series airfoils is available. Table~\ref{tab:small_cases} outlines a summary of these cases and the corresponding results. Cases VI and VII provide an assessment with respect to the interpolation of airfoil contours within a family of selected symmetric and asymmetric airfoils, respectively. The resulting predictions are seen to be reasonably accurate for the testing dataset, with average errors of 0.12\% and 0.04\% of the chord length for the symmetric and asymmetric airfoils, respectively.
Similarly, Case VIII targets the evaluation of model performance with respect to the extrapolation of the airfoil thickness, and again, the predictions for the testing dataset are found to be reasonably accurate, with an average error of 0.06\% of the chord length for the given airfoil.
Hence, it appears that the RNN model is able to predict well for previously unknown airfoil sections within the same family, regardless of whether the test data involves an interpolation within the distribution of the training data or an extrapolation beyond its boundaries. These findings support the selection strategy underlying Case I, which included five airfoils representing multiple groups from the overall database.
\begin{table}[tbp]
\centering
\caption{Assessment cases for different training cases based on NACA 4-digit series airfoils. These cases have been studied to understand the model performance when a limited training dataset is available. \label{tab:small_cases}}
\begin{tabular}{|>{\centering\arraybackslash}m{0.9cm} | p{5cm} | >{\centering\arraybackslash}m{2.4cm} | >{\centering\arraybackslash}m{0.9cm} | >{\centering\arraybackslash}m{0.9cm} | >{\centering\arraybackslash}m{0.9cm} |} \hline
& & & \multicolumn{3}{c|}{
\begin{tabular}{>{\centering\arraybackslash}m{3cm}}
\textbf{Testing Error \%} \\ \hline
\end{tabular}
} \\
\textbf{Index} & \textbf{\qquad \quad Training Dataset} & \textbf{Testing Dataset} & $\mathbf{E_\text{env}}$ & $\mathbf{E_\text{tr}}$ & $\mathbf{E_{x/c}}$ \\ \hline
VI & {\small NACA-0006, \ NACA-0018} & {\small NACA-0012} & 0.97\% & 0.55\% & 0.12\% \\ \hline
VII & {\small NACA-2412, \ NACA-4412} & {\small NACA 2415} & 0.34\% & 0.17\% & 0.04\% \\ \hline
VIII & {\small NACA-0006, \ NACA-0012} & {\small NACA-0018} & 0.56\% & 0.24\% & 0.06\% \\ \hline
\end{tabular}
\end{table}
\begin{table}[tbp]
\centering
\caption{Results for a training dataset comprised of a single family of airfoils and a testing dataset comprised of the rest of the airfoils in the database. \label{tab:airfoil_extra}}
\begin{tabular}{|>{\centering\arraybackslash}m{0.75cm} | p{3.5cm} | >{\centering\arraybackslash}m{1.7cm} | >{\centering\arraybackslash}m{0.8cm} | >{\centering\arraybackslash}m{0.8cm} | >{\centering\arraybackslash}m{0.8cm} | >{\centering\arraybackslash}m{0.8cm} | >{\centering\arraybackslash}m{0.8cm} | >{\centering\arraybackslash}m{0.8cm} |} \hline
& & & \multicolumn{3}{c|}{
\begin{tabular}{>{\centering\arraybackslash}m{2.5cm}}
\textbf{Maximum error} \\ \hline
\end{tabular}
} & \multicolumn{3}{c|}{
\begin{tabular}{>{\centering\arraybackslash}m{2.4cm}}
\textbf{Average error} \\ \hline
\end{tabular}
} \\
\textbf{Index} & \textbf{\quad Training dataset} & \textbf{Number of flow cases} & $\mathbf{E_\text{env}}$ & $\mathbf{E_\text{tr}}$ & $\mathbf{E_{x/c}}$ & $\mathbf{E_\text{env}}$ & $\mathbf{E_\text{tr}}$ & $\mathbf{E_{x/c}}$ \\ \hline
IX & {\small NACA-0006, \ NACA-0018, NACA-2412, \ NACA-4412, NACA-6712} & 2841 & 67.5\% & 44.8\% & 9.51\% & 9.55\% & 9.43\% & 1.52\% \\ \hline
\end{tabular}
\end{table}
The assessment in Case IX involves a training dataset of five NACA 4-digit series airfoils, and the predictive performance is evaluated using a testing dataset based on the rest of the airfoils. Results for this case are shown in Table~\ref{tab:airfoil_extra}, where it can be observed that the predictive performance in this case is far worse in comparison to Case I, where the five airfoils were taken from different families of airfoils. Hence, a model trained using just a single family of airfoils does not extrapolate well to the other families of airfoils. Results for Case IX are also shown in Fig.~\ref{fig:error_db_extrap}, wherein the mean error percentages for the remaining families of airfoils are seen to be an order of magnitude higher than the error magnitudes associated with the airfoils included in the training dataset. This finding reinforces the approach adopted in Case I, namely, that a balanced training dataset should contain representation from different families of airfoils to achieve reasonably accurate predictive performance over the overall database.
\begin{figure}
\centering
\subfloat[Mean error ($E_\text{env}$) percentage for N-factor envelope ]{\includegraphics[width=0.9\textwidth]{env_error_db_extrap.pdf}} \\
\subfloat[Mean relative error ($E_\text{tr}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{tr_error_db_extrap.pdf}} \\
\subfloat[Mean absolute error ($E_{x/c}$) percentage for transition location prediction ]{\includegraphics[width=0.9\textwidth]{abs_tr_error_db_extrap.pdf}}
\caption{Mean error percentage for each airfoil in the database, corresponding to the training dataset of Case IX given in Table~\ref{tab:airfoil_extra}. Airfoils belonging to the training dataset are encircled in red. Marker colors represent the dataset size (number of flow cases) of each airfoil in the database\label{fig:error_db_extrap}}
\end{figure}
\section{Conclusion \label{conclusion}}
A sequence-to-sequence modeling approach based on a recurrent neural network has been proposed to predict the location of laminar-turbulent transition via linear amplification characteristics of hydrodynamic instabilities in boundary-layer flows. This approach provides an end-to-end transition model, which maps the sequence of mean boundary-layer profiles to corresponding growth rates along the N-factor envelope, and then, to the estimated transition location. In this regard, a large database comprised of the linear growth characteristics of over 33,000 boundary-layer flows over 53 airfoils from a disparate range of applications has been used to train and test the proposed model. The results demonstrate that the RNN model is able to predict the transition location at various test flow conditions and for the entire range of airfoil contours with good accuracy (average error of less than 0.70 percent of the chord length for any given airfoil) despite being trained on a small subsample (about 16\%) of the complete database. To our knowledge, the database used herein is one of the largest of its kind, presumably representing a significant cross-section of the airfoil universe. To facilitate the selection of representative yet computationally efficient training data, several alternate strategies have been investigated, providing insights into working with large amounts of data. The more easily realizable training set based on a small group of five airfoils (one each from five different groups of airfoil contours) forms the baseline for the selection of training data. A limited database of this type is found to result in substantial errors in transition prediction for a number of other airfoils, with average errors in transition location prediction across multiple test conditions for a single airfoil approaching as much as 6.52\% of the chord length for the given airfoil. 
Data augmentation with additional cases from other airfoils that correspond to the worst prediction errors from the baseline model is found to provide the best choice for improving the predictive performance of the RNN model, reducing the average error across all flow conditions in predicting transition location to 0.7\% of the chord length for any airfoil. An alternate strategy of using a training dataset consisting of an equally weighted representation of each airfoil was also evaluated and was found to provide equally good predictive performance. Further assessments also showed that the RNN-based model is able to extrapolate/interpolate well within a family of similar airfoils, while the predictive performance worsens when extrapolating the predictions to airfoils from other families.
Transition estimates based on the RNN model are easily three to four orders of magnitude faster than those based on direct stability computations. However, the main benefit of the deep learning models is an improved robustness of the prediction process, making it easier for non-experts in laminar-turbulent transition to perform such computations. On the other hand, the deep learning models are restricted in their generalizability, and this paper has addressed some of the issues related to the development of models that cover a broad space of flows. We believe the two types of models to be complementary in nature.
A significant advantage of the proposed RNN model over the previously proposed neural network-based transition models is that by using the sequential information of the underlying mean flow, the RNN model is able to directly predict the required information of the N-factor envelope and the transition location, without requiring the user to define a range of critical frequencies or to predict the instability growth rates at a number of frequencies within this range. On the other hand, since the RNN model predicts the growth rates of the N-factor envelope in a sequential manner, it cannot be applied in a parallel manner, unlike conventional methods or previously proposed neural networks that can predict the local growth rates at each station in parallel.
The proposed architecture processes the boundary-layer profiles at each station in a physically consistent manner using the convolutional neural network. This attribute enables its generalization to other instability mechanisms, such as those involving three-dimensional boundary-layer profiles with crossflow velocity components, or second-mode instabilities in high-speed flows, which involve the profiles of thermodynamic quantities such as density and/or temperature. Future work could involve the application of the proposed architecture to one of the other instability mechanisms. Furthermore, since the RNN model uses input data for the boundary-layer profiles, which depends on the airfoil contour and flow conditions, future explorations could involve airfoil contours along with angle of attack and Reynolds number as global inputs to predict the N-factor envelope. Use of a vector-cloud neural network can also be explored, as it would allow the user to employ boundary-layer profiles defined at any arbitrary and variable number of grid points~\citep{zhou2021frame}.
|
\section{Introduction}
In 1895 Felix Klein proposed the following geometric representation of continued fractions. For an irrational real number $\omega$ consider the ray $y = \omega x$ on the plane with the integer lattice. Let us quote Klein (1924):
\begin{quote}
Imagine pegs or needles affixed at all the integral points, and wrap a tightly drawn string about the sets of pegs to the right and to the left of the $\omega$-ray, then the vertices of the two convex string-polygons which bound our two point sets will be precisely the points $(p_{\nu}, q_{\nu})$ whose coordinates are the numerators and denominators of the successive convergents to $\omega$, the left polygon having the even convergents, the right one the odd. This gives a new and, one may well say, an extremely graphic definition of a continued fraction.
\end{quote}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{Klein_quotients} \hspace{8pt} \includegraphics[scale=0.5]{Klein_graph}
\caption{Klein's construction and the corresponding LLS sequence} \label{fig:Klein}
\end{figure}
Many years later Vladimir I. Arnold (1998) revitalised this point of view, mainly with an emphasis on multi-dimensional generalisations. In particular, for a polyhedral cone he introduced the notion of the {\it sail} as the boundary of the convex hull of the integer points inside it. In dimension 2 the sail of the angle formed by the $\omega$-ray and $x$-axis is precisely Klein's construction of the continued fraction expansion of $\omega$ (see Figure \ref{fig:Klein}).
This line was developed in more detail by Karpenkov (2013), who, importantly for us, introduced the \emph{lattice length sine (LLS) sequence} of positive integers $(a_i), \, i \in \mathbb Z$ of the sail and proved a remarkable edge-angle duality between the sails of the adjacent angles (see Figure \ref{fig:Klein}). He also linked this with the theory of indefinite binary quadratic forms. Indeed, the zero set of such a form is a pair of lines, forming 4 angles with 4 sails, which are either isomorphic or dual to each other (see Figure \ref{fig:arnold_4graph}).\footnote {Photo of V.I. Arnold is reproduced from (Arnold 2002) with permission from MCCME.}
\begin{figure}[h]
\centering
\includegraphics[scale=0.2]{arnold2} \hspace{8pt} \includegraphics[height=56mm]{4graph}
\caption{Arnold and the sails for a pair of lines.} \label{fig:arnold_4graph}
\end{figure}
On the other hand, John H. Conway (1997) proposed the notion of the {\it topograph} of a binary quadratic form $Q$ as a graphical way to visualise the values of $Q$ on a planar binary tree (see section 3 below). For indefinite quadratic forms he introduced the notion of the {\it river}, which is a path on the topograph separating positive and negative values of $Q$ (see Figure \ref{fig:river}).
\begin{figure}[h]
\begin{center}
\includegraphics[height=40mm]{river2}
\caption{\small Conway river for the quadratic form $Q=x^2-2xy-5y^2.$} \label{fig:river}
\end{center}
\end{figure}
The main result of this paper is the following simple relation between the Conway river and the corresponding Arnold sail.
\begin{Theorem}
Let $Q(x,y)$ be a real indefinite binary quadratic form and consider the Arnold sail of the pair of lines given by $Q(x,y)=0,$ assuming that the origin is the only integer point on them.
Then the corresponding LLS sequence $(a_i), \, i \in \mathbb Z$ coincides with the sequence of the left- and right-turns of the Conway river on the topograph of $Q.$ This determines the river uniquely up to the action of the group $PGL(2, \mathbb Z)$ on the topograph and a change of direction.
\end{Theorem}
For example, for $Q=x^2-2xy-5y^2$ one can check that the corresponding LLS sequence is periodic, equal to $\dots 4,2,4,2,4,2, \dots$, which is exactly the sequence of left-right turns $\dots LLLLRRLLLLRR\dots$ of the (properly oriented) Conway river in Figure \ref{fig:river}.
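This can be checked numerically: the roots of $Q(x,1)=x^2-2x-5$ are $1\pm\sqrt{6}$, and the continued fraction of a quadratic irrational $(p+\sqrt{d})/q$ can be computed exactly with the classical surd algorithm. A sketch (standard algorithm, helper name ours; it assumes the denominator stays positive, which holds for the inputs below):

```python
from math import isqrt

def surd_cf(p, q, d, n):
    """First n partial quotients of x = (p + sqrt(d))/q, assuming q | (d - p^2),
    q > 0 at every step (true for the inputs below), and d not a perfect square."""
    assert (d - p * p) % q == 0 and isqrt(d) ** 2 != d
    terms = []
    for _ in range(n):
        a = (p + isqrt(d)) // q   # exact floor of (p + sqrt(d))/q for q > 0
        terms.append(a)
        p = a * q - p
        q = (d - p * p) // q
    return terms

# alpha = 1 + sqrt(6) = [3; 2, 4, 2, 4, ...]: the tail has period (2, 4),
# matching the periodic LLS sequence ..., 4, 2, 4, 2, ... quoted above.
print(surd_cf(1, 1, 6, 7))  # -> [3, 2, 4, 2, 4, 2, 4]
```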
The proof is simple and essentially follows from the results of Karpenkov (2013) combined with more detailed analysis of the Conway river from (Spalding and Veselov 2017).
\section{Arnold sail and the LLS sequence of the angles}
We follow here Karpenkov (2013) (see, in particular, Chapters 2 and 4).
Let $A,B,C$ be three distinct integer lattice points on the plane and $\angle ABC$ be the corresponding angle. Define the {\it integer length} $\mathrm{l}l(AB)$ of the segment $AB$ as the number of integer points in the interior of $AB$ plus one, and the {\it integer area} $\mathrm{l}S(\triangle ABC)$ of the triangle $\triangle ABC$ as the index of the sublattice generated by the integer vectors $AB$ and $BC$ in the integer lattice.
The {\it integer sine} of the angle $\angle ABC$ is defined as
\begin{equation}
\label{lsin}
\mathrm{l}\sin \angle ABC=\frac{\mathrm{l}S(\triangle ABC)}{\mathrm{l}l(AB)\mathrm{l}l(BC)}=\frac{|\det(AB,BC)|}{\mathrm{l}l(AB)\mathrm{l}l(BC)}.
\end{equation}
One can check that it is a positive integer and depends only on the angle, and not on the choice of $B$ and $C.$
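In coordinates these invariants are easy to compute: the integer length of a segment equals the gcd of the absolute coordinate differences of its endpoints, and formula (\ref{lsin}) reduces to an integer determinant divided by the two lengths. A minimal illustrative sketch (the helper names are ours, not from the paper):

```python
from math import gcd

def int_length(P, Q):
    """Integer length of the segment PQ: interior lattice points plus one,
    which equals gcd of the absolute coordinate differences."""
    return gcd(abs(Q[0] - P[0]), abs(Q[1] - P[1]))

def int_sin(A, B, C):
    """Integer sine of the angle ABC via |det| / (ll * ll); the determinant of
    the edge vectors at the vertex B agrees with det(AB, BC) up to sign."""
    u = (A[0] - B[0], A[1] - B[1])
    v = (C[0] - B[0], C[1] - B[1])
    det = abs(u[0] * v[1] - u[1] * v[0])
    s, r = divmod(det, int_length(B, A) * int_length(B, C))
    assert r == 0  # the quotient is always a positive integer
    return s

# Angle at the origin between the rays through (1, 0) and (1, 2):
print(int_sin((1, 0), (0, 0), (1, 2)))  # -> 2
# Independent of the chosen points on the rays:
print(int_sin((2, 0), (0, 0), (3, 6)))  # -> 2
```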
Consider now a pair of lines given by $y=\alpha x$ and $y=\beta x$ and one of the angles $\angle {\mathcal A} O {\mathcal B}$ formed by them.
Let us assume that $\alpha$ and $\beta$ are irrational, so that the origin $O$ is the only integer point on them, and consider the convex hull of the integer points inside $\angle {\mathcal A} O {\mathcal B}$ (excluding $O$). Its boundary is an infinite broken line called the {\it Arnold sail} of the angle $\angle {\mathcal A} O {\mathcal B}$.
Let $(A_i), \, i\in \mathbb Z$ be the sequence of vertices of this sail.
Karpenkov (2013) introduced the following key notion of the \emph{LLS (lattice length sine) sequence} $(a_i), \, i \in \mathbb Z$ of the angle $\angle {\mathcal A} O {\mathcal B}$ as
\begin{equation}
\label{LLS}
a_{2k} = \mathrm{l}l A_k A_{k+1}, \quad
a_{2k-1} = \mathrm{l} \sin \left(\angle A_{k-1} A_k A_{k+1} \right).
\end{equation}
Karpenkov proved that the LLS sequence determines the angle uniquely up to an integer affine transformation. Note that the sequence is defined up to a shift of indices and depends on the orientation of the sail.
When the angle is formed by the $x$-axis and $\omega$-ray the corresponding LLS sequence is semi-infinite and gives precisely the continued fraction representation of $\omega$ (see Figure \ref{fig:Klein}):
$$\omega=[a_0, a_1, a_2, \dots]:=a_0+\frac{1}{a_1 +\frac{1}{a_2+ \dots}}.$$
Let us look at the sail of the angle formed by the $\omega$-ray and $y$-axis.
Let $B_0B_1B_2 \ldots$ be the sequence of vertices of the corresponding sail. Then we have the remarkable {\it edge-angle duality} (Karpenkov 2013):
\begin{equation}
\mathrm{l}\sin \left( \angle A_i A_{i+1} A_{i+2} \right) = \mathrm{l} l \left(B_i B_{i+1} \right),
\end{equation}
\begin{equation}
\mathrm{l}\sin \left( \angle B_i B_{i+1} B_{i+2} \right) = \mathrm{l} l \left(A_{i+1} A_{i+2} \right).
\end{equation}
This explains why we do not need to consider the second sail to extract the full continued fraction expansion.
Note that the coordinates of $A_i=(p_{2i}, q_{2i})$ and $B_i=(p_{2i-1}, q_{2i-1})$ are the corresponding denominators and numerators of the continued fraction convergents for $\omega$ (see (Klein 1924) and Figure \ref{fig:Klein}).
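The convergents themselves are generated from the partial quotients by the standard recurrence $p_k = a_k p_{k-1} + p_{k-2}$, $q_k = a_k q_{k-1} + q_{k-2}$, so the sail vertices above are directly computable. A minimal sketch (helper name ours):

```python
def convergents(cf):
    """Convergents p_k/q_k of the continued fraction [a0; a1, a2, ...]."""
    p0, q0, p1, q1 = 1, 0, cf[0], 1   # p_{-1}/q_{-1} and p_0/q_0
    out = [(p1, q1)]
    for a in cf[1:]:
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        out.append((p1, q1))
    return out

# sqrt(6) = [2; 2, 4, 2, 4, ...]; the convergents approach 2.449...
print(convergents([2, 2, 4, 2, 4]))
# -> [(2, 1), (5, 2), (22, 9), (49, 20), (218, 89)]
```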
For general lines given by $y=\alpha x$ and $y=\beta x$ the corresponding (infinite in both directions) LLS sequence can be considered as {\it a joint continued fraction expansion} of the numbers $\alpha$ and $\beta$ and is related to the rational approximation of the arrangement of these two lines (or, equivalently to the corresponding quadratic form $Q=(y-\alpha x)(y-\beta x)$, see Chapter 10 in (Karpenkov 2013)).
\section{Topograph of binary quadratic form and Conway river}
We follow here the original approach of Conway (1997).
Conway proposed the following nice way to visualise the values of a binary quadratic form
\begin{equation}
\label{Q}
Q(x, y) = ax^2 + hxy + by^2, \quad (x,y) \in \mathbb{Z}^2.
\end{equation}
He considered the case when all the coefficients $a,b,h$ are integers, but his construction works for real coefficients as well.
Conway introduced the notions of the {\it lax} vector as a pair $(\pm v), v \in \mathbb Z^2$, and of the \emph{superbase} of the integer lattice $\mathbb Z^2$ as a triple of lax vectors $(\pm e_1, \pm e_2, \pm e_3)$ such that $(e_1, e_2)$ is a basis of the lattice and
\begin{equation*}
e_1 + e_2 + e_3 = 0.
\end{equation*}
It is easy to see that every basis can be included in exactly two superbases, which we can represent using the binary tree embedded in the plane (see Figure \ref{fig:tree}).
The lax vectors live in the complement to the tree (we show only one representative of them), while the superbases correspond to the vertices.
Note that all {\it primitive} lattice vectors, i.e. those which are not multiples of any other lattice vectors, appear on this tree.
\begin{figure}[h]
\begin{center}
\includegraphics[height=41mm]{Tree3big} \hspace{15pt} \includegraphics[scale=0.23]{abcv}
\caption{\small The superbase tree and arithmetic progression rule for values of quadratic forms.} \label{fig:tree}
\end{center}
\end{figure}
By taking the values of the form $Q$ on the vectors of the superbase tree, we get what Conway called the {\it topograph} of $Q.$ The idea is to get the values of $Q$ on all primitive lattice vectors in this way.
In particular, if $e_1=(1,0), e_2=(0,1), e_3=-(1,1)$ we have the values
$Q(e_1)=a, \,\, Q(e_2)=b, \,\, Q(e_3)=c:=a+b+h.$
One can construct the topograph of $Q$ starting from this triple using the {\it arithmetic progression rule} (known in geometry as the {\it parallelogram rule}):
\begin{equation}
\label{apr}
Q(u+v)+Q(u-v)=2(Q(u)+Q(v)), \quad u,v \in \mathbb R^2.
\end{equation}
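The rule is an algebraic identity valid for any binary quadratic form, so it can be sanity-checked numerically; a small sketch using the form $Q=x^2-2xy-5y^2$ from the introduction (function names ours):

```python
def Q(v, a=1, h=-2, b=-5):
    """Value of Q(x, y) = a x^2 + h x y + b y^2 on a vector v = (x, y)."""
    x, y = v
    return a * x * x + h * x * y + b * y * y

def check_rule(u, v):
    """Arithmetic progression (parallelogram) rule: Q(u+v) + Q(u-v) = 2(Q(u) + Q(v))."""
    s = (u[0] + v[0], u[1] + v[1])
    d = (u[0] - v[0], u[1] - v[1])
    return Q(s) + Q(d) == 2 * (Q(u) + Q(v))

print(all(check_rule((x, y), (z, w))
          for x in range(-3, 4) for y in range(-3, 4)
          for z in range(-3, 4) for w in range(-3, 4)))  # -> True
```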
We also need to construct the {\it Farey tree} by replacing $v=(p,q)$ on the superbase tree by the corresponding fraction $\frac{p}{q}$ (so that addition of vectors corresponds to taking the Farey mediant of fractions).
Using the Farey tree, we can label any semi-infinite path on the tree by the real number $\xi$ which is the limit of the Farey fractions along it; we denote this path by $\gamma_\xi$.
The path $\gamma_\xi$ is actually a geometric way to represent the continued fraction expansion of $\xi =\left[ a_0, a_1, a_2, a_3 \ldots \right]$:
it has $a_0$ left-turns on the tree, followed by $a_1$ right-turns, followed by $a_2$ left-turns, and so on
(see Figure \ref{fig:golden}, showing the Fibonacci path corresponding to the golden ratio $\xi=[1,1,1\dots]$).
\begin{figure}[h]
\begin{center}
\includegraphics[height=45mm]{Fib_Path_Q} \hspace{8pt} \includegraphics[height=45mm]{Fib_Path_Farey}
\caption{\small Topograph of $Q=x^2+y^2$ and the corresponding positive part of the Farey tree with marked Fibonacci path.} \label{fig:golden}
\end{center}
\end{figure}
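The left-right encoding just described is easy to script: $[a_0; a_1, a_2, \dots]$ becomes $a_0$ left-turns, then $a_1$ right-turns, and so on. A minimal sketch (helper name ours):

```python
def turns(cf):
    """Turn sequence of the Farey path for [a0; a1, a2, ...]:
    a0 left-turns, then a1 right-turns, then a2 left-turns, and so on."""
    return "".join(("L" if i % 2 == 0 else "R") * a for i, a in enumerate(cf))

# Golden ratio [1; 1, 1, 1, ...] gives the alternating Fibonacci path.
print(turns([1, 1, 1, 1, 1, 1]))  # -> LRLRLR

# The periodic sequence 4, 2, 4, 2 reproduces the river pattern
# ...LLLLRRLLLLRR... quoted in the introduction.
print(turns([4, 2, 4, 2]))  # -> LLLLRRLLLLRR
```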
Let us assume now that the form $Q$ is indefinite and does not represent zero, meaning that
$Q(x,y) \neq 0$ for all $(x,y)\in \mathbb Z^2\setminus (0,0).$
In this case the same arguments as in the integer case (Conway 1997) show that on the topograph of $Q$ positive and negative values are separated by an infinite path
which we call the {\it Conway river}. In the integer case we explained in (Spalding and Veselov 2017) how the Conway river is related to the continued fraction expansion of the roots $\alpha, \bar\alpha$ of the quadratic equation $Q(x,1)=0$
(see Figure \ref{fig:river-17}).
\begin{figure}[h]
\begin{center}
\includegraphics[height=95mm]{river7}
\caption{\small Paths to the roots $\alpha$ and $\bar\alpha$ and periodic Conway river for $Q=11x^2-10xy+2y^2.$} \label{fig:river-17}
\end{center}
\end{figure}
Note that in the general case of binary quadratic forms with real coefficients, we do not have periodicity of the river anymore.
\section{Proof of the theorem}
Consider an indefinite quadratic form $Q(x,y)=ax^2+hxy+by^2$ and factorise it as a product of linear forms
$$
Q(x,y)=b(y-\alpha x)(y-\beta x)
$$
with irrational $\alpha, \beta$, assuming without loss of generality that $\alpha>0$ and $\beta<0.$
Let $P=A_0$ be a corner of the Arnold sail of the corresponding pair of lines $y=\alpha x$ and $y=\beta x$. Choose a new basis in the lattice with $e_1=OA_0$ and $e_2$ being a primitive vector along the edge $A_0A_1$ of the Arnold sail. From Klein's construction it follows that this is indeed a basis.
In the new coordinate system the corresponding $\alpha>1$ and $0>\beta>-1,$ and we have the situation shown in Figure \ref{fig:normal} justified by the following lemma (see also (Markov 1879) and Definition 1.1 from (Karpenkov 2018)).
\begin{Lemma}
The LLS sequence of the Arnold sail of a pair of lines $y=\alpha x$ and $y=\beta x$ with $\alpha>1$ and $0>\beta>-1$ is
\begin{equation}
\label{llseq}
\dots, b_4, b_3, b_2, b_1, a_0, a_1, a_2, a_3, \dots,
\end{equation}
where $a_i$ and $b_j$ are given by the continued fraction expansions
\begin{equation}
\label{llseqab}
\alpha=[a_0, a_1, a_2, a_3, \dots], \quad -\beta=[0, b_1, b_2, b_3, b_4, \dots].
\end{equation}
\end{Lemma}
The proof follows directly from Klein's construction and the results of Karpenkov (2013) (see Ch. 3 and 7, in particular, Prop. 7.5).
\begin{figure}[h]
\begin{center}
\includegraphics[height=90mm]{normal_graph}
\caption{\small Arnold sail in a special basis.} \label{fig:normal}
\end{center}
\end{figure}
Let us now look at the corresponding Conway river. Since $\alpha\beta<0$, the values $Q(1,0)=b\alpha\beta$ and $Q(0,1)=b$ have different signs, so our initial position is already on the Conway river.
We know that the Conway river is the unique path on the Farey tree connecting the points $\alpha$ and $\beta$ on the boundary, and thus is the union of two paths $\gamma_\alpha$ and $\gamma_\beta$. Combining this with the description of the Farey paths in terms of continued fractions (see above), we conclude that the sequence (\ref{llseq}) determines the sequence of the river's left and right turns.
Let us now prove that this determines the river uniquely modulo the action of $PGL(2, \mathbb Z)$, which is the symmetry group of the binary tree embedded in the plane.
Indeed, we have a well-known isomorphism $$PSL(2, \mathbb Z)=\mathbb Z_2*\mathbb Z_3.$$ This allows us to define the action of $PSL(2, \mathbb Z)$ on the tree with generators of $\mathbb Z_2$ and $\mathbb Z_3$ acting as rotations by $\pi$ about the edge centre and by $2\pi/3$ about the vertex. The element $\mathrm{diag}\,(-1,1) \in GL(2,\mathbb Z)$ acts by the natural reflection and changes the orientation.
Using this action one can transform any directed edge to any other. After that the sequence of left- and right-turns determines the river uniquely. The left-right symmetry corresponds to the reflection. This proves our theorem.
\section{Concluding remarks}
Arnold sails can be defined in a much more general situation, in particular, for cubic binary forms and multidimensional simplicial cones. This is related to the geometric theory of multidimensional continued fractions, also going back to Klein (see (Karpenkov 2013) for the details). It is an interesting question whether there is an analogue of the Conway topograph here.
Another interesting question is to study the growth of values of the real binary quadratic forms along the paths on the Conway topograph, similar to the integer case considered in (Spalding and Veselov 2017). Note that in the real case the values of the form along the Conway river may approach zero (see e.g. Kleinbock and Weiss (2015)), so the situation here is more subtle.
\section{Acknowledgements} We are very grateful to Oleg Karpenkov for explaining to us his important results about LLS sequences, and to Nikolai Andreev for helpful discussions.
The work of K.S. was supported by the EPSRC as part of PhD study at Loughborough.
|
\section{Introduction}
A triple $(G, +, \circ)$, where $(G, +)$ and $(G, \circ)$ are (not necessarily abelian) groups, is said to be a \emph{skew left brace} if
\begin{equation}
g_1 \circ (g_2 + g_3) = (g_1 \circ g_2) - g_1 + (g_1 \circ g_3)
\end{equation}
for all $g_1, g_2, g_3 \in G$, where $- g_1$ denotes the inverse of $g_1$ in $(G, +)$. We call $(G, +)$ the \emph{additive group} and $(G, \circ)$ the \emph{multiplicative group} of the skew left brace $(G, +, \circ)$. A skew left brace
$(G, +, \circ)$ is said to be a \emph{left brace} if $(G, +)$ is an abelian group. The concept of left braces was introduced by Rump \cite{R2007} in 2007 in connection with non-degenerate involutive set theoretic solutions of the quantum Yang-Baxter equations. Thereafter the subject received tremendous attention from the mathematical community; see \cite{BCJO18, FC2018, WR2019, AS2018} and the references therein. Interest in the study of set theoretic solutions of the quantum Yang-Baxter equations was sparked by the paper \cite{D1992} of Drinfeld, published in 1992.
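A concrete small example comes from Rump's radical-ring construction (our illustrative choice, not an example from this paper): on the nilpotent ring $R = 2\mathbb{Z}/8\mathbb{Z} = \{0, 2, 4, 6\}$, the adjoint operation $g_1 \circ g_2 = g_1 + g_2 + g_1 g_2 \pmod 8$ makes $(R, +, \circ)$ a left brace whose additive group is $C_4$ while the multiplicative group is $C_2 \times C_2$. Both claims can be verified exhaustively:

```python
R = [0, 2, 4, 6]                          # the nilpotent ring 2Z/8Z
add = lambda a, b: (a + b) % 8            # additive group: cyclic of order 4
circ = lambda a, b: (a + b + a * b) % 8   # adjoint "circle" operation

# Left brace axiom (the additive group is abelian):
#   a o (b + c) = (a o b) - a + (a o c)
print(all(circ(a, add(b, c)) == (circ(a, b) - a + circ(a, c)) % 8
          for a in R for b in R for c in R))  # -> True

# Every element is its own circle-inverse, so (R, o) is C2 x C2,
# whereas (R, +) is cyclic: the two group structures differ.
print(all(circ(a, a) == 0 for a in R))  # -> True
```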
Let $X$ be an arbitrary set and $R : X \times X \to X \times X$ a bijective map. Recall that the pair $(X, R)$ is said to be a set theoretic solution of the Yang-Baxter equation if
$$R_{12}R_{23}R_{12} = R_{23}R_{12}R_{23}$$
holds in the set of all maps from $X \times X \times X$ to itself, where $R_{ij}$ is just $R$ acting on the $i$th and $j$th components of $X \times X \times X$ and identity on the
remaining one. Let us write
$$R(x, y) = \big(\sigma_x(y), \tau_y(x)\big), ~ x, y \in X$$
with $\sigma_x$ and $\tau_y$ component maps from $X$ to itself.
A solution $(X, R)$ is said to be non-degenerate if the component maps $\sigma_x$ and $\tau_y$ are bijections on $X$ for all $x, y \in X$. It is said to be involutive if $R^2$ is the identity map. The study of non-degenerate set theoretic solutions of the quantum Yang-Baxter equations has been extensively taken up; see, e.g., \cite{CJR10, PD2015, ESS99, GI2018, LV2016}, to mention a few.
The concept of skew left brace was introduced by Guarnieri and Vendramin \cite{GV2017} in 2017 in connection with non-involutive non-degenerate set theoretic solutions of the quantum Yang-Baxter equations. They invented an algorithm, by generalising a result of Bachiller \cite{DB2016}, for computing all skew left braces of a given order. They themselves computed left braces and skew left braces for a lot of groups up to order 120. Vendramin \cite{LV2019} extended the number up to 168 with some exceptions. All these computations are done using the computer algebra systems MAGMA \cite{magma} and GAP \cite{GAP} using the algorithm invented in \cite{GV2017}. For more work on skew braces see \cite{CSV19, LC2018, KN2019}.
This article aims at filling up the gaps in the table produced in \cite{LV2019} to some extent and making further computations for larger orders. An ingenious observation on regular subgroups of the holomorph of a given finite group allows us to improve the algorithm obtained in \cite{GV2017}, which substantially enhances the performance of the MAGMA computation. The improved algorithm, actually, avoids an expensive calculation in the existing algorithm. We compute the number of non-isomorphic left braces and skew left braces of orders up to 868 except in certain cases (mainly when the order is a multiple of 32). These results settle \cite[Problem 13]{LV2019} and \cite[Problem 6.1]{GV2017}. The computations will help in building a database of left braces and skew left braces, which in turn will greatly enrich the library of solutions of the quantum Yang-Baxter equation. On the basis of our computation, we suggest some conjectures for further research.
It is striking that there are more than a million skew brace structures of order $2^5$ and more than 20 million skew brace structures of order $3^5$. The reader will encounter many more surprises while going through the tables. We have used MAGMA on a computer with 3.5 GHz 6-Core Intel Xeon E5 processor and 64 GB memory for these computations.
\section{Regular subgroups}
Let $G$ be a group, which acts on a set $X$. The action of an element $g \in G$ on an element $x \in X$ is denoted by $x^g$. A subgroup $H$ of $G$ is said to be \textit{action-closed} if for each pair $(g, x) \in G \times X$, there exists an element $h \in H$ such that $x^g = x^h$. By \textit{$H$ - conjugacy class} of $x \in G$, we mean $\{x^h \mid h \in H\}$. For $g, h \in G$, we write the conjugate of $g$ by $h$ as $g^h = h^{-1}gh$.
Let $G$ be a group and $\operatorname{Symm} (G)$ be the symmetric group on the set $G$. Recall that a subgroup $\mathcal{G}$ of $\operatorname{Symm} (G)$ is said to be \textit{regular} if $\mathcal{G}$-action on $G$ is free and transitive. By a free action we mean here that for any element $g \in G$, its stabilizer in $\mathcal{G}$ is the trivial subgroup. Observe that when $G$ is finite, any regular subgroup of $\operatorname{Symm} (G)$ is of order $|G|$.
For a group $G$, $\operatorname{Hol} (G)$ denotes the holomorph of $G$, which is defined as the semidirect product of $G$ with $\operatorname{Aut} (G)$, the automorphism group of $G$. So
\[\operatorname{Hol} (G) := \operatorname{Aut} (G) \ltimes G,\]
where the product in $\operatorname{Hol} (G)$ is given by
\[(\alpha, g)(\beta, h) = (\alpha \beta, g\alpha(h)).\]
Notice that $\operatorname{Hol} (G)$ acts on $G$ transitively under the following action:
\[g^{(\alpha, h)} = \pi_2\big((\alpha, h) (1, g)\big) = h\alpha(g)\]
for all $\alpha \in \operatorname{Aut} (G)$ and $g, h \in G$, where $\pi_2 : \operatorname{Hol} (G) \to G$ is the projection map given by $\pi_2\big( (\alpha, g)\big) = g$. It follows that the stabilizer of any element of $G$ in $\operatorname{Hol} (G)$ is isomorphic to $\operatorname{Aut} (G)$.
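For $G = \mathbb{Z}/n$ this product and action are easy to verify numerically, with $\operatorname{Aut}(G)$ realised as multiplication by the units mod $n$; note that in this convention the exponent composes as $g^{(\beta,k)(\alpha,h)} = \big(g^{(\alpha,h)}\big)^{(\beta,k)}$. An illustrative sketch (not from the paper):

```python
from math import gcd

n = 12
units = [u for u in range(1, n) if gcd(u, n) == 1]   # Aut(Z/n)
hol = [(a, h) for a in units for h in range(n)]      # Hol(Z/n), order phi(n)*n

def mul(p, q):          # (a, g)(b, h) = (ab, g + a*h), all mod n
    a, g = p
    b, h = q
    return (a * b % n, (g + a * h) % n)

def act(p, x):          # x^{(a, h)} = h + a*x mod n
    a, h = p
    return (h + a * x) % n

# Compatibility of action and product: x^{pq} = (x^q)^p in this convention.
print(all(act(mul(p, q), x) == act(p, act(q, x))
          for p in hol for q in hol for x in range(n)))  # -> True
```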
Let $\mathcal{G}$ be a regular subgroup of $\operatorname{Hol} (G)$. Then it is not difficult to see that for each $g \in G$, there exists a unique element $(\alpha, h) \in \mathcal{G}$ such that $g^{(\alpha, h)} = h\alpha(g) = 1$. Let $\operatorname{Reg} (G)$ denote the set of all regular subgroups of $\operatorname{Hol} (G)$. Then $\operatorname{Hol} (G)$ acts on $\operatorname{Reg} (G)$ by conjugation. With this setting, we have
the following easy observation, which plays a key role in what follows.
\begin{lemma}\label{key-lemma}
$\operatorname{Aut} (G)$, as a subgroup of $\operatorname{Hol} (G)$, is action-closed with respect to the conjugation action of $\operatorname{Hol} (G)$ on $\operatorname{Reg} (G)$.
\end{lemma}
\begin{proof}
Let $\mathcal{G} \in \operatorname{Reg} (G)$ and $(\alpha, h) \in \operatorname{Hol} (G)$. Then there exists an element $(\alpha_1, h_1) \in \mathcal{G}$ such that $h^{(\alpha_1, h_1)} = h_1\alpha_1(h) = 1$. Notice that
$$(\alpha_1, h_1)(\alpha, h) = \big(\alpha_1 \alpha, h_1\alpha_1(h)\big) = (\alpha_1 \alpha, 1).$$
Let $\beta := \alpha_1 \alpha$, which lies in $\operatorname{Aut} (G)$. Thus,
\[\mathcal{G}^{(\beta,1)} = \mathcal{G}^{(\alpha_1, h_1)(\alpha, h)} = \big(\mathcal{G}^{(\alpha_1, h_1)}\big)^{(\alpha, h)} = \mathcal{G}^{(\alpha, h)}.\]
Proof is now complete.
\end{proof}
The preceding lemma enables us to get the following generalization of \cite[Proposition 4.3]{GV2017}.
\begin{thm}\label{thm}
Let $(G, +)$ be a group. Then non-isomorphic skew left braces $(G, +, \circ)$ are in bijective correspondence with conjugacy classes of regular subgroups in $\operatorname{Hol} (G, +)$. Moreover, if $G$ is a $p$-group for some prime $p$, then non-isomorphic skew left brace structures over $G$ are in bijective correspondence with $\operatorname{Aut} (G)$ - conjugacy classes of regular subgroups of any Sylow $p$-subgroup of $\operatorname{Hol} (G, +)$.
\end{thm}
\begin{proof}
The first assertion follows from \cite[Proposition 4.3]{GV2017} along with Lemma \ref{key-lemma}. Let $\mathcal{S}$ be a fixed Sylow $p$-subgroup of $\operatorname{Hol} (G, +)$ and $\mathcal{S}'$ any other Sylow $p$-subgroup of $\operatorname{Hol} (G, +)$. In the light of the first assertion, we only need to observe that any regular subgroup of $\mathcal{S}'$ lies in the $\operatorname{Hol} (G, +)$ - conjugacy class of some regular subgroup of $\mathcal{S}$. But this is obvious by Sylow theory.
\end{proof}
As a result, we get the following algorithm, which improves \cite[Algorithm 5.1]{GV2017}.
\begin{alg}\label{alg1}
For a finite group $(G, +)$, the following sequence of computations constructs all non-isomorphic skew left braces $(G, +, \circ):$
\begin{enumerate}
\item Compute $\operatorname{Hol} (G, +)$.
\item Compute the list of regular subgroups of $\operatorname{Hol} (G)$ of order $|G|$ up to conjugation.
\item For each representative $\mathcal{G}$ of regular subgroups of $\operatorname{Hol} (G)$, construct the map $\chi : G \to \mathcal{G}$ given by $g \mapsto \big(f, f(g)^{-1}\big)$, where $ \big(f, f(g)^{-1}\big) \in \mathcal{G}$. The triple $(\mathcal{G}, G, \chi)$ yields a skew left brace $(G, +, \circ)$ with $\circ: G \times G \to G$ given by $g_1 \circ g_2 = \chi^{-1}\big(\chi(g_1)\chi(g_2)\big)$ for all $g_1, g_2 \in G$.
\end{enumerate}
\end{alg}
As remarked in \cite{GV2017} too, for enumerating skew left braces with additive group $(G, +)$ we only need the first two steps of this algorithm.
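For the smallest nontrivial additive group the first two steps can be brute-forced directly. The sketch below (illustrative Python, not the MAGMA code behind the tables) takes $(G,+)=\mathbb{Z}/4$, realises $\operatorname{Hol}(G)$ as pairs $(\alpha,h)$, enumerates its regular subgroups, and counts their conjugacy classes; the count of 2 corresponds to the two skew left braces with additive group $C_4$, with multiplicative group $C_4$ or $C_2\times C_2$:

```python
n = 4
units = [1, 3]                                   # Aut(Z/4): multiplication by units
hol = [(a, h) for a in units for h in range(n)]  # Hol(Z/4), order 8

def mul(p, q):          # (a, g)(b, h) = (ab, g + a*h), all mod 4
    a, g = p
    b, h = q
    return (a * b % n, (g + a * h) % n)

def act(p, x):          # x^{(a, h)} = h + a*x mod 4
    a, h = p
    return (h + a * x) % n

def closure(gens):      # subgroup generated by gens (finite group, so products suffice)
    S = {(1, 0)} | set(gens)
    while True:
        T = {mul(p, q) for p in S for q in S}
        if T <= S:
            return frozenset(S)
        S |= T

# Step 2: regular subgroups of order |G|.  For a subgroup of order n acting on
# n points, transitivity already forces freeness by orbit-stabilizer.
subs = {closure([p, q]) for p in hol for q in hol}
regs = [S for S in subs
        if len(S) == n and {act(p, 0) for p in S} == set(range(n))]

def conj(S, t):         # the conjugate subgroup t^{-1} S t
    ti = next(u for u in hol if mul(t, u) == (1, 0))
    return frozenset(mul(mul(ti, p), t) for p in S)

classes = {frozenset(conj(S, t) for t in hol) for S in regs}
print(len(regs), len(classes))  # -> 2 2
```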
We also have the following algorithm for finite $p$-groups.
\begin{alg}\label{alg2}
For a finite $p$-group $(G, +)$, the following sequence of computations constructs all non-isomorphic skew left braces $(G, +, \circ):$
\begin{enumerate}
\item Compute $\operatorname{Hol} (G, +)$.
\item Compute a representative $\mathcal{S}$ of the conjugacy class of Sylow $p$-subgroups of $\operatorname{Hol} (G, +)$.
\item Compute the list of regular subgroups of $\mathcal{S}$ of order $|G|$ up to conjugation by the elements of $\operatorname{Aut} (G)$.
\item For each representative $\mathcal{G}$ of regular subgroups of $\mathcal{S}$ under conjugation action of $\operatorname{Aut} (G)$, construct the map $\chi : G \to \mathcal{G}$ given by $g \mapsto \big(f, f(g)^{-1}\big)$, where $ \big(f, f(g)^{-1}\big) \in \mathcal{G}$. The triple $(\mathcal{G}, G, \chi)$ yields a skew left brace $(G, +, \circ)$ with $\circ: G \times G \to G$ given by $g_1 \circ g_2 = \chi^{-1}\big(\chi(g_1)\chi(g_2)\big)$ for all $g_1, g_2 \in G$.
\end{enumerate}
\end{alg}
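Algorithm \ref{alg2} can likewise be traced on a toy case. The sketch below is illustrative only: it assumes $G = \mathbb{Z}/2 \times \mathbb{Z}/2$, for which $\operatorname{Hol}(G) \cong S_4$ has order $24$ and a Sylow $2$-subgroup has order $8$. One Sylow $2$-subgroup is built directly from the translations and the coordinate swap, after which its regular subgroups are counted up to $\operatorname{Aut}(G)$-conjugacy.

```python
# Illustrative run of Algorithm 2 for G = Z/2 x Z/2 (an assumed toy case).
# Points 0..3 encode the vectors (0,0), (0,1), (1,0), (1,1) of F_2^2.
from itertools import combinations, product

pts = range(4)
vec = {i: (i >> 1, i & 1) for i in pts}
idx = {v: i for i, v in vec.items()}

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

# Aut(G) = GL(2, F_2): the six invertible 2x2 matrices (a, b, c, d) over F_2.
mats = [M for M in product(range(2), repeat=4)
        if (M[0] * M[3] - M[1] * M[2]) % 2 == 1]

def apply(M, v):
    return ((M[0]*v[0] + M[1]*v[1]) % 2, (M[2]*v[0] + M[3]*v[1]) % 2)

def perm(M, b):            # the holomorph element x -> M x + b as a permutation
    return tuple(idx[add(apply(M, vec[x]), b)] for x in pts)

identity = tuple(pts)
swap = (0, 1, 1, 0)        # the matrix exchanging the two coordinates

# Step 2: a Sylow 2-subgroup of Hol(G): translations and the coordinate swap.
sylow = {perm(M, b) for M in [(1, 0, 0, 1), swap] for b in vec.values()}

def compose(p, q):
    return tuple(p[q[x]] for x in pts)

def inverse(p):
    inv = [0] * 4
    for x in pts:
        inv[p[x]] = x
    return tuple(inv)

# Step 3: regular subgroups of the Sylow subgroup, up to Aut(G)-conjugacy.
regular = []
for rest in combinations(sorted(sylow - {identity}), 3):
    H = set(rest) | {identity}
    if all(compose(p, q) in H for p in H for q in H) \
            and {p[0] for p in H} == set(pts):
        regular.append(frozenset(H))

auts = [perm(M, (0, 0)) for M in mats]
classes = []
for H in regular:
    orbit = {frozenset(compose(compose(a, p), inverse(a)) for p in H)
             for a in auts}
    if not any(H in c for c in classes):
        classes.append(orbit)

print(len(classes))   # the skew braces with additive group Z/2 x Z/2
```

The two classes recovered here are again the expected count of skew braces with Klein four additive group.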
Notice that for enumerating skew left braces with finite additive $p$-group $(G, +)$ we only need the first three steps of this algorithm. We conclude this section by reproducing the proof of the following fact.
\begin{prop}
Let $\mathcal{S}$ be a Sylow $p$-subgroup of $\operatorname{Hol} (G)$ for a finite $p$-group $G$. Then the union of the $\operatorname{Aut} (G)$-conjugacy classes of the regular subgroups of $\mathcal{S}$ constitutes the set of all regular subgroups of $\operatorname{Hol} (G)$.
\end{prop}
\begin{proof}
Let $\mathcal{G}$ be an arbitrary regular subgroup of $\operatorname{Hol} (G)$. Then $\mathcal{G}$, being of order $|G|$, is a subgroup of some Sylow $p$-subgroup $\mathcal{S}'$ of $\operatorname{Hol} (G)$. By Sylow theory, we know that there exists an element $(\phi, y) \in \operatorname{Hol} (G)$ such that $\mathcal{S} = (\mathcal{S}')^{(\phi,y)}$. Thus $\mathcal{G}^{(\phi,y)}$ is a subgroup of $\mathcal{S}$. It follows from the proof of Lemma \ref{key-lemma} that $\mathcal{G}^{(\phi,y)} = \mathcal{G}^{(\beta, 1)}$ for some $\beta \in \operatorname{Aut} (G)$. A routine calculation now shows that $\mathcal{G}^{(\beta, 1)}$ is a regular subgroup of $\mathcal{S}$. Indeed, if $x^{\big((\psi, z)^{(\beta,1)}\big)} = x$ for some $(\psi, z) \in \mathcal{G}$ and $x \in G$, then it follows that $\beta(x)^{(\psi, z)} = \beta(x)$, which forces $(\psi, z)$ to be the identity, since $\mathcal{G}$ acts freely on $G$. This proves that the action of $\mathcal{G}^{(\beta, 1)}$ is free on $G$. That the action is transitive is left as an easy exercise, and the proof is complete.
\end{proof}
We remark that, along the lines of the proof of the preceding proposition, we can easily show that for an arbitrary finite group $G$, $\operatorname{Hol} (G)$ acts on $\operatorname{Reg} (G)$ by conjugation. We have used this fact above without proof as it is well known.
\section{Computations}
Throughout this section, for a given positive integer $n$, $b(n)$ and $s(n)$, respectively, denote the total number of left braces and skew left braces of order $n$. For each such $n$, $pf(n)$ stands for the prime factorization of $n$. Computations in this section are carried out using Algorithm \ref{alg1}. The following table remedies some gaps in the list obtained in \cite{LV2019}.
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{|c | c c c c c c c c|}
\hline
$n$ & $32$ & $54$ & $64$ & $72$ & $80$ & $81$ & $96$ & $108$ \\
$b(n)$ & $25281$ & $80$ & $?$ & $489$ & $1985$ & $804$ & $195971$ & $494$ \\
$s(n)$ & $1223061$ & $1028$ & $?$ & $17790$ & $74120$ & $8436$ & $?$ & $11223$ \\
\hline
$n$ & $112$ & $120$ & $126$ & $128$ & $136$ & $144$ & $147$ & $150$\\
$b(n)$ & $1671$ & $395$ & $36$ & $?$ & $108$ & $10215$ & $9$ & $19$\\
$s(n)$ & $65485$ & $22711$ & $990$ & $?$ & $986$ & $3013486$ & $123$ & $401$\\
\hline
$n$ & $152$ & $158$ & $160$ & $162$ & $164$ & $165$ & $166$ & $168$\\
$b(n)$ & $90$ & $2$ & $209513$ & $1374$ & $11$ & $2$ & $2$ & $443$\\
$s(n)$ & $800$ & $6$ & $?$ & $45472$ & $43$ & $12$ & $6$ & $28505$\\
\hline
\end{tabular}
\hspace{1cm}
\caption{Some missing values from \cite{LV2019}}\label{Table1}
\end{small}
\end{table}
We now enumerate $b(n)$ and $s(n)$ for $n \le 868$, except in some cases where the computations are too large to be handled by our computer. We have given a lower bound on the number of skew left braces of order $3^5$ by taking into account the additive groups with Group Id's $[243, m]$, where $m = 1, \ldots, 31, 33, 37, 38, 40, 48, \ldots, 63, 65, 66, 67$. By the Group Id we mean the group identification of a group of given order in The Small Groups Library \cite{SmallGrp} implemented in GAP and MAGMA.
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{|c | c c c c c c c c c c|}
\hline
$n$ & $169$ & $170$ & $171$ & $172$ & $173$ & $174$ & $175$ & $176$ & $177$ & $178$\\
$b(n)$ & $4$ & $4$ & $14$ & $9$ & $1$ & $4$ & $4$ & $1670$ & $1$ & $2$\\
$s(n)$ & $4$ & $36$ & $80$ & $29$ & $1$ & $36$ & $4$ & $65466$ & $1$ & $6$\\
\hline
$n$ & $179$ & $180$ & $181$ & $182$ & $183$ & $184$ & $185$ & $186$ & $187$ & $188$\\
$b(n)$ & $1$ & $129$ & $1$ & $4$ & $2$ & $90$ & $1$ & $6$ & $1$ & $9$\\
$s(n)$ & $1$ & $5849$ & $1$ & $36$ & $8$ & $800$ & $1$ & $78$ & $1$ & $29$\\
\hline
$n$ & $189$ & $190$ & $191$ & $192$ & $193$ & $194$ & $195$ & $196$ & $197$ & $198$\\
$b(n)$ & $165$ & $4$ & $1$ & $?$ & $1$ & $2$ & $2$ & $41$ & $1$ & $16$\\
$s(n)$ & $4560$ & $36$ & $1$ & $?$ & $1$ & $6$ & $8$ & $389$ & $1$ & $294$\\
\hline
$n$ & $199$ & $200$ & $201$ & $202$ & $203$ & $204$ & $205$ & $206$ & $207$ & $208$\\
$b(n)$ & $1$ & $568$ & $2$ & $2$ & $2$ & $28$ & $2$ & $2$ & $4$ & $1984$\\
$s(n)$ & $1$ & $23471$ & $8$ & $6$ & $16$ & $410$ & $12$ & $6$ & $4$ & $74104$\\
\hline
$n$ & $209$ & $210$ & $211$ & $212$ & $213$ & $214$ & $215$ & $216$ & $217$ & $218$\\
$b(n)$ & $1$ & $12$ & $1$ & $11$ & $1$ & $2$ & $1$ & $5308$ & $1$ & $2$\\
$s(n)$ & $1$ & $468$ & $1$ & $43$ & $1$ & $6$ & $1$ & $523768$ & $1$ & $6$\\
\hline
$n$ & $219$ & $220$ & $221$ & $222$ & $223$ & $224$ & $225$ & $226$ & $227$ & $228$\\
$b(n)$ & $2$ & $36$ & $1$ & $6$ & $1$ & $195483$ & $21$ & $2$ & $1$ & $34$ \\
$s(n)$ & $8$ & $702$ & $1$ & $78$ & $1$ & $?$ & $61$ & $6$ & $1$ & $606$\\
\hline
$n$ & $229$ & $230$ & $231$ & $232$ & $233$ & $234$ & $235$ & $236$ & $237$ & $238$\\
$b(n)$ & $1$ & $4$ & $2$ & $106$ & $1$ & $36$ & $1$ & $9$ & $2$ & $4$\\
$s(n)$ & $1$ & $36$ & $8$ & $944$ & $1$ & $990$ & $1$ & $29$ & $8$ & $36$\\
\hline
$n$ & $239$ & $240$ & $241$ & $242$ & $243$ & $244$ & $245$ & $246$ & $247$ & $248$\\
$b(n)$ & $1$ & $10518$ & $1$ & $8$ & $598065$ & $11$ & $4$ & $4$ & $1$ & $90$\\
$s(n)$ & $1$ & $4642485$ & $1$ & $57$ & $> 27447027$ & $43$ & $4$ & $36$ & $1$ & $800$\\
\hline
$n$ & $249$ & $250$ & $251$ & $252$ & $253$ & $254$ & $255$ & $256$ & $257$ & $258$\\
$b(n)$ & $1$ & $104$ & $1$ & $229$ & $2$ & $2$ & $1$ & $?$ & $1$ & $6$\\
$s(n)$ & $1$ & $1492$ & $1$ & $11541$ & $24$ & $6$ & $1$ & $?$ & $1$ & $78$\\
\hline
$n$ & $259$ & $260$ & $261$ & $262$ & $263$ & $264$ & $265$ & $266$ & $267$ & $268$\\
$b(n)$ & $1$ & $35$ & $4$ & $2$ & $1$ & $345$ & $1$ & $4$ & $1$ & $9$\\
$s(n)$ & $1$ & $739$ & $4$ & $6$ & $1$ & $20231$ & $1$ & $36$ & $1$ & $29$\\
\hline
$n$ & $269$ & $270$ & $271$ & $272$ & $273$ & $274$ & $275$ & $276$ & $277$ & $278$\\
$b(n)$ & $1$ & $160$ & $1$ & $2014$ & $5$ & $2$ & $13$ & $24$ & $1$ & $2$\\
$s(n)$ & $1$ & $6168$ & $1$ & $74960$ & $113$ & $6$ & $93$ & $324$ & $1$ & $6$\\
\hline
$n$ & $279$ & $280$ & $281$ & $282$ & $283$ & $284$ & $285$ & $286$ & $287$ & $288$\\
$b(n)$ & $11$ & $385$ & $1$ & $4$ & $1$ & $9$ & $2$ & $4$ & $1$ & $1392959$\\
$s(n)$ & $47$ & $22295$ & $1$ & $36$ & $1$ & $29$ & $8$ & $36$ & $1$ & $?$\\
\hline
$n$ & $289$ & $290$ & $291$ & $292$ & $293$ & $294$ & $295$ & $296$ & $297$ & $298$\\
$b(n)$ & $4$ & $4$ & $2$ & $11$ & $1$ & $31$ & $1$ & $106$ & $37$ & $2$\\
$s(n)$ & $4$ & $36$ & $8$ & $43$ & $1$ & $2152$ & $1$ & $944$ & $101$ & $6$\\
\hline
$n$ & $299$ & $300$ & $301$ & $302$ & $303$ & $304$ & $305$ & $306$ & $307$ & $308$\\
$b(n)$ & $1$ & $152$ & $2$ & $2$ & $1$ & $1670$ & $2$ & $16$ & $1$ & $23$\\
$s(n)$ & $1$ & $8222$ & $16$ & $6$ & $1$ & $65466$ & $12$ & $294$ & $1$ & $311$\\
\hline
$n$ & $309$ & $310$ & $311$ & $312$ & $313$ & $314$ & $315$ & $316$ & $317$ & $318$\\
$b(n)$ & $2$ & $6$ & $1$ & $507$ & $1$ & $2$ & $11$ & $9$ & $1$ & $4$\\
$s(n)$ & $8$ & $94$ & $1$ & $32075$ & $1$ & $6$ & $47$ & $29$ & $1$ & $36$\\
\hline
$n$ & $319$ & $320$ & $321$ & $322$ & $323$ & $324$ & $325$ & $326$ & $327$ & $328$\\
$b(n)$ & $1$ & $?$ & $1$ & $4$ & $1$ & $10225$ & $4$ & $2$ & $2$ & $108$\\
$s(n)$ & $1$ & $?$ & $1$ & $36$ & $1$ & $?$ & $4$ & $6$ & $8$ & $986$\\
\hline
$n$ & $329$ & $330$ & $331$ & $332$ & $333$ & $334$ & $335$ & $336$ & $337$ & $338$\\
$b(n)$ & $1$ & $12$ & $1$ & $9$ & $14$ & $2$ & $1$ & $10990$ & $1$ & $8$\\
$s(n)$ & $1$ & $564$ & $1$ & $29$ & $80$ & $6$ & $1$ & $5247711$ & $1$ & $59$\\
\hline
$n$ & $339$ & $340$ & $341$ & $342$ & $343$ & $344$ & $345$ & $346$ & $347$ & $348$\\
$b(n)$ & $1$ & $35$ & $1$ & $42$ & $61$ & $90$ & $1$ & $2$ & $1$ & $28$\\
$s(n)$ & $1$ & $739$ & $1$ & $1164$ & $373$ & $800$ & $1$ & $6$ & $1$ & $410$\\
\hline
\end{tabular}
\hspace{1cm}
\caption{Further Computations}\label{Table2}
\end{small}
\end{table}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{|c | c c c c c c c c c c|}
\hline
$n$ & $349$ & $350$ & $351$ & $352$ & $353$ & $354$ & $355$ & $356$ & $357$ & $358$\\
$b(n)$ & $1$ & $16$ & $166$ & $195479$ & $1$ & $4$ & $2$ & $11$ & $2$ & $2$\\
$s(n)$ & $1$ & $306$ & $4591$ & $?$ & $1$ & $36$ & $12$ & $43$ & $8$ & $6$\\
\hline
$n$ & $359$ & $360$ & $361$ & $362$ & $363$ & $364$ & $365$ & $366$ & $367$ & $368$\\
$b(n)$ & $1$ & $2035$ & $4$ & $2$ & $5$ & $27$ & $1$ & $6$ & $1$ & $1670$\\
$s(n)$ & $1$ & $535713$ & $4$ & $6$ & $20$ & $395$ & $1$ & $78$ & $1$ & $65466$\\
\hline
$n$ & $369$ & $370$ & $371$ & $372$ & $373$ & $374$ & $375$ & $376$ & $377$ & $378$\\
$b(n)$ & $4$ & $4$ & $1$ & $34$ & $1$ & $4$ & $54$ & $90$ & $1$ & $548$\\
$s(n)$ & $4$ & $36$ & $1$ & $606$ & $1$ & $36$ & $253$ & $800$ & $1$ & $47244$\\
\hline
$n$ & $379$ & $380$ & $381$ & $382$ & $383$ & $384$ & $385$ & $386$ & $387$ & $388$\\
$b(n)$ & $1$ & $27$ & $2$ & $2$ & $1$ & $?$ & $2$ & $2$ & $11$ & $11$\\
$s(n)$ & $1$ & $395$ & $8$ & $6$ & $1$ & $?$ & $12$ & $6$ & $47$ & $43$\\
\hline
$n$ & $389$ & $390$ & $391$ & $392$ & $393$ & $394$ & $395$ & $396$ & $397$ & $398$\\
$b(n)$ & $1$ & $12$ & $1$ & $463$ & $1$ & $2$ & $1$ & $111$ & $1$ & $2$\\
$s(n)$ & $1$ & $468$ & $1$ & $18078$ & $1$ & $6$ & $1$ & $4985$ & $1$ & $6$\\
\hline
$n$ & $399$ & $400$ & $401$ & $402$ & $403$ & $404$ & $405$ & $406$ & $407$ & $408$\\
$b(n)$ & $5$ & $12744$ & $1$ & $6$ & $1$ & $11$ & $805$ & $6$ & $1$ & $399$\\
$s(n)$ & $113$ & $3618636$ & $1$ & $78$ & $1$ & $43$ & $8453$ & $110$ & $1$ & $22923$\\
\hline
$n$ & $409$ & $410$ & $411$ & $412$ & $413$ & $414$ & $415$ & $416$ & $417$ & $418$\\
$b(n)$ & $1$ & $6$ & $1$ & $9$ & $1$ & $16$ & $1$ & $209507$ & $2$ & $4$\\
$s(n)$ & $1$ & $94$ & $1$ & $29$ & $1$ & $294$ & $1$ & $?$ & $8$ & $36$\\
\hline
$n$ & $419$ & $420$ & $421$ & $422$ & $423$ & $424$ & $425$ & $426$ & $427$ & $428$\\
$b(n)$ & $1$ & $104$ & $1$ & $2$ & $4$ & $106$ & $4$ & $4$ & $1$ & $9$\\
$s(n)$ & $1$ & $9052$ & $1$ & $6$ & $4$ & $944$ & $4$ & $36$ & $1$ & $29$\\
\hline
$n$ & $429$ & $430$ & $431$ & $432$ & $433$ & $434$ & $435$ & $436$ & $437$ & $438$\\
$b(n)$ & $2$ & $4$ & $1$ & $115708$ & $1$ & $4$ & $1$ & $11$ & $1$ & $6$\\
$s(n)$ & $8$ & $36$ & $1$ & $?$ & $1$ & $36$ &$1$ & $43$ & $1$ & $78$\\
\hline
$n$ & $439$ & $440$ & $441$ & $442$ & $443$ & $444$ & $445$ & $446$ & $447$ & $448$\\
$b(n)$ & $1$ & $474$ & $55$ & $4$ & $1$ & $40$ & $1$ & $2$ & $1$ & $?$\\
$s(n)$ & $1$ & $31970$ & $1110$ & $36$ & $1$ & $782$ & $1$ & $6$ & $1$ & $?$\\
\hline
$n$ & $449$ & $450$ & $451$ & $452$ & $453$ & $454$ & $455$ & $456$ & $457$ & $458$\\
$b(n)$ & $1$ & $82$ & $1$ & $11$ & $2$ & $2$ & $1$ & $441$ & $1$ & $2$\\
$s(n)$ & $1$ & $3797$ & $1$ & $43$ & $8$ & $6$ & $1$ & $28447$ & $1$ & $6$\\
\hline
$n$ & $459$ & $460$ & $461$ & $462$ & $463$ & $464$ & $465$ & $466$ & $467$ & $468$\\
$b(n)$ & $37$ & $27$ & $1$ & $12$ & $1$ & $1984$ & $4$ & $2$ & $1$ & $267$\\
$s(n)$ & $101$ & $395$ & $1$ & $468$ & $1$ & $74104$ & $66$ & $6$ & $1$ & $13941$\\
\hline
$n$ & $469$ & $470$ & $471$ & $472$ & $473$ & $474$ & $475$ & $476$ & $477$ & $478$\\
$b(n)$ & $1$ & $4$ & $2$ & $90$ & $1$ & $6$ & $4$ & $27$ & $4$ & $2$\\
$s(n)$ & $1$ & $36$ & $8$ & $800$ & $1$ & $78$ & $4$ & $395$ & $4$ & $6$\\
\hline
$n$ & $479$ & $480$ & $481$ & $482$ & $483$ & $484$ & $485$ & $486$ & $487$ & $488$\\
$b(n)$ & $1$ & $?$ & $1$ & $2$ & $2$ & $41$ & $1$ & $639775$ & $1$ & $106$\\
$s(n)$ & $1$ & $?$ & $1$ & $6$ & $8$ & $421$ & $1$ & $?$ & $1$ & $944$\\
\hline
$n$ & $489$ & $490$ & $491$ & $492$ & $493$ & $494$ & $495$ & $496$ & $497$ & $498$\\
$b(n)$ & $2$ & $16$ & $1$ & $28$ & $1$ & $4$ & $8$ & $1670$ & $2$ & $4$\\
$s(n)$ & $8$ & $318$ & $1$ & $410$ & $1$ & $36$ & $48$ & $65466$ & $16$ & $36$\\
\hline
$n$ & $499$ & $500$ & $501$ & $502$ & $503$ & $504$ & $505$ & $506$ & $507$ & $508$\\
$b(n)$ & $1$ & $634$ & $1$ & $2$ & $1$ & $3249$ & $2$ & $6$ & $9$ & $9$\\
$s(n)$ & $1$ & $21252$ & $1$ & $6$ & $1$ & $871013$ & $12$ & $142$ & $135$ & $29$\\
\hline
$n$ & $509$ & $510$ & $511$ & $512$ & $513$ & $514$ & $515$ & $516$ & $517$ & $518$\\
$b(n)$ & $1$ & $8$ & $1$ & $?$ & $189$ & $2$ & $1$ & $34$ & $1$ & $4$\\
$s(n)$ & $1$ & $216$ & $1$ & $?$ & $5055$ & $6$ & $1$ & $606$ & $1$ & $36$\\
\hline
$n$ & $519$ & $520$ & $521$ & $522$ & $523$ & $524$ & $525$ & $526$ & $527$ & $528$\\
$b(n)$ & $1$ & $484$ & $1$ & $16$ & $1$ & $9$ & $10$ & $2$ & $1$ & $9274$\\
$s(n)$ & $1$ & $28714$ & $1$ & $294$ & $1$ & $29$ & $112$ & $6$ & $1$ & $4381956$\\
\hline
\end{tabular}
\hspace{1cm}
\caption{Further Computations}\label{Table3}
\end{small}
\end{table}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{|c | c c c c c c c c c c|}
\hline
$n$ & $529$ & $530$ & $531$ & $532$ & $533$ & $534$ & $535$ & $536$ & $537$ & $538$\\
$b(n)$ & $4$ & $4$ & $4$ & $23$ & $1$ & $4$ & $1$ & $90$ & $1$ & $2$\\
$s(n)$ & $4$ & $36$ & $4$ & $311$ & $1$ & $36$ & $1$ & $800$ & $1$ & $6$\\
\hline
$n$ & $539$ & $540$ & $541$ & $542$ & $543$ & $544$ & $545$ & $546$ & $547$ & $548$\\
$b(n)$ & $4$ & $1342$ & $1$ & $2$ & $2$ & $210043$ & $1$ & $24$ & $1$ & $11$\\
$s(n)$ & $4$ & $148151$ & $1$ & $6$ & $8$ & $?$ & $1$ & $2664$ & $1$ & $43$\\
\hline
$n$ & $549$ & $550$ & $551$ & $552$ & $553$ & $554$ & $555$ & $556$ & $557$ & $558$\\
$b(n)$ & $11$ & $40$ & $1$ & $345$ & $1$ & $2$ & $2$ & $9$ & $1$ & $36$\\
$s(n)$ & $47$ & $1370$ & $1$ & $20231$ & $1$ & $6$ & $8$ & $29$ & $1$ & $990$\\
\hline
$n$ & $559$ & $560$ & $561$ & $562$ & $563$ & $564$ & $565$ & $566$ & $567$ & $568$\\
$b(n)$ & $1$ & $10423$ & $1$ & $2$ & $1$ & $24$ & $1$ & $2$ & $7196$ & $90$\\
$s(n)$ & $1$ & $4633376$ & $1$ & $6$ & $1$ & $324$ & $1$ & $6$ & $2253564$ & $800$\\
\hline
$n$ & $569$ & $570$ & $571$ & $572$ & $573$ & $574$ & $575$ & $576$ & $577$ & $578$\\
$b(n)$ & $1$ & $12$ & $1$ & $27$ & $1$ & $4$ & $4$ & $?$ & $1$ & $8$\\
$s(n)$ & $1$ & $468$ & $1$ & $395$ & $1$ & $36$ & $4$ & $?$ & $1$ & $63$\\
\hline
$n$ & $579$ & $580$ & $581$ & $582$ & $583$ & $584$ & $585$ & $586$ & $587$ & $588$\\
$b(n)$ & $2$ & $35$ & $1$ & $6$ & $1$ & $108$ & $11$ & $2$ & $1$ & $202$\\
$s(n)$ & $8$ & $739$ & $1$ & $78$ & $1$ & $986$ & $47$ & $6$ & $1$ & $21836$\\
\hline
$n$ & $589$ & $590$ & $591$ & $592$ & $593$ & $594$ & $595$ & $596$ & $597$ & $598$\\
$b(n)$ & $1$ & $4$ & $1$ & $1984$ & $1$ & $160$ & $1$ & $11$ & $2$ & $4$\\
$s(n)$ & $1$ & $36$ & $1$ & $74104$ & $1$ & $6168$ & $1$ & $43$ & $8$ & $36$\\
\hline
$n$ & $599$ & $600$ & $601$ & $602$ & $603$ & $604$ & $605$ & $606$ & $607$ & $608$\\
$b(n)$ & $1$ & $2413$ & $1$ & $6$ & $11$ & $9$ & $10$ & $4$ & $1$ & $195479$\\
$s(n)$ & $1$ & $659897$ & $1$ & $110$ & $47$ & $29$ & $409$ & $36$ & $1$ & $?$\\
\hline
$n$ & $609$ & $610$ & $611$ & $612$ & $613$ & $614$ & $615$ & $616$ & $617$ & $618$\\
$b(n)$ & $3$ & $6$ & $1$ & $129$ & $1$ & $2$ & $2$ & $335$ & $1$ & $6$\\
$s(n)$ & $25$ & $94$ & $1$ & $5835$ & $1$ & $6$ & $12$ & $19885$ & $1$ & $78$\\
\hline
$n$ & $619$ & $620$ & $621$ & $622$ & $623$ & $624$ & $625$ & $626$ & $627$ & $628$\\
$b(n)$ & $1$ & $36$ & $37$ & $2$ & $1$ & $12547$ & $2308$ & $2$ & $2$ & $11$\\
$s(n)$ & $1$ & $702$ & $101$ & $6$ & $1$ & $5595183$ & $69032$ & $6$ & $8$ & $43$\\
\hline
$n$ & $629$ & $630$ & $631$ & $632$ & $633$ & $634$ & $635$ & $636$ & $637$ & $638$\\
$b(n)$ & $1$ & $72$ & $1$ & $90$ & $2$ & $2$ & $1$ & $28$ & $4$ & $4$\\
$s(n)$ & $1$ & $5940$ & $1$ & $800$ & $8$ & $6$ & $1$ & $410$ & $4$ & $36$\\
\hline
$n$ & $639$ & $640$ & $641$ & $642$ & $643$ & $644$ & $645$ & $646$ & $647$ & $648$\\
$b(n)$ & $4$ & $?$ & $1$ & $4$ & $1$ & $23$ & $2$ & $4$ & $1$ & $91071$\\
$s(n)$ & $4$ & $?$ & $1$ & $36$ & $1$ & $311$ & $8$ & $36$ & $1$ & $?$\\
\hline
$n$ & $649$ & $650$ & $651$ & $652$ & $653$ & $654$ & $655$ & $656$ & $657$ & $658$\\
$b(n)$ & $1$ & $16$ & $5$ & $9$ & $1$ & $6$ & $2$ & $2010$ & $14$ & $4$\\
$s(n)$ & $1$ & $306$ & $113$ & $29$ & $1$ & $78$ & $12$ & $74860$ & $80$ & $36$\\
\hline
$n$ & $659$ & $660$ & $661$ & $662$ & $663$ & $664$ & $665$ & $666$ & $667$ & $668$\\
$b(n)$ & $1$ & $100$ & $1$ & $2$ & $2$ & $90$ & $1$ & $42$ & $1$ & $9$\\
$s(n)$ & $1$ & $9346$ & $1$ & $6$ & $8$ & $800$ & $1$ & $1164$ & $1$ & $29$\\
\hline
$n$ & $669$ & $670$ & $671$ & $672$ & $673$ & $674$ & $675$ & $676$ & $677$ & $678$\\
$b(n)$ & $2$ & $4$ & $1$ & $?$ & $1$ & $2$ & $232$ & $51$ & $1$ & $4$\\
$s(n)$ & $8$ & $36$ & $1$ & $?$ & $1$ & $6$ & $3682$ & $791$ & $1$ & $36$\\
\hline
$n$ & $679$ & $680$ & $681$ & $682$ & $683$ & $684$ & $685$ & $686$ & $687$ & $688$\\
$b(n)$ & $1$ & $492$ & $1$ & $4$ & $1$ & $259$ & $1$ & $128$ & $2$ & $1670$\\
$s(n)$ & $1$ & $29698$ & $1$ & $36$ & $1$ & $12723$ & $1$ & $2084$ & $8$ & $65466$\\
\hline
$n$ & $689$ & $690$ & $691$ & $692$ & $693$ & $694$ & $695$ & $696$ & $697$ & $698$\\
$b(n)$ & $2$ & $8$ & $1$ & $11$ & $11$ & $2$ & $1$ & $395$ & $1$ & $2$\\
$s(n)$ & $28$ & $216$ & $1$ & $43$ & $47$ & $6$ & $1$ & $22667$ & $1$ & $6$\\
\hline
$n$ & $699$ & $700$ & $701$ & $702$ & $703$ & $704$ & $705$ & $706$ & $707$ & $708$\\
$b(n)$ & $1$ & $126$ & $1$ & $550$ & $1$ & $?$ & $1$ & $2$ & $1$ & $24$\\
$s(n)$ & $1$ & $7102$ & $1$ & $47374$ & $1$ & $?$ & $1$ & $6$ & $1$ & $324$\\
\hline
\end{tabular}
\hspace{1cm}
\caption{Further Computations}\label{Table4}
\end{small}
\end{table}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{|c | c c c c c c c c c c|}
\hline
$n$ & $709$ & $710$ & $711$ & $712$ & $713$ & $714$ & $715$ & $716$ & $717$ & $718$\\
$b(n)$ & $1$ & $6$ & $11$ & $108$ & $1$ & $12$ & $2$ & $9$ & $1$ & $2$\\
$s(n)$ & $1$ & $94$ & $47$ & $986$ & $1$ & $468$ & $12$ & $29$ & $1$ & $6$\\
\hline
$n$ & $719$ & $720$ & $721$ & $722$ & $723$ & $724$ & $725$ & $726$ & $727$ & $728$\\
$b(n)$ & $1$ & $65074$ & $1$ & $8$ & $2$ & $11$ & $4$ & $19$ & $1$ & $385$\\
$s(n)$ & $1$ & $?$ & $1$ & $65$ & $8$ & $43$ & $4$ & $466$ & $1$ & $22295$\\
\hline
$n$ & $729$ & $730$ & $731$ & $732$ & $733$ & $734$ & $735$ & $736$ & $737$ & $738$\\
$b(n)$ & $?$ & $4$ & $1$ & $40$ & $1$ & $2$ & $9$ & $195479$ & $2$ & $16$\\
$s(n)$ & $?$ & $36$ & $1$ & $782$ & $1$ & $6$ & $123$ & $?$ & $24$ & $294$\\
\hline
$n$ & $739$ & $740$ & $741$ & $742$ & $743$ & $744$ & $745$ & $746$ & $747$ & $748$\\
$b(n)$ & $1$ & $35$ & $5$ & $4$ & $1$ & $441$ & $1$ & $2$ & $4$ & $27$\\
$s(n)$ & $1$ & $739$ & $113$ & $36$ & $1$ & $28447$ & $1$ & $6$ & $4$ & $395$\\
\hline
$n$ & $749$ & $750$ & $751$ & $752$ & $753$ & $754$ & $755$ & $756$ & $757$ & $758$\\
$b(n)$ & $1$ & $224$ & $1$ & $1670$ & $1$ & $4$ & $2$ & $3757$ & $1$ & $2$\\
$s(n)$ & $1$ & $10001$ & $1$ & $65466$ & $1$ & $36$ & $12$ & $794193$ & $1$ & $6$\\
\hline
$n$ & $759$ & $760$ & $761$ & $762$ & $763$ & $764$ & $765$ & $766$ & $767$ & $768$\\
$b(n)$ & $2$ & $384$ & $1$ & $6$ & $1$ & $9$ & $4$ & $2$ & $1$ & $?$\\
$s(n)$ & $24$ & $22278$ & $1$ & $78$ & $1$ & $29$ & $4$ & $6$ & $1$ & $?$\\
\hline
$n$ & $769$ & $770$ & $771$ & $772$ & $773$ & $774$ & $775$ & $776$ & $777$ & $778$\\
$b(n)$ & $1$ & $12$ & $1$ & $11$ & $1$ & $36$ & $13$ & $108$ & $5$ & $2$\\
$s(n)$ & $1$ & $564$ & $1$ & $43$ & $1$ & $990$ & $93$ & $986$ & $113$ & $6$\\
\hline
$n$ & $779$ & $780$ & $781$ & $782$ & $783$ & $784$ & $785$ & $786$ & $787$ & $788$\\
$b(n)$ & $1$ & $128$ & $1$ & $4$ & $37$ & $9998$ & $1$ & $4$ & $1$ & $11$\\
$s(n)$ & $1$ & $13320$ & $1$ & $36$ & $101$ & $3074483$ & $1$ & $36$ & $1$ & $43$\\
\hline
$n$ & $789$ & $790$ & $791$ & $792$ & $793$ & $794$ & $795$ & $796$ & $797$ & $798$\\
$b(n)$ & $1$ & $4$ & $2$ & $1771$ & $1$ & $2$ & $1$ & $9$ & $1$ & $24$\\
$s(n)$ & $1$ & $36$ & $16$ & $484183$ & $1$ & $6$ & $1$ & $29$ & $1$ & $2664$\\
\hline
$n$ & $799$ & $800$ & $801$ & $802$ & $803$ & $804$ & $805$ & $806$ & $807$ & $808$\\
$b(n)$ & $1$ & $?$ & $4$ & $2$ & $1$ & $34$ & $1$ & $4$ & $1$ & $106$\\
$s(n)$ & $1$ & $?$ & $4$ & $6$ & $1$ & $606$ & $1$ & $36$ & $1$ & $944$\\
\hline
$n$ & $809$ & $810$ & $811$ & $812$ & $813$ & $814$ & $815$ & $816$ & $817$ & $818$\\
$b(n)$ & $1$ & $2751$ & $1$ & $38$ & $2$ & $4$ & $1$ & $10604$ & $1$ & $2$\\
$s(n)$ & $1$ & $272960$ & $1$ & $920$ & $8$ & $36$ & $1$ & $4658179$ & $1$ & $6$\\
\hline
$n$ & $819$ & $820$ & $821$ & $822$ & $823$ & $824$ & $825$ & $826$ & $827$ & $828$\\
$b(n)$ & $41$ & $46$ & $1$ & $4$ & $1$ & $90$ & $14$ & $4$ & $1$ & $111$\\
$s(n)$ & $1337$ & $1212$ & $1$ & $36$ & $1$ & $800$ & $105$ & $36$ & $1$ & $4985$\\
\hline
$n$ & $829$ & $830$ & $831$ & $832$ & $833$ & $834$ & $835$ & $836$ & $837$ & $838$\\
$b(n)$ & $1$ & $4$ & $2$ & $?$ & $4$ & $6$ & $1$ & $23$ & $165$ & $2$\\
$s(n)$ & $1$ & $36$ & $8$ & $?$ & $4$ & $78$ & $1$ & $311$ & $4560$ & $6$\\
\hline
$n$ & $839$ & $840$ & $841$ & $842$ & $843$ & $844$ & $845$ & $846$ & $847$ & $848$\\
$b(n)$ & $1$ & $1933$ & $4$ & $2$ & $1$ & $9$ & $4$ & $16$ & $4$ & $1984$ \\
$s(n)$ & $1$ & $878779$ & $4$ & $6$ & $1$ & $27$ & $4$ & $294$ & $4$ & $74104$ \\
\hline
$n$ & $849$ & $850$ & $851$ & $852$ & $853$ & $854$ & $855$ & $856$ & $857$ & $858$\\
$b(n)$ & $2$ & $16$ & $1$ & $24$ & $1$ & $4$ & $14$ & $90$ & $1$ & $12$\\
$s(n)$ & $8$ & $306$ & $1$ & $324$ & $1$ & $36$ & $80$ & $800$ & $1$ & $468$\\
\hline
$n$ & $859$ & $860$ & $861$ & $862$ & $863$ & $864$ & $865$ & $866$ & $867$ & $868$\\
$b(n)$ & $1$ & $27$ & $2$ & $2$ & $1$ & $?$ & $1$ & $2$ & $5$ & $23$\\
$s(n)$ & $1$ & $395$ & $8$ & $6$ & $1$ & $?$ & $1$ & $6$ & $26$ & $311$\\
\hline
\end{tabular}
\hspace{1cm}
\caption{Further Computations}\label{Table5}
\end{small}
\end{table}
We now record some partial computations considering specific additive groups of given orders.
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c | c|}
\hline
$Group \; Id$ & $[64,1]$ & $[64,2]$ & $[64,26]$ & $[64,50]$ & $[64,55]$ & $[64,83]$\\
$Number$ & $10$ & $11354$ & $2742$ & $142$ & $?$ & $734410$\\
\hline
\hline
$Group \; Id$ & $[64,183]$ & $[64,192]$ & $[64,246]$ & $[64,260]$ & $[64,267]$ & \\
$Number$ & $3124$ & $?$ & $253350$ & $2189661$ & $58558$ & \\
\hline
\end{tabular}
\hspace{1cm}
\caption{Enumerations of left braces of order 64}\label{Table6}
\end{small}
\end{table}
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c | c|}
\hline
$Group \; Id$ & $[480,4]$ & $[480,199]$ & $[480,212]$ & $[480,919]$ & $[480,934]$ & $[480,1180]$ & $[480,1213]$\\
$Number$ & $128$ & $?$ & $4928$ & $958965$ & $99970$ & $?$ & $39650$\\
\hline
\end{tabular}
\hspace{1cm}
\caption{Enumerations of left braces of order 480}\label{Table7}
\end{small}
\end{table}
\section{Conclusion and Conjectures}
We start by presenting a comparison of the time taken (in seconds) by \cite[Algorithm 5.1]{GV2017} and Algorithm \ref{alg1} for enumerating skew left braces of order 32, for selected additive groups which took a considerable amount of time on MAGMA.
\begin{table}[H]
\centering
\begin{small}
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c | c|}
\hline
$Group \; Id \; of \;the \;additive \;group$ & $[32,23]$ & $[32,24]$ & $[32,25]$ & $[32,28]$ & $[32,29]$ & $[32,30]$\\
$Number \;of \;skew \;brace\; structures$ & $39488$ & $70400$ &$138336$ & $138336$ & $138336$ & $137526$\\
$Time\; on \; Algorithm \; 5.1$ of \cite{GV2017} & $11238$ & $9808$ &$18720$ & $10193$ & $10083$ & $34005$\\
$Time \; on \; Algorithm \; \ref{alg1}$ & $539$ & $709$ & $1905$ & $4308$ & $3135$ & $4658$\\
\hline
\hline
$Group \; Id \; of \;the \;additive \;group$ & $[32,31]$ & $[32,32]$ & $[32,33]$ & $[32,45]$ & $[32,47]$ & $[32,51]$\\
$Number \;of \;skew \;brace\; structures$ & $70944$ & $69236$ & $91008$ & $8015$ & $7870$ & $744$\\
$Time\; on \; Algorithm \; 5.1$ of \cite{GV2017} & $14568$ & $18342$ & $17222$ & $130942$ & $28848$ &$\#$\\
$Time \; on \; Algorithm \; \ref{alg1}$ & $4797$ & $9302$ & $8869$ & $30$ & $68$ & $8$\\
\hline
\end{tabular}
\hspace{1cm}
\caption{Time comparison on skew left braces of order 32}\label{Table8}
\end{small}
\end{table}
$\#$ The program was stopped after running for more than a month without a result.
\vspace{.1in}
The data obtained above reveals that Algorithm \ref{alg1} is very expensive, in both memory and time, for handling the case of prime power orders. So one really needs a substitute for this algorithm. One may think of Algorithm \ref{alg2} as a substitute. Unfortunately, it requires the conjugacy classes of regular subgroups of a given Sylow $p$-subgroup (of the holomorph of a given finite $p$-group) to be computed in the whole holomorph, which is again very expensive. Although Algorithm \ref{alg2} is not very efficient as it stands, we hope that it can be improved or modified to handle the computations on skew braces of prime power orders more efficiently.
We now present some conjectures suggested by the data computed in the above tables. It is known from \cite{D2020} that for a prime integer $q \ge 5$,
$$
b(4q) =
\begin{cases}
9, &\text{if} \;\; q \equiv 3 \mod{4}\\
11, &\text{if } \;q \equiv 1 \mod{4}
\end{cases}
$$
and for prime integers $p$ and $q$ such that $q > p+1 > 3$,
$$
b(p^2q) =
\begin{cases}
4, &\text{if} \;\; p \nmid q-1\\
p+8, &\text{if} \;\; p \mid q-1 \; \text{and}\; p^2 \nmid q-1 \\
2p+8, &\text{if } \; p^2 \mid q-1.
\end{cases}
$$
For skew left braces, we have
\begin{conj} Let $p$ and $q$ be prime integers. If $q \ge 5$, then
$$
s(4q) =
\begin{cases}
29, &\text{if} \;\; q \equiv 3 \mod{4}\\
43, &\text{if } \; q \equiv 1 \mod{4}
\end{cases}
$$
and if $q > p+1 > 3$, then
$$
s(p^2q) =
\begin{cases}
4, &\text{if} \;\; p \nmid q-1\\
2p^2+7p+8, &\text{if} \;\; p \mid q-1 \;\ \text{and}\;\ p^2 \nmid q-1 \\
6p^2+6p+8, &\text{if } \; p^2 \mid q-1.
\end{cases}
$$
\end{conj}
For prime multiples of $8$ and $12$, we have
\begin{conj} Let $p \ge 11$ be a prime integer. Then
$$
b(8p) =
\begin{cases}
90, &\text{if} \;\; p \equiv 3,\;7 \mod{8}\\
106, &\text{if} \;\; p \equiv 5 \mod{8}\\
108, &\text{if } \; p \equiv 1 \mod{8}.
\end{cases}
$$
and
$$
s(8p) =
\begin{cases}
800, &\text{if} \;\; p \equiv 3,\;7 \mod{8}\\
944, &\text{if} \;\; p \equiv 5 \mod{8}\\
986, &\text{if } \; p \equiv 1 \mod{8}.
\end{cases}
$$
\end{conj}
\begin{conj} Let $p \ge 7$ be a prime integer. Then
$$
b(12p) =
\begin{cases}
24, &\text{if} \;\; p \equiv 11 \mod{12}\\
28, &\text{if} \;\; p \equiv 5 \mod{12}\\
34, &\text{if} \;\; p \equiv 7 \mod{12}\\
40, &\text{if } \; p \equiv 1 \mod{12}.
\end{cases}
$$
and
$$
s(12p) =
\begin{cases}
324, &\text{if} \;\; p \equiv 11 \mod{12}\\
410, &\text{if} \;\; p \equiv 5 \mod{12}\\
606, &\text{if} \;\; p \equiv 7 \mod{12}\\
782, &\text{if } \; p \equiv 1 \mod{12}.
\end{cases}
$$
\end{conj}
Skew left braces of order $pq$, $p < q$ being prime integers, have been constructed very recently in \cite{AB20}, where it is shown that $s(pq) = 1$ if $p \nmid q-1$ and $s(pq) = 2p+2$ otherwise. Going a step ahead, we propose the following enumeration formulas:
\begin{conj} Let $p$ and $q$ be prime integers such that $q > p \ge 3$. Then
$$
b(2pq) =
\begin{cases}
4, &\text{if} \;\; p \nmid q-1\\
6, &\text{if } \; p \mid q-1
\end{cases}
$$
and
$$
s(2pq) =
\begin{cases}
36, &\text{if} \;\; p \nmid q-1\\
8p+54, &\text{if} \;\; p \mid q-1.
\end{cases}
$$
\end{conj}
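As a sanity check (not a proof), the conjectured closed forms can be compared against values read off the tables above. The snippet below hard-codes a few tabulated $s(n)$ values together with the conjectured formulas for $s(8p)$, $s(12p)$ and $s(2pq)$; all of them agree.

```python
# Selected s(n) values copied from the tables above, keyed by n.
table_s = {152: 800, 232: 944, 328: 986,            # n = 8p
           276: 324, 348: 410, 228: 606, 444: 782,  # n = 12p
           190: 36, 186: 78, 310: 94}               # n = 2pq

def s_8p(p):        # conjectured s(8p) for a prime p >= 11
    return {1: 986, 3: 800, 5: 944, 7: 800}[p % 8]

def s_12p(p):       # conjectured s(12p) for a prime p >= 7
    return {1: 782, 5: 410, 7: 606, 11: 324}[p % 12]

def s_2pq(p, q):    # conjectured s(2pq) for primes q > p >= 3
    return 8 * p + 54 if (q - 1) % p == 0 else 36

checks = {152: s_8p(19), 232: s_8p(29), 328: s_8p(41),
          276: s_12p(23), 348: s_12p(29), 228: s_12p(19), 444: s_12p(37),
          190: s_2pq(5, 19), 186: s_2pq(3, 31), 310: s_2pq(5, 31)}

assert checks == table_s
print("all conjectured values match the tabulated ones")
```

Of course, agreement on finitely many orders only lends the conjectures plausibility.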
We close with the hope that readers will be able to use the extensive data produced above to formulate many more conjectures according to their own needs and interests.
\vspace{.2in}
\noindent{\it Acknowledgements.} The third named author thanks L. Vendramin for supplying MAGMA codes for computing skew left braces and for his useful comments on the introduction, and acknowledges the support of DST-RSF Grant INT/RUS/RSF/P-2. The first and second named authors acknowledge the support from the RFBR-18-01-0057. The authors thank the referee for suggesting useful modifications.
|
2,877,628,090,507 | arxiv | \section{Introduction}
The Zwicky Transient Facility (ZTF) is a three year photometric survey that uses a wide 47 deg$^{2}$ field of view camera on the Palomar 48-inch telescope with $g,r,i$ filters
\citep{B19a,B19b,G19,M19,D20,Z16}. During the first two years of the survey, 40\% of the time was public and was used to observe the available sky every three nights in $g$ and $r$ filters and the Galactic plane every night, while the rest of the time was divided between programs designed by the partnership (40\%) and Caltech (20\%). Besides its wide field and depth of coverage, one of the primary advantages of ZTF over other surveys such as ASAS-SN and Gaia is its increased temporal coverage of the Galactic plane, especially within the partnership portion. The official survey began on 2018 March 18 and has been producing nightly alerts on transient/variable phenomena, as well as public and partnership data releases that can be accessed through IPAC\footnote{https://irsa.ipac.caltech.edu/Missions/ztf.html}. The first data release (DR1) took place on 2019 May 8, the second (DR2) on 2019 December 11, the third (DR3) on 2020 June 24, the fourth (DR4) on 2020 December 9 and the fifth (DR5) on 2021 March 31.
This paper is the second in a series identifying cataclysmic variables (CVs) from ZTF public and partnership data using the GROWTH Marshal \citep{K19} to filter the alerts during the interval from 2019 June 1 to 2020 May 31. The first paper \citep{Sz20} presented the software filter used in the Marshal (based on a point source, $g-r$ color $\le$0.6 and a magnitude change $\ge$2 mag within a timescale of $\le$2 days) and provided a list of 90 previously confirmed CVs and 218 strong candidates found in the ZTF alerts from 2018 June until 2019 May 31. These objects were found based on the shape and colors of their light curves, and 29 of the candidates were confirmed by obtaining spectra.
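The Marshal filter logic just described can be sketched as follows. This is a hedged illustration only: the dictionary keys and the star/galaxy score threshold are assumptions for the example, not the actual ZTF alert schema.

```python
# Sketch of the Marshal CV filter described above: keep an alert if the
# source is point-like, g - r <= 0.6, and it brightened by >= 2 mag
# within <= 2 days. Keys and the 0.5 score cut are illustrative only.
def passes_cv_filter(alert):
    point_source = alert["sgscore"] > 0.5            # assumed point-source cut
    blue_enough = alert["g_mag"] - alert["r_mag"] <= 0.6
    fast_rise = alert["delta_mag"] >= 2.0 and alert["delta_t_days"] <= 2.0
    return point_source and blue_enough and fast_rise

candidate = {"sgscore": 0.9, "g_mag": 17.1, "r_mag": 16.8,
             "delta_mag": 3.4, "delta_t_days": 1.0}
red_slow = {"sgscore": 0.9, "g_mag": 18.0, "r_mag": 16.8,
            "delta_mag": 0.5, "delta_t_days": 10.0}
print(passes_cv_filter(candidate), passes_cv_filter(red_slow))   # True False
```

The blue color and fast, large-amplitude rise together select dwarf nova outbursts while rejecting slowly varying red sources.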
\citet{W95} provides an overall review of the different types of CVs that are being found
in the ZTF data. They are all close binaries with mass transfer from a companion (usually a late main sequence star) to a white dwarf.
The main type being discovered in sky surveys is dwarf novae, as they are easily located by the brightness changes during a disk instability outburst. A few novalikes are found when they undergo low and high accretion state changes. To further confirm the candidates and refine the classifications, spectra are needed. The presence of prominent hydrogen Balmer emission lines confirms a dwarf nova or novalike CV, while helium lines confirm an AM CVn or a novalike system containing a magnetic white dwarf (polar or intermediate polar), or a system with a very high accretion rate (SW Sex star). As the list of confirmed CVs grows and contains astrometry (from Gaia), the results can be used to test population models of close binary evolution \citep{HRP97,Mc19}.
\section{Identifying CVs}
Each night from 2019 June 1 to 2020 May 31 (except for the month of 2020 March when the system was down due to repairs of the filter exchanger), the light curves of the candidates created from the alerts that passed the Marshal CV filter that night (based on the point source, color, and magnitude change listed above) were examined. These light curves provided a 30 day interval of observation prior to the night requested. The candidates were then saved
if the light curve appeared to result from a dwarf nova outburst or a change in the accretion state of a novalike system. The saved systems then accumulated further data as it was obtained, allowing for later classification.
The saved candidates were then cross-checked with other catalogs, including SIMBAD \citep{W20}, the AAVSO VSX catalog \citep{W07}, the Sloan
Digital Sky Survey \citep{Y00}, the Catalina Real-time
Transient Survey (CRTS) \citep{D09,D14}, MASTER \citep{L10} and
ASAS-SN \citep{Sh14} to see if they were previously known or candidate CVs.
While spectra are the ultimate confirmation that a candidate is a CV, various circumstances in the past year (the loss of blue capability in the Apache Point Observatory spectrograph and telescope shutdowns from the pandemic) prevented obtaining the same number of spectra as in Paper I. Only four confirmation spectra showing Balmer emission lines are
available from ZTF-accessible facilities, two using the
Low Resolution Imaging Spectrometer (LRIS) \citep{O95} on the Keck telescope, one with the Floyds spectrograph on the Las Cumbres 2m telescope at Haleakala \citep{Br13} and one from the SPRAT at the Liverpool telescope \citep{St04}. These spectra are discussed in detail below.
\section{Results}
The scans of the usable nights from the GROWTH Marshal with the CV filter
yielded 93 previously confirmed CVs (generally from spectra but in a few cases from the presence of a superhump outburst feature in the light curve
\citep{W95}), and 279 strong candidates based on their ZTF light curves. Table 1 provides a list of
the previously confirmed objects, and Table 2 lists the
strong candidates. Some sources are also listed as candidates in CRTS or MASTER, but if they have not been confirmed with spectra, we placed them in Table 2. As in Paper I, we
will refer to the objects by the use of an abbreviated
RA(HHMM) and Dec(Deg), e.g. ZTF0014+59, with the full coordinates given in the tables. Also included in the tables are the Galactic latitude, the observed ZTF range in magnitudes from outburst peak to quiescence, or from high to low accretion states, the EDR3 Gaia parallax and errors in mas (for measurements more than 3 times the error), the distances in parsecs (simply using the inverted parallax), the absolute
magnitude at the ZTF observed minimum magnitude, the
number of normal outbursts and longer superoutbursts (SOBs) observed in the Marshal light curves, the number of days of ZTF coverage available between 2019 June 1 and 2020 May 31, if photometry of the source exists in
the Sloan Digital Sky Survey (SDSS) footprint or in the CRTS, if any spectra were obtained with the ZTF instruments (Table 2) or available from the SDSS or the
literature (Table 1), and any other relevant information.
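The distance and absolute magnitude columns follow from the parallax in the standard way: $d = 1000/\varpi$ pc for $\varpi$ in mas, and $M = m - 5\log_{10}(d/10\,\mathrm{pc})$. A minimal sketch of this conversion (the input numbers below are hypothetical, not table entries):

```python
import math

def distance_pc(parallax_mas):
    """Distance by simple parallax inversion, as used for the tables
    (entries are restricted to parallaxes above 3 times their error)."""
    return 1000.0 / parallax_mas

def abs_mag(app_mag, parallax_mas):
    """Absolute magnitude M = m - 5*log10(d / 10 pc), ignoring extinction."""
    return app_mag - 5.0 * math.log10(distance_pc(parallax_mas) / 10.0)

# Hypothetical system: quiescent magnitude 20.0 at a 5.0 mas parallax.
print(distance_pc(5.0))    # 200 pc
print(abs_mag(20.0, 5.0))  # ~13.5, at the faint end of the distribution
```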
Figure 1 shows a few examples of the different types of light curves (an SOB, normal short-cycle outbursts, high/low states)
that led to the classification as a CV candidate in Table 2.
\subsection{Spectroscopic Confirmations}
Only four objects from Table 2 could be confirmed from the presence of Balmer emission lines (Figures 2 and 3). While the Spectral Energy Distribution Machine \citep{Bl18} on the Palomar 60-inch telescope obtained several spectra, they were observed near outburst and
only showed a blue continuum from the disk with no emission visible. The medium resolution Keck spectra of the two objects with light curves in the bottom row of Figure 1 (ZTF2134-02 and ZTF2131+49) are shown in Figure 2. These spectra were obtained at quiescence and both reveal strong Balmer emission, while ZTF2131+49 also has strong helium lines, especially \ion{He}{2} 4686. Thus, this object is a candidate for a system containing a magnetic white dwarf and is worth further followup. The lower resolution spectra from the LCO and SPRAT spectrographs for ZTF0618+22 and ZTF1928+55 were observed near outburst but do show the presence of Balmer emission, confirming them as CVs.
\subsection{The Galactic plane}
As shown in Paper I, the inclusion of the Galactic plane in the ZTF footprint results in proportionally more new candidates in this area of the sky than previously known systems. This is further confirmed with the second-year data, although with
smaller differences, likely due to changes in the portion of the public survey time spent on
the plane (some of the nightly plane coverage was shifted to coverage of TESS fields). The left
panel of Figure 4 shows the number of known systems (Table 1) and candidates
(Table 2) while the right panel compares the first and second years of data
on all objects. While 18\% of the known systems are within 10$^{\circ}$ of
the plane, 25\% of the candidates are within this range. This compares to 23\% and 45\% in the first year.
\subsection{Absolute Magnitudes}
The EDR3 Gaia parallaxes \citep{L21} were used to calculate the distances and absolute magnitudes shown in Tables 1 and 2. Paper I, which used the DR2 Gaia parallaxes, showed that the majority of CVs from ZTF had absolute magnitudes at quiescence between 10$-$12, near the faint end of previous results in the literature \citep{W87,W95}. Figure 5 (right panel) shows a similar distribution for the second year, with an even larger number of systems at the fainter magnitudes of 12$-$13. This is likely due to the greater number of parallaxes of fainter objects available with Gaia EDR3. This increase also makes the distribution of parallaxes between known and candidate systems more equal (left panel of Figure 5). However, the 6 faintest absolute magnitudes (those $\ge$13.0) all have relatively close distances of 108$-$365 pc, meaning that they are intrinsically faint. The trends of increasing outburst amplitude and decreasing outburst frequency for the faintest absolute magnitudes (Figure 6) as seen in Paper I are also apparent, consistent with low mass transfer rates and low disk viscosity in these systems \citep{HSC95}, although the large scatter indicates there is not a simple relationship between these quantities.
\section{Peculiar Light Curves}
There are several systems that have light curves that do not look like normal outbursts of dwarf novae. Included among these are the systems with high and low states. Two are shown in Figure 1 (ZTF2134-02 and ZTF2131+49), and three others are ZTF0434+03, ZTF2119+41 and ZTF2239+23, shown in Figure 7. Among these five, ZTF2134-02 and ZTF2119+41 are catalogued as X-ray sources, and ZTF0434+03 is listed as a possible (but not confirmed) quasar by \citet{DS11}. A redshift measurement can clarify the nature of this source. As noted above, the spectrum of ZTF2131+49 shows high excitation consistent with a magnetic white dwarf, while ZTF2134-02 looks more like a typical dwarf nova. Spectra of the other three can determine their correct classification.
ZTF1736+75 and ZTF1756+02 (Figure 7) have features that show low amplitude outbursts, and a slight plateau or standstill at about one magnitude below their outburst magnitude. These are signatures that could classify them as Z Cam type systems, with relatively long orbital periods and high accretion rates that keep the disks near the limit for dwarf nova outbursts \citep{W95}. Spectra can determine the orbital period and reveal expected deep absorption lines with emission cores during the standstill states. Lastly, ZTF1848+41 has a large dip in the middle of its SOB which is quite distinctive, and indicative of a small class of dwarf novae with an extreme mass ratio due to a degenerate secondary \citep{K15}. Further monitoring will show if this is a peculiarity that is present after each SOB or if it is a unique occurrence.
\section{Completeness}
The GROWTH Marshal uses difference images in the alerts each night to produce candidates that are available to view for 5 nights at a time. Due to bad weather or instrument problems at the time of the rise to brightness, objects can be missed and not saved. Thus, this approach produces candidates and known CVs but is not complete. Recently, a machine learning (ML) method \citep{C20,vR21} to find various types of outbursting stars and variables has been developed using the entire existing light curves from the project. It generally finds more candidates than the Marshal alert method, but also
has flaws based on matching the light curves to correct object types and requiring an actual measurement at quiescence rather than an upper limit.
The ML method generally has a true positive rate for CVs of $\sim$25\%, i.e. only 1 in 4 objects is an actual CV, as determined by visual inspection of the light curves and period determination to identify any periodic variables. Most of the false positives are irregular variables or `bogus' light curves. To test the differences in the two methods, we compared the results found from the Marshal for the specific month of 2019 September with those objects from the ML set that had data obtained within that month. This comparison showed that the ML method found 227 objects while the GROWTH method found 55. The limited overlap between the two methods illustrates their different selection effects. Including the past discoveries, the GROWTH Marshal found 78 of the 227 machine learning objects, thus missing 66\%, while the machine learning missed 36 (65\%) of the 55 found from the Marshal. Since there are no obvious intrinsic differences among the overlapping and missing groups of objects, the missed objects are likely due to the limitations in each method listed above.
Current efforts are underway to refine the ML capabilities by adding the misclassifications to the machine learning training set to `teach' the algorithm how to better distinguish CVs from the false positives. At the current time, a new Fritz Marshal is in development as an alert broker for internal use by the ZTF collaboration and has replaced the GROWTH Marshal. It is expected to provide more flexibility in trying different filters and in cross-matching with multiwavelength databases that will ultimately lead to better identification of CV candidates.
Another approach to finding CVs, a periodicity search \citep{O20}, found about 60 new dwarf nova candidates in the DR1 database. Of these, 32 overlap with the GROWTH Marshal results.
\section{Conclusions}
The second year of ZTF alert filtering with the GROWTH Marshal has produced a list of 372 known and candidate CVs. Gaia parallaxes are available for almost half of these systems, and the resulting absolute magnitudes continue to show that most of the new systems being discovered are at the faint end of the distribution. The faintest ones are relatively nearby, and therefore likely have the lowest mass-transfer rates. Several systems merit followup observations to provide their detailed classification. ZTF2131+49 shows high excitation \ion{He}{2} emission and could harbor a magnetic white dwarf. ZTF1736+75 and ZTF1756+02 have features resembling standstills in their light curves and may be Z Cam type systems with high mass-transfer rates. ZTF1848+41 had a peculiar SOB with a large decrease in brightness that appeared to divide the SOB in two, making it a good candidate for having a degenerate companion. Ongoing machine learning methods appear to find a greater number of CVs using the entire data from the onset of the survey rather than from nightly alerts. Since each method produces different results, further refinements are underway to obtain the optimum candidates for all types of CVs from ZTF.
\acknowledgments
PS,BK,CL and BD acknowledge funding from NSF grant AST-1514737.
A.Y.Q.H. is supported by a National Science Foundation Graduate Research Fellowship under Grant No.\,DGE-1144469. MC is supported by the David and Ellen Lee Prize Postdoctoral Fellowship at the California Institute of Technology. MLG acknowledges support from the DIRAC Institute in the Department of Astronomy at the University of Washington. The DIRAC Institute is supported through generous gifts from the Charles and Lisa Simonyi Fund for Arts and Sciences, and the Washington Research Foundation.
This work was supported by the GROWTH project funded by the National Science Foundation under PIRE Grant No.\,1545949 and based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the NSF under grant AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratory. Operations are conducted by COO, IPAC and UW.
This work also makes use of observations from the Las Cumbres Observatory global telescope network.
\vspace{5mm}
\facilities{Keck:I;PO:1.2m}
\section*{Methods}
\noindent \textbf{Finite-temperature field theory calculations:} We write the free propagators for the two fields as
\begin{equation}
G_{\phi}=\frac{1}{-\alpha_{\phi}+k^{2}/2+\omega^{2}/2}
\end{equation}
\begin{equation}
G_{\psi}=\frac{1}{-\alpha_{\psi}+k^{2}/2+\gamma\omega/k^{z-2}}
\end{equation}
The $S_{\phi\psi}^{2}$ interaction gives a correction to $\chi_{\phi}^{-1}$ of the form
\begin{align}
& k_{B}T\sum_{n}\int d^{d}k G_{\phi}\omega_{n}^{2}G_{\psi} \nonumber \\
& = k_{B}T\sum_{n}\int d^{d}k\frac{\omega_{n}^{2}}{(-\alpha_{\phi}+\frac{k^{2}}{2}+\frac{\omega_{n}^{2}}{2})(-\alpha_{\psi}+\frac{k^{2}}{2}+\gamma\frac{\omega_{n}}{k^{z-2}})}
\end{align}
We carry out the summation over the Matsubara frequencies using the standard contour integral technique~\cite{abrikosov2012methods}, yielding
\begin{equation}
\int d^{d}kk^{z-2}\left[\frac{\omega_{\phi}(\omega_{\psi}-\omega_{\phi})}{2(\omega_{\phi}^{2}-\omega_{\psi}^{2})}+\frac{\omega_{\phi}\omega_{\psi}n_{\mathrm{B}}(\omega_{\phi})}{\omega_{\phi}^{2}-\omega_{\psi}^{2}} + \frac{\omega_{\psi}^{2}n_{\mathrm{B}}(\omega_{\psi})}{\omega_{\psi}^{2}-\omega_{\phi}^{2}} \right].
\end{equation}
Close to criticality ($\alpha_{\phi},\alpha_{\psi}\rightarrow 0$) in three dimensions, the third term gives the lowest exponent in the temperature dependence, $T^{3-1/z}$, and hence the strongest correction to $\chi_{\phi}^{-1}$, as presented in equation~\ref{chi} of the main text.
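For reference, the contour step above rests on the standard representation of a bosonic Matsubara sum~\cite{abrikosov2012methods}: for a summand $f(z)$ decaying sufficiently fast at infinity, with simple poles $z_{j}$ away from the imaginary axis,

```latex
k_{B}T\sum_{n}f(i\omega_{n})
  = -\sum_{j}\operatorname{Res}_{z=z_{j}}\left[f(z)\,n_{\mathrm{B}}(z)\right],
\qquad
n_{\mathrm{B}}(z)=\frac{1}{e^{z/k_{B}T}-1}.
```

The Bose factors $n_{\mathrm{B}}(\omega_{\phi})$ and $n_{\mathrm{B}}(\omega_{\psi})$ in the expression above arise as the residues at the poles of the two propagators.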
\noindent \textbf{Density functional calculations:} Our first-principles calculations were carried out using the Vienna Ab-initio Simulation Package ({\sc vasp})~\cite{kresse1996efficient}, with the Perdew-Burke-Ernzerhof approximation to the exchange correlation functional~\cite{perdew1996generalized}. Eu $4f$ electrons were treated with the GGA+$U$ method, using Dudarev's approach~\cite{dudarev1998electron}, with $U=6.0$ eV and $J=1.0$ eV. Default projector augmented wave pseudopotentials were employed. A plane wave cutoff of 500 eV was used and the Brillouin zone was sampled using an $8\times 8\times 6$ $k$-point grid. Phonon calculations were performed using the {\sc phonopy} code~\cite{togo2015first}, with 80 atom supercells using a $4\times 4\times 6$ $k$-point mesh. For the biaxially strained EuTiO$_{3}$, the strain tensor reads
\begin{equation}
\varepsilon=\begin{pmatrix}
\zeta & 0 & 0 \\
0 & \zeta & 0 \\
0 & 0 & -\nu\zeta \\
\end{pmatrix},
\end{equation}
where $\zeta=(a-a_{0})/a_{0}$ is the applied strain ($a_{0}$ and $a$ are the equilibrium and strained in-plane lattice constants) and $\nu$ is the biaxial Poisson ratio.
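In practice this strain state simply rescales the lattice constants; a one-line sketch (the numerical strain and biaxial Poisson ratio below are hypothetical, not values from the text):

```python
def strained_lattice(a0, c0, zeta, nu):
    """Apply the biaxial strain tensor above: epsilon_xx = epsilon_yy = zeta,
    epsilon_zz = -nu * zeta."""
    return a0 * (1.0 + zeta), c0 * (1.0 - nu * zeta)

# Hypothetical case: cubic EuTiO3 with a0 = c0 = 3.905 A (the lattice
# constant quoted later in Table 1), 1% biaxial tensile strain, and an
# assumed biaxial Poisson ratio nu = 0.8.
a, c = strained_lattice(3.905, 3.905, 0.01, 0.8)
print(a, c)  # in-plane axis expands, out-of-plane axis contracts
```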
\noindent \textbf{Ising model for estimating ferroelectric critical temperature:} Ferroelectric alloys were modeled by a simple transverse Ising model~\cite{blinc1974soft}, which has been shown to give reasonable estimates for experimental critical temperatures~\cite{zhang2000study},
\begin{align}
H = -\Omega\sum_{i}\sigma_{i}^{x}-\frac{1}{2}\sum_{ij}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z} \nonumber \\
-\frac{1}{4} \sum_{ijkl}J_{ijkl}\sigma_{i}^{z}\sigma_{j}^{z}\sigma_{k}^{z}\sigma_{l}^{z}-2\mu E\sum_{i}\sigma_{i}^{z}.
\end{align}
Here $\Omega$ is the tunneling frequency and $\mu$ is the effective dipole moment, which couples to the external electric field $E$. $\sigma_{i}^{x,y,z}$ are pseudospins at the $i$-th site, which interact via two-body ($J_{ij}$) and four-body ($J_{ijkl}$) exchange terms. The term proportional to $\sigma_{i}^{x}$ is the tunneling between the two minima of the free energy double well, and does not imply that the electric dipole has any precessional dynamics. Alloying is simulated by weighting the parameters by concentration of the constituents $f_{\alpha}$,
\begin{equation}
J = \sum_{\alpha}f_{\alpha}J_{\alpha}, \quad \Omega = \sum_{\alpha}f_{\alpha}\Omega_{\alpha}, \quad
\mu = \sum_{\alpha}f_{\alpha}\mu_{\alpha}.
\end{equation}
The polarization is then given by $P=2n\mu\sum_{i}\langle \sigma_{i}^{z}\rangle$, where $n$ is the number of dipoles per unit volume. The change of lattice parameter, $a$, with alloying is approximated using Vegard's law. Treating the pseudospin in the mean field approximation yields a self-consistent equation for the polarization, which was then solved numerically. The susceptibility $\chi=\left(\partial \langle P\rangle/\partial E\right)_{E=0}$ was used to estimate the critical temperatures for different alloy compositions. The parameters used (shown in Table~\ref{ising_parameters}) were previously fitted to reproduce experimental critical temperatures and give good estimates of them~\cite{zhang2000study}.
\begin{table}[h]
\centering
\caption{Parameters used for estimating ferroelectric critical temperature of Ba and Sr alloyed EuTiO$_{3}$.}
\label{ising_parameters}
\begin{tabular}{ c c c c c c}
\hline\hline
Compound & $J_{ij}$(meV) & $J_{ijkl}$(meV) & $\Omega$(meV) & $\mu$(e\AA{})& $a$(\AA{})\\
\hline\hline
BaTiO$_{3}$ & 23.90 & 62.16 & 30.58 & 2.17 & 4.005 \\
SrTiO$_{3}$/EuTiO$_{3}$ & 2.04 & 0 & 6.86 & 1.51 & 3.905\\
\hline\hline
\end{tabular}
\end{table}
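A minimal sketch of the mean-field self-consistency loop for spin-1/2 pseudospins, where $\langle\sigma^{z}\rangle = (h_{z}/h)\tanh(h/k_{B}T)$ with $h=\sqrt{\Omega^{2}+h_{z}^{2}}$ and $h_{z}=J_{0}\langle\sigma^{z}\rangle+J_{4}\langle\sigma^{z}\rangle^{3}+2\mu E$. The coordination number $z=6$ used to build $J_{0}=zJ_{ij}$ and the fixed-point iteration are assumptions of the sketch, not taken from the text:

```python
import math

KB = 0.08617  # Boltzmann constant in meV/K

def sigma_z(T, J0, J4, Omega, muE=0.0):
    """Solve the mean-field self-consistency for <sigma_z> of the
    transverse Ising model by fixed-point iteration."""
    s = 0.9  # start in the ordered phase
    for _ in range(5000):
        hz = J0 * s + J4 * s**3 + 2.0 * muE   # longitudinal field (meV)
        h = math.sqrt(Omega * Omega + hz * hz)  # pseudospin splitting
        s_new = (hz / h) * math.tanh(h / (KB * T))
        if abs(s_new - s) < 1e-12:
            return s_new
        s = s_new
    return s

# Table values for the SrTiO3/EuTiO3 entry, with J0 = z * J_ij and an
# assumed coordination number z = 6.
J0, Omega = 6 * 2.04, 6.86
print(sigma_z(5.0, J0, 0.0, Omega))    # ordered: finite polarization
print(sigma_z(300.0, J0, 0.0, Omega))  # paraelectric: ~0
```

Scanning $T$ for the temperature at which $\langle\sigma^{z}\rangle$ vanishes then gives a mean-field estimate of the ferroelectric $T_{c}$ for each composition.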
\noindent \textbf{Heisenberg model for estimating magnetic critical temperature:} The energies obtained from density functional calculations were mapped to a classical Heisenberg model to calculate the exchange parameters. The $4f^{7}$ moments on the Eu$^{2+}$ are well localized; therefore the system can be reasonably described by a simple Heisenberg model. Alloying with the non-magnetic ions (Sr and Ba) was modeled by introducing random binary variables $\zeta_{i}$ for each site $i$, such that
\begin{eqnarray}
\zeta_{i} &=& 1 \quad i=\mathrm{Eu} \nonumber \\
&=& 0 \quad i=\mathrm{Sr,Ba}.
\end{eqnarray}
This yields the following Hamiltonian
\begin{equation}
H=-\sum_{ij}\mathcal{J}_{ij}\zeta_{i}\zeta_{j}S_{i}\cdot S_{j} + \sum_{i}\mathcal{D}_{i}\zeta_{i}S_{i}^{2},
\end{equation}
where $S_{i}$ are classical spins at site $i$, $\mathcal{J}_{ij}$ is the nearest-neighbour exchange interaction strength and $\mathcal{D}_{i}$ is the single ion anisotropy energy. A competition between the exchange term, anisotropy term and dilution through alloying leads to a phase transition by tuning the $\alpha_{\psi}$ coefficient in the action (equation~\ref{action2}). From density functional calculations, exchange interaction strengths were obtained to be: $\mathcal{J}^{ab}=-0.0286$~meV ($ab$-plane exchange parameter) and $\mathcal{J}^{c}=0.0331$~meV ($c$ direction exchange parameter). The magnetic phases and critical temperatures of the system were then estimated using a standard Metropolis-based Monte Carlo procedure~\cite{landau2014guide}. This simple treatment of disorder has previously been successfully applied to dilute magnetic semiconductors~\cite{bergqvist2004magnetic}.
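A minimal sketch of the Metropolis procedure for the site-diluted Heisenberg model above. For simplicity it uses a single isotropic ferromagnetic exchange $J$ rather than the anisotropic $\mathcal{J}^{ab}$, $\mathcal{J}^{c}$ quoted in the text, and all numerical parameters (lattice size, temperature, concentration) are illustrative assumptions:

```python
import math, random

def run_diluted_heisenberg(L=4, J=1.0, D=0.0, x=0.8, T=0.1, sweeps=300, seed=1):
    """Metropolis Monte Carlo for H = -J sum_<ij> zeta_i zeta_j S_i.S_j
    + D sum_i zeta_i (S_i^z)^2 on an L^3 periodic cubic lattice; x is the
    fraction of magnetic (Eu) sites. Returns (initial, final) energy."""
    rng = random.Random(seed)
    N = L**3
    occ = [1 if rng.random() < x else 0 for _ in range(N)]  # zeta_i

    def rand_spin():  # uniform point on the unit sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        return (r * math.cos(phi), r * math.sin(phi), z)

    S = [rand_spin() for _ in range(N)]

    def neighbors(i):
        xi, yi, zi = i % L, (i // L) % L, i // (L * L)
        for dx, dy, dz in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
            yield ((xi+dx) % L) + ((yi+dy) % L) * L + ((zi+dz) % L) * L * L

    def site_energy(i, s):  # anisotropy plus bonds to occupied neighbors
        e = D * s[2] * s[2]
        for j in neighbors(i):
            if occ[j]:
                e -= J * (s[0]*S[j][0] + s[1]*S[j][1] + s[2]*S[j][2])
        return e

    def total_energy():
        # halve the bond part of site_energy to avoid double counting
        return sum(0.5 * (site_energy(i, S[i]) + D * S[i][2] * S[i][2])
                   for i in range(N) if occ[i])

    e_initial = total_energy()
    for _ in range(sweeps):
        for i in range(N):
            if not occ[i]:
                continue
            trial = rand_spin()
            dE = site_energy(i, trial) - site_energy(i, S[i])
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                S[i] = trial
    return e_initial, total_energy()

e0, e1 = run_diluted_heisenberg()
print(e0, e1)  # energy drops sharply as the ordered state forms at low T
```

In the production calculation one would instead track the magnetization (or staggered magnetization) versus temperature to locate the critical point.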
\section*{Acknowledgements}
We acknowledge helpful discussions with G. Aeppli, T. Donner, K. Dunnett, C. Ederer, A. Edstr\"{o}m, T. Esslinger, N. Fedorova, C. Gattinoni, Q. Meier, A. Morales, R. Pisarev and P. Zupancic. This work is supported by ETH-Zurich (AN, AC and NAS) and the US DOE BES E3B7, the Villum foundation, and Knut and Alice Wallenberg Foundation (AVB). Calculations were performed at the Swiss National Supercomputing Centre (project ID p504).
\section*{Author Contributions}
NAS conceived the concept. NAS, AVB, AC and AN devised the analysis. AN carried out the calculations. AN and NAS wrote the manuscript with contributions from all authors.
\section{Introduction}\label{S:1}
There are two different ways to construct line fields on surfaces
immersed in $\mathbf R ^4$. The first one consists in considering
the ellipse of curvature in the normal bundle of the surface and
taking the pull back of points on this ellipse to define tangent
direction fields. Examples of this approach are given by: the {\it
lines of axial curvature}, along which the second fundamental form
points in the direction of the large and the small axes of the
ellipse of curvature; the {\it mean directionally curved lines},
along which the second fundamental form points in the direction of
the mean curvature vector; and the {\it asymptotic lines}, along
which the second fundamental form points in the direction of the
tangent lines to the ellipse of curvature.
The other way consists in defining the {\it $\nu$-principal
curvature lines}, along which the surface bends extremally in the
direction of the normal vector $\nu$. To this end, we need to take
a unitary normal vector field $\nu$ and follow the classical
approach for surfaces immersed in $\mathbf R ^3$.
The lines of axial curvature are globally defined and their
singularities are the axiumbilic points where the ellipse of
curvature becomes either a circle or a point. The axiumbilic
points and the lines of axial curvature are assembled into two
axial configurations. The first one is defined by the axiumbilics
and the field of orthogonal lines on which the surface is curved
along the large axis of the ellipse of curvature. The second one
is defined by the axiumbilics and the field of orthogonal lines on
which the surface is curved along the small axis of the ellipse of
curvature. Each axial configuration is a net consisting of
orthogonal curves and axiumbilic points. Therefore a line of axial
curvature is not necessarily a simple regular curve; it can be
immersed with transversal crossings. The differential equation of
lines of axial curvature is a quartic differential equation
according to \cite{GS, GGTG1, GGTG2}. A global analysis of the
lines of axial curvature was developed in \cite{GS}.
The mean directionally curved lines are globally defined and their
singularities are either the inflection points, where the ellipse
of curvature is a radial line segment, or the minimal points,
where the mean curvature vector vanishes. It was shown in \cite{M}
that the differential equation of mean directionally curved lines
fits into the class of quadratic or binary differential equations.
The global behavior of mean directionally curved lines was studied
in \cite{M}.
The asymptotic lines do not need to be globally defined on the
surfaces and in general are not orthogonal. It was shown in
\cite{MRR} that a necessary and sufficient condition for existence
of the globally defined asymptotic lines on a surface
$\mathbf{M}^2$ in $\mathbf R ^4$ is the local convexity of
$\mathbf{M}^2$. The differential equation of asymptotic lines is
also a quadratic differential equation and their singularities are
the inflection points.
The $\nu$-principal curvature lines are orthogonal and globally
defined on surfaces immersed in $\mathbf R ^4$ and their
singularities are the $\nu$-umbilic points, where the
$\nu$-principal curvatures coincide. The differential equation of
$\nu$-principal curvature lines is a quadratic differential
equation according to \cite{SR}. An analysis of $\nu$-principal
curvature lines near generic $\nu$-umbilic points is presented in
\cite{SR} and in \cite{GSa} the $\nu$-principal cycles (closed
$\nu$-principal curvature lines) are studied. A global analysis of
the $\nu$-principal curvature lines was developed in \cite{GMS},
for $\nu = H$, where $H$ is the normal mean curvature vector.
We prove in \cite{M2} that the orthogonality of the asymptotic
lines is equivalent to the vanishing of the normal curvature. This
result has been already obtained by Romero-Fuster and
S\'anchez-Bringas in \cite{RS} using a different approach. We also
prove in \cite{M2} that the quartic differential equation of lines
of axial curvature can be written as the product of the quadratic
differential equations of mean directionally curved lines and
asymptotic lines if and only if the normal curvature of $\alpha$
vanishes at every point. Thus if the normal curvature of $\alpha$
vanishes at every point then the axial curvature cross fields
split into four direction fields and therefore it is not possible
that the lines of axial curvature have transversal crossings.
On the other hand, it is well known that a point $p$ is
semiumbilic if and only if the normal curvature vanishes at $p$,
\cite{RS}. Semiumbilic points are interesting from the viewpoint
of the theory of singularities of functions. Observe now that we
have analogous statements if instead of vanishing normal curvature
it is required semiumbilicity.
We say that an immersion $\alpha:\mathbf{M}^2\rightarrow \mathbf R
^4$ is {\it hyperspherical} if its image is contained in a
hypersphere. In this work we study some properties of surfaces
immersed in $\mathbf R ^4$ whose asymptotic lines are orthogonal.
In particular, we relate the property of having globally defined
orthogonal asymptotic lines with hypersphericity, obtaining the
following theorem.
\noindent {\bf Theorem 3.2.} Let $\alpha:\mathbf{M}^2\rightarrow
\mathbf R ^4$ be an immersion of a smooth oriented surface with
globally defined orthogonal asymptotic lines. Suppose that there
exist a unitary normal vector field $\nu$ and $r > 0$ such that
the distance from the projection of the ellipse of curvature
$\varepsilon_{\alpha}(p)$ onto the $\nu$-axis to $p$ is $r$, for
all $p \in \mathbf{M}^2$, and the Gaussian curvature $K \neq r^2$.
Then $\alpha$ is hyperspherical.
Finally, theorem 3.4 of \cite{RS}, lemma 2.1 and theorem 2.1 of
\cite{M2}, and results of this paper are put together in Theorem
\ref{teo5}, which establishes seven further conditions equivalent to
the orthogonality of the asymptotic lines.
This paper is organized as follows. A review of properties of the
first and second fundamental forms, the ellipse of curvature and
the line fields on surfaces immersed in $\mathbf R ^4$ is
presented in section \ref{S:2}. General aspects of the curvature
theory for surfaces immersed in $\mathbf R ^4$ are presented in
the works of Forsyth \cite{F}, Wong \cite{W}, Little \cite{L} and
Asperti \cite{A}. Section \ref{S:3} is devoted to the study of
orthogonal asymptotic lines as well as hypersphericity of
immersions. Finally, in section \ref{S:4} some general problems
are stated.
\newtheorem{teo}{Theorem}[section]
\newtheorem{lema}[teo]{Lemma}
\newtheorem{prop}[teo]{Proposition}
\newtheorem{cor}[teo]{Corollary}
\section{Line fields on surfaces in $\mathbf R ^4$}\label{S:2}
For the sake of completeness, in this section we present a survey of
the relevant notions that we will need later. Let
$\alpha:\mathbf{M}^2\rightarrow
\mathbf R ^4$ be an immersion of a smooth oriented surface into
$\mathbf R ^4$, which is endowed with the Euclidean inner product
$\langle \cdot,\cdot \rangle$ and is oriented. In this paper
immersions are assumed to be $C^\infty$. Denote respectively by
$\mathbf{TM}$ and $\mathbf{NM}$ the tangent and the normal bundles
of $\alpha$ and by $T_{p}\mathbf{M}$ and $N_{p}\mathbf{M}$ the
respective fibers, i.e., the tangent and the normal planes at
$p\in \mathbf{M}^2$. Let $\{\nu_1,\nu_2\}$ be a frame of vector
fields orthonormal to $\alpha$. Assume that $(u,v)$ is a positive
chart of $\mathbf{M}^2$ and that
$\{\alpha_u,\alpha_v,\nu_1,\nu_2\}$ is a positive frame of
$\mathbf R ^4$. In such a chart $(u,v)$ the first fundamental form
of $\alpha$, $I_\alpha$, is given by
\[
I=I_\alpha=\langle d\alpha,d\alpha \rangle =E du^2+2F dudv+G dv^2,
\]
where $E=\langle \alpha_u,\alpha_u \rangle$, $F=\langle
\alpha_u,\alpha_v \rangle$ and $G=\langle \alpha_v,\alpha_v
\rangle$. The second fundamental form of $\alpha$, $II_\alpha$, is
defined in terms of the $\mathbf{NM}$-valued quadratic form
\[
II=II_\alpha=\langle d^2\alpha,\nu_1 \rangle \nu_1+\langle
d^2\alpha,\nu_2 \rangle \nu_2=II_{\nu_1}\nu_1+II_{\nu_2}\nu_2,
\]
where
\[
II_{\nu_i} = II_{\nu_i,\alpha}=e_{i}du^2+2f_{i}dudv+g_{i}dv^2,
\]
$e_{i}=\langle \alpha_{uu},\nu_{i} \rangle$, $f_{i}=\langle
\alpha_{uv},\nu_{i} \rangle$, and $g_{i}=\langle
\alpha_{vv},\nu_{i} \rangle$, for $i=1,2$.
The following functions are associated to $\alpha$ (see \cite{L}):
\begin{enumerate}
\item The {\it mean curvature vector} of $\alpha$
\[
H=H_{\alpha}=H_{1}\nu_{1}+H_{2}\nu_{2},
\]
where
\[
H_{i}=H_{i,\alpha}=\frac{E g_{i}-2 F f_{i}+G e_{i}}{2(E G-F^2)},
\]
for $i=1,2$;
\item The {\it normal curvature} of $\alpha$
\[
k_{N}=k_{N,\alpha}=\frac{E(f_{1}g_{2}-f_{2}g_{1})-F(e_{1}g_{2}-e_{2}g_{1})+G(e_{1}f_{2}-e_{2}f_{1})
}{2(E G-F^2)};
\]
\item The {\it resultant} ${\it \Delta}$ of $II_{1,\alpha}$ and
$II_{2,\alpha}$
\[
{\it \Delta}={\it \Delta}_\alpha={1\over {4(E G-F^2)}}
\left|\begin{array}{cccc}
e_{1} & 2f_{1} & g_{1} & 0 \\
e_{2} & 2f_{2} & g_{2} & 0 \\
0 & e_{1} & 2f_{1}& g_{1} \\
0 & e_{2} & 2f_{2}& g_{2}
\end{array} \right|;
\]
\item The {\it Gaussian curvature} of $\alpha$
\[
K=K_{\alpha}=\frac{e_1 g_1 - (f_1)^2 + e_2 g_2 - (f_2)^2}{E
G-F^2};
\]
\item The {\it normal curvature vector} of $\alpha$ defined by
$\eta(p,v)=\frac{II(p,v)}{I(p,v)}$.
\end{enumerate}
The image of the unitary tangent circle $\mathbf{S}^1$ by
$\eta(p):T_p\mathbf{M}\rightarrow N_p\mathbf{M}$ describes an
ellipse in $N_p\mathbf{M}$ called {\it ellipse of curvature} of
$\alpha$ at {\it p} and denoted by $\varepsilon_{\alpha}(p)$. This
ellipse may degenerate into a line segment, a circle or a point.
The center of the ellipse of curvature is the mean curvature
vector {\it H} and the area of $\varepsilon_{\alpha}(p)$ is given
by ${\pi\over 2} \left|k_N(p) \right|$. The map $\eta(p)$
restricted to $\mathbf{S}^1$, being quadratic, is a double
covering of the ellipse of curvature. Thus every point of the
ellipse corresponds to two diametrically opposed points of the
unitary tangent circle. The ellipse of curvature is invariant by
rotations in both the tangent and normal planes.
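These properties can be checked numerically. The sketch below traces $\eta(\theta)$ for a hypothetical graph immersion $\alpha(u,v)=(u,v,\phi,\psi)$ evaluated at the origin (all second derivative values are assumptions chosen for illustration), and verifies that the ellipse of curvature is centered at $H$ and that $\eta$ restricted to $\mathbf{S}^1$ is a double covering of it:

```python
import math

# First and second fundamental forms at the origin of a hypothetical graph
# immersion alpha(u,v) = (u, v, phi(u,v), psi(u,v)), where
# {alpha_u, alpha_v, nu_1, nu_2} reduces to the standard basis of R^4.
E, F, G = 1.0, 0.0, 1.0
e1, f1, g1 = 2.0, 0.0, 1.0   # phi_uu, phi_uv, phi_vv
e2, f2, g2 = 0.0, 1.0, 0.0   # psi_uu, psi_uv, psi_vv

def eta(theta):
    """Normal curvature vector eta(p, v) = II(p, v)/I(p, v) along the unit
    tangent direction v = (cos theta, sin theta)."""
    du, dv = math.cos(theta), math.sin(theta)
    I = E*du*du + 2*F*du*dv + G*dv*dv
    return ((e1*du*du + 2*f1*du*dv + g1*dv*dv) / I,
            (e2*du*du + 2*f2*du*dv + g2*dv*dv) / I)

# The mean curvature vector is the center of the ellipse of curvature.
den = 2.0 * (E*G - F*F)
H = ((E*g1 - 2*F*f1 + G*e1) / den, (E*g2 - 2*F*f2 + G*e2) / den)

n = 400
pts = [eta(2.0 * math.pi * k / n) for k in range(n)]
center = (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
print(center, H)                     # agree up to rounding
print(eta(0.3), eta(0.3 + math.pi))  # diametrically opposed tangent
                                     # directions hit the same ellipse point
```

The uniform average over $\theta$ recovers $H$ here because, in an orthonormal tangent frame, $\eta(\theta)=H+\cos 2\theta\,B+\sin 2\theta\,C$ with $B=((e_1-g_1)/2,(e_2-g_2)/2)$ and $C=(f_1,f_2)$.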
A point $p\in \mathbf{M}^2$ is called a {\it minimal point} of
$\alpha$ if $H(p)=0$ and it is called an {\it inflection point} of
$\alpha$ if ${\it \Delta}(p)=0 $ and $k_N(p)=0$. It follows that
$p\in \mathbf{M}^2$ is an inflection point if and only if its
ellipse of
curvature is a radial line segment \cite{L}.\\
\noindent{\bf Lines of axial curvature.} The four vertices of the
ellipse of curvature $\varepsilon_{\alpha}(p)$ determine eight
points on the unitary tangent circle which define two crosses in
the tangent plane. Thus we have two cross fields on $\mathbf{M}^2$
called {\it axial curvature cross fields}. This construction fails
at the {\it axiumbilic points} where the ellipse of curvature
becomes either a circle or a point. Generically the index of an
isolated axiumbilic point is $\pm {1\over 4}$ (see \cite{GS,
GGTG1, GGTG2}). The integral curves of the axial curvature cross
fields are the {\it lines of axial curvature}.
Generically there is no good way to distinguish one end of the
large (or small) axis of $\varepsilon_{\alpha}(p)$ and therefore
pick out a direction of the cross field. Thus a line of axial
curvature is not necessarily a simple regular curve; it can be
immersed with transversal crossings.
The differential equation of the lines of axial curvature is a
quartic differential equation of the form
\begin{equation} \label{1}
Jac\biggl(\|\eta-H\|^2,I \biggr)=0,
\end{equation}
\noindent where
\[
Jac(\cdot, \cdot)={{\partial(\cdot, \cdot)}\over {\partial
(du,dv)}},
\]
which according to \cite{GS} can be written as
\begin{equation} \label{2}
A_0du^4+A_1du^3dv+A_2du^2dv^2+A_3dudv^3+A_4dv^4=0,
\end{equation}
\noindent where
\[
A_0=a_0E^3, \; A_1=a_1E^3, \; A_2=-6a_0GE^2+3a_1FE^2,
\]
\[
A_3=-8a_0EFG+a_1E(4F^2-EG), \; A_4=a_0G(EG-4F^2)+a_1F(2F^2-EG),
\]
\[
a_0=4\biggl[F(EG-2F^2)(e_1^2+e_2^2)-Ea_6a_2-E^2F(a_3+a_5)+E^3a_4\biggr],
\]
\[
a_1=4\biggl[Ga_6(e_1^2+e_2^2)+8EFGa_2+E^3(g_1^2+g_2^2)-2E^2G(a_3+a_5)\biggr],
\]
\[
a_2=e_1f_1+e_2f_2, \; a_3=e_1g_1+e_2g_2, \; a_4=f_1g_1+f_2g_2,
\]
\[
a_5=2(f_1^2+f_2^2), \; a_6=EG-4F^2.
\]
\noindent{\bf Mean directionally curved lines.} The line through
the mean curvature vector $H(p)$ meets $\varepsilon_{\alpha}(p)$
at two diametrically opposed points. This construction induces two
orthogonal directions on $T_p\mathbf{M}^2$. Therefore we have two
orthogonal direction fields on $\mathbf{M}^2$ called {\it
H-direction fields}. The singularities of these fields, called
here {\it H-singularities}, are the points where either $H=0$
(minimal points) or at which the ellipse of curvature becomes a
radial line segment (inflection points). Generically the index of
an isolated {\it H}-singularity is $\pm {1\over 2}$ \cite{M}. The
integral curves of the {\it H}-direction fields are the {\it mean
directionally curved lines}.
The differential equation of mean directionally curved lines is a
quadratic differential equation of the form \cite{M}
\begin{equation} \label{3}
Jac\{Jac(II_{\nu_1},II_{\nu_2}),I\}=0,
\end{equation}
\noindent which can be written as
\begin{equation} \label{4}
B_1(u,v)du^2+2B_2(u,v)dudv+B_3(u,v)dv^2=0,
\end{equation}
\noindent where
\[
B_1=(e_1g_2-e_2g_1)E+2(e_2f_1-e_1f_2)F,\:\:
B_2=(f_1g_2-f_2g_1)E+(e_2f_1-e_1f_2)G,
\]
\[
B_3=2(f_1g_2-f_2g_1)F+(e_2g_1-e_1g_2)G.
\]
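The orthogonality of the two $H$-directions can be verified symbolically: two directions solving Equation (\ref{4}) are orthogonal with respect to the first fundamental form precisely when $G B_1 - 2F B_2 + E B_3 = 0$, and this expression vanishes identically. A SymPy sketch (all coefficients treated as free symbols):

```python
import sympy as sp

E, F, G, e1, f1, g1, e2, f2, g2 = sp.symbols('E F G e1 f1 g1 e2 f2 g2')

B1 = (e1*g2 - e2*g1)*E + 2*(e2*f1 - e1*f2)*F
B2 = (f1*g2 - f2*g1)*E + (e2*f1 - e1*f2)*G
B3 = 2*(f1*g2 - f2*g1)*F + (e2*g1 - e1*g2)*G

# Directions solving B1*du^2 + 2*B2*du*dv + B3*dv^2 = 0 are orthogonal
# with respect to I = (E, F, G) exactly when G*B1 - 2*F*B2 + E*B3 vanishes.
residual = sp.expand(G*B1 - 2*F*B2 + E*B3)
```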
\noindent{\bf Asymptotic lines.} Suppose that $p$ (the origin of
$N_p\mathbf{M}^2$) lies outside $\varepsilon_{\alpha}(p)$, for all
$p\in \mathbf{M}^2$. The two points on $\varepsilon_{\alpha}(p)$
at which the lines through the normal curvature vectors are
tangent to $\varepsilon_{\alpha}(p)$ induce a pair of directions
in $T_p\mathbf{M}^2$ which in general are not orthogonal. Thus we
have two tangent direction fields on $\mathbf{M}^2$, called {\it
asymptotic direction fields}. The singularities of these fields
are the points where the ellipse of curvature becomes a radial
line segment, i.e., the {\it inflection points}. Generically the
index of an isolated inflection point is $\pm {1\over 2}$
\cite{GMRR}. The integral curves of the asymptotic direction
fields are the {\it asymptotic lines}.
The differential equation of asymptotic lines is a quadratic
differential equation of the form \cite{M}
\begin{equation} \label{5}
Jac(II_{\nu_1},II_{\nu_2})=0,
\end{equation}
\noindent which can be written as
\begin{equation} \label{6}
T_1(u,v)du^2+T_2(u,v)dudv+T_3(u,v)dv^2=0,
\end{equation}
\noindent where
\[
T_1=e_1f_2-e_2f_1, \; T_2=e_1g_2-e_2g_1, \; T_3=f_1g_2-f_2g_1.
\]

\noindent {\bf $\nu$-Principal curvature lines.} The projection of
the pullback, ${\alpha}{^*} (\mathbf R ^4)$, of the tangent bundle
of $\mathbf R ^4$ onto the tangent bundle of an immersion $\alpha$
will be denoted by $\Pi_{\alpha,T}$. This vector bundle is endowed
with the standard metric induced by the Euclidean one in $\mathbf
R ^4$.
Denote by $\nu = \nu_{\alpha}$ the {\it unit normal vector field}
of ${\alpha}$. The eigenvalues $k_1 = k_{1,\alpha} \leq
k_{2,\alpha} = k_2$ of the {\it Weingarten operator} ${\mathcal
W}_\alpha = -\Pi_{\alpha,T}D\nu_\alpha $ of $\mathbf {TM}$ are
called the {\it $\nu$-principal curvatures} of $\alpha$. The
points where $k = k_1 = k_2$ will be called the {\it
$\nu$-umbilic} points of $\alpha$ and define the set ${\mathcal
S}_u = {\mathcal S}_{u,\alpha}$. We say that $\alpha$ is {\it
$\nu$-umbilical} if each point of the immersion is $\nu$-umbilic.
Outside ${\mathcal S}_u$ are defined the {\it minimal},
$L_{m,\alpha}$, and the {\it maximal}, $L_{M,\alpha}$, {\it
$\nu$-principal line fields} of $\alpha$, which are the
eigenspaces of ${\mathcal W}_\alpha$ associated respectively to
$k_1$ and $k_2$. Generically the index of an isolated
$\nu$-umbilic point is $\pm {1\over 2}$ \cite{SR}. The integral
curves of the $\nu$-principal line fields are the {\it
$\nu$-principal curvature lines}.
In a local chart $(u,v)$ the $\nu$-principal curvature lines are
characterized as the solutions of the following quadratic
differential equation \cite{SR}
\begin{equation} \label{pcl}
(Fg_\nu - f_\nu G)dv^2+(E g_\nu - e_\nu G)dudv+(E f_\nu - F
e_\nu)du^2=0,
\end{equation}
\noindent where $E$, $F$ and $G$ are the coefficients of the first
fundamental form and $e_\nu = \langle\alpha_{uu}, \nu\rangle$, $f_\nu =
\langle\alpha_{uv},\nu\rangle$ and $g_\nu = \langle\alpha_{vv},\nu\rangle$ are the
coefficients of the {\it second fundamental form relative to}
$\nu$, denoted by $II_\nu = II_{\nu_\alpha}$. Equation (\ref{pcl})
is equivalently written as
\begin{equation} \label{pcl2}
Jac (II_\nu, I) = 0.
\end{equation}
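The equivalence between Equations (\ref{pcl}) and (\ref{pcl2}) can be checked symbolically: expanding the Jacobian yields $-4$ times the left-hand side of (\ref{pcl}). A SymPy sketch (coefficients treated as free symbols, with the subscript $\nu$ dropped):

```python
import sympy as sp

du, dv, E, F, G, e, f, g = sp.symbols('du dv E F G e f g')
I2 = E*du**2 + 2*F*du*dv + G*dv**2    # first fundamental form
II = e*du**2 + 2*f*du*dv + g*dv**2    # second fundamental form relative to nu

# Jacobian of (II, I) with respect to (du, dv).
jac = sp.expand(II.diff(du)*I2.diff(dv) - II.diff(dv)*I2.diff(du))

# Left-hand side of the quadratic equation of nu-principal curvature lines.
pcl = (F*g - f*G)*dv**2 + (E*g - e*G)*du*dv + (E*f - F*e)*du**2
residual = sp.expand(jac + 4*pcl)
```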
\section{Orthogonal asymptotic lines}\label{S:3} Let
$\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion of a
smooth oriented surface into $\mathbf R ^4$. In \cite{GS} Garcia
and Sotomayor prove the following theorem: Suppose that the image
of the surface $\mathbf{M}^2$ by $\alpha$ is contained in
$\mathbf R ^3$. Then the quartic differential equation of lines of
axial curvature is the product of the quadratic differential
equation of its principal curvature lines and the quadratic
differential equation of its mean curvature lines. It is
interesting to observe that every point of $\mathbf{M}^2$ is an
inflection point.
We have established in \cite{M} the following theorem: Let
$\alpha:\mathbf{M}^2\rightarrow \mathbf{S}^3(r)$ be an immersion
of a smooth oriented surface into a 3-dimensional sphere of radius
$r>0$. Consider the natural inclusion
$i:\mathbf{S}^3(r)\rightarrow \mathbf R ^4$ and the composition
$i\circ \alpha$ also denoted by $\alpha$. Then the quartic
differential equation of lines of axial curvature (\ref{1}) can be
written as
\begin{equation} \label{7}
Jac\{Jac(II_{\nu_1},II_{\nu_2}),I\} \cdot
Jac(II_{\nu_1},II_{\nu_2})=0,
\end{equation}
\noindent where the first expression in (\ref{7}) is the quadratic
differential equation of mean directionally curved lines (\ref{3})
and the second one is the quadratic differential equation of
asymptotic lines (\ref{5}).
It is interesting to observe that in the above construction the
asymptotic lines are orthogonal and the normal curvature of
$\alpha$ vanishes at every point. This is a particular case of the
following theorem proved in \cite{M2}, which was also obtained in
\cite{RS} using a different approach: Let
$\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion of a
smooth oriented surface with isolated inflection points. The
immersion $\alpha$ has orthogonal asymptotic lines if and only if
the normal curvature of $\alpha$ vanishes at every point.
We have established in \cite{M2} the following theorem: Let
$\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion of a
smooth oriented surface with isolated inflection points. The
quartic differential equation of lines of axial curvature
(\ref{1}) can be written as
\begin{equation} \label{8}
Jac\{Jac(II_{\nu_1},II_{\nu_2}),I\}\cdot
Jac(II_{\nu_1},II_{\nu_2})=0,
\end{equation}
\noindent where the first expression in (\ref{8}) is the quadratic
differential equation of mean directionally curved lines (\ref{3})
and the second one is the quadratic differential equation of
asymptotic lines (\ref{5}), if and only if the normal curvature of
$\alpha$ vanishes at every point.
We can prove the following corollary: Let
$\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion of a
smooth oriented surface into $\mathbf R ^4$. If the immersion
$\alpha$ has orthogonal asymptotic lines, then the inflection
points are exactly the points where the ellipse of curvature
degenerates to a point. In fact, from Equation (\ref{8})
\begin{equation} \label{9}
Jac\biggl(\|\eta-H\|^2,I
\biggr)=Jac\{Jac(II_{\nu_1},II_{\nu_2}),I\}\cdot
Jac(II_{\nu_1},II_{\nu_2})=0.
\end{equation}
\noindent As the inflection points are singularities of asymptotic
lines, by (\ref{9}) they are also singularities of lines of axial
curvature. But the singularities of lines of axial curvature are
the points where the ellipse of curvature becomes either a circle
or a point. Thus the only possibility in this case is that the
ellipse of curvature becomes a point.
\begin{teo} \label{teo1}
Let $\alpha:\mathbf{M}^2\rightarrow \mathbf{S}^3(r)$ be an
immersion of a smooth oriented surface into a 3-dimensional sphere
of radius $r>0$. Consider the natural inclusion
$i:\mathbf{S}^3(r)\rightarrow \mathbf R ^4$ and the composition
$i\circ \alpha$ also denoted by $\alpha$. Then there exist a
unitary normal vector field $\nu$ and $\lambda > 0$ such that the
ellipse of curvature $\varepsilon_{\alpha}(p)$ is a line segment
with the following property: the distance from the projection of
$\varepsilon_{\alpha}(p)$ onto the $\nu$-axis to $p$ is $\lambda$,
for all $p \in \mathbf{M}^2$.
\end{teo}
\noindent{\bf Proof.} Let $\{\nu_1,\nu_2\}$ be a frame of vector
fields orthonormal to $\alpha$, where $\nu_1(p)\in
T_{p}\mathbf{S}^3(r)$ and $\nu_2(p)$ is the inward normal to
$\mathbf{S}^3(r)$, for all $p\in \mathbf{M}^2$. Thus
\[
\nu_2\equiv - {\frac{1} {r} \; {\alpha}},\; e_2 = \frac{1} {r} \;
E,\; f_2 = \frac{1} {r}\; F\; \mbox{and}\; g_2 = \frac{1} {r} \;
G,
\]
where $E$, $F$ and $G$ are the coefficients of the first
fundamental form of $\alpha$. It follows that
\[
II_{\nu_2} = \frac{1} {r} \; I.
\]
Now
\[
\eta=\frac{II}{I}=\frac{II_{\nu_1}}{I} \; \nu_1 +
\frac{II_{\nu_2}}{I} \; \nu_2= \frac{II_{\nu_1}}{I} \; \nu_1 +
\frac{1} {r} \; \nu_2.
\]
This implies that the ellipse of curvature
$\varepsilon_{\alpha}(p)$ is a line segment orthogonal to $\nu_2$,
for all $p\in \mathbf{M}^2$. Define $\nu = \nu_2$ and $\lambda =
\frac{1}{r}$. The theorem is proved.
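The relations $e_2 = E/r$, $f_2 = F/r$ and $g_2 = G/r$ used in the proof can be checked on a concrete hyperspherical immersion; the sketch below uses the Clifford torus in $\mathbf{S}^3(r)$, an illustrative choice on our part:

```python
import sympy as sp

u, v, r = sp.symbols('u v r', positive=True)
c = r/sp.sqrt(2)
# Clifford torus inside S^3(r), an illustrative hyperspherical immersion.
alpha = sp.Matrix([c*sp.cos(u), c*sp.sin(u), c*sp.cos(v), c*sp.sin(v)])
nu2 = -alpha/r                        # inward unit normal of the hypersphere

au, av = alpha.diff(u), alpha.diff(v)
E, F, G = au.dot(au), au.dot(av), av.dot(av)
e2 = alpha.diff(u, 2).dot(nu2)
f2 = au.diff(v).dot(nu2)
g2 = alpha.diff(v, 2).dot(nu2)

on_sphere = sp.simplify(alpha.dot(alpha) - r**2) == 0
ok = all(sp.simplify(x) == 0 for x in (e2 - E/r, f2 - F/r, g2 - G/r))
```

It follows that $II_{\nu_2} = I/r$ for this immersion, as asserted in the proof.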
Let $\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion
of a smooth oriented surface with globally defined orthogonal
asymptotic lines. Then the ellipse of curvature
$\varepsilon_{\alpha}(p)$ is a line segment for all $p \in
\mathbf{M}^2$ except at the inflection points. We say that the
immersion $\alpha$ has {\it constant projection} if there exist a
unitary normal vector field $\nu$ and $r > 0$ such that the
distance from the projection of $\varepsilon_{\alpha}(p)$ onto the
$\nu$-axis to $p$ (the origin of $N_p \mathbf{M}^2$) is $r$, for
all $p \in \mathbf{M}^2$. The constant $r$ is called {\it distance
of projection}.
Theorem \ref{teo1} shows that if $\alpha$ is hyperspherical then
$\alpha$ has constant projection whose distance of projection is
$r^{-1}$, where $r$ is the radius of the hypersphere. The converse
of Theorem~\ref{teo1} is given by the following theorem.
\begin{teo} \label{teo2}
Let $\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion
of a smooth oriented surface with globally defined orthogonal
asymptotic lines. Suppose that $\alpha$ has constant projection
with distance of projection $r > 0$, and the Gaussian curvature $K
\neq r^2$. Then $\alpha$ is hyperspherical.
\end{teo}
\noindent{\bf Proof.} Since all the notions of this paper are
independent of the chart, it is enough to prove this theorem for
an orthogonal one. By hypothesis there is a unitary normal vector
field $\nu$ orthogonal to $\varepsilon_{\alpha}(p)$, for all $p
\in {\mathbf{M}^2}$. We can take $\{\nu_1 = \nu^\perp,\nu_2 = \nu
\}$ a frame of vector fields orthonormal to $\alpha$, where
$\nu^\perp(p)$ is parallel to $\varepsilon_{\alpha}(p)$, such that
$\{\alpha_u,\alpha_v,\nu^\perp,\nu\}$ is a positive frame of
$\mathbf R ^4$, for a positive orthogonal chart $(u,v)$ of
$\mathbf{M}^2$. Thus $e_2 = r E, \; f_2 = 0, \; g_2 = r G$. The
immersion $\alpha$ satisfies the Codazzi equations \cite{F}
\begin{equation}\label{c1}
(e_1)_v-(f_1)_u = \Gamma_{12}^1 e_1 + \left( \Gamma_{12}^2 - \Gamma_{11}^1 \right)f_1-
\Gamma_{11}^2 g_1 - a_{12}^3 e_2 + a_{11}^3 f_2,
\end{equation}
\begin{equation}\label{c2}
(e_2)_v-(f_2)_u = \Gamma_{12}^1 e_2 + \left( \Gamma_{12}^2 - \Gamma_{11}^1 \right)f_2-
\Gamma_{11}^2 g_2 - a_{12}^3 e_1 + a_{11}^3 f_1,
\end{equation}
\begin{equation}\label{c3}
(f_1)_v-(g_1)_u = \Gamma_{22}^1 e_1 + \left( \Gamma_{22}^2 - \Gamma_{12}^1 \right)f_1-
\Gamma_{12}^2 g_1 + a_{12}^3 f_2 - a_{11}^3 g_2,
\end{equation}
\begin{equation}\label{c4}
(f_2)_v-(g_2)_u = \Gamma_{22}^1 e_2 + \left( \Gamma_{22}^2 - \Gamma_{12}^1 \right)f_2-
\Gamma_{12}^2 g_2 - a_{12}^3 f_1 + a_{11}^3 g_1,
\end{equation}
\noindent and the following structure equations \cite{F}
\begin{equation}\label{s1}
(\nu^\perp)_u = a_{11}^1 \alpha_u + a_{11}^2 \alpha_v + a_{11}^3
\nu,
\end{equation}
\begin{equation}\label{s2}
(\nu^\perp)_v = a_{12}^1 \alpha_u + a_{12}^2 \alpha_v + a_{12}^3
\nu,
\end{equation}
\begin{equation}\label{s3}
\nu_u = a_{21}^1 \alpha_u + a_{21}^2 \alpha_v - a_{11}^3 \nu^\perp,
\end{equation}
\begin{equation}\label{s4}
\nu_v = a_{22}^1 \alpha_u + a_{22}^2 \alpha_v - a_{12}^3 \nu^\perp,
\end{equation}
\noindent where
\[
a_{11}^1 = \frac{f_1F-e_1G}{EG-F^2}, \; a_{11}^2 = \frac{e_1F-f_1E}{EG-F^2}, \;
a_{12}^1 = \frac{g_1F-f_1G}{EG-F^2}, \; a_{12}^2 = \frac{f_1F-g_1E}{EG-F^2},
\]
\[
a_{21}^1 = \frac{f_2F-e_2G}{EG-F^2}, \; a_{21}^2 = \frac{e_2F-f_2E}{EG-F^2}, \;
a_{22}^1 = \frac{g_2F-f_2G}{EG-F^2}, \; a_{22}^2 = \frac{f_2F-g_2E}{EG-F^2},
\]
\noindent and $\Gamma_{ij}^k$ are the Christoffel symbols of $\alpha$ \cite{F},
$i,j,k = 1, 2$, which in this case are given by
\[
\Gamma_{11}^1=\frac {E_u}{2E}, \; \Gamma_{11}^2= - \frac
{E_v}{2G}, \; \Gamma_{12}^1=\frac {E_v}{2E}, \;
\Gamma_{12}^2=\frac {G_u}{2G}, \; \Gamma_{22}^1= - \frac
{G_u}{2E}, \; \Gamma_{22}^2=\frac {G_v}{2G}.
\]
\noindent Substituting the above Christoffel symbols in the
Codazzi equations (\ref{c2}) and (\ref{c4}) we have respectively
\begin{equation}\label{c5}
rE_v = \frac{E_v}{2E}rE + \frac{E_v}{2G}rG - a_{12}^3 e_1 +
a_{11}^3 f_1
\end{equation}
\noindent and
\begin{equation}\label{c6}
-rG_u = - \frac{G_u}{2E}rE - \frac{G_u}{2G}rG - a_{12}^3 f_1 +
a_{11}^3 g_1.
\end{equation}
\noindent But Equations (\ref{c5}) and (\ref{c6}) are equivalent
to
\begin{equation}\label{c7}
-a_{12}^3 e_1 + a_{11}^3 f_1 = 0
\end{equation}
\noindent and
\begin{equation}\label{c8}
-a_{12}^3 f_1 + a_{11}^3 g_1 = 0,
\end{equation}
\noindent respectively. Now the Gaussian curvature is
\[
K = \frac{e_1 g_1 -(f_1)^2}{EG} + \frac{e_2 g_2}{EG} = \frac{e_1
g_1 -(f_1)^2}{EG} + r^2.
\]
By hypothesis $K \neq r^2$, and thus
\begin{equation}\label{ga}
e_1 g_1 - (f_1)^2 \neq 0.
\end{equation}
\noindent From the Equations (\ref{c7}), (\ref{c8}) and (\ref{ga})
we have that
\begin{equation}\label{s5}
a_{11}^3=a_{12}^3 = 0.
\end{equation}
\noindent Substituting Equation (\ref{s5}) in (\ref{s3}) and
(\ref{s4}) results that
\[
\nu_u = -r \alpha_u \; \mbox{and}\; \nu_v = -r \alpha_v.
\]
Thus
\[
\nu = -r \alpha + \gamma,
\]
where $\gamma$ is a constant vector. Therefore
\[
\alpha = \frac {\gamma}{r} - \frac {1}{r} \nu.
\]
This means that $\alpha(\mathbf{M}^2)$ belongs to a hypersphere
with center $\frac{\gamma}{r}$ and radius $\frac{1}{r}$. The
theorem is proved.
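As a side check, the Christoffel symbols listed in the proof follow from the general formula $\Gamma_{ij}^k = \tfrac{1}{2}\sum_l g^{kl}(\partial_j g_{il} + \partial_i g_{jl} - \partial_l g_{ij})$ specialized to an orthogonal chart ($F = 0$). A SymPy verification sketch:

```python
import sympy as sp

u, v = sp.symbols('u v')
E = sp.Function('E')(u, v)
G = sp.Function('G')(u, v)
g = sp.Matrix([[E, 0], [0, G]])   # metric of an orthogonal chart (F = 0)
ginv = g.inv()
x = (u, v)

def Gamma(i, j, k):
    """Christoffel symbols of the second kind for the metric g."""
    return sum(ginv[k, l]*(g[i, l].diff(x[j]) + g[j, l].diff(x[i])
                           - g[i, j].diff(x[l]))/2 for l in range(2))

checks = [
    (Gamma(0, 0, 0), E.diff(u)/(2*E)),    # Gamma^1_11
    (Gamma(0, 0, 1), -E.diff(v)/(2*G)),   # Gamma^2_11
    (Gamma(0, 1, 0), E.diff(v)/(2*E)),    # Gamma^1_12
    (Gamma(0, 1, 1), G.diff(u)/(2*G)),    # Gamma^2_12
    (Gamma(1, 1, 0), -G.diff(u)/(2*E)),   # Gamma^1_22
    (Gamma(1, 1, 1), G.diff(v)/(2*G)),    # Gamma^2_22
]
ok = all(sp.simplify(lhs - rhs) == 0 for lhs, rhs in checks)
```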
The proof of the following theorem is immediate from the proof of
Theorem \ref{teo1}.
\begin{teo} \label{teo3}
Let $\alpha:\mathbf{M}^2\rightarrow \mathbf{S}^3(r)$ be an
immersion of a smooth oriented surface into a 3-dimensional sphere
of radius $r>0$. Consider the natural inclusion
$i:\mathbf{S}^3(r)\rightarrow \mathbf R ^4$ and the composition
$i\circ \alpha$ also denoted by $\alpha$. Then there exist a
unitary normal vector field $\nu$ and $\lambda > 0$ such that
$II_\nu = \langle d^2 \alpha,\nu \rangle = \lambda I $.
\end{teo}
The converse of Theorem \ref{teo3} is given by the following
theorem.
\begin{teo} \label{teo4}
Let $\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion
of a smooth oriented surface. Suppose that $\nu$ is a unitary
normal vector field such that $II_\nu = \langle d^2 \alpha,\nu
\rangle = \lambda I $, where $\lambda$ is a nonzero constant, and
the Gaussian curvature $K \neq \lambda^2$. Then $\alpha$ is
hyperspherical.
\end{teo}
\noindent {\bf Proof.} Take the positive frame
$\{\alpha_u,\alpha_v,\nu^\perp,\nu \}$. As $II_\nu = \langle d^2
\alpha,\nu \rangle = \lambda I $ we have
\[
\eta=\frac{II}{I}=\frac{II_{\nu^\perp}}{I} \; \nu^\perp +
\frac{II_\nu}{I} \; \nu = \frac{II_{\nu^\perp}}{I} \; \nu^\perp +
\lambda \; \nu.
\]
This implies that the ellipse of curvature
$\varepsilon_{\alpha}(p)$ is a line segment such that the distance
from its projection onto the $\nu$-axis to $p$ is constant and
equal to $\lambda$, for all $p\in \mathbf{M}^2$. Therefore $\alpha$ has
constant projection with distance of projection $\lambda > 0$. As
$K \neq \lambda^2$ the theorem follows from Theorem \ref{teo2}.
Let $\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion
of a smooth oriented surface with globally defined orthogonal
asymptotic lines. Then the normal curvature of $\alpha$ vanishes
at every point. So there exist normal vector fields $\nu$ and
$\nu^\perp$ such that
\[
\eta=\frac{II}{I}=\frac{II_{\nu^\perp}}{I} \; \nu^\perp +
\frac{II_\nu}{I} \; \nu = \frac{II_{\nu^\perp}}{I} \; \nu^\perp +
\lambda \; \nu.
\]
\noindent Thus $II_\nu = \lambda I $, where $\lambda$ is a
positive scalar function on $\mathbf{M}^2$. This implies that
$\alpha$ is $\nu$-umbilical. The differential equation of
asymptotic lines (\ref{5}) is given by
\[
0 = Jac(II_{\nu^\perp},II_{\nu}) = Jac(II_{\nu^\perp},\lambda I),
\]
which is equivalent to
\[
Jac(II_{\nu^\perp},I) = 0.
\]
But this equation is the differential equation of
$\nu^\perp$-principal curvature lines (\ref{pcl2}).
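This reduction can be confirmed symbolically: substituting $e_2 = \lambda E$, $f_2 = \lambda F$, $g_2 = \lambda G$ into Equation (\ref{6}) yields $-\lambda$ times the left-hand side of Equation (\ref{pcl}). A SymPy sketch (coefficients as free symbols):

```python
import sympy as sp

E, F, G, e1, f1, g1, lam, du, dv = sp.symbols('E F G e1 f1 g1 lambda du dv')
e2, f2, g2 = lam*E, lam*F, lam*G      # nu-umbilical case: II_nu = lambda * I

# Left-hand side of the asymptotic line equation (T_1, T_2, T_3 from the text).
asymptotic = (e1*f2 - e2*f1)*du**2 + (e1*g2 - e2*g1)*du*dv + (f1*g2 - f2*g1)*dv**2
# Left-hand side of the nu-perp principal curvature line equation.
principal = (E*f1 - F*e1)*du**2 + (E*g1 - e1*G)*du*dv + (F*g1 - f1*G)*dv**2

residual = sp.expand(asymptotic + lam*principal)
```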
Theorem 3.4 of \cite{RS}, Lemma 2.1 and Theorem 2.1 of \cite{M2},
and the above results are put together in the next theorem.
\begin{teo} \label{teo5}
Let $\alpha:\mathbf{M}^2\rightarrow \mathbf R ^4$ be an immersion
of a smooth oriented surface. The following are equivalent
conditions on $\alpha$:
\begin{enumerate}
\item[\textrm a)] The immersion $\alpha$ has everywhere defined
orthogonal asymptotic lines;
\item[\textrm b)] The normal curvature of $\alpha$ vanishes at
every point;
\item[\textrm c)] The immersion $\alpha$ is $\nu$-umbilical for
some unitary normal vector field $\nu$;
\item[\textrm d)] All points of $\alpha$ are semiumbilic;
\item[\textrm e)] There exist a positive scalar function $\lambda$
and a unitary normal vector field $\nu$ such that the second
fundamental form relative to $\nu$ is given by $II_\nu = \lambda
I$;
\item[\textrm f)] The asymptotic lines coincide with the lines of
axial curvature defined by the large axis of the ellipse of
curvature;
\item[\textrm g)] The asymptotic lines coincide with the
$\nu^\perp$-principal curvature lines, for some unitary normal
vector field $\nu$;
\item[\textrm h)] The quartic differential equation of lines of
axial curvature is the product of the quadratic differential
equations of mean directionally curved lines and asymptotic lines.
\end{enumerate}
\noindent Furthermore, if the above function $\lambda$ is a
nonzero constant and the Gaussian curvature $K \neq \lambda^2$,
then $\alpha$ is hyperspherical.
\end{teo}
\section{Concluding remarks}\label{S:4} One direction of research can be
stated: To give an example of a non-hyperspherical immersion
$\alpha$ of a smooth oriented surface in $\mathbf R ^4$ with
globally defined orthogonal asymptotic lines having an isolated
inflection point.
Another direction of research emerges with the evaluation of
index of an isolated $\nu$-umbilic point. This is related to the
upper bound 1 for the umbilic index on surfaces immersed in
$\mathbf R ^3$ and the Carath\'eodory conjecture (see \cite{SM}
and references therein). Gutierrez and S\'anchez-Bringas
\cite{GuS} have shown that this bound does not hold for the $\nu$
approach.
\section{Introduction}
\IEEEPARstart{C}{onvolutional} Neural Networks (CNNs) have recently gained a lot of attention as they outperform classical handcrafted methods in almost every computer vision task where data scarcity is not an issue. In biomedical image analysis, data are abundant. However, obtaining high-quality and consistently labeled images is expensive, as data curation and annotation require hours of work from well-trained experts \cite{greenspan2016guest}. Thus, the effective number of training examples is often low. This limitation is usually handled by transfer learning and data augmentation. Transfer learning, the process of fine-tuning a network trained on another task to the task at hand, is very common for 2D images. For 3D images, however, the lack of very large datasets hinders the availability of pre-trained models. Another approach, data augmentation, refers to the application of geometric transforms and perturbations to the training examples to make the CNN invariant to these distortions~\cite{shorten2019survey}. The cost of data augmentation is a substantial increase in the data size, leading to a slower convergence rate and a potential waste of trainable parameters.
A lot of recent research has focused on how to build CNNs that are invariant to these transforms by imposing constraints on the architecture of the network~\cite{CoW2016b, weiler2017learning, andrearczyk2020local, eickenberg2017solid}. The motivation of these approaches is to obviate the need to learn these invariances from the data and their transformation. As a result, an effective reduction of the number of trainable parameters is achieved and, potentially, a reduction of the number of training examples needed for the generalization of the network.
This work focuses on 3D biomedical texture analysis and on the design of CNNs that are invariant to local 3D rotations, \textit{i.e.}, rotations of individual local patterns. This invariance is obtained using continuously defined Rotation Invariant (RI) descriptors of functions on the sphere. By relying on a continuous-domain formulation, we avoid the difficulties associated with rotations of discretized images~\cite{vivaldi2006arithmetic, ke2014rotation}. Neighborhoods defined by learned radial profiles are used to locally project the image on the solid sphere. These descriptors are used together with a convolution operation to obtain Locally Rotation Invariant~(LRI)\footnote{LRI is used for Locally Rotation Invariant and Local Rotation Invariance interchangeably} operators in the 3D image domain as proposed in \cite{andrearczyk2019exploring}. These types of operators are relevant in biomedical texture analysis where discriminative patterns appear at random positions and orientations. The RI descriptors used in \cite{andrearczyk2019exploring, andrearczyk2019solid, andrearczyk2020local, eickenberg2017solid, weiler20183d} and in the present work are derived from the Spherical Harmonics (SH) decomposition of the kernels. The SHs are the generalization of the Circular Harmonics (CH) to the 2D sphere~\cite{gallier2009}. These two families of functions are intimately linked with Fourier theory, and both decompositions correspond to the Fourier transform of the function defined on the sphere $\mathbb{S}^2$ for the SHs and on the circle $\mathbb{S}^1$ for the CHs.
To better apprehend the two invariants considered in this work, namely the spectrum and the bispectrum, it is useful to consider them on the circle. The CH expansion of a function $f \in L_2(\mathbb{S}^1)$ for a degree $n$ is computed as $\widehat{f}_n = \frac{1}{2\pi} \int_0^{2\pi} f(\theta) e^{-\mathrm{j} \theta n} \mathrm{d} \theta$, which is the Fourier series for $2\pi$-periodic functions. For $m,n \in \mathbb{Z}$, the spectrum of the CH expansion is calculated as $s_n(f) = \widehat{f}_n \widehat{f}_n^* = |\widehat{f}_n|^2$ and the bispectrum as $b_{n,m}(f) = \widehat{f}_n \widehat{f}_m \widehat{f}_{n+m}^*$. One readily verifies that for a function $g(\theta) = f(\theta -\theta_0)$ we have for any $m,n \in \mathbb{Z}$ the equalities $s_n(f)=s_n(g)$ and $b_{n,m}(f) = b_{n,m}(g)$, since $\widehat{g}_n = \widehat{f}_n e^{-\mathrm{j}\theta_0 n}$. This means that the spectrum and bispectrum are RI, since a shift $\theta_0$ in the parameter of $f$ is equivalent to a rotation on the circle. The spectrum is the most simple, yet informative, Fourier-based RI quantity. However, it discards the phase between harmonics which contains all the information on how the sinusoids from the expansion add up to form edges and ridges \cite[Chapter 10]{smith1997scientist}. The bispectrum, on the contrary, conserves the phase information~\cite{kakarala2010} and constitutes a more specific pattern descriptor.
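These circle-domain invariances are straightforward to verify numerically, with the discrete Fourier transform playing the role of the CH expansion on a uniform grid. A NumPy sketch (grid size, shift, and test function are arbitrary choices):

```python
import numpy as np

N = 64
theta = 2*np.pi*np.arange(N)/N
f = np.exp(np.cos(theta)) + 0.5*np.sin(3*theta)  # arbitrary function on the circle
k = 5
g = np.roll(f, k)                                # rotation: g(t) = f(t - 2*pi*k/N)

fh = np.fft.fft(f)/N                             # CH coefficients of f
gh = np.fft.fft(g)/N                             # CH coefficients of the rotated f

def spectrum(h, n):
    return (h[n]*np.conj(h[n])).real

def bispectrum(h, n, m):
    return h[n]*h[m]*np.conj(h[(n + m) % N])

coeffs_differ = not np.allclose(fh[1], gh[1])    # the phase does change
spec_ok = all(np.isclose(spectrum(fh, n), spectrum(gh, n)) for n in range(5))
bisp_ok = all(np.isclose(bispectrum(fh, n, m), bispectrum(gh, n, m))
              for n in range(5) for m in range(5))
```

Note that the individual CH coefficients do change under rotation; only their spectrum and bispectrum are preserved, which is what the flags above check.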
The main contributions of this paper are the introduction of a novel image operator based on the Solid Spherical Bispectrum (SSB) that is LRI and a corresponding CNN layer, resulting in a locally rotation invariant CNN. This work builds upon~\cite{andrearczyk2019solid}, where a Solid Spherical Energy (SSE) layer was proposed.
The radial profiles used to locally project the image on the solid sphere as well as the relative importance of the bispectrum coefficients can be learned end to end with backpropagation.
We experimentally investigate the relevance of the proposed SSB layer for biomedical texture classification. Finally, we study the ability of the SSB-CNN to learn with small amounts of data and compare it with a classical CNN.
This manuscript is organized as follows. In Section~\ref{sec:related_work}, we review the main related works. Sections~\ref{sec:notations} to~\ref{sec:SH} describe the nomenclature and the mathematical tools used in this work. The definitions of the spectrum and bispectrum for functions defined on the sphere are reported in Section~\ref{sec:RIsphere} and are drawn from the work of Kakarala and Mao~\cite{kakarala2010}. We recall the theoretical benefits of the bispectrum over the spectrum in Section~\ref{sec:bisp_vs_sp_theory}. In Section~\ref{sec:lri_solid_sphere}, we define the SSE and SSB image operators and state that they are LRI. In Section \ref{sec:implementation}, we discuss the implementation details to integrate these image operators into a convolutional layer, referred to as the SSE or SSB layer.
Sections~\ref{sec:experiments_and_results} and~\ref{sec:discussions}
detail and discuss the experimental evaluation of the proposed approach.
Conclusions and perspectives are provided in Section~\ref{sec:conclusion}.
\vspace{-0.3cm}
\section{Related Work}\label{sec:related_work}
\subsection{Rotation Invariant Image Analysis}
Combining LRI and directional sensitivity is not straightforward, and the two properties are often antagonistic in simple designs~\cite{depeursinge2018rotation,andrearczyk2020local}. Several methods exist to combine both properties. Ojala \emph{et al.}~\cite{OPM2002} proposed the Local Binary Patterns (LBP), where the values of pixels within a circular neighborhood are compared to that of the central pixel. Pixels of the neighborhood are thresholded based on the central pixel to generate a binary code. LRI is achieved by circularly shifting the binary code to obtain the smallest binary number.
Several LRI filtering approaches were proposed.
Varma and Zisserman~\cite{VaZ2005} used a filter-bank including the same filters at different orientations, where LRI is achieved by max pooling over the orientations.
Instead of explicitly computing responses (\emph{i.e.} convolving) to oriented filters, steerable filters can be used to improve efficiency~\cite{freeman1991design,Unser2013steerable}.
The work of Perona~\cite{perona1992steerable} shows the use of steerable filters for LRI edges and junctions analysis.
Dicente \emph{et al.}~\cite{DicenteCid2017} used a filter-bank composed of steerable Riesz wavelets. LRI is obtained by locally aligning the filters to the direction maximizing the gradient of the image. Data-driven steerable filters were used in~\cite{fageot2018principled} as LRI detectors of a given template within an image. Steerable Wavelet Machines (SWMs) were proposed in~\cite{DPW2017}, where task-specific profiles of steerable wavelets are learned using support vector machines.
Other approaches have been described to obtain invariants without explicitly rotating the descriptors. Such methods rely on moments~\cite{flusser2009moments}
or invariants built from the SH decomposition \cite{kakarala2012bispectrum}. Kakarala and Mao introduced the bispectrum of the SH decomposition in~\cite{kakarala2010} and they demonstrated the superiority of the bispectrum over the spectrum for 3D shape discrimination. In~\cite{kakarala2012bispectrum}, Kakarala showed that the bispectrum has better properties and contains more information than the spectrum, also proving its completeness for functions defined on compact groups. More recently, an extension of the spectral and bispectral invariants was used by Zucchelli \emph{et al.}~\cite{zucchelli2020computational} for the analysis of diffusion Magnetic Resonance Imaging data.
In~\cite{depeursinge2018rotation,eickenberg2017solid}, the authors used the spectrum of the SH expansion to compute LRI operators. Their work shares similarities with the method presented here. However, our approach is more data-driven since we learn the radial profiles, whereas they rely on handcrafted ones.
\vspace{-0.3cm}
\subsection{Rotation Equivariance in CNNs}
Recently, several research contributions focused on the explicit encoding of rotation equivariance into CNNs. One group of methods relies on the extension of the classic convolution on the group of translations to groups of symmetries including rotations and reflections. A detailed description of the generalization of the convolution to compact groups is given in~\cite{kondor2018generalization} and to homogeneous spaces in \cite{cohen2019general}. Regarding the application of this generalization, Cohen and Welling~\cite{CoW2016b} used rotations of the filters together with recombinations of the response maps, which is performed according to the rules of group theory and allows equivariance to 2D right-angle rotations.
The same strategy was extended to 3D images in~\cite{winkels2019pulmonary,worrall2018cubenet}.
This 3D group CNN was applied to 3D texture classification in~\cite{andrearczyk2018rotational}. Bekkers \emph{et al.}~\cite{bekkers2018roto} used the convolution on the discretized group of 2D roto-translations. Weiler \emph{et al.}~\cite{weiler2017learning} proposed a CH kernel representation to achieve a more efficient rotation of the filters via steerability, still in the context of the convolution on groups. Cohen and Welling~\cite{cohen2016steerable} used the irreducible representation of the dihedral group to build CNNs that are equivariant to 2D discrete rotations.
The aforementioned methods offer the possibility to encode the equivariance to virtually any finite group. The 2D rotation group $SO(2)$ can be uniformly discretized by choosing a finite subgroup of $SO(2)$ with an arbitrarily large number of elements. This is no longer the case for 3D rotations, since there are only 5 regular convex polyhedra \cite[Chapter 10]{coxeter1961introduction}.
Therefore, approaches allowing for the propagation of the rotational equivariance without explicitly sampling the different orientations are crucial in 3D. Methods involving CHs and SHs have been introduced to address this problem. Worrall \emph{et al.}~\cite{WGT2016} used a CH representation of the kernels together with a complex convolution and complex non-linearities to achieve rotational equivariance. The main drawback is that it generates many channels that must be disentangled to achieve rotation invariance. An SH representation of the kernels was used in~\cite{weiler20183d} to propagate the equivariance as a generalization of~\cite{WGT2016} to 3D images. It is also possible to adapt neural networks to non-Euclidean domains, for instance, to the 2D sphere, where the invariance to rotations plays a crucial role as in~\cite{kondor2018clebsch} and~\cite{cohen2018spherical}. Finally, the group convolution can be extended to more general Lie groups as proposed by Bekkers in~\cite{bekkers2019b}, where CNNs equivariant to roto-translation and scale-translation were implemented.
Most of these methods focused on the propagation of the rotation equivariance throughout the network, whereas we propose lightweight networks discarding this information after each LRI layer, similarly to~\cite{andrearczyk2020local}.
\section{Methods}\label{sec:methods}
\subsection{Notations and Terminology}\label{sec:notations}
We consider 3D images as functions $I \in L_2(\mathbb{R}^3)$, where the value $I(\bm{x}) \in \mathbb{R}$ corresponds to the gray level at location $\bm{x} = (x_1,x_2,x_3) \in \mathbb{R}^3$. The set of 3D rotation matrices in the Cartesian space is denoted as $SO(3)$. The rotation of an image $I$ is written as $I(\mathrm{R} \cdot)$, where $\mathrm{R} \in SO(3)$ is the corresponding rotation matrix.
The sphere is denoted as $\mathbb{S}^2 = \{ \bm{x} \in \mathbb{R}^3 : ||\bm{x}||_2 = 1\}$. Spherical coordinates are defined as $(\rho,\theta,\phi)$ with radius $\rho \geq 0$, elevation angle $\theta \in [0,\pi]$, and horizontal plane angle $\phi \in [0,2\pi)$. Functions defined on the sphere are written as $f\in L_2(\mathbb{S}^2)$ and are expressed in spherical coordinates. The inner product for $f, g \in L_2(\mathbb{S}^2)$ is defined by $\langle f \,, g \rangle_{L_2(\mathbb{S}^2)} = \int_0^\pi \int_0^{2\pi} f(\theta, \phi) \overline{g(\theta, \phi)} \sin(\theta)\text{\textup{d}}\phi \text{\textup{d}}\theta$. With a slight abuse of notation, the rotation of a function $f \in L_2(\mathbb{S}^2)$ is written as $f(\mathrm{R}\cdot)$, despite the fact that spherical functions are expressed in spherical coordinates.
The Kronecker delta $\delta[\cdot]$ is such that $\delta[n]=1$ for $n=0$ and $\delta[n]=0$ otherwise. The Kronecker product is denoted by $\otimes$. The triangle function is referred to as $\text{tri}(x)$ and is defined as $\text{tri}(x)=1-|x|$ if $|x| < 1$ and $\text{tri}(x)=0$ otherwise.
A block diagonal matrix formed by the sub-matrices $\mathrm{A}_i$ is written as $\left[\bigoplus_i \mathrm{A}_i\right]$. The Hermitian transpose is denoted by~$\bm{^\dag}$.
\vspace{-0.3cm}
\subsection{LRI Operators}\label{sec:lri_def}
This work focuses on image operators $\mathcal{G}$ that are LRI as previously introduced in \cite{andrearczyk2020local}. An operator $\mathcal{G}$ is LRI if it satisfies the three following properties:
\begin{itemize}
\item \emph{Locality}: there
exists $\rho_0 > 0$ such that, for every $\bm{x} \in \mathbb{R}^3$ and every image $I \in L_2(\mathbb{R}^3)$, the
quantity $\mathcal{G} \{I\} (\bm{x})$ only depends on local
image values $I(\bm{y})$ for $\lVert \bm{y} - \bm{x}\rVert
\leq \rho_0$.
\item \emph{Global equivariance to translations:} For any $I \in L_2(\mathbb{R}^3)$,
\begin{equation*}
\mathcal{G}\{ I (\cdot - \bm{x}_0) \} = \mathcal{G}\{I\}
(\cdot - \bm{x}_0) \quad \text{ for any } \bm{x}_0 \in
\mathbb{R}^3. \label{eq:transinv} \\
\end{equation*}
\item \emph{Global equivariance to rotations:} For any $I \in L_2(\mathbb{R}^3)$,
\begin{equation*}
\mathcal{G}\{ I(\mathrm{R}_0 \cdot) \} =
\mathcal{G}\{I\} (\mathrm{R}_0\cdot) \quad \text{ for
any } \mathrm{R}_0 \in SO(3). \label{eq:rotinv}
\end{equation*}
\end{itemize}
To reconcile the intuition of LRI with this definition, let us consider a simple scenario where two images $I_1$ and $I_2$ are composed of the same small template $\tau \in L_2(\mathbb{R}^3)$ appearing at random locations and orientations and distant enough to avoid overlaps between them. The locations of the templates $\tau$ are identical for $I_1$ and $I_2$, the difference between the two images being in the local orientation of the templates. These images can be written as
\begin{equation*}
I_k = \sum_{1 \leq j \leq J} \tau (\mathrm{R}_{j,k} (\cdot - \bm{x}_j)),
\end{equation*}
where $J$ is the number of occurrences of the template $\tau$ and $k=1,2$. The local orientation and position of the $j^{\textrm{th}}$ template in image $k$ are represented by $\mathrm{R}_{j,k}$ and $\bm{x}_j$, respectively.
If the operator $\mathcal{G}$ is LRI, then for any $1 \leq j \leq J$ and any rotations $\mathrm{R}_{j,1}$, $\mathrm{R}_{j,2} \in SO(3)$,
\begin{equation*}
\mathcal{G}\{I_1\}(\bm{x}_j) = \mathcal{G}\{I_{2}\}(\bm{x}_j).
\end{equation*}
From the definition of LRI, this equality is required to hold only at the center of the templates. This example is illustrated in Fig. \ref{fig:lri}, where only the responses at the center of the templates are represented.
\begin{figure}
\includegraphics[width=0.48\textwidth]{figures/fig_lri.png}
\caption{Visual representation of the output of an LRI operator. Here, different rotations are applied at the template centers. For the sake of simplicity, only the output values at the template centers are represented.}
\label{fig:lri}
\end{figure}
In this work, the design of LRI operators is obtained in two steps.
First, the image $I \in L_2(\mathbb{R}^3)$ is convolved with SHs modulated by compactly supported radial profiles, referred to as solid SHs.
The second step involves the computation of RI descriptors for each position.
\vspace{-0.3cm}
\subsection{Spherical Harmonics}\label{sec:SH}
Any function $f \in L_2(\mathbb{S}^2)$ can be expanded in the form of
\begin{equation}
\label{eq:sph_exp}
f(\theta, \phi) = \sum_{n=0}^\infty \sum_{m=-n}^n F_n^m Y_n^m(\theta, \phi),
\end{equation}
where $Y_n^m$ are the so-called SHs for a degree $n \in \mathbb{N}$ and order $m$ with $-n\leq m \leq n$. For their formal definition, see
\cite[Section 2.5]{depeursinge2018rotation} and for their visual representation, refer to Fig. \ref{fig:sph_family}. The SHs form an orthogonal basis of $L_2(\mathbb{S}^2)$~\cite[Chapter 5.6]{varshalovich1988quantum}. Thus, the expansion coefficients of Eq.~(\ref{eq:sph_exp}) can be computed by projecting $f$ onto
the SH basis using the inner product on the sphere
\begin{equation}
F_n^m = \langle f\,, Y_n^m \rangle_{L_2(\mathbb{S}^2)}.
\end{equation}
This expansion corresponds to the Fourier transform on the sphere.
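As an illustration (not part of the method itself), the projection $F_n^m = \langle f, Y_n^m\rangle_{L_2(\mathbb{S}^2)}$ can be approximated by trapezoidal quadrature on a $(\theta,\phi)$ grid. The sketch below uses SciPy's spherical harmonics; note that SciPy's angle naming is swapped with respect to ours, and the helper names are ours.

```python
import numpy as np

try:  # SciPy >= 1.15 exposes sph_harm_y(n, m, polar, azimuthal)
    from scipy.special import sph_harm_y

    def Y(m, n, theta, phi):
        return sph_harm_y(n, m, theta, phi)
except ImportError:  # older SciPy: sph_harm(m, n, azimuthal, polar)
    from scipy.special import sph_harm

    def Y(m, n, theta, phi):
        return sph_harm(m, n, phi, theta)

# NumPy 2.x renamed trapz to trapezoid
_trapz = getattr(np, "trapezoid", None) or np.trapz

def sh_coefficient(f, n, m, n_theta=401, n_phi=800):
    """Approximate F_n^m = <f, Y_n^m>_{L2(S^2)} by quadrature on the sphere."""
    theta = np.linspace(0.0, np.pi, n_theta)      # elevation angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)    # horizontal plane angle
    T, P = np.meshgrid(theta, phi, indexing="ij")
    integrand = f(T, P) * np.conj(Y(m, n, T, P)) * np.sin(T)
    return _trapz(_trapz(integrand, phi, axis=1), theta)
```

Since the SHs form an orthonormal basis, projecting $f = 2Y_3^1$ recovers $F_3^1 \simeq 2$ while the other coefficients vanish up to quadrature error.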
We regroup the coefficients of all orders $m$ for a given degree $n$ as the $1 \times (2n+1)$ vector
\begin{equation}
\bm{\mathcal{F}}_n = [F_n^{-n} \ldots F_n^0 \ldots F_n^n],
\end{equation}
called the spherical Fourier vector of degree $n$.
One important property of SHs is their steerability, \textit{i.e.}, the rotation of an SH can be expressed as a linear combination of the SHs of the same degree:
\begin{equation}
\label{eq:steerability_ynm}
Y_n^m(\mathrm{R}_0\cdot) = \sum_{m'=-n}^n [\mathrm{D}_n(\mathrm{R}_0)]_{m',m} Y_n^{m'},
\end{equation}
where $\mathrm{D}_n(\mathrm{R}_0)$ is a unitary matrix known as the Wigner
$\mathrm{D}$-matrix \cite[Chapter 4]{varshalovich1988quantum}.
Therefore, if two functions $f,f' \in L_2(\mathbb{S}^2)$ differ only by a rotation $\mathrm{R}_0 \in SO(3)$, \textit{i.e.}
$f'=f(\mathrm{R}_0 \cdot)$, their spherical Fourier vectors, $\bm{\mathcal{F}}_n$ and $\bm{\mathcal{F}}'_n$,
satisfy the following relation \cite[Section 3, Eq. (5)]{kakarala2010}:
\begin{equation}
\label{eq:wigner_rot}
\bm{\mathcal{F}}'_n = \bm{\mathcal{F}}_n \mathrm{D}_n(\mathrm{R}_0).
\end{equation}
This property is similar to the shifting property of the Fourier transform on the real line. In the spherical case, instead of multiplying by a complex exponential, the transform is multiplied by the Wigner $\mathrm{D}$-matrix of degree $n$ associated with the rotation $\mathrm{R}_0$.
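The analogy is most visible for rotations about the $z$-axis: the corresponding Wigner $\mathrm{D}$-matrix is diagonal and its entries are pure phases, exactly like the complex exponentials of the shift theorem. A quick numerical check with SymPy (the helper \texttt{wigner\_d} is ours; Euler-angle conventions may differ between references):

```python
import numpy as np
from sympy.physics.quantum.spin import Rotation

def wigner_d(j, alpha, beta, gamma):
    """Numeric (2j+1) x (2j+1) Wigner D-matrix for zyz Euler angles."""
    return np.array([[complex(Rotation.D(j, m, mp, alpha, beta, gamma).doit())
                      for mp in range(-j, j + 1)]
                     for m in range(-j, j + 1)])

# Rotation about the z-axis by 0.8 rad: D is diagonal with unit-modulus
# entries, so each coefficient F_n^m only acquires a phase.
D = wigner_d(1, 0.8, 0.0, 0.0)
```

For a general rotation the matrix is dense, but it remains unitary, which is the property used throughout this section.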
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/spherical_harmonics_half.png}
\caption{Visual representation of the real and imaginary part of the $h(r)Y_n^m(\theta, \phi)$ here $h$ is chosen Gaussian for simplicity. Each box represents a given SH with the real part on the left and the imaginary part on the right. The blue represents positive values and orange negative values. Only the SHs for $m\geq 0$ are represented since we have the following symmetry $Y_n^{-m} = (-1)^m \overline{Y_n^m}$.}
\label{fig:sph_family}
\end{figure}
\vspace{-0.3cm}
\subsection{Spherical RI: the Spectrum and the Bispectrum}\label{sec:RIsphere}
With the properties of the spherical Fourier vectors, it is possible to efficiently obtain RI operators for functions defined on the sphere. Two quantities computed from these coefficients will be of interest: the spherical spectrum and the spherical bispectrum.
\subsubsection{Spectrum}
The spectrum is a ubiquitous quantity in signal processing and is well known to provide a source of translation-invariant descriptors for periodic functions and functions defined on the real line. In these cases, the spectrum corresponds to the squared modulus of the Fourier transform. Its spherical equivalent, the spherical spectrum, is defined as the averaged squared norm of the spherical Fourier vector $\bm{\mathcal{F}}_n$:
\begin{equation}
\label{eq:spectrum}
s_n(f) = \frac{1}{2n+1} \bm{\mathcal{F}}_n \bm{\mathcal{F}}_n^{\bm{\dag}} =
\frac{1}{2n+1}\sum_{m=-n}^n |F_n^m|^2.
\end{equation}
\subsubsection{Bispectrum}
The bispectrum is defined as in \cite[Section 4, Eq. (24)]{kakarala2010}:
\begin{equation}
\label{eq:bispectrum}
b^{\ell}_{n,n'}(f) = [\bm{\mathcal{F}}_n \otimes \bm{\mathcal{F}}_{n'}] \mathrm{C}_{nn'} \widetilde{\bm{\mathcal{F}}_\ell}^{\dag},
\end{equation}
where the term $\bm{\mathcal{F}}_n \otimes \bm{\mathcal{F}}_{n'}$ is a $1 \times (2n+1)(2n'+1)$ vector, $\mathrm{C}_{nn'}$ is the $(2n+1)(2n'+1) \times (2n+1)(2n'+1)$ Clebsch-Gordan matrix containing the Clebsch-Gordan coefficients, whose definition and main properties are recalled in Appendix~\ref{app:CG}, and $\widetilde{\bm{\mathcal{F}}_\ell} = [0, \ldots , 0, \bm{\mathcal{F}}_\ell, \, 0, \ldots, 0]$ is a zero-padded vector of size $1 \times (2n+1)(2n'+1)$ containing the spherical Fourier vector of degree $\ell$ with $|n-n'|\leq \ell \leq n+n'$. The zero-padding is performed to match the size of $\mathrm{C}_{nn'}$ and to select only the rows corresponding to the $\ell^{\textrm{th}}$ degree.
The spectrum and the bispectrum are known to be RI. We recall this fundamental result, which will be crucial thereafter. \\
\vspace{-0.3cm}
\begin{proposition}\label{prop:ri_spec_bisp_complete}
The spectrum and the bispectrum of spherical functions are RI. This means that, for any rotation $\mathrm{R}_0 \in SO(3)$ and any function $f \in L_2(\mathbb{S}^2)$, we have, for $f' = f( \mathrm{R}_0 \cdot )$,
\begin{equation}
s_n (f) = s_n (f') \quad \text{and} \quad b^{\ell}_{n,n'} (f) = b^{\ell}_{n,n'} (f')
\end{equation}
for any $n,n' \geq 0$ and any $|n-n'| \leq \ell \leq n+n'$. \\
\end{proposition}
\vspace{-0.3cm}
The result that the bispectrum of a spherical function is RI is given in \cite[Theorem 4.1]{kakarala2010}.
Besides, we introduce the following notations:
\begin{equation}
\mathcal{S}\{\bm{\mathcal{F}}_n\} = s_n(f)
\end{equation} and
\begin{equation}
\mathcal{B}\{\bm{\mathcal{F}}_n, \bm{\mathcal{F}}_{n'}, \bm{\mathcal{F}}_{\ell}\} = b^\ell_{n, n'}(f).
\end{equation}
These notations highlight that the spectrum coefficient $s_n(f)$ only depends on the spherical Fourier vector $\bm{\mathcal{F}}_n$ of degree $n$, and that the bispectrum coefficient $b_{n,n'}^{\ell}(f)$ only depends on $\bm{\mathcal{F}}_n$, $\bm{\mathcal{F}}_{n'}$, and $\bm{\mathcal{F}}_\ell$.
Moreover, the rotation invariance of the spectrum and bispectrum can be reformulated as
\begin{equation}
\mathcal{S}\{\bm{\mathcal{F}}_n \mathrm{D}_n(R)\} = \mathcal{S}\{\bm{\mathcal{F}}_n\}
\end{equation}
and
\begin{equation}
\mathcal{B}\{\bm{\mathcal{F}}_n \mathrm{D}_n(R) , \bm{\mathcal{F}}_{n'} \mathrm{D}_{n'}(R), \bm{\mathcal{F}}_{\ell} \mathrm{D}_\ell(R)\} = \mathcal{B}\{\bm{\mathcal{F}}_n, \bm{\mathcal{F}}_{n'}, \bm{\mathcal{F}}_{\ell}\}.
\end{equation}
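These two invariances can be checked numerically. The following sketch (helper names are ours, and normalization conventions for the Clebsch-Gordan matrix may differ from \cite{kakarala2010} by constant factors) implements the spectrum and the bispectrum restricted to the columns of the $\ell^{\textrm{th}}$ block, using SymPy for the Clebsch-Gordan coefficients and Wigner $\mathrm{D}$-matrices:

```python
import numpy as np
from sympy import S
from sympy.physics.quantum.cg import CG
from sympy.physics.quantum.spin import Rotation

def cg_block(n, n2, ell):
    """Degree-ell column block of the Clebsch-Gordan matrix C_{nn'}.
    Rows are indexed by the pairs (m, m'), with m' varying fastest."""
    C = np.zeros(((2 * n + 1) * (2 * n2 + 1), 2 * ell + 1))
    for i, m in enumerate(range(-n, n + 1)):
        for j, m2 in enumerate(range(-n2, n2 + 1)):
            M = m + m2  # Clebsch-Gordan coefficients vanish unless M = m + m'
            if -ell <= M <= ell:
                C[i * (2 * n2 + 1) + j, M + ell] = float(
                    CG(S(n), S(m), S(n2), S(m2), S(ell), S(M)).doit())
    return C

def spectrum(Fn):
    """s_n(f) = F_n F_n^dag / (2n + 1)."""
    return float(np.real(Fn @ np.conj(Fn))) / len(Fn)

def bispectrum(Fn, Fn2, Fell, n, n2, ell):
    """b^ell_{n,n'}(f) = [F_n kron F_n'] C_{nn'} (zero-padded F_ell)^dag."""
    return np.kron(Fn, Fn2) @ cg_block(n, n2, ell) @ np.conj(Fell)

def wigner_d(j, alpha, beta, gamma):
    """Numeric Wigner D-matrix of degree j for zyz Euler angles."""
    return np.array([[complex(Rotation.D(j, m, mp, alpha, beta, gamma).doit())
                      for mp in range(-j, j + 1)]
                     for m in range(-j, j + 1)])
```

Applying the same rotation to random Fourier vectors through $\bm{\mathcal{F}}_n \mapsto \bm{\mathcal{F}}_n \mathrm{D}_n(\mathrm{R})$ leaves both quantities unchanged, which is the content of Proposition~\ref{prop:ri_spec_bisp_complete}.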
\vspace{-0.5cm}
\subsection{Advantages of the Bispectrum over the Spectrum}\label{sec:bisp_vs_sp_theory}
Despite the simplicity of computing the spherical spectrum, it can be beneficial to use the more complete bispectrum, for which we provide two arguments.
\subsubsection{Inter-Degree Rotations} The spectrum does not take into account the inter-degree rotations. For instance, let us build a function $f'$ from the SH expansion $\bm{\mathcal{F}} = (\bm{\mathcal{F}}_0, \bm{\mathcal{F}}_1, \cdots)$ of the function $f$ as follows: for each degree $n$, we apply a different Wigner $\mathrm{D}$-matrix $\mathrm{D}_n(\mathrm{R}_n)$ with at least one rotation matrix $\mathrm{R}_n$ different from the others. The corresponding SH expansion $\bm{\mathcal{F}}'= (\bm{\mathcal{F}}_0\mathrm{D}_0(\mathrm{R}_0), \bm{\mathcal{F}}_1\mathrm{D}_1(\mathrm{R_1}), \cdots)$ will have the same spectrum since the Wigner $\mathrm{D}$-matrices are unitary (\emph{i.e.}, they do not impact the norm of $\bm{\mathcal{F}}_n$).
\subsubsection{Intra-Degree Variations} Another aspect to which the spectrum is insensitive is the distinction of intra-degree variations. For a fixed $n_0\geq1$, the functions $f=Y_{n_0}^m$ have the same spectrum $s_{n}(f) = \frac{\delta[n-n_0]}{2n_0+1}$ but are not rotations of each other in general (see Fig. \ref{fig:sph_family}).
On the contrary, the bispectrum does not suffer from these limitations (see Section \ref{sec:toyExperiment}). Furthermore, the spectral information is contained in the bispectrum. This can be easily seen as:
\begin{equation}
b^n_{0,n}(f)= \bm{\mathcal{F}}_0 \bm{\mathcal{F}}_n \bm{\mathcal{F}}_n^\dag = (2n+1) F_0^0 s_n(f).
\end{equation}
This illustrates that, given a non-zero mean $\bm{\mathcal{F}}_0 = F_0^0 \in \mathbb{R}$, we can retrieve the spectral information from the bispectrum. This can appear as a restriction for the bispectrum. However, in practice, it is possible to add a constant to the signal ensuring that $F_0^0$ is non-zero.
The aforementioned properties make the bispectrum a more faithful descriptor and a good substitute for the spectral decomposition.
\vspace{-0.3cm}
\subsection{LRI on the Solid Sphere $\mathbb{R}^3$}\label{sec:lri_solid_sphere}
The previous sections introduced the theoretical tools to build RI descriptors for functions defined on the sphere. In this work, we are interested in 3D images; therefore, we use the spherical spectrum and bispectrum in combination with solid SHs. Solid SHs are SHs multiplied by a radial profile, which extends them to a 3D volume.
We introduce the following notation for solid SHs evaluated on the Cartesian grid:
\begin{equation}
\kappa_n^m(\bm{x}) = \kappa_n^m (\rho,\theta,\phi) = h_n(\rho) Y_n^m(\theta, \phi),
\end{equation}
where $h_{n}$ is a compactly supported radial profile that is shared among the SHs $Y_n^m$ with the same degree $n$. In the final network, the radial profiles $h_n$ are learned from the data.
The image is convolved with the solid SHs and by regrouping the resulting feature maps for each degree, we obtain the spherical Fourier feature maps\footnote{The convolution $(I*\kappa_n^{m})(\bm{x})$ with all the $\kappa_n^m$ is equivalent to a local projection of the image around the position $\bm{x}$ to a function defined on the sphere followed by a projection onto the SHs basis. For that reason, we use the same notation as for the spherical Fourier vector of degree $n$. We distinguish the spherical Fourier feature maps by the evaluation over a position $\bm{x}$.}:
\begin{equation}
\label{eq:sh_conv}
\bm{\mathcal{F}}_n(\bm{x}) = [(I*\kappa_n^{m})(\bm{x})]_{m=-n}^{m=n}.
\end{equation}
In other words, the $m^{\textrm{th}}$ component of $\bm{\mathcal{F}}_n(\bm{x})$ is $\langle I ( \bm{x} - \cdot ) , h_{n} Y_{n}^m \rangle$, and measures the correlation between $I$ centered at $\bm{x}$ and the solid SH $\kappa_n^m = h_{n} Y_{n}^m$.
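A discretized solid SH kernel can be obtained by direct evaluation on the grid, as in the following sketch (illustrative; the triangular radial profile and the helper names are ours, and SciPy's angle naming is swapped with respect to ours):

```python
import numpy as np

try:  # SciPy >= 1.15 exposes sph_harm_y(n, m, polar, azimuthal)
    from scipy.special import sph_harm_y

    def Y(m, n, theta, phi):
        return sph_harm_y(n, m, theta, phi)
except ImportError:  # older SciPy: sph_harm(m, n, azimuthal, polar)
    from scipy.special import sph_harm

    def Y(m, n, theta, phi):
        return sph_harm(m, n, phi, theta)

def solid_sh_kernel(n, m, size, radial):
    """kappa_n^m(x) = h_n(rho) Y_n^m(theta, phi) sampled on a size^3 Cartesian grid."""
    ax = np.arange(size) - (size - 1) / 2.0          # grid centered at the origin
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    rho = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    theta = np.arccos(np.divide(z, rho, out=np.zeros_like(rho), where=rho > 0))
    phi = np.arctan2(y, x)
    return radial(rho) * Y(m, n, theta, phi)
```

Convolving $I$ with the $2n+1$ kernels of a given degree $n$ (e.g., with \texttt{scipy.ndimage.convolve}) then yields the spherical Fourier feature map $\bm{\mathcal{F}}_n(\bm{x})$ described above.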
Thanks to the Fourier feature maps, we introduce the image operators used in this paper in Definition \ref{def:op}. \\
\begin{definition} \label{def:op}
We define the \emph{SSE image operator} $\mathcal{G}^\text{SSE}_n$ of degree $n \geq 0$ as
\begin{equation}
\label{eq:gsse}
\mathcal{G}^\text{SSE}_n\{I\}(\bm{x}) = \mathcal{S}\{ \bm{\mathcal{F}}_n (\bm{x})\}
\end{equation}
for any $I \in L_2(\mathbb{R}^3)$ and $\bm{x} \in \mathbb{R}^3$.
Similarly, we define the \emph{SSB image operator} $\mathcal{G}^\text{SSB}_{n,n',\ell}$ associated with degrees $n,n' \geq 0$ and $|n-n'| \leq \ell \leq n+n'$ as
\begin{equation}
\label{eq:gssb}
\mathcal{G}^\text{SSB}_{n,n',\ell}\{I\}(\bm{x}) =
\mathcal{B}\{ \bm{\mathcal{F}}_n(\bm{x}), \bm{\mathcal{F}}_{n'}(\bm{x}), \bm{\mathcal{F}}_{\ell}(\bm{x}) \},
\end{equation}
for any $I \in L_2(\mathbb{R}^3)$ and $\bm{x} \in \mathbb{R}^3$. \\
\end{definition}
The SSE image operators were considered in \cite{andrearczyk2020local}, where they were proven to be LRI (Appendix D therein). We recall this result and extend it to SSB image operators in the following proposition, whose proof is given in Appendix \ref{app:LRIproof}. \\
\vspace{-0.3cm}
\begin{proposition} \label{prop:LRI}
The SSE and SSB image operators are globally equivariant to translations and rotations. When the radial profiles $h_n$ are all compactly supported, these operators are therefore LRI in the sense of Section \ref{sec:lri_def}.
\end{proposition}
\vspace{-0.3cm}
\subsection{Implementation of the LRI layers}\label{sec:implementation}
In this section, we report the implementation details of our LRI design.
\subsubsection{Parameterization of the Radial Profiles}
The radial profiles are parameterized as a linear combination of radial functions
\begin{equation} \label{eq:hqn}
h_{q,n}(\rho) = \sum_{j=0}^{J} w_{q,n,j} \psi_j(\rho),
\end{equation}
where the $w_{q,n,j}$ are the trainable parameters of the model.
In \eqref{eq:hqn}, $h_{q,n}$ is the $q^\textrm{th}$ radial profile associated with the degree $n$. The index $q$ controls the number of output streams in the layer.
The index $j=0,\ldots,J$ controls the radial components of the filter. The radial functions are chosen as $\psi_j(\rho)=\text{tri}(\rho-j)$.
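For illustration, the parameterization in \eqref{eq:hqn} amounts to a linear interpolation of the trainable weights across integer radii (a minimal sketch; the function names are ours):

```python
import numpy as np

def tri(x):
    """Triangle function: tri(x) = 1 - |x| for |x| < 1, and 0 otherwise."""
    return np.where(np.abs(x) < 1.0, 1.0 - np.abs(x), 0.0)

def radial_profile(rho, w):
    """h(rho) = sum_j w[j] * tri(rho - j), i.e. Eq. (14) for one (q, n) pair."""
    rho = np.asarray(rho, dtype=float)
    return sum(wj * tri(rho - j) for j, wj in enumerate(w))
```

Since $\psi_j(k)=\delta[k-j]$ for integer radii $k$, the weight $w_{q,n,j}$ is simply the value of the profile at radius $j$, and $h_{q,n}$ interpolates linearly in between; its support is contained in $[0, J+1)$, which ensures the locality of the resulting operator.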
\subsubsection{Number of Feature Maps}
The image is convolved with the kernels $\kappa_{q,n}^m$ to obtain the spherical Fourier feature maps $\{\bm{\mathcal{F}}_{q,n}(\bm{x})\}_{n=0,\ldots,N}^{q=1,\ldots,Q}$. Here, $Q$ is the number of output streams of the layer and $N$ is the maximal degree of the SH decomposition. These feature maps are combined according to Eqs.~(\ref{eq:gsse}) and (\ref{eq:gssb}), resulting in $\mathcal{G}^\text{SSE}_{q,n}\{I\}(\bm{x})$ or $\mathcal{G}^\text{SSB}_{q,n,n',\ell}\{I\}(\bm{x})$, respectively.
In the following, we discuss the number of feature maps generated for only one output stream, thus we drop the index $q$.
In the case of the operator $\mathcal{G}^\text{SSE}_n$, the number of generated feature maps is $N+1$. For the $\mathcal{G}^\text{SSB}_{n,n',\ell}$ operator, the total number of feature maps is $\mathcal{O}(N^3)$. It is actually not necessary to compute all the bispectrum coefficients, some of them being redundant due to the following properties. First, for each $n$, $n'$ and $\ell$, the bispectral components $b_{n,n'}^\ell(f)$ and $b_{n',n}^\ell(f)$ are proportional independently of $f$~\cite[Theorem 4.1]{kakarala2010}. Hence, we choose to compute the components only for $n\leq n'$ and $0\leq n+n'\leq N$. Second, even though the bispectrum is complex-valued, when $f$ is real, $b^\ell_{n,n'}(f)$ is either purely real or purely imaginary when $n+n'+\ell$ is even or odd, respectively~\cite[Theorem 2.2]{kakarala2011viewpoint}. Thus, we can map it to a real-valued scalar. In our design, we take either the real or the imaginary part depending on the values of the indices $n,n',\ell$.
Even with these two properties, the number of feature maps for the $\mathcal{G}^\text{SSB}_{n,n',\ell}$ operator still grows as a polynomial of degree 3 in $N$ (see Table~\ref{tab:comp_num_feature_map} for the first values); nevertheless, the reduction is substantial for low $N$. Moreover, the maximal degree $N$ for the SH expansion cannot be taken arbitrarily large since the kernels are discretized~\cite{andrearczyk2020local}. The upper bound for $N$ is given by $N \leq \frac{\pi c}{4}$, where $c$ is the diameter of the kernel. This condition can be regarded as the Nyquist frequency for the SH expansion. As an example, $N=7$ is the maximal value for a kernel of size $9\times9\times9$.
\begin{table}[h]
\caption{Number of feature maps obtained for the $\mathcal{G}^\text{SSE}_n$ and $\mathcal{G}^\text{SSB}_{n,n',\ell}$ operators as a function of the maximal degree $N$.}
\label{tab:comp_num_feature_map}
\centering
\begin{tabular}{@{}lrrrrrrrr@{}} \toprule
$N$ & 0 & 1 & 2 & 4 & 6 & 8 & 10 & 100 \\ \midrule
Spectrum & 1 & 2 & 3 & 5 & 7 & 9 & 11 & 101 \\
Bispectrum & 1 & 2 & 5 & 14 & 30& 55 & 91 & 48127 \\ \bottomrule
\end{tabular}
\end{table}
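The table entries can be reproduced by enumeration. The sketch below counts the $N+1$ spectrum features and one possible non-redundant selection of bispectrum indices ($n \leq n' \leq \ell \leq n+n'$ with $n+n' \leq N$) that matches the entries above for $N \leq 10$; the exact selection used in an implementation may differ. The Nyquist-like bound on $N$ is also included.

```python
from math import floor, pi

def n_spectrum_features(N):
    """One spectrum coefficient s_n per degree n = 0, ..., N."""
    return N + 1

def n_bispectrum_features(N):
    """Count the triplets (n, n', ell) with n <= n' <= ell <= n + n'
    and n + n' <= N: one non-redundant selection consistent with the
    table entries for N <= 10."""
    return sum(1
               for n in range(N + 1)
               for n2 in range(n, N - n + 1)
               for ell in range(n2, n + n2 + 1))

def max_degree(c):
    """Upper bound N <= pi * c / 4 for a kernel of diameter c voxels."""
    return floor(pi * c / 4.0)
```

For a $9\times9\times9$ kernel, \texttt{max\_degree(9)} returns 7, in agreement with the text.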
\subsubsection{Discretization}
The kernels $\kappa_{q,n}^m = h_{q,n} Y_n^m$ are discretized by evaluating them on a Cartesian grid. For more details about the discretization, see \cite[Section 2.4]{andrearczyk2019exploring}.
\vspace{-0.3cm}
\section{Experiments and Results}\label{sec:experiments_and_results}
Section~\ref{sec:toyExperiment} illustrates the differences between the spherical spectrum and bispectrum with two toy experiments designed to fool the spectrum. Then, we compare the classification performance of three different CNNs (SSE, SSB and standard) detailed in Section~\ref{sec:network_architecture} on the two datasets described in Section~\ref{sec:datasets}, in terms of accuracy (Section~\ref{sec:exp_perf}) and generalization power (Section~\ref{sec:learningCurves}), \emph{i.e.}, the number of training samples required to reach the final accuracy.
\vspace{-0.3cm}
\subsection{Comparing Local Spectrum to Bispectrum Representations}\label{sec:toyExperiment}
Two toy experiments are conducted to highlight the differences in terms of the representation power of the spherical spectrum and bispectrum. The first experiment is designed to show that the spectrum is unable to discriminate between patterns that only differ in terms of rotations between degrees. The second experiment illustrates that the spectrum cannot capture differences within the same degree. These two experiments are done in the 3D image domain to show the applicability of the spherical spectrum and bispectrum of the solid SHs and to be as close as possible to the final application.
The images are obtained by evaluating $h(\rho) \sum_{n=0}^N \sum_{m=-n}^{m=n}F_n^m Y_n^m(\theta, \phi)$ on a 3D Cartesian grid of $32\times32\times32$ with $h$ defined as an isotropic Simoncelli wavelet profile \cite{portilla2000parametric}. This first experiment investigates the capability of the spectrum and bispectrum to discriminate between functions with distinct inter-degree rotations. Representatives $f$ and $f'$ of the two classes are obtained by summing the SHs described by their respective spherical Fourier transforms $\bm{\mathcal{F}}$ and $\bm{\mathcal{F}}'$. $\bm{\mathcal{F}}$ is composed of $\bm{\mathcal{F}}_1 = [1 , \mathrm{j},1]$, $\bm{\mathcal{F}}_2 = [1, -1 ,1,1,1]$, $\bm{\mathcal{F}}_3 = [1 ,-1 ,1, \mathrm{j},1, 1, 1]$ and $\bm{\mathcal{F}}_n = \boldsymbol{0}$ for any $n\neq 1,2,3$. The coefficients are chosen to ensure that the images are real and that the spherical spectrum satisfies $s_n(f)=1$ for $n=1,2,3$. The spherical decomposition $\bm{\mathcal{F}}'$ of the second class is computed as $\bm{\mathcal{F}}'_1=\bm{\mathcal{F}}_1 \mathrm{D}_1(\mathrm{R}_1)$, $\bm{\mathcal{F}}'_2=\bm{\mathcal{F}}_2 \mathrm{D}_2(\mathrm{R}_2)$ and $\bm{\mathcal{F}}'_3=\bm{\mathcal{F}}_3 \mathrm{D}_3(\mathrm{R}_3)$, where $\mathrm{R}_1$, $\mathrm{R}_2$ and $\mathrm{R}_3$ are distinct 3D rotations. This allows combining the different degrees with different rotations, resulting in a function $f'$ with spectrum $s_{n}(f') = s_n(f)$ for all $n$ but $f'\neq f$. Moreover, for each class, 50 distinct instances are created by adding Gaussian noise and randomly rotating the images. The random rotations are drawn from a uniform distribution over the 3D rotations, and we use the associated Wigner $\mathrm{D}$-matrices to rotate the instances. This time, the same rotation is applied to all degrees. The bispectrum and spectrum are calculated using only the responses to the spherical filters at the origin voxel of the images and the results are presented in Fig. \ref{fig:results_scalar1}. Note that only a subset of distinctive coefficients of the bispectrum is shown. The results indicate that the spectrum cannot detect inter-degree rotations, whereas the bispectrum can.
\begin{figure}[h]
\subfloat[Spectrum]{
\input{figures/tikz/spectrum_toy1.tikz}
}
\subfloat[Bispectrum]{
\input{figures/tikz/bispectrum_toy1.tikz}
}
\caption{Experiment with distinct inter-degree rotations. Spherical spectral (left) and bispectral (right) decomposition of the two classes. The blue bars represent the decomposition for the first class $f$
and the orange bars for the second class $f'$ ($\bm{\mathcal{F}}'_i=\bm{\mathcal{F}}_i \mathrm{D}_i(R_i)$, $i=1,2,3$). Note that only a subset of the bispectral components is displayed. It can be observed that the spectrum cannot distinguish between $f$ and $f'$, and that the bispectrum can.
}
\label{fig:results_scalar1}
\end{figure}
In the next experiment, instead of applying a distinct rotation to each degree, we choose orders that differ within the same degree. For the first class $f$, we use only the order $m=0$ and for the second class $f'$ the orders $m=n,-n$. This choice is motivated by their differences in shape as represented in Fig. \ref{fig:sph_family}. The spherical Fourier transform $\bm{\mathcal{F}}$ of the first class is chosen to be $\bm{\mathcal{F}}_1 = [0 ,\sqrt{3} \mathrm{j} ,0]$, $\bm{\mathcal{F}}_2 = [0 ,0,\sqrt{5},0,0]$, $\bm{\mathcal{F}}_3 = [0,0,0,\sqrt{7} \mathrm{j},0,0,0]$ and $\bm{\mathcal{F}}_n = \boldsymbol{0}$ for any $n\neq 1,2,3$. The spherical decomposition $\bm{\mathcal{F}}'$ of the second class is $\bm{\mathcal{F}}'_1 = [\sqrt{3/2},0,\sqrt{3/2}]$, $\bm{\mathcal{F}}'_2 = [ \sqrt{5/2},0,0,0,\sqrt{5/2}]$, $\bm{\mathcal{F}}'_3 = [\sqrt{7/2},0,0,0,0,0,\sqrt{7/2}]$. The coefficients are chosen to obtain a spherical spectrum of 1 for $n=1,2,3$ and to generate real images. The results in Fig. \ref{fig:results_scalar2} show that the bispectrum can discriminate between the two classes even though they have the same spectrum.
\begin{figure}[h]
\input{figures/tikz/toy2.tikz}
\caption{Experiment with intra-degree variations. Spherical spectral (left) and bispectral (right) decomposition of the two classes. The blue bars represent the decomposition for the first class $f$
and the orange bars for the second class $f'$.
Note that only a subset of the bispectral components is displayed. It can be observed that the spectrum cannot distinguish between $f$ and $f'$, and that the bispectrum can.}
\label{fig:results_scalar2}
\end{figure}
\vspace{-0.3cm}
\subsection{Datasets}\label{sec:datasets}
To evaluate the performance of the proposed LRI operators, we use both a synthetic and a medical dataset. The synthetic dataset constitutes a controlled experimental setup and contains two classes of 500 volumes each, of size $32\times32\times32$. They are generated by placing two types of patterns with a size of $7\times7\times7$, namely a binary segment and a 2D cross with the same norm, at random 3D orientations and random locations with possible overlap. The number of patterns per volume is randomly set to $\lfloor d \cdot(\frac{s_v}{s_p})^3\rfloor$, where $s_v$ and $s_p$ are the sizes of the volume and of the pattern, respectively, and the density $d$ is drawn from a uniform distribution in the range $[0.1,0.5]$. The two classes differ in the proportion of the two patterns, \textit{i.e.}, 30\% segments with 70\% crosses for the first class and vice versa for the second class. 800 volumes are used for training and the remaining 200 for testing. Despite the simplicity of this dataset, some variability is introduced by the overlapping patterns and the linear interpolation of the 3D rotations.
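A simplified generator for such volumes can be sketched as follows (illustrative only: the helper name is ours, composing three in-plane rotations with uniform angles does not sample 3D orientations uniformly, and the density handling is schematic):

```python
import numpy as np
from scipy.ndimage import rotate

def make_volume(pattern, s_v=32, density=0.3, rng=None):
    """Scatter randomly rotated copies of `pattern` in an s_v^3 volume
    (overlap allowed), mimicking the synthetic dataset construction."""
    rng = rng or np.random.default_rng()
    s_p = pattern.shape[0]
    vol = np.zeros((s_v, s_v, s_v))
    n_patterns = int(density * (s_v / s_p) ** 3)
    for _ in range(n_patterns):
        rot = pattern.astype(float)
        for axes in ((0, 1), (0, 2), (1, 2)):  # three successive in-plane rotations
            rot = rotate(rot, rng.uniform(0.0, 360.0), axes=axes,
                         reshape=False, order=1)  # linear interpolation
        x, y, z = rng.integers(0, s_v - s_p + 1, size=3)
        vol[x:x + s_p, y:y + s_p, z:z + s_p] += rot
    return vol
```

With a $7\times7\times7$ binary segment or cross as \texttt{pattern} and $d$ drawn in $[0.1, 0.5]$, this reproduces the spirit of the construction described above.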
The second dataset is a subset of the American National Lung Screening Trial (NLST) that was annotated by radiologists at the University Hospitals of Geneva (HUG)~\cite{andrearczyk2020local}. The dataset includes 485 pulmonary nodules from distinct patients in CT, among which 244 benign and 241 malignant. We pad or crop the input volumes (originally with sizes ranging from $16\times16\times16$ to $128\times128\times128$) to the size $64\times64\times64$. We use balanced training and test splits with 392 and 93 volumes respectively. Examples of 2D slices of the lung nodules are illustrated in Fig. \ref{fig:ex_nlst}. The Hounsfield units are clipped in the range $[-1000,400]$, then normalized with zero mean and unit variance (using the training mean and variance).
\begin{figure}[h]
\subfloat[Benign]{
\includegraphics[width=0.24\textwidth]{figures/nlst_benign.png}
}
\subfloat[Malignant]{
\includegraphics[width=0.24\textwidth]{figures/nlst_malignant.png}
}
\caption{Slices drawn from the NLST dataset showing a benign pulmonary nodule
and a malignant one.}
\label{fig:ex_nlst}
\end{figure}
\vspace{-0.3cm}
\subsection{Network Architecture}\label{sec:network_architecture}
This work uses the network architecture proposed in \cite{andrearczyk2019exploring}. The first layer is the LRI layer implemented as described in Section \ref{sec:implementation}. The obtained responses are aggregated using spatial global average pooling, similarly to \cite{AnW2016}. This pooling aggregates the LRI operator responses into a single scalar per feature map and is followed by one Fully Connected (FC) layer. For the nodule classification experiment, we average the responses inside the nodule masks instead of across the entire feature maps to remove the effect of the nodule size, allowing the network to focus on the textural content of the nodule. The final softmax FC layer is connected directly with a cross-entropy loss. Standard Rectified Linear Unit (ReLU) activations are used. The two types of LRI networks are referred to as SSE-CNN and SSB-CNN when the LRI layer uses the $\mathcal{G}^\text{SSE}$ or $\mathcal{G}^\text{SSB}$ operator, respectively.
The networks are trained using an Adam optimizer with $\beta_1=0.99$ and $\beta_2=0.9999$ and a batch size of 8. The remaining parameters are task-specific: kernel size $7\times7\times7$, stride 1, 2 filters, and 50,000 iterations for the synthetic experiment; kernel size $9\times9\times9$, stride 2, 4 filters, and 10,000 iterations for the nodule classification experiment.
The initial values of the trainable weights in \eqref{eq:hqn} are drawn independently from a Gaussian distribution as $w_{q,n,j} \sim \mathcal{N}(0,\,1)$ and the biases are initialized to zero. This initialization is inspired by \cite{he2015delving,weiler2017learning} in order to avoid vanishing and exploding activations and gradients.
We compare the proposed CNNs to a network with the same architecture but with a standard 3D convolutional layer and varying numbers of filters, referred to as Z3-CNN.
\vspace{-0.3cm}
\subsection{Classification Performance of the SSB-, SSE- and Z3-CNN}\label{sec:exp_perf}
Here, we evaluate the classification performance of both the SSE-CNN and SSB-CNN on the two datasets described in Section \ref{sec:datasets}. The accuracies of both designs are computed with 10 different initializations for varying maximal degrees $N$.
Confidence Intervals (CI) at $95\%$ and mean accuracies are reported in Fig. \ref{fig:res_perf_synth} and \ref{fig:res_perf_NLST} for the synthetic and NLST datasets respectively. On both datasets, the SSB-CNN outperforms the two other networks.
To exclude the possibility that this performance gain is simply due to a higher number of feature maps, we trained an SSE-CNN on the synthetic dataset with maximal degree $N=2$ and 4 kernels in the first layer instead of 2. This amounts to a total of 12 feature maps after the first layer.
This model achieves an accuracy of $0.9075 \pm 0.006$ and is still significantly outperformed by the SSB-CNN with maximal degree $2$ and 2 kernels, which has 10 feature maps after the first layer and obtains an accuracy of $0.924 \pm 0.008$ (Fig.~\ref{fig:res_perf_synth}).
One important remark is that both LRI networks contain fewer parameters than the Z3-CNN. For instance, in the NLST experiment, the SSB- and SSE-CNN have 330 and 222 parameters, respectively, for a maximal degree $N=4$, against 7322 parameters for the Z3-CNN.
\begin{figure}
\input{figures/tikz/synth_acc.tikz}
\caption{Classification accuracies and numbers of parameters on the synthetic dataset for varying maximal degrees $N$. The error bars represent the CIs at $95\%$. The accuracy of the Z3-CNN with 10 filters is $0.875 \pm 0.011$ with 3462 trainable parameters and is represented by the green dashed lines.}
\label{fig:res_perf_synth}
\end{figure}
\begin{figure}
\input{figures/tikz/nlst_acc.tikz}
\caption{Classification accuracies and numbers of parameters on the NLST dataset for varying maximal degrees $N$.
The error bars represent the CIs at $95\%$.
The accuracy of the Z3-CNN with 10 filters is $0.810 \pm 0.014$
with 7322 trainable parameters and is represented by the green dashed lines.}
\label{fig:res_perf_NLST}
\end{figure}
\vspace{-0.3cm}
\subsection{Learning Curves of the SSB-, SSE- and Z3-CNN}\label{sec:learningCurves}
The SSB- and SSE-CNN are LRI networks and thus require neither additional training examples nor a large number of parameters to learn this property.
In addition, they rely on compressing SH parametric representations.
For these two reasons, we expect them to generalize better with fewer training examples (\emph{i.e.}, a steeper learning curve) than the standard Z3-CNN on
data for which this property is relevant. To test this hypothesis, we compare the classification performance of each method using an increasingly large number of training examples $N_s$. For the synthetic dataset, we use \mbox{$N_s = 16, 32, 64, 128, 200, 300, 400$} and for the nodule classification \mbox{$N_s= 10,30, 64,128,200,300,392$}. For each value of $N_s$, 10 repetitions are made and $N_s$ examples are randomly drawn from the same training fold as the previous experiments (Section~\ref{sec:exp_perf}).
For the SSB-CNN we report the accuracy for $N=2$ on the synthetic dataset and $N=4$ on the NLST dataset.
The accuracy of the SSE-CNN is reported for $N=2$ on the synthetic dataset and $N=1$ for the NLST dataset.
These parameters are chosen according to the previous experiments (Section~\ref{sec:exp_perf}, Fig.~\ref{fig:res_perf_synth} and~\ref{fig:res_perf_NLST}) as they provided the best accuracy.
The experiment is also conducted with the Z3-CNN and the results are reported for both 10 and 144 filters in the convolution layer. The mean accuracy with CIs at $95\%$ of the three models and on the two datasets is reported in Fig.~\ref{fig:res_lc_synth} and~\ref{fig:res_lc_NLST}.
\begin{figure}
\input{figures/tikz/lc_synt.tikz}
\caption{Performances on the synthetic dataset in terms of accuracy for a varying number of training examples. The error bars represent the CIs at $95\%$. The number of filters in the first layer for the SSB- and SSE-CNN is 2.}
\label{fig:res_lc_synth}
\end{figure}
\begin{figure}
\input{figures/tikz/lc_nlst}
\caption{Performances on the NLST dataset in terms of accuracy for a varying number of training examples. The error bars represent the CIs at $95\%$. The number of filters in the first layer for the SSB- and SSE-CNN is 4.}
\label{fig:res_lc_NLST}
\end{figure}
\vspace{-0.3cm}
\section{Discussions}\label{sec:discussions}
\subsection{The Bispectrum is More Discriminative}
The two experiments presented in Section~\ref{sec:toyExperiment} illustrate the types of pattern information that cannot be characterized by spectral components. In these settings, the spectrum is unable to distinguish between classes that differ either by a difference of orientation between degrees (inter-degree rotation) or by intra-degree variations. This is not the case for the bispectral coefficients that allow describing functions in $L_2(\mathbb{S}^2)$ more accurately. As expected, the cost of a more complete representation is a larger number of components. However, it is possible to compute only a subset of the bispectral components depending on the task.
In the CNN implementation of these two invariants (Section~\ref{sec:exp_perf}), we observe that the specific information captured by the SSB improves the classification performance for both datasets: as soon as the maximum degree is greater than one, the SSB-CNN outperforms the SSE-CNN (Section~\ref{sec:exp_perf}, Fig.~\ref{fig:res_perf_synth} and~\ref{fig:res_perf_NLST}).
Besides, both the SSE- and the SSB-CNN outperform the standard Z3-CNN on the synthetic data, which was specifically designed to give an advantage to LRI networks. By contrast, in the nodule classification task (NLST dataset), the Z3-CNN outperforms the SSE-CNN. It seems that the simple design of the SSE-CNN fails to capture the specific signature of malignant pulmonary nodules in these data.
However, once again, the richer invariant representation of the SSB-CNN allows it to outperform even the Z3-CNN with statistical significance when $N=4$, while using approximately 22 times fewer parameters.
\vspace{-0.3cm}
\subsection{Better Generalization of the LRI Models}
The learning curve experiment on the synthetic dataset presented in Section~\ref{sec:learningCurves} shows that both LRI designs outperform the Z3-CNN for any number of training examples.
More notable are the steeper learning curves of the two LRI networks.
Both SSE- and SSB-CNNs seem to require the same number of training examples to reach their final performance level.
For the Z3-CNN, two networks are compared: one with 10 filters and the other with 144 filters, accounting for 7322 and 105,410 trainable parameters, respectively.
Even though the number of parameters is vastly different, the overall shape of the learning curves does not significantly change between the two Z3 networks, pointing out that the relationship between the number of parameters and the number of training examples is not obvious and highly depends on the architecture.
On the NLST dataset, the SSB-CNN outperforms the Z3-CNN when trained with the same number of training examples. However, the steeper learning curve of the former is less pronounced than on the synthetic dataset. We expect the gap between the two learning curves to be wider with deeper architectures, as the difference in the number of parameters will be larger.
Overall, we observe that the proposed SSB-CNN requires fewer training examples than the Z3-CNN, thanks to both the LRI property and the compressing parametric SH kernel representations.
\vspace{-0.3cm}
\section{Conclusion}\label{sec:conclusion}
We showed that, by using the highly discriminative SSB RI descriptor, we are able to implement CNNs that are more accurate than the previously proposed SSE-CNN. Furthermore, we also observed that LRI networks can learn with fewer training examples than the more traditional Z3-CNN, which supports our hypothesis that the latter tends to misspend the parameter budget to learn data invariances and symmetries.
The main limitation of the proposed experimental evaluation is that it relies on shallow networks, which places these approaches at the crossroads between handcrafted methods and deep learning. In future work, the LRI layers will be implemented in deeper architectures to leverage the fewer resources that they require in comparison with a standard convolutional layer.
This is expected to constitute a major contribution to improving 3D data analysis when curated and labelled training data are scarce, which is most often the case in medical image analysis.
The code is available on GitHub\footnote{\url{https://github.com/voreille/ssbcnn}, as of April 2020.}.
\appendices
\vspace{-0.3cm}
\section{Clebsch-Gordan matrices}\label{app:CG}
Let us fix $n_1 , n_2 \geq 0$.
The Clebsch-Gordan matrix $\mathrm{C}_{n_1,n_2}$ is characterized by the fact that it block-diagonalizes the Kronecker product of two Wigner-$\mathrm{D}$ matrices as
\begin{equation} \label{eq:CGandWigner}
\mathrm{D}_{n_1}(\mathrm{R}) \otimes \mathrm{D}_{n_2}(\mathrm{R}) = \mathrm{C}_{n_1, n_2} \left[
\bigoplus_{i=|n_1-n_2|}^{n_1+n_2} \mathrm{D}_i(\mathrm{R})
\right] \mathrm{C}_{n_1, n_2}^\dag
\end{equation}
for any rotation matrix $\mathrm{R} \in SO(3)$.
This means in particular that $\mathrm{C}_{n_1,n_2}$ has $\sum_{n= |n_1 - n_2|}^{n_1+n_2} (2n+1)$ rows and $(2n_1 +1)(2n_2+1)$ columns. These two numbers are actually equal, hence $\mathrm{C}_{n_1,n_2} \in \mathbb{R}^{(2n_1 +1)(2n_2+1)\times (2n_1 +1)(2n_2+1)}$, but the relation \eqref{eq:CGandWigner} also reveals the structure of the matrix, whose coefficients are indexed as $\mathrm{C}_{n_1,n_2}[ (n,m) , (m_1,m_2)]$, with $n \in \{ |n_1 - n_2| , \ldots , (n_1+n_2) \}$, $m_1 \in \{-n_1 ,\ldots , n_1 \}$, and $m_2 \in \{-n_2 ,\ldots , n_2 \}$. In the literature, the Clebsch-Gordan coefficients are often written with bracket notations, that reveal some of their symmetries~\cite{alex2011numerical}. Moreover, the Clebsch-Gordan matrix has many $0$ entries. We indeed have that
\begin{equation*}
\mathrm{C}_{n_1,n_2}[ (n,m) , (m_1,m_2)] =
\begin{cases}
\langle n_1 m_1 n_2 m_2 | n (m_1 + m_2) \rangle & \text{if } m = m_1 + m_2, \\
0 & \text{otherwise},
\end{cases}
\end{equation*}
where $\langle | \rangle$ is the bracket notation used for instance in \cite[Chapter 5.3.1]{chaichian1998symmetries}.
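These properties can be checked numerically. The following sketch (ours, not part of the paper) evaluates Clebsch-Gordan coefficients with sympy, which uses the same Condon-Shortley convention, and verifies the selection rule, one classical table value, and the orthogonality of the matrix $\mathrm{C}_{1,1}$:

```python
import numpy as np
from sympy import S
from sympy.physics.quantum.cg import CG

def cg(j1, m1, j2, m2, j, m):
    """Numeric Clebsch-Gordan coefficient <j1 m1 j2 m2 | j m> via sympy."""
    return float(CG(S(j1), S(m1), S(j2), S(m2), S(j), S(m)).doit())

# Selection rule: the coefficient vanishes unless m = m1 + m2.
assert cg(1, 1, 1, 1, 2, 1) == 0.0
# A classical table value: <1 1 1 0 | 2 1> = 1/sqrt(2).
assert np.isclose(cg(1, 1, 1, 0, 2, 1), 1 / np.sqrt(2))

def cg_matrix(j1, j2):
    """C_{j1 j2} with rows indexed by (j, m) and columns by (m1, m2)."""
    rows = [(j, m) for j in range(abs(j1 - j2), j1 + j2 + 1)
                   for m in range(-j, j + 1)]
    cols = [(m1, m2) for m1 in range(-j1, j1 + 1)
                     for m2 in range(-j2, j2 + 1)]
    return np.array([[cg(j1, m1, j2, m2, j, m) for (m1, m2) in cols]
                     for (j, m) in rows])

C = cg_matrix(1, 1)
# C is square, as stated above, and real orthogonal: C C^T = Id.
assert C.shape == (9, 9)
assert np.allclose(C @ C.T, np.eye(9))
```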
\vspace{-0.3cm}
\section{Proof of Proposition \ref{prop:LRI}}\label{app:LRIproof}
The equivariance to translations is simpler and similar to the equivariance to rotations, therefore we skip it (it simply uses that $(I(\cdot - \bm{x}_0) * \kappa_n^m) (\bm{x}) = (I*\kappa_n^m)(\bm{x} - \bm{x}_0)$).
Let $\bm{\mathcal{F}}_n(\bm{x})$ and $\bm{\mathcal{F}}'_n(\bm{x})$ be the Fourier feature maps of $I$ and $I(\mathrm{R}_0 \cdot)$ respectively, with $\mathrm{R}_0 \in SO(3)$. According to \eqref{eq:steerability_ynm} applied to $\mathrm{R} = \mathrm{R}^{-1}_0$, we have that
\begin{equation} \label{eq:Ikapparota}
\kappa_n^m (\mathrm{R}^{-1}_0 \cdot ) = \sum_{m' =-n}^n [\mathrm{D}_n(\mathrm{R}_0^{-1})]_{m',m} \kappa_n^{m'}.
\end{equation}
Moreover, we have that $(I(\mathrm{R}_0 \cdot) * \kappa_n^m ) (\bm{x}) = ( I * \kappa_n^m (\mathrm{R}^{-1}_0 \cdot) ) (\mathrm{R}_0 \bm{x})$. Together with \eqref{eq:Ikapparota}, this implies that
\begin{equation}
\bm{\mathcal{F}}'_n(\bm{x}) = \left( ( I(\mathrm{R}_0 \cdot) * \kappa_n^m ) (\bm{x}) \right)_m = \bm{\mathcal{F}}_n(\mathrm{R}_0 \bm{x}) \mathrm{D}_n (\mathrm{R}_0^{-1}).
\end{equation}
This implies that
\begin{align*}
\mathcal{G}_{n,n',\ell}^{\mathrm{SSB}} &\{ I ( \mathrm{R}_0 \cdot ) \} (\bm{x}) =
\mathcal{B}\{ \bm{\mathcal{F}}'_n(\bm{x}), \bm{\mathcal{F}}'_{n'}(\bm{x}), \bm{\mathcal{F}}'_{\ell}(\bm{x}) \} \\
& = \mathcal{B}\{ \bm{\mathcal{F}}_n(\mathrm{R}_0 \bm{x}) \mathrm{D}_n (\mathrm{R}_0^{-1}), \ldots \\
& \qquad \bm{\mathcal{F}}_{n'}(\mathrm{R}_0 \bm{x}) \mathrm{D}_{n'} (\mathrm{R}_0^{-1}), \bm{\mathcal{F}}_{\ell}(\mathrm{R}_0 \bm{x}) \mathrm{D}_{\ell} (\mathrm{R}_0^{-1}) \} \\
&= \mathcal{B}\{ \bm{\mathcal{F}}_n(\mathrm{R}_0 \bm{x}) , \bm{\mathcal{F}}_{n'}(\mathrm{R}_0 \bm{x}) , \bm{\mathcal{F}}_{\ell}(\mathrm{R}_0 \bm{x}) \} \\
&= \mathcal{G}_{n,n',\ell}^{\mathrm{SSB}} \{ I \} (\mathrm{R}_0 \bm{x}),
\end{align*}
where we used the invariance of the bispectrum for the third equality. This demonstrates the equivariance of the operator $ \mathcal{G}_{n,n',\ell}^{\mathrm{SSB}}$ with respect to rotations. Finally, the locality simply follows from the fact that the convolution $(I*\kappa_{n}^m) (\bm{x})$ depends only on the values $I (\bm{x} - \bm{y})$ with $\bm{y}$ in the support of $\kappa_n^m$, which is bounded as soon as $h_n$ is compactly supported, as we assumed.
\vspace{-0.3cm}
\section*{Acknowledgment}
The authors are grateful to Michael Unser, who suggested that they consider the bispectrum as a tool to capture rotation-invariant features of 3D signals.
This work was supported by the Swiss National Science Foundation (SNSF grants 205320\_179069 and P2ELP2\_181759) and the Swiss Personalized Health Network (SPHN IMAGINE and QA4IQI projects), as well as a hardware grant from NVIDIA.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{C}{onvolutional} Neural Networks (CNNs) have recently gained a lot of attention as they outperform classical handcrafted methods in almost every computer vision task where data scarcity is not an issue. In biomedical image analysis, data are abundant. However, obtaining high-quality and consistently labeled images is expensive, as data curation and annotation require hours of work from well-trained experts \cite{greenspan2016guest}. Thus, the effective number of training examples is often low. This limitation is usually handled by transfer learning and data augmentation. Transfer learning, the process of fine-tuning a network trained on another task to the task at hand, is very common for 2D images. For 3D images, however, the lack of very large datasets hinders the availability of pre-trained models. Another approach, data augmentation, refers to the application of geometric transforms and perturbations to the training examples to make the CNN invariant to these distortions~\cite{shorten2019survey}. The cost of data augmentation is a substantial increase in the data size, leading to a slower convergence rate and a potential waste of trainable parameters.
A lot of recent research has focused on how to build CNNs that are invariant to these transforms by imposing constraints on the architecture of the network~\cite{CoW2016b, weiler2017learning, andrearczyk2020local, eickenberg2017solid}. The motivation of these approaches is to obviate the need to learn these invariances from the data and their transformation. As a result, an effective reduction of the number of trainable parameters is achieved and, potentially, a reduction of the number of training examples needed for the generalization of the network.
This work focuses on 3D biomedical texture analysis and on the design of CNNs that are invariant to local 3D rotations, \textit{i.e.}, rotations of individual local patterns. This invariance is obtained using continuously defined Rotation Invariant (RI) descriptors of functions on the sphere. By relying on a continuous-domain formulation, we avoid the difficulties associated with rotations of discretized images~\cite{vivaldi2006arithmetic, ke2014rotation}. Neighborhoods defined by learned radial profiles are used to locally project the image on the solid sphere. These descriptors are used together with a convolution operation to obtain Locally Rotation Invariant~(LRI)\footnote{LRI is used for Locally Rotation Invariant and Local Rotation Invariance interchangeably} operators in the 3D image domain as proposed in \cite{andrearczyk2019exploring}. These types of operators are relevant in biomedical texture analysis where discriminative patterns appear at random positions and orientations. The RI descriptors used in \cite{andrearczyk2019exploring, andrearczyk2019solid, andrearczyk2020local, eickenberg2017solid, weiler20183d} and in the present work are derived from the Spherical Harmonics (SH) decomposition of the kernels. The SHs are the generalization of the Circular Harmonics (CH) to the 2D sphere~\cite{gallier2009}. These two families of functions are intimately linked with Fourier theory, and both decompositions correspond to the Fourier transform of the function defined on the sphere $\mathbb{S}^2$ for the SHs and on the circle $\mathbb{S}^1$ for the CHs.
To better apprehend the two invariants considered in this work, namely the spectrum and the bispectrum, it is useful to consider them on the circle. The CH expansion of a function $f \in L_2(\mathbb{S}^1)$ for a degree $n$ is computed as $\widehat{f}_n = \frac{1}{2\pi} \int_0^{2\pi} f(\theta) e^{-\mathrm{j} \theta n} \mathrm{d} \theta$, which is the Fourier series for $2\pi$-periodic functions. For $m,n \in \mathbb{Z}$, the spectrum of the CH expansion is calculated as $s_n(f) = \widehat{f}_n \widehat{f}_n^* = |\widehat{f}_n|^2$ and the bispectrum as $b_{n,m}(f) = \widehat{f}_n \widehat{f}_m \widehat{f}_{n+m}^*$. One readily verifies that for a function $g(\theta) = f(\theta -\theta_0)$ we have for any $m,n \in \mathbb{Z}$ the equalities $s_n(f)=s_n(g)$ and $b_{n,m}(f) = b_{n,m}(g)$, since $\widehat{g}_n = \widehat{f}_n e^{-\mathrm{j}\theta_0 n}$. This means that the spectrum and bispectrum are RI, since a shift $\theta_0$ in the parameter of $f$ is equivalent to a rotation on the circle. The spectrum is the most simple, yet informative, Fourier-based RI quantity. However, it discards the phase between harmonics which contains all the information on how the sinusoids from the expansion add up to form edges and ridges \cite[Chapter 10]{smith1997scientist}. The bispectrum, on the contrary, conserves the phase information~\cite{kakarala2010} and constitutes a more specific pattern descriptor.
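The circular case is easy to verify numerically. The following sketch (ours; the sampled test functions are illustrative) computes the CH coefficients with an FFT and checks that the spectrum and bispectrum are unchanged by a rotation $\theta_0$, while only the bispectrum detects a phase shift applied to a single harmonic:

```python
import numpy as np

# Uniform samples of the circle; harmonics up to degree 4, so N = 256 is exact.
N = 256
theta = 2 * np.pi * np.arange(N) / N

def func(t):  # a band-limited test function on the circle
    return np.cos(t) + 0.5 * np.sin(3 * t) + 0.2 * np.cos(4 * t)

theta0 = 0.7                      # rotation on the circle
fh = np.fft.fft(func(theta)) / N  # CH coefficients hat f_n of f
gh = np.fft.fft(func(theta - theta0)) / N  # ... of the rotated copy

def spectrum(ch, n):
    return np.abs(ch[n]) ** 2

def bispectrum(ch, n, m):
    return ch[n] * ch[m] * np.conj(ch[n + m])

# Both quantities are invariant under the rotation theta0.
assert np.allclose(spectrum(fh, 3), spectrum(gh, 3))
assert np.allclose(bispectrum(fh, 1, 3), bispectrum(gh, 1, 3))

# The spectrum discards the relative phases between harmonics; the bispectrum
# keeps them: shifting the phase of a single harmonic is not a rotation.
hh = np.fft.fft(np.cos(theta) + np.cos(2 * theta) + np.cos(3 * theta)) / N
h2h = np.fft.fft(np.cos(theta) + np.cos(2 * theta) + np.cos(3 * theta + 1.0)) / N
assert np.allclose(spectrum(hh, 3), spectrum(h2h, 3))
assert not np.allclose(bispectrum(hh, 1, 2), bispectrum(h2h, 1, 2))
```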
The main contributions of this paper are the introduction of a novel image operator based on the Solid Spherical Bispectrum (SSB) that is LRI and a corresponding CNN layer, resulting in a locally rotation invariant CNN. This work builds upon~\cite{andrearczyk2019solid}, where a Solid Spherical Energy (SSE) layer was proposed.
The radial profiles used to locally project the image on the solid sphere as well as the relative importance of the bispectrum coefficients can be learned end to end with backpropagation.
We experimentally investigate the relevance of the proposed SSB layer for biomedical texture classification. Finally, we study the ability of the SSB-CNN to learn with small amounts of data and compare with a classical CNN.
This manuscript is organized as follows. In Section~\ref{sec:related_work}, we review the main related works. Sections~\ref{sec:notations} to~\ref{sec:SH} describe the nomenclature and the mathematical tools used in this work. The definitions of the spectrum and bispectrum for functions defined on the sphere are reported in Section~\ref{sec:RIsphere} and are drawn from the work of Kakarala and Mao~\cite{kakarala2010}. We recall the theoretical benefits of the bispectrum over the spectrum in Section~\ref{sec:bisp_vs_sp_theory}. In Section~\ref{sec:lri_solid_sphere}, we define the SSE and SSB image operators and state that they are LRI. In Section \ref{sec:implementation}, we discuss the implementation details to integrate these image operators into a convolutional layer, referred to as the SSE or SSB layer.
Sections~\ref{sec:experiments_and_results} and~\ref{sec:discussions}
detail and discuss the experimental evaluation of the proposed approach.
Conclusions and perspectives are provided in Section~\ref{sec:conclusion}.
\vspace{-0.3cm}
\section{Related Work}\label{sec:related_work}
\subsection{Rotation Invariant Image Analysis}
Combining LRI and directional sensitivity is not straightforward, and the two are often antagonistic in simple designs~\cite{depeursinge2018rotation,andrearczyk2020local}. Several methods exist to combine both properties. Ojala \emph{et al.}~\cite{OPM2002} proposed the Local Binary Patterns (LBP), which compare the values of pixels within a circular neighborhood to the central pixel. Pixels of the neighborhood are thresholded based on the central pixel to generate a binary code. LRI is achieved by ordering the binary code to obtain the smallest binary number.
Several LRI filtering approaches were proposed.
Varma and Zisserman~\cite{VaZ2005} used a filter-bank including the same filters at different orientations, where LRI is achieved by max pooling over the orientations.
Instead of explicitly computing responses (\emph{i.e.} convolving) to oriented filters, steerable filters can be used to improve efficiency~\cite{freeman1991design,Unser2013steerable}.
The work of Perona~\cite{perona1992steerable} shows the use of steerable filters for LRI edges and junctions analysis.
Dicente \emph{et al.}~\cite{DicenteCid2017} used a filter-bank composed of steerable Riesz wavelets. LRI is obtained by locally aligning the filters to the direction maximizing the gradient of the image. Data-driven steerable filters were used in~\cite{fageot2018principled} as LRI detectors of a given template within an image. Steerable Wavelet Machines (SWMs) were proposed in~\cite{DPW2017}, where task-specific profiles of steerable wavelets are learned using support vector machines.
Other approaches have been described to obtain invariants without explicitly rotating the descriptors. Such methods rely on moments~\cite{flusser2009moments}
or invariants built from the SH decomposition \cite{kakarala2012bispectrum}. Kakarala and Mao introduced the bispectrum of the SH decomposition in~\cite{kakarala2010} and they demonstrated the superiority of the bispectrum over the spectrum for 3D shape discrimination. In~\cite{kakarala2012bispectrum}, Kakarala showed that the bispectrum has better properties and contains more information than the spectrum, also proving its completeness for functions defined on compact groups. More recently, an extension of the spectral and bispectral invariants was used by Zucchelli \emph{et al.}~\cite{zucchelli2020computational} for the analysis of diffusion Magnetic Resonance Imaging data.
In~\cite{depeursinge2018rotation,eickenberg2017solid}, the authors used the spectrum of the SH expansion to compute LRI operators. Their work shares similarities with the method exposed here. However, our approach is more data-driven since we learn the radial profiles, whereas they rely on handcrafted ones.
\vspace{-0.3cm}
\subsection{Rotation Equivariance in CNNs}
Recently, several research contributions focused on the explicit encoding of rotation equivariance into CNNs. One group of methods relies on the extension of the classic convolution on the group of translations to groups of symmetries including rotations and reflections. A detailed description of the generalization of the convolution to compact groups is given in~\cite{kondor2018generalization} and to homogeneous spaces in \cite{cohen2019general}. Regarding the application of this generalization, Cohen and Welling~\cite{CoW2016b} used rotations of the filters together with recombinations of the response maps, which is performed according to the rules of group theory and allows equivariance to 2D right-angle rotations.
The same strategy was extended to 3D images in~\cite{winkels2019pulmonary,worrall2018cubenet}.
This 3D group CNN was applied to 3D texture classification in~\cite{andrearczyk2018rotational}. Bekkers \emph{et al.}~\cite{bekkers2018roto} used the convolution on the discretized group of 2D roto-translations. Weiler \emph{et al.}~\cite{weiler2017learning} proposed a CH kernel representation to achieve a more efficient rotation of the filters via steerability, still in the context of the convolution on groups. Cohen and Welling~\cite{cohen2016steerable} used the irreducible representation of the dihedral group to build CNNs that are equivariant to 2D discrete rotations.
The aforementioned methods offer the possibility to encode the equivariance to virtually any finite group. The 2D rotation group $SO(2)$ can be uniformly discretized by choosing a finite subgroup of $SO(2)$ with an arbitrarily large number of elements. This is no longer the case for 3D rotations, since there are only 5 regular convex polyhedra \cite[Chapter 10]{coxeter1961introduction}.
Therefore, approaches allowing for the propagation of the rotational equivariance without explicitly sampling the different orientations are crucial in 3D. Methods involving CHs and SHs have been introduced to address this problem. Worrall \emph{et al.}~\cite{WGT2016} used a CH representation of the kernels together with a complex convolution and complex non-linearities to achieve rotational equivariance. The main drawback is that it generates many channels that must be disentangled to achieve rotation invariance. A SH representation of the kernels was used in~\cite{weiler20183d} to propagate the equivariance, as a generalization of~\cite{WGT2016} to 3D images. It is also possible to adapt neural networks to non-Euclidean domains, for instance, to the 2D sphere, where the invariance to rotations plays a crucial role as in~\cite{kondor2018clebsch} and~\cite{cohen2018spherical}. Finally, the group convolution can be extended to more general Lie groups, as proposed by Bekkers in~\cite{bekkers2019b}, where CNNs equivariant to roto-translation and scale-translation were implemented.
Most of these methods focused on the propagation of the rotation equivariance throughout the network, whereas we propose lightweight networks discarding this information after each LRI layer, similarly to~\cite{andrearczyk2020local}.
\section{Methods}\label{sec:methods}
\subsection{Notations and Terminology}\label{sec:notations}
We consider 3D images as functions $I \in L_2(\mathbb{R}^3)$, where the value $I(\bm{x}) \in \mathbb{R}$ corresponds to the gray level at location $\bm{x} = (x_1,x_2,x_3) \in \mathbb{R}^3$. The set of 3D rotation matrices in the Cartesian space is denoted as $SO(3)$. The rotation of an image $I$ is written as $I(\mathrm{R} \cdot)$, where $\mathrm{R} \in SO(3)$ is the corresponding rotation matrix.
The sphere is denoted as $\mathbb{S}^2 = \{ \bm{x} \in \mathbb{R}^3 : ||\bm{x}||_2 = 1\}$. Spherical coordinates are defined as $(\rho,\theta,\phi)$ with radius $\rho \geq 0$, elevation angle $\theta \in [0,\pi]$, and horizontal plane angle $\phi \in [0,2\pi)$. Functions defined on the sphere are written as $f\in L_2(\mathbb{S}^2)$ and are expressed in spherical coordinates. The inner product for $f, g \in L_2(\mathbb{S}^2)$ is defined by $\langle f \,, g \rangle_{L_2(\mathbb{S}^2)} = \int_0^\pi \int_0^{2\pi} f(\theta, \phi) \overline{g(\theta, \phi)} \sin(\theta)\text{\textup{d}}\phi \text{\textup{d}}\theta$. With a slight abuse of notation, the rotation of a function $f \in L_2(\mathbb{S}^2)$ is written as $f(\mathrm{R}\cdot)$, despite the fact that spherical functions are expressed in spherical coordinates.
The Kronecker delta $\delta[\cdot]$ is such that $\delta[n]=1$ for $n=0$ and $\delta[n]=0$ otherwise. The Kronecker product is denoted by $\otimes$. The triangle function is referred to as $\text{tri}(x)$ and is defined as $\text{tri}(x)=1-|x|$ if $|x| < 1$ and $\text{tri}(x)=0$ otherwise.
A block diagonal matrix formed by the sub-matrices $\mathrm{A}_i$ is written as $\left[\bigoplus_i \mathrm{A}_i\right]$. The Hermitian transpose is denoted by~$\bm{^\dag}$.
\vspace{-0.3cm}
\subsection{LRI Operators}\label{sec:lri_def}
This work focuses on image operators $\mathcal{G}$ that are LRI as previously introduced in \cite{andrearczyk2020local}. An operator $\mathcal{G}$ is LRI if it satisfies the three following properties:
\begin{itemize}
\item \emph{Locality}: there
exists $\rho_0 > 0$ such that, for every $\bm{x} \in \mathbb{R}^3$ and every image $I \in L_2(\mathbb{R}^3)$, the
quantity $\mathcal{G} \{I\} (\bm{x})$ only depends on local
image values $I(\bm{y})$ for $\lVert \bm{y} - \bm{x}\rVert
\leq \rho_0$.
\item \emph{Global equivariance to translations:} For any $I \in L_2(\mathbb{R}^3)$,
\begin{equation*}
\mathcal{G}\{ I (\cdot - \bm{x}_0) \} = \mathcal{G}\{I\}
(\cdot - \bm{x}_0) \quad \text{ for any } \bm{x}_0 \in
\mathbb{R}^3. \label{eq:transinv} \\
\end{equation*}
\item \emph{Global equivariance to rotations:} For any $I \in L_2(\mathbb{R}^3)$,
\begin{equation*}
\mathcal{G}\{ I(\mathrm{R}_0 \cdot) \} =
\mathcal{G}\{I\} (\mathrm{R}_0\cdot) \quad \text{ for
any } \mathrm{R}_0 \in SO(3). \label{eq:rotinv}
\end{equation*}
\end{itemize}
To reconcile the intuition of LRI with this definition, let us consider a simple scenario where two images $I_1$ and $I_2$ are composed of the same small template $\tau \in L_2(\mathbb{R}^3)$ appearing at random locations and orientations and distant enough to avoid overlaps between them. The locations of the templates $\tau$ are identical for $I_1$ and $I_2$, the difference between the two images being in the local orientation of the templates. These images can be written as
\begin{equation*}
I_k = \sum_{1 \leq j \leq J} \tau (\mathrm{R}_{j,k} (\cdot - \bm{x}_j)),
\end{equation*}
where $J$ is the number of occurrences of the template $\tau$ and $k=1,2$. The local orientation and position of the $j^{\textrm{th}}$ template in image $k$ are represented by $\mathrm{R}_{j,k}$ and $\bm{x}_j$, respectively.
If the operator $\mathcal{G}$ is LRI, then for any $1 \leq j \leq J$ and any rotations $\mathrm{R}_{j,1}$, $\mathrm{R}_{j,2} \in SO(3)$,
\begin{equation*}
\mathcal{G}\{I_1\}(\bm{x}_j) = \mathcal{G}\{I_{2}\}(\bm{x}_j).
\end{equation*}
From the definition of LRI, this equality is required to hold only at the center of the templates. This example is illustrated in Fig. \ref{fig:lri}, where only the responses at the center of the templates are represented.
\begin{figure}
\includegraphics[width=0.48\textwidth]{figures/fig_lri.png}
\caption{Visual representation of the output of a LRI operator. Here, different rotations are applied at the template centers. For the sake of simplicity, only the output values at the template centers are represented.}
\label{fig:lri}
\end{figure}
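As a 2D analogue of this scenario (ours, for illustration only; `make_image`, the bar template, and all sizes are hypothetical choices), one can paste rotated copies of a template at fixed centers; an LRI operator would then respond identically at the template centers of the two images:

```python
import numpy as np
from scipy.ndimage import rotate

def make_image(angles_deg, centers, template, size=64):
    """2D analogue of the images I_k: paste copies of `template`, each rotated
    by its own angle, at fixed centers."""
    img = np.zeros((size, size))
    h, w = template.shape
    for (r, c), ang in zip(centers, angles_deg):
        t = rotate(template, ang, reshape=False)  # rotate about the patch center
        img[r - h // 2: r - h // 2 + h, c - w // 2: c - w // 2 + w] += t
    return img

template = np.zeros((9, 9))
template[4, 2:7] = 1.0  # a small bar: an orientation-sensitive local pattern

centers = [(16, 16), (48, 48)]
img1 = make_image([0, 45], centers, template)   # same centers,
img2 = make_image([90, 10], centers, template)  # different local orientations
# An LRI operator G would satisfy G{img1}(x_j) = G{img2}(x_j) at each center x_j.
assert img1.shape == img2.shape == (64, 64)
assert abs(img1.sum() - img2.sum()) < 0.5  # rotation roughly preserves mass
```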
In this work, the design of LRI operators is obtained in two steps.
First, the image $I \in L_2(\mathbb{R}^3)$ is convolved with SHs modulated by compactly supported radial profiles, referred to as solid SHs.
The second step involves the computation of RI descriptors for each position.
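A minimal numerical sketch of these two steps follows (ours, not the paper's implementation): solid SH kernels with a fixed triangle radial profile on a $7^3$ grid, followed by the per-voxel spectrum, \emph{i.e.}, an SSE rather than SSB descriptor. For right-angle rotations about an axis the voxel grid maps onto itself, so the rotation equivariance of the resulting maps can be checked exactly:

```python
import numpy as np
from math import factorial
from scipy.ndimage import convolve
from scipy.special import lpmv

def Y(n, m, theta, phi):
    """Orthonormal SH Y_n^m (Condon-Shortley convention), polar theta, azimuth phi."""
    if m < 0:
        return (-1) ** (-m) * np.conj(Y(n, -m, theta, phi))
    norm = np.sqrt((2 * n + 1) / (4 * np.pi) * factorial(n - m) / factorial(n + m))
    return norm * lpmv(m, n, np.cos(theta)) * np.exp(1j * m * phi)

def solid_sh_kernel(n, m, size=7):
    """kappa_n^m = h(rho) Y_n^m(theta, phi) on a cubic grid, h a triangle profile."""
    c = size // 2
    z, y, x = [a.astype(float) for a in np.mgrid[-c:c + 1, -c:c + 1, -c:c + 1]]
    rho = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    theta = np.arccos(np.divide(z, rho, out=np.zeros_like(z), where=rho > 0))
    phi = np.arctan2(y, x)
    k = np.clip(1 - rho / (c + 1), 0, None) * Y(n, m, theta, phi)
    if n > 0:
        k[rho == 0] = 0.0  # the angular part has zero mean, so no DC at the origin
    return k

def sse_maps(I, N=2, size=7):
    """Solid spherical energy maps s_n(x) for n = 0..N (an SSE forward pass)."""
    maps = []
    for n in range(N + 1):
        s = np.zeros(I.shape)
        for m in range(-n, n + 1):
            k = solid_sh_kernel(n, m, size)
            re = convolve(I, k.real, mode="constant")
            im = convolve(I, k.imag, mode="constant")
            s += re ** 2 + im ** 2
        maps.append(s / (2 * n + 1))
    return np.stack(maps)

rng = np.random.default_rng(0)
I = rng.normal(size=(12, 12, 12))
S = sse_maps(I)
# A right-angle rotation about the z-axis maps the grid onto itself, so the
# energy maps of the rotated image are the rotated energy maps.
S_rot = sse_maps(np.rot90(I, axes=(1, 2)))
assert np.allclose(S_rot, np.rot90(S, axes=(2, 3)))
```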
\vspace{-0.3cm}
\subsection{Spherical Harmonics}\label{sec:SH}
Any function $f \in L_2(\mathbb{S}^2)$ can be expanded in the form of
\begin{equation}
\label{eq:sph_exp}
f(\theta, \phi) = \sum_{n=0}^\infty \sum_{m=-n}^n F_n^m Y_n^m(\theta, \phi),
\end{equation}
where $Y_n^m$ are the so-called SHs for a degree $n \in \mathbb{N}$ and order $m$ with $-n\leq m \leq n$. For their formal definition, see
\cite[Section 2.5]{depeursinge2018rotation} and for their visual representation, refer to Fig. \ref{fig:sph_family}. The SHs form an orthogonal basis of $L_2(\mathbb{S}^2)$~\cite[Chapter 5.6]{varshalovich1988quantum}. Thus, the expansion coefficients of Eq.~(\ref{eq:sph_exp}) can be computed by projecting $f$ onto
the SH basis using the inner product on the sphere
\begin{equation}
F_n^m = \langle f\,, Y_n^m \rangle_{L_2(\mathbb{S}^2)}.
\end{equation}
This expansion corresponds to the Fourier transform on the sphere.
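For illustration (ours; the lpmv-based SH implementation and the grid sizes are arbitrary choices), the projection can be approximated by quadrature on a $\theta$-$\phi$ grid, recovering the coefficients of a synthetic band-limited function:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def Y(n, m, theta, phi):
    """Orthonormal SH Y_n^m (Condon-Shortley convention), polar theta, azimuth phi."""
    if m < 0:
        return (-1) ** (-m) * np.conj(Y(n, -m, theta, phi))
    norm = np.sqrt((2 * n + 1) / (4 * np.pi) * factorial(n - m) / factorial(n + m))
    return norm * lpmv(m, n, np.cos(theta)) * np.exp(1j * m * phi)

# Quadrature grid on the sphere.
theta = np.linspace(0, np.pi, 256)
phi = np.linspace(0, 2 * np.pi, 256, endpoint=False)
T, P = np.meshgrid(theta, phi, indexing="ij")
dt, dp = theta[1] - theta[0], phi[1] - phi[0]

def project(f, n, m):
    """F_n^m = <f, Y_n^m>_{L2(S^2)} approximated by summation over the grid."""
    return np.sum(f * np.conj(Y(n, m, T, P)) * np.sin(T)) * dt * dp

# A synthetic function with two known coefficients; projection recovers them.
f = 2.0 * Y(2, 1, T, P) + 0.5 * Y(3, -2, T, P)
assert np.isclose(project(f, 2, 1), 2.0, atol=1e-2)
assert np.isclose(project(f, 3, -2), 0.5, atol=1e-2)
assert np.isclose(project(f, 2, 0), 0.0, atol=1e-2)  # orthogonality
```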
We regroup the coefficients of all orders $m$ for a given degree $n$ as the $1 \times (2n+1)$ vector
\begin{equation}
\bm{\mathcal{F}}_n = [F_n^{-n} \ldots F_n^0 \ldots F_n^n],
\end{equation}
called the spherical Fourier vector of degree $n$.
One important property of SHs is their steerability, \textit{i.e.}, the rotation of a SH can be expressed as a linear combination of the SHs of the same degree:
\begin{equation}
\label{eq:steerability_ynm}
Y_n^m(\mathrm{R}_0\cdot) = \sum_{m'=-n}^n [\mathrm{D}_n(\mathrm{R}_0)]_{m',m} Y_n^{m'},
\end{equation}
where $\mathrm{D}_n(\mathrm{R}_0)$ is a unitary matrix known as the Wigner
$\mathrm{D}$-matrix \cite[Chapter 4]{varshalovich1988quantum}.
Therefore, if two functions $f,f' \in L_2(\mathbb{S}^2)$ differ only by a rotation $\mathrm{R}_0 \in SO(3)$, \textit{i.e.}
$f'=f(\mathrm{R}_0 \cdot)$, their spherical Fourier vectors, $\bm{\mathcal{F}}_n$ and $\bm{\mathcal{F}}'_n$,
satisfy the following relation \cite[Section 3, Eq. (5)]{kakarala2010}:
\begin{equation}
\label{eq:wigner_rot}
\bm{\mathcal{F}}'_n = \bm{\mathcal{F}}_n \mathrm{D}_n(\mathrm{R}_0).
\end{equation}
This property is similar to the shifting property of the Fourier transform on the real line. In the spherical case, instead of multiplying by a complex exponential, the transform is multiplied by the Wigner $\mathrm{D}$-matrix of degree $n$ associated with the rotation $\mathrm{R}_0$.
\begin{figure}
\includegraphics[width=0.49\textwidth]{figures/spherical_harmonics_half.png}
\caption{Visual representation of the real and imaginary parts of $h(r)Y_n^m(\theta, \phi)$, where $h$ is chosen to be Gaussian for simplicity. Each box represents a given SH, with the real part on the left and the imaginary part on the right. Blue represents positive values and orange negative values. Only the SHs for $m\geq 0$ are represented since we have the symmetry $Y_n^{-m} = (-1)^m \overline{Y_n^m}$.}
\label{fig:sph_family}
\end{figure}
\vspace{-0.3cm}
\subsection{Spherical RI: the Spectrum and the Bispectrum}\label{sec:RIsphere}
With the properties of the spherical Fourier vectors, it is possible to efficiently obtain RI operators for functions defined on the sphere. Two quantities computed from these coefficients will be of interest: the spherical spectrum and the spherical bispectrum.
\subsubsection{Spectrum}
The spectrum is a ubiquitous quantity in signal processing and it is well known to provide a source of translation invariant descriptors for periodic functions and functions defined on the real line. In these cases, the spectrum corresponds to the squared modulus of the Fourier transform. Its spherical equivalent, the spherical spectrum, is defined by the averaged squared norm of the spherical Fourier vector $\bm{\mathcal{F}}_n$:
\begin{equation}
\label{eq:spectrum}
s_n(f) = \frac{1}{2n+1} \bm{\mathcal{F}}_n \bm{\mathcal{F}}_n^{\bm{\dag}} =
\frac{1}{2n+1}\sum_{m=-n}^n |F_n^m|^2.
\end{equation}
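As an illustrative numerical sketch (ours, not part of the original method), the spectrum of a degree-$n$ Fourier vector can be computed directly with NumPy. Since the Wigner $\mathrm{D}$-matrices are unitary, multiplying $\bm{\mathcal{F}}_n$ by any unitary matrix leaves $s_n$ unchanged, which the usage below checks with a random unitary matrix standing in for $\mathrm{D}_n(\mathrm{R}_0)$:

```python
import numpy as np

def spectrum(Fn):
    """Spherical spectrum s_n of a degree-n Fourier vector
    Fn = [F_n^{-n}, ..., F_n^{n}] of length 2n + 1."""
    n = (len(Fn) - 1) // 2
    return np.sum(np.abs(Fn) ** 2) / (2 * n + 1)
```

For instance, `spectrum(np.array([1.0, 1j, 1.0]))` gives $1$, and `spectrum(F1 @ Q)` equals `spectrum(F1)` for any unitary `Q`.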
\subsubsection{Bispectrum}
The bispectrum is defined as in \cite[Section 4 Eq. (24)]{kakarala2010}:
\begin{equation}
\label{eq:bispectrum}
b^{\ell}_{n,n'}(f) = [\bm{\mathcal{F}}_n \otimes \bm{\mathcal{F}}_{n'}] \mathrm{C}_{nn'} \widetilde{\bm{\mathcal{F}}_\ell}^{\dag},
\end{equation}
where the term $\bm{\mathcal{F}}_n \otimes \bm{\mathcal{F}}_{n'}$ is a $1 \times (2n+1)(2n'+1)$ vector, $\mathrm{C}_{nn'}$ is the $(2n+1)(2n'+1) \times (2n+1)(2n'+1)$ Clebsch-Gordan matrix containing the Clebsch-Gordan coefficients, whose definition and main properties are recalled in Appendix~\ref{app:CG}, and $\widetilde{\bm{\mathcal{F}}_\ell} = [0, \ldots , 0, \bm{\mathcal{F}}_\ell, \, 0, \ldots, 0]$ is a zero-padded vector of size $1 \times (2n+1)(2n'+1)$ containing the spherical Fourier vector of degree $\ell$ with $|n-n'|\leq \ell \leq n+n'$. The zero-padding is performed to match the size of $\mathrm{C}_{nn'}$ and to select only the rows corresponding to the $\ell^{\textrm{th}}$ degree.
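As a sketch (our own illustration; the function name and looping strategy are not from the paper), the coefficient $b^{\ell}_{n,n'}$ can equivalently be written as an explicit sum over orders weighted by Clebsch-Gordan coefficients, here obtained from SymPy. This agrees with the matrix form above up to the indexing convention of $\mathrm{C}_{nn'}$:

```python
import numpy as np
from sympy.physics.quantum.cg import CG

def bispectrum(Fn, Fn2, Fl, n, n2, l):
    """b^l_{n,n'} as a Clebsch-Gordan-weighted sum over orders.
    Fn, Fn2, Fl are Fourier vectors indexed from m = -deg to +deg."""
    b = 0j
    for m1 in range(-n, n + 1):
        for m2 in range(-n2, n2 + 1):
            m = m1 + m2
            if abs(m) > l:
                continue  # <n m1 n' m2 | l m> vanishes for |m| > l
            cg = float(CG(n, m1, n2, m2, l, m).doit())
            b += Fn[m1 + n] * Fn2[m2 + n2] * cg * np.conj(Fl[m + l])
    return b
```

One can check, for example, that $b^n_{0,n}$ reduces to $F_0^0 \, \bm{\mathcal{F}}_n \bm{\mathcal{F}}_n^{\dag}$, since $\mathrm{C}_{0,n}$ is the identity.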
The spectrum and the bispectrum are known to be RI. We recall this fundamental result that will be crucial for us thereafter. \\
\vspace{-0.3cm}
\begin{proposition}\label{prop:ri_spec_bisp_complete}
The spectrum and the bispectrum of spherical functions are RI. This means that, for any rotation $\mathrm{R}_0 \in SO(3)$ and any function $f \in L_2(\mathbb{S}^2)$, we have, for $f' = f( \mathrm{R}_0 \cdot )$,
\begin{equation}
s_n (f) = s_n (f') \quad \text{and} \quad b^{\ell}_{n,n'} (f) = b^{\ell}_{n,n'} (f')
\end{equation}
for any $n,n' \geq 0$ and any $|n-n'| \leq \ell \leq n+n'$. \\
\end{proposition}
\vspace{-0.3cm}
The result that the bispectrum of a spherical function is RI is given in \cite[Theorem 4.1]{kakarala2010}.
Besides, we introduce the following notations:
\begin{equation}
\mathcal{S}\{\bm{\mathcal{F}}_n\} = s_n(f)
\end{equation} and
\begin{equation}
\mathcal{B}\{\bm{\mathcal{F}}_n, \bm{\mathcal{F}}_{n'}, \bm{\mathcal{F}}_{\ell}\} = b^\ell_{n, n'}(f).
\end{equation}
These notations highlight that the spectrum coefficient $s_n(f)$ only depends on the $n$th-order Fourier vector $\bm{\mathcal{F}}_n$, and that the bispectrum coefficient $b_{n,n'}^{\ell}(f)$ only depends on $\bm{\mathcal{F}}_n$, $\bm{\mathcal{F}}_{n'}$, and $\bm{\mathcal{F}}_\ell$.
Moreover, the rotation invariance of the spectrum and bispectrum can be reformulated as
\begin{equation}
\mathcal{S}\{\bm{\mathcal{F}}_n \mathrm{D}_n(R)\} = \mathcal{S}\{\bm{\mathcal{F}}_n\}
\end{equation}
and
\begin{equation}
\mathcal{B}\{\bm{\mathcal{F}}_n \mathrm{D}_n(R) , \bm{\mathcal{F}}_{n'} \mathrm{D}_{n'}(R), \bm{\mathcal{F}}_{\ell} \mathrm{D}_\ell(R)\} = \mathcal{B}\{\bm{\mathcal{F}}_n, \bm{\mathcal{F}}_{n'}, \bm{\mathcal{F}}_{\ell}\}.
\end{equation}
\vspace{-0.5cm}
\subsection{Advantages of the Bispectrum over the Spectrum}\label{sec:bisp_vs_sp_theory}
Despite the simplicity of computing the spherical spectrum, it can be beneficial to use the more complete bispectrum, for which we provide two arguments.
\subsubsection{Inter-Degree Rotations} The spectrum does not take into account inter-degree rotations. For instance, let us build a function $f'$ from the SH expansion $\bm{\mathcal{F}} = (\bm{\mathcal{F}}_0, \bm{\mathcal{F}}_1, \cdots)$ of the function $f$ as follows: for each degree $n$, we apply a different Wigner $\mathrm{D}$-matrix $\mathrm{D}_n(\mathrm{R}_n)$, with at least one rotation matrix $\mathrm{R}_n$ different from the others. The corresponding SH expansion $\bm{\mathcal{F}}'= (\bm{\mathcal{F}}_0\mathrm{D}_0(\mathrm{R}_0), \bm{\mathcal{F}}_1\mathrm{D}_1(\mathrm{R}_1), \cdots)$ has the same spectrum, since the Wigner $\mathrm{D}$-matrices are unitary (\emph{i.e.}, they do not change the norm of $\bm{\mathcal{F}}_n$).
\subsubsection{Intra-Degree Variations} Another aspect to which the spectrum is insensitive is the distinction of intra-degree variations. For fixed $n_0\geq1$, the functions $f=Y_{n_0}^m$ have the same spectrum $s_{n}(f) = \frac{\delta[n-n_0]}{2n_0+1}$ but are not rotations of each other in general (see Fig. \ref{fig:sph_family}).
On the contrary, the bispectrum does not suffer from these limitations (see Section \ref{sec:toyExperiment}). Furthermore, the spectral information is contained in the bispectrum. This can be easily seen as:
\begin{equation}
b^n_{0,n}(f)= \bm{\mathcal{F}}_0 \bm{\mathcal{F}}_n \bm{\mathcal{F}}_n^\dag = (2n+1)\, F_0^0\, s_n(f).
\end{equation}
This illustrates that, given a non-zero mean $\bm{\mathcal{F}}_0 = F_0^0 \in \mathbb{R}$, we can retrieve the spectral information from the bispectrum. This can appear as a restriction for the bispectrum. However, in practice, it is possible to add a constant to the signal ensuring that $F_0^0$ is non-zero.
The aforementioned properties make the bispectrum a more faithful descriptor and a good substitute for the spectral decomposition.
\vspace{-0.3cm}
\subsection{LRI on the Solid Sphere $\mathbb{R}^3$}\label{sec:lri_solid_sphere}
The previous sections introduced the theoretical aspects needed to build RI descriptors for functions defined on the sphere. In this work, we are interested in 3D images; therefore, we use the spherical spectrum and bispectrum in combination with solid SHs. Solid SHs extend the SHs to a 3D volume by multiplying them with a radial profile.
We introduce the following notation for solid SHs evaluated on the Cartesian grid:
\begin{equation}
\kappa_n^m(\bm{x}) = \kappa_n^m (\rho,\theta,\phi) = h_n(\rho) Y_n^m(\theta, \phi),
\end{equation}
where $h_{n}$ is a compactly supported radial profile that is shared among the SHs $Y_n^m$ with same degree $n$. In the final network, the radial profiles $h_n$ are learned from the data.
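A minimal sketch of such a kernel, discretized on a Cartesian grid (the Gaussian radial profile and the function name are our own choices for illustration; in the network, $h_n$ is learned):

```python
import numpy as np
from scipy.special import sph_harm

def solid_sh_kernel(n, m, size=9, radial=lambda rho: np.exp(-rho**2)):
    """Evaluate kappa_n^m(x) = h(rho) Y_n^m(theta, phi) on a size^3 grid."""
    c = (size - 1) / 2
    z, y, x = np.mgrid[:size, :size, :size] - c
    rho = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(y, x)  # azimuthal angle
    # polar angle; at the origin it is set arbitrarily to pi/2
    phi = np.arccos(np.divide(z, rho, out=np.zeros_like(rho), where=rho > 0))
    # SciPy's convention: sph_harm(order m, degree n, azimuth, polar)
    return radial(rho) * sph_harm(m, n, theta, phi)
```

The kernels inherit the symmetry $Y_n^{-m} = (-1)^m \overline{Y_n^m}$ from the SHs, since the radial profile is real.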
The image is convolved with the solid SHs and by regrouping the resulting feature maps for each degree, we obtain the spherical Fourier feature maps\footnote{The convolution $(I*\kappa_n^{m})(\bm{x})$ with all the $\kappa_n^m$ is equivalent to a local projection of the image around the position $\bm{x}$ to a function defined on the sphere followed by a projection onto the SHs basis. For that reason, we use the same notation as for the spherical Fourier vector of degree $n$. We distinguish the spherical Fourier feature maps by the evaluation over a position $\bm{x}$.}:
\begin{equation}
\label{eq:sh_conv}
\bm{\mathcal{F}}_n(\bm{x}) = [(I*\kappa_n^{m})(\bm{x})]_{m=-n}^{m=n}.
\end{equation}
In other terms, the $m^{\textrm{th}}$ component of $\bm{\mathcal{F}}_n(\bm{x})$ is $\langle I ( \bm{x} - \cdot ) , h_{n} Y_{n}^m \rangle$, and measures the correlation between $I$ centered at $\bm{x}$ and the solid SH $\kappa_n^m = h_{n} Y_{n}^m$.
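The convolution above can be sketched with a standard FFT-based implementation (helper name ours; the actual layer uses learned kernels and, in some experiments, strided convolutions):

```python
import numpy as np
from scipy.signal import fftconvolve

def fourier_feature_map(image, kernels):
    """Stack the 2n+1 responses (I * kappa_n^m), m = -n..n, along a
    trailing axis, giving the degree-n Fourier vector F_n(x) per voxel."""
    return np.stack(
        [fftconvolve(image, k, mode="same") for k in kernels], axis=-1
    )
```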
Thanks to the Fourier feature maps, we introduce the image operators used in this paper in Definition \ref{def:op}. \\
\begin{definition} \label{def:op}
We define the \emph{SSE image operator} $\mathcal{G}^\text{SSE}_n$ of degree $n \geq 0$ as
\begin{equation}
\label{eq:gsse}
\mathcal{G}^\text{SSE}_n\{I\}(\bm{x}) = \mathcal{S}\{ \bm{\mathcal{F}}_n (\bm{x})\}
\end{equation}
for any $I \in L_2(\mathbb{R}^3)$ and $\bm{x} \in \mathbb{R}^3$.
Similarly, we define the \emph{SSB image operator} $\mathcal{G}^\text{SSB}_{n,n',\ell}$ associated with degrees $n,n' \geq 0$ and $|n-n'| \leq \ell \leq n+n'$ as
\begin{equation}
\label{eq:gssb}
\mathcal{G}^\text{SSB}_{n,n',\ell}\{I\}(\bm{x}) =
\mathcal{B}\{ \bm{\mathcal{F}}_n(\bm{x}), \bm{\mathcal{F}}_{n'}(\bm{x}), \bm{\mathcal{F}}_{\ell}(\bm{x}) \},
\end{equation}
for any $I \in L_2(\mathbb{R}^3)$ and $\bm{x} \in \mathbb{R}^3$. \\
\end{definition}
The SSE image operators have been considered in \cite{andrearczyk2020local}, where they were proven to be LRI (Appendix D therein). We recall this result and extend it to SSB image operators in the following proposition, whose proof is given in Appendix \ref{app:LRIproof}. \\
\vspace{-0.3cm}
\begin{proposition} \label{prop:LRI}
The SSE and SSB image operators are globally equivariant to translations and rotations. When the radial profiles $h_n$ are all compactly supported, these operators are therefore LRI in the sense of Section \ref{sec:lri_def}.
\end{proposition}
\vspace{-0.3cm}
\subsection{Implementation of the LRI layers}\label{sec:implementation}
In this section, we report the implementation details of our LRI design.
\subsubsection{Parameterization of the Radial Profiles}
The radial profiles are parameterized as a linear combination of radial functions
\begin{equation} \label{eq:hqn}
h_{q,n}(\rho) = \sum_{j=0}^{J} w_{q,n,j} \psi_j(\rho),
\end{equation}
where the $w_{q,n,j}$ are the trainable parameters of the model.
In \eqref{eq:hqn}, $h_{q,n}$ is the $q^\textrm{th}$ radial profile associated with the degree $n$. The index $q$ controls the number of output streams in the layer.
The index $j=0,\ldots,J$ controls the radial components of the filter. The radial functions are chosen as $\psi_j(\rho)=\text{tri}(\rho-j)$.
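A sketch of this parameterization (the weights are hard-coded here; in the layer they are the trainable parameters $w_{q,n,j}$):

```python
import numpy as np

def tri(t):
    """Triangle function tri(t) = max(0, 1 - |t|)."""
    return np.maximum(0.0, 1.0 - np.abs(t))

def radial_profile(rho, w):
    """h(rho) = sum_j w_j tri(rho - j), i.e. Eq. (hqn) for one (q, n)."""
    return sum(w_j * tri(rho - j) for j, w_j in enumerate(w))
```

Since the shifted triangle functions interpolate at the integers, $h(j) = w_j$, and $h$ is piecewise linear in between.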
\subsubsection{Number of Feature Maps}
The image is convolved with the kernels $\kappa_{q,n}^m$ to obtain the spherical Fourier feature maps $\{\bm{\mathcal{F}}_{q,n}(\bm{x})\}_{n=0,\ldots,N}^{q=1,\ldots,Q}$. Here, $Q$ is the number of output streams of the layer and $N$ is the maximal degree of the SH decomposition. These feature maps are combined according to Eqs. (\ref{eq:gsse}) and (\ref{eq:gssb}), resulting in $\mathcal{G}^\text{SSE}_{q,n}\{I\}(\bm{x})$ or $\mathcal{G}^\text{SSB}_{q,n,n',\ell}\{I\}(\bm{x})$, respectively.
In the following, we discuss the number of feature maps generated for only one output stream, thus we drop the index $q$.
In the case of the operator $\mathcal{G}^\text{SSE}_n$, the number of generated feature maps is $N+1$. For the $\mathcal{G}^\text{SSB}_{n,n',\ell}$ operator, the total number of feature maps is $\mathcal{O}(N^3)$. It is actually not necessary to compute all the bispectrum coefficients, some of them being redundant due to the following properties. First, for each $n$, $n'$ and $\ell$, the bispectral components $b_{n,n'}^\ell(f)$ and $b_{n',n}^\ell(f)$ are proportional independently of $f$~\cite[Theorem 4.1]{kakarala2010}. Hence, we choose to compute the components only for $n\leq n'$ and $0\leq n+n'\leq N$. Second, even though the bispectrum is complex-valued, when $f$ is real, $b^\ell_{n,n'}(f)$ is purely real or purely imaginary if $n+n'+\ell$ is even or odd, respectively~\cite[Theorem 2.2]{kakarala2011viewpoint}. Thus, we can map it to a real-valued scalar. In our design, we take either the real or the imaginary part depending on the values of the indices $n,n',\ell$.
Even with these two properties, the number of feature maps for the $\mathcal{G}^\text{SSB}_{n,n',\ell}$ operator still grows as a polynomial of degree 3 in $N$ (see Table~\ref{tab:comp_num_feature_map} for the first values), but for low $N$ these properties greatly reduce this number. Moreover, the maximal degree $N$ of the SH expansion cannot be taken arbitrarily large since the kernels are discretized~\cite{andrearczyk2020local}. The upper bound for $N$ is given by $N \leq \frac{\pi c}{4}$, where $c$ is the diameter of the kernel. This condition can be regarded as the Nyquist frequency for the SH expansion. As an example, $N=7$ is the maximal value for a kernel of size $9\times9\times9$.
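The count of retained bispectrum coefficients can be reproduced with a short enumeration. In the sketch below we keep the triples with $n\leq n'$, $n+n'\leq N$, and $|n-n'|\leq\ell\leq n+n'$, together with a parity condition ($n+n'+\ell$ even) that matches the table entries for small $N$; the exact selection rule of the original implementation may differ for large $N$:

```python
def n_bispectrum_maps(N):
    """Count triples (n, n', l) with n <= n', n + n' <= N,
    |n - n'| <= l <= n + n', and n + n' + l even."""
    count = 0
    for n in range(N + 1):
        for n2 in range(n, N - n + 1):
            for l in range(n2 - n, n + n2 + 1):
                if (n + n2 + l) % 2 == 0:
                    count += 1
    return count
```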
\begin{table}[h]
\caption{Number of feature maps obtained for the $\mathcal{G}^\text{SSE}_n$ and $\mathcal{G}^\text{SSB}_{n,n',\ell}$ operators as a function of the maximal degree $N$.}
\label{tab:comp_num_feature_map}
\centering
\begin{tabular}{@{}lrrrrrrrr@{}} \toprule
$N$ & 0 & 1 & 2 & 4 & 6 & 8 & 10 & 100 \\ \midrule
Spectrum & 1 & 2 & 3 & 5 & 7 & 9 & 11 & 101 \\
Bispectrum & 1 & 2 & 5 & 14 & 30& 55 & 91 & 48127 \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection{Discretization}
The kernels $\kappa_{q,n}^m = h_{q,n} Y_n^m$ are discretized by evaluating them on a Cartesian grid. For more details about the discretization, see \cite[Section 2.4]{andrearczyk2019exploring}.
\vspace{-0.3cm}
\section{Experiments and Results}\label{sec:experiments_and_results}
Section~\ref{sec:toyExperiment} illustrates the differences between the spherical spectrum and bispectrum with two toy experiments designed to fool the spectrum. Then, we compare the classification performance of three different CNNs (SSE, SSB and standard), detailed in Section~\ref{sec:network_architecture}, on the two datasets described in Section~\ref{sec:datasets} in terms of accuracy (Section~\ref{sec:exp_perf}) and generalization power (Section~\ref{sec:learningCurves}), \emph{i.e.}, the number of training samples required to reach the final accuracy.
\vspace{-0.3cm}
\subsection{Comparing Local Spectrum to Bispectrum Representations}\label{sec:toyExperiment}
Two toy experiments are conducted to highlight the differences in terms of the representation power of the spherical spectrum and bispectrum. The first experiment is designed to show that the spectrum is unable to discriminate between patterns that only differ in terms of rotations between degrees. The second experiment illustrates that the spectrum cannot capture differences within the same degree. These two experiments are done in the 3D image domain to show the applicability of the spherical spectrum and bispectrum of the solid SHs and to be as close as possible to the final application.
The images are obtained by evaluating $h(\rho) \sum_{n=0}^N \sum_{m=-n}^{m=n}F_n^m Y_n^m(\theta, \phi)$ on a 3D Cartesian grid of $32\times32\times32$, with $h$ defined as an isotropic Simoncelli wavelet profile \cite{portilla2000parametric}. This first experiment investigates the capability of the spectrum and bispectrum to discriminate between functions with distinct inter-degree rotations. Representatives $f$ and $f'$ of the two classes are obtained by summing the SHs described by their respective spherical Fourier transforms $\bm{\mathcal{F}}$ and $\bm{\mathcal{F}}'$. $\bm{\mathcal{F}}$ is composed of $\bm{\mathcal{F}}_1 = [1 , \mathrm{j},1]$, $\bm{\mathcal{F}}_2 = [1, -1 ,1,1,1]$, $\bm{\mathcal{F}}_3 = [1 ,-1 ,1, \mathrm{j},1, 1, 1]$ and $\bm{\mathcal{F}}_n = \boldsymbol{0}$ for any $n\neq 1,2,3$. The coefficients are chosen to ensure that the images are real and that the spherical spectrum satisfies $s_n(f)=1$ for $n=1,2,3$. The spherical decomposition $\bm{\mathcal{F}}'$ of the second class is computed as $\bm{\mathcal{F}}'_1=\bm{\mathcal{F}}_1 \mathrm{D}_1(\mathrm{R}_1)$, $\bm{\mathcal{F}}'_2=\bm{\mathcal{F}}_2 \mathrm{D}_2(\mathrm{R}_2)$ and $\bm{\mathcal{F}}'_3=\bm{\mathcal{F}}_3 \mathrm{D}_3(\mathrm{R}_3)$, where $\mathrm{R}_1$, $\mathrm{R}_2$ and $\mathrm{R}_3$ are distinct 3D rotations. This makes it possible to combine the different degrees with different rotations, resulting in a function $f'$ with spectrum $s_{n}(f') = s_n(f)$ for all $n$ but $f'\neq f$. Moreover, for each class, 50 distinct instances are created by adding Gaussian noise and randomly rotating the images. The random rotations are drawn from a uniform distribution over the 3D rotations, and the associated Wigner-$\mathrm{D}$ matrices are used to rotate the instances. This time, the same rotation is applied to all degrees. The bispectrum and spectrum are calculated using only the responses to the spherical filters at the origin voxel of the images, and the results are presented in Fig.~\ref{fig:results_scalar1}. Note that only a subset of distinctive coefficients of the bispectrum is shown. The results indicate that the spectrum cannot detect inter-degree rotations, whereas the bispectrum can.
\begin{figure}[h]
\subfloat[Spectrum]{
\input{figures/tikz/spectrum_toy1.tikz}
}
\subfloat[Bispectrum]{
\input{figures/tikz/bispectrum_toy1.tikz}
}
\caption{Experiment with distinct inter-degree rotations. Spherical spectral (left) and bispectral (right) decomposition of the two classes. The blue bars represent the decomposition for the first class $f$
and the orange bars for the second class $f'$ ($\bm{\mathcal{F}}'_i=\bm{\mathcal{F}}_i \mathrm{D}_i(R_i)$, $i=1,2,3$). Note that only a subset of the bispectral components is displayed. It can be observed that the spectrum cannot distinguish between $f$ and $f'$, and that the bispectrum can.
}
\label{fig:results_scalar1}
\end{figure}
In the next experiment, instead of applying a distinct rotation to each degree, we choose orders that differ within the same degree. For the first class $f$, we use only the order $m=0$ and for the second class $f'$ the orders $m=n,-n$. This choice is motivated by their differences in shape, as represented in Fig. \ref{fig:sph_family}. The spherical Fourier transform $\bm{\mathcal{F}}$ of the first class is chosen to be $\bm{\mathcal{F}}_1 = [0 ,\sqrt{3} \mathrm{j} ,0]$, $\bm{\mathcal{F}}_2 = [0 ,0,\sqrt{5},0,0]$, $\bm{\mathcal{F}}_3 = [0,0,0,\sqrt{7} \mathrm{j},0,0,0]$ and $\bm{\mathcal{F}}_n = \boldsymbol{0}$ for any $n\neq 1,2,3$. The spherical decomposition $\bm{\mathcal{F}}'$ of the second class is $\bm{\mathcal{F}}'_1 = [\sqrt{3/2},0,\sqrt{3/2}]$, $\bm{\mathcal{F}}'_2 = [ \sqrt{5/2},0,0,0,\sqrt{5/2}]$, $\bm{\mathcal{F}}'_3 = [\sqrt{7/2},0,0,0,0,0,\sqrt{7/2}]$. The coefficients are chosen to obtain a spherical spectrum of 1 for $n=1,2,3$ and to generate real images. The results in Fig. \ref{fig:results_scalar2} show that the bispectrum can discriminate between the two classes even though they have the same spectrum.
\begin{figure}[h]
\input{figures/tikz/toy2.tikz}
\caption{Experiment with intra-degree variations. Spherical spectral (left) and bispectral (right) decomposition of the two classes. The blue bars represent the decomposition for the first class $f$
and the orange bars for the second class $f'$.
Note that only a subset of the bispectral components is displayed. It can be observed that the spectrum cannot distinguish between $f$ and $f'$, and that the bispectrum can.}
\label{fig:results_scalar2}
\end{figure}
\vspace{-0.3cm}
\subsection{Datasets}\label{sec:datasets}
To evaluate the performance of the proposed LRI operators, we use both a synthetic and a medical dataset. The synthetic dataset constitutes a controlled experimental setup and contains two classes with 500 volumes of size $32\times32\times32$ each. They are generated by placing two types of patterns of size $7\times7\times7$, namely a binary segment and a 2D cross with the same norm, at random 3D orientations and random locations with possible overlap. The number of patterns per volume is randomly set to $\lfloor d \cdot(\frac{s_v}{s_p})^3\rfloor$, where $s_v$ and $s_p$ are the sizes of the volume and of the pattern, respectively, and the density $d$ is drawn from a uniform distribution in the range $[0.1,0.5]$. The two classes differ by the proportion of the patterns, \textit{i.e.} 30\% segments with 70\% crosses for the first class and vice versa for the second class. 800 volumes are used for training and the remaining 200 for testing. Despite the simplicity of this dataset, some variability is introduced by the overlapping patterns and the linear interpolation of the 3D rotations.
The second dataset is a subset of the American National Lung Screening Trial (NLST) that was annotated by radiologists at the University Hospitals of Geneva (HUG)~\cite{andrearczyk2020local}. The dataset includes 485 pulmonary nodules from distinct patients in CT, 244 of which are benign and 241 malignant. We pad or crop the input volumes (originally with sizes ranging from $16\times16\times16$ to $128\times128\times128$) to the size $64\times64\times64$. We use balanced training and test splits with 392 and 93 volumes, respectively. Examples of 2D slices of the lung nodules are illustrated in Fig. \ref{fig:ex_nlst}. The Hounsfield units are clipped in the range $[-1000,400]$, then normalized to zero mean and unit variance (using the training mean and variance).
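The intensity preprocessing described above amounts to the following sketch (in practice, the mean and standard deviation are computed on the training fold):

```python
import numpy as np

def preprocess_ct(volume, train_mean, train_std):
    """Clip Hounsfield units to [-1000, 400], then standardize with
    training-set statistics."""
    v = np.clip(volume, -1000.0, 400.0)
    return (v - train_mean) / train_std
```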
\begin{figure}[h]
\subfloat[Benign]{
\includegraphics[width=0.24\textwidth]{figures/nlst_benign.png}
}
\subfloat[Malignant]{
\includegraphics[width=0.24\textwidth]{figures/nlst_malignant.png}
}
\caption{Slices drawn from the NLST dataset showing a benign pulmonary nodule
and a malignant one.}
\label{fig:ex_nlst}
\end{figure}
\vspace{-0.3cm}
\subsection{Network Architecture}\label{sec:network_architecture}
This work uses the network architecture proposed in \cite{andrearczyk2019exploring}. The first layer is the LRI layer implemented as described in Section \ref{sec:implementation}. The obtained responses are aggregated using spatial global average pooling, similarly to \cite{AnW2016}. This pooling aggregates the LRI operator responses into a single scalar per feature map and is followed by one Fully Connected (FC) layer. For the nodule classification experiment, we average the responses inside the nodule masks instead of across the entire feature maps to remove the effect of the nodule size, allowing the network to focus on the textural content of the nodule. The final softmax FC layer is connected directly with a cross-entropy loss. Standard Rectified Linear Unit (ReLU) activations are used. The two types of LRI networks are referred to as SSE-CNN and SSB-CNN when the LRI layer uses the $\mathcal{G}^\text{SSE}$ or the $\mathcal{G}^\text{SSB}$ operator, respectively.
The networks are trained using an Adam optimizer with $\beta_1=0.99$ and $\beta_2=0.9999$ and a batch size of 8. Other task-specific parameters are: for the synthetic experiment (kernel size $7\times7\times7$, stride 1, 2 filters and 50,000 iterations), for the nodule classification experiment (kernel size $9\times9\times9$, stride 2, 4 filters and 10,000 iterations).
The initial values of the trainable weights in \eqref{eq:hqn} are drawn independently from a Gaussian distribution as $w_{q,n,j} \sim \mathcal{N}(0,\,1)$ and the biases are initialized to zero. This initialization is inspired by \cite{he2015delving,weiler2017learning} in order to avoid vanishing and exploding activations and gradients.
We compare the proposed CNNs to a network with the same architecture but with a standard 3D convolutional layer and varying numbers of filters, referred to as Z3-CNN.
\vspace{-0.3cm}
\subsection{Classification Performance of the SSB-, SSE- and Z3-CNN}\label{sec:exp_perf}
Here, we evaluate the classification performance of both the SSE-CNN and SSB-CNN on the two datasets described in Section \ref{sec:datasets}. The accuracies of both designs are computed with 10 different initializations for varying maximal degrees $N$.
Confidence Intervals (CI) at $95\%$ and mean accuracies are reported in Fig. \ref{fig:res_perf_synth} and \ref{fig:res_perf_NLST} for the synthetic and NLST datasets respectively. On both datasets, the SSB-CNN outperforms the two other networks.
To exclude the possibility that this performance gain is simply due to a higher number of feature maps, we trained an SSE-CNN on the synthetic dataset with maximal degree $N=2$ and 4 kernels in the first layer instead of 2. This amounts to a total of 12 feature maps after the first layer.
This model achieves an accuracy of $0.9075 \pm 0.006$ and is still significantly outperformed by the SSB-CNN with maximal degree $2$ and 2 kernels, which has 10 feature maps after the first layer and reaches an accuracy of $0.924 \pm 0.008$ (Fig.~\ref{fig:res_perf_synth}).
One important remark is that both LRI networks contain fewer parameters than the Z3-CNN. For instance in the NLST experiment, the SSB- and SSE-CNN have 330 and 222 parameters respectively for a maximal degree $N=4$ against 7322 parameters for the Z3-CNN.
\begin{figure}
\input{figures/tikz/synth_acc.tikz}
\caption{Classification accuracies and numbers of parameters on the synthetic dataset for varying maximal degrees $N$. The error bars represent the CIs at $95\%$. The accuracy of the Z3-CNN with 10 filters is $0.875 \pm 0.011$ with 3462 trainable parameters and is represented by the green dashed lines.}
\label{fig:res_perf_synth}
\end{figure}
\begin{figure}
\input{figures/tikz/nlst_acc.tikz}
\caption{Classification accuracies and numbers of parameters on the NLST dataset for varying maximal degrees $N$.
The error bars represent the CIs at $95\%$.
The accuracy of the Z3-CNN with 10 filters is $0.810 \pm 0.014$
with 7322 trainable parameters and is represented by the green dashed lines.}
\label{fig:res_perf_NLST}
\end{figure}
\vspace{-0.3cm}
\subsection{Learning Curves of the SSB-, SSE- and Z3-CNN}\label{sec:learningCurves}
The SSB- and SSE-CNN are LRI networks and thus require neither additional training examples nor a large number of parameters to learn this property.
In addition, they rely on compressing SH parametric representations.
For these two reasons, we expect them to generalize better with fewer training examples (\emph{i.e.}, a steeper learning curve) than the standard Z3-CNN on data for which this property is relevant. To test this hypothesis, we compare the classification performance of each method using an increasingly large number of training examples $N_s$. For the synthetic dataset, we use \mbox{$N_s = 16, 32, 64, 128, 200, 300, 400$} and for the nodule classification \mbox{$N_s= 10, 30, 64, 128, 200, 300, 392$}. For each value of $N_s$, 10 repetitions are made and $N_s$ examples are randomly drawn from the same training fold as in the previous experiments (Section~\ref{sec:exp_perf}).
For the SSB-CNN we report the accuracy for $N=2$ on the synthetic dataset and $N=4$ on the NLST dataset.
The accuracy of the SSE-CNN is reported for $N=2$ on the synthetic dataset and $N=1$ for the NLST dataset.
These parameters are chosen according to the previous experiments (Section~\ref{sec:exp_perf}, Fig.~\ref{fig:res_perf_synth} and~\ref{fig:res_perf_NLST}) as they provided the best accuracy.
The experiment is also conducted with the Z3-CNN and the results are reported for both 10 and 144 filters in the convolution layer. The mean accuracy with CIs at $95\%$ of the three models and on the two datasets is reported in Fig.~\ref{fig:res_lc_synth} and~\ref{fig:res_lc_NLST}.
\begin{figure}
\input{figures/tikz/lc_synt.tikz}
\caption{Performances on the synthetic dataset in terms of accuracy for a varying number of training examples. The error bars represent the CIs at $95\%$. The number of filters in the first layer for the SSB- and SSE-CNN is 2.}
\label{fig:res_lc_synth}
\end{figure}
\begin{figure}
\input{figures/tikz/lc_nlst}
\caption{Performances on the NLST dataset in terms of accuracy for a varying number of training examples. The error bars represent the CIs at $95\%$. The number of filters in the first layer for the SSB- and SSE-CNN is 4.}
\label{fig:res_lc_NLST}
\end{figure}
\vspace{-0.3cm}
\section{Discussions}\label{sec:discussions}
\subsection{The Bispectrum is More Discriminative}
The two experiments presented in Section~\ref{sec:toyExperiment} illustrate the types of pattern information that cannot be characterized by spectral components. In these settings, the spectrum is unable to distinguish between classes that differ either by a difference of orientation between degrees (inter-degree rotation) or by intra-degree variations. This is not the case for the bispectral coefficients that allow describing functions in $L_2(\mathbb{S}^2)$ more accurately. As expected, the cost of a more complete representation is a larger number of components. However, it is possible to compute only a subset of the bispectral components depending on the task.
In the CNN implementation of these two invariants (Section~\ref{sec:exp_perf}), we observe that the specific information captured by the SSB improves the classification performance for both datasets: as soon as the maximum degree is greater than one, the SSB-CNN outperforms the SSE-CNN (Section~\ref{sec:exp_perf}, Fig.~\ref{fig:res_perf_synth} and~\ref{fig:res_perf_NLST}).
Besides, both the SSE- and the SSB-CNN outperform the standard Z3-CNN on the synthetic data, which was specifically designed to give an advantage to LRI networks. By contrast, in the nodule classification task (NLST dataset), the Z3-CNN outperforms the SSE-CNN. It seems that the simple design of the SSE-CNN fails to capture the specific signature of malignant pulmonary nodules in these data.
However, once again, the richer invariant representation of the SSB-CNN allows it to outperform even the Z3-CNN with statistical significance when $N=4$, while using approximately 22 times fewer parameters.
\vspace{-0.3cm}
\subsection{Better Generalization of the LRI Models}
The learning curve experiment on the synthetic dataset presented in Section~\ref{sec:learningCurves} shows that both LRI designs outperform the Z3-CNN for any number of training examples.
What is more notable is the steeper learning curve of the two LRI networks.
Both SSE- and SSB-CNNs seem to require the same number of training examples to reach their final performance level.
For the Z3-CNN, two networks are compared: one with 10 filters and the other with 144 filters, accounting for 7322 and 105,410 trainable parameters, respectively.
Even though the numbers of parameters are vastly different, the overall shape of the learning curves does not significantly change between the two Z3 networks, pointing out that the relationship between the number of parameters and the number of required training examples is not obvious and highly depends on the architecture.
On the NLST dataset, the SSB-CNN outperforms the Z3-CNN when trained with the same number of training examples. However, the steeper learning curve of the former is less pronounced than with the synthetic dataset. We expect the gap between the two learning curves to be wider with deeper architectures, as the difference in the number of parameters will be higher.
Overall, we observe that the proposed SSB-CNN requires fewer training examples than the Z3-CNN, thanks to both the LRI property and the compressing parametric SH kernel representations.
\vspace{-0.3cm}
\section{Conclusion}\label{sec:conclusion}
We showed that, by using the highly discriminative SSB RI descriptor, we are able to implement CNNs that are more accurate than the previously proposed SSE-CNN. Furthermore, we also observed that LRI networks can learn with fewer training examples than the more traditional Z3-CNN, which supports our hypothesis that the latter tends to misspend the parameter budget to learn data invariances and symmetries.
The main limitation of the proposed experimental evaluation is that it relies on shallow networks, which places these approaches at the crossroads between handcrafted methods and deep learning. In future work, the LRI layers will be implemented in a deeper architecture to leverage the fewer resources that they require in comparison with a standard convolutional layer.
This is expected to constitute a major contribution to improving 3D data analysis when curated and labelled training data is scarce, which is most often the case in medical image analysis.
The code is available on GitHub\footnote{\url{https://github.com/voreille/ssbcnn}, as of April 2020.}.
\appendices
\vspace{-0.3cm}
\section{Clebsch-Gordan matrices}\label{app:CG}
Let us fix $n_1 , n_2 \geq 0$.
The Clebsch-Gordan matrix $\mathrm{C}_{n_1,n_2}$ is characterized by the fact that it block-diagonalizes the Kronecker product of two Wigner-$\mathrm{D}$ matrices as
\begin{equation} \label{eq:CGandWigner}
\mathrm{D}_{n_1}(\mathrm{R}) \otimes \mathrm{D}_{n_2}(\mathrm{R}) = \mathrm{C}_{n_1, n_2} \left[
\bigoplus_{i=|n_1-n_2|}^{n_1+n_2} \mathrm{D}_i(\mathrm{R})
\right] \mathrm{C}_{n_1, n_2}^\dag
\end{equation}
for any matrix rotation $\mathrm{R} \in SO(3)$.
This means in particular that $\mathrm{C}_{n_1,n_2}$ has $\sum_{n= |n_1 - n_2|}^{n_1+n_2} (2n+1)$ rows and $(2n_1 +1)(2n_2+1)$ columns. These two numbers are actually equal, hence $\mathrm{C}_{n_1,n_2} \in \mathbb{R}^{(2n_1 +1)(2n_2+1)\times (2n_1 +1)(2n_2+1)}$, but the relation \eqref{eq:CGandWigner} also reveals the structure of the matrix, whose coefficients are indexed as $\mathrm{C}_{n_1,n_2}[ (n,m) , (m_1,m_2)]$, with $n \in \{ |n_1 - n_2| , \ldots , (n_1+n_2) \}$, $m_1 \in \{-n_1 ,\ldots , n_1 \}$, and $m_2 \in \{-n_2 ,\ldots , n_2 \}$. In the literature, the Clebsch-Gordan coefficients are often written with bracket notations, that reveal some of their symmetries~\cite{alex2011numerical}. Moreover, the Clebsch-Gordan matrix has many $0$ entries. We indeed have that
\begin{align*}
\mathrm{C}_{n_1,n_2}[ (n,m) , (m_1,m_2)] &= \begin{cases}
\langle n_1 m_1 n_2 m_2 | n m \rangle & \text{if } m = m_1 + m_2, \\
0 & \text{otherwise,}
\end{cases}
\end{align*}
where $\langle | \rangle$ is the bracket notation used for instance in \cite[Chapter 5.3.1]{chaichian1998symmetries}.
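The dimension count above can be verified directly, using $\sum_{n=0}^{N}(2n+1)=(N+1)^2$:
\begin{align*}
\sum_{n=|n_1-n_2|}^{n_1+n_2}(2n+1)
&= (n_1+n_2+1)^2 - (n_1-n_2)^2 \\
&= (2n_1+1)(2n_2+1).
\end{align*}
For instance, for $n_1=n_2=1$ this recovers the familiar decomposition $3\otimes 3 = 1\oplus 3\oplus 5$, of total dimension $1+3+5=9$.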
\vspace{-0.3cm}
\section{Proof of Proposition \ref{prop:LRI}}\label{app:LRIproof}
The equivariance to translations is simpler than, and similar to, the equivariance to rotations; we therefore omit it (it simply uses that $(I(\cdot - \bm{x}_0) * \kappa_n^m) (\bm{x}) = (I*\kappa_n^m)(\bm{x} - \bm{x}_0)$).
Let $\bm{\mathcal{F}}_n(\bm{x})$ and $\bm{\mathcal{F}}'_n(\bm{x})$ be the Fourier feature maps of $I$ and $I(\mathrm{R}_0 \cdot)$ respectively, with $\mathrm{R}_0 \in SO(3)$. According to \eqref{eq:steerability_ynm} applied to $\mathrm{R} = \mathrm{R}^{-1}_0$, we have that
\begin{equation} \label{eq:Ikapparota}
\kappa_n^m (\mathrm{R}^{-1}_0 \cdot ) = \sum_{m' =-n}^n \mathrm{D}_n(\mathrm{R}_0^{-1})_{m,m'} \kappa_n^{m'}.
\end{equation}
Moreover, we have that $(I(\mathrm{R}_0 \cdot) * \kappa_n^m ) (\bm{x}) = ( I * \kappa_n^m (\mathrm{R}^{-1}_0 \cdot) ) (\mathrm{R}_0 \bm{x})$. Together with \eqref{eq:Ikapparota}, this implies that
\begin{equation}
\bm{\mathcal{F}}'_n(\bm{x}) = \left( ( I(\mathrm{R}_0 \cdot) * \kappa_n^m ) (\bm{x}) \right)_m = \bm{\mathcal{F}}_n(\mathrm{R}_0 \bm{x}) \, \mathrm{D}_n (\mathrm{R}_0^{-1}).
\end{equation}
This implies that
\begin{align*}
\mathcal{G}_{n,n',\ell}^{\mathrm{SSB}} &\{ I ( \mathrm{R}_0 \cdot ) \} (\bm{x}) =
\mathcal{B}\{ \bm{\mathcal{F}}'_n(\bm{x}), \bm{\mathcal{F}}'_{n'}(\bm{x}), \bm{\mathcal{F}}'_{\ell}(\bm{x}) \} \\
& = \mathcal{B}\{ \bm{\mathcal{F}}_n(\mathrm{R}_0 \bm{x}) \mathrm{D}_n (\mathrm{R}_0^{-1}), \ldots \\
& \qquad \bm{\mathcal{F}}_{n'}(\mathrm{R}_0 \bm{x}) \mathrm{D}_{n'} (\mathrm{R}_0^{-1}), \bm{\mathcal{F}}_{\ell}(\mathrm{R}_0 \bm{x}) \mathrm{D}_{\ell} (\mathrm{R}_0^{-1}) \} \\
&= \mathcal{B}\{ \bm{\mathcal{F}}_n(\mathrm{R}_0 \bm{x}) , \bm{\mathcal{F}}_{n'}(\mathrm{R}_0 \bm{x}) , \bm{\mathcal{F}}_{\ell}(\mathrm{R}_0 \bm{x}) \} \\
&= \mathcal{G}_{n,n',\ell}^{\mathrm{SSB}} \{ I \} (\mathrm{R}_0 \bm{x}),
\end{align*}
where we used the invariance of the bispectrum for the third equality. This demonstrates the equivariance of the operator $ \mathcal{G}_{n,n',\ell}^{\mathrm{SSB}}$ with respect to rotations. Finally, the locality simply follows from the fact that the convolution $(I*\kappa_{n}^m) (\bm{x})$ depends on the values of $I (\bm{x} - \bm{y})$ only for $\bm{y}$ in the support of $\kappa_n^m$, which is bounded as soon as $h_n$ is compactly supported, as we assumed.
\vspace{-0.3cm}
\section*{Acknowledgment}
The authors are grateful to Michael Unser, who suggested that they consider the bispectrum as a tool to capture rotation-invariant features of 3D signals.
This work was supported by the Swiss National Science Foundation (SNSF grants 205320\_179069 and P2ELP2\_181759) and the Swiss Personalized Health Network (SPHN IMAGINE and QA4IQI projects), as well as a hardware grant from NVIDIA.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
In this work we continue the investigation started in~\cite{KoRiWi13OPYM} into the positive Jacobian constraint in the Calculus of Variations. There, using a convex integration-type argument, we characterized all Young measures generated by sequences in $\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$, where $\Omega \subset \mathbb{R}^d$ is a bounded open set and $p<d$, with the property that every element of the sequence has positive Jacobian almost everywhere.
Here we extend this characterization to more restrictive pointwise constraints on the Jacobian determinant, e.g.\ the condition that it be bounded below by a positive constant or even be equal to a given positive constant almost everywhere. These requirements are very natural in elasticity theory, where they correspond to limited compressibility or incompressibility of an elastic solid.
On a more theoretical level, our characterization and its various corollaries display the vast flexibility of the pointwise Jacobian determinant in Sobolev spaces $\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ below the critical exponent $p=d$. While it is well-known \cite{Ball77CCET, BalMur84WQVP,Sver88RPDF,Mull90DETD,Mull93SSDD,Hen11SobHomJ0} that the Jacobian loses many of its usual geometric properties for $p<d$, thus leading to the failure of weak continuity or of the change-of-variables formula (i.e., in terms of elasticity theory, to cavitation), one of our aims in this work is to systematize and generalize these observations within a convex integration framework.
We refer to Sections~\ref{statement} and~\ref{sc:applications} below for a precise formulation of our results. Before that, however, we wish to give an informal discussion of our findings, highlighting various different aspects.
\subsection{Kinderlehrer--Pedregal theory}
It is a recurrent theme in the Calculus of Variations to obtain characterization results for Young measures generated by sequences of maps with specific properties. The prototypical result is that of Kinderlehrer--Pedregal~\cite{KinPed91CYMG,KinPed94GYMG}, which applies to sequences of gradients. Various generalizations have been studied, e.g.\ to so-called $\mathcal{A}$-free sequences~\cite{FonMul99AQLS} or generalized Young measures involving concentrations~\cite{FoMuPe98ACOE,KriRin10CGGY,Rind14LPCY}. An additional difficulty is posed by requiring that the generating sequence satisfies not only a linear differential constraint (like the gradient constraint), but also a nonlinear and nonconvex pointwise constraint. Such a problem was treated in~\cite{SzeWie12YMGI}, where the constraint was related to the incompressible Euler equations, and in~\cite{KoRiWi13OPYM}, where the Jacobian determinant was required to be positive almost everywhere. This article presents a significant extension of the latter result, see Theorem~\ref{thm:main_intro} below. Note, however, that~\cite{KoRiWi13OPYM} is not strictly contained in the present work as the side constraint is \emph{open} in \emph{loc.\ cit.} and \emph{closed} here.
\subsection{First-order PDEs}
A corollary of our characterization (Theorem~\ref{damo}) is an existence statement for Dirichlet problems of the form
\begin{equation}\label{damointro}
\left\{
\begin{aligned}
\det\nabla v(x)&=J(x),\\
v|_{\partial\Omega}&=g.
\end{aligned}
\right.
\end{equation}
This problem was first stated in this form by B. Dacorogna and J. Moser~\cite{DacMos90PDEJ}, motivated by earlier work of Moser~\cite{Mose65OVEM} on diffeomorphisms between volume forms on manifolds. They answered the existence question positively provided $g=\operatorname{id}$ and $J$ is positive, lies in $C^{k,\alpha}$, and satisfies a compatibility condition (namely $\int_\Omega J(x) \;\mathrm{d} x = |\Omega|$). Their solution $v$ then is a $C^{k+1,\alpha}$-diffeomorphism. When the positivity assumption on $J$ is dropped or different boundary conditions are considered, similar results are available~\cite{CuDaKn09OENS, Kneu12OEPB}, but then $v$ may no longer be chosen as a diffeomorphism. For a similar result and a discussion of this problem in Sobolev spaces we refer the reader to \cite{Ye94PJSP}.
Here, in Section~\ref{ssc:DM}, we establish the following result: If $g\in \mathrm{W}^{1-1/p,p}(\partial\Omega)$ for some $1 < p<d$ and $J\in\mathrm{L}^{p/d}(\Omega)$, then there exists a solution $v\in \mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ of~\eqref{damointro}. The fact that our result requires no compatibility condition on $J$ and $g$ underscores the pathological behaviour of the Jacobian for $p<d$ and the loss of its classical geometric properties.
\subsection{The distributional determinant}
The properties of the pointwise determinant of matrix-valued maps in $\mathrm{L}^p$, $p<d$, led to the definition of the \term{distributional determinant}~\cite{Ball77CCET}, which may no longer be defined as a function, but only as a distribution. In~\cite{Mull93SSDD} examples were constructed of maps for which the difference of the distributional and the pointwise determinant is supported on sets of arbitrary Hausdorff dimension $\alpha\in(0,d)$. We also exhibit in the present paper, by completely different methods, examples of maps whose distributional and pointwise determinants differ (\emph{any} solution of~\eqref{damointro} with $g=\operatorname{id}$ and $\int_{\Omega}J(x) \;\mathrm{d} x \neq |\Omega|$ will have this property).
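For the reader's convenience, we recall the standard construction from~\cite{Ball77CCET}: for $u\in\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ with (for instance) $p\geq d^2/(d+1)$, so that $u\,(\operatorname{cof}\,\nabla u)\in\mathrm{L}^1_{\mathrm{loc}}(\Omega)$ by the Sobolev embedding and H\"{o}lder's inequality, the distributional determinant is defined via
\[
\langle \mathrm{Det}\,\nabla u, \varphi\rangle := -\frac{1}{d}\int_\Omega u(x) \cdot \bigl(\operatorname{cof}\,\nabla u(x)\bigr)\nabla\varphi(x) \;\mathrm{d} x, \qquad \varphi\in\mathrm{C}_c^\infty(\Omega).
\]
If $u\in\mathrm{C}^2$, the Piola identity $\sum_j \partial_j (\operatorname{cof}\,\nabla u)^i_j = 0$ together with $\nabla u : \operatorname{cof}\,\nabla u = d\,\det\nabla u$ shows that $\mathrm{Det}\,\nabla u = \det\nabla u$; below the critical exponent, however, the two notions may differ, as in the examples just mentioned.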
Of course, our results do not answer the intriguing problem~\cite{Mull93SSDD} under what conditions~\eqref{damointro} can be solved in $\mathrm{W}^{1,p}$ if one replaces the pointwise determinant by the distributional one. In fact it would be interesting to know what can be said about the distributional Jacobian of the maps that we construct.
\subsection{Cavitation}
A related phenomenon in elasticity theory is \term{cavitation}~\cite{Ball82DESC,Sver88RPDF,MuSp95EX,SivSp00EX,HeMo-Co10CAV&FRAC}, which refers to the formation of holes in an elastic solid. Consider the problem~\eqref{damointro} with $\Omega=B_1(0)$ (the unit ball in $\mathbb{R}^d$), $J\equiv1$, and $g(x)=2x$. If an elastic solid is to be deformed according to this data, the deformation necessarily has to be discontinuous, thus exhibiting cavitation. Since our convex integration construction is in a sense local and does not distinguish particular points in the domain, the discontinuous solutions in $W^{1,p}$ produced in this paper include a kind of ``diffuse cavitation''.
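To see why the deformation must be discontinuous for this data, suppose for contradiction that $v\in\mathrm{C}^1(\overline{B_1(0)};\mathbb{R}^d)$ satisfies $\det\nabla v = 1$ a.e.\ with $v(x)=2x$ on $\partial B_1(0)$. A volume count then gives
\[
2^d\,|B_1(0)| = |B_2(0)| \leq |v(B_1(0))| \leq \int_{B_1(0)} |\det\nabla v(x)| \;\mathrm{d} x = |B_1(0)|,
\]
which is absurd. Here the first inequality holds because $\deg(v,B_1(0),y)=\deg(2\operatorname{id},B_1(0),y)=1$ for every $y\in B_2(0)$, so that $B_2(0)\subset v(B_1(0))$, while the second inequality is the area formula for $\mathrm{C}^1$ maps. Hence every Sobolev solution of this boundary value problem must be discontinuous.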
A further consequence of these observations in conjunction with~\cite{Sver88RPDF} is that for the maps we construct, the cofactor matrix $\operatorname{cof}\,\nabla v$, which is easily seen to be in $\mathrm{L}^{p/(d-1)}$, cannot be expected to lie in general in $\mathrm{L}^q$ for any $q\geq p/(p-1)$.
\subsection{Relaxation}
We prove a relaxation theorem (Corollary~\ref{thm:relaxation} below) under the constraint that the gradients of admissible maps have determinants greater than or equal to $r>0$, or determinants precisely equal to $r$, almost everywhere. This follows immediately from our results. No relaxation results under these constraints seem to exist in the literature. We note~\cite{AnHMan09RTNE}, where a relaxation theorem is proved for $p\in(1,\infty)$ under the assumption that the integrand $f$ satisfies $f(A)\to\infty$ as $\det\, A\to 0^+$, nevertheless without accounting for the requirement $f(A)=\infty$ if $\det A\leq 0$, which is natural in elasticity. A very interesting relaxation result was also recently proved in \cite{ContiDolz14} for functionals relevant in elasticity theory and $p\geq d$. Of course it would be very interesting to find similar relaxation results with the pointwise Jacobian replaced by the distributional one.
\subsection{Weak continuity of the determinant}
It is well-known that if $u_j \rightharpoonup u$ in $\mathrm{W}^{1,p}$ with $p\geq d$, then $\det\nabla u_j\rightharpoonup \det\nabla u$ in the sense of distributions, whereas this weak continuity property may fail for $p<d$ (see e.g.~\cite{BalMur84WQVP,FoLeMa05WCLS} and the references therein). This is again related to the discrepancy between the pointwise and the distributional determinant. In fact, for $p<d$, it is shown in \cite[Ex.~3, p.~284]{GMSvol1} that the map $u(x)=x$ can be approximated weakly in $\mathrm{W}^{1,p}$ by a sequence $(u_j)$ such that $\det\nabla u_j =0$ a.e., making the determinant weakly discontinuous in $\mathrm{W}^{1,p}$. The same result can be extended to any smooth function $u$ (see \cite{DePhil12WJ}) and by density to $u\in \mathrm{W}^{1,p}$, so that the determinant is weakly discontinuous everywhere in $\mathrm{W}^{1,p}$.
In Corollary~\ref{cor:approximation} we strongly exhibit this everywhere discontinuity by showing that any $u\in\mathrm{W}^{1,p}$ ($p<d$) can be approximated weakly in $\mathrm{W}^{1,p}$ by a sequence of maps with Jacobian even prescribed almost everywhere.
\subsection{Lower Semicontinuity}
As a final application, in Section~\ref{sec:lsc} we make the perhaps surprising observation that, for $p<d$, neither the class of $\mathrm{W}^{1,p}$-quasiconvex stored-energy functions, cf.~\cite{BalMur84WQVP}, nor the seemingly larger class of $\mathrm{W}^{1,p}$-orientation-preserving quasiconvex functions are suitable for the minimization problems of nonlinear elastostatics under realistic growth assumptions. We accomplish this by showing that an integrand cannot be $\mathrm{W}^{1,p}$-(orientation-preserving) quasiconvex and satisfy natural growth conditions at the same time. In particular, this essentially rules out that $\mathrm{W}^{1,p}$-orientation-preserving quasiconvex functions can satisfy the condition
\[
f(A) \to \infty \quad\text{as}\quad \det\, A \to 0^+\quad\mbox{ and }\quad f(A) = \infty \quad\text{if}\quad \det\, A \leq 0,
\]
which one imposes on realistic integrands in nonlinear elasticity, see~\cite{Ball02SOPE}. In this context, we remark that the energies are formulated in terms of the \emph{pointwise} Jacobian.
\subsection{Convex integration}
Finally, we give some remarks on the method of proof of our results, which can be viewed as an instance of \term{convex integration}. In this general technique, one uses an iteration scheme which starts, in our case, from any map $u\in\mathrm{W}^{1,p}$ and approaches the determinant constraint by adding suitable oscillatory perturbations at each step, with frequencies increasing rapidly from step to step. The crucial observation (Proposition~\ref{prop:geometry}) is that the \term{$p$-quasiconvex hull} (cf.~\cite{KoRiWi13OPYM}) of the set of matrices with given determinant is sufficiently large to provide enough suitable perturbations.
Convex integration has been used in a variety of situations in topology, differential geometry, nonlinear PDE, and the Calculus of Variations~\cite{Nash54C1IE,Grom86PDR,DacMar97GETH,EliMis02IH,Kirc03RGM,MulSve03CILM,AsFaSz08CILP,DeLSze12HEFD}. A common feature of these very different problems is that there exists a ``threshold regularity'' above which the situation is ``rigid'', whereas below the threshold the problem displays ``flexible'' behavior. For instance, the only $\mathrm{C}^2$ isometric embedding of $\mathbb{S}^2$ into $\mathbb{R}^3$ is the canonical embedding, whereas J.~Nash~\cite{Nash54C1IE} constructed infinitely many $\mathrm{C}^1$ embeddings with unexpected behavior. The loss of rigidity in this example is due to the lack of a well-defined curvature of the embedded submanifold. Another, more recent, example is given by the incompressible Euler equations~\cite{DeLSze12HEFD}. There, the kinetic energy is conserved (``rigidity'') for sufficiently regular solutions, but can become subject to dissipation (``flexibility'') for less regular solutions.
Similarly, the main results of this work show that the Jacobian determinant is ``rigid'' in $W^{1,p}$ for $p\geq d$, yet becomes ``flexible'' and loses many of its classical properties for $p<d$, as showcased in the course of the preceding discussion.
On a more technical level, our method allows one to distinguish via convex integration between different levels of Sobolev regularity (the only other results of this kind, as far as we are aware, are found in~\cite{AsFaSz08CILP} and the work of Yan~\cite{Yan96RemarksStability,Yan01LinearBVP,Yan03Baire}). Moreover, our convergence argument via Young measure generation (proof of Proposition~\ref{prop:convexint}) is new and may be helpful to facilitate future convex integration-type arguments.
\subsection{Plan of the paper}
The plan of this paper is as follows: First, in Section~\ref{sc:setup}, we give a brief introduction to Young measures and introduce terminology. Section~\ref{statement} gives a precise formulation of the main characterization result. In Section~\ref{sc:convex_int}, we provide all the necessary definitions and present our convex integration principle (Proposition~\ref{prop:convexint}) leading to a general method (Theorem~\ref{thm:main_abstract}) for the characterization of Young measures generated by gradients that satisfy a differential inclusion of the form $\nabla u(x) \in S_{R(x,\,\sbullet\,)}$ a.e., where $S_{R(x,\,\sbullet\,)}$ is the zero-sublevel set of a Carath\'{e}odory function $R$. For this, we require a \enquote{tightness condition} on the $p$-quasiconvex hull of $S_{R(x,\,\sbullet\,)}$ (see Definition~\ref{def:p-full}).
In Section~\ref{sc:geometry}, we restrict attention to constraint functions of the form $R(x,A)=\max\{J_1(x) - \det\, A, \det\, A - J_2(x), 0\}$ with corresponding sublevel set $S_{R(x,\,\sbullet\,)} = \setb{A\in\mathbb{R}^{d\times d}}{J_1(x)\leq \det\, A \leq J_2(x)}$. We prove that the above sets satisfy the hypotheses of Theorem~\ref{thm:main_abstract} and the characterization of the corresponding gradient Young measures follows. For the convenience of the reader, the result is first proved in the physically most relevant dimension $d=3$. The case $d=2$ is significantly simpler and the proof is omitted, whereas the case $d>3$ is, at least notationally, more involved and is presented separately in Section~\ref{sc:arbitrary_dim}. Sections~\ref{sc:applications} and~\ref{sec:lsc} are devoted to the applications mentioned above.
\subsection*{Acknowledgments}
The authors wish to thank John Ball, Sergio Conti, Georg Dolzmann and Jan Kristensen for discussions related to the present paper. KK was supported by the European Research Council grant agreement ${\rm n^o}$ 291053. FR and EW were partly supported by a Royal Society International Exchange Grant IE131532.
\section{Setup}\label{sc:setup}
On the space $\mathbb{R}^{d \times d}$ of $(d \times d)$-matrices $M = (M^i_j)$ ($i,j = 1,\ldots,d$) we use the Frobenius norm
\[
\abs{M} = \abs{M}_F := \left[ \sum_{i,j=1}^d (M^i_j)^2 \right]^{1/2}
= \left[ \sum_{k=1}^d \sigma_k^2 \right]^{1/2},
\]
where $\sigma_k$, $k=1,\ldots,d$ are the singular values of $M$.
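The second expression follows from a singular value decomposition $M = U\Sigma V^{\mathsf{T}}$, with $U,V$ orthogonal and $\Sigma=\operatorname{diag}(\sigma_1,\ldots,\sigma_d)$:
\[
|M|_F^2 = \operatorname{tr}(M^{\mathsf{T}} M) = \operatorname{tr}(V\Sigma^2 V^{\mathsf{T}}) = \operatorname{tr}(\Sigma^2) = \sum_{k=1}^d \sigma_k^2,
\]
using the cyclicity of the trace in the last-but-one equality.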
Let $1 \leq p < \infty$. A \term{$p$-Young measure} is a parametrized family $\nu = (\nu_x)_{x \in \Omega} \subset \mathbf{M}^1(\mathbb{R}^N)$ of probability measures on $\mathbb{R}^N$, where $\mathbf{M}^1(\mathbb{R}^N)$ denotes the space of probability measures, such that:
\begin{enumerate}
\item[(1)] The family $(\nu_x)$ is \term{weakly* measurable}, that is, for every Borel set $B \subset \mathbb{R}^N$ the map $x \mapsto \nu_x(B)$ is ($\mathcal{L}^d \begin{picture}(10,8)\put(2,0){\line(0,1){7}}\put(1.8,0){\line(1,0){7}}\end{picture} \Omega$)-measurable.
\item[(2)] The map $x \mapsto \int \abs{A}^p \;\mathrm{d} \nu_x$ lies in $\mathrm{L}^1(\Omega)$.
\end{enumerate}
Many properties of Young measures are collected in~\cite{Pedr97PMVP}; we recall only some of them: The \term{barycenter} of a $p$-Young measure $\nu$ is
\[
[\nu](x) := \int A \;\mathrm{d} \nu_x(A), \qquad x \in \Omega,
\]
and $[\nu] \in \mathrm{L}^p(\Omega;\mathbb{R}^N)$. A Young measure $\nu$ is \term{homogeneous} if $x \mapsto \nu_x$ is an almost everywhere constant map, i.e.\ $\nu_x = \nu \in \mathbf{M}^1(\mathbb{R}^N)$ for a.e.\ $x \in \Omega$.
We say that a (necessarily norm-bounded) sequence $(u_j) \subset \mathrm{L}^p(\Omega;\mathbb{R}^N)$ \term{generates} the Young measure $\nu$ if
\[
\int_\Omega f(x,u_j(x)) \;\mathrm{d} x \;\;\to\;\;
\int_\Omega \int f(x,A) \;\mathrm{d} \nu_x(A) \;\mathrm{d} x
\]
for all Carath\'{e}odory functions $f:\Omega\times\mathbb{R}^N\to \mathbb{R}$ (that is, $f$ is measurable in the first and continuous in the second argument) such that $(f(\,\sbullet\,,u_j))$ is equiintegrable. We express generation in symbols as $u_j\overset{\mathrm{Y}}{\to} \nu$.
It can be shown that if $(u_j)$ and $(v_j)$ are $\mathrm{L}^p(\Omega)$-bounded sequences with $\norm{u_j-v_j}_p \to 0$ as $j\to\infty$ and $(u_j)$ generates the Young measure $\nu$, then also $(v_j)$ generates $\nu$. It can further be proved that all $p$-Young measures are generated by some sequence of uniformly $\mathrm{L}^p(\Omega;\mathbb{R}^N)$-bounded functions.
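As a simple illustration (with $N=1$ and $\Omega=(0,1)$), the oscillating sequence $u_j(x) := \operatorname{sign}(\sin(2\pi j x))$ is bounded in every $\mathrm{L}^p(0,1)$ and generates the homogeneous Young measure
\[
\nu_x = \tfrac{1}{2}\,\delta_{-1} + \tfrac{1}{2}\,\delta_{+1} \qquad\text{for a.e.\ } x\in(0,1),
\]
since $u_j$ takes each of the values $\pm 1$ on sets of measure tending to $\tfrac12$ in every subinterval, so that $\int_0^1 f(x,u_j(x)) \;\mathrm{d} x \to \int_0^1 \tfrac12\bigl(f(x,-1)+f(x,1)\bigr) \;\mathrm{d} x$ for every admissible $f$. In particular, $u_j \rightharpoonup [\nu] = 0$ weakly, while no subsequence converges in norm.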
\section{Statement of the main result}\label{statement}
We consider in this article \term{differential inclusions} of the form
\begin{equation}
\label{eq:constraint_intro}
\nabla u(x) \in S_{R(x,\,\sbullet\,)} \quad\text{a.e.,}\qquad
\nabla u \in \mathrm{L}^p(\Omega,\mathbb{R}^{d\times d}),
\end{equation}
where $S_{R(x,\,\sbullet\,)}$ is the zero-sublevel set of $R(x,\,\sbullet\,)$ for a Carath\'{e}odory \term{constraint function} $R \colon\Omega\times\mathbb{R}^{d \times d} \to \mathbb{R}$, i.e.
\[
S_{R(x,\,\sbullet\,)} := \setb{A\in\mathbb{R}^{d\times d}}{R(x,A)\leq 0}.
\]
This principle generalizes some of the methods presented in~\cite{KoRiWi13OPYM} to arbitrary constraints of the form \eqref{eq:constraint_intro} satisfying certain properties (see Definition~\ref{def:p-full}).
As an application of the general principle, we provide a characterization of Young measures generated by gradients bounded in $\mathrm{L}^p(\Omega,\mathbb{R}^{d\times d})$, $1<p<d$, and satisfying a constraint of the form
\begin{equation} \label{eq:det_r}
J_1(x) \leq \det \nabla u_j(x) \leq J_2(x) \qquad\text{for all $j$ and a.e.\ $x\in\Omega$,}
\end{equation}
where
\[
J_1 \colon \Omega \to [-\infty,+\infty), \qquad
J_2 \colon \Omega \to (-\infty,+\infty]
\]
are given functions such that
\[
J_1(x)\leq J_2(x) \qquad\text{for a.e.~$x\in\Omega$.}
\]
This characterization gives rise to a number of special cases which are discussed after the statement of the main result:
\begin{theorem}\label{thm:main_intro}
Let $1 < p < d$. Suppose that $\Omega \subset \mathbb{R}^d$ is open and bounded, $|\partial\Omega|=0$, and let $\nu = (\nu_x)_{x\in\Omega} \subset \mathbf{M}^1(\mathbb{R}^{d \times d})$ be a $p$-Young measure. Moreover, let $J_1 \colon \Omega \to [-\infty,+\infty)$, $J_2 \colon \Omega \to (-\infty,+\infty]$ be measurable and such that $J_1(x)\leq J_2(x)$ for a.e.~$x\in\Omega$. Also, assume that
\begin{equation*}
\int_{\Omega}J_1^+(x)^{p/d} \;\mathrm{d} x<\infty
\qquad\text{and}\qquad
\int_{\Omega}J_2^-(x)^{p/d} \;\mathrm{d} x<\infty,
\end{equation*}
where $J_i^{+}$ and $J_i^{-}$ denote the positive and negative parts of $J_i$, respectively. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] There exists a sequence of gradients $(\nabla u_j) \subset \mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ that generates $\nu$, such that
\[
\qquad J_1(x) \leq \det \nabla u_j(x) \leq J_2(x) \quad\text{for all $j\in\mathbb{N}$ and a.e.~$x\in\Omega$. }
\]
\item[(ii)] The conditions (I)--(IV) hold:
\begin{itemize}
\item[(I)] $\displaystyle\int_{\Omega}\int\abs{A}^p \;\mathrm{d} \nu_x(A)<\infty$;
\item[(II)] the barycenter $[\nu](x) := \int A \;\mathrm{d} \nu_x(A)$ is a gradient, i.e.\ there exists $\nabla u \in \mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ with $[\nu] = \nabla u$ a.e.;
\item[(III)] for every quasiconvex function $h \colon \mathbb{R}^{d \times d} \to \mathbb{R}$ with $\abs{h(A)}\leq c(1+\abs{A}^p)$, the Jensen-type inequality
\[
\qquad\qquad h(\nabla u(x)) \leq \int h(A) \;\mathrm{d} \nu_x(A) \qquad\text{holds for a.e.\ $x \in \Omega$;}
\]
\item[(IV)] $\supp{\nu_x} \subset \setb{ A \in \mathbb{R}^{d \times d} }{ J_1(x) \leq \det\, A \leq J_2(x) }$ for a.e.\ $x \in \Omega$.
\end{itemize}
\end{itemize}
Furthermore, in this case the sequence $(u_j)$ can be chosen such that $(\nabla u_j)$ is $p$-equiintegrable\footnote{We say that a sequence $(\nabla u_j)$ is $p$-equiintegrable if the sequence $(|\nabla u_j|^p)$ is equiintegrable.} and $u_j - u\in\mathrm{W}^{1,p}_0(\Omega,\mathbb{R}^d)$, where $u\in\mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$ is the deformation underlying $\nu$ (i.e.\ the function whose gradient is the barycenter of $\nu$).
\end{theorem}
Recall that a locally bounded Borel function $h \colon \mathbb{R}^{d \times d} \to \mathbb{R}$ is called \term{quasiconvex} if
\begin{equation} \label{eq:quasiconvex}
h(A_0) \leq \,\Xint-_{\Bbb^d} h(\nabla v(x)) \;\mathrm{d} x
\end{equation}
for all $A_0 \in \mathbb{R}^{d \times d}$ and all $v \in \mathrm{C}^\infty(\Bbb^d;\mathbb{R}^d)$ with $v(x) = A_0x$ on $\partial \Bbb^d$, see~\cite{Daco08DMCV} for more on this fundamental class of functions.
The conditions~(I)--(III) are the well-known criteria of Kinderlehrer--Pedregal~\cite{KinPed91CYMG, KinPed94GYMG} characterizing gradient $p$-Young measures. Observe also that the conditions on $J_i$ only concern the sets where the functions are finite and hence the constraints are active. For example, if $J_1 \equiv -\infty$, then the lower bound is inactive and the condition on $J_1$ is trivially true.
As an important special case, for $J_1(x) = J_2(x) = J(x)$ a.e.~in $\Omega$, measurable and in $\mathrm{L}^{p/d}$, we obtain a characterization under a constraint of the Dacorogna--Moser~\cite{DacMos90PDEJ} form
\[
\left\{
\begin{aligned}
&\det \nabla u_j(x) = J(x) \qquad\text{for all $j$ and a.e.\ $x\in\Omega$} \\
&\text{$J \colon \Omega \to \mathbb{R}$ a given function.}
\end{aligned}
\right.
\]
Moreover, the generating sequence also satisfies $u_j - u \in \mathrm{W}^{1,p}_0(\Omega,\mathbb{R}^d)$ where $u$ is the deformation underlying $\nu$.
In the cases relevant to elasticity, we choose $J_1(x) = r > 0$ a.e.~and $J_2 \equiv +\infty$, corresponding to a uniform positivity constraint on the Jacobian, i.e.
\[
\det\nabla u_j(x) \geq r > 0 \qquad\text{for all $j$ and a.e.\ $x\in\Omega$.}
\]
In this context, we note that requiring the Jacobian to be not only positive but uniformly positive is often the appropriate condition when considering stored-energy functions $f$ under realistic growth conditions, i.e.
\[
f(A) \to \infty \quad\text{as}\quad \det\, A \to 0^+\quad\mbox{ and }\quad f(A) = \infty \quad\text{if}\quad \det\, A \leq 0,
\]
see e.g.~\cite{Ball02SOPE}.
We stress that $p < d$ is necessary for our results to hold. This restriction, however, includes for instance the prototypical case of quadratic growth in three dimensions. This constraint comes as a consequence of the $d$-growth of the determinant, cf.~the discussion in~\cite{KoRiWi13OPYM} and also Remark~\ref{rk:remark_qc_hull} below.
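The role of the exponent can be seen directly from the Kinderlehrer--Pedregal conditions: both $A \mapsto \det A$ and $A\mapsto -\det A$ are quasiconvex (they are null Lagrangians) and satisfy $|{\det A}|\leq c(1+|A|^p)$ whenever $p\geq d$. Applying the Jensen-type inequality (III) to $h = \pm\det$ therefore gives
\[
\det\nabla u(x) = \int \det A \;\mathrm{d} \nu_x(A) \qquad\text{for a.e.\ } x\in\Omega,
\]
so that, in view of (IV), the barycenter would itself be forced to satisfy $J_1(x)\leq\det\nabla u(x)\leq J_2(x)$ a.e. For $p<d$, the growth bound fails for $\pm\det$ and no such rigidity is available.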
Furthermore, choosing $J_1(x) = J_2(x) = 1$ a.e., our result also pertains to Young measures generated by gradients of \enquote{incompressible} maps, i.e.
\[
\det \nabla u_j(x) = 1 \qquad\text{for all $j$ and a.e.\ $x\in\Omega$.}
\]
This constraint is particularly relevant in the study of solids and fluids. We remark again, however, that the terminology \enquote{incompressibility} should only be interpreted as a pointwise Jacobian constraint and not as a geometric condition.
The proofs of our results are based on two main pillars: On the one hand, an explicit construction of laminates in matrix space allows us to build special homogeneous gradient Young measures expressing an arbitrary matrix as a hierarchy of oscillations along rank-one lines, see Section~\ref{sc:geometry}. On the other hand, the abstract convex integration principle mentioned above then enables us to construct generating sequences consisting of gradients such that the aforementioned differential inclusions are satisfied \emph{exactly} (it is of course easy to satisfy them only approximately; the real challenge is to satisfy them exactly, cf.\ e.g.\ Chapter~5 of~\cite{Mull99VMMP}). This is contained in Section~\ref{sc:convex_int}.
\section{A general convex integration principle}
\label{sc:convex_int}
In order to state our convex integration principle and main result, we need two definitions:
\begin{definition}
For $1\leq p,q <\infty$, we denote by $\mathcal{R}^{p,q}(\Omega;\mathbb{R}^{d \times d})$ the class of all Carath\'{e}odory functions $R:\Omega\times\mathbb{R}^{d\times d}\to \mathbb{R}$ for which there exist a measurable function $\kappa \colon \Omega\to [0,\infty)$ and a constant $C>0$ (independent of $x$) such that
\[
\int_{\Omega}\kappa(x)^{p/q} \;\mathrm{d} x<\infty
\qquad\text{and}\qquad
|R(x,A)|\leq \kappa(x)+C|A|^q \quad\text{for a.e.\ $x\in\Omega$ and all $A\in\mathbb{R}^{d\times d}$}.
\]
\end{definition}
\begin{definition} \label{def:p-full}
Suppose $R\in\mathcal{R}^{p,q}(\Omega,\mathbb{R}^{d\times d})$, where $\Omega \subset \mathbb{R}^d$, $1\leq p,q <\infty$, and let
\[
S_{R(x,\,\sbullet\,)}:=\setb{A\in\mathbb{R}^{d\times d}}{R(x,A)\leq 0}.
\]
For fixed $x \in \Omega$, we say that a set $D\supseteq S_{R(x,\,\sbullet\,)}$ is contained \textbf{tightly} in the $p$-quasiconvex hull of $S_{R(x,\,\sbullet\,)}$ if there exists a constant $C>0$ such that for every $M\in D$ there is a homogeneous gradient $p$-Young measure $\nu$ satisfying the following properties:
\begin{itemize}
\item[(i)] $[\nu] = M$;
\item[(ii)] $\supp \nu \subset S_{R(x,\,\sbullet\,)}$;
\item[(iii)] $\displaystyle\int |A - M|^p\;\mathrm{d} \nu(A) \leq C\max\left\{R(x,M),0\right\}^{p/q}$.
\end{itemize}
We say that a set $D\supseteq \bigcup_{x \in \Omega} S_{R(x,\,\sbullet\,)}$ is contained tightly in the $p$-quasiconvex hull of $(S_{R(x,\,\sbullet\,)})_{x\in\Omega}$ \textbf{uniformly} in $x$ if there exists a constant $C>0$ (independent of $x$) such that for every map $M \colon \Omega\to D$ with $M=\nabla u$ for some $u\in\mathrm{W}^{1,p}(\Omega)$, there exists a gradient $p$-Young measure $(\nu_x)_{x\in\Omega}$ for which (i)-(iii) hold for almost every $x\in\Omega$ (with $M$ replaced by $M(x)$ and $\nu$ replaced by $\nu_x$).
\end{definition}
\begin{remark}\label{rk:remark_qc_hull}
\begin{enumerate}
\item In~\cite{KoRiWi13OPYM} it is shown that for the function $R(A)= - \det\, A$, $\mathbb{R}^{d\times d}$ is tightly contained in the $p$-quasiconvex hull of $S_R$.
\item The \term{closed $p$-quasiconvex hull} of a set $S$, denoted $S^{p\text{-}qc}$, is classically defined as the set of all $M$ for which there exists a homogeneous gradient $p$-Young measure $\nu$ so that~(i),~(ii) hold in the above definition with $S$ in place of $S_{R(x,\,\sbullet\,)}$.
\item Note that in the case that $R(x,\,\sbullet\,)$ is \emph{quasiconvex} (see~\eqref{eq:quasiconvex}) and $p\geq q$, the closed $p$-quasiconvex hull of $S_{R(x,\,\sbullet\,)}$ is $S_{R(x,\,\sbullet\,)}$ itself: Let $M \in S_{R(x,\,\sbullet\,)}^{p\text{-}qc}$. By the Jensen-type inequality in the Kinderlehrer--Pedregal characterization of gradient Young measures, we obtain
\[
\qquad R(x,M) \leq \lrangle{R(x,\,\sbullet\,),\nu} \leq 0,
\]
thus $M \in S_{R(x,\,\sbullet\,)}$. In particular, in this case, no strict superset of $S_{R(x,\,\sbullet\,)}$ can satisfy conditions (i) and (ii) of Definition~\ref{def:p-full} and hence cannot be contained tightly in the $p$-quasiconvex hull of $S_{R(x,\,\sbullet\,)}$.
\item Also, note that by (iii) we infer
\begin{equation*}
\qquad \int |A|^p \;\mathrm{d} \nu(A)\leq C \biggl[ \int |A-M|^p \;\mathrm{d} \nu(A) +|M|^p \biggr]
\leq C \bigl[ |R(x,M)|^{p/q}+|M|^p \bigr].
\end{equation*}
\end{enumerate}
\end{remark}
Recall that, by the characterization of Kinderlehrer--Pedregal~\cite{KinPed91CYMG,KinPed94GYMG}, $\nu=(\nu_x)_{x\in\Omega}$ is a \term{gradient $p$-Young measure}, that is, it is generated by a sequence of $\mathrm{L}^p$-norm-bounded gradients, if and only if the following conditions hold:
\begin{itemize}
\item[(I)] $\displaystyle\int_{\Omega}\int\abs{A}^p \;\mathrm{d} \nu_x(A)<\infty$;
\item[(II)] the barycenter $[\nu](x) := \int A \;\mathrm{d} \nu_x(A)$ is a gradient, i.e.\ there exists $\nabla u \in \mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ with $[\nu] = \nabla u$ a.e.;
\item[(III)] for every quasiconvex function $h \colon \mathbb{R}^{d \times d} \to \mathbb{R}$ with $\abs{h(A)}\leq C(1+\abs{A}^p)$, the Jensen-type inequality
\[
\qquad h(\nabla u(x)) \leq \int h(A) \;\mathrm{d} \nu_x(A) \qquad\text{holds for a.e.\ $x \in \Omega$.}
\]
\end{itemize}
We also introduce, for $R\in\mathcal{R}^{p,q}(\Omega;\mathbb{R}^{d \times d})$, the following pointwise condition, expressing that $\nu$ satisfies our side constraint:
\begin{itemize}
\item[(IV)] $\supp{\nu_x} \subset S_{R(x,\,\sbullet\,)}$ for a.e.\ $x \in \Omega$.
\end{itemize}
Our abstract characterization result for gradient $p$-Young measures is the following:
\begin{theorem} \label{thm:main_abstract}
Let $\Omega \subset \mathbb{R}^d$ be open and bounded with $|\partial\Omega|=0$, let $R\in\mathcal{R}^{p,q}(\Omega;\mathbb{R}^{d \times d})$ for $1 \leq q < \infty$, $1 < p < \infty$, and suppose that $\mathbb{R}^{d \times d}$ is tightly contained in the $p$-quasiconvex hull of $(S_{R(x,\,\sbullet\,)})_{x\in\Omega}$, uniformly in $x$. Then, the following are equivalent for a $p$-Young measure $\nu = (\nu_x)_{x\in\Omega} \subset \mathbf{M}^1(\mathbb{R}^{d \times d})$:
\begin{itemize}
\item[(i)] There exists a sequence of gradients $(\nabla u_j) \subset \mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ that generates $\nu$, such that
\[
\qquad \nabla u_j(x) \in S_{R(x,\,\sbullet\,)} \quad\text{for all $j\in\mathbb{N}$ and a.e.~$x\in\Omega$.}
\]
\item[(ii)] The conditions (I)--(IV) hold.
\end{itemize}
Furthermore, in this case the sequence $(u_j)$ can be chosen such that $(\nabla u_j)$ is $p$-equiintegrable and $u_j - u\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d)$, where $u\in\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ is the deformation underlying $\nu$ (i.e.\ the function whose gradient is the barycenter of $\nu$).
\end{theorem}
We first prove a key proposition, representing a convergence principle in the spirit of convex integration.
\begin{proposition}\label{prop:convexint}
Assume that $\Omega$, $R$, $p$, $q$ are as in the preceding theorem and suppose that $\mathbb{R}^{d \times d}$ is contained tightly in the $p$-quasiconvex hull of $(S_{R(x,\,\sbullet\,)})_{x\in\Omega}$ uniformly in $x$, and let $u\in \mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$. Then there exists $v\in \mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ such that
\begin{itemize}
\item[(i)]$\nabla v(x) \in S_{R(x,\,\sbullet\,)}$\quad for a.e.\ $x \in \Omega$,\vspace{0.2cm}
\item[(ii)] $v - u\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d)$,\vspace{0.2cm}
\item[(iii)]$\displaystyle \norm{\nabla v-\nabla u}^p_p\leq C\int_{\Omega} \mathbbm{1}_{\{y\,:\,R(y,\nabla u(y))>0\}}(x) \, |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x,$\vspace{0.2cm}
\end{itemize}
where $C>0$ is a constant independent of $u$.
\end{proposition}
\begin{remark}
The preceding theorem and proposition also hold in the more general situation where a family $(D_x)_{x\in\Omega}$ is tightly contained in the $p$-quasiconvex hull of $S_{R(x,\,\sbullet\,)}$ uniformly in $x$ (note that Definition~\ref{def:p-full} can be suitably generalized to $x$-dependent sets $D_x$ in an obvious way), under the additional assumption that any gradient $p$-Young measure $\nu$ with $\supp \nu_x \subset D_x$ a.e.~can be generated by a $p$-equiintegrable sequence of gradients $(\nabla u_j)$ such that $\nabla u_j(x)\in D_x$ a.e.\ and $u_j - u\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d)$ ($u$ denotes the map underlying $\nu$).
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:convexint}]
Assume without loss of generality that
\[
\int_{\Omega}\mathbbm{1}_{\{y\,:\,R(y,\nabla u(y))>0\}}(x) \, |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x > 0;
\]
otherwise $R(x,\nabla u(x))\leq 0$ for a.e.\ $x\in\Omega$, and $v:=u$ already satisfies (i)--(iii).
We construct a sequence of functions $\{v^l\}_{l\in\mathbb{N}}$, bounded in $\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$, such that
\begin{align}
\label{eq:det_proof}
&v^l - u \in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d),\\
\label{eq:smalldet}
&\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla v^l(y))>0\}}(x) |R(x,\nabla v^l(x))|^{p/q}\;\mathrm{d} x \\
&\qquad \leq 2^{-lp}\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla u(y))>0\}}(x) |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x, \notag\\
\label{eq:cauchy}
&\int_{\Omega}|\nabla v^{l+1}(x)-\nabla v^l(x)|^p\;\mathrm{d} x \\
&\qquad \leq 2^{-(l-1)p}C\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla u(y))>0\}}(x) |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x, \notag
\end{align}
where $C>0$ is a constant.
Let us construct the sequence inductively. Set $v^0 = u$ so that~\eqref{eq:det_proof} and~\eqref{eq:smalldet} are satisfied. If $v^l\in \mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ has been constructed to satisfy~\eqref{eq:det_proof} and~\eqref{eq:smalldet}, we find $v^{l+1}$ in the following way: since $\mathbb{R}^{d \times d}$ is tightly contained in the $p$-quasiconvex hull of $S_{R(x,\,\sbullet\,)}$ uniformly in $x$, there exists a gradient $p$-Young measure $(\nu_x^l)_{x\in\Omega}$ with $[\nu_x^l]=\nabla v^l(x)$ and with support in the set $S_{R(x,\,\sbullet\,)}$ almost everywhere.
Observe that by~(iii) in Definition~\ref{def:p-full}, for $x \in \Omega$ such that $R(x,\nabla v^l(x))\leq 0$ we have $\nu_x^l=\delta_{\nabla v^l(x)}$.
By standard Young measure arguments (see for example~\cite{Pedr97PMVP}), there exists a $p$-equiintegrable sequence of gradients $(\nabla v^{l,m})_m \subset \mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ generating $\nu^l$ such that $v^{l,m} - v^l\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d)$ and hence $v^{l,m} - u\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d)$ for all $m\in\mathbb{N}$.
We define $g:\Omega\times\mathbb{R}^{d\times d}\to\mathbb{R}$ by
\begin{equation}\label{eq:dettest}
g(x,A)= \mathbbm{1}_{\{y\,:\,R(y,A)>0\}}(x) |R(x,A)|^{p/q}=\begin{cases}R(x,A)^{p/q} & \text{if $R(x,A)>0$,}\\
0 & \text{otherwise.}
\end{cases}
\end{equation}
Using $g$ as a test function and the fact that $\nu^l_x$ is supported in $S_{R(x,\,\sbullet\,)}$, by Young measure representation we may choose $m$ large enough, say $m=M$, and define $v^{l+1}:=v^{l,M}$, such that
\begin{align*}
&\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla v^{l+1}(y))>0\}}(x)|R(x,\nabla v^{l+1}(x))|^{p/q}\;\mathrm{d} x \\
&\qquad \leq 2^{-(l+1)p}\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla u(y))>0\}}(x) |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x,
\end{align*}
i.e.~\eqref{eq:det_proof} as well as \eqref{eq:smalldet} hold for $v^{l+1}$.
Also, by taking $M$ even larger if necessary, we can ensure that also
\begin{equation}\label{eq:Mchoice}
\int_{\Omega}|\nabla v^{l+1}(x)-\nabla v^l(x)|^p\;\mathrm{d} x\leq 2^p\int_{\Omega}\int|A-\nabla v^l(x)|^p\;\mathrm{d}\nu_x^l(A) \;\mathrm{d} x.
\end{equation}
Indeed, this follows again from Young measure representation for the integrand $|A-\nabla v^l(x)|^p$.
Next, for any $l \in \mathbb{N}$, by property (iii) of Definition~\ref{def:p-full} and~\eqref{eq:smalldet} we infer that
\[
\begin{aligned}
\int_{\Omega}\int|A-\nabla v^l(x)|^p\;\mathrm{d}\nu_x^l(A)\;\mathrm{d} x&\leq C\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla v^l(y))>0\}}(x) |R(x,\nabla v^l(x))|^{p/q}\;\mathrm{d} x\\
&\leq 2^{-lp} C \int_{\Omega} \mathbbm{1}_{\{R(y,\nabla u(y))>0\}}(x) |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x
\end{aligned}
\]
for a constant $C>0$ independent of $x$. Combining with~\eqref{eq:Mchoice} we get the estimate
\begin{equation*}
\int_{\Omega}|\nabla v^{l+1}(x)-\nabla v^l(x)|^p\;\mathrm{d} x\leq 2^{-(l-1)p} C \int_{\Omega}\mathbbm{1}_{\{R(y,\nabla u(y))>0\}}(x) |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x,
\end{equation*}
which is \eqref{eq:cauchy}, completing the definition of our sequence.
The result then follows readily: by \eqref{eq:cauchy}, $(\nabla v^l)_{l\in\mathbb{N}}$ is a Cauchy sequence in $\mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ and therefore has a strong $\mathrm{L}^p$-limit $\nabla v$. In particular, it holds that $v - u\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d)$ and (ii) follows. Using the triangle inequality and~\eqref{eq:cauchy}, we deduce that
\[
\begin{aligned}
\norm{\nabla v-\nabla u}_p&\leq\sum_{l=0}^{\infty}\norm{\nabla v^{l+1}-\nabla v^l}_p\\
&\leq C^{1/p}\left(\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla u(y))>0\}}(x) |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x\right)^{1/p}\sum_{l=0}^{\infty}2^{-(l-1)}\\
&\leq 4C^{1/p}\left(\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla u(y))>0\}}(x) |R(x,\nabla u(x))|^{p/q}\;\mathrm{d} x\right)^{1/p},
\end{aligned}
\]
proving (iii). Lastly, $(\nabla v^l)_l$ is $p$-equiintegrable (being Cauchy in $\mathrm{L}^p$), and since $|R(x,\nabla v^l(x))|^{p/q}\leq C(|\kappa(x)|^{p/q}+|\nabla v^l(x)|^p)$, also $\{|R(\,\sbullet\,,\nabla v^l)|^{p/q}\}_{l\in\mathbb{N}}$ is equiintegrable and converges pointwise a.e., up to a subsequence, to $|R(\,\sbullet\,,\nabla v)|^{p/q}$. Therefore, by Vitali's Convergence Theorem,
\[
\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla v(y))>0\}}(x)|R(x,\nabla v(x))|^{p/q}\;\mathrm{d} x=0,
\]
which implies $R(x,\nabla v(x))\leq 0$ for a.e.\ $x\in\Omega$, i.e.\ (i), and the proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main_abstract}]
(i) $\Rightarrow$ (ii):
Conditions (I)--(III) follow from the usual Kinderlehrer--Pedregal Theorem in~\cite{KinPed94GYMG}. Regarding (IV), let $h\in \mathrm{L}^{\infty}(\Omega\times\mathbb{R}^{d\times d})$ be Carath\'{e}odory and such that $\supp h(x,\,\sbullet\,) \subset\subset \mathbb{R}^{d \times d} \setminus S_{R(x,\,\sbullet\,)}$ for almost every $x$. Then, by the assumptions on $\nabla u_j$,
\[
\int_\Omega \int h(x,A) \;\mathrm{d} \nu_x(A) \;\mathrm{d} x = \lim_{j\to\infty} \int_\Omega h(x,\nabla u_j(x)) \;\mathrm{d} x = 0.
\]
Varying $h$, we infer that $\supp \nu_x \subset S_{R(x,\,\sbullet\,)}$ for a.e.\ $x \in \Omega$.
(ii) $\Rightarrow$ (i):
For $1 < p < \infty$, $1 \leq q < \infty$ as in Definition~\ref{def:p-full}, let $\nu$ be a gradient $p$-Young measure with $\supp \nu_x \subset S_{R(x,\,\sbullet\,)}$ for a.e.\ $x \in \Omega$. Standard results yield that there exists a generating sequence $(\nabla u_j)$ for $\nu$ which is $p$-equiintegrable and satisfies $u_j - u\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d)$ where $\nabla u=[\nu]$. By Young measure representation applied to the test function $g$ in~\eqref{eq:dettest} and the assumption on the support of $\nu$, we may assume (after passing to a subsequence if necessary) that
\begin{equation}\label{eq:closetoK}
\int_{\Omega}\mathbbm{1}_{\{R(y,\nabla u_j(y))>0\}}(x)|R(x,\nabla u_j(x))|^{p/q}\;\mathrm{d} x<\frac{1}{j^p}.
\end{equation}
Applying Proposition~\ref{prop:convexint} to each $u_j$, we obtain a new sequence $(v_j)$, such that $\nabla v_j(x)\in S_{R(x,\,\sbullet\,)}$ a.e., $v_j - u\in\mathrm{W}^{1,p}_0(\Omega;\mathbb{R}^d)$ and, by~\eqref{eq:closetoK} and part~(iii) of Proposition~\ref{prop:convexint},
\[
\norm{\nabla u_j-\nabla v_j}_p<\frac{C^{1/p}}{j}.
\]
Hence $(\nabla v_j)$ is $p$-equiintegrable and generates $\nu$.
\end{proof}
\section{Differential inclusions involving prescribed Jacobians} \label{sc:geometry}
In this section we show that all of $\mathbb{R}^{d\times d}$ is tightly contained in the $p$-quasiconvex hull of $(S_{R(x,\,\sbullet\,)})_{x \in \Omega}$ uniformly in $x$ for all $p\in[1,d)$, where for $J_1$ and $J_2$ as in Theorem~\ref{thm:main_intro},
\[
R(x,A) := \max\{J_1(x) - \det\, A, \det\, A - J_2(x), 0 \}
\]
and the corresponding sublevel sets are given by
\[
S_{R(x,\,\sbullet\,)}=\setb{A\in\mathbb{R}^{d\times d}}{J_1(x)\leq \det\, A \leq J_2(x)}.
\]
Then Theorem~\ref{thm:main_abstract} establishes Theorem~\ref{thm:main_intro}.
We note that the above function $R$ is indeed an element of $\mathcal{R}^{p,d}(\Omega,\mathbb{R}^{d\times d})$. To see this, observe that
\begin{align*}
0 \leq R(x,A) &\leq \max\{J_1^+(x) - \det\, A, \det\, A + J_2^-(x), 0 \} \\
&\leq J_1^+(x) + J_2^-(x) + C|A|^d =: \kappa(x) + C|A|^d
\end{align*}
with $\kappa\in \mathrm{L}^{p/d}(\Omega)$. Of course, since $R(x,\,\sbullet\,)$ is quasiconvex for a.e.~$x\in\Omega$, by Remark~\ref{rk:remark_qc_hull} (3) we are forced to restrict attention to $p<d$.
The fact that $\mathbb{R}^{d\times d}$ is contained tightly in the $p$-quasiconvex hull of the above $(S_{R(x,\,\sbullet\,)})_{x\in\Omega}$ will be a corollary to the following proposition, which vastly generalizes Proposition 3.1 in~\cite{KoRiWi13OPYM}:
\begin{proposition} \label{prop:geometry}
Let $1 \leq p < d$, $r\in\mathrm{L}^{p/d}(\Omega)$, and set $R(x,A)=|\det\, A-r(x)|$. Then $R\in \mathcal{R}^{p,d}(\Omega,\mathbb{R}^{d\times d})$ and $\mathbb{R}^{d\times d}$ is tightly contained in the $p$-quasiconvex hull of $(S_{R(x,\,\sbullet\,)})_{x\in\Omega}$ uniformly in $x$.
\end{proposition}
Before proving Proposition~\ref{prop:geometry}, we state and prove, as a corollary, the corresponding result for the function $R(x,A)=\max\{J_1(x) - \det\, A, \det\, A - J_2(x), 0\}$.
\begin{corollary}\label{cor:geometry}
Let $1 \leq p < d$ and $J_1$, $J_2$ as in Theorem~\ref{thm:main_intro}. Set
\[
R(x,A)=\max\bigl\{J_1(x) - \det\, A, \det\, A - J_2(x), 0\bigr\}.
\]
Then $\mathbb{R}^{d\times d}$ is tightly contained in the $p$-quasiconvex hull of $(S_{R(x,\,\sbullet\,)})_{x\in\Omega}$ uniformly in $x$.
\end{corollary}
\begin{proof}
Suppose $M=\nabla u$ for some $u\in\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$. Define the function $r_M:\Omega\to\mathbb{R}$ in the following way:
\[
r_M(x)=\begin{cases}\det\, M(x) & \text{if $J_1(x)\leq\det\, M(x)\leq J_2(x)$,}\\
J_1(x) & \text{if $\det\, M(x)<J_1(x)$,}\\
J_2(x) & \text{if $\det\, M(x)>J_2(x)$.}
\end{cases}
\]
It then follows from the assumptions on $J_1$ and $J_2$ that $r_M\in\mathrm{L}^{p/d}(\Omega)$, and therefore Proposition~\ref{prop:geometry} applied to $r_M$ yields a gradient $p$-Young measure $(\nu_x)_x$ such that, for almost every $x\in\Omega$,
\begin{itemize}
\item[(i)] $[\nu_x] = M(x)$;
\item[(ii)] $\supp \nu_x \subset \setb{A\in\mathbb{R}^{d\times d}}{\det\, A = r_M(x)}\subset S_{R(x,\,\sbullet\,)}$;
\item[(iii)] $\displaystyle\int |A - M|^p\;\mathrm{d} \nu_x(A) \leq C|r_M(x) - \det\, M(x)|^{p/d}$,
\end{itemize}
where $C$ is independent of $M$ and $x$. The claim now follows from the observation, using the definitions of $r_M$ and $R$, that
\[
|r_M(x) - \det\, M(x)|\leq R(x,M(x))
\]
for almost every $x$.
\end{proof}
\subsection{Three dimensions}
We first prove Proposition~\ref{prop:geometry} for $d=3$ only. The proof for $d=2$ is similar but simpler, and the proof for $d>3$ is outlined in the next subsection.
\begin{proof}[Proof of Proposition~\ref{prop:geometry} for $d = 3$]
In the first steps of the proof, we fix a matrix $M_0$ and a real number $r$.
\proofstep{Step~1.}
Following~\cite{KoRiWi13OPYM}, we transform an arbitrary matrix $M_0$ to diagonal form using the real singular value decomposition and write $M_0 = \tilde{P} \tilde{D}_0 \tilde{Q}^T$ where $\tilde{D}_0 = \mathrm{diag}(\sigma_1,\sigma_2,\sigma_3)$ with $0\leq\sigma_1 \leq \sigma_2\leq\sigma_3$, and $\tilde{P},\tilde{Q}$ orthogonal matrices. If $\det\, M_0 < 0$, either $\tilde{P}$ or $\tilde{Q}$ has negative determinant, say $\det\, \tilde{P} < 0$ (the other case is similar). Then, $M_0 = P D_0 Q^T$ where
\[
D_0 := \mathrm{diag}(\sigma_1,\sigma_2,-\sigma_3), \qquad
P := \tilde{P}\mathrm{diag}(1,1,-1), \qquad
Q := \tilde{Q},
\]
with $P,Q \in \mathrm{SO}(3)$ and $\det\, D_0 < 0$. Similarly, if $\det\, M_0 \geq 0$, we may write $M_0 = P D_0 Q^T$, where $P,Q \in \mathrm{SO}(3)$ and $\det\, D_0 \geq 0$ for
\[
D_0 = \mathrm{diag}(\sigma_1,\sigma_2,\sigma_3).
\]
Note that if $D_0$ can be written as a laminate then the same holds for $M_0$ since $P (a \otimes b) Q^T = (Pa) \otimes (Qb)$ for any $a,b \in \mathbb{R}^3$. Also, we remark that the matrices $D_0$ and $M_0$ share the same
determinant and (Frobenius) matrix norm. Consequently, we may henceforth assume without loss of generality that
\[
M_0 = \mathrm{diag}(\sigma_1,\sigma_2,\pm\sigma_3)
\]
with $0\leq\sigma_1\leq\sigma_2\leq\sigma_3$.
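This normalization is easy to verify numerically. The following Python snippet is an illustrative sanity check only, not part of the proof (the helper name \texttt{signed\_svd} is ours): it factors a matrix as $PD_0Q^T$ with $P,Q\in\mathrm{SO}(3)$, absorbing a possible sign into a diagonal entry of $D_0$, and confirms that $D_0$ and $M_0$ share determinant and Frobenius norm, as used above.

```python
import numpy as np

def signed_svd(M):
    """Factor M = P @ D @ Q.T with P, Q in SO(3) and D diagonal.

    Mirrors Step 1: a possible negative sign is absorbed into one
    diagonal entry of D so that P and Q become rotations. (numpy
    orders singular values descending, the proof ascending; which
    entry carries the sign is immaterial for this check.)
    """
    U, s, Vt = np.linalg.svd(M)        # M = U @ np.diag(s) @ Vt
    D = np.diag(s)
    S = np.diag([1.0, 1.0, -1.0])      # reflection, S @ S = identity
    if np.linalg.det(U) < 0:           # M = (U @ S) @ (S @ D) @ Vt
        U, D = U @ S, S @ D
    if np.linalg.det(Vt) < 0:          # M = U @ (D @ S) @ (S @ Vt)
        Vt, D = S @ Vt, D @ S
    return U, D, Vt.T

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
P, D, Q = signed_svd(M)
assert np.allclose(P @ D @ Q.T, M)
assert np.isclose(np.linalg.det(P), 1.0) and np.isclose(np.linalg.det(Q), 1.0)
# D_0 and M_0 share determinant and Frobenius norm, as noted above:
assert np.isclose(np.linalg.det(D), np.linalg.det(M))
assert np.isclose(np.linalg.norm(D), np.linalg.norm(M))
```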
We now distinguish the cases $\sigma_3\geq\bigl(\frac{|r|}{2}\bigr)^{1/3}$ and $\sigma_3<\bigl(\frac{|r|}{2}\bigr)^{1/3}$.\\
\noindent\textbf{Case~I: $\sigma_3\geq\bigl(\frac{|r|}{2}\bigr)^{1/3}.$}
\proofstep{Step~I.2.}
Set $\gamma := \frac{|r - \det\, M_0|^{1/2}}{\sigma_3^{1/2}}$ and decompose $M_0$ twice along rank-one lines:
\begin{align*}
M_0 &= \frac{1}{4} \bigl[ M_0 + \gamma (\mathrm{e}_1 \otimes \mathrm{e}_2) + \gamma (\mathrm{e}_2 \otimes \mathrm{e}_1) \bigr]
+ \frac{1}{4} \bigl[ M_0 + \gamma (\mathrm{e}_1 \otimes \mathrm{e}_2) - \gamma (\mathrm{e}_2 \otimes \mathrm{e}_1) \bigr] \\
&\qquad + \frac{1}{4} \bigl[ M_0 - \gamma (\mathrm{e}_1 \otimes \mathrm{e}_2) + \gamma (\mathrm{e}_2 \otimes \mathrm{e}_1) \bigr]
+ \frac{1}{4} \bigl[ M_0 - \gamma (\mathrm{e}_1 \otimes \mathrm{e}_2) - \gamma (\mathrm{e}_2 \otimes \mathrm{e}_1) \bigr].
\end{align*}
Direct computation yields that two of these four matrices (either those where both $\gamma$'s come with the same sign, or those where the $\gamma$'s have different signs, depending on the signs of the third diagonal entry $\pm\sigma_3$ and of $r-\det\, M_0$) have determinant $r$, and the other two have determinant $2\det\, M_0 - r$. We call the former ones $M_{1,G1}$ and $M_{1,G2}$ ($G$ for \textit{good}) and the latter ones $M_{1,B1}$ and $M_{1,B2}$ ($B$ for \textit{bad}), so that we have the decomposition
\begin{align*}
M_0 = \frac{1}{4} M_{1,B1} + \frac{1}{4} M_{1,G1} + \frac{1}{4} M_{1,G2} + \frac{1}{4} M_{1,B2}
\end{align*}
with
\begin{align*}
\det\, M_{1,G1} &= \det\, M_{1,G2} = r, \\
\det\, M_{1,B1} &= \det\, M_{1,B2} = 2\det\, M_0 - r.
\end{align*}
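This determinant bookkeeping can be spot-checked numerically. The snippet below is illustrative only and not part of the proof (the helper name \texttt{laminate\_step} is ours): for a concrete diagonal $M_0$ it builds the four competitors, confirms that they average to $M_0$, and confirms that their determinants are $r$ (twice) and $2\det M_0 - r$ (twice).

```python
import numpy as np

def laminate_step(M0, r):
    """One double rank-one split as in Step I.2 (illustrative check).

    Returns the four matrices M0 +/- gamma e1(x)e2 +/- gamma e2(x)e1.
    Assumes the third diagonal entry of M0 is nonzero.
    """
    sigma3 = abs(M0[2, 2])
    gamma = np.sqrt(abs(r - np.linalg.det(M0)) / sigma3)
    E12, E21 = np.zeros((3, 3)), np.zeros((3, 3))
    E12[0, 1], E21[1, 0] = 1.0, 1.0
    return [M0 + a * gamma * E12 + b * gamma * E21
            for a in (+1, -1) for b in (+1, -1)]

M0 = np.diag([1.0, 2.0, 3.0])   # det M0 = 6
r = 2.0
parts = laminate_step(M0, r)
# The four matrices average to M0 (the barycenter is preserved):
assert np.allclose(sum(parts) / 4, M0)
# Two determinants equal r = 2, the other two equal 2 det M0 - r = 10:
dets = sorted(np.linalg.det(A) for A in parts)
assert np.allclose(dets, [2.0, 2.0, 10.0, 10.0])
```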
Now, if $|\det\, M_{0}|\leq |r|$, it holds that $\frac{|r|}{2}\geq\frac{1}{4}|r-\det\, M_{0}|$ and, taking into account that $\sigma_3\geq\bigl(\frac{|r|}{2}\bigr)^{1/3}$,
\begin{equation*}
\sigma_3\geq 4^{-1/3}|r-\det\, M_{0}|^{1/3}.
\end{equation*}
On the other hand, if $|\det\, M_{0}|> |r|$, we can use the fact that $\sigma_3\geq|\det\, M_{0}|^{1/3}$ to infer that
\begin{equation*}
\sigma_3\geq|\det\, M_{0}|^{1/3}\geq 2^{-1/3}(|r|+|\det\, M_{0}|)^{1/3}\geq 2^{-1/3}|r-\det\, M_{0}|^{1/3}.
\end{equation*}
This implies that there is a constant $C>0$, independent of $M_{0}$ and $r$, such that in either case
\begin{equation*}
\sigma_3\geq C|r-\det\, M_{0}|^{1/3}.
\end{equation*}
It then follows that for $J = G1, G2, B1, B2$,
\begin{equation} \label{eq:M_1X_dist}
\abs{M_{1,J}-M_0} = \biggl( 2 \frac{\abs{r - \det\, M_0}}{\sigma_3} \biggr)^{1/2} \leq C \frac{\abs{r - \det\, M_0}^{1/2}}{\abs{r - \det\, M_0}^{1/6}} = C \abs{r - \det\, M_0}^{1/3}
\end{equation}
for a constant $C > 0$ independent of $M_0$ and $r$, and also
\begin{equation} \label{eq:rdet_dist}
\abs{r - \det\, M_{1,J}} \leq 2 \abs{r - \det\, M_0}.
\end{equation}
Moreover, the singular value $\sigma_3$ is not altered by this construction, and so there is still a singular value of $M_{1,B1}$ and $M_{1,B2}$ with modulus at least $\bigl(\frac{|r|}{2}\bigr)^{1/3}$. Therefore, we may recursively apply the procedure from the preceding steps to decompose $M_{1,B1}$ and $M_{1,B2}$, each in turn taking the role of $M_0$. This yields matrices $M_{2,G1},\ldots,M_{2,G4}$, $M_{2,B1},\ldots,M_{2,B4}$ such that
\begin{align*}
M_{1,B1} = \frac{1}{4} M_{2,G1} + \frac{1}{4} M_{2,G2} + \frac{1}{4} M_{2,B1} + \frac{1}{4} M_{2,B2}, \\
M_{1,B2} = \frac{1}{4} M_{2,G3} + \frac{1}{4} M_{2,G4} + \frac{1}{4} M_{2,B3} + \frac{1}{4} M_{2,B4},
\end{align*}
and so on.
The laminate which we get after $k$ steps is then given by
\[
\nu_k:= \sum_{i=1}^{k}\sum_{j=1}^{2^i}\frac{1}{4^i}\delta_{M_{i,Gj}} + \sum_{j=1}^{2^{k}}\frac{1}{4^{k}}\delta_{M_{k,Bj}},
\]
where, for all $i$, $j$, $\det\, M_{i,Gj}=r$.
\proofstep{Step~I.3.}
It is clear that each $\nu_k$ satisfies $[\nu_k]=M_0$ and we turn attention to the distance integral in~(iii) of Definition~\ref{def:p-full}. That is,
\begin{equation*}
\int \abs{A-M_0}^p \;\mathrm{d} \nu_k(A) = \sum_{i=1}^{k} \sum_{j=1}^{2^i} \frac{1}{4^i} \abs{M_{i,Gj}-M_0}^p + \sum_{j=1}^{2^{k}} \frac{1}{4^{k}} \abs{M_{k,Bj}-M_0}^p.
\end{equation*}
For fixed $i$ and $j$, define $X_i := M_{i,Gj}$, $X_0 := M_0$, and let $X_{\ell-1}$ be the matrix $M_{\ell-1,Bj'}$ with $j' \in \{1,\ldots,2^{\ell-1}\}$ such that $X_\ell$ is constructed from $X_{\ell-1}$ by laminating as in the previous proof step (with the convention $M_{0,B1} := M_0$); similarly, let $Y_{k} := M_{k,Bj}$, $Y_0 := M_0$, and let $Y_{\ell-1}$ be defined analogously to $X_{\ell-1}$. Then, $\sum_{\ell=1}^i (X_\ell - X_{\ell-1}) = M_{i,Gj}-M_0$ and $\sum_{\ell=1}^{k} (Y_\ell - Y_{\ell-1}) = M_{k,Bj}-M_0$, and by virtue of the triangle inequality
\begin{equation*}
\int \abs{A-M_0}^p \;\mathrm{d} \nu_k(A) \leq \sum_{i=1}^{k} \sum_{j=1}^{2^i} \frac{1}{4^i} \biggl( \sum_{\ell=1}^i \abs{X_\ell -X_{\ell-1}} \biggr)^p + \sum_{j=1}^{2^{k}} \frac{1}{4^{k}} \biggl( \sum_{\ell=1}^{k} \abs{Y_\ell-Y_{\ell-1}} \biggr)^p.
\end{equation*}
In order to get bounds on $\abs{X_\ell - X_{\ell-1}}$, we use~\eqref{eq:M_1X_dist} and then~\eqref{eq:rdet_dist} recursively.
Thus,
\begin{align*}
\sum_{\ell=1}^i \abs{X_\ell -X_{\ell-1}} &\leq \sum_{\ell=1}^i C \, \abs{r - \det\, X_{\ell-1}}^{1/3} \leq \sum_{\ell=1}^i C 2^{(\ell-1)/3} \, \abs{r - \det\, M_0}^{1/3} \\
&\leq \frac{C \, \abs{r - \det\, M_0}^{1/3}}{2^{1/3}-1} 2^{i/3}
\end{align*}
and a similar estimate holds for the sum involving the $Y_\ell$'s with $i$ replaced by $k$. Hence,
\begin{align}
\int \abs{A-M_0}^p \;\mathrm{d} \nu_k(A) &\leq C \abs{r - \det\, M_0}^{p/3} \left[ \sum_{i=1}^{k} (2^{p/3 - 1})^i + (2^{p/3 - 1})^{k} \right]\nonumber \\
&\leq C \abs{r - \det\, M_0}^{p/3} \biggl[ \frac{1}{1-\rho} + \rho^{k} \biggr] \nonumber\\
&\leq C_p \abs{r - \det\, M_0}^{p/3},
\label{eq:geom_prop1}
\end{align}
where $\rho:=2^{p/3 - 1}<1$ (since $p < 3$) and $C_p>0$ is a constant depending on $p$ (but not on $M_0$ or $r$) which blows up as $p\to 3$.
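The elementary geometric-series bounds entering~\eqref{eq:geom_prop1} can be checked directly; the following lines are an illustrative numerical sanity check, not part of the proof.

```python
# Sanity check of the partial-sum bound
#   sum_{l=1}^{i} 2^((l-1)/3) <= 2^(i/3) / (2^(1/3) - 1)
# and of the summability of rho^i with rho = 2^(p/3 - 1) < 1 for p < 3.
for i in range(1, 30):
    partial = sum(2 ** ((l - 1) / 3) for l in range(1, i + 1))
    assert partial <= 2 ** (i / 3) / (2 ** (1 / 3) - 1)

for p in (1.0, 2.0, 2.9):
    rho = 2 ** (p / 3 - 1)
    assert rho < 1
    geom = sum(rho ** i for i in range(1, 200))
    assert geom <= rho / (1 - rho) + 1e-9   # bounded by the full geometric series
```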
Also, each $\nu_k$ is a probability measure and by \eqref{eq:geom_prop1} we deduce that
\begin{align}
\int \abs{A}^p \;\mathrm{d} \nu_k(A) & \leq 2^p \left[\int \abs{A-M_0}^p \;\mathrm{d} \nu_k(A) + \abs{M_0}^p \right]\nonumber\\
& \leq 2^p C_p \abs{r - \det\, M_0}^{p/3} + 2^p \abs{M_0}^p.
\label{eq:geom_prop2}
\end{align}
Observe moreover that the mass of $\nu_k$ carried by the matrices outside $S_R$ is
\begin{equation}\label{eq:mass}
\nu_k \bigl( \setb{ A \in \mathbb{R}^{3 \times 3} }{ \det\, A \neq r } \bigr) = \frac{2^{k}}{4^{k}} \to 0 \qquad\text{as $k\to\infty$.}
\end{equation}
\noindent\textbf{Case~II: $\sigma_3<\bigl(\frac{|r|}{2}\bigr)^{1/3}$.}
\proofstep{Step~II.2.} Again, we assume that $M_0$ is given by
\[
M_0 := \mathrm{diag}(\sigma_1,\sigma_2,\pm\sigma_3)
\]
with $0<\sigma_1\leq\sigma_2\leq\sigma_3$, but now $\sigma_3<\bigl(\frac{|r|}{2}\bigr)^{1/3}$. We decompose $M_0$ along a rank-one line as
\[
M_0=\frac{1}{2}\left[M_0+\delta(\mathrm{e}_3\otimes \mathrm{e}_3)\right]+\frac{1}{2}\left[M_0-\delta(\mathrm{e}_3\otimes \mathrm{e}_3)\right]
=:\frac{1}{2}M_0^++\frac{1}{2}M_0^-,
\]
where we choose $\delta=2\bigl(\frac{|r|}{2}\bigr)^{1/3}$. Then, the third diagonal entries $\pm\sigma_3+\delta$ and $\pm\sigma_3-\delta$ of $M_0^+$ and $M_0^-$, respectively, have absolute value at least $\bigl(\frac{|r|}{2}\bigr)^{1/3}$. Moreover, we have the estimates
\begin{align}
\abs{M_0^{\pm} - M_0} &= 2\biggl(\frac{|r|}{2}\biggr)^{1/3}\leq 2|r-\det\, M_0|^{1/3} \label{eq:dist+-},\\
\abs{r - \det\, M_0^{\pm}} &\leq 3\abs{r - \det\, M_0}. \label{eq:rdet_dist+-}
\end{align}
Indeed, both inequalities follow from the observation that $|\det\, M_0|\leq\sigma_3^3<|r|/2$, and therefore $|r-\det\, M_0|>|r|/2$.
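These estimates admit a quick numerical spot-check. The snippet below is illustrative only and not part of the proof (the sampling ranges are our choice): it draws random instances satisfying the Case~II hypothesis $\sigma_3<(|r|/2)^{1/3}$ and verifies the lower bound on the modified third entry together with~\eqref{eq:dist+-} and~\eqref{eq:rdet_dist+-}.

```python
import numpy as np

# Illustrative spot-check of the Case II split: with delta = 2(|r|/2)^(1/3),
# the modified third diagonal entry has modulus at least (|r|/2)^(1/3), and
# the estimates (eq:dist+-), (eq:rdet_dist+-) hold.
rng = np.random.default_rng(1)
for _ in range(1000):
    r = rng.choice([-1, 1]) * rng.uniform(0.1, 10)
    t = (abs(r) / 2) ** (1 / 3)                 # the threshold (|r|/2)^(1/3)
    s = np.sort(rng.uniform(0, t, size=3))      # sigma_1 <= sigma_2 <= sigma_3 < t
    M0 = np.diag([s[0], s[1], rng.choice([-1, 1]) * s[2]])
    delta = 2 * t
    for sign in (+1, -1):
        Mpm = M0.copy()
        Mpm[2, 2] += sign * delta               # M0 +/- delta e3 (x) e3
        assert abs(Mpm[2, 2]) >= t - 1e-12
        assert np.linalg.norm(Mpm - M0) <= 2 * abs(r - np.linalg.det(M0)) ** (1 / 3) + 1e-9
        assert abs(r - np.linalg.det(Mpm)) <= 3 * abs(r - np.linalg.det(M0)) + 1e-9
```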
\proofstep{Step~II.3.}
Case~I is now applicable to both $M_0^+$ and $M_0^-$, and we treat them exactly as before. This gives us two sequences of laminates $\nu_k^+$ and $\nu_k^-$ with $[\nu_k^{\pm}]=M_0^{\pm}$,
\begin{equation}\label{eq:mass2}
\nu^\pm_k \bigl( \setb{ A \in \mathbb{R}^{3 \times 3} }{ \det\, A \neq r } \bigr) \to 0
\end{equation}
as $k\to\infty$, and the estimate
\begin{equation}\label{eq:pmestimate}
\int|A-M_0^{\pm}|^p\;\mathrm{d}\nu_k^{\pm}(A)\leq C_p|r-\det\, M_0^{\pm}|^{p/3}
\end{equation}
for $1\leq p<3$, where $C_p$ does not depend on $k$, $M_0$ or $r$. It follows that the measure $\nu_k$ defined by $\nu_k=\frac{1}{2}\nu_k^++\frac{1}{2}\nu_k^-$ satisfies $[\nu_k]=M_0$ and $\nu_k \bigl( \setb{ A \in \mathbb{R}^{3 \times 3} }{ \det\, A \neq r } \bigr) \to 0
$. Moreover, combining~\eqref{eq:pmestimate} with~\eqref{eq:dist+-} and~\eqref{eq:rdet_dist+-}, we have
\begin{align}
\int|A-M_0|^p\;\mathrm{d}\nu_k(A)&= \frac{1}{2}\int|A-M_0|^p\;\mathrm{d}\nu_k^+(A)+\frac{1}{2}\int|A-M_0|^p\;\mathrm{d}\nu_k^-(A) \nonumber\\
&\leq C_p\biggl[|M_0^+-M_0|^p+|M_0^--M_0|^p \nonumber\\
& \qquad\qquad +\int |A-M_0^+|^p\;\mathrm{d}\nu_k^+(A) +\int |A-M_0^-|^p\;\mathrm{d}\nu_k^-(A)\biggr] \nonumber\\
&\leq C_p\left[|r-\det\, M_0|^{p/3}+|r-\det\, M_0^{\pm}|^{p/3}\right] \nonumber\\
&\leq C_p|r-\det\, M_0|^{p/3}.\label{eq:pbounded2}
\end{align}
\proofstep{Step~4.} In this last step, let $M(x)=\nabla u(x)$ for some $u\in\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ and $r\in\mathrm{L}^{p/d}(\Omega)$. Applying the previous steps to $M_0=M(x)$ and $r=r(x)$ for almost every $x$, we obtain a sequence $((\nu_{x,k})_{x\in\Omega})_{k\in\mathbb{N}}$ of parametrized probability measures. Note that, for every $k$, $(\nu_{x,k})_x$ is weakly* measurable. Indeed, $M:\Omega\to\mathbb{R}^{d\times d}$ is measurable by assumption, and the matrices obtained from the rank-one splittings of $M_0$ in the previous steps depend piecewise continuously on $M_0$: the only discontinuity occurs at $\sigma_3=(|r|/2)^{1/3}$, which is where Cases I and II bifurcate, and away from this set the dependence of the matrices $M_{i,Gj}$, $M_{i,Bj}$ on $M_0$ is continuous.
Moreover, by the bounds~\eqref{eq:geom_prop2} and~\eqref{eq:pbounded2}, we obtain that $(\nu_{x,k})_k$ is a bounded sequence in the space $\mathrm{L}_w^{\infty}(\Omega;\mathbf{M}^1(\mathbb{R}^{d\times d}))$ of weakly* measurable maps from $\Omega$ into $\mathbf{M}^1(\mathbb{R}^{d\times d})$. Therefore (cf.~\cite{Mull99VMMP}, Sections 3.1 and 3.4), there exists a subsequence (not relabeled) and a Young measure $\nu=(\nu_x)_x\in\mathrm{L}_w^{\infty}(\Omega;\mathbf{M}^1(\mathbb{R}^{d\times d}))$ such that
\begin{equation}\label{eq:YMconvergence}
\int_{\Omega}\int_{\mathbb{R}^{d\times d}}f(x,A)\;\mathrm{d} \nu_{x,k}(A)\;\mathrm{d} x\to \int_{\Omega}\int_{\mathbb{R}^{d\times d}}f(x,A)\;\mathrm{d} \nu_x(A)\;\mathrm{d} x
\end{equation}
as $k\to\infty$, for every Carath\'{e}odory function $f:\Omega\times\mathbb{R}^{d\times d}\to\mathbb{R}$ such that the family $\left(\int f(x,A)\;\mathrm{d}\nu_{x,k}(A)\right)_k$ is equiintegrable.
We claim that $(\nu_x)_x$ is a gradient $p$-Young measure that satisfies (i)--(iii) from Definition~\ref{def:p-full}, which then implies the proposition. First, a standard diagonal argument together with the bounds~\eqref{eq:geom_prop2} and~\eqref{eq:pbounded2} implies that indeed $\nu$ is a gradient $p$-Young measure.
Property (i) follows from the fact that $[\nu_{x,k}]=M(x)$ for almost every $x$, and from~\eqref{eq:YMconvergence} with $f(x,A)=\psi(x)A$ for any $\psi\in \mathrm{L}^\infty(\Omega)$. Varying over $\psi$ then gives $[\nu_x]=M(x)$ almost everywhere.
Property (ii) is a consequence of~\eqref{eq:mass},~\eqref{eq:mass2} as well as the choice $f(x,A)=\psi(x)\mathbbm{1}_{\{M\,:\,\det\, M\neq r(x)\}}(A)$ in~\eqref{eq:YMconvergence} (the characteristic function is lower semicontinuous with respect to $A$, which makes it admissible as a test function; see~\cite{Mull99VMMP}, Section 3.4).
Finally, (iii) is a consequence of~\eqref{eq:geom_prop2} and~\eqref{eq:pbounded2} in conjunction with~\eqref{eq:YMconvergence} using $f(x,A)=|A-M(x)|^p$. Notice that the equiintegrability of $\left(\int f(x,A)\;\mathrm{d}\nu_{x,k}(A)\right)_k$ for this choice of $f$ follows from~\eqref{eq:geom_prop1} and~\eqref{eq:pbounded2} and the assumptions on $r$ and $M$.
\end{proof}
\subsection{Arbitrary dimensions}
\label{sc:arbitrary_dim}
In this part, we briefly outline the proof of Proposition~\ref{prop:geometry} for arbitrary dimensions. The cases $d=3$ and $d>3$ are quite similar, so we only provide the basic estimates; everything else remains the same.
\begin{proof}[Proof of Proposition~\ref{prop:geometry} for $d>3$] \proofstep{Step 1.} As before we bring a matrix $M_0\in \mathbb{R}^{d\times d}$ into diagonal form and write
\[
M_0 := \mathrm{diag}(\sigma_1,\sigma_2,\ldots,\pm\sigma_d),
\]
where $0\leq\sigma_1\leq\sigma_2\leq\ldots\leq\sigma_d$.
We now distinguish the cases $\sigma_3\cdots\sigma_d\geq\bigl(\frac{|r|}{2}\bigr)^{(d-2)/d}$ and $\sigma_3\cdots\sigma_d<\bigl(\frac{|r|}{2}\bigr)^{(d-2)/d}$.\\
\noindent\textbf{Case~I: $\sigma_3\cdots\sigma_d\geq\bigl(\frac{|r|}{2}\bigr)^{(d-2)/d}$.}
\proofstep{Step~I.2.}
Set $\gamma := \frac{|r - \det\, M_0|^{1/2}}{(\sigma_3\cdots\sigma_d)^{1/2}}$ and decompose $M_0$ twice along rank-one lines:
\begin{align*}
M_0 &= \frac{1}{4} \bigl[ M_0 + \gamma (\mathrm{e}_1 \otimes \mathrm{e}_2) + \gamma (\mathrm{e}_2 \otimes \mathrm{e}_1) \bigr]
+ \frac{1}{4} \bigl[ M_0 + \gamma (\mathrm{e}_1 \otimes \mathrm{e}_2) - \gamma (\mathrm{e}_2 \otimes \mathrm{e}_1) \bigr] \\
&\qquad + \frac{1}{4} \bigl[ M_0 - \gamma (\mathrm{e}_1 \otimes \mathrm{e}_2) + \gamma (\mathrm{e}_2 \otimes \mathrm{e}_1) \bigr]
+ \frac{1}{4} \bigl[ M_0 - \gamma (\mathrm{e}_1 \otimes \mathrm{e}_2) - \gamma (\mathrm{e}_2 \otimes \mathrm{e}_1) \bigr] \\
&=: \frac{1}{4} M_{1,B1} + \frac{1}{4} M_{1,G1} + \frac{1}{4} M_{1,G2} + \frac{1}{4} M_{1,B2},
\end{align*}
where the ``good'' and the ``bad'' matrices are again labeled such that
\begin{align*}
\det\, M_{1,G1} &= \det\, M_{1,G2} = r, \\
\det\, M_{1,B1} &= \det\, M_{1,B2} = 2\det\, M_0 - r.
\end{align*}
If $|\det\, M_{0}|\leq |r|$, it holds that $\frac{|r|}{2}\geq\frac{1}{4}|r-\det\, M_{0}|$ and, taking into account that $\sigma_3\cdots\sigma_d\geq \bigl(\frac{|r|}{2}\bigr)^{(d-2)/d}$,
\[
\sigma_3\cdots\sigma_d\geq \left(\frac{1}{4}\right)^{(d-2)/d}|r-\det\, M_{0}|^{(d-2)/d}.
\]
If, however, $|\det\, M_{0}|> |r|$, we need a more involved estimate: note that
\begin{equation}\label{eq:star_d}
(\sigma_1\sigma_2)^{d-2} \leq (\sigma_3\cdots\sigma_d)(\sigma_3\cdots\sigma_d)=(\sigma_3\cdots\sigma_d)^2.
\end{equation}
Then, through \eqref{eq:star_d}, we obtain that
\[
|\det\, M_{0}|^{(d-2)/d}\leq (\sigma_3\cdots\sigma_d)^{2/d}(\sigma_3\cdots\sigma_d)^{(d-2)/d}=\sigma_3\cdots\sigma_d,
\]
i.e.
\[
(\sigma_3\cdots\sigma_d)^{d/(d-2)}\geq|\det\, M_{0}|\geq \frac{1}{2}(|r|+|\det\, M_{0}|)\geq \frac{1}{2}|r-\det\, M_{0}|.
\]
This shows that there is a constant $C>0$, independent of $M_{0}$ and $r$, such that in either case
\[
\sigma_3\cdots\sigma_d\geq C|r-\det\, M_{0}|^{(d-2)/d}.
\]
Then the following estimates hold:
\begin{align*}
\abs{M_{1,J}-M_0} &= \biggl( 2 \frac{\abs{r - \det\, M_0}}{\sigma_3\cdots\sigma_d} \biggr)^{1/2} \leq C \frac{\abs{r - \det\, M_0}^{1/2}}{|r-\det\, M_0|^{(d-2)/2d}}= C\abs{r - \det\, M_0}^{1/d},\\
\abs{r - \det\, M_{1,J}} &\leq 2 \abs{r - \det\, M_0}.
\end{align*}
We may now proceed as in the case $d=3$.
\noindent\textbf{Case~II: $\sigma_3\cdots\sigma_d<\bigl(\frac{|r|}{2}\bigr)^{(d-2)/d}$.}
\proofstep{Step~II.2.} We may still assume that $M_0$ is given by
\[
M_0 := \mathrm{diag}(\sigma_1,\sigma_2,\ldots,\pm\sigma_d)
\]
with $0<\sigma_1\leq\sigma_2\leq\ldots\leq\sigma_d$, but now $\sigma_3\cdots\sigma_d<\bigl(\frac{|r|}{2}\bigr)^{(d-2)/d}$. We decompose $M_0$ along rank-one lines as
\[
\begin{aligned}
M_0&=\frac{1}{2}\left[M_0+\delta(\mathrm{e}_3\otimes \mathrm{e}_3)\right]+\frac{1}{2}\left[M_0-\delta(\mathrm{e}_3\otimes \mathrm{e}_3)\right]\\
&=:\frac{1}{2}M_0^++\frac{1}{2}M_0^-,
\end{aligned}
\]
where we choose $\delta=2\bigl(\frac{|r|}{2}\bigr)^{1/d}$. Then, the third diagonal entries $\sigma_3+\delta$ and $\sigma_3-\delta$ of $M_0^+$ and $M_0^-$, respectively, have absolute value at least $\bigl(\frac{|r|}{2}\bigr)^{1/d}$. Note that by \eqref{eq:star_d}
\[
|\det\, M_0|\leq (\sigma_3\cdots\sigma_d)^{2/(d-2)}\sigma_3\cdots\sigma_d\leq \biggl(\frac{|r|}{2}\biggr)^{2/d}\biggl(\frac{|r|}{2}\biggr)^{(d-2)/d}=\frac{|r|}{2}.
\]
Therefore,
\[
\abs{M_0^{\pm} - M_0} = \delta = 2 \left(\frac{|r|}{2}\right)^{1/d}\leq 2\abs{r - \det\, M_0}^{1/d},
\]
and
\[
\abs{r - \det\, M_0^{\pm}}\leq \abs{r - \det\, M_0} + \abs{\det\, M_0 - \det\, M_0^{\pm}}.
\]
But
\begin{align*}
\abs{\det\, M_0 - \det\, M_0^{\pm}} &= \abs{\delta\sigma_1\sigma_2\sigma_4\cdots\sigma_d}\nonumber\\
&\leq 2\left(\frac{|r|}{2}\right)^{1/d}\sigma_3\sigma_3\cdots\sigma_d\nonumber\\
&\leq 2\left(\frac{|r|}{2}\right)^{1/d}\left(\frac{|r|}{2}\right)^{1/d}\biggl(\frac{|r|}{2}\biggr)^{(d-2)/d}\nonumber\\
&= |r| \leq 2\abs{r - \det\, M_0},
\end{align*}
where we have used the fact that
\[
\sigma_3^{d-2}\leq \sigma_3\cdots\sigma_d\leq\biggl(\frac{|r|}{2}\biggr)^{(d-2)/d}
\]
and $|\det\, M_0|\leq \abs{r}/2$.
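As an illustrative numerical spot-check of this chain of estimates (not part of the proof; the choice $d=5$ and the sampling ranges are ours), one can verify $|\det M_0 - \det M_0^{\pm}|\leq |r|$ and hence $|r - \det M_0^{\pm}|\leq 3|r-\det M_0|$ on random instances satisfying the Case~II hypothesis:

```python
import numpy as np

# Illustrative check, d = 5: under the Case II hypothesis
# sigma_3 ... sigma_d < (|r|/2)^((d-2)/d), the perturbation
# delta = 2 (|r|/2)^(1/d) satisfies |det M0 - det M0^pm| <= |r|,
# hence |r - det M0^pm| <= 3 |r - det M0|.
d = 5
rng = np.random.default_rng(2)
for _ in range(1000):
    r = rng.choice([-1, 1]) * rng.uniform(0.1, 10)
    cap = (abs(r) / 2) ** ((d - 2) / d)
    s = np.sort(rng.uniform(0.01, 1.0, size=d))
    # rescale so that sigma_3 * ... * sigma_d < cap
    tail = np.prod(s[2:])
    s *= 0.9 * (cap / tail) ** (1 / (d - 2))
    M0 = np.diag(np.append(s[:-1], rng.choice([-1, 1]) * s[-1]))
    delta = 2 * (abs(r) / 2) ** (1 / d)
    for sign in (+1, -1):
        Mpm = M0.copy()
        Mpm[2, 2] += sign * delta   # M0 +/- delta e3 (x) e3
        assert abs(np.linalg.det(M0) - np.linalg.det(Mpm)) <= abs(r) + 1e-9
        assert abs(r - np.linalg.det(Mpm)) <= 3 * abs(r - np.linalg.det(M0)) + 1e-9
```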
If $|\sigma_3\pm\delta|\sigma_4\cdots\sigma_d\geq\bigl(\frac{|r|}{2}\bigr)^{(d-2)/d}$, we can continue as in Case I. If not, we repeat the argument of Step II.2 (after reordering the singular values), which can be done exactly as above. It is easy to see that after at most $(d-2)$ steps, we are in the situation of Case I.
\end{proof}
\section{Applications}\label{sc:applications}
In the following we give precise statements and proofs of the applications mentioned in the introduction.
\subsection{Characterization of Young measures}
We first prove our main theorem:
\begin{theorem}\label{thm:main2}
Let $1 < p < d$. Suppose that $\Omega \subset \mathbb{R}^d$ is open and bounded, $|\partial\Omega|=0$, and let $\nu = (\nu_x)_{x\in\Omega} \subset \mathbf{M}^1(\mathbb{R}^{d \times d})$ be a $p$-Young measure. Moreover, let $J_1 \colon \Omega \to [-\infty,+\infty)$, $J_2 \colon \Omega \to (-\infty,+\infty]$ be measurable and such that $J_1(x)\leq J_2(x)$ for a.e.~$x\in\Omega$. Also, assume that
\begin{equation*}
\int_{\Omega}J_1^+(x)^{p/d} \;\mathrm{d} x<\infty
\qquad\text{and}\qquad
\int_{\Omega}J_2^-(x)^{p/d} \;\mathrm{d} x<\infty,
\end{equation*}
where $J_1^+$ and $J_2^-$ denote the positive part of $J_1$ and the negative part of $J_2$, respectively. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] There exists a sequence of gradients $(\nabla u_j) \subset \mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ that generates $\nu$, such that
\[
\qquad J_1(x) \leq \det \nabla u_j(x) \leq J_2(x) \quad\text{for all $j\in\mathbb{N}$ and a.e.~$x\in\Omega$. }
\]
\item[(ii)] The conditions (I)-(IV) hold:
\begin{itemize}
\item[(I)] $\displaystyle\int_{\Omega}\int\abs{A}^p \;\mathrm{d} \nu_x(A)<\infty$;
\item[(II)] the barycenter $[\nu](x) := \int A \;\mathrm{d} \nu_x(A)$ is a gradient, i.e.\ there exists $\nabla u \in \mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ with $[\nu] = \nabla u$ a.e.;
\item[(III)] for every quasiconvex function $h \colon \mathbb{R}^{d \times d} \to \mathbb{R}$ with $\abs{h(A)}\leq c(1+\abs{A}^p)$, the Jensen-type inequality
\[
\qquad\qquad h(\nabla u(x)) \leq \int h(A) \;\mathrm{d} \nu_x(A) \qquad\text{holds for a.e.\ $x \in \Omega$;}
\]
\item[(IV)] $\supp{\nu_x} \subset \setb{ A \in \mathbb{R}^{d \times d} }{ J_1(x) \leq \det\, A \leq J_2(x) }$ for a.e.\ $x \in \Omega$.
\end{itemize}
\end{itemize}
Furthermore, in this case the sequence $(u_j)$ can be chosen such that $(\nabla u_j)$ is $p$-equiintegrable and $u_j - u\in\mathrm{W}^{1,p}_0(\Omega,\mathbb{R}^d)$, where $u\in\mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$ is the deformation underlying $\nu$ (i.e.\ the function whose gradient is the barycenter of $\nu$).
\end{theorem}
\begin{proof}
The result follows by Theorem~\ref{thm:main_abstract} and Corollary~\ref{cor:geometry}.
\end{proof}
Let us also state a refinement of the sufficiency part of the preceding theorem (and also of the main result of~\cite{KoRiWi13OPYM}):
\begin{theorem} \label{thm:kappa_equiint}
For $1 < p < d$ and a bounded open Lipschitz domain $\Omega \subset \mathbb{R}^d$, let $\kappa$ be a \term{singular growth modulus}, that is, a convex function $\kappa \colon (0,\infty) \to [0,\infty)$ with $\kappa(s) \to +\infty$ as $s \to 0$, which we extend by setting $\kappa(s) := +\infty$ for $s\leq 0$. Assume furthermore that we are given a Young measure $\nu = (\nu_x)_{x\in\Omega} \subset \mathbf{M}^1(\mathbb{R}^{d \times d})$ satisfying~(I)--(III) from Theorem~\ref{thm:main2} as well as
\begin{equation} \label{eq:kappa_cond}
\int \kappa( \det\, A ) \;\mathrm{d} \nu_x(A) < \infty
\qquad \text{for a.e.\ $x \in \Omega$.}
\end{equation}
Then, there exists a sequence of gradients $(\nabla u_j) \subset \mathrm{L}^p(\Omega;\mathbb{R}^{d \times d})$ that generates $\nu$ and such that
\begin{equation} \label{eq:kappa_equiint}
\bigl\{ \abs{\nabla u_j}^p + \kappa( \det \nabla u_j ) \bigr\}_j
\quad\text{is an equiintegrable family.}
\end{equation}
\end{theorem}
\begin{proof}
The condition~\eqref{eq:kappa_cond} entails in particular that $\nu_{x_0}( \setn{ A \in \mathbb{R}^{d \times d}}{ \det\, A > 0 })=1$ for almost every $x_0\in\Omega$. Fix such an $x_0 \in \Omega$ and denote by $A_0 := [\nu_{x_0}] = \nabla u(x_0) \in \mathbb{R}^{d \times d}$ the barycenter of $\nu_{x_0}$. Let $(v_j) \subset (\mathrm{W}^{1,p} \cap \mathrm{C}^\infty)(\Bbb^d;\mathbb{R}^d)$ be such that $(\nabla v_j)$ is a $p$-equiintegrable generating sequence for $\nu_{x_0}$ satisfying the additional constraint $v_j(y) = A_0 y$ for $y \in \partial \Bbb^d$; the existence of this sequence follows from standard Young measure results, see for example Lemmas~8.3 and~8.15 in~\cite{Pedr97PMVP}.
Let $n \in \mathbb{N}$ and select $k = k(n) \in \mathbb{N}$ so large that $k(n) \geq n$, $\kappa(1/k) \geq 1$, and
\[
\kappa(1/k) \cdot \nu_{x_0} \bigl( \setb{ A \in \mathbb{R}^{d \times d} }{ \det\, A \leq 1/k } \bigr)
\leq \int_{\{\det\, A \leq 1/k\}} \kappa( \det\, A ) \;\mathrm{d} \nu_{x_0}(A)
\leq \frac{1}{n}.
\]
Here we have used implicitly that $\kappa$ is decreasing on an interval $(0,s_0)$ with $s_0 > 0$ since it is convex and $\kappa(s) \to +\infty$ as $s \to 0$; choose $k \geq 1/s_0$.
Using the Young measure representation of limits and discarding some elements at the beginning of the sequence $(v_j)$ if necessary, we may pick $j = j(n)$ such that with $\omega_d := \abs{\Bbb^d}$ the following two conditions hold:
\begin{align}
&\int_{E_j} \, \absBB{ \frac{1}{k} - \det\, \nabla v_j }^{p/d}\;\mathrm{d} y + \abs{E_j}
\leq \frac{4 \omega_d}{\kappa(1/k) n}
\leq \frac{C}{n}
\qquad\text{and} \label{eq:YM_conv1} \\
&\int_{\{ 1/(k+1) \leq \det\, \nabla v_j \leq 1/\ell \} } \, \kappa( \det\, \nabla v_j ) \;\mathrm{d} y \label{eq:YM_conv2}\\
&\qquad \leq \omega_d \int_{\{ 1/(k+1) \leq \det\, A \leq 1/\ell \}} \kappa( \det\, A ) \;\mathrm{d} \nu_{x_0}(A) + \frac{1}{n} \notag
\end{align}
for all $\ell = 1,\ldots,k = k(n)$, where
\[
E_j := \setb{ y \in \Bbb^d }{ \det\, \nabla v_j(y) \leq 1/k }
\]
and $C = C(d)$ is a dimensional constant. For~\eqref{eq:YM_conv2} we used the Young measure upper semi-representation for the \emph{bounded} upper semicontinuous integrand $g(A) := \mathbbm{1}_{\{ 1/(k+1) \leq \det\, A \leq 1/\ell \}} \cdot \kappa( \det\, A )$,
\[
\limsup_{j\to\infty} \int_{\Bbb^d} g(\nabla v_j(y)) \;\mathrm{d} y \leq \int_{\Bbb^d} \int g(A) \;\mathrm{d} \nu_{x_0}(A) \;\mathrm{d} x.
\]
Since the above are only finitely many conditions, they can be satisfied by discarding only finitely many leading terms in the sequence $(v_j)$. As an immediate consequence, however, the assertion~\eqref{eq:YM_conv2} in fact holds for all $\ell \in \mathbb{N}$ since for $\ell > k+1$ it is trivially satisfied.
Next, we choose an open set $D_j$ with Lipschitz boundary and such that
\begin{align*}
B_j &:= \setb{ y \in \Bbb^d }{ \det\, \nabla v_j(y) < 1/(k+1) } \\
&\phantom{:}\subset D_j \subset \setb{ y \in \Bbb^d }{ \det\, \nabla v_j(y) \leq 1/k } = E_j.
\end{align*}
This is always possible: since $\nabla v_j$ is continuous, $\partial B_j$ and $\partial E_j$ can meet only in $\partial \Bbb^d$, so we can construct $D_j$ with a Lipschitz (or even smooth) boundary. Invoking Proposition~\ref{prop:convexint} for the function $v_j$ \emph{restricted to the set} $D_j$, we get a new function $w_j \in \mathrm{W}^{1,p}(D_j;\mathbb{R}^d)$ with
\[
\det\, \nabla w_j \geq 1/k \quad\text{a.e.\ in $D_j$}, \qquad
w_j = v_j \quad\text{on $\partial D_j$,}
\]
where as usual the boundary assertion is to be understood in the sense of trace. If $\partial D_j$ intersects $\partial \Bbb^d$, the boundary assertion is to include $w_j(y) = v_j(y) = A_0y$ for $y \in \partial D_j \cap \partial \Bbb^d$. Moreover, we have
\[
\norm{\nabla w_j - \nabla v_j}^p_{\mathrm{L}^p(D_j;\mathbb{R}^{d \times d})} \leq C_p \int_{E_j} \, \absBB{ \frac{1}{k} - \det\, \nabla v_j(y) }^{p/d} \;\mathrm{d} y \leq \frac{C_p}{n},
\]
where the constant $C_p = C(d,p)$ changes from expression to expression.
Now extend our new $w_j$, which at the moment is defined in $D_j$ only, to a function on all of $\Bbb^d$ by setting
\[
w_j(y) := v_j(y) \qquad\text{for $y \in \Bbb^d \setminus D_j$.}
\]
Since $w_j$ agrees with $v_j$ on the boundary of $D_j$, we deduce $w_j \in \mathrm{W}^{1,p}(\Bbb^d;\mathbb{R}^d)$ and also
\[
\norm{\nabla w_j - \nabla v_j}^p_p \leq \frac{C_p}{n}.
\]
We will show next the crucial fact that the family of functions $\{ \kappa( \det \nabla w_j ) \}_j$ is equiintegrable. For this, we estimate for any $\ell \in \mathbb{N}$ (large enough so that $\kappa$ is decreasing on $(0,1/\ell)$):
\begin{align*}
&\int_{\{\det\, \nabla w_j \leq 1/\ell \}} \kappa( \det\, \nabla w_j(y) ) \;\mathrm{d} y \\
&\qquad \leq \kappa(1/k) \cdot \abs{E_j} + \int_{\{1/(k+1) \leq \det\, \nabla v_j \leq 1/\ell \}} \, \kappa( \det\, \nabla v_j(y) ) \;\mathrm{d} y \\
&\qquad \leq C \biggl( \frac{1}{n} + \int_{\{\det\, A \leq 1/\ell \}} \kappa( \det\, A ) \;\mathrm{d} \nu_{x_0}(A) \biggr).
\end{align*}
Here, for the first integral we used that $\det \nabla w_j \geq 1/k$ on $D_j \subset E_j$ and~\eqref{eq:YM_conv1}; the second integral was estimated using~\eqref{eq:YM_conv2}. Now recall that $j = j(n)$ was chosen depending on $n$ (and also on $k$, but this is again chosen depending on $n$). Thus, we may take the limit superior as $n \to \infty$ or, equivalently, as $j\to\infty$, to get
\[
\limsup_{j\to\infty} \int_{\{\det\, \nabla w_j \leq 1/\ell \}} \kappa( \det\, \nabla w_j(y) ) \;\mathrm{d} y
\leq C \int_{\{\det\, A \leq 1/\ell \}} \kappa( \det\, A ) \;\mathrm{d} \nu_{x_0}(A)
\]
and this vanishes as $\ell \uparrow \infty$.
Hence, we conclude that $\{ \kappa( \det \nabla w_{j(n)} ) \}_n$, or, without labeling the subsequence of $j$'s, $\{ \kappa( \det \nabla w_{j} ) \}_j$, is an equiintegrable family, i.e.\ after renaming the sequence we arrive at~\eqref{eq:kappa_equiint}. More precisely, given $K > 0$ we choose $\ell \in \mathbb{N}$ such that $\kappa(1/\ell) < K \leq \kappa(1/(\ell+1))$, whereby $\{ \kappa(\det\, A) > K \} \subset \{ \det\, A < 1/\ell \}$ and since $\ell \uparrow \infty$ as $K \uparrow \infty$ the above assertion implies the sought equiintegrability.
\end{proof}
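To illustrate condition~\eqref{eq:kappa_cond} in the simplest case (an observation for orientation, not used in the sequel): for an elementary Young measure $\nu_x = \delta_{\nabla u(x)}$, it reads
\[
\int \kappa(\det\, A) \;\mathrm{d} \nu_x(A) = \kappa(\det\, \nabla u(x)) < \infty
\qquad\text{for a.e.\ $x\in\Omega$,}
\]
which, as $\kappa \equiv +\infty$ on $(-\infty,0]$ and $\kappa$ is finite on $(0,\infty)$, is precisely the orientation-preserving constraint $\det\,\nabla u > 0$ a.e.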
\subsection{Connection to the Dacorogna--Moser theory and extensions}
\label{ssc:DM}
We investigate a similar question as in~\cite{DacMos90PDEJ}; however, in subcritical Sobolev spaces, the geometric interpretation no longer holds (which is manifested in the absence of compatibility conditions on the boundary).
\begin{theorem}\label{damo}
Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain, $1<p<d$, $J:\Omega\to\mathbb{R}$ be measurable with
\begin{equation*}
\int_{\Omega}|J(x)|^{p/d} \;\mathrm{d} x<\infty,
\end{equation*}
and let $g\in \mathrm{W}^{1-1/p,p}(\partial\Omega;\mathbb{R}^d)$. Then, there exists $v\in \mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ such that
\[
\left\{
\begin{aligned}
\det\nabla v(x)&=J(x) &&\quad\text{for a.e.\ $x \in \Omega$,}\\
v|_{\partial\Omega}&=g &&\quad\text{in the sense of trace.}
\end{aligned}
\right.
\]
\end{theorem}
\begin{proof}
Since the trace operator is surjective from $\mathrm{W}^{1,p}(\Omega)$ to $\mathrm{W}^{1-1/p,p}(\partial\Omega)$, there exists $u\in \mathrm{W}^{1,p}(\Omega)$ such that $u|_{\partial\Omega}=g$ in the trace sense. The statement then follows immediately by Corollary~\ref{cor:geometry} combined with Proposition~\ref{prop:convexint}, taking $R(x,A)=|\det\, A-J(x)|\in\mathcal{R}^{p,d}(\Omega;\mathbb{R}^{d \times d})$.
\end{proof}
\begin{corollary}
Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain, $1<p<d$ and $g\in \mathrm{W}^{1-1/p,p}(\partial\Omega;\mathbb{R}^d)$. Then, there exists a map $v\in \mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ such that
\[
\left\{
\begin{aligned}
\det\nabla v(x)&=1 &&\quad\text{for a.e.\ $x \in \Omega$,}\\
v|_{\partial\Omega}&=g &&\quad\text{in the sense of trace.}
\end{aligned}
\right.
\]
\end{corollary}
Of course, the constraint $\det \nabla u(x) = J(x)$ for a given $J \colon \Omega' \to \mathbb{R}$ satisfying the usual assumptions can also be treated.
\subsection{Relaxation}
\label{ssc:relax}
Consider the following two functionals for a Carath\'{e}odory function $f \colon \Omega \times \mathbb{R}^{d\times d} \to \mathbb{R}$ and a function $\bar{u}\in\mathrm{W}^{1,p}(\Omega)$:
\begin{itemize}
\item $\mathcal{F}[u]:= \displaystyle \int_{\Omega}f(x,\nabla u(x)) \;\mathrm{d} x$,\qquad defined over the set
\[
\mathcal{A}:=\setb{u\in \mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)}{u\vert_{\partial\Omega}=\bar{u},\,\nabla u(x)\in S_R\text{ a.e.}},
\]
where $S_R = S_{\det\geq r} :=\setb{A\in\mathbb{R}^{d\times d}}{\det\, A \geq r}$ or $S_R = S_{\det=r} :=\setb{A\in\mathbb{R}^{d\times d}}{\det\, A = r}$.
\item $\mathcal{F}^{YM}(\nu):= \displaystyle\int_{\Omega} \int f(x,A) \;\mathrm{d} \nu_x(A) \;\mathrm{d} x$,\qquad defined over the set
\[
\mathcal{A}^{YM}:=\setb{\nu\,\text{$p$-GYM}}{ \text{$\supp{\nu_x}\subset S_R$ a.e.\ , $[\nu]=\nabla u$, $u\in\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$, $u\vert_{\partial\Omega}=\bar{u}$}},
\]
where we used \enquote{$p$-GYM} as an abbreviation for \enquote{gradient $p$-Young measure}.
\end{itemize}
The following relaxation theorem holds:
\begin{corollary}\label{thm:relaxation}
Suppose that $\Omega\subset\mathbb{R}^d$ is a bounded Lipschitz domain, $1<p<d$, $\bar{u}\in\mathrm{W}^{1,p}(\Omega)$, and $f: \Omega \times \mathbb{R}^{d\times d}\rightarrow\mathbb{R}$ is a Carath\'{e}odory function satisfying
\[
c(\abs{A}^p-1)\leq f(x,A)\leq C(1+\abs{A}^p)
\]
for all $(x,A)\in\Omega\times\mathbb{R}^{d\times d}$ and constants $0<c\leq C$. Then,
\[
\inf_{\mathcal{A}}\, \mathcal{F}=\min_{\mathcal{A}^{YM}}\, \mathcal{F}^{YM}.
\]
In particular, whenever $(u_j)$ is an infimizing sequence of $\mathcal{F}$ in $\mathcal{A}$, a subsequence of $(\nabla u_j)$ generates a Young measure $\nu\in\mathcal{A}^{YM}$ minimizing $\mathcal{F}^{YM}$ over $\mathcal{A}^{YM}$. Conversely, whenever $\nu$ minimizes $\mathcal{F}^{YM}$ in $\mathcal{A}^{YM}$, there exists an infimizing sequence $(u_j)$ of $\mathcal{F}$ in $\mathcal{A}$ such that $(\nabla u_j)$ generates $\nu$.
\end{corollary}
\begin{proof}
Given the characterization of gradient $p$-Young measures with support in $S_R$ above, the proof is standard.
\end{proof}
Note that, in our regime of $p<d$, the determinant is not in general weakly continuous along infimizing sequences and one cannot take
\[
\mathcal{A}^{YM}=\setb{\nu\,\text{$p$-GYM}}{ \text{$\supp{\nu_x}\subset S_R$ a.e.\ , $[\nu]=\nabla u$, where $u\in\mathcal{A}$}}
\]
as the set of admissible measures in the above relaxation theorem.
\subsection{Approximation}
\label{ssc:approx}
Next, we obtain the following interesting approximation result:
\begin{corollary}\label{cor:approximation}
Let $\Omega\subset\mathbb{R}^d$ be open and bounded with $|\partial\Omega|=0$. Suppose that $1 < p < d$ and $u\in\mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$. For $S_R$, where either $R=r-\det\, A$ or $R = |r-\det\, A|$, there exists a bounded sequence $(u_j)\subset\mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$ such that for all $j\in\mathbb{N}$, $u_j - u\in\mathrm{W}^{1,p}_0$, $\nabla u_j(x)\in S_R$ for a.e.~$x\in\Omega$ and as $j\to\infty$
\[
u_j \rightharpoonup u\mbox{ in $\mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$.}
\]
In particular, $\norm{u_j - u}_{p}\to 0$ as $j\to\infty$.
\end{corollary}
\begin{proof}
Let $u\in \mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$ and define a gradient $p$-Young measure $(\nu_x)$ with $[\nu]=\nabla u$ by
\[
\nu_x = \left\{\begin{array}{ll} \delta_{\nabla u(x)}, &\nabla u(x)\in S_R \\ \mu_x, & \nabla u(x)\notin S_R\end{array}\right. ,
\]
where $\mu_x$ is the homogeneous gradient $p$-Young measure provided by the fact that $\mathbb{R}^{d\times d}$ is tightly contained in the $p$-quasiconvex hull of $S_R$ for either $S_{\det \geq r}$ (see \proofstep{Step~1} in the proof of Corollary~\ref{cor:geometry}) or $S_{\det = r}$ (see Proposition~\ref{prop:geometry}). By Theorem~\ref{thm:main2}, there exists $(u_j)\subset \mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$ generating $(\nu_x)$ such that $\nabla u_j(x)\in S_R$, $u_j - u\in\mathrm{W}^{1,p}_0(\Omega,\mathbb{R}^d)$ and $u_j\rightharpoonup u$ in $\mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$.
\end{proof}
For simplicity, we only stated this result for the constraints $\det \geq r$ and $\det = r$ which are relevant in elasticity; nevertheless, we note that the same result holds for the more general constraint
\[
J_1(x) \leq \det\nabla u_j(x) \leq J_2(x)\qquad\text{for all $j$ and a.e.~$x$},
\]
with $J_1$, $J_2$ as in Theorem~\ref{thm:main2}. We note that this produces a large class of counterexamples to the weak continuity of the determinant in $\mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$ for $p<d$.
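As a concrete instance of the last remark: take $u = \mathrm{id}$, so that $\det\,\nabla u \equiv 1$, and apply Corollary~\ref{cor:approximation} with $R = |2 - \det\, A|$. This produces a bounded sequence $u_j \rightharpoonup u$ in $\mathrm{W}^{1,p}(\Omega,\mathbb{R}^d)$ with
\[
\det\,\nabla u_j = 2 \quad\text{a.e.\ for every $j$,}
\qquad\text{whereas}\qquad
\det\,\nabla u = 1 \quad\text{a.e.,}
\]
so the determinant is indeed not weakly continuous for $p<d$.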
\section{Lack of lower semicontinuity for a class of functionals}\label{sec:lsc}
A \term{singular growth modulus} (cf. Theorem~\ref{thm:kappa_equiint}) is a convex function $\kappa \colon (0,\infty) \to [0,\infty)$ with $\kappa(s) \to +\infty$ as $s \to 0$. We extend $\kappa$ by setting $\kappa(s) := +\infty$ for $s\leq 0$. For $p<d$, let us assume the growth condition
\begin{equation} \label{eq:kappa_growth}
\limsup_{s \to +\infty}\, \frac{\kappa(s)}{s^{p/d}} < \infty.
\end{equation}
In what follows, $f \colon \Omega \times \mathbb{R}^{d \times d} \to [0,\infty]$ will be a Carath\'{e}odory integrand satisfying the \term{elastic coercivity/growth estimates}
\begin{equation} \label{eq:f_elasticity_est}
\frac{1}{M} \bigl( \abs{A}^p + \kappa( \det\, A ) \bigr) \leq f(x,A) \leq M \bigl( 1 + \abs{A}^p + \kappa( \det\, A ) \bigr)
\end{equation}
for a constant $M > 0$.
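A prototypical example (for illustration; these specific choices are ours): $\kappa(s) := s^{-q}$ for $s > 0$ and any fixed $q > 0$ is convex, satisfies $\kappa(s) \to +\infty$ as $s \to 0$, and fulfills~\eqref{eq:kappa_growth} since
\[
\limsup_{s \to +\infty}\, \frac{s^{-q}}{s^{p/d}} = 0.
\]
The corresponding Ogden-type stored-energy density
\[
f(x,A) := \abs{A}^p + (\det\, A)^{-q} \qquad (\det\, A > 0)
\]
then satisfies the elastic coercivity/growth estimates~\eqref{eq:f_elasticity_est} with $M = 1$.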
In this section, we want to show that under these assumptions, the functional
\begin{equation} \label{eq:F_def}
\mathcal{F}[u] := \int_\Omega f(x,\nabla u(x)) \;\mathrm{d} x, \qquad
\text{where $u \in \mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ with $\det\, \nabla u > 0$ a.e.,}
\end{equation}
is \emph{not} $\mathrm{W}^{1,p}$-weakly lower semicontinuous along sequences $u_j \rightharpoonup u$ in $\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ satisfying the additional constraint
\[
\det\, \nabla u > 0 \quad\text{a.e.}
\]
We show this in two steps: first, we show that this form of lower semicontinuity implies a certain quasiconvexity condition on $f$; second, we prove that no such integrands exist under the growth conditions~\eqref{eq:f_elasticity_est}.
More precisely, let $h \colon \mathbb{R}^{d \times d} \to (-\infty,+\infty]$ be a Borel function that is locally bounded on (i.e.\ bounded on any compact subset of) the set $\setn{ A \in \mathbb{R}^{d \times d} }{ \det\, A > 0 }$. We call $h$ \term{$\mathrm{W}^{1,p}$-orientation-preserving quasiconvex} if
\[
h(A_0) \leq \,\Xint-_{\Bbb^d} h(\nabla v(x)) \;\mathrm{d} x
\]
for all $A_0 \in \mathbb{R}^{d \times d}$ with $\det\, A_0 > 0$ and all $v \in \mathrm{W}^{1,p}(\Bbb^d;\mathbb{R}^d)$ with $v(x) = A_0x$ on $\partial \Bbb^d$ (in the sense of trace) and $\det\, \nabla v > 0$ almost everywhere (recall that $\Bbb^d$ denotes the unit ball in $\mathbb{R}^d$).
We note that under the additional $p$-growth condition $\abs{h(A)} \leq M(1+\abs{A}^p)$ the notion of $\mathrm{W}^{1,p}$-orientation-preserving quasiconvexity is weaker than the usual quasiconvexity~\cite{Morr52QSMI,Daco08DMCV}, since it is clearly weaker than $\mathrm{W}^{1,p}$-quasiconvexity.
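Schematically, and merely restating this comparison (the implications below are standard, cf.~\cite{BalMur84WQVP} for the first): under the $p$-growth condition $\abs{h(A)} \leq M(1+\abs{A}^p)$ one has
\[
\text{quasiconvex}
\;\Longrightarrow\;
\mathrm{W}^{1,p}\text{-quasiconvex}
\;\Longrightarrow\;
\mathrm{W}^{1,p}\text{-orientation-preserving quasiconvex},
\]
where the second implication holds simply because the orientation-preserving notion tests against the smaller class of maps $v$ with $\det\,\nabla v > 0$ a.e.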
\begin{remark}
We remark that, starting from the prototypical example of the determinant, there is a sizeable literature on the weak lower semicontinuity of polyconvex and quasiconvex functionals below the critical exponent $p=d$. As this lies outside the scope of the present work the reader is referred to \cite{FM97,Marcellini86,Maly93,AcDalM94,DalMSb95} and references therein.
\end{remark}
Returning to our result, we then have:
\begin{proposition} \label{prop:converse}
For $1 < p < d$ and a bounded open Lipschitz domain $\Omega \subset \mathbb{R}^d$, let $\kappa$ be a singular growth modulus with~\eqref{eq:kappa_growth} and assume that $f \colon \Omega \times \mathbb{R}^{d \times d} \to [0,\infty)$ is a Carath\'{e}odory integrand satisfying the elastic coercivity/growth estimates~\eqref{eq:f_elasticity_est}. Also, let the functional $\mathcal{F}$ be defined as in~\eqref{eq:F_def}. If $\mathcal{F}$ is $\mathrm{W}^{1,p}$-weakly lower semicontinuous along sequences $u_j \rightharpoonup u$ in $\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ satisfying the additional assumption $\det\, \nabla u > 0$ a.e., then
\[
\text{$f(x,\,\sbullet\,)$ is $\mathrm{W}^{1,p}$-orientation-preserving quasiconvex for almost every $x \in \Omega$.}
\]
\end{proposition}
\begin{proof}
We assume that $f(x,A) = h(A)$ does not depend on $x$ (otherwise, one needs to use an additional localization argument).
Let $A_0 \in \mathbb{R}^{d \times d}$ with $\det\, A_0 > 0$ and let $v \in \mathrm{W}^{1,p}(\Bbb^d;\mathbb{R}^d)$ with $v(x) = A_0x$ on $\partial \Bbb^d$ (in the sense of trace) and $\det\, \nabla v > 0$ a.e. For each $j \in \mathbb{N}$, by virtue of the Vitali covering theorem, find a covering of $\mathcal{L}^d$-almost all of $\Bbb^d$ by balls $B(x_k,r_k) \subset \Bbb^d$, $k \in \mathbb{N}$, such that $r_k \leq 1/j$, and define
\[
w_j(x) := \sum_k \mathbbm{1}_{B(x_k,r_k)}(x) \Bigl[ r_k \, v \Bigl( \frac{x-x_k}{r_k} \Bigr) + A_0 x_k \Bigr],
\]
hence $w_j(x) = A_0 x$ for $x \in \partial \Bbb^d$ (in the sense of trace) and
\[
\nabla w_j(x) = \sum_k \mathbbm{1}_{B(x_k,r_k)}(x) \nabla v \Bigl( \frac{x-x_k}{r_k} \Bigr).
\]
Then,
\begin{align*}
\int_{\Bbb^d} h(\nabla w_j(x)) \;\mathrm{d} x &= \sum_k \int_{B(x_k,r_k)} h \Bigl( \nabla v \Bigl( \frac{x-x_k}{r_k} \Bigr) \Bigr) \;\mathrm{d} x
= \sum_k r_k^d \int_{\Bbb^d} h(\nabla v(y)) \;\mathrm{d} y \\
&= \int_{\Bbb^d} h(\nabla v(y)) \;\mathrm{d} y.
\end{align*}
Here we changed variables via $y = (x-x_k)/r_k$ and used that $\sum_k r_k^d = 1$, since the balls $B(x_k,r_k)$ cover $\Bbb^d$ up to a null set. Also, $w_j$ converges weakly to the linear function $x \mapsto A_0x$ in $\mathrm{W}^{1,p}(\Bbb^d;\mathbb{R}^d)$. Thus, since the weak limit satisfies $\det\, \nabla u(x) = \det\, A_0 > 0$, the assumed lower semicontinuity implies
\[
h(A_0) \leq \liminf_{j\to\infty} \,\Xint-_{\Bbb^d} h(\nabla w_j(x)) \;\mathrm{d} x = \,\Xint-_{\Bbb^d} h(\nabla v(y)) \;\mathrm{d} y,
\]
and $h$ is $\mathrm{W}^{1,p}$-orientation-preserving quasiconvex.
\end{proof}
Next, we prove:
\begin{proposition}\label{prop:nonexistence}
Suppose that $h$ satisfies the growth conditions~\eqref{eq:f_elasticity_est} for some $p\in (1,d)$. Then $h$ is not $\mathrm{W}^{1,p}$-orientation-preserving quasiconvex.
\end{proposition}
\begin{proof}
For $\epsilon>0$, define $A^{\epsilon}=\epsilon I$, where $I$ denotes the $d\times d$ identity matrix. By Propositions~\ref{prop:convexint} and~\ref{prop:geometry} with $r=1$, there exists $v^{\epsilon}\in \mathrm{W}^{1,p}(\Bbb^d)$ such that $\det\,\nabla v^{\epsilon}=1$ almost everywhere and $v^{\epsilon}-A^{\epsilon}x\in\mathrm{W}_0^{1,p}(\Bbb^d)$. Moreover, by Proposition~\ref{prop:convexint},
\begin{equation*}
\norm{\nabla v^{\epsilon}-A^{\epsilon}}_p^p\leq C\int_{\Bbb^d}|\det\, A^{\epsilon}-1|^{p/d}\;\mathrm{d} x,
\end{equation*}
whence it follows (observing $\det\, A^{\epsilon}=\epsilon^d$) that $\norm{\nabla v^{\epsilon}}_p<C$ for a constant independent of $\epsilon$ (at least when $\epsilon$ is small). By~\eqref{eq:f_elasticity_est}, on one hand
\begin{equation*}
\lim_{\epsilon\searrow0}h(A^{\epsilon})=+\infty,
\end{equation*}
but on the other hand
\begin{equation*}
\begin{aligned}
\,\Xint-_{\Bbb^d} h(\nabla v^{\epsilon}(x)) \;\mathrm{d} x&\leq M\,\Xint-_{\Bbb^d}\bigl(1+|\nabla v^{\epsilon}(x)|^p+\kappa(1)\bigr)\;\mathrm{d} x\\
&\leq C\bigl(1+\kappa(1)+\norm{\nabla v^{\epsilon}}_p^p\bigr)\leq C.
\end{aligned}
\end{equation*}
Since $\epsilon>0$ was arbitrary, it follows that $h$ cannot be $\mathrm{W}^{1,p}$-orientation-preserving quasiconvex.
\end{proof}
The combination of Propositions~\ref{prop:converse} and~\ref{prop:nonexistence} finally yields:
\begin{theorem}\label{thm:wlsc}
For $1 < p < d$ and a bounded open Lipschitz domain $\Omega \subset \mathbb{R}^d$, let $\kappa$ be a singular growth modulus with~\eqref{eq:kappa_growth} and assume that $f \colon \Omega \times \mathbb{R}^{d \times d} \to [0,\infty)$ is a Carath\'{e}odory integrand satisfying the elastic coercivity/growth estimates~\eqref{eq:f_elasticity_est}. Also, let the functional $\mathcal{F}$ be defined as in~\eqref{eq:F_def}. Then, $\mathcal{F}$ is not $\mathrm{W}^{1,p}$-weakly lower semicontinuous along sequences $u_j \rightharpoonup u$ in $\mathrm{W}^{1,p}(\Omega;\mathbb{R}^d)$ satisfying the additional assumption $\det\, \nabla u > 0$ a.e.
\end{theorem}
\begin{remark}
\begin{itemize}
\item[a)] Since every $\mathrm{W}^{1,p}$-quasiconvex function, cf.~\cite{BalMur84WQVP}, is clearly $\mathrm{W}^{1,p}$-orientation-preserving quasiconvex, it follows from the theorem that there exist no $\mathrm{W}^{1,p}$-quasiconvex functions with the growth conditions~\eqref{eq:f_elasticity_est}.
\item[b)] It is apparent from the proof of Proposition~\ref{prop:nonexistence} that the theorem still holds if the upper bound in~\eqref{eq:f_elasticity_est} is weakened to $f(x,A)\leq M(1+|A|^p)$ for all matrices such that $\det\, A=r$ for \emph{some} $r>0$.
\end{itemize}
\end{remark}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
The double $S^1$-transfer is a stable morphism $\mathrm{tr}_2 : \mathbb{C} \mathbf{P}^\infty _+ \smash \mathbb{C} \mathbf{P}^\infty_+
\rightarrow S^{-2}$; determining its image in stable homotopy groups is a fundamental problem. This has an algebraic counterpart with respect to any complex oriented cohomology theory, in particular complex cobordism $MU$. Namely, there is an algebraic double transfer $[e_\tau]^2$, which is a class in $\mathrm{Ext}^2_{MU_* MU} (MU_* (\mathbb{C} \mathbf{P}^\infty_+\smash \mathbb{C} \mathbf{P}^\infty_+), MU_* [-4]) $, where $[-4]$ denotes the shift in internal degree and $\mathrm{Ext}$ is calculated in the category of comodules over the Hopf algebroid $(MU_*, MU_ *MU)$; this induces a morphism
\[
\hom_{MU_* MU}^* (MU_*, MU_* (\mathbb{C} \mathbf{P}^\infty_+ \smash \mathbb{C} \mathbf{P}^\infty_+))
\rightarrow
\mathrm{Ext}^{2,*}_{MU_* MU} (MU_*, MU_* [-4])
\]
where the left hand side corresponds to the graded abelian group of $MU_*MU$-comodule primitives and the right hand side identifies with the $2$-line of the Adams-Novikov $E^2$-term.
The corresponding algebraic framework for the single transfer is well understood, by the results of Miller \cite{miller}. For the double transfer, the situation is more complicated since the $MU_*MU$-comodule primitives of $MU_* (\mathbb{C} \mathbf{P}^\infty_+ \smash \mathbb{C} \mathbf{P}^\infty_+)$ are not fully understood and due to the additional complexity in passing from the Adams-Novikov $1$-line to the $2$-line. Baker approached this algebraic question in \cite{baker_transfer} by using Morava $K$-theory, working $p$-locally for a prime $p \geq 5$, in particular studying a family of primitives derived from the work of Knapp \cite{knapp_habilit}.
This paper approaches the algebraic question by two related methods, namely via the $f$-invariant of Laures \cite{laures} (which requires that $6$ is inverted) and via the $f'$-invariant of Behrens \cite{behrens}, which is defined when working $p$-locally for $p \geq 5$. Both are constructed by using elliptic homology, which is defined as a Landweber exact, complex oriented theory when $6$ is invertible. (In particular, for current purposes, it is not necessary to use topological modular forms.)
The $f'$-invariant arises naturally when attempting to exploit the fact that the double transfer admits a chromatic factorization of the form
\[
\mathbb{C} \mathbf{P}^\infty_+ \smash \mathbb{C} \mathbf{P}^\infty_+ \rightarrow S^{-4}/p^\infty, v_1 ^\infty
\]
for $p \geq 3$, which was first constructed by Hilditch (see \cite{BCGHRW}). The explicit determination of the induced morphism in $MU$-homology is non-trivial; it is determined implicitly here by using the Hattori-Stong theorem.
The first part of the paper explains this (see Theorem \ref{thm:MU-thetas} and Proposition \ref{prop:MUtheta_sigmap}) and relates it explicitly to calculations involving the algebraic double transfer, using standard chromatic technology. The result of Proposition \ref{prop:restrict_Xi_prim} is in principle sufficient to be able to calculate the algebraic double transfer on primitive elements; however, identifying the associated classes in $\mathrm{Ext}^2$ is non-trivial (compare \cite{baker_transfer}).
In the second half of the paper, complex cobordism is replaced by elliptic homology and the $f$ and $f'$ invariants arising from comodule primitives of ${Ell}_* (\mathbb{C} \mathbf{P}^\infty_+ \smash \mathbb{C} \mathbf{P}^\infty_+) $ are considered. The $f$-invariant on primitives is given by Theorem \ref{thm:f-pspt}, as a consequence of Proposition \ref{prop:restrict_Xi_prim}, whereas the $f'$-invariant on primitives is given (implicitly) by Theorem \ref{thm:fprime}. These results should shed light on the comodule primitives which are detected by the algebraic double transfer.
Behrens and Laures \cite{behrens_laures} have established the relationship between the $f$ and $f'$ invariants. For the invariants associated to comodule primitives via the double transfer, this relationship is transparent from chromatic technology, as indicated in Remark \ref{rem:relate_f,f'}.
\tableofcontents
\section{Chromatic factorization using $\mathrm{Im}(J)$}
\label{sect:hattori}
This section reviews the techniques for calculating morphisms to the spectrum $L_1 S /p^\infty$ of the
chromatic filtration, for $p$ a fixed prime, and how to calculate the induced $MU_*MU$-comodule morphisms by using the Hattori-Stong
theorem.
The terms ring spectrum and module spectrum refer to the weak, up to homotopy notions. If $E$ is
a ring spectrum and $M$ is an $E$-module, the morphism of
$E$-module spectra induced by a morphism of spectra $f : X \rightarrow M$ is denoted $\tilde{f} : E \smash X \rightarrow M$.
\subsection{Non-connective $\mathrm{Im}(J)$-theory}
Let $\gamma \in \mathbb{Z}$ be a topological generator of the $p$-adic
units $\zed_p^\times$. Non-connective image of $J$ theory, ${Ad}$, is
defined by the cofibre
sequence
\[
{Ad}
\rightarrow
KU_{(p)}
\stackrel{\psi^\gamma -1} {\rightarrow}
KU_{(p)}
\rightarrow ,
\]
where $\psi^\gamma$ is the stable Adams operation, which is a morphism of ring spectra. The homotopy type of ${Ad}$ is independent of the choice
of $\gamma$ (cf. \cite{knapp}).
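For orientation, the homotopy groups of ${Ad}$ are readily computed from the defining cofibre sequence (a standard sketch; we only use that $\psi^\gamma$ acts on $\pi_{2n} KU_{(p)} \cong \mathbb{Z}_{(p)}$ as multiplication by $\gamma^n$): for $n \neq 0$,
\[
\pi_{2n} {Ad} = 0,
\qquad
\pi_{2n-1} {Ad} \cong \mathbb{Z}_{(p)}/(\gamma^n - 1),
\]
and these cyclic groups recover the $p$-local image-of-$J$ groups, whence the name.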
The spectrum ${Ad}$ is a $KU$-module spectrum, in particular is
$KU$-local; moreover, there are equivalences
\[
{Ad} / p^\infty
\simeq
{Ad} \smash S/p^\infty
\simeq
(L_1 S) \smash S/p^\infty
\simeq
L_1 (S/p^\infty)
\]
(cf. \cite[Lemma 8.7]{ravenel}), where $L_1$ is Bousfield localization with respect to $p$-local $K$-theory. Hence there
is a commutative diagram
\begin{eqnarray}
\label{eqn:Ad_cofibre_seq}
\xymatrix{
&& L_1 S/p^\infty
\ar[d]^\alpha
\\
KU_{(p)}
\ar[r]
&
KU _\mathbb{Q}
\ar[r]^q
\ar[d]_{\psi^\gamma -1}
&
KU /p^\infty
\ar[d]^{\psi^\gamma - 1}
\\
&
KU_\mathbb{Q}
\ar[r]
&
KU /p^\infty
}
\end{eqnarray}
in which the three-term vertical and horizontal sequences are cofibre
sequences and $q$ is the reduction morphism. This provides a way of calculating maps to $L_1 S / p^\infty$,
as exploited in \cite[Section 5]{BCGHRW} and \cite{imaoka}, for example.
For the purposes of this paper, the following terminology is introduced.
\begin{defn}
A $\mathbb{Q}$-representative of a morphism of spectra $g : Y \rightarrow L_1 S /p^\infty$ is a morphism $f : Y \rightarrow KU_\mathbb{Q}$ which makes the following diagram commute
\begin{eqnarray}
\label{eqn:KU-commutative-ad}
\xymatrix{
Y
\ar[r]^g
\ar[d]_f
&
L_1 S /p^\infty
\ar[d]^\alpha
\\
KU_\mathbb{Q}
\ar[r]_q
&
KU/p^\infty.
}
\end{eqnarray}
\end{defn}
\begin{lem}
If $f$ is a $\mathbb{Q}$-representative of $g$, then $(\psi^\gamma -1) f$ lies in the image of $KU^0_{(p)}Y
\rightarrow KU^0_\mathbb{Q} Y$.
\end{lem}
\begin{proof}
Follows from the commutativity of the square in diagram (\ref{eqn:Ad_cofibre_seq}).
\end{proof}
\begin{prop}
\label{prop:represent-ad}
Let $Y$ be a spectrum such that $KU ^*_{(p)} Y$ is a finitely-generated free $KU_{(p)*}$-module and
$KU^{\mathrm{odd}}_{(p)}Y=0$. Then
\begin{enumerate}
\item
the morphism $[Y, L_1 S /p^\infty ] \rightarrow [Y, KU/p^\infty]$ is injective;
\item
any morphism $g : Y \rightarrow L_1 S /p^\infty$ admits a $\mathbb{Q}$-representative;
\item
a morphism $f: Y \rightarrow KU_\mathbb{Q}$ such that $(\psi^\gamma -1) f$ lies in the image of $KU^0_{(p)}Y \rightarrow
KU^0_\mathbb{Q} Y$ is the $\mathbb{Q}$-representative of a unique morphism $g : Y \rightarrow L_1 S/p^\infty$.
\end{enumerate}
\end{prop}
\begin{proof}
Straightforward.
\end{proof}
\begin{exam}
The hypotheses of Proposition \ref{prop:represent-ad} are satisfied for $Y$ the Thom spectrum of a finite rank virtual $\mathbb{C}$-vector bundle over $\mathbb{C}\mathbf{P}^n$ and for smash products of spectra of this type.
\end{exam}
\subsection{Chromatic factorization}
Recall that a complex oriented ring spectrum $E$ is Landweber exact if the orientation $MU_* \rightarrow E_*$ is
Landweber exact for the Hopf algebroid $(MU_* , MU_*MU)$ (see Definition \ref{def:Landweber_exact}).
\begin{lem}
\label{lem:Landweber_exact}
Let $E$ be a Landweber exact complex oriented ring spectrum and $Y$ be a spectrum.
\begin{enumerate}
\item There exist natural isomorphisms
\[
\hom_{MU_* MU} (MU_* Y, MU_* E) \cong \hom_{MU_*} (MU_* Y , E_*)
\cong
\hom_{E_*} (E_* Y, E_*).
\]
\item
For a morphism of spectra $f : Y \rightarrow E$, the comodule morphism $MU_* f : MU_* Y \rightarrow MU_* E$ corresponds
via the above isomorphisms to the morphism of $E_*$-modules $\tilde{f}_* : E_* Y \rightarrow E_*$ induced by $
\tilde{f}: E \smash Y \rightarrow E$.
\end{enumerate}
\end{lem}
\begin{proof}
The first isomorphism of part (1) follows from the identification of $MU_*E$ as the extended comodule $MU_*MU \otimes_{MU_*} E_*$ and the second from the isomorphism of $E_*$-modules $E_* Y \cong E_* \otimes_{MU_*} MU_* Y$ which is a consequence of Landweber exactness. The final statement is straightforward.
\end{proof}
\begin{lem}
\label{lem:hattori-stong}
Let $E$ be a Landweber exact complex oriented ring spectrum, then
the morphism $L_1 S /p^\infty \rightarrow KU/p^\infty$ induces a monomorphism of $E_* E$-comodules, $E_* /p^\infty [v_1^{-1}] \hookrightarrow
E_* KU /p^\infty$.
\end{lem}
\begin{proof} By Landweber exactness,
it suffices to prove this result for the universal case $E=MU$, where it is a consequence of the Hattori-Stong theorem (cf. \cite[Proposition 20.33]{switzer}), which states that the $KU$-Hurewicz morphism $MU_* \rightarrow MU_*KU$ is rationally faithful (in the terminology of \cite[Definition 1.1]{laures}), which is equivalent to the statement that $MU_* \otimes \mathbb{Q}/ \mathbb{Z} \hookrightarrow MU_* KU \otimes \mathbb{Q}/ \mathbb{Z}$ is a monomorphism. Hence, on the $p$-local component, this gives a monomorphism of $MU_*MU$-comodules $MU_* /p^\infty \hookrightarrow MU_* KU /p^\infty$.
The morphism of $MU_*MU$-comodules $MU_* /p^\infty [v_1^{-1}] \rightarrow MU_* KU/p^\infty$ corresponds to the localization of the above morphism, inverting $v_1$, since the morphism $L_1 S \rightarrow KU_{(p)}$ factors the unit $S \rightarrow KU_{(p)}$. The result follows.
\end{proof}
\begin{prop}
\label{prop:f,g-Landweber}
Let $E$ be a Landweber exact complex oriented ring spectrum and $g: Y \rightarrow L_1 S /p^\infty$ be a morphism of spectra which admits a $\mathbb{Q}$-representative $f: Y \rightarrow KU_\mathbb{Q}$. Then
\begin{enumerate}
\item
the morphism
$E_* (g) : E_* Y \rightarrow E_* (L_1 S /p^\infty)$ is determined by $E_* (f)$ via the commutative diagram of morphisms of $E_*E$-comodules:
\[
\xymatrix{
E_* Y
\ar[r]^{E_* (g)}
\ar[d]_{E_*(f)}
&
E_*/p^\infty [v_1^{-1}]
\ar@{^(->}[d]
\\
E_* KU \otimes \mathbb{Q}
\ar@{->>}[r]
&
E_* KU/p^\infty .
}
\]
\item
The morphism $E_* (f)$ is determined by the morphism of $KU_*$-modules $\tilde{f}_* : KU_* Y \rightarrow KU_*
\otimes \mathbb{Q}$ induced by $f$.
\end{enumerate}
\end{prop}
\begin{proof}
Again, by Landweber exactness, it is sufficient to prove the result for the universal case, $E = MU$.
The commutative diagram (\ref{eqn:KU-commutative-ad}) induces a commutative diagram of $MU_* MU$-comodules:
\[
\xymatrix{
MU_* Y
\ar[r]^{MU_* (g)}
\ar@{^(->}[d]_{MU_*(f)}
&
MU_* /p^\infty [v_1 ^{-1}]
\ar@{^(->}[d]
\\
MU_*KU \otimes \mathbb{Q}
\ar@{->>}[r]
&
MU_* KU /p^\infty,
}
\]
in which $MU_* /p^\infty [v_1^{-1}] \rightarrow MU_* KU /p^\infty$ is
injective, by the Hattori-Stong theorem (Lemma \ref{lem:hattori-stong}),
and $MU_* KU \otimes \mathbb{Q} \rightarrow
MU_* KU
/p^\infty$ is the canonical surjection. Thus, $MU_* (g)$ is determined by the
total composite of the diagram, hence by the morphism of $MU_*MU$-comodules
$MU_* Y \stackrel{MU_* (f)}{\rightarrow}MU_* KU \otimes
\mathbb{Q}$.
The final statement follows from Lemma \ref{lem:Landweber_exact}, which implies that the morphism $MU_*(f)$ is the composite
\[
MU_* Y
\stackrel{\psi}{\rightarrow}
MU_* MU \otimes _{MU_*} MU_* Y
\rightarrow
MU_*MU \otimes _{MU_*} KU_* Y
\stackrel{MU_* MU \otimes \tilde{f}_*}{\rightarrow}
MU_*KU \otimes \mathbb{Q},
\]
where $\psi$ is the comodule structure morphism and the second morphism is induced by $MU_* Y
\rightarrow KU_* Y$, given by the orientation of $KU$.
\end{proof}
In the case $E = KU$, this can be made more precise, by using the augmentation $KU_*KU \rightarrow KU_*$:
\begin{lem}
\label{lem:comodule-factorization}
Let $g: Y \rightarrow L_1 S /p^\infty$ be a morphism of spectra which admits a $\mathbb{Q}$-representative $f: Y \rightarrow KU_\mathbb{Q}$.
Then there is an induced commutative diagram of
morphisms of $KU_*$-modules:
\[
\xymatrix{
KU_* Y
\ar[r]^{KU_* (g)}
\ar[d]^{KU_* (f)}
\ar @{-->}@/_3pc/[dd]_{\tilde{f}_*}
&
KU_*/p^\infty
\ar[d]
\ar@/^3pc/[dd]^{KU_* /p^\infty}
\\
KU_*KU \otimes \mathbb{Q}
\ar[r]
\ar@{-->}[d]
&
KU_* KU/p^\infty
\ar@{-->}[d]
\\
KU_* \otimes \mathbb{Q}
\ar@{->>}[r]
&
KU_*/p^\infty
}
\]
in which the solid arrows are morphisms of $KU_*KU$-comodules and the lower
vertical morphisms are induced by the augmentation $KU_* KU \rightarrow KU_*$.
In particular, the comodule morphism $KU_* (g) : KU_* Y \rightarrow KU_*
/p^\infty$ factorizes as morphisms of $KU_*$-modules as
\[
KU_ * Y
\stackrel{\tilde{f}_*} {\rightarrow}
KU_* \otimes \mathbb{Q}
\twoheadrightarrow
KU_* /p^\infty.
\]
\end{lem}
\section{Chromatic factorization of the double transfer}
\label{section:chromatic}
This section reviews the construction of the chromatic factorization of the double transfer (see Theorem
\ref{thm:chromatic-factor}), working $p$-locally at an odd prime $p$. The morphism in $MU_*$-homology is
calculated implicitly in $MU_*MU$-comodules (see Theorem \ref{thm:MU-thetas}), by applying the results of Section \ref{sect:hattori}.
\subsection{Generalized Bernoulli numbers}
\label{subsect:recollections}
To fix notation, recall that the Hopf algebroid $(MU_* , MU_*MU)$ is isomorphic to the Hopf algebroid $(L, LB)$ which represents the groupoid scheme
of formal group laws and strict isomorphisms, where $L$ is the Lazard ring and
$L B \cong L [b_i | i \geq 0, b_0 =1]$ as a left $L$-algebra (cf.
\cite{ravenel_green}). The $b_i$'s represent the universal strict isomorphism
$\underline{b} (x) = \sum_i b_i x^{i+1}$ between the formal group laws defined
respectively by the left and right units $\eta_L,\eta_R : L \rightrightarrows
LB$, which are determined by their logarithms $\log^L$, $\log^R$ defined over
$LB \otimes\mathbb{Q}$. The exponential series $\exp^L, \exp^R$ over $LB \otimes \mathbb{Q}$ are the respective
composition inverses of $\log^L, \log^R$.
\begin{lem}
\label{lem:identify_b}
The power series $\underline{b}$ satisfies the identity
$
\underline{b} = \exp^R \circ \log ^L.
$
\end{lem}
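With the conventions above, this follows from the defining property of $\underline{b}$: as a strict isomorphism from the formal group law of $\eta_L$ to that of $\eta_R$, it satisfies
\[
\log^R \circ \underline{b} = \log^L
\]
over $LB \otimes \mathbb{Q}$, and composing on the left with $\exp^R$ gives the stated identity. The equivalent form $\log^L = \log^R \circ \underline{b}$ is the one used in the proof of Proposition \ref{prop:calculate_Xi}.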
\begin{defn}
\cite[Definition 1.1]{miller}
Let $F$ be a formal group law defined over a ring $R$. The Bernoulli numbers
$B_n(F) \in R \otimes \mathbb{Q}$, for strictly positive integers $n \in \mathbb{Z}_{>0}$, are defined by
\[
\frac{1}{\exp^F x} - \frac{1}{x} =
\sum _{i\geq 0} \frac{B_{i+1}(F)}{(i+1)!} x^i,
\]
where $\exp^F(x) \in (R \otimes \mathbb{Q}) [[x]]$ is the exponential of $F$. The reduced Bernoulli number $\overline{B}_n (F)$ is defined as $\overline{B}_n (F) :=\frac{B_n (F)}{n} \in R \otimes \mathbb{Q}$.
\end{defn}
\begin{exam}
For $n \in \mathbb{Z}_{>0}$, write $B_n^{KU} \in KU_* \otimes \mathbb{Q}$ (respectively
$\overline{B}_n ^{KU} \in KU_* \otimes \mathbb{Q}$) for the Bernoulli number (resp. reduced Bernoulli number) associated to the
orientation of $KU$. This is a graded form of the usual Bernoulli number $B_n$ (resp. reduced).
\end{exam}
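For illustration, with the standard orientation of $KU$, for which $\exp^{KU}(x) = u^{-1}(e^{ux}-1)$ with $u \in KU_2$ the Bott class, the defining equation reads
\[
\frac{1}{\exp^{KU} x} - \frac{1}{x}
=
\frac{u}{e^{ux}-1} - \frac{1}{x}
=
\sum_{i \geq 0} \frac{B_{i+1} u^{i+1}}{(i+1)!} x^i,
\]
so that $B_n^{KU} = B_n u^n$, which is homogeneous of degree $2n$.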
\begin{rem}
If the formal group law $F$ is graded with respect to the usual conventions (so that the coordinate has degree $-2$), then $B_n (F)$ is a homogeneous element of degree $2n$.
\end{rem}
\begin{rem}
Miller established the following fundamental divisibility property of the reduced Bernoulli numbers: if $R$ is a torsion free ring, then $d_n \overline{B}_n (F) \in R$, where $d_n$ is the order of the reduced Bernoulli number $\overline{B}_n $ in ${\mathbb{Q}/\mathbb{Z}}$ (see \cite[Theorem 1.3]{miller}).
\end{rem}
\subsection{The single $S^1$-transfer}
\begin{defn}
For $n \in \mathbb{Z}$, let $\mathbb{C} \mathbf{P}^\infty_n$ denote the Thom spectrum of the
(virtual) bundle $n \lambda$ over $\mathbb{C} \mathbf{P}^\infty$, where $\lambda$ denotes the canonical
line bundle over $\mathbb{C} \mathbf{P}^\infty$.
\end{defn}
For $E$ a complex oriented ring spectrum, the Thom isomorphism implies that $E_* (\mathbb{C} \mathbf{P}^\infty_n)$ is a free $E_*$-module on
classes $\{\beta_i | i \geq n \}$. (The systems of generators as $n$ varies are compatible, hence $n$ will be omitted from the notation.)
There is a Künneth isomorphism $E_* (\mathbb{C} \mathbf{P}^\infty_m \smash \mathbb{C} \mathbf{P}^\infty _n) \cong E_* (\mathbb{C} \mathbf{P}^\infty_m) \otimes _{E_*} E_* (\mathbb{C} \mathbf{P}^\infty_n)$, and the associated module generators will be written $\beta_i \otimes \beta_j$.
\begin{nota}
For $E$ a complex oriented ring spectrum and $m,n$ integers, let $\underline{\beta}_m (S)$ denote the
Laurent power series $\sum_{i \geq m} \beta_i S ^i$ over $E_* (\mathbb{C} \mathbf{P}^\infty_m)$ and let $\underline{\beta}_m (S) \otimes \underline{\beta}_n (T)$ denote $\sum_{i \geq m, j \geq n} \beta_i \otimes \beta_j S ^iT^j$, defined over $E_* (\mathbb{C} \mathbf{P}^\infty_m \smash \mathbb{C} \mathbf{P}^\infty_n)$.
\end{nota}
Such generating power series provide an efficient way of encoding calculations. For example:
\begin{lem}
\label{lem:comodule_cpn}
\cite[Proposition 3.3]{miller}
Let $n$ be an integer, then the comodule structure $MU_* (\mathbb{C} \mathbf{P}^\infty_n) \rightarrow MU_* MU \otimes_{MU_*} MU_*
(\mathbb{C} \mathbf{P}^\infty_n)$ is determined by
\[
\underline{\beta}_n (S)
\mapsto
\underline{\beta}_n (\underline{b} (S) \otimes 1).
\]
\end{lem}
\begin{rem}
In the expression $\underline{\beta}_n (\underline{b} (S) \otimes 1)$, the elements $\beta_i$ are the
module generators, which are usually written on the right when considering left $MU_*MU$-comodules.
Miller \cite{miller} works with right comodules, where this notational issue does not arise.
\end{rem}
The cofibre sequence of spectra (cf. \cite[section 2]{miller}):
\begin{eqnarray}
\label{eqn:cofibre-CPn}
S^{2n}
\rightarrow \mathbb{C} \mathbf{P}^\infty_{n}
\rightarrow \mathbb{C} \mathbf{P}^\infty_{n+1}
\rightarrow,
\end{eqnarray}
for $n \in \mathbb{Z}$, induces a short exact sequence of $MU_*MU$-comodules:
\[
0
\rightarrow
MU_* [2n]
\rightarrow
MU_* (\mathbb{C} \mathbf{P}^\infty_n)
\rightarrow
MU_*(\mathbb{C} \mathbf{P}^\infty_{n+1})
\rightarrow
0,
\]
where $[a]$ denotes the shift in degree, so that $(V_* [a])_t = V_{t-a}$, for a $\mathbb{Z}$-graded object $V_*$.
The choice of generators gives a standard splitting of this sequence in $MU_*$-modules. In particular:
\begin{nota}
\label{nota:sigma}
For $E$ a complex oriented ring spectrum,
\begin{enumerate}
\item
let $\sigma : E_ * (\mathbb{C} \mathbf{P}^\infty_0) \rightarrow E_* (\mathbb{C} \mathbf{P}^\infty_{-1})$ be the section in $E_*$-modules
defined by $\sigma (\beta_i) = \beta_i$ (for $i \geq 0$) and $r : E_* (\mathbb{C} \mathbf{P}^\infty_{-1}) \rightarrow E_*[-2]$
be the corresponding retract, which sends generators $\beta_i$, $i \geq 0$ to zero.
\item
let $\sigma' : E_ * (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) \rightarrow E_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)$ denote the section $\sigma \otimes E_* (\mathbb{C} \mathbf{P}^\infty_0)$.
\end{enumerate}
\end{nota}
For $n=-1$, the connecting morphism of the cofibre sequence (\ref{eqn:cofibre-CPn}) defines the $S^1$-transfer
$\tau : \mathbb{C} \mathbf{P}^\infty_{0}\rightarrow S^{-1}$. The double $S^1$-transfer is
the smash product
$
\tau \smash \tau :
\mathbb{C} \mathbf{P}^\infty_{0} \smash \mathbb{C} \mathbf{P}^\infty_0 \rightarrow S^{-2}.
$
The rational Thom class $U : \mathbb{C} \mathbf{P}^\infty_{-1}\rightarrow S^{-2}_\mathbb{Q}$ induces a
morphism of cofibre sequences
\begin{eqnarray}
\label{eqn:tau-tilde}
\xymatrix{
S^{-2}
\ar[r]
\ar@{=}[d]
&
\mathbb{C} \mathbf{P}^\infty_{-1}
\ar[r]
\ar[d]_U
&
\mathbb{C} \mathbf{P}^\infty_0
\ar[r]^\tau
\ar[d]^{\tilde{\tau}}
&
S^{-1}
\ar@{=}[d]
\\
S^{-2}
\ar[r]
&S^{-2}_\mathbb{Q}
\ar[r]
&
S^{-2}
_{\mathbb{Q}/\mathbb{Z}}
\ar[r]
&
S ^{-1},
}
\end{eqnarray}
where $\tilde{\tau}$, the chromatic factorization of the single transfer, is
determined uniquely by the commutativity of the right hand square.
The morphism of $MU_*MU$-comodules
\[
MU_* (\tilde{\tau})
:
MU_* (\mathbb{C} \mathbf{P}^\infty_0)
\rightarrow
MU_* \otimes {\mathbb{Q}/\mathbb{Z}} [-2]
\]
is determined by the comodule morphism $MU_*(U) : MU_* (\mathbb{C} \mathbf{P}^\infty_{-1})
\rightarrow
MU_* \otimes \mathbb{Q} [-2]$, via the commutative diagram
\begin{eqnarray}
\label{eqn:U-sigma}
\xymatrix{
MU_* (\mathbb{C} \mathbf{P}^\infty_{-1})
\ar[r]^{MU_*(U)}
&
MU_* \otimes \mathbb{Q} [-2]
\ar@{->>}[d]
\\
MU_* (\mathbb{C} \mathbf{P}^\infty_0)
\ar@{.>}[u]^\sigma
\ar[r]_(.45){MU_* (\tilde{\tau}) }
&
MU_* \otimes {\mathbb{Q}/\mathbb{Z}} [-2],
}
\end{eqnarray}
in which the solid arrows denote comodule morphisms.
By Lemma \ref{lem:Landweber_exact}, the comodule morphism $MU_*(U)$ is the composite
\[
\xymatrix{
MU_* (\mathbb{C} \mathbf{P}^\infty_{-1})
\ar[r]
&
MU_*MU \otimes _{MU_*} \mathbf{H}\mathbb{Q}_*(\mathbb{C} \mathbf{P}^\infty_{-1})
\ar[d]^{MU_*MU \otimes \mathbf{H}\mathbb{Q}_*(U)}
\\
&
MU_*MU \otimes _{MU_*} \mathbb{Q} [-2]
\ar[r]^(.6)\cong
&
MU_* \otimes \mathbb{Q} [-2],
}
\]
where $\mathbf{H}\mathbb{Q}$ is the rational Eilenberg-MacLane spectrum and the first morphism is the composite of the comodule structure morphism with $MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} ) \rightarrow \mathbf{H}\mathbb{Q}_* (\mathbb{C} \mathbf{P}^\infty_{-1})$ induced by the canonical orientation of $\mathbf{H}\mathbb{Q}$.
\begin{lem}(Cf. \cite[Theorem 3.9]{miller}.)
\label{lem:MUThom}
\begin{enumerate}
\item
The morphism $MU_* (U) :MU_* (\mathbb{C} \mathbf{P}^\infty_{-1}) \rightarrow MU_* \otimes \mathbb{Q} [-2]$
is determined by
\[
\underline{\beta}_{-1} (S)
\mapsto
\frac{1}{\log S}.
\]
\item
The morphism of $MU_*$-modules $MU_* (U) \circ \sigma : MU_* (\mathbb{C} \mathbf{P}^\infty_0 ) \rightarrow MU_* \otimes \mathbb{Q} [-2]$
is determined by
\[
\underline{\beta}_{0} (S)
\mapsto
\frac{1}{\log S} - \frac{1}{S}.
\]
\item
The morphism $MU_* (\tilde{\tau}) $ is the composite of $MU_*(U) \circ \sigma$ with the projection $MU_* \otimes \mathbb{Q} [-2]\twoheadrightarrow MU_* \otimes {\mathbb{Q}/\mathbb{Z}}[-2]$.
\end{enumerate}
\end{lem}
\begin{proof}
The first statement follows from the comodule structure of $MU_*(\mathbb{C} \mathbf{P}^\infty_{-1})$ together with the fact that,
under the morphism $MU_* MU \otimes \mathbb{Q} \cong MU_* \otimes MU_*
\otimes \mathbb{Q} \rightarrow MU_* \otimes \mathbb{Q}$ induced by the augmentation $MU_*
\otimes \mathbb{Q} \rightarrow \mathbb{Q}$ on the right hand factor, $\exp^R (S) \mapsto
S$, so that $\underline{b} (S) \mapsto \log (S)$.
The section $\sigma$ is determined by $\underline{\beta}_0 (S) \mapsto \underline{\beta}_{-1} (S) - \beta_{-1}\frac{1}{S}$, which gives the second statement, by composition. The final statement follows from the commutativity of diagram (\ref{eqn:U-sigma}).
\end{proof}
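For comparison with the Bernoulli numbers of Section \ref{subsect:recollections}, substituting $S = \exp(x)$, where $\exp$ denotes the exponential of the universal formal group law, rewrites the second statement as a generating series:
\[
\frac{1}{\log S} - \frac{1}{S}
=
\frac{1}{x} - \frac{1}{\exp x}
=
- \sum_{i \geq 0} \frac{B_{i+1}(F^{MU})}{(i+1)!} x^i,
\]
where $F^{MU}$ denotes the universal formal group law and $B_n(F^{MU}) \in MU_* \otimes \mathbb{Q}$ its Bernoulli numbers (compare \cite[Theorem 3.9]{miller}).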
\subsection{The chromatic factorization of the double transfer}
Working $p$-locally ($p$ odd), the above chromatic factorization of the single transfer extends to a chromatic factorization of the double transfer, for which the original published reference is \cite[Theorem 5.2]{BCGHRW}, where the result is attributed to Hilditch, and a generalization is given by Imaoka in \cite{imaoka}. (Imaoka \cite{imaoka_p2} has also considered the chromatic factorization of the double transfer at the prime $p=2$.)
\begin{thm}
\cite{BCGHRW}
\label{thm:chromatic-factor}
Let $p \geq 3$ be a prime.
There exists a morphism $\Theta : \mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0 \rightarrow L_1 S^{-4}/p^\infty$ which fits into a commutative square:
\begin{eqnarray}
\label{eqn:Theta_extends_tautilde}
\xymatrix{
\Sigma^{-2} \mathbb{C} \mathbf{P}^\infty_0
\ar[r]
\ar[d]_{\Sigma^{-2}\tilde{\tau}}
&
\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0
\ar[d]^\Theta
\\
S^{-4} /p^\infty
\ar[r]
&
L_1 S ^{-4}/p^\infty,
}
\end{eqnarray}
where the top morphism is induced by the inclusion of the bottom cell $S^{-2} \rightarrow \mathbb{C} \mathbf{P}^\infty_{-1}$.
Moreover, for any extension to a morphism of cofibre sequences:
\[
\xymatrix{
\Sigma^{-2} \mathbb{C} \mathbf{P}^\infty_0
\ar[r]
\ar[d]_{\Sigma^{-2}\tilde{\tau}}
&
\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0
\ar[d]^\Theta
\ar[r]
&
\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0
\ar[d]^{\overline{\Theta}}
\ar[r]^(.6){\tau \smash \mathbb{C} \mathbf{P}^\infty_0}
&
\Sigma ^{-1}\mathbb{C} \mathbf{P}^\infty_0
\ar[d]^{\Sigma^{-1} \tilde{\tau}}
\\
S^{-4} /p^\infty
\ar[r]
&
L_1 S ^{-4}/p^\infty
\ar[r]
&
S^{-4} /p^\infty , v_1^\infty
\ar[r]
&
S^{-3}/p^\infty,
}
\]
where the top row is the cofibre sequence $(S^{-2}\rightarrow
\mathbb{C} \mathbf{P}^\infty_{-1} \rightarrow \mathbb{C} \mathbf{P}^\infty_0)\smash \mathbb{C} \mathbf{P}^\infty_0 $ and the bottom row is the cofibre
sequence associated to $L_1$-localization, $\overline{\Theta} :
\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0
\rightarrow S^{-4} /p^\infty , v_1^\infty$ provides a factorization of
the double transfer morphism across the chromatic morphism $S^{-4}
/p^\infty , v_1^\infty \rightarrow S^{-2}$.
\end{thm}
The morphism $\Theta$ is constructed using Proposition \ref{prop:represent-ad}, by defining an
explicit cohomology class $\theta \in [\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0 , \Sigma^{-4} KU _\mathbb{Q} ]$ such that there
is a commutative diagram
\begin{eqnarray}
\label{eqn:theta_Q_representative}
\xymatrix{
\mathbb{C} \mathbf{P}^\infty_{-1}\smash \mathbb{C} \mathbf{P}^\infty_0
\ar[r]^\Theta
\ar[d]_\theta
&
L_1 S ^{-4}/p^\infty
\ar[d]
\\
\Sigma^{-4} KU _\mathbb{Q}
\ar[r]
&
\Sigma^{-4} KU /p^\infty.
}
\end{eqnarray}
\begin{rem}
The construction of $\theta$ is by an eigenspace argument for the action of the Adams operation $\psi^\gamma$,
where $\gamma \in \mathbb{Z}$ is a topological generator of $\mathbb{Z}_p^\times$ (compare Proposition \ref{prop:represent-ad}); this is made explicit in the proof of \cite[Proposition 2.4]{imaoka}, which generalizes this result.
\end{rem}
For later use, the following notation is introduced.
\begin{nota}
\label{nota:tilde_theta_prime}
Let $\tilde{\theta} ' (S,T) $ denote the power series in $KU_* \otimes \mathbb{Q} [[S,T]]$
\[
\sum _{i, j >0}\frac{B_i^{KU}}{i!}\frac{B_j^{KU}}{j!}
\left(
\frac{\gamma^i -1}{\gamma^{i+j}-1}
\right)
(\log^{KU} S)^{i-1}(\log ^{KU} T) ^{j-1},
\]
where $\log^{KU}$ is the logarithm of the multiplicative formal group law of $KU_* \otimes \mathbb{Q}$.
\end{nota}
The morphism $\theta$ is not uniquely defined; \cite[Theorem 5.2]{BCGHRW} gives an explicit choice for $\theta$, working
with the $p$-local Adams summand $E(1)$, which can be replaced by $p$-local $K$-theory. The following choice is used here:
\begin{defn}
\label{def:theta}
Let $\theta : \mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0 \rightarrow \Sigma ^{-4} KU_\mathbb{Q}$ be the class
which is determined by
$\tilde{\theta}_* : KU_*(\mathbb{C} \mathbf{P}^\infty_{-1}\smash \mathbb{C} \mathbf{P}^\infty_0) \rightarrow KU_* \otimes \mathbb{Q} [-4]$:
\[
\underline{\beta}_{-1}(S) \otimes \underline{ \beta}_0(T)
\mapsto
\frac{1}{S} \left(\frac{1}{\log^{KU}T} - \frac{1}{T}\right)
+
\tilde{\theta}' (S,T).
\]
\end{defn}
\subsection{Calculating in complex cobordism}
Let $p$ be a fixed odd prime and $\Theta, \overline{\Theta}$ be as in Theorem
\ref{thm:chromatic-factor}, where the $\mathbb{Q}$-representative $\theta$ of $\Theta$ is the morphism of Definition
\ref{def:theta}.
\begin{thm}
\label{thm:MU-thetas}
Let $p$ be an odd prime.
There is a commutative diagram of morphisms of
$MU_*MU$-comodules:
\[
\xymatrix{
&
MU_* (\mathbb{C} \mathbf{P}^\infty_{-1}\smash \mathbb{C} \mathbf{P}^\infty_0)
\ar@{->>}[r]
\ar@/_1pc/[ld]_(.6){MU_*(\theta)}
\ar[d]^{MU_*(\Theta)}
&
MU_* (\mathbb{C} \mathbf{P}^\infty_{0}\smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[d]^{MU_*(\overline{\Theta})}
\\
MU_* KU \otimes \mathbb{Q}[-4]
\ar@{->>}@/_1pc/[rd]
&
MU_*/p^\infty[v_1^{-1}] [-4]
\ar@{^(->}[d]
\ar@{->>}[r]^{\mathrm{pr}}
&
MU_* /p^\infty,v_1^\infty [-4]
\ar@{^(->}[d]
\\
&
MU_* KU /p^\infty[-4]
\ar@{->>}[r]
&
(MU_* KU /p^\infty)/(MU_*/p^\infty)[-4].
}
\]
\begin{enumerate}
\item
The underlying $MU_*$-module morphism of $MU_* (\overline{\Theta})$ is the composite $\mathrm{pr}\circ MU_* (\Theta)
\circ \sigma'$.
\item
\label{thm_item:Theta}
The morphism $MU_*({\Theta})$ is determined by the comodule
morphism $MU_* (\theta)$ and hence by the morphism $\tilde{\theta}_* : KU_* (\mathbb{C} \mathbf{P}^\infty_{-1}\smash \mathbb{C} \mathbf{P}^\infty_0
)\rightarrow KU_* \otimes \mathbb{Q}$.
\item
The morphism $MU_*(\overline{\Theta})$ is determined by the comodule
morphism $MU_* (\theta)$.
\end{enumerate}
\end{thm}
\begin{proof}
The commutativity of the left hand part of the diagram follows from Proposition \ref{prop:f,g-Landweber} and the upper right hand square is induced by the morphism of cofibre sequences defining $\overline{\Theta}$. The lower right hand square is induced by the monomorphism given by the Hattori-Stong theorem (see Lemma \ref{lem:hattori-stong})
\[
MU_* /p^\infty [v_1^{-1}] \hookrightarrow MU_ * KU /p^\infty,
\]
since the kernel of the projection $MU_* /p^\infty [v_1^{-1}] \twoheadrightarrow MU_*/p^\infty , v_1^\infty $ is
$MU_* /p^\infty$.
The morphism $MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0) \twoheadrightarrow MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)$ admits the section $\sigma'$ in $MU_*$-modules, hence the identification of the underlying module morphism of $MU_* (\overline{\Theta})$ follows from the upper right hand square.
For part \ref{thm_item:Theta}, the injectivity of
$MU_* /p^\infty [v_1^{-1}] [-4] \hookrightarrow MU_* KU /p^\infty[-4]$
and the commutativity of the left hand square imply that $
MU_* (\theta)$ determines $MU_* ({\Theta})$ as a morphism of $MU_*MU$-comodules. The morphism $MU_* (\theta)$ is determined by $\tilde{\theta}_* : KU_* (\mathbb{C} \mathbf{P}^\infty_{-1}\smash \mathbb{C} \mathbf{P}^\infty_0
)\rightarrow KU_* \otimes \mathbb{Q}$, by the second part of Proposition \ref{prop:f,g-Landweber}.
The final statement follows from the commutativity of the lower right hand square.
\end{proof}
\subsection{Calculating $MU_* (\theta)$}
The torsion-free ring $MU_*KU$ has two formal group law structures, corresponding to the left and right units $MU_* \rightarrow
MU_*KU$ and $KU_* \rightarrow MU_* KU$ respectively; write $\log^{MU}$ and $\log^{KU}$ for the respective logarithms
defined over $MU_* KU \otimes \mathbb{Q}$ and set $\underline{b}' = \exp^{KU} \circ \log^{MU}$, which identifies
with the image of $\underline{b}$ under $MU_*MU \rightarrow MU_*KU$.
\begin{prop}
\label{prop:MUtheta_sigmap}
\
\begin{enumerate}
\item
The morphism $MU_*(\theta)\in \hom_{MU_*MU} (MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0 ), MU_* KU \otimes \mathbb{Q}[-4]) $ is
determined by
$
\underline{\beta}_{-1} (S) \otimes \underline{\beta}_0 (T)
\mapsto $
\[
\frac{1}{\underline{b}'(S)}
\Big(\frac{1}{\log^{MU} T} - \frac{1}{\underline{b}' (T) }\Big)
+
\sum _{i, j >0}\frac{B_i^{KU}}{i!}\frac{B_j^{KU}}{j!}
\left(
\frac{\gamma^i -1}{\gamma^{i+j}-1}
\right)
(\log^{MU} S)^{i-1}(\log ^{MU} T) ^{j-1}.
\]
\item
The morphism $MU_*(\theta) \circ \sigma' \in \hom_{MU_*} (MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0 ), MU_* KU \otimes \mathbb{Q}[-4]) $ is
determined by
$
\underline{\beta}_0 (S) \otimes \underline{\beta}_0 (T)
\mapsto $
\[
\Big( \frac{1}{\underline{b}'(S)}- \frac{1}{S}\Big)
\Big(\frac{1}{\log^{MU} T} - \frac{1}{\underline{b}' (T) }\Big)
+
\sum _{i, j >0}\frac{B_i^{KU}}{i!}\frac{B_j^{KU}}{j!}
\left(
\frac{\gamma^i -1}{\gamma^{i+j}-1}
\right)
(\log^{MU} S)^{i-1}(\log ^{MU} T) ^{j-1}.
\]
\end{enumerate}
\end{prop}
\begin{proof}
By Proposition \ref{prop:f,g-Landweber}, the morphism $MU_* (\theta)$
is determined by the commutative diagram
\[
\xymatrix{
MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[rr]^(.4){\psi _{MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)}}
\ar[d]_{MU_* (\theta)}
&\ &
MU_*MU
\otimes_{MU_*} MU_*(\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[d]
\\
MU_*MU\otimes_{MU_*} KU_* \otimes \mathbb{Q} [-4]
&&
MU_*MU \otimes _{MU_*} KU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[ll]^{MU_*MU \otimes \tilde{\theta}_*}.
}
\]
Lemma \ref{lem:comodule_cpn} implies that the comodule structure of $MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)$ is determined by
\[
\underline{\beta}_{-1} (S) \otimes \underline{\beta}_0 (T)
\mapsto
\underline{\beta}_{-1} (\underline{b}(S)\otimes 1) \otimes \underline{\beta}_0 (\underline{b}(T) \otimes 1).
\]
Composing with the morphism induced by $\theta$ and using the identity $\underline{b}' = \exp^{KU} \circ \log^{MU}$ shows that $MU_* (\theta)$ is given by
$\underline{\beta}_{-1} (S) \otimes \underline{\beta}_0 (T)
\mapsto$
\[
\frac{1}{\underline{b}'(S)}
\Big(\frac{1}{\log^{MU} T} - \frac{1}{\underline{b}' (T) }\Big)
+
\sum _{i, j >0}\frac{B_i^{KU}}{i!}\frac{B_j^{KU}}{j!}
\left(
\frac{\gamma^i -1}{\gamma^{i+j}-1}
\right)
(\log^{MU} S)^{i-1}(\log ^{MU} T) ^{j-1}.
\]
The second statement is proved by composing with the morphism $\sigma'$, which is represented by
\[
\underline{\beta}_0 (S) \otimes \underline{\beta}_0 (T) \mapsto \underline{\beta}_{-1} (S) \otimes \underline{\beta}_0 (T) - \beta_{-1} \frac{1}{S}
\otimes \underline{\beta}_0 (T).
\]
The morphism $MU_* (\theta)$ restricts to give:
\[
\beta_{-1} \otimes \underline{\beta}_0 (T) \mapsto \frac{1}{\log^{MU} T} - \frac{1}{\underline{b}'(T)},
\]
this being the coefficient of $S^{-1}$ in the series of the first statement, since $\frac{1}{\underline{b}'(S)} = S^{-1}(1 + O(S))$ and $\log^{MU} S$ contains no negative powers of $S$.
The result follows.
\end{proof}
\section{The algebraic transfer}
\label{sect:cohomtransfer}
The algebraic version of the double transfer is introduced in this section and is related to chromatic theory.
\subsection{The algebraic transfer}
\label{subsect:alg_transfer}
The cofibre sequence defining the transfer $\tau : \mathbb{C} \mathbf{P}^\infty_0 \rightarrow
S^{-1}$ induces a short exact sequence of $MU_*MU$-comodules and hence an
algebraic transfer class:
\[
[e_\tau] \in \mathrm{Ext}^1_{MU_* MU}(MU_* (\mathbb{C} \mathbf{P}^\infty_0),MU_* [-2]).
\]
\begin{defn}
\label{def:algebraic_double_transfer}
The algebraic double transfer is the class:
\[
[e_\tau]^2\in \mathrm{Ext}^2_{MU_* MU}(MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0 ),MU_* [-4])
\]
given by Yoneda product.
\end{defn}
Proposition \ref{prop:cocycle-canonical-splitting} applied with respect
to the section $\sigma$ gives the standard choice $e_\tau$ of representing cocycle:
\begin{lem}
\label{lem:cocycle-etau}
The cocycle $
{e_\tau} \in \hom_{MU_*}(MU_* (\mathbb{C} \mathbf{P}^\infty_0), MU_* MU[-2])
$
is determined by
\[
\underline{\beta}_{0} (S)
\mapsto
\frac{1}{S}
-
\frac{1}{\underline{b} (S)} .
\]
\end{lem}
Diagram (\ref{eqn:tau-tilde}) induces a morphism of short exact sequences of $MU_*MU$-comodules:
\[
\xymatrix{
0
\ar[r]
&
MU_* [-2]
\ar@{=}[d]
\ar[r]
&
MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} )
\ar[d]^{MU_* (U)}
\ar[r]
&
MU_* (\mathbb{C} \mathbf{P}^\infty_0)
\ar[d]^{MU_*(\tilde{\tau})}
\ar[r]
&
0
\\
0
\ar[r]
&
MU_* [-2]
\ar[r]
&
MU_* \otimes \mathbb{Q} [-2]
\ar[r]
&
MU_* \otimes {\mathbb{Q}/\mathbb{Z}} [-2]
\ar[r]
&
0.
}
\]
The morphism $\tilde{\tau}$ provides a chromatic factorization of the single transfer $\tau$; this corresponds to the
following result, which can be proved using Proposition \ref{prop:cocycle-canonical-splitting}.
\begin{prop}
There is an equality in $\mathrm{Ext}^1_{MU_* MU} (MU_* (\mathbb{C} \mathbf{P}^\infty_0), MU_* [-2])$:
\[
[e_\tau] = \partial_1 MU_* (\tilde{\tau}),
\]
where $\partial _1$ is the chromatic connecting morphism
associated to
\[
\xymatrix{
0
\ar[r]
&
MU_* [-2]
\ar[r]
&
MU_* \otimes \mathbb{Q} [-2]
\ar[r]
&
MU_* \otimes {\mathbb{Q}/\mathbb{Z}} [-2]
\ar[r]
&
0.
}
\]
\end{prop}
\subsection{The class $[\kappa]$}
Rather than working directly with the double algebraic transfer $[e_\tau]^2$, it is convenient to
work with a class $[\kappa]$ in $\mathrm{Ext}^1$ (see Definition \ref{def:kappa}), which is related to
the double transfer via the chromatic connecting morphism $\partial_1$ (see Proposition \ref{prop:kappa-transfer}).
Forming the tensor product $[e_\tau] \otimes MU_* (\mathbb{C} \mathbf{P}^\infty_0)$ gives a
class
\[
[E_\tau ]\in \mathrm{Ext}^1_{MU_* MU}(MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0),MU_*(\mathbb{C} \mathbf{P}^\infty_0) [-2]).
\]
\begin{lem}
\label{lem:Etau}
The class $[E_\tau]$ is represented by the cocycle
\[
{E_\tau} \in \hom_{MU_*}(MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0), MU_* MU\otimes _{MU_*} MU_* (\mathbb{C} \mathbf{P}^\infty_0)[-2])
\]
defined with respect to the section $\sigma'$, which is given by
\[
\underline{\beta}_ 0 (S) \otimes \underline{\beta}_0 (T)
\mapsto
\Big(
\frac{1}{S}
- \frac{1}{\underline{b}(S)}
\Big)
\underline{\beta}_0 (\underline{b}(T)).
\]
\end{lem}
\begin{proof}
Apply Proposition \ref{prop:cocycle-canonical-splitting} with respect to the section $\sigma'$.
\end{proof}
\begin{defn}
\label{def:kappa}
Let $[\kappa]$ denote the class
\[
MU_* (\tilde{\tau}) [E_\tau] \in \mathrm{Ext}^1
_{MU_* MU} (MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0), MU_* \otimes {\mathbb{Q}/\mathbb{Z}}[-4]).
\]
\end{defn}
The following result justifies using $\kappa$ in place of the algebraic double transfer.
\begin{prop}
\label{prop:kappa-transfer}
There is an identity
\[
[e_\tau]^2 = \partial_1 [\kappa],
\]
in $\mathrm{Ext}^2 _{MU_*MU}(MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0), MU_* [-4])$.
\end{prop}
\begin{proof}
Straightforward: the connecting morphism $\partial_1$ is compatible with the Yoneda product, so that
\[
\partial_1 [\kappa]
=
\partial_1 \big( MU_* (\tilde{\tau}) [E_\tau] \big)
=
\big( \partial_1 MU_* (\tilde{\tau}) \big) [E_\tau]
=
[e_\tau] [E_\tau]
=
[e_\tau]^2,
\]
the final equality holding since $[E_\tau] = [e_\tau] \otimes MU_* (\mathbb{C} \mathbf{P}^\infty_0)$.
\end{proof}
The class $[\kappa]$ is represented by the standard choice $\kappa$ of cocycle, constructed with respect
to the section $\sigma'$:
\[
{\kappa} \in \hom_{MU_*} ( MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0),MU_*MU \otimes {\mathbb{Q}/\mathbb{Z}} [-4]).
\]
\begin{nota}
\label{nota:Xi}
Write $K \in \hom_{MU_*} (MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0), MU_*MU \otimes
\mathbb{Q} [-4])$ for the composite morphism of $MU_*$-modules:
\[
\xymatrix{
MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[r]^(.4){E_\tau}
\ar[rd]_K
&
MU_* MU\otimes _{MU_*}MU_*(\mathbb{C} \mathbf{P}^\infty_{0})[-2]
\ar[d]^{MU_*MU \otimes MU_*(U) \circ \sigma }
\\
&
MU_*MU \otimes \mathbb{Q} [-4].
}
\]
\end{nota}
\begin{prop}
\label{prop:calculate_Xi}
\
\begin{enumerate}
\item
The class $[\kappa]$ is represented by the cocycle $\kappa$
given by reduction of the morphism $K$ via $MU_*MU \otimes \mathbb{Q} [-4]
\rightarrow
MU_*MU \otimes {\mathbb{Q}/\mathbb{Z}}[-4]$.
\item
The morphism $K \in \hom_{MU_*} (MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0), MU_*MU \otimes
\mathbb{Q} [-4])$ is determined by
\[
\underline{\beta}_0 (S) \otimes \underline{\beta}_0 (T)
\mapsto
\Big(
\frac{1}{S}- \frac{1}{\underline{b}(S)}
\Big)
\Big(
\frac{1}{\log^L T }
- \frac{1}{\underline{b}(T)}
\Big).
\]
\end{enumerate}
\end{prop}
\begin{proof}
The first statement follows from the construction of $K$.
The second follows from Lemma \ref{lem:Etau}, by composition with the morphism
$MU_*MU \otimes (MU_*(U) \circ\sigma)$ (see Lemma \ref{lem:MUThom}), using the identity $\log^L = \log^R \circ \underline{b}$ of Lemma \ref{lem:identify_b}.
\end{proof}
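To make the second statement concrete, write the morphism $MU_*(U) \circ \sigma$ of Lemma \ref{lem:MUThom} on generating series as $\underline{\beta}_0(X) \mapsto \frac{1}{\log^R X} - \frac{1}{X}$ (the form forced by the statement of Proposition \ref{prop:calculate_Xi}). Applying $MU_*MU \otimes (MU_*(U) \circ \sigma)$ to the formula of Lemma \ref{lem:Etau} then gives
\[
\Big(
\frac{1}{S} - \frac{1}{\underline{b}(S)}
\Big)
\Big(
\frac{1}{\log^R (\underline{b}(T))} - \frac{1}{\underline{b}(T)}
\Big)
=
\Big(
\frac{1}{S} - \frac{1}{\underline{b}(S)}
\Big)
\Big(
\frac{1}{\log^L T} - \frac{1}{\underline{b}(T)}
\Big),
\]
using the identity $\log^L = \log^R \circ \underline{b}$.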
\subsection{Relation with the chromatic factorization of the double transfer}
\label{subsect:relate-double-chromatic}
Let $p$ be an odd prime. Here $\kappa$ denotes the associated $p$-local cocycle
\[
\kappa
\in \hom_{MU_*} (MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0) , MU_*MU/p^{\infty}[-4]).
\]
The construction of $\overline{\Theta}$ as a chromatic factorization
of the double transfer implies that the class $[\kappa]$ is related to the morphism $MU_*(\overline{\Theta})$. Write
$\partial_2$ for the chromatic connecting morphism associated to the short exact
sequence of comodules
\[
0
\rightarrow MU_* /p^\infty
\rightarrow
MU_* /p^\infty [v_1 ^{-1}]
\rightarrow
MU_* /p^\infty ,v_1 ^\infty
\rightarrow 0.
\]
\begin{prop}
\label{prop:second-factorization}
There is an identity
$
[\kappa ] = \partial _2 MU_* (\overline{\Theta})
$
in $\mathrm{Ext}^1_{MU_*MU} (MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0 ), MU_*/p^\infty [-4])$. In particular, $\partial_2 MU_* (\overline{\Theta}) $ is independent of the choice of $\overline{\Theta}$.
\end{prop}
\begin{proof}
The morphism $MU_* (\Theta)$ gives rise to a morphism between short exact sequences of $MU_*MU$-comodules:
\[
\xymatrix{
0
\ar[r]
&
MU_* (\mathbb{C} \mathbf{P}^\infty_0)[-2]
\ar[d]_{MU_* (\tilde{\tau}) [-2]}
\ar[r]
&
MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[r]
\ar[d]_{MU_* (\Theta)}
&
MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[d]^{MU_* (\overline{\Theta}) }
\ar[r]
&
0
\\
0
\ar[r]
&
MU_*/p^\infty[-4]
\ar[r]
&
MU_* /p^\infty [v_1 ^{-1}][-4]
\ar[r]
&
MU_* /p^\infty , v_1 ^\infty [-4]
\ar[r]
&
0,
}
\]
where the top row represents $[E_\tau]$. By definition, $[\kappa] = MU_* (\tilde{\tau}) [E_\tau]$ and $\partial _2 MU_* (\overline{\Theta})$ is represented by the pullback of the lower short exact sequence along $MU_* (\overline{\Theta})$.
Forming the pushout of the top sequence using $MU_* (\tilde{\tau}) $ and the pullback of the lower
sequence via $MU_* (\overline{\Theta})$ gives Yoneda-equivalent short exact sequences, which therefore define the same
class in $\mathrm{Ext}^1$, as required.
\end{proof}
\begin{rem}
It is instructive to check this result directly at the level of cocycles by using the description of the connecting morphism given in Lemma \ref{lem:explicit-connecting}.
\end{rem}
\subsection{Relating $K$ and $MU_*(\theta) \circ \sigma'$}
\label{subsect:relate_K_MuTheta}
The morphism $MU_*MU \rightarrow MU_* KU$ associated to the
orientation of $KU$ induces the morphism (using the notation introduced in Section \ref{sect:cocycle-basechange}):
\[
_{MU_*} K _{KU_*} \in \hom_{MU_*} (MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0), MU_*KU \otimes
\mathbb{Q} [-4]).
\]
This can be identified with the composite
\[
\xymatrix{
MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[r]^{- \sigma'}
&
MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[r]^(.35){\psi}
&
MU_*MU \otimes_{MU_*}
MU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[d]
\\
&
MU_* KU \otimes \mathbb{Q}[-4]
&
MU_*MU \otimes_{MU_*}
KU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[l],
}
\]
where $\psi$ is the comodule structure map, the vertical arrow is induced by the orientation of $KU$ and the final
morphism of $MU_*MU$-modules is defined by the composite
\begin{eqnarray}
\label{eqn:KU_thom_sigma}
KU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\twoheadrightarrow
KU_* (\mathbb{C} \mathbf{P}^\infty_0) [-2]
\stackrel{KU_*(U) \circ \sigma}{\longrightarrow}
KU_* \otimes \mathbb{Q} [-4]
\end{eqnarray}
of the projection induced by $KU_* (\mathbb{C} \mathbf{P}^\infty_{-1}) \twoheadrightarrow KU_* [-2]$ with the morphism induced by the rational
Thom class of $\mathbb{C} \mathbf{P}^\infty_{-1}$.
\begin{rem}
The sign arises due to the conventions used in defining the cobar complex, as in Proposition \ref{prop:cocycle-canonical-splitting}.
\end{rem}
The second morphism in (\ref{eqn:KU_thom_sigma}) is related to the morphism $\tilde{\theta}_* : KU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0) \rightarrow KU_* \otimes
\mathbb{Q} [-4]$ via the following commutative diagram derived from Theorem \ref{thm:MU-thetas}:
\[
\xymatrix{
\Sigma^{-2} \mathbb{C} \mathbf{P}^\infty_{-1}
\ar[r]
\ar[d]_{\Sigma^{-2}U}
&
\Sigma^{-2} \mathbb{C} \mathbf{P}^\infty_0
\ar[d]_{\Sigma^{-2}\tilde{\tau}}
\ar[r]
&
\mathbb{C} \mathbf{P}^\infty_{-1}
\smash
\mathbb{C} \mathbf{P}^\infty_0
\ar[d]^{\Theta}
\ar[ldd]|\hole_(.3)\theta
\\
S^{-4}_\mathbb{Q}
\ar[r]
\ar[rd]
&
S^{-4}/p^\infty
\ar[r]
&
L_1 S^{-4} /p^\infty
\ar[d]
\\
&
\Sigma^{-4} KU_\mathbb{Q}
\ar[r]
&
\Sigma^{-4} KU/p^\infty.
}
\]
This implies the following result (which corresponds to a fundamental property of $\theta$ used in the construction of $\Theta$).
\begin{lem}
\label{lem:restrict_Theta}
The restriction of $\tilde{\theta}_* : KU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0) \rightarrow
KU_* \otimes \mathbb{Q}[-4]$ along the morphism $KU_* (\mathbb{C} \mathbf{P}^\infty_0) [-2] \hookrightarrow KU_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)$, induced by
the inclusion of the bottom cell $S^{-2} \hookrightarrow \mathbb{C} \mathbf{P}^\infty_{-1}$ is
the morphism
\[
KU_*(U) \circ \sigma[-2] : KU_* (\mathbb{C} \mathbf{P}^\infty_0) [-2]
\rightarrow KU_* \otimes \mathbb{Q} [-4].
\]
\end{lem}
\begin{rem}
This result corresponds to the fact that the morphism $\tilde{\theta}_*$ is determined by the power series
\[
\frac{1}{S} \left(\frac{1}{\log^{KU}T} - \frac{1}{T}\right)
+
\tilde{\theta} ' (S,T)
\]
where $\tilde{\theta}'$ is the formal power series introduced in Notation \ref{nota:tilde_theta_prime}.
\end{rem}
Proposition \ref{prop:calculate_Xi} gives the following, using the notation of Proposition \ref{prop:MUtheta_sigmap}:
\begin{prop}
\label{prop:calculate_K_KU}
The morphism $$_{MU_*}K_{KU_*} \in \hom_{MU_*} (MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0), MU_*KU \otimes
\mathbb{Q} [-4])$$ is determined by
\[
\underline{\beta}_0 (S) \otimes \underline{\beta}_0 (T)
\mapsto
\Big(
\frac{1}{S}- \frac{1}{\underline{b}'(S)}
\Big)
\Big(
\frac{1}{\log^{MU} T }
- \frac{1}{\underline{b}'(T)}
\Big).
\]
\end{prop}
\begin{proof}
By construction $\underline{b}'$ corresponds to the image of $\underline{b}$ under the morphism induced by $MU_*MU
\rightarrow MU_*KU$, and $\log^L$ maps to $\log^{MU}$. The result follows from Proposition
\ref{prop:calculate_Xi}.
\end{proof}
The above description of $ _{MU_*} K _{KU_*}$ can be compared with that of $MU_* (\theta ) \circ \sigma ' : MU_* (\mathbb{C} \mathbf{P}^\infty_0
\smash \mathbb{C} \mathbf{P}^\infty_0) \rightarrow MU_*KU \otimes \mathbb{Q} [-4]$. Write $\tilde{\theta}'_*$ for the morphism of $MU_*$-modules $MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) \rightarrow KU_* \otimes \mathbb{Q}[-4]$ determined by $\tilde{\theta}'$.
\begin{cor}
\label{cor:relate_MUtheta}
There is an identification of morphisms in $\hom_{MU_*} (MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0), MU_*KU \otimes
\mathbb{Q} [-4])$:
\[
MU_* (\theta) \circ \sigma'
=
-\ _{MU_*}K_{KU_*} + (MU_*MU \otimes \tilde{\theta}'_*) \circ \psi_{MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0)}.
\]
\end{cor}
\begin{proof}
Compare the calculation in Proposition \ref{prop:MUtheta_sigmap} with Proposition \ref{prop:calculate_K_KU}. (Note that the sign arises from the conventions used in defining the cobar complex, as in Proposition \ref{prop:cocycle-canonical-splitting}.)
\end{proof}
\section{Restricting to primitives}
The spherical elements of $MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0)$ lie in the comodule primitives; this motivates the study of the restriction of the algebraic double transfer to the comodule primitives.
\subsection{Comodule primitives}
\begin{nota}
For $M$ a left $MU_*MU$-comodule, write $\mathbf{P} M$ for the graded abelian group of comodule primitives.
\end{nota}
As above, $\mathbf{H}\mathbb{Q}$ denotes the rational Eilenberg-MacLane spectrum; its integral counterpart is denoted ${\mathbf{H}\mathbb{Z}}$. The following is clear:
\begin{lem}
\label{lem:thom_orientation_primitives}
For $X$ a spectrum, there is a natural commutative diagram of graded abelian groups, induced by the orientation of ${\mathbf{H}\mathbb{Z}}$ and rationalization
\[
\xymatrix{
\mathbf{P} MU_* X
\ar@{^(->}[r]
\ar[d]
&
MU_* X
\ar[r]
&
\mathbb{Z} \otimes_{MU_* } MU_* (X)
\ar[r]
&
{\mathbf{H}\mathbb{Z}}_* X
\ar[d]
\\
\mathbf{P} MU_* X \otimes \mathbb{Q}
\ar[rrr]_\cong
&&&
\mathbf{H}\mathbb{Q}_* X,
}
\]
in which the lower horizontal morphism is an isomorphism.
If $MU_* X$ has no additive torsion, then
$\mathbf{P} MU_* X \rightarrow {\mathbf{H}\mathbb{Z}}_* X$ is a monomorphism.
\end{lem}
\begin{exam}
\label{exam:prim_CP}
Let $d$ be a natural number. Then
$$
\mathbf{P}
MU_* ((\mathbb{C} \mathbf{P}^\infty_0)^{\smash d})
\hookrightarrow
{\mathbf{H}\mathbb{Z}}_* ((\mathbb{C} \mathbf{P}^\infty_0)^{\smash d})
$$
is a morphism of algebras, where the product is induced by the $H$-space structure of $\mathbb{C} \mathbf{P}^\infty$. The homology ${\mathbf{H}\mathbb{Z}}_*
((\mathbb{C} \mathbf{P}^\infty_0)^{\smash d})$ is the free divided power algebra $\Gamma^* (\mathbb{Z}^{\oplus d}) $, hence $ \mathbf{P} MU_* ((\mathbb{C} \mathbf{P}^\infty_0)^{\smash d})$ is a subalgebra of $\Gamma^* (\mathbb{Z}^{\oplus d}) $.
\end{exam}
The primitives $ \mathbf{P} MU_* (\mathbb{C} \mathbf{P}^\infty_0)$ were calculated by David Segal
\cite{segal}; an elegant approach is given by Miller in \cite[Proposition 4.1]{miller}, where
the primitive generators $p_n \in MU_{2n}(\mathbb{C} \mathbf{P}^\infty_0)$ are defined by means of the expansion
\[
\underline{\beta}_0(\exp (T))
=
\sum
\frac{p_n}{n!}T^n
\]
in $MU_* (\mathbb{C} \mathbf{P}^\infty_0) \otimes \mathbb{Q}$, so that $|p_n|= 2n$. The morphism
$
\mathbf{P} MU_* (\mathbb{C} \mathbf{P}^\infty_0) \rightarrow {\mathbf{H}\mathbb{Z}}_* (\mathbb{C} \mathbf{P}^\infty_0)
$
sends $p_n$ to $ (\beta_1^{\mathbf{H}\mathbb{Z}})^n = n! \beta_n^{\mathbf{H}\mathbb{Z}}$, where the $\beta_i^{\mathbf{H}\mathbb{Z}}$ denote the canonical module generators of ${\mathbf{H}\mathbb{Z}}_* (\mathbb{C} \mathbf{P}^\infty_0)$ (cf. \cite[Remark 4.2]{miller}).
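The last identity is the standard divided power computation: in $\Gamma^* (\mathbb{Z})$ one has
\[
\beta_i^{\mathbf{H}\mathbb{Z}} \beta_j^{\mathbf{H}\mathbb{Z}}
=
\binom{i+j}{i} \beta_{i+j}^{\mathbf{H}\mathbb{Z}},
\]
whence $(\beta_1^{\mathbf{H}\mathbb{Z}})^n = n! \, \beta_n^{\mathbf{H}\mathbb{Z}}$ by induction on $n$.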
By the Künneth isomorphism $MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) \cong MU_* (\mathbb{C} \mathbf{P}^\infty_0) \otimes_{MU_*} MU_*(\mathbb{C} \mathbf{P}^\infty_0)$, for pairs of natural numbers $(i,j)$, $p_i \otimes p_j$ is a primitive of $MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)$. The integral calculation of $\mathbf{P} MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)$ is an interesting and difficult problem: the elements $p_i \otimes p_j$ do not generate the primitives, due to delicate divisibility questions (cf. \cite{knapp_habilit}, \cite{baker_transfer} and \cite{BCRS}, for example).
\begin{lem}
\label{lem:identify-primitives}
The subgroup of primitives $\mathbf{P} MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)$ is a graded free
$\mathbb{Z}$-module such that $\mathbf{P} MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) \otimes \mathbb{Q}$ has
basis $\{ p_i \otimes p_j \mid i, j \geq 0 \}$.
\end{lem}
\begin{nota}
\label{nota:primel}
Let $\mathfrak{p} (S, T) $ denote the two-variable formal power series in $MU_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0 ) \otimes \mathbb{Q} [[S, T]]$:
\[
\mathfrak{p}(S, T):= \sum_{m,n \geq 0} p_m \otimes p_n \frac{S^m T^n}{m! n !}.
\]
\end{nota}
\begin{lem}
\label{lem:identify_primel}
There is an identity of formal power series:
\[
\mathfrak{p}(S, T) = \underline{\beta}_0 (\exp S) \otimes \underline{\beta}_0 (\exp T).
\]
\end{lem}
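This is immediate from the defining expansion of the primitive generators $p_n$:
\[
\underline{\beta}_0 (\exp S) \otimes \underline{\beta}_0 (\exp T)
=
\Big( \sum_{m \geq 0} \frac{p_m}{m!} S^m \Big)
\otimes
\Big( \sum_{n \geq 0} \frac{p_n}{n!} T^n \Big)
=
\sum_{m, n \geq 0} p_m \otimes p_n \frac{S^m T^n}{m! n!}
=
\mathfrak{p}(S, T).
\]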
\subsection{Restricting the double transfer to primitives}
\label{subsect:primitive_double_transfer}
A primitive $\mathfrak{p}$ of degree $2k$ in $MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)$ corresponds to
a morphism of comodules
\[
\mathfrak{p} \in
\hom_{MU_* MU}(MU_* [2k] , MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)).
\]
This induces a class $\mathfrak{p}^* [\kappa] \in \mathrm{Ext}^1_{MU_*MU} (MU_* [2k] , MU_* \otimes {\mathbb{Q}/\mathbb{Z}} [-4])$ and, via the chromatic connecting morphism $\partial_1$, the image of the algebraic double transfer:
\[
\mathfrak{p} ^* [e_\tau]^2 = \partial_1 \mathfrak{p}^* [\kappa] \in \mathrm{Ext}^2 _{MU_*MU} (MU_*[2k] , MU_* [-4]),
\]
where the identification follows from Proposition \ref{prop:kappa-transfer}.
In particular, to understand the restriction of the algebraic double transfer to the primitive element $\mathfrak{p}$, it suffices to consider $\mathfrak{p}^* [\kappa]$,
which is represented by the cocycle $\kappa\circ \mathfrak{p}$:
\[
\xymatrix{
MU_* [2k]
\ar[r]^(.4){\mathfrak{p}}
&
MU_*(\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0 )
\ar[r]
^\kappa
&
MU_* MU \otimes {\mathbb{Q}/\mathbb{Z}} [-4].
}
\]
By Proposition \ref{prop:calculate_Xi}, the cocycle $\kappa\circ \mathfrak{p}$ fits
into the commutative diagram
\[
\xymatrix{
MU_* [2k]
\ar[r]^(.4){K \circ \mathfrak{p}}
\ar[rd]_{\kappa \circ \mathfrak{p}}
&
MU_*MU \otimes \mathbb{Q} [-4]
\ar@{->>}[d]
\\
&
MU_* MU \otimes {\mathbb{Q}/\mathbb{Z}} [-4].
}
\]
Recall that the morphism $K$ is given in Proposition \ref{prop:calculate_Xi} by specifying the image of $\underline{\beta}_0 (x) \otimes \underline{\beta}_0 (y)$.
\begin{nota}
Write $\overline{B}_n ^L, \overline{B}_n^R \in MU_*MU \otimes \mathbb{Q}$ for the reduced Bernoulli numbers associated to the left (respectively right) $MU_*$-algebra structures.
\end{nota}
\begin{prop}
\label{prop:restrict_Xi_prim}
The restriction of the morphism $K : MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) \rightarrow MU_*MU\otimes \mathbb{Q} [-4]$ to the primitive elements is determined
by
\[
K (p_m \otimes p_n)
=
(\overline{B}^R_{m+1} - \overline{B}^L _{m+1})\overline{B}^R_{n+1}
\]
for natural numbers $m,n$.
\end{prop}
\begin{proof}
By Proposition \ref{prop:calculate_Xi}, the morphism $K $ is given by
\[
\underline{\beta}_0 (x) \otimes \underline{\beta}_0 (y)
\mapsto
\Big(
\frac{1}{x}- \frac{1}{\underline{b}(x)}
\Big)
\Big(
\frac{1}{\log^L y }
- \frac{1}{\underline{b}(y)}
\Big).
\]
Lemma \ref{lem:identify_primel} identifies the generating formal power series $\mathfrak{p}(S,T)$ for the primitive elements $p_i \otimes p_j$; thus the image of $\mathfrak{p}(S,T)$ is given by substituting the power series $x= \exp^L
S $, $y=\exp^L T$ in the above expression (note that the left module structure of $MU_*MU$ is used), which gives
\[
\mathfrak{p}(S,T)
\mapsto
\Big(
\frac{1}{\exp^L S}- \frac{1}{\underline{b}(\exp^L S)}
\Big)
\Big(
\frac{1}{\log^L ( \exp^L T)}
- \frac{1}{\underline{b}(\exp^L T)}
\Big).
\]
Using the identity $\underline{b}(\exp^L X) = \exp^R X$ (which follows from $\log^L = \log^R \circ \underline{b}$ of Lemma \ref{lem:identify_b}), simplifying and reversing the order of the terms in each bracket (the two sign changes cancel), this gives:
\[
\Big(
\frac{1}{\exp^R S}- \frac{1}{\exp^L S}
\Big)
\Big(
\frac{1}{\exp^R T}
-
\frac{1}{T}
\Big).
\]
The result follows from the definition of $\mathfrak{p}(S,T)$ and of the reduced Bernoulli numbers.
\end{proof}
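In terms of generating series, the final step reads as follows (the expansions below encode the reduced Bernoulli numbers; note that the poles at $S=0$ cancel in the first factor):
\[
\frac{1}{\exp^R S}- \frac{1}{\exp^L S}
= \sum_{m \geq 0} \big(\overline{B}^R_{m+1} - \overline{B}^L_{m+1}\big) \frac{S^m}{m!},
\qquad
\frac{1}{\exp^R T} - \frac{1}{T}
= \sum_{n \geq 0} \overline{B}^R_{n+1} \frac{T^n}{n!},
\]
so that comparison of the coefficients of $\frac{S^m T^n}{m! n!}$ with those of $\mathfrak{p}(S,T)$ gives the stated value of $K(p_m \otimes p_n)$.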
In principle, this result determines the class $\mathfrak{p} ^* [\kappa]$, for any primitive $\mathfrak{p}$. This can be made more concrete by passing to elliptic homology and appealing to the invariants introduced by Laures \cite{laures} and by Behrens \cite{behrens}, as explained in the following sections.
\section{Passage to elliptic homology}
To study the $p$-local Adams-Novikov two-line, for $p \geq 5$ a prime, complex cobordism can usefully be replaced by elliptic homology, by change of rings.
The results of this section are entirely algebraic, relying on the fact that the formal group law of elliptic homology is defined over the ring of holomorphic modular forms.
\subsection{Formal group law input}
\label{subsect:fgl_input}
Consider the ring ${MF}$ of
holomorphic modular forms of level one over the ring ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$, so that ${MF} \cong {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [c_4, c_6]$,
graded by weight. The Fourier expansion at the cusp at infinity defines the
$q$-expansion ${MF}\hookrightarrow {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]] [u]$, a
monomorphism of rings by the $q$-expansion principle (the variable $u$
corresponds to the grading). The morphism $q^0 :
{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}[[q]] \rightarrow {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$ sending a power series to its
constant term induces a ring morphism ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]] [u] \rightarrow {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [u]
$. This gives a commutative diagram of ring morphisms:
\begin{eqnarray}
\label{eqn:q0_hol}
\xymatrix{
L
\ar[r]
\ar[d]_{{\mathbb{G}_{\mathrm{m}, u}}}
&
{MF}
\ar[d]
\\
{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [u]
&
{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]][u] .
\ar[l]^{q^0}
}
\end{eqnarray}
The composite $L \rightarrow {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]][u]$ classifies the (graded) formal
group law associated to the Tate Weierstrass curve defined
over ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]] [u]$. After the reduction $q \mapsto 0$, this becomes the
multiplicative (graded) formal group law ${\mathbb{G}_{\mathrm{m}, u}}$ over ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [u]$, since the Tate curve has
multiplicative reduction.
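For orientation, recall (up to the normalization conventions in force here) that the graded multiplicative formal group law may be written
\[
{\mathbb{G}_{\mathrm{m}, u}}(x, y) = x + y + u x y,
\qquad
\log^{{\mathbb{G}_{\mathrm{m}, u}}}(x) = \frac{1}{u}\log (1 + ux) = \sum_{n \geq 1} (-u)^{n-1} \frac{x^n}{n},
\]
over ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [u]$ (rationally, for the logarithm).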
\begin{rem}
The universal Weierstrass curve over the ring of holomorphic modular forms ${MF}$ is not an elliptic curve;
however, the associated formal group law is
defined, since the curve is smooth at the identity section. Similarly for the Tate curve.
\end{rem}
The ring $\mf^{\mathrm{mer}}$ of meromorphic modular forms over ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$ is isomorphic
to $ {MF} [\Delta^{-1}]$, where $\Delta$ is the discriminant. The $q$-expansion of a
meromorphic modular form lies in ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]][q^{-1}]$ and $q^0$ defines an
additive morphism $q^0 : {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]][q^{-1}] \rightarrow {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$. This
gives the following commutative diagram
\begin{eqnarray}
\label{eqn:mf-fgl}
\xymatrix{
L
\ar[r]
\ar[d]_{{\mathbb{G}_{\mathrm{m}, u}}}
&
{MF}
\ar@{^(->}[r]
\ar[d]
&
\mf^{\mathrm{mer}}
\ar[d]
\\
{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [u^{\pm 1}]
&
{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]][u]
\ar@{^(->}[r]
\ar[l]^{q^0}
&
{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [[q]][q^{-1},u^{\pm 1}],
\ar@{-->}@/^2pc/[ll]^{q^0}
}
\end{eqnarray}
in which the solid arrows are ring morphisms.
The morphisms $L \rightarrow {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [u^{\pm 1}] $ and $L \rightarrow
\mf^{\mathrm{mer}}$ are Landweber exact and correspond respectively to
$KU_{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$ (which will be denoted here simply by $KU$) and a version of elliptic homology, denoted by ${Ell}$.
\begin{prop}\cite[Theorem 2.7]{laures}.
The ring ${Ell} _ 0 KU$ is isomorphic to Katz's ring
$\mathfrak{D}_{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$ of divided congruences over ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$.
\end{prop}
\begin{rem}
\
\begin{enumerate}
\item
The ring ${Ell}_* KU $ is concentrated in even degree and is $2$-periodic.
\item
The ring ${Ell}_0 KU$ is a subring of ${Ell}_0 KU \otimes \mathbb{Q}\cong
({Ell}_* \otimes KU_* \otimes \mathbb{Q})_0$ and identifies with the sub-${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$-module
of sums $\sum_i f_i $ of modular forms such that the Fourier expansion $\sum_i f_i(q)$ has coefficients
in ${\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$; this is precisely the ring $\mathfrak{D}_{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$ of divided congruences \cite{katz}.
\end{enumerate}
\end{rem}
\subsection{The reduction map $\overline{\rho}^1$}
\label{subsect:rho}
The additive morphism $q^0 : \mf^{\mathrm{mer}} \rightarrow {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [u^{\pm 1}]$ is
realized by a morphism of spectra $q^0 : {Ell} \rightarrow KU$, which is
derived from Miller's elliptic character (see
\cite{laures}, following Miller \cite{millerW}). Hence there is an induced
morphism of spectra
$
{Ell} \smash {Ell}
\rightarrow
KU \smash {Ell} ,
$
which induces a morphism of right ${Ell}_*$-modules
\[
\overline{\rho}^1 : {Ell}_* {Ell}
\rightarrow
KU_* {Ell},
\]
which is used in defining Laures' $f$-invariant \cite{laures} (see Section \ref{subsect:f_inv}).
Since ${Ell}$ and $KU$ are Landweber exact, ${Ell}_* {Ell} \otimes \mathbb{Q} \cong
{Ell}_* \otimes {Ell}_* \otimes \mathbb{Q}$ and $KU_* {Ell} \otimes
\mathbb{Q} \cong KU_*\otimes {Ell}_* \otimes \mathbb{Q}$.
\begin{prop}
\label{prop:rho_rat}
The morphism $\overline{\rho}^1 \otimes \mathbb{Q}$ is the morphism of right ${Ell}_* \otimes \mathbb{Q}$-modules:
\[
q^0 \otimes {Ell}_* \otimes \mathbb{Q} :
{Ell}_* \otimes {Ell}_* \otimes \mathbb{Q}
\cong
{Ell}_* {Ell} \otimes \mathbb{Q}
\rightarrow
KU_* {Ell}\otimes \mathbb{Q}
\cong
KU_* \otimes {Ell}_* \otimes \mathbb{Q}
\]
\end{prop}
There is a morphism of short exact sequences of right ${Ell}_*$-modules
\begin{eqnarray}
\label{eqn:ses-rho1}
\xymatrix{
\ \ \ \ 0
\ar[r]
&
{Ell}_*{Ell}
\ar[r]
\ar[d]_{\overline{\rho}^1}
&
{Ell}_*{Ell} \otimes \mathbb{Q}
\ar[r]
\ar[d]^{\overline{\rho}^1 \otimes \mathbb{Q}}
&
{Ell}_*{Ell} \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}
\ar[r]
\ar[d]^{\overline{\rho}^1 \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}}
&
0
\\
0
\ar[r]
&
KU_* {Ell}
\ar[r]
&
KU_* {Ell}\otimes \mathbb{Q}
\ar[r]
&
KU_* {Ell} \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}
\ar[r]
&
0.
}
\end{eqnarray}
Thus $\overline{\rho}^1 \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$ is determined by $\overline{\rho}^1 \otimes \mathbb{Q}$ and
hence by the additive morphism $q^0$.
\begin{rem}
Following \cite[Theorem 4.2]{behrens_laures}, the morphism $\overline{\rho}^1$ is used here to define the $f$-invariant rather than the analogous morphism
$\rho^1 : {Ell}_*{Ell} \rightarrow {Ell}_* KU$ (compare \cite[Proposition 3.9]{laures}). The relationship between the two approaches to calculating the $f$-invariant is explained by \cite[Proposition 3.10]{laures}.
\end{rem}
\subsection{Reduction of the cocycle $\kappa$}
\label{subsect:basechange_kappa_ell}
The class $[\kappa] \in \mathrm{Ext}^1 _{MU_*MU} (MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0), MU_* \otimes {\mathbb{Q}/\mathbb{Z}}[-4])$ corresponds to the algebraic double transfer by Proposition \ref{prop:kappa-transfer}; by Proposition \ref{prop:calculate_Xi}, the representing cocycle
\[
\kappa \in \hom_{MU_*} (MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) , MU_*MU \otimes {\mathbb{Q}/\mathbb{Z}}[-4])
\]
is induced by the morphism $K \in \hom_{MU_*} (MU_*
(\mathbb{C} \mathbf{P}^\infty_0 \smash
\mathbb{C} \mathbf{P}^\infty_0) , MU_*MU \otimes \mathbb{Q} [-4])$, by composition with the quotient map $MU_*MU
\otimes \mathbb{Q} \twoheadrightarrow MU_*MU \otimes {\mathbb{Q}/\mathbb{Z}}$.
Base change along $MU_* \rightarrow {Ell}_*$ gives a cocycle
\[
\kappa_{{Ell}} :=\ _{{Ell}_*}\kappa _{{Ell}_*} \in \hom_{{Ell}_*}
({Ell}_*(\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) ,
{Ell}_* {Ell} \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [-4]),
\]
which represents a class $[\kappa_{{Ell}}] \in \mathrm{Ext}^1 _{{Ell}_*{Ell}} ({Ell}_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0), {Ell}_* \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}[-4])$,
by Lemma \ref{lem:cocycle-base-change}. The following is clear:
\begin{lem}
\label{lem:kappa_ell_Xi}
The morphism $\kappa_{{Ell}}$ is the reduction of the morphism
\[
_{{Ell}_*}K_{{Ell}_*} \in
\hom_{{Ell}_*} ({Ell}_*(\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) ,
{Ell}_* {Ell} \otimes \mathbb{Q} [-4])
\]
via the morphism
${Ell}_* {Ell} \otimes \mathbb{Q} [-4] \twoheadrightarrow {Ell}_* {Ell}\otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}
[-4]$.
\end{lem}
\begin{prop}
\label{prop:kappa-tmf}
The morphism of right ${Ell}_*$-modules
\[
(\overline{\rho}^1 \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} ) \circ \kappa_{Ell} \in \hom
({Ell}_*(\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) ,
KU_* {Ell} \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} [-4])
\]
coincides with the morphism $_{KU_*} \kappa _{{Ell}_*}$.
Hence, the morphism $(\overline{\rho}^1 \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} ) \circ \kappa_{Ell}$ is the reduction
of the morphism of right ${Ell}_*$-modules
\[_{KU_*}K_{{Ell}_*} \in \hom ({Ell}_*
(\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0) , KU_* {Ell}\otimes \mathbb{Q} [-4])
\]
via the morphism
$KU_* {Ell}\otimes \mathbb{Q} [-4] \twoheadrightarrow KU_* {Ell} \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}
[-4]$.
\end{prop}
\begin{proof}
The diagram of short exact sequences (\ref{eqn:ses-rho1}) together
with Lemma \ref{lem:kappa_ell_Xi} show that it is sufficient to calculate the
respective morphisms to $KU_* {Ell} \otimes \mathbb{Q}$. This can be carried out
using the identification of $\overline{\rho}^1 \otimes \mathbb{Q}$ given by Proposition \ref{prop:rho_rat}.
There is a commutative diagram
\[
\xymatrix{
{Ell}_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar@/^1pc/[rd]^{_{{Ell}_*}K_{{Ell}_*}}
\ar[d]_{_{{MF}}K_{{Ell}_*}}
\\
{MF} \otimes_{MU_*} MU_*{Ell} \otimes \mathbb{Q} [-4]
\ar[d]_{q^0 \otimes {Ell}_*}
\ar[r]
&
{Ell}_* \otimes _{MU_*} MU_* {Ell}_*\otimes
\mathbb{Q} [-4]
\ar@/^1pc/[ld]^(.4){\overline{\rho}^1 \otimes \mathbb{Q}}
\\
KU_* {Ell} \otimes \mathbb{Q} [-4],
}
\]
where the horizontal morphism is induced by ${MF} \rightarrow \mf^{\mathrm{mer}} \cong {Ell}_*$.
The commutativity of the top triangle follows from the fact that the
elliptic formal group law is defined over the ring of holomorphic modular forms, ${MF}$, and the commutativity of
the lower triangle follows from the commutative diagram (\ref{eqn:mf-fgl}).
To complete the proof, observe that the vertical composite is the morphism $_{KU_*}K_{{Ell}_*}$, by the
commutative diagram (\ref{eqn:q0_hol}).
\end{proof}
\section{The $f$ and $f'$ invariants}
\label{sect:invariants}
The algebraic image of the double transfer can be analysed by using either the $f$-invariant of Laures (considered as an
invariant of the Adams-Novikov two-line) or the $f'$-invariant introduced by Behrens.
\subsection{Recollections on the $f$ invariant}
\label{subsect:f_inv}
The $f$-invariant of Laures \cite{laures} is a
homomorphism
\[
f :
\pi_{2k}(S) \otimes {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}
\rightarrow
\mathfrak{D} _\mathbb{Q} / \Big( \mathfrak{D}_{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} + (\mf^{\mathrm{mer}}_0)_\mathbb{Q} +
(\mf^{\mathrm{mer}}_{k+1})_\mathbb{Q} \Big).
\]
This factorizes across an invariant
\[
\iota^2 : \mathrm{Ext}^{2,2k+2} _{MU_*MU} (MU_* , MU_*) \otimes {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}
\hookrightarrow
\mathfrak{D} _\mathbb{Q} / \Big(\mathfrak{D}_{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} \oplus (\mf^{\mathrm{mer}}_0)_\mathbb{Q} \oplus
(\mf^{\mathrm{mer}}_{k+1})_\mathbb{Q} \Big),
\]
where the injectivity is given by \cite[Proposition 3.9]{laures}.
Via the chromatic connecting map
\[
\mathrm{Ext}^{1,*}_{MU_*MU} (MU_*,MU_* \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} )
\stackrel{\partial_1}{\rightarrow}
\mathrm{Ext}^{2,*}_{MU_*MU} (MU_*,MU_* ) \otimes {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]},
\]
$\iota^2$ defines an invariant of
$
\mathrm{Ext}^{1,*}_{MU_*MU} (MU_*,MU_* \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} ).
$
Change of rings associated to the orientation $MU_* \rightarrow {Ell}_*$, allows the respective groups to be replaced by
\begin{eqnarray*}
&& \mathrm{Ext}^{2,*} _{{Ell}_*{Ell}} ({Ell}_* , {Ell}_*)
\\
&& \mathrm{Ext}^{1,*}_{{Ell}_*{Ell}} ({Ell}_*,{Ell}_* \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} ).
\end{eqnarray*}
We identify the invariant $\iota^2$ following Behrens and Laures. Write $M_k ^{\bullet +1} \cong \pi_{2k} ({Ell}^{\bullet +1})$ for the cobar complex associated to
${Ell}$. A morphism between semi-cosimplicial abelian groups is defined \cite[page 25]{behrens_laures}:
\begin{eqnarray}
\label{eqn:BL_semicosimplicial}
\xymatrix{
M_k^{(1)}
\ar@<.5ex>[r]|{d_0}
\ar@<-.5ex>[r]|{d_1}
\ar[d]
_{\overline{\rho}^0}
&
M_k^{(2)}
\ar@<1ex>[rr]|{d_0}
\ar@<0ex>[rr]|{d_1}
\ar@<-1ex>[rr]|{d_2}
\ar[d]
_{\overline{\rho}^1}
&&
M_k^{(3)}
\ar[d]
_{\overline{\rho}^2}
\\
(\mf^{\mathrm{mer}}_k)_{{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}}
\ar@<.5ex>[r]|(.6){d_0}
\ar@<-.5ex>[r]|(.6){d_1}
&
\mathfrak{D}_{{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}}
\ar@<1ex>[rr]|(.3){d_0}
\ar@<0ex>[rr]|(.3){d_1}
\ar@<-1ex>[rr]|(.3){d_2}
&&
\mathfrak{D}_\mathbb{Q} / \big(
(\mf^{\mathrm{mer}}_k)_{\mathbb{Q}}
\oplus
(\mf^{\mathrm{mer}}_0)_\mathbb{Q}
\big)
}
\end{eqnarray}
The composite of $\overline{\rho}^2$ with the projection
\[
\mathfrak{D}_\mathbb{Q} / \big(
(\mf^{\mathrm{mer}}_k)_{\mathbb{Q}}
\oplus
(\mf^{\mathrm{mer}}_0)_\mathbb{Q}
\big)
\twoheadrightarrow
\mathfrak{D}_\mathbb{Q} / \big(
\mathfrak{D}_{{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}} \oplus (\mf^{\mathrm{mer}}_k)_{\mathbb{Q}} \oplus (\mf^{\mathrm{mer}}_0)_\mathbb{Q}
\big)
\]
induces the morphism
\[
\iota^2 : \mathrm{Ext}^{2,2k} _{{Ell}_*{Ell}} ({Ell}_* , {Ell}_*) \otimes {\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}
\hookrightarrow
\mathfrak{D} _\mathbb{Q} / \Big(\mathfrak{D}_{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} \oplus (\mf^{\mathrm{mer}}_0)_\mathbb{Q} \oplus
(\mf^{\mathrm{mer}}_{k})_\mathbb{Q} \Big),
\]
on restriction to cocycles.
Write the chain cocomplex associated to the cobar complex as
\[
\xymatrix{
M_k^{(1)}
\ar[r]^{\delta^0}
&
M_k^{(2)}
\ar[r]^{\delta^1}
&
M_k^{(3)}
\ar[r]
&
\ldots
\ \ .
}
\]
The morphism $\iota^2$ is identified explicitly by the following straightforward application of chromatic arguments.
\begin{lem}
\label{lem:iota2_rho1}
Let $x$ be a $2$-cocycle in $M_k^{(3)}$ which represents a class $$[x] \in \mathrm{Ext}^{2, *} _{{Ell}_* {Ell}} ({Ell}_*, {Ell}_*).$$ Then
\begin{enumerate}
\item
there exists an element $c \in M_k^{(2)}$ and a non-zero integer $n$ such that $\delta^1 c = nx$;
\item
the invariant $\iota^2 [x]$ is represented by the element $\frac{1}{n}\overline{\rho}^1 (c) \in \mathfrak{D}_\mathbb{Q}$.
\end{enumerate}
\end{lem}
\begin{proof}
A straightforward consequence of the commutative diagram (\ref{eqn:BL_semicosimplicial}) together with the fact that $\mathrm{Ext}^d_{{Ell}_*{Ell}}({Ell}_* , {Ell}_*) \otimes \mathbb{Q}$ is trivial
for $d >0$.
\end{proof}
\begin{prop}
\label{prop:f-inv_1cocycle}
Let $[c] \in \mathrm{Ext}^{1,2k}_{{Ell}_*{Ell}} ({Ell}_* , {Ell}_* \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]})$ be represented by a
cocycle $c : {Ell}_* [2k] \rightarrow {Ell}_* {Ell} \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}$ which factorizes as
left ${Ell}_*$-module morphisms
\[
\xymatrix{
{Ell}_*[2k]
\ar[r]^(.4){\hat{c}}
\ar[rd]
_c
&
{Ell}_*{Ell} \otimes \mathbb{Q}
\ar@{->>}[d]
\\
&
{Ell}_*{Ell} \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]}.
}
\]
Then the invariant $\iota^2( \partial_1 [c])$ is represented by the image of the generator
under the map
\[
\xymatrix{
{Ell}_*[2k]
\ar[rr]^(.45){\ _{KU_*} \hat{c} _{{Ell}_*} }
&&
KU_*{Ell} \otimes \mathbb{Q},
}
\]
where $KU_{2k} {Ell}\otimes \mathbb{Q}$ is identified with $\mathfrak{D}_\mathbb{Q}$ by periodicity.
\end{prop}
\begin{proof}
The result follows from Lemma \ref{lem:iota2_rho1}.
\end{proof}
\subsection{Restricting the $f$-invariant to primitives}
\begin{nota}
For $\mathfrak{p}$ a primitive in $\mathbf{P} MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)$, let $f(\mathfrak{p})$
denote the $f$-invariant of $ \mathfrak{p}^* [\kappa]\in \mathrm{Ext}^1_{MU_*MU}
(MU_*[|\mathfrak{p}| +4] , MU_* \otimes {\mathbb{Q}/\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]})$.
\end{nota}
For $n$ a natural number, write $\overline{B}_n ^{{Ell}}\in {Ell}_* \otimes \mathbb{Q}$ (respectively
$\overline{B}_n ^{KU} \in KU_* \otimes \mathbb{Q}$) for the reduced Bernoulli numbers associated to the complex
orientations of ${Ell}$ and $KU$ respectively.
\begin{rem}
The reduced Bernoulli number $\overline{B}_n ^{{Ell}}$ is defined in the ring ${MF} \otimes \mathbb{Q}$ of holomorphic modular forms, since the formal group law of ${Ell}_*$ is
the image of a formal group law over ${MF}$ via the morphism ${MF}\hookrightarrow \mf^{\mathrm{mer}}\cong {Ell}_*$.
\end{rem}
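The following computational aside is ours, not from the text: assuming the standard multiplicative orientation of $KU$, with $\exp^{KU}(S)=e^S-1$, the numbers $B_i^{KU}$ appearing in the series below reduce to the classical Bernoulli numbers defined by $S/(e^S-1)=\sum_n B_n S^n/n!$, which satisfy the standard recurrence $\sum_{k=0}^{m}\binom{m+1}{k}B_k=0$ for $m\ge 1$. The sketch below computes them exactly; the normalization of the reduced numbers $\overline{B}_n$ is left to the paper's conventions.

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    """Bernoulli numbers B_0..B_{n_max} for the generating function
    t/(e^t - 1) = sum_n B_n t^n / n!  (so B_1 = -1/2)."""
    B = [Fraction(0)] * (n_max + 1)
    B[0] = Fraction(1)
    for m in range(1, n_max + 1):
        # recurrence: sum_{k=0}^{m} C(m+1, k) B_k = 0  for m >= 1
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B[m] = -acc / (m + 1)
    return B

if __name__ == "__main__":
    B = bernoulli(8)
    print(B[2], B[4], B[6])  # 1/6 -1/30 1/42
```

For instance, `bernoulli(12)[12]` is $-691/2730$, exact rational arithmetic being the point of using `Fraction`.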
\begin{thm}
\label{thm:f-pspt}
Let $s,t$ be natural numbers.
The $f$-invariant $$f(p_s \otimes p_t)
\in \mathfrak{D} _\mathbb{Q} / \Big(\mathfrak{D}_{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} \oplus (\mf^{\mathrm{mer}}_0)_\mathbb{Q} \oplus
(\mf^{\mathrm{mer}}_{s+t+2})_\mathbb{Q} \Big),$$ is represented by the element
$
- \overline{B}_{t+1}^{Ell} \overline{B} _{s+1} ^{KU}
\in
\mathfrak{D}_\mathbb{Q}.
$
\end{thm}
\begin{proof}
By Proposition \ref{prop:f-inv_1cocycle}, these invariants are represented by the morphism
$
\ _{KU_*} ( K \circ \mathfrak{p}) _{{Ell}_*}.
$
Hence, by Proposition \ref{prop:restrict_Xi_prim},
$f(p_s \otimes p_t)$ is represented by
\[
(\overline{B}_{s+1} ^{{Ell}} - \overline{B} _{s+1} ^{KU}) \overline{B}_{t+1}^{{Ell}}
\in
\mathfrak{D}_\mathbb{Q}.
\]
The term $\overline{B}_{s+1}^{{Ell}} \overline{B}_{t+1} ^{{Ell}}$ becomes zero on passage to the quotient, since it belongs to
the subgroup $(\mf^{\mathrm{mer}}_{s+t+2})_\mathbb{Q}$.
\end{proof}
\begin{cor}
\label{cor:symm_f-invariant}
Let $s,t$ be natural numbers.
The invariant $f(p_s \otimes p_t)$ is represented by the element
$
\overline{B}_{s+1}^{Ell} \overline{B} _{t+1} ^{KU}
\in
\mathfrak{D}_\mathbb{Q}.
$
\end{cor}
\begin{proof}
The group $\mathfrak{S}_2$ acts on $MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)$ by comodule morphisms induced by interchanging the factors $\mathbb{C} \mathbf{P}^\infty_0$. It is straightforward to show that the induced right action on $\mathrm{Ext}_{MU_*MU} ^2 (MU_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0) , MU_* [-4]) $ satisfies
\[
([e_\tau]^2 ) \sigma = \mathrm{sgn} (\sigma) [e_\tau]^2.
\]
Hence
$
f (p_s \otimes p_t) = - f (p_t \otimes p_s);
$
in particular, the $f$-invariant of $p_s \otimes p_t$ is represented by the element $\overline{B}_{s+1}^{Ell} \overline{B} _{t+1} ^{KU}$.
\end{proof}
\subsection{The $f'$-invariant on primitives}
For $p \geq 5$ a prime, Behrens \cite{behrens} defines the $f'$-invariant via a morphism
\[
f' :
\mathrm{Ext}^{2,2k+2} _{MU_*MU} (MU_* , MU_*)_{(p)}
\rightarrow
H^0(C (l)^\bullet/p^\infty, v_1 ^\infty )_{2k+2},
\]
where $l$ is a topological generator of $\zed_p^\times$ (for example $l = \gamma$) and $C(l)^\bullet$ is an
explicit semi-cosimplicial abelian group which is defined in terms of modular forms of level one and modular forms of level $l$. Namely, as in \cite{behrens}, write $M_k (\Gamma_0 (l))_{\zed_p}$ for the space of modular forms of weight $k$ and level $\Gamma_0(l)$ over $\zed_p$
which are meromorphic at the cusps. Then the semi-cosimplicial graded abelian group is of the form
\[
C(l)^\bullet_{2k} =
\Big(
\xymatrix{
(M_k)_{\zed_p}
\ar@<.5ex>[r]|(.4){d_0}
\ar@<-.5ex>[r]|(.4){d_1}
&
*{
\begin{array}{c}
M_k(\Gamma_0 (l))_{\zed_p}\\
\times\\
(M_k)_{\zed_p}
\end{array}
}
\ar@<1ex>[r]
\ar[r]
\ar@<-1ex>[r]
&
M_k(\Gamma_0 (l))_{\zed_p}
}
\Big),
\]
where the morphisms $d_0, d_1$ are identified explicitly in terms of $q$-expansions.
(See \cite[Section 6]{behrens} and the review in \cite[Section 3]{behrens_laures}.) It follows that the $f'$-invariant of a class is
represented by a modular form which satisfies certain congruences.
\begin{rem}
Behrens and Laures work $p$-locally and replace $MU$ by $BP$, so as to accord better with the results of Miller, Ravenel and Wilson \cite{MRW}. Thus, below ${Ell}_*$
denotes $p$-local elliptic homology ($p \geq 5$), and a $p$-typical orientation $BP_* \rightarrow {Ell}_*$ is fixed, as in \cite{behrens_laures}.
\end{rem}
Behrens and Laures show that the $f'$-invariant fits into a commutative diagram
\begin{eqnarray}
\label{eqn:f'_invariant}
\xymatrix{
\mathrm{Ext}^{2,4t}_{BP_*BP} (BP_*, BP_*)
\ar[dd]_{f'}
&
\mathrm{Ext}^{0,4t}_{BP_*BP} (BP_*, BP_*/p^\infty,v_1^\infty)
\ar[l]_{\partial_1\partial_2}
\ar[d]^{L_{v_2}}
\ar@/_1pc/[ldd]
\\
&\mathrm{Ext}^{0,4t}_{BP_*BP} (BP_*, BP_*/p^\infty,v_1^\infty [v_2^{-1}])
\ar[d]^\cong
\\
H^0(C (l)^\bullet/p^\infty, v_1 ^\infty )_{4t}
&
\mathrm{Ext}^{0,4t}_{{Ell}_*{Ell}} ({Ell}_*, {Ell}_*/p^\infty,v_1^\infty),
\ar[l]^(.55){\cong}_(.55){\tilde{\eta}}
}
\end{eqnarray}
where the diagonal arrow is induced by the $p$-typical orientation of ${Ell}$, the vertical change of rings isomorphism is given in the proof of
\cite[Lemma 4.6]{behrens_laures} and the isomorphism $\tilde{\eta}$ in \cite[Proposition 3.17]{behrens_laures}. The upper triangle is commutative by
\cite[Diagram 3.15]{behrens_laures} and the lower triangle is commutative by \cite[Diagram 3.16]{behrens_laures}. Up to the isomorphism $\tilde{\eta}$, the $f'$-invariant can be considered
as taking values in the comodule primitives of ${Ell}_*/p^\infty,v_1^\infty$; Behrens gives a modular description of $H^0(C (l)^\bullet/p^\infty, v_1 ^\infty )_{4t}$
in \cite[Theorems 1.2, 1.3]{behrens}.
Consider classes which are in the image of the algebraic double transfer. Recall that $\overline{\Theta}$ defines a chromatic factorization of the double transfer and this
induces a commutative diagram
\begin{eqnarray}
\label{eqn:alg_double_transfer}
\xymatrix{
&
\mathrm{Ext}_{BP_*BP}^{0,*-4} (BP_* , BP_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0))
\ar[d]^{BP_* (\overline{\Theta})}
\ar[ld]
\\
\mathrm{Ext}^{2,*}_{BP_* BP} (BP_* , BP_*)
&
\mathrm{Ext}^{0,*}_{BP_* BP} (BP_* , BP_* /p^\infty, v_1^\infty)
\ar[l]^{\partial_1 \partial_2},
}
\end{eqnarray}
by Proposition \ref{prop:kappa-transfer} and Proposition \ref{prop:second-factorization} (with $BP$ in place of $MU$).
\begin{lem}
\label{lem:independence_Theta}
The composite morphism
\[
\xymatrix{
\mathrm{Ext}_{BP_*BP}^{0,4t -4} (BP_* , BP_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0) )
\ar[rr]^{BP_* (\overline{\Theta})}
&&
\mathrm{Ext}^{0,4t}_{BP_* BP} (BP_* , BP_* /p^\infty, v_1^\infty)
\ar[d]
\\
&&
\mathrm{Ext}^{0,4t}_{{Ell}_*{Ell}} ({Ell}_*, {Ell}_*/p^\infty,v_1^\infty)
}
\]
induced by the change of rings associated to the $p$-typical orientation $BP_* \rightarrow {Ell}_*$, is independent of the choice of $\overline{\Theta}$.
\end{lem}
\begin{proof}
Follows from the commutativity of the diagrams (\ref{eqn:f'_invariant}) and (\ref{eqn:alg_double_transfer}), together with the fact that the bottom horizontal morphism in (\ref{eqn:f'_invariant}) is an isomorphism.
\end{proof}
By naturality, it suffices to replace the composite morphism considered above by the morphism
\[
{Ell}_* (\overline{\Theta}) :
\mathrm{Ext}_{{Ell}_*{Ell}}^{0,4t -4} ({Ell}_* , {Ell}_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0) )
\rightarrow
\mathrm{Ext}_{{Ell}_*{Ell}}^{0,4t} ({Ell}_* , {Ell}_* /p^\infty,v_1^\infty )
\]
which is regarded as the $f'$-invariant for primitive elements. This morphism is independent of the choice of orientation; hence, in the following, ${Ell}_*$ is equipped with its standard complex orientation.
Theorem \ref{thm:MU-thetas} gives a commutative
diagram
\[
\xymatrix{
&
\mathbf{P} {Ell}_*(\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar@{^(.>}[d]
\ar@/^1pc/@<4em>@{.>}[dd]^{f'}
\ar@/^3pc/@<4em>@{.>}[ddd]^{f''}
\\
{Ell}_* (\mathbb{C} \mathbf{P}^\infty_{-1} \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar[dd]_{{Ell}_* (\theta)}
&
{Ell}_* (\mathbb{C} \mathbf{P}^\infty_0 \smash \mathbb{C} \mathbf{P}^\infty_0)
\ar@{.>}[l]_{\sigma'}
\ar[d]^{{Ell}_*(\overline{\Theta})}
\\
&
{Ell}_* /p^\infty, v_1 ^\infty [-4]
\ar@{^(->}[d]
\\
{Ell}_*KU \otimes \mathbb{Q}[-4]
\ar[r]
&
{Ell}_* KU \otimes \mathbb{Q}/\Big({Ell}_*KU _{(p)} \oplus ({Ell}_*)_\mathbb{Q} \Big) [-4],
}
\]
where the solid arrows denote comodule morphisms and the dotted arrows morphisms of
$\zed_{(p)}$-modules. The composite of $f'$ with the monomorphism ${Ell}_* /p^\infty, v_1 ^\infty [-4]
\hookrightarrow {Ell}_* KU \otimes \mathbb{Q}/\Big({Ell}_*KU _{(p)} \oplus ({Ell}_*)_\mathbb{Q} \Big) [-4]$ is denoted $f''$, as indicated.
\begin{rem}
The morphism ${Ell}_* (\theta)$ composed with the morphism induced by $\psi^\gamma - 1$ is integral in the appropriate sense, as a consequence of the construction of $\theta$. This can be related to the analysis of $H^0 (C(l)^\bullet/ p^\infty, v_1^\infty) $ using the identifications of \cite[Proposition 6.1]{behrens}.
\end{rem}
The morphism ${Ell}_*(\theta) \circ \sigma'$ is given by Proposition
\ref{prop:MUtheta_sigmap}, after base change to ${Ell}_*$; it is determined by
\[
\underline{\beta}_0 (x) \otimes \underline{\beta}_0 (y)
\mapsto
\Big( \frac{1}{\underline{b}'(x) } - \frac{1}{x}\Big)
\Big(\frac{1}{\log^{{Ell}} y} - \frac{1}{\underline{b}' (y) }\Big)
+
\sum _{i, j >0}\frac{B_i^{KU}}{i!}\frac{B_j^{KU}}{j!}
\left(
\frac{\gamma^i -1}{\gamma^{i+j}-1}
\right)
(\log^{Ell} x)^{i-1}(\log ^{Ell} y) ^{j-1},
\]
where the power series $\underline{b}'$ is understood as $\exp^{KU} \circ
\log^{Ell}$ when considered as a power series over ${Ell}_* KU \otimes \mathbb{Q}$.
\begin{thm}
\label{thm:fprime}
The $f'$-invariant on the primitives of ${Ell}_* (\mathbb{C} \mathbf{P}^\infty_0\smash \mathbb{C} \mathbf{P}^\infty_0) $ is determined by
\[
f'' (p_s \otimes p_t)
=
\Big[
\overline{B}^{Ell}_{s+1}\overline{B}^{KU} _{t+1}
+ \overline{B}^{KU}_{s+1}\overline{B}^{KU}
_{t+1}\frac{\gamma^{s+1}(1- \gamma^{t+1}) }{\gamma^{s+t+2}-1}
\Big],
\]
where $s, t$ are natural numbers and $
\overline{B}^{Ell}_{s+1}\overline{B}^{KU} _{t+1} + \overline{B}^{KU}_{s+1}\overline{B}^{KU}
_{t+1}\frac{\gamma^{s+1}(1- \gamma^{t+1}) }{\gamma^{s+t+2}-1}
$ is considered as an element of ${Ell}_*KU \otimes \mathbb{Q}$.
\end{thm}
\begin{proof}
The method of proof is similar to that used in Proposition \ref{prop:restrict_Xi_prim}.
Substitute $x= \exp^{Ell} S$ and $y = \exp^{Ell} T$ in the power
series representing ${Ell}_* (\theta) \circ \sigma'$; this gives the power series
\[
\Big( \frac{1}{\exp^{KU} S} - \frac{1}{\exp^{Ell} S }\Big)
\Big(\frac{1}{T} - \frac{1}{\exp^{KU} T }\Big)
+
\sum _{i, j >0}\frac{B_i^{KU}}{i!}\frac{B_j^{KU}}{j!}
\left(
\frac{\gamma^i -1}{\gamma^{i+j}-1}
\right)
S^{i-1}T ^{j-1}.
\]
The result follows by identifying coefficients.
\end{proof}
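In our reading of the coefficient identification, passing from the coefficient of $S^sT^t$ in the displayed series to the statement of Theorem \ref{thm:fprime} uses the elementary identity
$$
\frac{\gamma^{s+1}-1}{\gamma^{s+t+2}-1} - 1 = \frac{\gamma^{s+1}(1-\gamma^{t+1})}{\gamma^{s+t+2}-1}\ ,
$$
the $-1$ arising from the first product of exponential terms. A throwaway numerical check of this identity (ours):

```python
from fractions import Fraction

def lhs(g, s, t):
    # (g^{s+1} - 1)/(g^{s+t+2} - 1) - 1
    return Fraction(g**(s + 1) - 1, g**(s + t + 2) - 1) - 1

def rhs(g, s, t):
    # g^{s+1} (1 - g^{t+1}) / (g^{s+t+2} - 1)
    return Fraction(g**(s + 1) * (1 - g**(t + 1)), g**(s + t + 2) - 1)

if __name__ == "__main__":
    for g in (2, 3, 5):
        for s in range(4):
            for t in range(4):
                assert lhs(g, s, t) == rhs(g, s, t)
    print("identity holds for the sampled values")
```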
\begin{rem}
\label{rem:relate_f,f'}
The relationship between the $f$ and the $f'$ invariants (in the general case) is made explicit in \cite[Theorem 4.2]{behrens_laures} by analysing the semi-cosimplicial diagram (\ref{eqn:BL_semicosimplicial}).
Upon restricting to classes arising from comodule primitives via the algebraic double transfer, the relationship is clear. Observe that
Theorem \ref{thm:f-pspt}, Corollary \ref{cor:symm_f-invariant} and Theorem \ref{thm:fprime} show that, on passage to the quotient $$\mathfrak{D} _\mathbb{Q} / \Big(\mathfrak{D}_{\mathbb{Z}[{\scriptstyle{\frac{1}{6}}}]} \oplus (\mf^{\mathrm{mer}}_0)_\mathbb{Q} \oplus
(\mf^{\mathrm{mer}}_{s+t+2})_\mathbb{Q} \Big)$$ the elements $f(p_s \otimes p_t)$ and $f'(p_s \otimes p_t)$ are both represented by the class of the element $\overline{B}^{Ell}_{s+1}\overline{B}^{KU} _{t+1} $ in $\mathfrak{D}_\mathbb{Q}$, since the additional term appearing in Theorem \ref{thm:fprime} becomes trivial in this quotient.
The relationship can be seen as a direct consequence of Lemma \ref{lem:restrict_Theta} and Corollary \ref{cor:relate_MUtheta}; here, the sign appearing in the leading term in Corollary \ref{cor:relate_MUtheta} has been avoided by following Behrens and Lawson in defining the $f$-invariant by using the morphism $\overline{\rho}^1$.
\end{rem}
\chapter{Introduction}
Recent advances in our understanding of non-perturbative superstring
theory
have led to the establishment of many connections between hitherto
unrelated
superstring theories. Many of these connections involve p-brane
solutions of
the respective supergravity theories that couple to the (p+1)-form
potentials
in the Ramond-Ramond (RR) sector. Most of these RR p-branes, and all
of the
IIA ones, are singular as solutions of ten-dimensional (D=10)
supergravity, so
their status in superstring theory was unclear until recently (see
[\PKTb] for
a recent review). It now appears [\pol] that the RR p-branes of type
II
supergravity theories have their place in type II superstring theory
as
`Dirichlet-branes', or `D-branes' [\polb]. These include the p-branes
for
$p=0,2,4,6$ in the type IIA case and the p-branes for $p=1,3,5$ in
the type IIB
case. However, they also include a type IIB 7-brane, and a type IIA
8-brane and
it is possible to view the D=10 spacetime as a type IIB 9-brane
[\pol]. Note
that since the dual of a $p$-brane in D=10 is a ($6-p$)-brane, only
$p$-branes
with $p\le6$ have duals with $p\ge0$ for which a standard
(Minkowski space) interpretation is
available\foot{The IIB 7-brane has a (-1)-brane dual, but the latter
has an
interpretation as an instanton [\ggp].}, so the $p$-branes with
$p\ge7$ have
implications that are qualitatively different from those with
$p\le6$. This
difference is also apparent in the $p$-brane solutions of the
effective IIA or
IIB supergravity field equations. These solutions generally involve a
function
that is harmonic on the ($9-p$)-space `transverse' to the
($p+1$)-dimensional
worldvolume of the $p$-brane. For $p\le6$ the transverse space has
dimension 3
or greater so there exist harmonic functions that are constant at
infinity, but
for $p\ge7$ the transverse space has dimension 2 or less and the
asymptotic
properties are therefore qualitatively different. Partly for this
reason little
attention has been given so far to the $p\ge7$ branes.
Since p-branes couple naturally to (p+1)-form potentials, one expects
to find a
stable p-brane solution of a supergravity theory only if it includes
a
($p+1$)-form potential. From this perspective the IIB D=10 7-brane is
the most
straightforward of the $p\ge 7$ cases because the pseudo-scalar field
of IIB
supergravity can be exchanged for its 8-form dual. Indeed, a type IIB
7-brane
solution has recently been described [\ggp]; it can be viewed as a
dimensional
`oxidation' of the `stringy cosmic string' solution of [\SCS].
However, this
class of 7-brane solutions is specific to the {\it uncompactified}
IIB
supergravity and is therefore not expected to be related by T-duality
to other
type II p-branes. Here we shall present a new class of multi 7-brane
solutions
of the $S^1$-compactified IIB supergravity. In the decompactification
limit the
new solutions reduce to the trivial D=10 Minkowski spacetime
solution.
Nevertheless, as we shall see, these solutions are T-dual to both the
6-brane
and the IIA 8-brane solutions of the IIA theory.
The existence of an 8-brane solution of IIA supergravity is obscured
by the
absence of a 9-form potential, $A_9$, in the standard IIA
supergravity theory.
However, there is one in type IIA superstring theory [\pol] and this
suggests
that it should be possible to introduce one into the IIA supergravity
theory.
The 9-form potential would have a 10-form field-strength $F_{10}$.
Assuming a
standard kinetic term of the form $F_{10}^2$, the inclusion of this
field
does not lead to any additional degrees of freedom (per spacetime
point) and so
is not immediately ruled out by supersymmetry considerations, but it
allows the
introduction of a cosmological constant, as explained many years ago
in the
context of a four-form field strength in four-dimensional field
theories
[\duff, \ant]. As it happens, a version of type IIA supergravity
theory with a
cosmological constant was constructed (up to quartic fermion
terms) some time ago by Romans [\rom], who called it the `massive'
IIA
supergravity theory; the complete construction via superspace methods
was found
subsequently [\CGO]. It has been argued that the existence of the
massive IIA
supergravity is related to the existence of the 9-form potential of
type IIA
superstring theory [\pol]. Here we shall confirm this suggestion by
reformulating the massive IIA supergravity through the introduction
of a 9-form
potential\foot{Strictly speaking we do this only for the bosonic
Lagrangian, but the method guarantees the existence of a fermionic
extension to a full supergravity theory.}.
The new theory has the advantage that its solutions
include those of
both the massless and the massive IIA theory. We propose this new IIA
supergravity theory as the effective field theory of the type IIA
superstring,
allowing for the 9-form potential. It has been suggested [\pol,\pols]
that the
expectation value of the dual of this 10-form field strength should
be
interpreted as the cosmological constant of the massive IIA
supergravity
theory. One result of this paper is the determination of the precise
relation
between these quantities; they are conjugate variables in a sense
discussed
previously in the D=4 context [\Teit].
The massive IIA supergravity theory has the peculiarity that D=10
Minkowski spacetime is {\it not} a solution of the field equations
(and neither is the product of D=4 Minkowski spacetime with a
Calabi-Yau space). Various Kaluza-Klein (KK) type solutions were found
by Romans but none of them were supersymmetric, i.e. his solutions
break all the supersymmetries. A supersymmetric multi 8-brane
configuration was recently proposed as a solution of the Killing spinor condition in an appropriate bosonic background [\PW]. We verify that
this is a solution of the field equations of the new IIA supergravity
theory and we present a generalization of it. The solutions are
all singular at the `centres' of the metric, i.e. the 8-brane positions,
but this is a general feature of RR p-branes.
It is known that after compactification on $S^1$ the {\it
perturbative}
type IIA and type IIB superstrings are equivalent [\polb,\DHS], being
related
by a $Z_2$ T-duality transformation that takes the radius $R$ of the
$S^1$ of
one superstring theory into a radius $1/R$, in appropriate units, of
the other
superstring theory. It follows that the {\it same} effective
N=2 D=9 field theory should be obtained by dimensional reduction of
either the
IIA or IIB theory in D=10, and this is in fact the case [\BHO]. If
this $Z_2$
T-duality is valid non-perturbatively too, then $p$-brane solutions
of the IIA
theory must correspond to $p$-brane solutions of the IIB theory and
vice-versa,
in the sense that there are solutions of either the IIA or the IIB
theory that reduce to the same solution of the $S^1$-compactified
theory.
In particular the double-dimensional reduction to D=9 of a given IIA
8-brane
should be equivalent to the direct reduction to D=9 of some IIB
7-brane. There
is a potential difficulty in verifying this because the relevant D=9
theory
must be a {\it massive} N=2 supergravity theory. It is not too
difficult to see how to obtain a massive N=2 D=9 supergravity
from the massive D=10 IIA theory but it
is not so obvious how the resulting theory may also be obtained from
the (necessarily massless) D=10 IIB theory, although it must be possible
if T-duality is to be valid non-perturbatively. As we shall show, it is
possible by an application of a mechanism for obtaining a massive theory
in a lower dimension from a massless one in a higher dimension.
This mechanism is essentially that of Scherk and Schwarz [\SSM] but
in our case supersymmetry is preserved by the reduction. This result allows
us to map
8-brane solutions of the D=10 IIA theory into 7-brane solutions of
the IIB
theory, and vice-versa.
These IIB 7-brane and IIA 8-brane solutions may be seen as the
effective field theory realization of the associated D-branes of
the corresponding type II superstring theory. In this context, the
$Sl(2;R)$ symmetry of the IIB supergravity is expected to be
replaced by an $Sl(2;Z)$ U-duality [\HT], which amounts to an
identification of points in the space $Sl(2,R)/U(1)$ of IIB vacua
that differ by the action of $Sl(2,Z)$. One interesting
consequence of this IIB duality, when combined with the T-duality
of the 7-brane and 8-brane, is a quantization of the cosmological
constant of the $S^1$-compactified IIA superstring theory\foot{We
thank John Schwarz for suggesting this possibility to us.}.
The organisation of this article is as follows. In section 2, we
begin with a
review of the massive IIA supergravity, introducing some
simplifications.
In section 3, we construct the new formulation of the bosonic sector
of this
theory, incorporating the 9-form gauge field $A_9$, in which the
cosmological
constant emerges as an integration constant. In section 4, we
construct
supersymmetric multi 8-brane solutions of the massive IIA
supergravity theory,
some of which are asymptotically flat. In section 5, we show how both
the
massive IIA supergravity and the (massless) IIB supergravity theories
may be
dimensionally reduced to yield a new D=9 N=2 massive supergravity
theory. We
then use this to establish the massive Type II $T$ duality rules. In
section 6,
we construct the most general seven brane solutions of the IIB theory
that are
both compatible with the KK ansatz and preserve half the
supersymmetry. We then
show that the massless $T$ duality transformations take this solution
to the
IIA 6-brane while the massive $T$-duality transformations take it to
the IIA
8-brane solution. In section 7 we further comment on the relation to
type IIA
superstring theory and the quantization of the cosmological constant,
and on the connection to D=11 `M-theory'. Finally,
in Appendix A we give a
simplified formulation of the supersymmetry transformations of IIB
supergravity.
\chapter{The massive D=10 IIA supergravity}
The bosonic field content of the massive IIA D=10 supergravity theory
comprises
(in our notation) the (Einstein) metric, $g^{(E)}$, the dilaton,
$\sigma$, a
massive 2-form tensor field
$B'$ and a three-form potential $C'$. One introduces the
field-strengths
$$
\eqalign{
G &=4dC' + 6m(B')^2\cr
H &=3dB' }
\eqn\aone
$$
where $m$ is a mass parameter. The Lagrangian for these fields is
[\rom]
$$
\eqalign{
{\cal L} &= \sqrt{-g^{(E)}}\; \Big[ R_{(E)}
-{1\over2}|\partial\sigma|^2 -
{1\over3}e^{-\sigma}|H|^2 -{1\over12}e^{{1\over2}\sigma}|G|^2 -
m^2e^{{3\over2}\sigma}|B'|^2
-{1\over2}m^2 e^{{5\over2}\sigma}\Big]\cr
& + {1\over 9}\varepsilon \big[ dC'dC'B' + mdC'(B')^3 + {9\over20}m^2
(B')^5\big]\ .}
\eqn\atwo
$$
The notation for forms being used here is that a $q$-form $Q$ has
components
$Q_{M_1\dots M_q}$ given by
$$
Q= Q_{M_1\dots M_q}dx^{M_1}\wedge \dots \wedge dx^{M_q}\ .
\eqn\athree
$$
Thus, the $(1/9)\varepsilon dC'dC'B'$ term in \atwo\ is shorthand for
$$
{1\over 9}\varepsilon^{M_1\dots
M_{10}}\partial_{M_1}C'_{M_2M_3M_4}\partial_{M_5}C'_{M_6M_7M_8}B'_{M_9M_{10}}\ .
\eqn\aathree
$$
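As a small illustration of the same convention (ours, not from the text): for the 2-form $B'$, the definition $H=3dB'$ in \aone\ unpacks as

```latex
$$
H = 3\,\partial_M B'_{NP}\; dx^M\wedge dx^N\wedge dx^P\ ,
\qquad\hbox{i.e.}\qquad
H_{MNP} = 3\,\partial_{[M} B'_{NP]}\ ,
$$
```

once the components $H_{MNP}$ are taken totally antisymmetric.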
As explained in [\rom] the massless limit is not found by simply
setting $m=0$
in \atwo\ because the supersymmetry transformations involve terms
containing
$m^{-1}$. Instead, one first makes the field redefinitions
$$
\eqalign{
B' &= B + {2\over m}dA\cr
C' &= \tilde C - {6\over m} AdA\ . }
\eqn\afour
$$
This redefinition introduces the gauge invariance
$$
\eqalign{
\delta A &= -m\Lambda \cr
\delta B &= 2d\Lambda\cr
\delta\tilde C &= -12 Ad\Lambda}
\eqn\afive
$$
for which the gauge-invariant field strengths are
$$
\eqalign{
F&= 2dA + mB\cr
H&= 3dB\cr
G &= 4d\tilde C + 24 BdA + 6m B^2\ .}
\eqn\asix
$$
The bosonic Lagrangian of the massive IIA theory is now
$$
\eqalign{
{\cal L} &= \sqrt{-g^{(E)}}\; \Big[ R_{(E)}
-{1\over2}|\partial\sigma|^2 -
{1\over3}e^{-\sigma}|H|^2 -{1\over12}e^{{1\over2}\sigma}|G|^2 -
e^{{3\over2}\sigma}|F|^2
-{1\over2}m^2 e^{{5\over2}\sigma}\Big]\cr
& + {1\over 9}\varepsilon \big[ d\tilde C d\tilde C B + 6d\tilde C
B^2 dA +
12(dA)^2B^3 +
md\tilde C B^3 + {9\over2}mB^4 dA + {9\over 20}m^2 (B)^5\big]\ ,}
\eqn\aseven
$$
and the bosonic Lagrangian of the massless IIA theory can now be
found by
taking the $m\rightarrow 0$ limit.
The Lagrangian \aseven\ can be simplified by the further redefinition
$$
\tilde C = C-6AB\ .
\eqn\aeight
$$
The $\Lambda$-gauge transformation of the new 3-form $C$ is
$$
\delta C = -6m\Lambda B
\eqn\anine
$$
and the gauge-invariant field strengths, $F$, $H$, and $G$ are now
given by
$$
\eqalign{
F&= 2dA + mB\cr
H&= 3dB\cr
G &=4dC + 24AdB + 6mB^2 \ .}
\eqn\aten
$$
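As a quick consistency check (ours), the field strengths \aten\ are indeed invariant under the $\Lambda$-transformations \afive\ and \anine:

```latex
$$
\eqalign{
\delta F &= 2d(-m\Lambda) + m(2d\Lambda) = 0\cr
\delta H &= 3d(2d\Lambda) = 0\cr
\delta G &= 4d(-6m\Lambda B) + 24(-m\Lambda)dB + 12mB(2d\Lambda)\cr
&= -24m(d\Lambda\, B - \Lambda\, dB) - 24m\Lambda\, dB + 24m\, d\Lambda\, B = 0\ ,}
$$
```

using $d(\Lambda B) = d\Lambda\, B - \Lambda\, dB$ and $B\,d\Lambda = d\Lambda\, B$.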
At the same time, to make contact with string theory, it is
convenient to
introduce the string metric
$$
g_{MN} = e^{-{1\over2}\sigma}g_{MN}^{(E)}\ .
\eqn\aeleven
$$
The bosonic Lagrangian now takes the simple form
$$
\eqalign{
{\cal L} &= \sqrt{-g}\big\{e^{-2\sigma}\; \big[ R
+4|\partial\sigma|^2
-
{1\over3}|H|^2\big] - |F|^2 - {1\over12}|G|^2
- {1\over2}m^2\big\}\cr
& + {1\over 9}\varepsilon\big[ dC dC B + mdCB^3 + {9\over 20}m^2
(B)^5\big]\
.}
\eqn\atwelve
$$
Observe that the final topological term is simply a type of
Chern-Simons (CS)
term associated with the 11-form $G^2H$. Thus, the bosonic action of
the
massive type IIA supergravity theory can be written as
$$
\eqalign{
I = \int_{{\cal M}_{10}}\! d^{10}x\; &\sqrt{-g}\Big\{ e^{-2\sigma}\;
\Big[ R
+4|\partial\sigma|^2 - {1\over3}|H|^2\Big] - |F|^2 -
{1\over12}|G|^2 - {1\over2}m^2\Big\} \cr
& + {1\over 9}\int_{{\cal M}_{11}}\! G^2 H \ ,}
\eqn\athirteen
$$
where ${\cal M}_{11}$ is an 11-manifold with boundary ${\cal
M}_{10}$. Apart
from the cosmological constant, the $m$-dependent terms in the action
can be
simply understood as arising from the replacement of the usual
$m$-independent
field strengths of the massless type IIA theory by their
$m$-dependent
generalizations \aten. Furthermore, the $m$-dependence of these field
strengths
is completely fixed by the `Stueckelberg' gauge transformation
$\delta
A=-m\Lambda$ of $A$, as are the remaining $\Lambda$-transformations.
The
relation of the constant $m$ appearing in this transformation with
the
cosmological constant cannot be understood purely within the context
of the
bosonic Lagrangian but is, of course, fixed by supersymmetry.
Observe that the cosmological constant term in \athirteen\ is now (in
the
string
metric) independent of the dilaton. This is typical of the RR sector
and is
consistent with the idea that $m$ can be interpreted as the
expectation value
of the dual of a RR 10-form field strength. This interpretation would
have the
additional virtue of restoring the invariance under the discrete
symmetry in
which all RR fields change sign, a symmetry that is broken by the
terms linear
in $m$ in \atwelve. We shall now show how to reformulate the massive
IIA theory
along these lines. As we shall see the cosmological constant is
simply related
to, but not equal to, the expectation value of the ten-form field
strength.
\chapter{D=10 IIA supergravity with 9-form potential}
We shall start with the bosonic Lagrangian of \atwelve. Expanding in
powers of $m$, the associated action $I(m)$ is
$$
\eqalign{
I(m) &= I(0) + \int \! d^{10} x \; \Big\{
2m\sqrt{-g}\big[(dC+6AdB)\cdot B^2
-2dA\cdot
B\big] + {m\over 9}\varepsilon dCB^3\cr
\qquad & -{1\over2}m^2\sqrt{-g}\Big[1+2|B|^2+6|B^2|^2\Big] +{m^2\over
20}\varepsilon
B^5\Big\} \ ,}
\eqn\none
$$
where $I(0)$ is the bosonic action of the massless IIA supergravity
theory.
We now promote the constant $m$ to a field $M(x)$, at the same time
introducing
a 9-form potential $A_9$ as a Lagrange multiplier for the constraint
$dM=0$.
Omitting a surface term, the Lagrange multiplier term can be
rewritten as
$$
10\; \varepsilon dA_9 M\ .
\eqn\ntwo
$$
The $A_9$ field equation implies that $M=m$, for some constant $m$,
so the
remaining equations are equivalent to those of the massive IIA theory
except that the constant $m$ is now arbitrary and that we now have an
additional field equation from varying $M$. This additional equation
is
$$
{\delta I(M) \over \delta M(x)} = -\varepsilon F_{10}
\eqn\nthree
$$
where $I(M)$ is the action \none\ but with $M$ replacing $m$, and
$F_{10}= 10\;
dA_9$ is the 10-form field strength of $A_9$. Thus the $M$ equation
simply
determines the new field strength $F_{10}$. Observe that the
expectation value
of $(\varepsilon F_{10})$ is {\it not} equal to the expectation value
of
$\sqrt{-g}M$, as a matter of principle (although it may equal it in
special
backgrounds), but is rather the value of the variable canonically
conjugate to it.
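The sense in which these are conjugate can be seen schematically (our condensed rewriting of \none\ and \nthree, with $a$ and $b$ standing for the field-dependent coefficients of the quadratic and linear terms):

```latex
$$
I(M) + \int\! d^{10}x\; M\varepsilon F_{10}
= I(0) + \int\! d^{10}x\; \Big[\, bM - {a\over2}M^2 + M\varepsilon F_{10}\Big]\ ,
$$
$$
\hbox{the $A_9$ equation giving}\quad M = m\ ,\qquad
\hbox{the $M$ equation giving}\quad \varepsilon F_{10} = aM - b\ .
$$
```

Thus $\langle\varepsilon F_{10}\rangle = am - b$ rather than $\sqrt{-g}\,m$, in accordance with the remark above.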
Note that the gauge and supersymmetry transformations of the action
$I(M)$ no
longer vanish. However, the variations of $I(M)$ are proportional to
$dM$ and
can therefore be cancelled by a variation of the new 9-form gauge
potential
$A_9$. This determines the gauge and supersymmetry transformations of
$A_9$.
The supersymmetry variation will not be needed for our purposes so we
omit it.
The $\Lambda$-gauge transformation of $A_9$ found in this way is
$$
\delta (\varepsilon A_9) = {2\over5}\sqrt{-g} \Big[\Lambda\cdot F +
(\Lambda
B)\cdot
G\Big] - {1\over30}\varepsilon \big( 2\Lambda dC B^2 + M\Lambda
B^4\big)\ .
\eqn\aextra
$$
We now have a new gauge-invariant bosonic action
$$
I(M) + \int d^{10}x\; M\varepsilon F_{10}\ .
\eqn\nfour
$$
The field $M$ can now be treated as an auxiliary field that can be
eliminated
via its field equation
$$
\sqrt{-g}M = K^{-1}(B)\Big\{ \varepsilon (F_{10} + {1\over 9}dC B^3)
+2\sqrt{-g}[(dC + 6AdB)\cdot B^2 - 2dA\cdot B]\Big\}\ ,
\eqn\nfive
$$
where
$$
K(B) = 1+2|B|^2+6|B^2|^2-{1\over10\sqrt{-g}}\varepsilon B^5\ .
\eqn\nsix
$$
Using this relation in \nfour\ we arrive at the Lagrangian
$$
{\cal L}_{new} = {\cal L}_0 + {1\over2}\big[\sqrt{-g}K(B)\big]^{-1} \Big\{
\varepsilon
(F_{10} +
{1\over 9}dC B^3) +2\sqrt{-g}[(dC + 6AdB)\cdot B^2 - 2dA\cdot
B]\Big\}^2
\eqn\nseven
$$
where ${\cal L}_0$ is the bosonic Lagrangian of the massless IIA
theory.
Note the non-polynomial structure of the new Lagrangian in the gauge
field $B$.
This greatly obscures the $\Lambda$-gauge invariance, which is
ensured by the
very complicated $\Lambda$-gauge transformation of $A_9$.
The full IIA supergravity Lagrangian in the new formulation of course
requires the inclusion of the fermion terms. Although we have not
worked these out it should be clear from the above construction of
the purely bosonic sector that they can be deduced directly from those
in the `old' formulation by following the above steps. Thus, the existence
of the full IIA supergravity theory with 9-form potential is guaranteed.
Presumably, there also exists an on-shell superspace formulation of
the field equations of this new theory, which it would be of interest
to find. We leave this problem to future investigations.
\chapter{The IIA D=10 Eightbrane}
The appearance of the 9-form potential in the above reformulation of
the
massive IIA supergravity theory suggests the existence of an
associated 8-brane
solution. We will find solutions of the equations of motion of
\nseven\ of the
form
$$
\eqalign{
ds^2 &= f^2(y)\; dx^\mu dx^\nu\eta_{\mu\nu} + g^2(y)dy^2\cr
\sigma &= \sigma(y) \cr
A_9 &= A_9(y)}
\eqn\bone
$$
with all other fields vanishing, and where $\eta$ is the Minkowski
9-metric.
Such a solution will have 9-dimensional Poincar\'e invariance and
hence an
interpretation as an 8-brane. We shall further require of such a
solution that
it preserve some supersymmetry, so we shall begin by considering the
variation
of the gravitino one-form $\psi$ and the dilatino $\lambda$ in the
presence of
configurations of the above form. The full variations of the massive
IIA theory
can be found in [\rom] in the Einstein--frame. They depend on the
constant
$m$. In the new theory, this constant is replaced by the function $M$
given in \nfive. Here, however, we shall need the fermion variations
in the {\it string-frame}. For $M=0$ these are implicit in the superspace results of [\BGRMV]. For the backgrounds considered here, for which
all fermions vanish and $\sqrt{-g}M=\varepsilon F_{10}$,
the $M\ne 0$ string-frame fermion variations are most easily deduced
from the Einstein-frame results of [\rom]. The result is
$$
\eqalign{
\delta_\epsilon \psi &= D\epsilon + {1\over 8}M
e^{\sigma}\Gamma\epsilon\cr
\delta_\epsilon \lambda &= -{1\over
2\sqrt{2}}\big(\Gamma^M\partial_M\sigma +
{5\over4}Me^{\sigma}\Big)\epsilon\ . }
\eqn\btwo
$$
For configurations of the assumed form, and further assuming that
$\epsilon$
depends only on $y$, the equations $\delta\psi=0$ and
$\delta\lambda=0$ become
$$
\eqalign{
0&=g^{-1}\epsilon' +{1\over 8}M e^{\sigma} \bar\Gamma_y\epsilon
\cr
0& = \big( g^{-1}f'\bar\Gamma_y
+{1\over 4}M fe^{\sigma} \big) \epsilon \cr
0&= \big(g^{-1}\sigma' +
{5\over4}Me^{\sigma}\bar\Gamma_y\big)\epsilon \ ,}
\eqn\bthree
$$
where the prime indicates differentiation with respect to $y$ and
$\bar\Gamma$ are the {\it constant} orthonormal-frame gamma matrices.
To find
non-zero solutions for $\epsilon$ we are now forced to suppose that
$\epsilon$
has a definite `chirality' in the sense that
$$
\bar\Gamma_y\epsilon =\pm \epsilon\ .
\eqn\bfour
$$
We then find that
$$
g^{-1}f' = \mp {1\over 4}Mfe^{\sigma}
\eqn\abfive
$$
and that
$$
g^{-1}\Big(e^{-\sigma}\Big)' =\pm {5\over 4}M\ .
\eqn\bfive
$$
Eliminating $M$ from these equations we deduce that $f$ is
proportional to
$e^{\sigma/5}$. As we are free to rescale the coordinates $x^\mu$ we
may choose
the constant of proportionality to be unity, without loss of
generality. Thus,
$$
f= e^{{1\over 5}\sigma}\ .
\eqn\bsix
$$
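In more detail: since $\big(e^{-\sigma}\big)' = -\sigma' e^{-\sigma}$,
equation \bfive\ is equivalent to $g^{-1}\sigma' = \mp{5\over4}Me^{\sigma}$,
and comparison with \abfive\ then gives
$$
{f'\over f} = {1\over 5}\sigma'\ ,
$$
so that ${\rm log}\, f -{1\over5}\sigma$ is constant and $f$ is indeed
proportional to $e^{\sigma/5}$.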
We are also free to choose $g(y)$ to be any function that is
non-singular where $f(y)$ is non-singular\foot{i.e. where
neither $f$ nor $f^{-1}$ vanish.}. For example, the
choice $g=f$ leads to a manifestly
conformally flat form of the 8-brane metric. A solution in this form
was given
in [\PW]. We postpone a discussion of this solution until we have the
general
solution, to be given below. The choice of $g$ that we shall make
here is
$g=f^{-1}$. In this case, use of the $A_9$ field equation $M'=0$ in
\bfive\
yields
$$
\partial_y^2\Big( e^{-{4\over5}\sigma}\Big) =0\ .
\eqn\bsixa
$$
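To verify this, set $g=f^{-1}=e^{-\sigma/5}$ in \bfive:
$$
\pm{5\over4}M = e^{{1\over5}\sigma}\Big(e^{-\sigma}\Big)'
= -\sigma' e^{-{4\over5}\sigma}
= {5\over4}\Big(e^{-{4\over5}\sigma}\Big)'\ ,
$$
so that $\big(e^{-{4\over5}\sigma}\big)' = \pm M$; differentiating once
more and using the $A_9$ field equation $M'=0$ yields \bsixa.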
The general solution is given in terms of a harmonic function $H(y)$,
the precise nature of which will be discussed shortly, i.e.
$$
e^{-{4\over5}\sigma}= H(y)\ .
\eqn\bseven
$$
This leads to the 8-brane configuration
$$
\eqalign{
ds^2 &= H^{-{1\over 2}}\; dx^\mu dx^\nu\eta_{\mu\nu}\ + \
H^{1\over2}dy^2\cr
e^{-4\sigma} &= H^5 \cr
M &= \pm H^\prime \ .}
\eqn\bnine
$$
where the prime indicates differentiation with respect to $y$. We
have verified
that this configuration is a solution of the full set of field
equations.
The Killing spinor $\epsilon$ is given by
$$
\epsilon = H^{-{1\over8}}(y)\epsilon_0,\qquad \bar\Gamma_y\epsilon_0
=\pm
\epsilon_0\ , \eqn\beight
$$
where $\epsilon_0$ is a constant spinor.
It remains to determine the harmonic function $H$. Consider first the
massive
IIA theory for which the function $M$ equals the (non-zero) constant
$m$
appearing in the Lagrangian, which we may choose to be positive. In
this case
$$
H=\pm m(y-y_0)
\eqn\gone
$$
for constant $y_0$, where the sign depends on the choice of
`chirality' of
$\epsilon$. However, $H$ must be positive for real $\sigma$, so the
spinor
$\epsilon$ must change chirality at $y=y_0$. This is
possible because
$\epsilon$ blows up at $y=y_0$\foot{Presumably, this
is acceptable because the metric is also singular at
$y=y_0$; in any case, we shall see below that this
feature is not generic for the general 8-brane
solution of the new IIA supergravity theory.}.
Thus, the massive IIA theory has a solution for which
$$
H= m|y-y_0|\ .
\eqn\gtwo
$$
Note that this is a continuous function of $y$ with a kink
singularity at
$y=y_0$, at which the curvature tensor has a delta function
singularity.
In the new IIA theory we may suppose that $M$ is only {\it locally}
constant. The form of the function $H$ in this case depends on the
type of
point singularity that we allow. The above example suggests that we
should
require $H$ to be a continuous function of $y$. There are solutions
for which
$H$ is discontinuous but they have $\delta'$ type singularities of
the
curvature tensor, and we shall not consider them. In any case, the
restriction
to kink singularities produces physically sensible results, as we
shall see.
An example of a solution with a single kink singularity of $H$ is
$$
H=\cases{-ay+b \qquad y<0\cr cy+b\qquad y>0}
\eqn\gthree
$$
where $a$, $b$ and $c$ are non-negative constants. We adopt this as
the
basic single 8-brane solution. It can be interpreted as a domain wall
separating regions with different values of $M$. The regions
$y\rightarrow
\pm\infty$ are
at infinite affine distance. The solution therefore has two
asymptotic regions
relative to which an 8-brane charge, $Q_\pm$, may be defined as the
value of
$M$ as $y\rightarrow \pm \infty$. For the above solution,
$$
Q_+ =c\qquad Q_-= a \ .
\eqn\gfour
$$
The constant $b$ determines the value of $\sigma$, and hence the
value of the
string coupling constant $e^\sigma$ at the 8-brane core. In
particular, if
$b=0$ the string coupling constant goes to infinity at the core. Note
that the
solution \gtwo\ of the massive IIA theory is the special case for
which $a=c$
and $b=0$.
The multi 8-brane generalization of \gthree\ with the same charges is
found by allowing kink singularities of $H$ at $n+1$ ordered points
$y=y_0<y_1<y_2<\dots <y_n$. The function $H$ is
$$
H=\cases{-a(y-y_0) + \sum_{i=1}^n\mu_i (y_i-y_0) + b\qquad y<y_0 \cr
(c-\sum_{i=1}^n\mu_i) |y-y_0| + \sum_{i=1}^n \mu_i |y-y_i| +b \qquad
y>y_0}
\eqn\gfive
$$
where $\mu_i$ are positive constants and $a$, $b$, $c$
are non-negative constants.
The asymptotically left-flat or right-flat solutions are those for
which
$Q_-=0$ or $Q_+=0$, respectively. The asymptotically flat solutions
are those
which are both asymptotically left-flat and right-flat. An example of
an
asymptotically flat three 8-brane solution is given by $H=
\mu^2\big||y-y_0|-|y-y_1|\big| + \gamma^2$, where $\mu$ and $\gamma$
are
arbitrary constants.
If we now introduce a new variable $w(y)$ such that
$$
{ dw\over dy} = H^{{1\over2}}\ ,
\eqn\camone
$$
then the above 8-brane solution becomes
$$
\eqalign{
ds^2 &= Z^{-{1\over3}}(w)\big[ dx\cdot dx + dw^2\big]\cr
e^{-\sigma} &= Z^{5\over 6}(w) \ ,}
\eqn\camtwo
$$
where $Z(w)$ is a harmonic function of $w$, related to $H(y)$ by
$$
Z(w) = H^{3\over2}\big(y(w)\big)\ .
\eqn\camthree
$$
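As a consistency check (using $H''=0$ away from the kink
singularities), the chain rule with \camone\ gives
$$
{dZ\over dw} = {3\over2}H^{1\over2}H'\,{dy\over dw} = {3\over2}H'\ ,
\qquad
{d^2Z\over dw^2} = {3\over2}H''H^{-{1\over2}} = 0\ ,
$$
so $Z$ is harmonic in $w$ wherever $H$ is harmonic in $y$; moreover
$e^{-\sigma}=H^{5\over4}=Z^{5\over6}$ and $H^{-{1\over2}}=Z^{-{1\over3}}$,
reproducing \camtwo.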
The 8-brane solution of [\PW] is of this form. For example, the
single 8-brane of that reference
corresponds to the special choice of $H$ in \gthree\ with
$b=0$. This
can be seen from the fact that the conformal factor of the solution
of [\PW]
blows up at the position of the 8-brane, while this is true of the
above
8-brane solution only if $b=0$.
If the above 8-brane solutions are to be considered as a field theory
realization of the Dirichlet 8-brane of type IIA superstring theory
then one
would expect them to be related by T-duality to both 7-brane and
9-brane
solutions of IIB
supergravity. Consider first the Dirichlet 9-brane; its field theory
realization is just D=10 Minkowski spacetime or, in the context of
the $S^1$
compactified theory required for T-duality considerations, the
product of $S^1$
with D=9
Minkowski spacetime. This spacetime can also be regarded as an
8-brane
solution.
T-duality requires that the same solution result from direct (as
against
double) dimensional reduction of the 8-brane solution found above.
This is
indeed the case because compatibility with the KK reduction requires
us to
choose the harmonic function $H$ to be constant, in which case $M=0$.
The
dimensionally reduced solution is then precisely the product of $S^1$
with D=9
Minkowski spacetime.
Thus, the IIA 8-brane can be regarded as a T-dual of the IIB 9-brane.
The more difficult task of determining the IIB 7-brane solutions to
which the
IIA 8-brane solutions are T-dual is what will occupy us for most of
the
remainder of this paper; it involves the construction of a new
massive D=9
supergravity, to which we now turn our attention.
\chapter{Massive D=9 N=2 supergravity}
The standard dimensional reduction to D=9 of either the massless IIA
supergravity theory or the IIB supergravity theory yields the
massless N=2 D=9
supergravity theory [\BHO] (see also [\LPSS]). Here we shall
construct a
massive N=2 D=9 supergravity theory. We shall do this in two ways.
The first
involves the massive IIA supergravity theory. At first sight it might
seem
that this theory cannot be dimensionally reduced to D=9 because the
product of
D=9 Minkowski space with $S^1$ is not a solution of the field
equations.
However, all we need is a solution with an abelian isometry and the
massive IIA
8-brane is such a solution. This allows us to reduce the massive D=10
IIA
theory to D=9\foot{An alternative, equivalent, procedure would be to
make use
of the new $A_9$ formulation of IIA supergravity to reduce to D=9 in
the
standard
way; the resulting D=9 theory has an 8-form potential which can be
traded for a
cosmological constant.}. We shall then show that exactly the same
theory can be
found by a Scherk-Schwarz dimensional reduction of the IIB
supergravity theory.
We begin by dimensionally reducing the massive IIA supergravity
theory. Since
we ultimately wish to make contact with the IIB theory via T-duality,
it is
convenient to use the conventions of [\BHO], where the massless
T-duality rules
are given. Thus, the first step is to rewrite the results of section
2 in the
notation of [\BHO]. The field content in D=10 is given by
$$
\bigl \{{\hat g}_{\hat\mu\hat\nu}, {\hat C}_{\hat\mu\hat\nu\hat\rho},
{\hat B}^{(1)}_{\hat\mu\hat\nu}, {\hat A}^{(1)}_{\hat\mu}, \hat\phi
\bigr\}
\eqn\ninea
$$
where the fields $\hat C$ and ${\hat A}^{(1)}$ are the R-R sector
fields. We
refer to [\BHO] for details of the notation, but we remark
here that
{\it in this section only the metric signature is `mostly minus'}
and that the hats
indicate D=10 variables; the D=9 variables resulting from the
dimensional
reduction will be without hats. Our starting point is the following
(string-frame) action, obtained by translating \athirteen\
into the conventions of [\BHO]:
$$
\eqalign{
I_{\rm IIA} = &{1\over 2}\int_{{\cal M}_{10}} d^{10} x \sqrt {-\hat
g}
\biggl \{ e^{-2\hat\phi}\bigl [ -\hat R + 4|d\hat\phi|^2 - {3\over 4}
|{\hat H}^{(1)}|^2\bigr ]\cr
&+{1\over 4}|{\hat F}^{(1)}_{m}|^2 + {3\over 4}|{\hat G}_{m}|^2
+ {1\over 2}m^2\biggr \} +{1\over 64} \int_{{\cal M}_{11}}{\hat
G}^2_m{\hat
H}^{(1)}\ .}
\eqn\nineb
$$
Apart from the cosmological term, all $m$ dependent terms occur via
the
field-strength tensors of the R-R fields. As explained in section 2,
the
$m$-dependent terms within these curvature tensors are determined by
the
Stueckelberg type symmetries, which now read:
$$
\eqalign{
\delta{\hat B}^{(1)} &= d{\hat \eta}^{(1)}\cr
\delta {\hat A}^{(1)} &= -{m\over 2}{\hat \eta}^{(1)}\cr
\delta \hat C &=-m{\hat \eta}^{(1)}{\hat B}^{(1)}\ .}
\eqn\ninec
$$
The $m$-dependence of the corresponding R-R curvatures is given by
$$
\eqalign{
{\hat F}^{(1)}_m &= {\hat F}^{(1)}_{m=0} + m{\hat B}^{(1)}\cr
{\hat G}_m &= {\hat G}_{m=0} + {m\over 2}({\hat B}^{(1)})^2\ .}
\eqn\nined
$$
The $m=0$ part of the curvatures in the conventions now being used
may be found
in [\BHO].
The field content of the massive $D=9$ Type II theory is given by
$$
\bigl \{g_{\mu\nu}, C_{\mu\nu\rho},
B^{(i)}_{\mu\nu}, {A}^{(i)}_{\mu}, \phi, k, \ell
\bigr\}\ .
\eqn\ninee
$$
The R-R sector fields are $C, B^{(2)}, A^{(1)}$ and $\ell$.
The action can be obtained by straightforward dimensional reduction
of the ten-dimensional theory and is given by
$$
\eqalign{
I = &{1\over 2}\int_{{\cal M}_9} d^9x \sqrt g
\biggl\{ e^{-2\phi}\bigl [ -R + 4|d\phi|^2 - {3\over 4}|H^{(1)}|^2\cr
& -|d{\rm log} k|^2 + {1\over 4}k^2|F^{(2)}|^2 + {1\over 4}k^{-2}
|F(B)|^2\bigr ] + {1\over 2} m^2 k\cr
&- {1\over 2}k^{-1}|d\ell -mB|^2 +{1\over 4}k|F_m^{(1)}|^2
+{3\over 4}k|G_m|^2 - {3\over 4}k^{-1}|H_m^{(2)}|^2\biggr\}\cr
&-{1\over 64}\int_{{\cal M}_{10}} \Big[ G_m^2F(B) +
4G_mH^{(1)}H^{(2)}_m \Big]\ .}
\eqn\ninef
$$
The $m$-dependent factors in the curvature tensors are determined by
the following $D=9$ Stueckelberg type symmetries (which follow
straightforwardly from the $D=10$ rules)
$$
\eqalign{
\delta B &= d\Lambda\cr
\delta A^{(1)} &= -{m\over 2}\eta^{(1)} -m\Lambda A^{(2)}\cr
\delta B^{(1)} &= d\eta^{(1)} - A^{(2)}d\Lambda\cr
\delta B^{(2)} &= A^{(1)}d\Lambda + m\Lambda B^{(1)} +
{m\over 2}\eta^{(1)}B\cr
\delta C &= -m\eta^{(1)}\bigl ( B^{(1)} + A^{(2)}B\bigr )\cr
\delta \ell &= m\Lambda \ .}
\eqn\nineg
$$
These Stueckelberg symmetries lead to the following (unique)
modified curvatures for the R-R fields:
$$
\eqalign{
F^{(1)}_m &= F^{(1)}_{m=0} + \ell F^{(2)}_{m=0} + m\bigl (B^{(1)}
-A^{(2)}B\bigr )\cr
G_m &= G_{m=0} + {1\over 2}m(B^{(1)})^2 - mB^{(1)}A^{(2)}B\cr
H^{(2)}_m &= H^{(2)}_{m=0} -\ell H^{(1)}_{m=0} -mBB^{(1)}\, .}
\eqn\nineh
$$
The expressions for the $m=0$ curvatures may again be found in
[\BHO].
We now turn to the (massless) $D=10$ Type IIB theory. Its field
content is given by
$$
\bigl \{{\hat j}_{\hat\mu\hat\nu},{\hat {\cal
B}}^{(i)}_{\hat\mu\hat\nu},
\hat\ell, \hat\varphi, {\hat
D}_{\hat\mu\hat\nu\hat\rho\hat\sigma}^{(+)}
\bigr\}\, ,\hskip 1.5truecm i=1,2.
\eqn\ninei
$$
The R-R sector fields are ${\hat {\cal B}}^{(2)},
\hat D^{(+)}$ and $\hat\ell$.
The action is given by\foot{Strictly speaking, there is no
action for the $D=10$ Type IIB theory.
However, when properly used, the given action leads to a
well-defined action in $D=9$. For more details about this point, see
e.g. [\BBO].}
$$
\eqalign{
I_{IIB} = {1\over 2}\int_{{\cal M}_{10}}d^{10}x &{\sqrt {-\hat j}}
\biggl\{e^{-2\hat\varphi}\bigl [ -\hat R + 4|d\hat\varphi|^2
-{3\over 4}|{\hat {\cal H}}^{(1)}|^2\bigr ]\cr
&-{1\over 2}|d\hat\ell|^2 -{3\over 4}|{\hat {\cal H}}^{(2)} -
\hat\ell
{\hat {\cal H}}^{(1)}|^2 - {5\over 6}|{\hat F}(D)|^2\biggr\}\cr
&-{1\over 96}\int_{{\cal M}_{11}}\epsilon^{ij}{\hat F}(D){\hat
{\cal H}}^{(i)}{\hat {\cal H}}^{(j)}\ .}
\eqn\ninej
$$
The question now is whether, after dimensional reduction to $D=9$,
the
massless $D=10$ Type IIB theory can be mapped onto the massive $D=9$
Type II theory found above. The standard reduction is given in [\BHO]
and leads
to the massless theory in nine dimensions. Since one cannot add a
cosmological
constant to the $D=10$ Type IIB theory we have to change something in
the
standard reduction. Our guiding principle will be the $D=9$
Stueckelberg symmetries \nineg. Once we can reproduce these,
the action, with the exception of the cosmological term, follows by
symmetry.
We observe that from the IIA point of view the Stueckelberg
$\Lambda$-transformation is just the $\underline x$-component of the
$D=10$ Stueckelberg symmetry. From the IIB point of view it should
come
from a general coordinate transformation in the ${\underline x}$
direction
since we know that ${\hat \xi}^{\underline x} = \Lambda$ for $m=0$
[\BHO]. In order to reproduce the Stueckelberg
$\Lambda$-transformations we
should therefore introduce an extra $\underline x$ dependence in some
of the
fields of the $D=10$ IIB theory. The only $D=9$ fields that have an
$m$-dependent $\Lambda$-transformation are $\ell$ and $B^{(2)},
A^{(1)}$.
We find that these R-R fields can be given the correct $\Lambda$
transformation provided we introduce the following additional
dependence linear
in $\underline x$:
$$
\eqalign{
\hat \ell &= \ell +m{\underline x}\cr
{\hat {\cal B}}_{\mu\nu}^{(2)} &=
B_{\mu\nu}^{(2)} - B_{[\mu}A_{\nu]}^{(1)}
+m{\underline x}\bigl ( B_{\mu\nu}^{(1)} +
B_{[\mu}A_{\nu]}^{(2)}\bigr )\cr
{\hat {\cal B}}^{(2)}_{{\underline x}\mu} &= - A_\mu^{(1)} +
m{\underline x}A_\mu^{(2)}\ .}
\eqn\ninek
$$
Note that the ${\underline x}$-dependence in $\hat \ell$, which was introduced to
reproduce
the correct $D=9$ Stueckelberg $\Lambda$-transformation, at the same
time
leads, via the kinetic term of $\hat \ell$, to the desired
cosmological
constant in $D=9$! This establishes a relation between the
cosmological
constant
and the Stueckelberg symmetries. Note also that, although the
IIB R-R fields $\hat \ell$ and ${\hat {\cal B}}^{(2)}$ depend on
$\underline x$, all the $\underline x$-dependence drops out in the
$D=10$
IIB action. This can be seen by rewriting the ansatz for ${\hat {\cal
B}}^{(2)}$ in the following equivalent form:
$$
{\hat {\cal B}}^{(2)}_{\hat \mu\hat\nu}
= {\hat {\cal B}}^{(2)}_{\hat\mu\hat\nu, m=0} + m{\underline x}
{\hat {\cal B}}^{(1)}_{\hat\mu\hat\nu, m=0}\ .
\eqn\ninel
$$
Finally, we still have to reproduce the correct $\eta^{(1)}$
Stueckelberg
symmetries. For $m=0$ this symmetry is related to the following
Type IIB gauge symmetry:
$$
\eqalign{
\delta {\hat {\cal B}}^{(i)} &= d{\hat \Sigma}^{(i)}\cr
\delta \hat D &= {3\over 4}d{\hat \Sigma}^{(2)}{\hat {\cal B}}^{(1)}
-{3\over 4}d{\hat \Sigma}^{(1)}{\hat {\cal B}}^{(2)}\ ,}
\eqn\ninem
$$
with ${\hat \Sigma}^{(i)}_\mu = \eta_\mu^{(i)}$. It turns out that the
following
$\underline x$-dependence in ${\hat {\Sigma}}^{(2)}$ reproduces the
correct $\eta^{(1)}$ Stueckelberg symmetry given in \nineg :
$$
\hat \Sigma_\mu^{(2)} = \eta_\mu^{(2)} + m{\underline
x}\eta_\mu^{(1)}\, .
\eqn\ninen
$$
This equation also follows from the requirement that the ansatz for
${\hat {\cal B}}^{(2)}_{\mu\nu}$ be consistent with the $m=0$ rule
$\delta B^{(1)}= d\eta^{(1)}$.
We have therefore recovered by non-trivial dimensional reduction of
IIB
supergravity the massive N=2 D=9 supergravity found earlier from
reduction of
the massive IIA theory. It is of interest to see how this mechanism
is related
to the Scherk-Schwarz (SS) mechanism [\SSM]. The essential ingredient
in their
method was a global $U(1)$ symmetry in the higher dimension.
Let $Q$ be the anti-hermitian generator of this $U(1)$ symmetry and
let
$\partial$ denote differentiation with respect to the KK
coordinate. Then the SS
mechanism can be summarised by the equation $\partial = mQ$. In our
case the
relevant $U(1)$ group acts on $\hat\ell$ (which is periodically
identified) by
a shift, so we should require $\partial \hat\ell =m$.
The solution is
$\hat\ell =\ell + m\underline {x}$, as above. The $U(1)$
transformation of the
field strength three-forms $\hat{\cal H}$ must be such that the
action \ninej\
is invariant, which determines the action of $Q$ on these fields.
Setting
$\partial =mQ$ then yields a dependence of these field strengths on
${\underline x}$ that is consistent with the ${\underline
x}$-dependence
\ninel\ of the two-form potentials.
Thus, the dimensional reduction used above is essentially an
application of the
SS method. However, the implications are rather different in the
present
context. For example, in the reduction of D=4 N=1 supergravity to D=3
using the
global chiral $U(1)$ symmetry [\SSM], the SS mechanism generates
masses for
the fermions but no scalar potential, thereby breaking supersymmetry.
In
contrast, in our case a cosmological constant is also generated and
the full
supersymmetry of the action is preserved by the reduction. The reason
that
supersymmetry is preserved can be traced to the fact that the
fermions can be
redefined in such a way that they are $U(1)$-invariant. The specific
redefinition required for this is given in Appendix A for the
Einstein-frame
fields but since the dilaton is $U(1)$ invariant this result holds
also for the
string-frame fields. Alternatively, one can note that in the example
given in
[\SSM] the chiral $U(1)$ acts on the supersymmetry parameter and this
triggers
the Higgs mechanism.
Having established that both the $D=10$ massive type IIA and massless
type IIB theory map onto the same $D=9$ massive Type II supergravity
theory,
it is straightforward to determine the massive type II $T$ duality
rules. We consider here only the map from massive IIA to massless
IIB.
The $m=0$ rules are given in [\BHO]. We give here only the rules that
receive
an $m$-dependent correction. These are the following:
$$
\eqalign{
\hat \ell &= {\hat A}^{(1)}_{{\underline x}} + m{\underline x}\cr
{\hat {\cal B}}_{\mu\nu}^{(2)} &=
{3\over 2}{\hat C}_{\mu\nu{\underline x}} - 2 {\hat A}^{(1)}_{[\mu}
{\hat B}^{(1)}_{\nu]{\underline x}} +
2{\hat g}_{{\underline x}[\mu}{\hat B}^{(1)}_{\nu]{\underline x}}
{\hat A}_{{\underline x}}^{(1)}/{\hat g}_{\underline {xx}}\cr
&+m{\underline x}\biggl ({\hat B}^{(1)}_{\mu\nu} +
2{\hat g}_{{\underline x}[\mu}{\hat B}^{(1)}_{\nu] {\underline x}}/
{\hat g}_{\underline {xx}}\biggr )\cr
{\hat {\cal B}}^{(2)}_{{\underline x}\mu} &= -{\hat A}_\mu^{(1)}
+ {\hat A}^{(1)}_{\underline x}
{\hat g}_{{\underline x}\mu}/{\hat g}_{\underline {xx}}\cr
& +m{\underline x}{\hat g}_{{\underline x}\mu}/{\hat g}_{\underline
{xx}}\ . }
\eqn\tena
$$
\chapter{The circularly symmetric IIB 7-brane}
The massive $T$--duality rules derived in the previous section are
expected to
relate IIA 8-brane solutions to IIB 7-brane solutions. Compatibility
of the
latter with the KK ansatz implies that the 7-brane solution must have
circular
symmetry in the transverse directions. We therefore begin with a
construction
of the general IIB 7-brane solution of this type that also preserves
half the
supersymmetry.
The most general static circularly symmetric 7-brane metric is
$$
ds^2 = f^2(r) d{\tilde x}\cdot d{\tilde x} + a^2(r) \big (
d\chi + \omega(r) dr \big )^2 + b^2(r)dr^2\, ,
\eqn\iione
$$
where $\{\tilde x\}$ are the coordinates of 8-dimensional Minkowski
spacetime
(the 7-brane worldvolume), the $\chi$--coordinate is along the
$U(1)$ Killing
vector field and $r$ is a radial coordinate. We are free to choose
the function
$b$, and we shall choose it such that $b=a$. Next, we change
coordinate from
$\chi$ to
$$
{\underline x}= \chi + \kappa(r)\, ,
\eqn\iitwo
$$
where the function $\kappa$ is such that
$$
\omega = {d\kappa\over dr}\, .
\eqn\iitwoa
$$
Note that since $\chi$ was an angular coordinate, so also is ${\underline x}$.
We may choose the identification such that
$$
{\underline x}\sim {\underline x} +1\ .
\eqn\iitwob
$$
The metric now reads
$$
ds^2 = f^2(r) d{\tilde x}\cdot d{\tilde x} + a^2(r)\big[ d{\underline
x}^2 +
dr^2\big]\, .
\eqn\iithree
$$
To find supersymmetric solutions, we assume that
$$
\eqalign{
\hat \ell &= \ell(r) + \tilde m {\underline x}\, ,\cr
\hat \varphi &= \hat\varphi (r) }
\eqn\iifour
$$
where $\tilde m$ is piecewise constant, and we set the rest of the
fields equal to zero.
Next, we substitute this ansatz into the (string frame) Killing
spinor equations
$$
\eqalign{
\delta_\epsilon \psi &\equiv D\epsilon + {1\over 8} ie^{\hat\varphi}
\big (\Gamma^M\partial_M\hat\ell\big ) \Gamma\epsilon = 0\, ,\cr
\delta_\epsilon\lambda &\equiv {1\over 4} \bigg
(\Gamma^M\partial_M\hat\varphi +
ie^{\hat\varphi}\Gamma^M\partial_M\hat\ell
\bigg )\epsilon = 0\, ,}
\eqn\iifive
$$
and assume that $\epsilon=\epsilon(r)$ where
$$
\bar\Gamma_{\underline x}\epsilon(r) = \pm i\bar\Gamma_r\epsilon(r)\ .
\eqn\iiifive
$$
We thereby deduce that
$$
\epsilon^\prime \pm {1\over8} e^{\hat\varphi}\tilde m \epsilon =0
\eqn\aiifive
$$
and
$$
\eqalign{
f^\prime \pm {1\over 4} e^{\hat\varphi} f \tilde m &= 0\, ,\cr
\ell^\prime &= 0\, ,\cr
a^{-1}a^\prime \mp{1\over 4} \tilde me^{\hat\varphi} &= 0\, ,\cr
\hat\varphi^\prime \pm e^{\hat\varphi} \tilde m &= 0\, ,}
\eqn\iisix
$$
where the prime indicates differentiation with respect to $r$.
Using the last two equations in \iisix, we have that
$$
\partial^2_r \big ( e^{-\hat\varphi}\big ) = 0\, .
\eqn\iiseven
$$
Thus we can set
$$
e^{-\hat\varphi} = H(r)\, ,
\eqn\iieight
$$
where $H$ is a harmonic function of $r$, of the type described in
section 4. The last of equations \iisix\ now yields
$$
\tilde m =\pm H^\prime\ ,
\eqn\iieighta
$$
while the remainder of equations \iisix\ yields the full
7-brane solution in terms of $H$ and three constants of
integration, which can be removed by rescaling the coordinates
and shifting $\hat\ell$. This solution is
$$
\eqalign{
ds^2 &= H^{-{1\over2}}(r) d{\tilde x}\cdot d{\tilde x} +
H^{1\over2}(r)\big[ d{\underline x}^2 + dr^2\big] \, ,\cr
e^{-\hat\varphi} &= H(r)\cr
\hat\ell &= \pm H'(r) {\underline x}\, .}
\eqn\iiten
$$
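The integration of \iisix\ leading to this solution is immediate:
since $\hat\varphi' = \mp e^{\hat\varphi}\tilde m$, the first and third
equations of \iisix\ read $f'/f = {1\over4}\hat\varphi'$ and
$a'/a = -{1\over4}\hat\varphi'$, so that, after rescaling the
coordinates to absorb the integration constants,
$$
f = e^{{1\over4}\hat\varphi} = H^{-{1\over4}}\ ,\qquad
a = e^{-{1\over4}\hat\varphi} = H^{1\over4}\ ,
$$
while $\ell'=0$ allows $\ell$ to be shifted away, leaving
$\hat\ell = \pm H'{\underline x}$.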
The Killing spinor corresponding to this solution is given by
$$
\epsilon = H^{-{1\over8}} \epsilon_0\, ,\qquad \bar\Gamma_{\underline
x }\epsilon_0 =
\pm i \bar\Gamma_r\epsilon_0\, .
\eqn\iieleven
$$
We suggest that this 7-brane solution of IIB
supergravity is the field theory
realization of the Dirichlet 7-brane of type IIB superstring theory.
As a
check on this interpretation we shall now verify that it is T-dual to
the IIA
6-brane solution of [\HS]. To this end we take $\{\tilde x\} = (v^m,
u)$, where
$v^m$ are coordinates for 7-dimensional Minkowski spacetime (the
6-brane
worldvolume), and also take the ignorable coordinate $u$ to be an
angular
coordinate. We can then apply (massless) T-duality rules of [\BHO],
in the $u$
direction. This leads to the following solution of IIA supergravity:
$$
\eqalign{
ds^2 &= H^{-{1\over2}}(r)\; dv\cdot dv \ + \ H^{1\over2}(r)
\big[du^2 + d{\underline x}^2 + dr^2 \big]\cr
e^{-\hat\phi} &= H^{3\over4}(r)\cr
\hat A^{(1)} &= \pm H^\prime (r){\underline x} du\ . }
\eqn\othertwo
$$
This is precisely the IIA 6-brane solution in the form given in
[\PKT], except
that in the general 6-brane solution the harmonic function $H$
depends on all
three `transverse' variables $(u,{\underline x},r)$. Thus, this is
the form of
the 6-brane compatible with a KK reduction to D=8. This is an
encouraging sign
that the 7-brane will also be T-dual to a IIA 8-brane solution, since
one
expects the 6-brane and 8-brane to be equivalent on reduction to D=8.
In order to show that this is indeed the case we need to establish
the
T-duality of the 7-brane to the 8-brane. We shall now show that the
massive
$T$--duality rules that we have given in section 5 relate the IIA
eight--brane
of section 4
to the IIB seven--brane given in \iiten. Although the general massive
Type II
$T$ duality rules are complicated they become very simple for the
special
solutions considered here. Since
$$
{\hat g}_{{\underline x}\mu} = \hat C = {\hat B}^{(1)} =
{\hat A}^{(1)} = 0\ ,
\eqn\tenb
$$
for our solutions, the massive Type II $T$ duality rules
are
$$
\eqalign{
{\hat j}_{\mu\nu} &= {\hat g}_{\mu\nu}\cr
{\hat j}_{\underline {xx}} &= 1/{\hat g}_{\underline {xx}}\cr
{\hat \ell } &= m{\underline x}\cr
{\hat \varphi} &= {\hat \phi} - {1\over 2}{\rm log}
(-{\hat g}_{\underline {xx}}) \ .}
\eqn\tenc
$$
To show that under the massive $T$ duality rules the IIA eight brane
solution
of section 4 is $T$ dual to the IIB seven brane solution, \iiten,
we first make the change of notation $\{x^\mu\} = (\tilde x,
{\underline x})$
and $y=r$. We then wrap the eight brane in a compactifying direction,
which we
can choose to be ${\underline x}$. The 8-brane solution is then
as follows:
$$
\eqalign{
ds^2 &= H^{-{1\over2}}(r) d{\tilde x}\cdot d{\tilde x} +
H^{-{1\over 2}}(r)d{\underline x}^2 + H^{1\over2}(r) dr^2 ,\cr
e^{-4\sigma} &= H^5(r) \cr
M &= \pm H^\prime(r)\ .}
\eqn\iitwelve
$$
It is now straightforward to show that the T--duality rules,
\tenc, applied to the ${\underline x}$ direction, take
the IIA eight brane solution to the IIB seven brane solution.
The equivalence of eight and seven branes
is a non-trivial check of our massive $T$--duality rules.
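Explicitly (with the mostly-minus signature conventions of section 5,
so that $-{\hat g}_{\underline {xx}} = H^{-{1\over2}}$): the rule
${\hat j}_{\underline {xx}} = 1/{\hat g}_{\underline {xx}}$ converts
$H^{-{1\over2}}d{\underline x}^2$ into $H^{1\over2}d{\underline x}^2$,
the dilaton rule gives
$$
e^{-\hat\varphi} = e^{-\sigma}\, H^{-{1\over4}}
= H^{5\over4}\cdot H^{-{1\over4}} = H\ ,
$$
and ${\hat \ell} = m{\underline x} = \pm H'{\underline x}$, which are
precisely the metric, dilaton and axion of the seven brane \iiten.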
\chapter{Superstrings and Supermembranes}
The metric and dilaton ($\sigma$) of the RR p-brane solutions of D=10
IIA or
IIB supergravity were shown in [\PKT], for $p\le 6$, to be
expressible in the
form
$$
\eqalign{
ds^2 &= H^{-{1\over2}} ds^2_{p+1} + H^{1\over2} d{\bf y}\cdot d{\bf
y}\cr
e^{4\sigma} &= H^{3-p} \ ,}
\eqn\sevenone
$$
where $ds^2_{p+1}$ is the Minkowski $(p+1)$-metric,
$d{\bf y}\cdot d{\bf y}$ is the Euclidean metric on the
`transverse'
space $R^{9-p}$, and $H$ is a harmonic function on this space,
apart from point
singularities. Here we have found new IIB 7-brane solutions that are
also of
this form and we have shown that they are related by T-duality both
to the IIA
KK 6-brane and to the IIA 8-brane, of which we have also given the
general
solution preserving half the supersymmetry. This 8-brane solution
can also be put in the above form. One advantage of
this form of the solutions is that the T-duality between
the RR p-branes and the RR (p+1)-branes, after
compactification on $S^1$, is an almost immediate consequence of the
T-duality
rules of [\BHO], at least for $p\le 6$. The relationship between the
7-brane
and the 8-brane solutions is more subtle, as we have seen, because it
involves
the comparison in D=9 via a previously unknown {\it massive} N=2 D=9
supergravity theory.
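As a quick check that the 7-brane and 8-brane do fit the form
\sevenone:
$$
p=7:\quad e^{4\sigma}=H^{-4}\ \Leftrightarrow\ e^{-\hat\varphi}=H\ ,
\qquad
p=8:\quad e^{4\sigma}=H^{-5}\ \Leftrightarrow\ e^{-4\sigma}=H^{5}\ ,
$$
in agreement with \iiten\ and \bnine, with $d{\bf y}\cdot d{\bf y}$
the flat metric on the (respectively two- and one-dimensional)
transverse space.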
So far, the context of our discussion has been that of supergravity
rather than superstring theory. A new feature of the IIB superstring
theory is its conjectured $Sl(2;Z)$ U-duality which requires,
in particular, that the pseudoscalar $\hat\ell$ be periodically
identified, i.e. that it take values in $S^1$.
Without loss of generality
we can suppose that the identification is such that
$$
\hat\ell \sim \hat\ell +1
\eqn\iifoura
$$
Returning now to the ansatz \iifour, we note that since ${\underline x}\sim {\underline x}+1$, the consistency of this ansatz requires $\tilde m$ to be
an integer. Of course, since $\tilde m$ is not dimensionless, this result
holds only for a particular choice of units. Such a choice is implicit in
the choice of periodicity of ${\underline x}$. If the period is chosen to be $R_B$, which can be interpreted as the radius of the compact dimension,
one finds that the unit of quantization of $\tilde m$ is $1/R_B$.
That is\foot{Note added in proof: we have implicitly assumed that
the IIB string coupling constant $g_B$ is unity. As recently shown
[\NEW] the right hand side should be replaced by $n/g_BR_B$
when $g_B\ne1$; the T-duality transformation between $g_B$ and the IIA
string coupling constant $g_A$ then leads to a quantization condition
of the form $m\sim n/(g_A\sqrt{\alpha'})$ in which the IIA mass parameter
is expressed entirely in IIA terms.},
$$
\tilde m = {n\over R_B}
\eqn\iifourb
$$
for integer $n$. Recall now that the equivalence of the 7-brane
with the 8-brane under T-duality requires that $m=\tilde m$. This means, assuming IIB U-duality, that the IIA 8-brane solution can be mapped to
a IIB 7-brane solution by T-duality only if the cosmological constant
$m$ of the massive IIA theory is quantized as above, i.e. each time
one passes through a IIA 8-brane the cosmological constant must jump
by an integer multiple of the basic unit $1/R_B$.
The single 8-brane solution should be related
to the
Dirichlet 8-brane of [\pol]. This is a string background in which
open string
states arise with fixed (Dirichlet) boundary conditions that are
imposed in one
space-like dimension at one or both ends of the string. These
conditions
restrict at least one of the end-points of open strings to lie in the
nine-dimensional worldvolume of an 8-brane. The 8-brane couples to a
9-form
gauge field with a ten-form field strength $F_{10}$. If the new IIA
supergravity constructed here is indeed the effective field theory of
the IIA
superstring in the presence of this 10-form field strength then it
should be
possible to recover the Lagrangian \nseven\ by string theory
considerations.
Neglecting terms of order $B^2$, which in any case follow from gauge
invariance, the only term in \nseven\ that is linear in $F_{10}$ is
proportional to
$$
(\varepsilon F_{10}) dA\cdot B\ .
\eqn\kone
$$
This is the crucial term that has to be reproduced in string theory.
There is a vertex operator in the RR sector of the type IIA theory
that
couples a ten-form field strength to the worldsheet. This vertex
operator has
the form $F_{10}{\bar S}S$, where $S$ is the spacetime spinor
worldsheet field
of the spacetime supersymmetric worldsheet action. There are
non-trivial tree
diagrams that mix $F_{10}$ with fields from the RR and NSNS sectors,
producing
a term of the form \kone, as required. The requirements of gauge
invariance
suggest that a more systematic consideration of string theory in the
presence
of D-branes would produce the full effective Lagrangian \nseven.
Since all the $p$-brane solutions of D=10 IIA supergravity for $p<8$
can be
viewed as arising from some 11-dimensional supermembrane theory,
or `M-theory',
[\BST,\PKT,\EW,\JHS,\HW] it would be surprising if the 8-brane did
not also
have an 11-dimensional interpretation. The obvious possibility is
that the D=10
8-brane is the double-dimensional reduction of a D=11 supersymmetric
9-brane.
Such an object would be expected (see [\PKTb]) to carry a 9-form
`charge' appearing in the D=11 supertranslation algebra as a central
charge.
This is possible because the 2-form charge normally associated with
the D=11
supermembrane is algebraically equivalent to a 9-form. It is not easy
to see
how to implement this idea, however, since there is no `massive' D=11
supergravity theory. One possibility is suggested by the recent
interpretation
[\HW] of the heterotic string as an $S^1/Z_2$ compactified M-theory.
Since the
compactification breaks half the supersymmetry and the compactifying
space is
actually the closed interval, the two D=10 spacetime boundaries
might be viewed as the worldvolumes of two D=11 9-branes.
Less ambitiously, one could try to relate the massive IIA
supergravity theory
to D=11 supergravity via some lower dimension, in the same way that
the IIB
theory is related to it via reduction to D=9. In fact, this can be
done by
compactification to D=8. To see this we first observe that there is
clearly a
new massive N=2 D=8 supergravity theory obtainable {\it either} from
the
massive N=2 D=9 theory (by the same procedure used to obtain the
latter from
the massive IIA theory in D=10) {\it or} from the massless N=2 D=9
theory by
Scherk-Schwarz dimensional reduction (using the global $U(1)$
symmetry
inherited from the D=10 IIB theory). Thus, solutions of this massive
N=2 D=8
supergravity theory should be liftable to D=9 as solutions of {\it
either} the
massless {\it or} the massive N=2 D=9 supergravity theory. However,
solutions
of the latter are also solutions of the massive D=10 IIA theory while
solutions
of the former are also solutions of D=11 supergravity.
In light of this we may now ask to what solution of D=11 supergravity
does the
8-brane solution of the massive IIA theory correspond? According to
the above
procedure we should first double dimensionally reduce the 8-brane to
D=8, where
it can be interpreted as a 6-brane. This 6-brane solution can of
course be
lifted to D=9 as a 7-brane solution of the massive N=2 D=9
supergravity theory,
but we expect that it can also be lifted to D=9 as a 6-brane solution
of the
{\it massless} N=2 D=9 theory which can then be lifted to D=10 as the
6-brane
solution of the massless IIA theory\foot{It can also be considered as
a 7-brane
solution of the $S^1$-compactified IIB theory.}. As shown in [\PKT],
this
6-brane solution is a non-singular solution of D=11 supergravity
analogous to
the D=5 KK monopole. Thus the M-theory interpretation of the massive
IIA
8-brane would appear to be as the KK 6-brane, at least on
compactification from
D=11 to D=8.
\vskip 1cm
\centerline{\bf Acknowledgements}
\vskip 0.5cm
We would like to thank A.~Ach\'ucarro, J.H. Schwarz and K.S. Stelle for
discussions.
G.P. would like to thank G. 't Hooft and B. de Wit for an invitation
to visit
the university of Utrecht, and the particle physics group of the
university of
Groningen for their hospitality.
E.B. would like to thank DAMTP for its hospitality.
G.P. is supported by a University Research Fellowship from the
Royal Society. The work of E.B. has been made
possible by a fellowship of the Royal Netherlands Academy of Arts and
Sciences (KNAW).
\vskip 1cm
\centerline{APPENDIX: {\bf An $SL(2,R)$--formulation of $D=10$
Type IIB Supergravity}}
\vskip 0.5cm
In section 5 we used the supersymmetry transformations
of type IIB supergravity in ten dimensions in a rather simple
form. In this appendix we present the relation of our formulation
to that given in [\Schwarz]. Since we make no reference to D=9
fields in this
section we shall drop the hats on the D=10 fields.
The scalars of IIB supergravity parametrize the
$SU(1,1)/U(1)$ coset. They form an $SU(1,1)$ doublet,
and under the local $U(1)$ they transform,
with weight $-1$, as follows:
$$
\phi'_\alpha = \exp(-i\Lambda)\phi_\alpha\, .
\eqn\aaone
$$
Here $\alpha=1,2$ and $\phi_\alpha$ satisfies
$|\phi_1|^2-|\phi_2|^2=1$.
The complex fermions $\psi_\mu$ and $\lambda$ have $U(1)$ weights
$-1/2$ and
$-3/2$ respectively.
The first step, also worked out in [\Schwarz],
is to fix a U(1) gauge by choosing $\phi_1$ real:
$$
\eqalign{
\phi_1 = (\phi_1)^* &= {1\over \sqrt{1-\Phi^*\Phi}}\,,\cr
\phi_2 &= {\Phi\over \sqrt{1-\Phi^*\Phi}}\, .
}
\eqn\aatwo
$$
This gauge choice is not invariant under $SU(1,1)$ or supersymmetry,
so these symmetries must be redefined with compensating $U(1)$
transformations.
The complex field $\Phi$ can be written as
$$
\Phi(x) \equiv {1+ i\tau(x)\over 1-i\tau(x)},\qquad
\tau(x) \equiv \ell(x)+ie^{-\varphi(x)}\,.
\eqn\aathree
$$
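As a quick consistency check, using only the definitions in \aathree, the map $\tau\mapsto\Phi$ sends the upper half-plane ${\rm Im}\,\tau=e^{-\varphi}>0$ into the unit disk, so that the square roots in \aatwo\ are real:

```latex
|1+i\tau|^2=(1-e^{-\varphi})^2+\ell^2\,,\qquad
|1-i\tau|^2=(1+e^{-\varphi})^2+\ell^2\,,
```

so that $|1-i\tau|^2-|1+i\tau|^2=4e^{-\varphi}>0$ and hence $\Phi^*\Phi=|1+i\tau|^2/|1-i\tau|^2<1$.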
The $SL(2,R)$ transformations of $\tau$ are now given by:
$$
\tau' = {c + d\tau\over a + b\tau}\,,\qquad ad-bc=1\,.
\eqn\aafour
$$
In [\BHO,\BJO] an action for the bosonic part of IIB supergravity
was given in terms of the real scalars $\ell$ and $\varphi$, where
$\varphi$ was identified as the dilaton.
Even though this bosonic action is simple, the supersymmetry
transformations are still quite complicated if no further redefinitions
are made.
In particular, the variation of the gravitino still contains
the composite U(1)-gauge field $Q_\mu$, which is a complicated function
of $\tau$. Also, the transformation rule of $\lambda$ is
nonlinear.
The following redefinitions of the Type IIB fermions
simplify matters considerably:
$$
\eqalign{
\tilde \psi_\mu &= \exp{(-{\textstyle {1 \over 2}} i\theta(x))}\,\psi_\mu\,,\cr
\tilde \epsilon &= \exp{(-{\textstyle {1 \over 2}} i\theta(x))}\,\epsilon\,,\cr
\tilde\lambda &= \exp{(-\tfrac{3}{2}i\theta(x))}\,\lambda\, , }
\eqn\aafive
$$
where the function $\theta(x)$ is defined by
$$
\exp{(-2i\theta(x))} = {1-i\tau\over 1+i\tau^*}\,.
\eqn\aasix
$$
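One may check directly from \aathree\ that the right hand side of \aasix\ is a pure phase, so that $\theta(x)$ is real:

```latex
\bigg|{1-i\tau\over 1+i\tau^*}\bigg|^2
={(1+e^{-\varphi})^2+\ell^2\over (1+e^{-\varphi})^2+\ell^2}=1\,.
```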
The $SL(2,R)$ transformations of $\tilde \psi, \tilde\epsilon,
\tilde\lambda$ are as follows:
$$
\eqalign{
\tilde \lambda^\prime &= \bigg ( {a+b\tau^*\over a+b\tau}
\bigg )^{\tfrac{3}{4}}\tilde\lambda\, ,\cr
\tilde \epsilon^\prime &= \bigg ( {a+b\tau^*\over a+b\tau}
\bigg )^{\tfrac{1}{4}}\tilde\epsilon\, ,\cr
\tilde \psi^\prime &= \bigg ( {a+b\tau^*\over a+b\tau}
\bigg )^{\tfrac{1}{4}}\tilde\psi\, .}
\eqn\aasixa
$$
Note that the redefined fermions are invariant under the abelian
subgroup of
$SL(2,R)$ defined by setting $a=d=1$, $b=0$. This subgroup, which
acts on
$\tau$ as $\tau'= \tau +c$, is the group used for SS dimensional
reduction in
section 5.
The redefinition \aafive\ leads to the following simplified, Einstein
frame,
supersymmetry transformations (now omitting the tildes):
$$
\eqalign{
\delta e_\mu^a &= {\textstyle {1 \over 2}} \,\bar\epsilon\,
\Gamma^a \psi_\mu + {\rm h.c.}\,,\cr
\delta\psi_\mu &= {\cal D}_\mu\epsilon
+ \tfrac{i}{4}\epsilon \,e^\varphi\partial_\mu \ell
-\tfrac{i}{192}\Gamma^{(5)}\Gamma_\mu\epsilon \,F_{(5)}\cr
& -\tfrac{1}{16}\left(\Gamma_\mu\Gamma^{(3)}+2\Gamma^{(3)}
\Gamma_\mu\right)
\epsilon^* e^{\varphi/2}\left( H^{(1)}-\ell H^{(2)}
-ie^{-\varphi}H^{(2)}\right)_{(3)}\,,\cr
\delta A_{\mu\nu\lambda\rho} &=
i\,\bar\epsilon\, \Gamma_{[\mu\nu\lambda}\psi_{\rho]} + {\rm h.c.}
-6 \epsilon_{ij}B^{(i)}_{[\mu\nu}\delta B^{(j)}_{\lambda\rho]}\,,\cr
\delta B^{(1)}_{\mu\nu} &=
{\textstyle {1 \over 2}}\left(e^{-\varphi/2} + i\ell e^{\varphi/2}\right)
\left(\bar\epsilon^*\Gamma_{[\mu}\psi_{\nu]}
-{\textstyle {1 \over 2}} \,\bar\epsilon\, \Gamma_{\mu\nu}\lambda\right)
+ {\rm h.c.}\,,\cr
\delta B^{(2)}_{\mu\nu} &=
\tfrac{i}{2} \,e^{\varphi/2}
\left(\bar\epsilon^*\Gamma_{[\mu}\psi_{\nu]}
-{\textstyle {1 \over 2}} \,\bar\epsilon\, \Gamma_{\mu\nu}\lambda\right)
+ {\rm h.c.} \,,\cr
\delta\lambda &=
\tfrac{1}{4}\Gamma^\mu\epsilon^*
\left( \partial_\mu\varphi +ie^\varphi\partial_\mu \ell
\right)\cr
&\qquad +\tfrac{1}{8}\Gamma^{(3)}\epsilon \,e^{\varphi/2}
\left(H^{(1)} - \ell H^{(2)} -ie^{-\varphi}
H^{(2)}\right)_{(3)}\,,\cr
\delta \ell &= ie^{-\varphi}\,\bar\epsilon\,\lambda^* + {\rm h.c.}\,,\cr
\delta\varphi &= \,\bar\epsilon\,\lambda^* + {\rm h.c.} \,.
}
\eqn\aaseven
$$
The covariant derivative ${\cal D}\epsilon$ in the variation of the
gravitino
contains only the gauge field of local Lorentz transformations but
no
composite $U(1)$ gauge field $Q_\mu$. Due to the redefinitions
of the fermions
the only remnant of $Q_\mu$ is a single
$e^\varphi\partial \ell$ term. The three-forms $H^{(i)},\ i=1,2$ are
the field
strengths of
the two-form gauge fields $B^{(i)}$.
The field strength $F_{(5)}$ satisfies a self-duality
condition, and is given by
$$
F_{\mu\nu\lambda\rho\sigma} \equiv
\partial\,{}_{[\mu} A_{\nu\lambda\rho\sigma]}
-6\epsilon_{ij}B^{(i)}_{[\mu\nu}H^{(j)}_{\lambda\rho\sigma]}\,.
\eqn\aaeight
$$
The above transformation rules should still be completed with
terms bilinear in the fermion fields. In
principle these can be constructed from the results of [\Schwarz],
using the redefinitions given above.
\refout
\end
\section{Introduction}
Electron transport in low-dimensional systems has drawn much attention
in the field of theoretical as well as experimental research due to
flourishing development in nanotechnology and nano-scale device modeling.
Low-dimensional model quantum systems are the basic building blocks for
future generation of nano-electronic devices. Several exotic features
are observed in this length scale owing to the effect of quantum
interference. This effect is generally observed in samples whose size is
much smaller than or comparable to the phase coherence length $L_{\phi}$,
while it disappears in larger systems. A normal-metal mesoscopic
ring is a very good example for studying the effect of quantum interference.
The current trend of fabricating nano-scale devices has generated much
interest in the characterization of ring-type nanostructures. There are
several methods for the preparation of mesoscopic rings. For instance, gold
rings can be designed using templates of suitable structure in combination
with metal deposition through ion beam etching~\cite{hobb,pearson}. In
a recent experiment, Yan {\it et al.} have proposed how gold rings can
be prepared by selective wetting of porous templates using polymer
membranes~\cite{yan}. With such rings one can fabricate nano-scale
electronic circuits that can be used for current
modulation. To explore this phenomenon the ring is coupled to two
electrodes, to form an electrode-ring-electrode bridge, where the ring
is penetrated by a time varying magnetic flux $\phi$. Electron transport
through a molecular bridge system was first studied theoretically by
Aviram and Ratner~\cite{aviram} in the $1970$s. Following this pioneering
work, several experiments have been done using different bridge systems
to reveal the actual mechanism of electron transport. Although, to date,
many theoretical~\cite{mag,lau,baer1,baer2,baer3,tagami,gold,cui,orella1,
orella2,fowler,peeters} as well as experimental~\cite{reed1,reed2,
tali,fish} works on two-terminal electron transport have addressed
several important issues, a complete understanding of the conduction
mechanism in nano-scale systems is still lacking. Transport
properties are characterized by several significant factors such as the
quantization of energy levels, quantum interference effects, and the
ring-to-electrode interface geometry. Furthermore, electron transport
in the ring can also be modulated in another way, by tuning the magnetic
flux, the so-called Aharonov-Bohm (AB) flux, threading the ring.
The aim of the present paper is to illustrate the possibilities of
current modulation at the nano-scale level using simple mesoscopic rings.
To achieve current modulation we design an electronic circuit using
a single mesoscopic ring or a cluster of such rings, where each ring
is penetrated by a time varying magnetic flux $\phi$ which plays the
central role in the modulation action. For a constant DC voltage,
current through the circuit shows oscillatory behavior as a function
of time $t$ depending on the phase of the magnetic flux $\phi$ passing
through the ring. Therefore, current modulation can be achieved simply
by tuning the phase of the magnetic flux $\phi$ threading the ring. Within
a tight-binding framework, a simple parametric approach~\cite{muj1,san3,
muj2,san1,sam,san2,hjo,walc1,walc2} is given, and all the calculations are
done using the single-particle Green's function technique to reveal the
electron transport. Here we present numerical results for the two-terminal
conductance and current which clearly describe the essential features
of current modulation. Our exact analysis may be helpful for designing
mesoscopic or nano-scale electronic devices. To the best of our knowledge
the modulation action using such simple mesoscopic rings has not been
addressed earlier in the literature.
The scheme of the present paper is as follows. Following this brief
introduction (Section I), in Section II we describe the model and the
theoretical formulation used in our calculations. Section III presents
the significant results, and finally, in Section IV we draw our
conclusions.
\section{Model and synopsis of the theoretical formulation}
In the following two sub-sections we focus, for illustrative purposes,
on two different circuit configurations that are used for
current modulation. Here we try to illustrate how a single mesoscopic
ring or two such rings, each penetrated by a time varying
magnetic flux $\phi$, can support an oscillating output current under
a DC bias voltage. A single mesoscopic ring can provide an oscillating
current with a particular frequency, while in the case of two rings,
oscillating currents with other frequencies can be obtained. These
ideas may be generalized further to produce oscillating currents with
still other frequencies by considering a larger number of rings.
\subsection{Circuit configuration I}
Let us start by referring to Fig.~\ref{circuit1}. A mesoscopic ring,
penetrated by a time varying magnetic flux $\phi$, is attached
symmetrically to two semi-infinite one-dimensional ($1$D) metallic
electrodes, namely, source and drain. These two electrodes are
directly coupled to the positive and negative terminals of a battery,
a source of constant voltage.
\begin{figure}[ht]
{\centering \resizebox*{7cm}{3.4cm}{\includegraphics{circuit1.eps}}\par}
\caption{(Color online). Actual scheme of connection with the battery
where a mesoscopic ring, subject to a time varying magnetic flux $\phi$,
is attached symmetrically to source and drain. The blue arrow indicates
current direction in the circuit.}
\label{circuit1}
\end{figure}
The time varying magnetic flux passing through the ring can be expressed
mathematically in the form,
\begin{equation}
\phi(t)=\frac{\phi_0}{2} \sin(\omega t)
\label{in1}
\end{equation}
where, $\phi_0=ch/e$ is the elementary flux-quantum, $\omega$ corresponds
to the angular frequency and $t$ represents the time. This electronic
circuit provides an oscillating output current even though a constant
DC input signal is applied, as we will describe in a forthcoming
section. The frequency of the current is identical to that of the applied
flux $\phi(t)$.
\subsection{Circuit configuration II}
In Fig.~\ref{circuit2} two such mesoscopic rings, directly coupled
to each other and penetrated by time varying magnetic fluxes
\begin{figure}[ht]
{\centering \resizebox*{7.5cm}{3.4cm}{\includegraphics{circuit2.eps}}\par}
\caption{(Color online). Actual scheme of connection with the battery
where two mesoscopic rings, subject to time varying magnetic fluxes
$\phi_1$ and $\phi_2$ are attached symmetrically to source and drain.
The blue arrow indicates current direction in the circuit.}
\label{circuit2}
\end{figure}
$\phi_1$ and $\phi_2$, are attached symmetrically to the electrodes, viz.,
source and drain. A DC voltage source is connected to these two electrodes.
The time varying magnetic fluxes are expressed mathematically as,
\begin{eqnarray}
\phi_1(t) & = & \frac{\phi_0}{2} \sin(\omega t) \label{in2} \\
\phi_2(t) & = & \frac{\phi_0}{2} \sin(\omega t + \delta)
\label{in3}
\end{eqnarray}
where $\delta$ refers to a constant phase difference between the two fluxes
$\phi_1$ and $\phi_2$. With this circuit configuration an oscillating
output current can also be achieved, but in this case the frequency
of the current is modified depending on the phase shift $\delta$.
\subsection{Theoretical formulation}
In this sub-section we describe the basic theoretical formulation
for the calculation of the conductance and current through a single
mesoscopic ring, penetrated by a magnetic flux $\phi$ and attached to
source and drain electrodes. The same theory is also used to study
electron transport in an array of mesoscopic rings.
Using the Landauer conductance formula~\cite{datta,marc} we determine
the two-terminal conductance ($g$) of the mesoscopic ring. At
sufficiently low temperature and bias voltage it can be written in the
form,
\begin{equation}
g=\frac{2e^2}{h} T
\label{equ1}
\end{equation}
where, $T$ corresponds to the transmission probability of an electron
across the ring. In terms of the Green's function of the ring and
its coupling to two electrodes, the transmission probability can be
expressed as~\cite{datta,marc},
\begin{equation}
T={\mbox{Tr}} \left[\Gamma_S G_{R}^r \Gamma_D G_{R}^a\right]
\label{equ2}
\end{equation}
where, $\Gamma_S$ and $\Gamma_D$ describe the coupling of the ring
to the source and drain, respectively. Here, $G_R^r$ and $G_R^a$
are the retarded and advanced Green's functions, respectively, of the
ring considering the effects of the electrodes. Now, for the full system
i.e., the mesoscopic ring, source and drain, the Green's function is
expressed as,
\begin{equation}
G=\left(E-H\right)^{-1}
\label{equ3}
\end{equation}
where, $E$ is the energy of the source electron. Evaluation of this
Green's function requires the inversion of an infinite matrix, which is
a difficult task, since the full system consists of the finite-size
ring and two semi-infinite $1$D electrodes. However, the full
system can be partitioned into sub-matrices corresponding to the
individual sub-systems and the effective Green's function for the
ring can be written in the form~\cite{marc,datta},
\begin{equation}
G_R=\left(E-H_R-\Sigma_S-\Sigma_D \right)^{-1}
\label{equ4}
\end{equation}
where, $H_R$ describes the Hamiltonian of the ring. Within the
non-interacting picture, the tight-binding Hamiltonian of the ring
can be expressed as,
\begin{equation}
H_R = \sum_i \epsilon_i c_i^{\dagger} c_i + \sum_{<ij>} v
\left(c_i^{\dagger} c_j e^{i\theta}+ c_j^{\dagger} c_i e^{-i\theta}\right)
\label{equ5}
\end{equation}
where, $\epsilon_i$ and $v$ correspond to the site energy and
nearest-neighbor hopping strength, respectively. $c_i^{\dagger}$ ($c_i$)
is the creation (annihilation) operator of an electron at the site $i$
and $\theta=2 \pi \phi/N \phi_0$ is the phase factor due to the flux
$\phi$ enclosed by the ring, which consists of $N$ atomic sites. A similar
tight-binding Hamiltonian, but without the phase factor
$\theta$, is also used to describe the electrodes, where the Hamiltonian
is parametrized by a constant on-site potential $\epsilon^{\prime}$ and
a nearest-neighbor hopping integral $t^{\prime}$. The hopping integral
between the ring and
source is $\tau_S$, while it is $\tau_D$ between the ring and drain.
In Eq.~(\ref{equ4}), $\Sigma_S$ and $\Sigma_D$ are the self-energies
due to the coupling of the ring to the source and drain, respectively;
all the information about the coupling is included in these
self-energies.
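To make the above construction concrete, the following sketch evaluates the transmission of Eq.~(\ref{equ2}) numerically for a flux-threaded ring. It is a simplified stand-in for the setup of this paper: the semi-infinite $1$D leads are replaced by wide-band self-energies $\Sigma_{S(D)}=-i\gamma/2$ attached at two diametrically opposite sites, and the value $\gamma=1$ is an arbitrary choice, so only the qualitative flux dependence should be compared.

```python
import numpy as np

def transmission(E=2.5, N=20, v=3.0, phi=0.0, gamma=1.0):
    """Transmission of a flux-threaded tight-binding ring via the trace
    formula T = Tr[Gamma_S G^r Gamma_D G^a].

    phi is the flux in units of phi_0.  The semi-infinite leads of the
    paper are replaced here by wide-band self-energies -i*gamma/2 attached
    at two diametrically opposite sites (an assumed simplification).
    """
    theta = 2.0 * np.pi * phi / N              # Peierls phase per bond
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):                         # ring with N sites, eps_i = 0
        j = (i + 1) % N
        H[i, j] = v * np.exp(1j * theta)
        H[j, i] = v * np.exp(-1j * theta)
    m = N // 2                                 # drain site, opposite the source
    Sigma_S = np.zeros((N, N), dtype=complex)
    Sigma_D = np.zeros((N, N), dtype=complex)
    Sigma_S[0, 0] = -0.5j * gamma
    Sigma_D[m, m] = -0.5j * gamma
    # Effective retarded Green's function with lead self-energies included.
    Gr = np.linalg.inv(E * np.eye(N) - H - Sigma_S - Sigma_D)
    Gamma_S = 1j * (Sigma_S - Sigma_S.conj().T)
    Gamma_D = 1j * (Sigma_D - Sigma_D.conj().T)
    return float(np.real(np.trace(Gamma_S @ Gr @ Gamma_D @ Gr.conj().T)))
```

For a symmetrically coupled ring this transmission vanishes identically at $\phi=\phi_0/2$, in agreement with the interference argument given below, and it is periodic in $\phi$ with period $\phi_0$.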
To determine the current passing through the mesoscopic ring,
we use the expression~\cite{marc,datta},
\begin{equation}
I(V)=\frac{2 e}{h}\int \limits_{-\infty}^{\infty}
\left(f_S-f_D\right) T(E)~ dE
\label{equ6}
\end{equation}
where, $f_{S(D)}=f\left(E-\mu_{S(D)}\right)$ gives the Fermi distribution
function with the electrochemical potential $\mu_{S(D)}=E_F\pm eV/2$, and
$E_F$ is the equilibrium Fermi energy. For the sake of simplicity,
we set $c=e=h=1$ in our present calculations.
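At zero temperature the difference $f_S-f_D$ in Eq.~(\ref{equ6}) reduces to a window of width $eV$ centered at $E_F$, so the current is simply twice the integral of $T(E)$ over that window (in units $e=h=1$). The sketch below illustrates this quadrature with a Breit-Wigner resonance standing in for the ring's transmission function; the resonance parameters are arbitrary choices, not values taken from this paper.

```python
import numpy as np

def current(V, EF=0.0, eps0=0.5, Gamma=0.4, npts=2001):
    """Zero-temperature evaluation of the Landauer current integral
    in units e = h = 1.

    At T = 0 the factor f_S - f_D is 1 inside the bias window
    [E_F - V/2, E_F + V/2] and 0 outside, so I(V) = 2 * integral of T(E)
    over that window.  A Breit-Wigner resonance (center eps0, width Gamma)
    stands in for the ring's actual transmission function.
    """
    E = np.linspace(EF - V / 2.0, EF + V / 2.0, npts)
    T = Gamma**2 / ((E - eps0) ** 2 + Gamma**2)   # stand-in transmission
    # Composite trapezoidal rule for the energy integral.
    return 2.0 * np.sum(0.5 * (T[1:] + T[:-1]) * np.diff(E))
```

For this stand-in the integral has the closed form $2\Gamma[\arctan((E_F+V/2-\epsilon_0)/\Gamma)-\arctan((E_F-V/2-\epsilon_0)/\Gamma)]$, against which the quadrature can be checked.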
\section{Numerical results and discussion}
To illustrate the numerical results, we begin our discussion by
mentioning the values of different parameters used for our
calculations. In the mesoscopic ring, the on-site energy $\epsilon_i$
is fixed to $0$ for all the atomic sites $i$, and the nearest-neighbor
hopping strength $v$ is set to $3$. For the side-attached
electrodes, the on-site energy ($\epsilon^{\prime}$) and the
nearest-neighbor hopping strength ($t^{\prime}$) are chosen as
$0$ and $4$, respectively. The hopping strengths $\tau_S$ and
$\tau_D$ are set as $\tau_S=\tau_D=2.5$. The equilibrium Fermi
energy $E_F$ is fixed at $0$.
\subsection{Responses in circuit configuration I}
The modulation action for circuit configuration I is clearly
illustrated in Fig.~\ref{current1}, where we compute all the results
considering a ring with $N=20$. The upper panel presents the time
variation of magnetic flux with amplitude $\phi_0/2$ whose mathematical
form is given in Eq.~(\ref{in1}). The variation of conductance $g$ as
a function of time $t$ is illustrated in the middle panel. Here we
determine the typical conductance for a particular energy $E=2.5$. It
shows that the conductance oscillates periodically as a function of
$\omega t$, exhibiting $\pi$ periodicity and reaching the amplitude
$g_{max}=2$. This reveals that the transmission probability $T$ becomes
unity, since we get the relation $g=2T$ from the Landauer conductance
formula in our chosen units $c=e=h=1$.
\begin{figure}[ht]
{\centering \resizebox*{7.8cm}{5cm}{\includegraphics{current1.eps}}\par}
\caption{(Color online). Responses in circuit configuration I. Upper,
middle and lower panels describe the time dependences of flux $\phi$,
conductance $g$ and current $I$ as a function of time $t$. Conductance
is calculated at the energy $E=2.5$ and current is determined at the
typical bias voltage $V=2.5$. The ring size is fixed at $N=20$. The
amplitudes are: $\phi_{max}=0.5$, $g_{max}=2$ and $I_{max}=2.54$.}
\label{current1}
\end{figure}
Now we try to justify the oscillating behavior of the conductance with
time $t$. The probability of transmitting an electron from the source to
the drain across the ring depends on the quantum interference of
the electronic waves passing through the upper and lower arms of the
ring. For a symmetrically connected ring (upper and lower arms are
identical to each other), penetrated by a magnetic flux $\phi$, the
transmission probability of an electron across the ring becomes
exactly zero ($T=0$) for the typical flux $\phi=\phi_0/2$. This
vanishing behavior of the transmission probability can be shown very
easily by a simple mathematical calculation as follows.
For a symmetrically connected ring, the wave functions passing through
the upper and lower arms of the ring are given by,
\begin{eqnarray}
\psi_1 & = & \psi_0 e^{\frac{ie}{\hbar c} \int \limits_{\gamma_1}
\vec{A}.\vec{dr}} \nonumber \\
\psi_2 & = & \psi_0 e^{\frac{ie}{\hbar c} \int \limits_{\gamma_2}
\vec{A}.\vec{dr}}
\label{equ10}
\end{eqnarray}
where, $\gamma_1$ and $\gamma_2$ are used to indicate the two different
paths of electron propagation along the two arms of the ring. $\psi_0$
denotes the wave function in the absence of magnetic flux $\phi$, and it
is the same for both the upper and lower arms since the ring is
symmetrically coupled to the electrodes. $\vec{A}$ is the vector
potential associated with the magnetic field $\vec{B}$ by the relation
$\vec{B}= \vec{\nabla} \times \vec{A}$. Hence the probability of finding
the electron transmitted through the ring can be calculated as,
\begin{equation}
|\psi_1 + \psi_2|^2 = 2|\psi_0|^2 + 2|\psi_0|^2 \cos \left({\frac{2\pi
\phi}{\phi_0}}\right)
\label{equ11}
\end{equation}
where, $\phi = \oint \vec{A}.\vec{dr} = \int \int \vec{B}.\vec{ds}$
is the flux enclosed by the ring.
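Using the half-angle identity $1+\cos x=2\cos^2(x/2)$, Eq.~(\ref{equ11}) can be rewritten in a form that makes the zero at $\phi=\phi_0/2$ manifest:

```latex
|\psi_1+\psi_2|^2 = 4|\psi_0|^2\cos^2\left(\frac{\pi\phi}{\phi_0}\right)\,,
```

which vanishes whenever $\phi$ is a half-odd-integer multiple of $\phi_0$.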
From Eq.~(\ref{equ11}) it is clearly observed that at $\phi=\phi_0/2$
the transmission probability of an electron drops exactly to zero. On
the other hand, for all other values of $\phi$ i.e., $\phi \ne \phi_0/2$,
electron transmission through the ring takes place, which provides a
non-zero value of the conductance. Thus, at the particular instants when
$\phi(t)$ reaches its maximum ($+ \phi_0/2$) or minimum ($- \phi_0/2$),
the conductance drops to zero, as clearly seen in the conductance
spectrum (middle panel of Fig.~\ref{current1}). Hence, by changing the
frequency of the time dependent flux $\phi(t)$, the periodicity of the
conductance can be regulated. To visualize the oscillatory action more
prominently
we present the variation of the current as a function of $\omega t$ in the
lower panel of Fig.~\ref{current1}. The current $I$ through the ring
is obtained by integrating over the transmission function $T$ (see
Eq.~(\ref{equ6})). Here we compute the current for the typical bias
voltage $V=2.5$. Following the conductance pattern, the oscillatory
behavior of the current is clearly understood, and like the conductance
spectrum the current exhibits $\pi$ periodicity with the amplitude
$I_{max}=2.54$. All these characteristic features show that an
oscillatory response in the output is obtained even though the ring is
subject to a DC bias voltage.
\subsection{Responses in circuit configuration II}
Next, we concentrate on the responses obtained in circuit
configuration II. The results are illustrated in Fig.~\ref{current2},
where the total number of atomic sites $N$ in each ring is fixed at $8$.
In the upper panel, we plot the time dependent fluxes $\phi_1(t)$
(orange line) and $\phi_2(t)$ (magenta line), which pass through the two
rings. A constant phase shift $\delta$ exists between these
two fluxes as mathematically expressed in Eqs.~(\ref{in2}) and
(\ref{in3}). Here we set $\delta=\pi/2$. In the middle panel, we
describe the time dependence of the conductance $g$ with amplitude
$g_{max}=1.9$, where the conductance is evaluated at the typical energy
$E=2.5$. The conductance shows oscillatory behavior as a function of
$\omega t$ with $\pi/2$ periodicity. Thus, for circuit
configuration II, the periodicity becomes exactly half of that in
circuit configuration I. The explanation of the $\pi/2$ periodicity
is as follows. For this two ring system, the transmission probability
depends on the combined effect of quantum interferences in the two
rings. In ring-1, $\phi_1(t)$ is sinusoidal in form as described
mathematically in Eq.~(\ref{in2}), while in ring-2, the variation
of flux $\phi_2(t)$ is the same as in ring-1 with a phase shift $\pi/2$.
Therefore, ring-1 and ring-2 enclose $\phi_0/2$ flux alternatively
in the interval $\omega t=\pi/2$, and accordingly, zero transmission
probability is achieved at this interval. In the same footing,
here we also describe the variation current $I$ with time $t$ (see
lower panel of Fig.~\ref{current2}) to support the oscillatory action
observed in this circuit configuration II. The current is computed at
the typical bias voltage $V=4$. The variation of current shows $\pi/2$
periodicity with an amplitude $I_{max}=2$ and this periodic nature
is well understood from the conductance spectrum. From these
\begin{figure}[ht]
{\centering \resizebox*{7.8cm}{5cm}{\includegraphics{current2.eps}}\par}
\caption{(Color online). Responses in circuit configuration II. Upper,
middle and lower panels describe the time dependences of two fluxes
$\phi_1$ (orange line) and $\phi_2$ (magenta line), conductance $g$ and
current $I$ as a function of time $t$. Conductance is calculated at the
energy $E=2.5$ and current is determined at the typical bias voltage
$V=4$. In each ring, total number of atomic sites $N$ is fixed at $8$ and
we choose $\delta=\pi/2$. The amplitudes are: $\phi_{max}=0.5$,
$g_{max}=1.9$ and $I_{max}=2$.}
\label{current2}
\end{figure}
conductance and current spectra it is evident that in the two-ring
system, which is subject to a DC bias voltage, the oscillatory response
can be modulated very easily by tuning the phase difference $\delta$
between the two time varying magnetic fluxes.
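A rough way to see the $\pi/2$ periodicity is to model the series arrangement by the product of the two single-ring interference factors of Eq.~(\ref{equ11}). This heuristic composition ignores multiple reflections between the rings and is meant only to expose the periodicity, not quantitative values.

```python
import numpy as np

def toy_transmission(wt, delta=np.pi / 2):
    """Heuristic series transmission of two flux-threaded rings.

    Each ring contributes an interference factor cos^2(pi*phi/phi_0);
    multiplying the two factors ignores inter-ring reflections, so only
    the periodicity, not the magnitude, is meaningful.
    """
    phi1 = 0.5 * np.sin(wt)             # phi_1(t) in units of phi_0
    phi2 = 0.5 * np.sin(wt + delta)     # phi_2(t), shifted by delta
    return np.cos(np.pi * phi1) ** 2 * np.cos(np.pi * phi2) ** 2

# For delta = pi/2 the product vanishes whenever either ring encloses
# phi_0/2, i.e. at every multiple of pi/2 in omega*t, and the pattern
# repeats with period pi/2.
```

This reproduces the halved period of circuit configuration II: the blocking events of the two rings interleave, occurring every $\omega t=\pi/2$.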
Finally, we can say that by extending this idea to an array of rings
in which the different rings are subject to time varying magnetic fluxes
with different phases, oscillatory responses with $\pi/n$ periodicity
can be achieved, where $n$ is an integer. Our exact analysis may
provide some significant insights for designing nano-electronic circuits.
\section{Concluding remarks}
In a nutshell, we have addressed the possibilities of current modulation
at nano-scale level using mesoscopic rings enclosing a time varying
magnetic flux. We have shown that a single mesoscopic ring or two
such rings, subject to a DC bias voltage, can support an oscillating
output current. A single mesoscopic ring can exhibit an oscillating
current with a particular frequency associated with the flux $\phi(t)$,
while the frequency of the current can be regulated in the case of two
rings by tuning the phase difference $\delta$ between the fluxes
$\phi_1(t)$ and $\phi_2(t)$. The whole modulation action is based on
the central idea of quantum interference effect in presence of flux
$\phi$ in ring-shaped geometries. We adopt a simple tight-binding
framework to illustrate the model, and all the calculations are done
using the single-particle Green's function formalism. Our exact numerical
results provide two-terminal conductance and current which clearly
describe the essential features of current modulation. Our analysis
can be used in designing tailor made nano-scale electronic devices.
Throughout our work, we have described all the essential features
of current modulation for two different ring sizes. In circuit
configuration I, we have chosen a ring with total number of
atomic sites $N=20$. On the other hand, in circuit configuration II,
we have considered two identical rings, where each ring contains $8$
atomic sites. In our model calculations, these typical numbers ($20$
or $2\times8=16$) are chosen only for the sake of simplicity. Though the
results presented here change numerically with the ring size ($N$),
all the basic features remain exactly invariant. To be more specific, it
is important to note that, in a real situation, the experimentally
achievable rings have typical diameters in the range $0.4$-$0.6$
$\mu$m. In such small rings, unrealistically high magnetic fields
are required to produce a flux quantum. To overcome this situation,
Hod {\em et al.} have studied this issue extensively and proposed how
to construct nanometer scale devices, based on Aharonov-Bohm
interferometry, that can be operated in moderate magnetic
fields~\cite{baer4,baer5,baer6,baer7}.
In the present paper, we have carried out all the calculations
ignoring the effects of temperature, electron-electron correlation,
etc. Scattering processes arising from these factors would influence
the electronic phases in the mesoscopic ring and, in consequence,
could disturb the quantum interference effects. Here we have assumed
that all these effects are sufficiently small in our sample, and
accordingly, we have neglected them in this particular
study.
The importance of this article lies mainly in (i) the simplicity
of the geometry and (ii) the smallness of the size.
\section{Introduction}
\subsection{Asymmetric Games}
In engineering-economic and other systems, asymmetric games exist if some players have distinct advantages over other players. These advantages may be structural, e.g., the first-mover advantage in Stackelberg games \cite{gabriel2012complementarity}, or may take the form of disproportionately higher payoffs, a greater number of strategies, or other aspects. Concerns regarding equity and welfare arise when these situations involve shared resources or infrastructure of economic, social, or environmental importance. Thus, ways to balance this asymmetry provide insight into policies to improve equity in asymmetric games.
Games played on asymmetric networks are one important class and can naturally become a source of asymmetry between players. Specifically, the asymmetric network governs the interactions among players such that only a few neighbors are capable of influencing a given player's set of decisions \cite{parise2019variational}. If this influence is biased in one direction, then the network position of certain players may be advantageous in space, time or both. Players with these positional advantages could be "indifferent" to or even exploit the strategies of others and thereby create an asymmetric game. Stackelberg games are an important example of the latter. The difference is that in Stackelberg and leader-follower games more generally, e.g., mathematical program with equilibrium constraints (MPECs) or equilibrium problems with equilibrium constraints (EPECs) \cite{gabriel2012complementarity}, the leaders directly take into account the actions of the followers in their decision-making to optimize their own objective functions. The followers are passive and take the leaders' decisions as given.
The study of asymmetric games in this paper concentrates on an asymmetric network that takes inspiration from river systems with multiple independent water users. Specifically, the players located on the upstream end of the river have a positional advantage manifesting as privileged access to water. Downstream users must take these decisions as given, which may result in excess flooding, inadequate water supply, or degraded water quality. Independent, conflicting water usage decisions often arise in trans-boundary river basins. Namely, these include situations where the river basin is not solely contained within one administrative boundary. This general situation is known as the river-sharing problem \cite{van2007component}.
With this example in mind, we consider a general, asymmetric game on a line-graph network where a shared resource is accessed on a "first come, first served" basis. In such a network, each player is sequentially positioned in a line on a directed network \cite{van2007component}. Excluding the two terminal-end players, each player has both an upstream and downstream neighbor. The two terminal-end players have only one upstream or downstream neighbor depending on the position. The upstream users are like leaders in a Stackelberg game, except they may not directly exploit the downstream users (e.g., are "indifferent" to followers). In this context, truly indifferent leaders are mathematically equivalent to those who are abstaining from exploiting the followers.
Using this general model, we provide several non-cooperative game theory models for line-graph networks of river systems. The aim is to allow the downstream players to balance this asymmetry through payments to water-release markets. Compared to other papers in this line of research, the non-cooperative approach provides more realistic modeling than cooperative game theoretic approaches \cite{peleg2007introduction}, yet still allows for an improved system benefit as compared to the current one. There are a number of important examples of these asymmetric games in a variety of different areas. Similarities and differences between river basins and other infrastructure systems are discussed in the next section.
\subsection{Water vs. Other Infrastructure Systems \label{sec:wat_inf}}
Water resource-related risks are closely linked to a number of ongoing economic and environmental concerns and have been exacerbated in recent years. The myriad causes responsible for this include rapid population growth and urbanization, increased wastewater discharges and more stringent effluent limits, a greater number of recreational users, degraded in-stream environmental habitats and landscapes, increased frequency and duration of extreme climate events, and inequitable access to clean drinking water. For instance, a study found that the drought in the Western United States is the worst in 1,200 years \cite{rott_2022}. For years, stakeholders have recognized that tackling such complex challenges requires a unified, collaborative approach and cutting-edge solutions to evaluate and mitigate future risks. Furthermore, translating the flow of water into the flow of benefits is inherently challenging because of water's ubiquitous usage across municipal, agricultural, and industrial sectors. This hinders the ability to validate and address the associated inequities with regulation alone.
What makes water management and river systems in particular interesting and the focus of the application in this paper, is their relationship to commodity markets. In general, there are no widely-implemented market structures to balance the asymmetry outlined above either for water quantity or quality. For example, the doctrine of prior appropriation in the western United States grants water rights on a "first in time, first in right" basis \cite{cech2005principles}. However, this system is rigid and does little more than transfer a spatial asymmetry into a temporal asymmetry. In contrast, other infrastructure systems can have market structures/systems to balance welfare and other system-level economic or other objectives. Consider the following examples to highlight this point.
In the electric power sector, markets in Europe and North America have several stages of decisions leading up to real time. For example, power producers can submit day-ahead bids for production levels and prices, which then help independent system operators (ISOs) balance power supply with forecasted demand. There are also markets that balance supply and demand in near real-time, as well as automatic adjustments in real time, e.g., the PJM power market in the U.S. (\href{https://www.pjm.com/}{https://www.pjm.com/}) or Nordpool in Europe (\href{https://www.nordpoolgroup.com/}{https://www.nordpoolgroup.com/}).
Water resource systems do not operate with these levels of decision-making, in part because water can be stored as needed. In power systems, by contrast, today's markets generally have no market-scale storage assets to mitigate potential imbalances in uncertain supply (i.e., renewables) or uncertain demand. Also, power markets allow for forward contracts as well as spot markets to be as flexible as possible, which is distinct from river-based water systems. One aspect of power markets that is akin to balancing upstream and downstream players is demand response. This is the temporal shifting of consumer load (e.g., residential, industrial) to better balance supply and demand. For example, residents in buildings may be incentivized with payments to shift their load to hours with lower prices for overall system benefit (i.e., less need for expensive, fossil fuel-based peaking plants). In this sense, the asymmetric game is over time, with the upstream players being the consumers (or producers) that are paid to alter their consumption (production) schedules for temporally later consumers or producers (i.e., downstream players) \cite{conejo2010real}.
In transportation, specifically traffic management, there are also mechanisms in place to better balance the asymmetry in this transport infrastructure on a real-time basis. Consider real-time tolls that change their prices based on the volume of flow along a particular highway through the use of vehicle transponders. In effect, drivers can decide to use the roads later if the prices are too high. The earlier drivers in this case are the upstream players whose choice of using the tolled road can affect later, downstream drivers. In this case the asymmetry is over time, but the earlier drivers are negatively incentivized by much higher congestion tolls (assuming that they are driving during the busy hours) \cite{gabriel1997traffic}.
From a water-quality perspective, the analog with power is perhaps best seen through carbon emissions-reduction programs like the U.S. Regional Greenhouse Gas Initiative (RGGI) (\href{https://www.rggi.org/}{https://www.rggi.org/}) \cite{ruth2010strategies}. This program grants carbon allowances (a maximum amount of tons of carbon emissions), and it is up to the market to balance this with policy goals. For example, power companies that produce renewable energy or can limit their carbon emissions can generate revenue from selling their unused allowances. Power companies that produce too many carbon emissions have to pay for this overage. RGGI appears to have done well in monetizing carbon emissions reductions. From this perspective, RGGI relates to water quality, for example, sediment or pollution reduction in river systems. While power has such systems and markets in place, it is rarer for water systems to apply them successfully. The Virginia Nutrient Credit Exchange program is a notable exception \cite{Gov_McD_2012}. Another interesting comparison between water and power is that water users along a river can act as both suppliers and consumers, which is analogous to prosumers in energy markets.
\section{Literature Review and Contributions of This Paper}
\subsection{Literature Review}
Most if not all of the research on line-graph games has been from the framework of cooperative game theory. Brink et al. (2007) used cooperative game theory to analyze line-graph games with applications in machine sequencing games and the river-sharing problem \cite{van2007component}. Khmelnitskaya (2010) extended this work to a more general case, which considers cooperative game theory on rooted-tree and sink-tree digraphs \cite{khmelnitskaya2010values}. This structure was then used to address the river-sharing problem for more complex networks. These works demonstrate that the river-sharing problem can be generalized to a mathematically abstract setting within a game-theoretic context.
Network games have also been studied from the non-cooperative game theoretic framework, but line-graph games lack coverage. For example, Cominetti et al. (2021) formulate the "Buck Passing Game" where the players attempt to pass a chore to other players in the network to minimize individual effort \cite{cominetti2021buck}. Zhou and Chen (2018) consider sequential consumption in networks during a firm's release of a new product \cite{zhou2018optimal}. It is similar to a line-graph game but involves more sophisticated network dynamics. Parise and Ozdaglar (2019) formulate a general, variational inequality framework for network games, but do not cover line-graph games as a specific case \cite{parise2019variational}.
Water-resource problems have been analyzed from a wide variety of both cooperative and non-cooperative game theoretic network contexts. Dinar and Hogarth (2015) presented a systematic review of game theory and water resource literature \cite{dinar2015game}. They found that much of this research was related to cooperative game theory. However, the non-cooperative models lack extensive formulations from an equilibrium programming perspective. Bekchanov et al. (2017) reviewed over 150 papers on water economic models. They concluded that the literature poorly integrates economic equilibrium models with the underlying water resource networks \cite{bekchanov2017systematic}. Archibald and Marshall (2018) corroborate this viewpoint in their literature review. They reviewed nearly 450 papers on mathematical programming in water resources, but equilibrium programming was notably absent \cite{archibald2018review}.
Britz et al. (2013) provide a notable exception to the equilibrium programming gap in the literature. They model a stylized river basin using multiple optimization problems with equilibrium constraints \cite{britz2013modeling}. In a follow-up paper, Kuhn et al. (2014) extend the approach to a real-world case study in the Lake Naivasha Basin. Despite the uniqueness of the approach, neither of these papers attempts to generalize it to the broader non-cooperative game theoretic context. They also focus primarily on water allocation issues without considering the trade-offs between water use curtailments and water infrastructure investment. Additionally, they do not consider nuances of water balances such as the role of indirect water reuse.
\subsection{Contributions of the Current Paper}
Thus, the current work formalizes the modeling approach used in \cite{britz2013modeling} as a Generalized Nash equilibrium model for asymmetric, non-cooperative games on line graphs. It also provides a counterpoint to the cooperative game theory approach that is already well covered in the literature. The goal is to demonstrate how self-enforcing agreements are possible among players even in the context of an asymmetric game. Specifically, market structures are used to identify trading opportunities that connect high marginal benefits downstream to lower marginal costs upstream. It also considers water management decisions beyond water allocation such as the role of consumptive use, indirect water reuse, storage, and capital projects.
Furthermore, the proposed water release market is more tangible than markets based on water-allocation. In the water release market, water's scarcity and associated value is based on the physical barriers and cost of releasing additional water to the river. In contrast, water-allocation markets are based on legally increasing a water-withdrawal limitation. Thus, the value of water is derived from scarcity associated with a legal barrier. In countries such as Chile, real-world implementation of these water-allocation markets are inequitable because the judicial system has not uniformly enforced this legal barrier \cite{galaz2004stealing}.
To illustrate the approach, the model is applied to a stylized river basin as well as a case study in the Duck River basin in Tennessee, USA. We consider the role of consumptive use, indirect water reuse, storage, and capital projects in water resource systems. The purpose is to extend the application of the approach used in \cite{britz2013modeling}. Taken together, the application goal is to develop a collaborative approach to water resources management to better balance the upstream-downstream asymmetries. Our approach achieves this goal using concepts from other infrastructure markets and economic theory.
Summarizing the above discussion, the current paper makes valuable contributions versus the existing literature as follows:
\begin{enumerate}
\item Formalize non-cooperative games on line graphs as a counterpoint to the existing cooperative game theory literature;
\item Extend non-cooperative river basin game theory models to consider engineering-economic decisions beyond water allocation schemes; and
\item Create novel water market structures to achieve a better alignment of stakeholder interests in a river basin.
\end{enumerate}
\section{General Model}
\subsection{Line-Graph Network Games \label{sec:gm_lgng}}
Games on line graphs have a structure that can be expressed mathematically. The location of player $i \in I$ on the line graph can be considered as an index of sequential positions $1,2,...,|I|$. For each player, the decisions, $x_i$, and the associated payoff function, $f^{LG}_i(x_i)$, can be quantified using an optimization model. These payoffs are generated in a context where upstream players may be indifferent to and/or uninfluenced by downstream players. Crucially, the feasible region, $R_i$, is a function of the state variable $S \in \{s_o,s_1,s_2,...,s_{|I|}\}$. Assuming one shared resource for simplicity, the scalar $S$ represents the state of the resource shared among the various players. It is constrained to take on a value $s_i$, which represents player i's transformations to the shared resource (e.g., water releases after withdrawing from the river).
Sequential transformations to $S$ represent the primary connections from one player's optimization model to another. The function $g^{LG}$ describes the value of the state transformations from the domain of player i's optimal decisions (i.e., $x^*_i$) and the current state of the resource (i.e., $S$).
Starting from an initial value $s_o$, each player applies these transformations sequentially according to their network position such that player i inherits the transformation to $S$ from player i-1. This nominally represents the only form of interaction among the players.
The following algorithm mathematically expresses the dynamics of this game:
\textit{For each} $i \in I:$
\begin{enumerate}
\item \textit{if i = 1:} $S \leftarrow{} s_o$ ; \textit{else}: $S \leftarrow{} s_{i-1}$
\item \textit{Solve} $\max_{x_i} f^{LG}_i(x_i) \quad s.t. \quad x_i \in R_i(S)$
\item $s_i=g^{LG}(x^*_i,S$), $i \leftarrow i+1$
\end{enumerate}
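The sequential procedure above can be sketched in code. The toy payoffs and feasible sets below (withdraw up to demand, lose a fixed fraction to consumptive use) are illustrative assumptions only, not the paper's model:

```python
# Minimal sketch of the sequential line-graph game: each player inherits
# the state S, solves its own problem, and passes the transformed state on.

def solve_line_graph(players, s0):
    """players: list of (best_response, transform) pairs, where
    best_response(S) returns x*_i given the inherited state and
    transform(x_star, S) returns s_i = g(x*_i, S)."""
    S = s0
    decisions, states = [], []
    for best_response, transform in players:
        x_star = best_response(S)   # step 2: maximize f_i over R_i(S)
        S = transform(x_star, S)    # step 3: s_i = g(x*_i, S)
        decisions.append(x_star)
        states.append(S)
    return decisions, states

# Toy instance: each player withdraws up to its demand from the available
# flow S; half of each withdrawal is a consumptive loss.
def make_player(demand, loss_fraction):
    best_response = lambda S: min(demand, S)        # x*_i
    transform = lambda x, S: S - loss_fraction * x  # s_i
    return best_response, transform

players = [make_player(4.0, 0.5), make_player(6.0, 0.5), make_player(8.0, 0.5)]
withdrawals, states = solve_line_graph(players, s0=10.0)
# Upstream players meet demand in full; the last player inherits what remains.
```

Note how the last player's withdrawal is capped by the inherited state, illustrating the positional asymmetry.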
This formulation illustrates how players at a positional disadvantage (i.e., late in the sequence) inherit the shared resource. They are completely dependent on the optimal decisions, $x^*_i$ of the players early in the sequence yet have no direct opportunity to influence them because each program is sequentially solved. Beyond understanding the magnitude of potential inequities, a solution to this system is rather trivial in the aggregate because each player effectively optimizes in isolation.
\subsection{Generalized Nash Reformulation}
The line-graph network structure does not eliminate the possibility for mutually-beneficial actions if players are allowed to interact. Such a system is reformulated in (\ref{eqn:gen_nash_opt}) where each player separately solves the following optimization problem:
\begin{subequations}
\begin{equation}
\max_{x_i} f^{GN}_i(x_i,y_i,z_i,\pi)
\label{eqn:gen_nash_of}
\end{equation}
\texttt{s.t.}
\begin{equation}
x_i,y_i,z_i \in R^{GN}(s_{i-1}(y_i))
\label{eqn:gen_nash_fr}
\end{equation}
\label{eqn:gen_nash_opt}
\end{subequations}
The vector $y_i$ represents player i's decisions that desirably influence players early in the sequence to alter their utilization of the shared resource, $S \in \{s_1, s_2, ... s_n\}$ (e.g., revenue-generating water purchases). The term $s_o$ is not included here because it is a boundary condition representing the initial unaltered state of the resource. Mathematically speaking, the vector $y_i$ indirectly shows up in the objective functions of other players not equal to $i$. Conversely, the vector $z_i$ represents player i's accommodations for players later in the sequence (e.g., additional water releases downstream). The variable $\pi$ is a vector of variables unique to the system that informs each player's decisions (e.g., market prices). This is a generalized Nash problem as the constraint region is affected by other players' decisions.
The values of $s_i$ and $\pi$ are exogenous to each player but are endogenous to the system as shown in (\ref{eqn:gen_nash_sys1}):
\begin{subequations}
\label{eqn:gen_nash_sys1}
\begin{equation}
s_i = g^{GN}(z_i,s_{i-1}), \quad
\pi = h(Y,Z)
\label{eqn:gen_nash_sys_interact}
\end{equation}
\end{subequations}
The function $g^{GN}$
describes the value for the variable $s_i$ on the domain of player i's feasible accommodation decisions, $z_i$, and the feasible states of the inherited resource, $s_{i-1}$. In the simplest form, this function could include conservation of flow constraints. In its most complex form, this function could include simulation outputs from a physical process model. The function h transforms the vector of decisions $Y = (y_1^T, ..., y_{|I|}^T)^T$ and $Z = (z_1^T, ..., z_{|I|}^T)^T$ to some system value $\pi$ (e.g., market price).
\subsection{Cooperative Game Theoretic Considerations \label{sec:gm_coop}}
Topics from cooperative game theory are used to compare alternative systems or rules for non-cooperative player interactions in the models below. The first concept involves the characteristic function $v(I^C)$ for each subset of players $I^C \subseteq I$. This function gives the amount that the members of $I^C$ are guaranteed to receive if they act together as a coalition \cite{winston2004operations}. Given the non-cooperative model structure specified above, the conditions for inclusion in $I^C$ are as follows:
\begin{equation}
I^C = \{i|\quad ||y_i||\neq 0\} \cup \{i|\quad ||z_i|| \neq 0\}
\end{equation}
where $y_i$ and $z_i$ are the same variables defined previously and $||\cdot||$ is any vector norm.
In the line-graph game, the value for the characteristic function can be calculated as follows:
\begin{equation}
v(I^C) = \sum_{i \in I^C}\left( f^{GN}_i(x^*_i,y^*_i,z^*_i,\pi) - f^{LG}_i(x^*_i)\right)
\label{eqn:line_graph_cf}
\end{equation}
where $x^*_i, y^*_i,z^*_i$ represent optimal decisions for player i. Thus, the characteristic function is the sum of the improvement from the Generalized Nash reformulation over the original line-graph game across all participating players.
The second concept is the notion of an imputation. It is defined mathematically as follows \cite{winston2004operations}:
\begin{subequations}
\begin{equation}
v(I) = \sum^{|I|}_{i=1} r_i
\label{eqn:imput_cond_1}
\end{equation}
\begin{equation}
r_i \ge v(\{i\}) \quad \forall i \in I
\label{eqn:imput_cond_2}
\end{equation}
\end{subequations}
where $r_i$ is the reward player i receives from participating in the coalition. The first condition states that the rewards distributed to all players must equal the value of the characteristic function composed of all players. The second condition states that participating in the coalition should not decrease the rewards received. Put another way, joining the grand coalition should provide a reward at least as high as the coalition of only oneself. Therefore, an imputation must maximize the payoff to the coalition and leave no player worse off than they would be independently. It effectively is a condition for mutual interest among the participating players.
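The two imputation conditions amount to a simple check on a proposed reward vector. In the sketch below, the per-player improvements $f^{GN}_i - f^{LG}_i$ and singleton values are illustrative numbers, not results from the paper:

```python
# Check whether a reward vector is an imputation: rewards must sum to the
# grand-coalition value v(I) and no player may do worse than acting alone.

def is_imputation(rewards, v_grand, v_singletons, tol=1e-9):
    total_ok = abs(sum(rewards) - v_grand) <= tol                   # efficiency condition
    individual_ok = all(r >= v - tol                                 # individual rationality
                        for r, v in zip(rewards, v_singletons))
    return total_ok and individual_ok

# Illustrative improvements f^GN_i - f^LG_i for three players; the
# characteristic function value is their sum over the coalition.
improvements = [1.0, 0.5, 2.5]
v_grand = sum(improvements)            # v(I) = 4.0
v_singletons = [0.0, 0.0, 0.0]         # acting alone yields no improvement
```

Under these assumptions any non-negative reward split summing to $v(I)$ passes the check, which is consistent with the sufficiency direction of the theorem below.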
\begin{theorem} \label{th:imputation}
A non-negative reward $r_i$ for all players is a sufficient condition for an imputation in this optimization-based, line-graph game.
\end{theorem}
See the Appendix for the proof. Using these metrics, the solutions of alternative reformulations shown below are compared and contrasted using the associated characteristic functions and the criteria for imputations. Specifically, we seek alternative reformulations with high characteristic function values that are also imputations. Such conditions represent interactions that are non-cooperative fundamentally but are mutually beneficial for the players.
\section{River Basin Equilibrium Model Formulations}
This section presents a special case of the general line-graph model applied to water resources in river basins. Two alternative Generalized Nash reformulations are considered in Subsections \ref{sec:gcm_form} and \ref{sec:csm_form}. In both cases, the interaction structure between the players (i.e., the $\pi$ function in Equation \ref{eqn:gen_nash_sys_interact}) is a water-release market. Subsection \ref{sec:no_mkt} considers the original line-graph game without any market to allow interaction between the players. Finally, Subsection \ref{sec:performance} translates the cooperative game theory concepts into performance metrics for the market structures.
\subsection{Model Overview \label{sec:rbe_overview}}
Figure \ref{fig:flow_schem} depicts the flow balance for a particular river user, referred to as player i. The water available to player i is a function of the flow rate in the river as well as water released from upstream players. Player i can withdraw river water from an intake and combine it with an independent water supply, which typically represents a capital project such as a reservoir or pumped groundwater. The total water withdrawn is used to meet demand. A fraction of this water usage is returned to the river as discharges in the form of runoff, wastewater, or both. It recombines with the river water and flows to downstream players. The remaining fraction is returned to the groundwater or another basin. This fraction is called the consumptive loss because it is effectively unavailable to downstream players.
\begin{figure}
\centering
\includegraphics[scale = 0.75]{PlayerFlowSchematic.png}
\caption{Hydrology and flow balance in the river basin for player i.}
\label{fig:flow_schem}
\end{figure}
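The per-player flow balance in Figure \ref{fig:flow_schem} can be sketched as follows; the variable names and numbers are illustrative and do not follow the formal notation of the model:

```python
# Sketch of one player's flow balance: water available, withdrawal,
# return flow to the river, consumptive loss, and outflow downstream.

def player_flow_balance(inflow, upstream_release, withdrawal,
                        independent_supply, return_fraction):
    available = inflow + upstream_release          # river water at player i
    withdrawal = min(withdrawal, available)        # cannot withdraw more than is there
    total_use = withdrawal + independent_supply    # water used to meet demand
    discharge = return_fraction * total_use        # runoff/wastewater back to the river
    consumptive_loss = total_use - discharge       # lost to groundwater/another basin
    outflow = available - withdrawal + discharge   # inherited by the next player
    return outflow, consumptive_loss

outflow, loss = player_flow_balance(inflow=10.0, upstream_release=2.0,
                                    withdrawal=6.0, independent_supply=4.0,
                                    return_fraction=0.6)
```

Note that discharges from the independent supply can leave the downstream outflow larger than the natural river flow, which is the basis for the water-release market described next.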
Releases from independent water supply infrastructure are one mechanism to increase flows in the river. Specifically, water is released from a structure, such as a dam, into the river to increase the flow levels. It typically provides a significant capacity increase over the natural flows in the river. A player in control of such infrastructure often gains a level of independence from other players. Thus, modeling infrastructure of this type extends beyond water allocation schemes because supply can often be increased if demand is high enough.
Reductions of consumptive losses are another mechanism to increase flows in the river. These reductions involve alterations such that more water is returned to the river as wastewater or runoff. One of the largest classes of consumptive losses is aging infrastructure. In this case, leaks from water mains or sewers enter the groundwater. Repairing the infrastructure reduces these losses from the river. Another class of losses comes from septic systems in rural and suburban areas, which return treated wastewater to the groundwater. Converting septic systems to sewers increases the return flow to the river via central wastewater collection and treatment.
The cost profiles associated with these two sources of water releases are nearly opposite. The capacity of the independent supply infrastructure is considered to be fixed, which reflects the large fixed costs of water supply infrastructure. However, the marginal costs are low and the potential releases are high. By contrast, consumptive-loss reductions are much more diffuse. For instance, many water mains would likely need to be repaired to generate a significant increase in return flows. Accordingly, the capacity of the consumptive-loss reductions is modeled as a continuous variable. To model diminishing returns, the marginal costs are considered to be progressively higher as more consumptive-loss reductions are implemented.
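The ascending-cost class structure implies a piecewise-linear, convex cost of loss reductions: cheaper repairs are exhausted first. A greedy sketch with illustrative capacities and unit costs:

```python
# Cost of achieving a target consumptive-loss reduction given classes
# ordered by ascending unit cost (capacity, unit_cost) -- the marginal
# cost rises as cheaper classes are exhausted.

def loss_reduction_cost(target, classes):
    remaining, cost = target, 0.0
    for capacity, unit_cost in classes:
        take = min(remaining, capacity)   # fill the cheapest class first
        cost += take * unit_cost
        remaining -= take
    if remaining > 1e-12:
        raise ValueError("target exceeds total reduction capacity")
    return cost

# Illustrative classes: 2 units at $1/unit, 3 at $2/unit, 5 at $4/unit.
cost = loss_reduction_cost(4.0, [(2.0, 1.0), (3.0, 2.0), (5.0, 4.0)])
```

Since the classes are filled in ascending cost order, this total cost function is convex in the reduction target, matching the diminishing-returns assumption above.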
\begin{figure}
\centering
\includegraphics[scale = 0.75]{MarketFlowSchematic.png}
\caption{High-level view, water-release market (just 2 players) shown. }
\label{fig:market_schem}
\end{figure}
Having considered a single player, one can envision a market structure connecting the decisions of multiple players together. Figure \ref{fig:market_schem} depicts the high-level function of the proposed water-release market. Downstream players (e.g., Player 2) purchase water from this market, which is supplied by upstream players (e.g., Player 1). This supply takes the form of water releases from independent supply sources and consumptive-loss reductions. The market prices cover the cost of these releases and could also provide additional revenue to incentivize upstream players to participate.
\subsection{Notation \label{sec:notation}}
The notation in the model consists broadly of sets, primal variables, dual variables, and parameters. In the formulation, these are either units of flow or unit costs per flow rate. In the results section, flow rates are expressed in million gallons per day (MGD), or equivalently, thousand cubic meters per day (TCMD). Notation representing unit costs is expressed in discounted million dollars/MGD/planning period \((\$M/MGD)\), or equivalently, discounted million dollars/TCMD/planning period \((\$M/TCMD)\). These latter units were chosen to represent flow rates conventionally while allowing total costs to be calculated over longer planning periods.
\subsubsection*{Sets}
The following list consists of the sets in the model. Aliases are provided to allow the calculation of cumulative values arising from the use of these sets in the model. The brackets are omitted when referring to the cardinality of these sets.
\begin{itemize}
\item $i, j, k \in \{I\}$ = indexed users of the river from upstream to downstream
\item $U_i \subset I$ = upstream nodes of $i$, where $j\in U_i$ is a typical node index
\item $D_i \subset I$ = downstream nodes of $i$, where $k\in D_i$ is a typical node index
\item $c \in \{C\}$ = classes of water loss reductions in ascending order of expense
\item $t, t' \in \{T\}$ = budgetary planning time periods
\end{itemize}
\subsubsection*{Primal Variables}
The following consists of the primal variables in each user's optimization problem. In this context, reliable capacity is a deterministic equivalent corresponding to a reasonable probability that the supply will be available to meet water demands.
\begin{itemize}
\item \(W^{D}_{i,t}\) = player i's incremental increase in direct water withdrawal from the river relative to time period t-1 (volume/day)
\item \(W^{S}_{i,t}\) = player i's water supply sources from capital improvements that are independent of upstream releases (volume/day).
\item \(Q_{i,t}\) = player i's total demand in time period t (volume/day)
\item \(K_{i,t}\) = player i's reliable capacity added from capital project in time period t (volume/day)
\item \(L^{R}_{i,c,t}\) = player i's incremental water loss reductions in class c in time period t (volume/day)
\item \(W^{P}_{i,t}\) = player i's water purchases from upstream in the cost-sharing market formulation to reduce asymmetric access to water in time period t (volume/day)
\item \(W^P_{i,j,t}\) = player i's purchases from an upstream player j (volume/day)
\item \(W^P_{k,i,t}\) = water sales to player k downstream from i (volume/day)
\item \(O^{min}_{i,t}\) = player i's minimum water outflow to downstream nodes in time period t (volume/day)
\end{itemize}
\subsubsection*{Dual Variables}
\begin{itemize}
\item \(\gamma^{loss}_{i,c,t}\) = Nonnegative shadow price for loss reductions (cost/unit flow)
\item \(\gamma^{flow}_{i,t}\) = Nonnegative shadow price for withdrawal limitations (cost/unit flow)
\item \(\gamma^{cap}_{i,t}\) = Nonnegative shadow price for storage releases (cost/unit flow)
\item \(\lambda^{sup}_{i,t}\) = Unrestricted shadow price for total supply (cost/unit flow)
\item \(\lambda^{aug}_{i,t}\) = Unrestricted shadow price for supply augmentations (cost/unit flow)
\item \(\lambda^{rel}_{i,t}\) = Unrestricted shadow price for the minimum release downstream (cost/unit flow)
\item \(\pi^{as}_{i,t}\) = Nonnegative user i's price for reducing asymmetric access to water in time period t (cost/unit flow)
\end{itemize}
\subsubsection*{Endogenous Functions} \label{subsubsect:endog_funct}
\begin{itemize}
\item \(B_{i,t}\) = Concave consumer benefit function for player i at time t (currency)
\item \(R^{LR}_{i,t}\) = Revenue from loss reduction for player i at time t (currency)
\item \(V^{op}_{i,t}\) = Total operational costs for player i at time t (currency)
\item \(V^{inv}_{i,t}\) = Total investment costs for player i at time t (currency)
\item \(\theta_{i,t}(Q_{i,t})\) = Endogenously defined inverse demand curve as a function of total water supply $Q_{i,t}$ (cost/unit flow)
\end{itemize}
\subsubsection*{Parameters}
\begin{itemize}
\item\(c^{ops}_{i,t}\) = player i's unit operating costs in time period t (cost/unit flow)
\item \(c^{cap}_{i,t}\) = player i's unit capital construction costs in time period t (cost/unit flow)
\item \(c^{cu}_{i,c,t}\) = player i's unit cost for consumptive use reductions in class c during time period t (cost/unit flow)
\item \(c^{sr}_{i,t}\) = costs incurred per release of reliable storage capacity for player i during time period t (cost/unit flow)
\item \(d_t\) = discount rate for time period t (\%)
\item \(\delta^{all}_{ds_{k,i}} \in \{0,1\}\) = logical parameter specifying if player k is downstream of player i
\item \(\delta^{all}_{us_{j,i}} \in \{0,1\}\) = logical parameter specifying if player j is upstream of user i
\item \(lf_{c,i,t}\) = player i's estimated fraction of water losses in class c and time period t (\%)
\item \(n_{i}\) = local water inflow at player i independent of upstream players and capital improvements (volume/day)
\item \(r^{fc}_{i,t}\) = player i's regulator imposed flow constraint in time period t (volume/day)
\item \(a^{req}_{i,t}\) = player i's required augmentation for capital project in time period t that is varied to parameterize the equilibrium model (volume/day)
\item \(\alpha_{i,t}\) = inverse demand intercept for player i in time period t (cost/unit flow)
\item \(\beta_{i,t}\) = inverse demand linear slope for player i in time period t (cost/unit flow)
\end{itemize}
\subsection{Formulation for Water Resource User $i\in I$ \label{sec:gen_formulation}}
Each player represents a municipal water provider who is competing with other providers for access to water along a river. Specifically, each player must decide the values for the following set of variables, $\zeta$:
\begin{align*}
\zeta=\left(W^{D}_{i,t},W^S_{i,t}, Q_{i,t},K_{i,t},L^{R}_{i,c,t},W^{P}_{i,t} \right)
\end{align*}
The maximum payoff and constraints for player i's decisions are modeled as an optimization problem, which is represented as (\ref{eqn:program}):
\begin{subequations}
\label{eqn:program}
\begin{align}
\max_{\zeta} \quad
\left( \sum^{|T|}_{t=1}d_t(B_{i,t} + R_{i,t}^{LR}-V^{op}_{i,t} - V^{inv}_{i,t}) \right)
\label{eqn:objfn}
\end{align}
\texttt{s.t.} \\
\begin{equation}
\sum^t_{t'=1}L^{R}_{c,i,t'} \le \sum^t_{t'=1}lf_{c,i,t'}W^D_{i,t'} \quad \forall c,t
\quad (\gamma^{loss}_{i,c,t})
\label{eqn:const_CU}
\end{equation}
\begin{equation}
Q_{i,t} \le n_{i} + W^S_{i,t} + W^P_{i,t} - r^{fc}_{i,t}+O^{min}_{i-1,t} \quad \forall t \quad (\gamma^{flow}_{i,t})
\label{eqn:const_withdrawlim}
\end{equation}
\begin{equation}
W^{S}_{i,t} \le K_{i,t}
\quad \forall t
\quad (\gamma^{cap}_{i,t})
\label{eqn:const_storrel}
\end{equation}
\begin{equation}
Q_{i,t} = \sum^t_{t'=1}W^D_{i,t'} \quad \forall t \quad (\lambda^{sup}_{i,t})
\label{eqn:supply}
\end{equation}
\begin{equation}
K_{i,t} = a^{req}_{i,t}
\quad \forall t
\quad (\lambda^{aug}_{i,t})
\label{eqn:const_aug}
\end{equation}
\begin{equation}
W^{D}_{i,t},W^{S}_{i,t},Q_{i,t},K_{i,t},W^{P}_{i,t} \ge 0 \quad \forall t , \quad L^{R}_{i,c,t} \ge 0 \quad \forall c,t
\label{eqn:nonneg}
\end{equation}
\end{subequations}
(\ref{eqn:objfn}) represents the objective function for each player. It seeks to maximize social welfare, measured as the discounted consumer surplus of water use and the discounted revenue from the water release market, less discounted capital and operating costs. The terms in each player's objective function are defined endogenously in terms of $\zeta$ and are described in Section \ref{subsect:MCP Formulation}. (\ref{eqn:const_CU}) - (\ref{eqn:nonneg}) represent the constraints on each player's decisions. (\ref{eqn:const_CU}) states that the loss reductions cannot exceed the losses from water withdrawals. (\ref{eqn:const_withdrawlim}) states that the water withdrawn from the river is limited to the net inflow at a particular node minus the regulatory mandated stream flow. The net inflow depends on the minimum outflow from the player immediately upstream (i.e., player i-1). Thus, individual players can modify the constraint set of another player, which results in a generalized Nash equilibrium problem \cite{facchinei2007generalized}. (\ref{eqn:const_storrel}) states that the water released from independent water supply sources must be less than or equal to the capacity of the associated capital project. (\ref{eqn:supply}) states that the cumulative water demand consists of the sum of incremental direct water withdrawals up to the reference time period. Equation (\ref{eqn:const_aug}) states that any capital project must be built at its full capacity.
\subsection{Mixed Complementarity Problem Formulation \label{sec:MCP}}
\label{subsect:MCP Formulation}
The water equilibrium model is formulated as a mixed complementarity problem (MCP). It consists of the concatenation of every player's Karush-Kuhn-Tucker (KKT) optimality conditions associated with (\ref{eqn:program}), the endogenous functions listed in Section \ref{subsubsect:endog_funct}, and market-clearing conditions \cite{gabriel2012complementarity}. The KKT conditions are necessary because the constraints are linear (so a constraint qualification holds). Sufficiency follows since each player's optimization problem (\ref{eqn:program}) is a concave maximization subject to polyhedral (hence convex) constraints.
The mixed complementarity problem generalizes the KKT optimality conditions of convex programs (with constraint qualifications), non-cooperative game theory, as well as a host of other problems in engineering and economics \cite{gabriel2012complementarity}. Formally, the MCP is defined for a given function $F$ as finding $x\in R^{n_x}, y \in R^{n_y}$ such that
\begin{subequations} \label{eq:MCP}
\begin{align}
0\leq F_{x}(x,y) \perp x \geq 0 \label{eq:MCPa}\\
0= F_{y}(x,y), y \texttt{ free} \label{eq:MCPb}
\end{align}
\end{subequations}
where $x$ is a non-negative vector and $y$ is a vector of free variables. The particular form of the vector-valued function $F$ is application-specific and below we describe details about this function and related optimization problems.
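For intuition, a small linear MCP with only non-negative variables (an LCP, with $F(x)=Mx+q$) can be solved by brute force over which components of $x$ are allowed to be positive. This is purely an illustrative sketch under made-up data, not the solution method used for the full model:

```python
from itertools import product

def solve_lcp(M, q, tol=1e-9):
    """Find x >= 0 with w = M x + q >= 0 and x_i * w_i = 0 for all i,
    by enumerating which components of x may be positive (its "basic"
    set).  Only sensible for very small n; shown for illustration."""
    n = len(q)
    for pattern in product([False, True], repeat=n):
        basic = [i for i in range(n) if pattern[i]]
        x = [0.0] * n
        if basic:
            # Solve M[basic, basic] * x_basic = -q[basic] by Gauss-Jordan
            m = len(basic)
            A = [[float(M[i][j]) for j in basic] + [-float(q[i])] for i in basic]
            singular = False
            for col in range(m):
                piv = max(range(col, m), key=lambda r: abs(A[r][col]))
                if abs(A[piv][col]) < tol:
                    singular = True
                    break
                A[col], A[piv] = A[piv], A[col]
                for r in range(m):
                    if r != col:
                        f = A[r][col] / A[col][col]
                        A[r] = [a - f * b for a, b in zip(A[r], A[col])]
            if singular:
                continue
            for k, i in enumerate(basic):
                x[i] = A[k][-1] / A[k][k]
        w = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
        if all(v >= -tol for v in x) and all(v >= -tol for v in w):
            return x
    return None
```

For M = [[2, 1], [1, 2]] and q = [-5, -6], the sketch returns x = (4/3, 7/3), where Mx + q = 0, so complementarity holds with both components of $x$ positive.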
Two alternative water release market structures are considered within these assumptions: a general commodity market and a cost-sharing market. The asymmetric line-graph game without a market is also considered. These
three structures result in different endogenous functions and market-clearing conditions. The similarities between them are described next, and their key features and differences are discussed in the subsections that follow.
Regardless of the market structure, there needs to be an expression defining the flow regime in the river that would result without any regulatory intervention or market. These are the flow conditions on which additional water purchases are based.
\begin{equation}
O^{min}_{i,t} = n_i - \sum^{|C|}_{c=1}\sum^t_{t'=1}lf_{c,i,t'}W^D_{i,t'}+\sum^{|C|}_{c=1}\sum^{t-1}_{t'=1}L^R_{c,i,t'}+O^{min}_{i-1,t} \quad \forall i,t
\label{eqn:min_flow}
\end{equation}
To serve this purpose, (\ref{eqn:min_flow}) establishes the minimum outflow conditions at each player's node. It is a function of the water withdrawal and release decisions and the minimum releases of the player immediately upstream (i.e., player i-1).
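The recursion in (\ref{eqn:min_flow}) is an upstream-to-downstream sweep along the line graph. The sketch below assumes a single loss class and a hypothetical list-based data layout (all numbers are made up):

```python
def min_outflows(n_local, lf, WD, LR, t):
    """O^min_{i,t} for every node i on the line graph at period t
    (single loss class for brevity).  n_local[i] is node i's local
    inflow, lf[i][tp] its loss fraction, WD[i][tp] its incremental
    withdrawal, and LR[i][tp] its incremental loss reduction."""
    O, upstream = [], 0.0
    for i in range(len(n_local)):
        losses = sum(lf[i][tp] * WD[i][tp] for tp in range(t + 1))
        reductions = sum(LR[i][tp] for tp in range(t))  # periods up to t-1
        O.append(n_local[i] - losses + reductions + upstream)
        upstream = O[-1]  # feeds the next node downstream
    return O
```

With all inflow at the head node, each downstream node's minimum outflow simply inherits the upstream value net of its own losses and reductions.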
Both market structures share the same intrinsic consumer benefit function for player $i$'s water withdrawals at time $t$. As shown in (\ref{eqn:ben_funct}), it is represented as the area under a linearized water inverse demand curve $\theta$ with intercept \(\alpha_{i,t}\) and slope $-\beta_{i,t}$ (with $\beta_{i,t}>0$).
\begin{equation}
B_{i,t}(Q_{i,t})=\int^{Q_{i,t}}_{0}\theta_{i,t}(x)\,dx = \int^{Q_{i,t}}_0(\alpha_{i,t}-\beta_{i,t}x\,)dx
\label{eqn:ben_funct}
\end{equation}
We note that $B_{i,t}(Q_{i,t})$ is concave since, by the Leibniz Integration Rule, $\frac{d B_{i,t}(Q_{i,t})}{d Q_{i,t}}=\alpha_{i,t}-\beta_{i,t}Q_{i,t}$ so that the second derivative is just $-\beta_{i,t}<0$.
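As a quick numeric sketch with made-up coefficients, the closed form of the integral in (\ref{eqn:ben_funct}) can be checked, along with the surplus-maximizing withdrawal $Q^* = (\alpha - c^{ops})/\beta$ that results when only the operating cost is netted out (ignoring constraints and market terms):

```python
def benefit(Q, alpha, beta):
    # Closed form of the area under the linear inverse demand curve:
    # integral of (alpha - beta*x) from 0 to Q
    return alpha * Q - 0.5 * beta * Q * Q

def net_surplus(Q, alpha, beta, c_ops):
    # Consumer benefit less operating cost; concave in Q
    return benefit(Q, alpha, beta) - c_ops * Q
```

With the hypothetical values alpha = 16, beta = 3, and c_ops = 1, the surplus peaks at Q* = 5, consistent with the first-order condition from the Leibniz rule above.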
Both market structures also share the same function for total investment costs. Thus the total capital investment costs for node $i$ at time $t$ is expressed as follows:
\begin{equation}
V^{inv}_{i,t}(K_{i,t}) = c^{cap}_{i,t}K_{i,t}
\end{equation}
\subsubsection{General Commodity Market (GCM) Formulation\label{sec:gcm_form}}
In the general commodity market formulation (GCM), separate markets are established for each player generating water releases. Downstream players submit bids to gain access to the water in each of these markets, and the water releases in each market are delivered to the downstream players with the highest willingness to pay at the market price. Intermediate players between the supplier and the purchaser are not allowed to use this water when it is released into the river system. This reflects the structure of general commodity markets where the supplier establishes a price, and the supply is divided among the various consumers purchasing goods at this price.
A key distinction between the market structures is the way water purchases are defined. For the general commodity market, water purchases are defined as follows:
\begin{equation}
W^{P}_{i,t} = \sum^{|I|}_{j=1}\delta^{all}_{us_{j,i}}W_{ij,t}^{P} \quad \forall t
\label{eqn:trad_WP}
\end{equation}
It states that the total water release a recipient purchases is the sum of the purchase requests made to all upstream neighbors.
The revenue from water releases for node $i$ at time $t$ is defined in terms of the price at the supplying player's market:
\begin{equation}
R_{i,t}^{LR}(L^{R}_{i,c,t},\forall c \in C )=\sum^{|C|}_{c=1}\pi^{as}_{i,t}(L^R_{c,i,t}+W^{S}_{i,t}) \quad \forall t
\end{equation}
The total operating costs are defined in terms of the marginal costs of water conveyance and treatment, supply releases, loss reductions, and water purchase requests to upstream players:
\begin{multline}
V^{op}_{i,t}(Q_{i,t}, L^{R}_{i,c,t},\forall c \in C,W^{S}_{i,t}, W^{P}_{ij,t},\forall j \in I, j \neq i) = \\
c^{ops}_{i,t}Q_{i,t} +\sum^{|C|}_{c=1}c^{cu}_{i,c,t}L^{R}_{i,c,t}+c^{sr}_{i,t}W^{S}_{i,t}+\sum_{j=1}^{|I|}\pi^{as}_{j,t}\delta^{all}_{us_{j,i}} W^{P}_{ij,t} \quad \forall t
\end{multline}
Loss reductions are considered incremental. Namely, loss reductions only need to be made in one time period because the measures to achieve them are usually permanent (e.g., water main leak reductions). Thus, the loss reductions only need to be paid for in one time period. In contrast, releases from independent water supply sources need to be purchased in each time period, because the measures to accomplish them are not permanent.
The market-clearing conditions state that the total amount of water demanded at node $i$ by downstream nodes to balance asymmetry (left-hand side) must be less than or equal to what node $i$ makes available through loss-reduction efforts or releases from independent water supply sources.
\begin{equation}
\sum^{|I|}_{k=1}\delta^{all}_{ds_{k,i}} W_{ki,t}^{P} \leq \sum_{c \in C} L^{R}_{i,c,t}+W^{S}_{i,t} \perp \pi^{as}_{i,t} \geq 0 , \forall t
\label{eqn:gcm_mc}
\end{equation}
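A complementarity pair such as (\ref{eqn:gcm_mc}) can be checked numerically at a candidate solution. This small helper is a hedged sketch, not part of the model code:

```python
def clears(supply, purchases, price, tol=1e-9):
    """Check the complementarity pair
       0 <= supply - purchases   perp   price >= 0:
    both quantities must be non-negative and their product
    (essentially) zero, so a positive price requires a binding
    constraint."""
    slack = supply - purchases
    return slack >= -tol and price >= -tol and abs(slack * price) <= tol
```

A positive price with leftover supply, or purchases exceeding the available loss reductions and releases, both fail the check.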
Substituting these expressions into (\ref{eqn:program}) yields the General Commodity Market (GCM) formulation for each player:
\begin{subequations}
\begin{multline}
\max_{\zeta} \quad
\sum^{|T|}_{t=1}d_t(\int^{Q_{i,t}}_0(\alpha_{i,t}-\beta_{i,t}x\,)dx + \sum^{|C|}_{c=1}\pi^{as}_{i,t}(L^R_{c,i,t}+W^{S}_{i,t})-(c^{ops}_{i,t}Q_{i,t}
\\
+\sum^{|C|}_{c=1}c^{cu}_{i,c,t}L^{R}_{i,c,t}+c^{sr}_{i,t}W^{S}_{i,t}+\sum_{j=1}^{|I|}\pi^{as}_{j,t}\delta^{all}_{us_{j,i}} W^{P}_{ij,t} ) - c^{cap}_{i,t}K_{i,t})
\label{eqn:gcm_of}
\end{multline}
\texttt{s.t.} \\
\begin{equation} \label{eqn:gamma_loss}
\sum^t_{t'=1}L^{R}_{c,i,t'} \le \sum^t_{t'=1}lf_{c,i,t'}W^D_{i,t'} \quad \forall c,t
\quad (\gamma^{loss}_{i,c,t})
\end{equation}
\begin{equation} \label{eqn:gamma_flow}
Q_{i,t} \le n_{i} + W^S_{i,t} + \sum^{|I|}_{j=1}\delta^{all}_{us_{j,i}}W_{ij,t}^{P}- r^{fc}_{i,t} + O^{min}_{i-1,t} \quad \forall t \quad (\gamma^{flow}_{i,t})
\end{equation}
\begin{equation}
W^{S}_{i,t} \le K_{i,t}
\quad \forall t
\quad (\gamma^{cap}_{i,t})
\end{equation}
\begin{equation}
Q_{i,t} = \sum^t_{t'=1}W^D_{i,t'} \quad \forall t \quad (\lambda^{sup}_{i,t})
\end{equation}
\begin{equation}
K_{i,t} = a^{req}_{i,t}
\quad \forall t
\quad (\lambda^{aug}_{i,t})
\end{equation}
\begin{equation}
W^{D}_{i,t},W^{S}_{i,t},Q_{i,t},K_{i,t},W^{P}_{i,t} \ge 0 \quad \forall t , \quad L^{R}_{i,c,t} \ge 0 \quad \forall c,t
\end{equation}
\end{subequations}
The KKT conditions, endogenous functions, and market-clearing conditions are then concatenated together to form a linear complementarity problem (LCP) as shown in the Appendix.
\subsubsection{Cost-Sharing Market (CSM) Formulation \label{sec:csm_form}}
In the cost-sharing market (CSM) formulation, the restriction on intermediate players using water releases is relaxed. This allows multiple players to claim the same quantity of released water from an upstream player. While this is uncommon in most commodity markets, direct and indirect water reuse enables water supplies to be treated as a renewable resource. Indirect water reuse is common in river basins because treated wastewater discharges feed surface water intakes downstream \cite{daniell2015understanding}.
In contrast with the GCM structure, the markets are established at the purchasing player's node because water may be reused multiple times between the supplying and the purchasing player. Thus, the upstream players relative to the purchaser separately decide how much water releases to deliver based on the purchaser's willingness to pay. Conversely, the water release supplier receives revenue from all downstream users of this water. This market can be thought of as creating a mechanism for cost sharing through the rental of water.
In this structure, purchase agreements between players are no longer well defined. Therefore, water release purchases are simply the amount of water player i purchases. This makes the substitution of (\ref{eqn:trad_WP}) into (\ref{eqn:objfn}) and (\ref{eqn:const_withdrawlim}) unnecessary.
The revenue for node $i$'s water releases at time $t$ is similar to the GCM formulation, except the payment received for loss reductions is the sum of all the prices of the downstream players:
\begin{equation}
R_{i,t}^{LR}(L^{R}_{i,c,t},\forall c \in C, W^S_{i,t})=\sum^{|I|}_{k=1}\pi^{as}_{k,t}\delta^{all}_{ds_{k,i}}(\sum^{|C|}_{c=1}L^{R}_{i,c,t}+W^{S}_{i,t}) \quad \forall t
\end{equation}
As before, the total operating costs are defined in terms of the marginal costs of water conveyance and treatment, supply releases, loss reductions, and water release purchases:
\begin{equation}
V^{op}_{i,t}(Q_{i,t}, L^{R}_{i,c,t},\forall c \in C,W^{S}_{i,t}, W^{P}_{i,t}) =
c^{ops}_{i,t}Q_{i,t} +\sum^{|C|}_{c=1}c^{cu}_{i,c,t}L^{R}_{i,c,t}+c^{sr}_{i,t}W^{S}_{i,t}+\pi^{as}_{i,t}W^{P}_{i,t} \quad \forall t
\end{equation}
However, the cost for water releases is defined in terms of the price at the recipient player rather than at the supplying players.
As in the traditional market formulation, loss reductions are permanent, and water releases from independent supply sources must be repurchased in subsequent time periods.
(\ref{eqn:MC}) represents the market-clearing conditions for player i's asymmetric access to water. It states that the losses reduced by the upstream players plus the amount released to the river from independent water supply sources place an upper bound on the purchases at node i. A positive price can only occur for these resources when the purchases equal the amount available upstream.
\begin{equation}
\sum^{|I|}_{i'=1}\delta^{all}_{us_{i',i}}(\sum^{|C|}_{c=1}L^{R}_{c,i',t}+W^{S}_{i',t})-W^{P}_{i,t} \ge 0 \perp \pi^{as}_{i,t} \ge 0 \quad \forall t
\label{eqn:MC}
\end{equation}
Substituting these expressions into (\ref{eqn:program}) yields the Cost Sharing Market (CSM) formulation for each player:
\begin{subequations}
\begin{multline}
\max_{\zeta} \quad
\sum^{|T|}_{t=1}d_t(\int^{Q_{i,t}}_0(\alpha_{i,t}-\beta_{i,t}x\,)dx + \sum^{|I|}_{k=1}\pi^{as}_{k,t}\delta^{all}_{ds_{k,i}}(\sum^{|C|}_{c=1}L^{R}_{i,c,t}+W^{S}_{i,t})
\\
-(c^{ops}_{i,t}Q_{i,t} +\sum^{|C|}_{c=1}c^{cu}_{i,c,t}L^{R}_{i,c,t}+c^{sr}_{i,t}W^{S}_{i,t}+\pi^{as}_{i,t}W^{P}_{i,t}) - c^{cap}_{i,t}K_{i,t})
\label{eqn:csm_of}
\end{multline}
\texttt{s.t.} \\
\begin{equation}
\sum^t_{t'=1}L^{R}_{c,i,t'} \le \sum^t_{t'=1}lf_{c,i,t'}W^D_{i,t'} \quad \forall c,t
\quad (\gamma^{loss}_{i,c,t})
\end{equation}
\begin{equation}
Q_{i,t} \le n_{i} + W^S_{i,t} + W_{i,t}^{P} - r^{fc}_{i,t} + O^{min}_{i-1,t} \quad \forall t \quad (\gamma^{flow}_{i,t})
\end{equation}
\begin{equation}
W^{S}_{i,t} \le K_{i,t}
\quad \forall t
\quad (\gamma^{cap}_{i,t})
\end{equation}
\begin{equation}
Q_{i,t} = \sum^t_{t'=1}W^D_{i,t'} \quad \forall t \quad (\lambda^{sup}_{i,t})
\end{equation}
\begin{equation}
K_{i,t} = a^{req}_{i,t}
\quad \forall t
\quad (\lambda^{aug}_{i,t})
\end{equation}
\begin{equation}
W^{D}_{i,t},W^{S}_{i,t},Q_{i,t},K_{i,t},W^{P}_{i,t} \ge 0 \quad \forall t , \quad L^{R}_{i,c,t} \ge 0 \quad \forall c,t
\end{equation}
\end{subequations}
The KKT conditions, endogenous functions, and market-clearing conditions are then concatenated together to form this alternate LCP shown in the Appendix.
\subsubsection{No-Market Formulation \label{sec:no_mkt}}
This formulation represents no market-clearing conditions, as players only interact with each other through sequential water removal from the river. $R^{LR}_{i,t}$ is equal to zero because there is no water release market to generate revenue. Therefore, there is no incentive to reduce losses and no ability to make water purchases. The total operating costs are simplified accordingly:
\begin{equation}
V^{op}_{i,t}(Q_{i,t},W^{S}_{i,t}) = c^{ops}_{i,t}Q_{i,t}+c^{sr}_{i,t}W^{S}_{i,t}\quad \forall t
\end{equation}
Substituting these expressions into (\ref{eqn:program}) yields the No-Market formulation for each player i:
\begin{subequations}
\begin{equation}
\max_{\zeta} \quad
\sum^{|T|}_{t=1}d_t(\int^{Q_{i,t}}_0(\alpha_{i,t}-\beta_{i,t}x\,)dx
-(c^{ops}_{i,t}Q_{i,t} +c^{sr}_{i,t}W^{S}_{i,t}) - c^{cap}_{i,t}K_{i,t})
\label{eqn:nm_of}
\end{equation}
\texttt{s.t.} \\
\begin{equation}
Q_{i,t} \le n_{i} + W^S_{i,t} - r^{fc}_{i,t} + O^{min}_{i-1,t} \quad \forall t \quad (\gamma^{flow}_{i,t})
\end{equation}
\begin{equation}
W^{S}_{i,t} \le K_{i,t}
\quad \forall t
\quad (\gamma^{cap}_{i,t})
\end{equation}
\begin{equation}
Q_{i,t} = \sum^t_{t'=1}W^D_{i,t'} \quad \forall t \quad (\lambda^{sup}_{i,t})
\end{equation}
\begin{equation}
K_{i,t} = a^{req}_{i,t}
\quad \forall t
\quad (\lambda^{aug}_{i,t})
\end{equation}
\begin{equation}
W^{D}_{i,t},W^{S}_{i,t},Q_{i,t},K_{i,t}, \ge 0 \quad \forall t
\end{equation}
\end{subequations}
Solving the No-Market formulation with an LCP is not necessary. It can be solved recursively using the algorithm presented in Section \ref{sec:gm_lgng}. However, the LCP Formulation is presented to be consistent with the other two Generalized Nash reformulations and is shown in the Appendix.
\subsection{Performance Metrics \label{sec:performance}}
The cooperative game theoretic concepts described in Section \ref{sec:gm_coop} are used to create performance metrics for the GCM and CSM market structures. They are calculated from the optimal objective function values determined from the LCP solutions, which represent social welfare. The ultimate goal of the analysis is to test the effectiveness of each market structure in reducing the asymmetry between players.
The optimal objective function values are related to the rewards, $r_i$, that each player achieves. Let $f^m_i$ represent the optimal objective function value for player i under market structure $m \in \{GCM, CSM\}$. These correspond to (\ref{eqn:gcm_of}) and (\ref{eqn:csm_of}), which are both special cases of the generalized Nash reformulation objective function in (\ref{eqn:gen_nash_of}). Additionally, let $f_{o_i}$ represent the optimal objective function value for player i in the no-market formulation. It is a special case of $f^{LG}_i(x_i)$ described in Section \ref{sec:gm_lgng}.
With these definitions in mind, the performance metrics from cooperative game theory can be expressed mathematically. The reward $r^m_i$ each player experiences from participating in market structure m is the difference between the player's objective function value in market structure m and the no-market case:
\begin{equation}
r^m_i = f^m_i - f_{o_i}
\end{equation}
Accordingly, the characteristic function for market structure m, $v^m(I)$, is simply the sum of the rewards across all players:
\begin{equation}
v^m(I) = \sum_{i \in I} r^m_i
\end{equation}
The best performing market structures will be imputations and have large characteristic function values. With this in mind, consider a final metric representing the difference in characteristic function values between market structures $m_1$ and $m_2$:
\begin{equation}
v^{\Delta}(I) = |v^{m_1}(I)-v^{m_2}(I)|
\end{equation}
These metrics are used to analyze the numerical results presented in Section \ref{sec:results}.
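The metrics above amount to simple arithmetic on the optimal objective values. In the sketch below, the objective values are hypothetical placeholders, not model output:

```python
def rewards(f_market, f_no_market):
    """r^m_i = f^m_i - f_{o_i} for each player i."""
    return {i: f_market[i] - f_no_market[i] for i in f_market}

def characteristic_value(r):
    """v^m(I): total reward summed across all players."""
    return sum(r.values())

# Hypothetical optimal objective values for three players
f_gcm = {1: 120.0, 2: 95.0, 3: 70.0}
f_csm = {1: 118.0, 2: 99.0, 3: 74.0}
f_none = {1: 110.0, 2: 90.0, 3: 60.0}

v_gcm = characteristic_value(rewards(f_gcm, f_none))
v_csm = characteristic_value(rewards(f_csm, f_none))
v_delta = abs(v_gcm - v_csm)
```

Under these placeholder numbers every player's reward is non-negative, so both structures would be imputations, and v_delta measures how far apart the two market designs are in aggregate welfare.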
\section{Results} \label{sec:results}
In this section, we discuss some theoretical results for the models proposed above with a focus on the GCM formulation for specificity. Additionally, we provide sensitivity results for a small stylized water system and numerical results for the Duck River in Tennessee, in the southeastern U.S. using real and realistic data to demonstrate insights of the models.
\subsection{Theoretical Results} \label{sec:theo_res}
These theoretical results address existence and uniqueness of solutions as well as the relationships among important variables. As will be explained, the existence of solutions can be guaranteed in certain cases. In terms of uniqueness, there is the possibility for multiple solutions to the River Basin Equilibrium Model Formulation. An example of one of the multiple solutions is provided in Section \ref{sec:3node_num}.
Solutions to the GCM model can vary in how water supply is provided: for example, more water from loss-reduction markets, extra supply from capacity expansions, or reduced demand. Accordingly, we single out a simpler yet illustrative case of what could result. The value of this case lies in exhibiting input data that lead to one of these solutions, and in illustrating the use of loss-reduction markets.
In this representative case, we take a single time period ($|T|=1$) and one loss-reduction class for each node ($|C|=1$), and adjust the optimization model that each node solves accordingly. Note that there is now no discounting (i.e., $d_t=1$), $L^R_i$ has no class or time index, and the incremental additions $W^D_{i,t}$ equal $Q_i$, so the earlier equation relating them is suppressed. Also, since the model is short-term, there are no capacity decisions: $K_{i,t}$ and $W^S_{i,t}$ are no longer needed, nor are the dual variables associated with the corresponding constraints.
\begin{subequations} \label{eq:short}
\begin{equation}
\max_{\zeta} \quad
\int^{Q_{i}}_0(\alpha_{i}-\beta_{i}x\,)dx + \pi^{as}_{i}L^R_{i}-(c^{ops}_{i}Q_{i}+c^{cu}_{i}L^{R}_{i}+\sum_{j \in U_i}\pi^{as}_{j} W^{P}_{ij} )
\label{eqn:gcm_of_short}
\end{equation}
\texttt{s.t.} \\
\begin{equation} \label{eqn:gamma_loss_short}
L^{R}_{i} \le lf_{i}Q_i
\quad (\gamma^{loss}_{i})
\end{equation}
\begin{equation} \label{eqn:gamma_flow_short}
Q_{i} \le n_{i} + \sum_{j \in U_i}W_{ij}^{P}- r^{fc}_{i} + O^{min}_{i-1} \quad (\gamma^{flow}_{i})
\end{equation}
\begin{equation}
Q_{i},L^R_{i},W^{P}_{i,j}, \forall j \in U_i \ge 0
\end{equation}
\end{subequations}
In addition, there are the definitional and market-clearing constraints from before suitably modified (e.g., the loss reductions happen and are used in the same time period):
\begin{subequations} \label{eq:extra_short}
\begin{equation}
O^{min}_{i} = n_{i}-lf_{i}Q_{i}+O^{min}_{i-1}
\label{eqn:omin_short}
\end{equation}
\begin{equation}
\sum_{k \in D_i} W_{ki}^{P} \leq L^{R}_{i} \perp \pi^{as}_{i} \geq 0 \label{eq:MCC_short}
\end{equation}
\end{subequations}
The next result is about existence of a solution for this one-period model (\ref{eq:short}) $\forall i \in I $ plus (\ref{eq:extra_short}) and is presented below. For simplicity of illustration, we assume that there is just one upstream node $u$ that delivers loss reductions, one downstream node $d$ that receives it, and all other nodes $e$ are inactive relative to the loss-reduction market. Clearly, other conditions on the data may lead to other solutions as this equilibrium problem has multiple solutions in general.
\begin{theorem} \label{eq:existence}
Consider the river system linear complementarity problem defined by the KKT conditions to (\ref{eq:short}) combined with (\ref{eq:extra_short}) $\forall i \in I $. This problem always has a solution as long as the following conditions hold (assuming all positive cost coefficients):
\begin{description}\label{xxx}
\item [(i)] $\beta_i >0, \forall i \in I$
\item [(ii)] $c^{ops}_e -\alpha_e \geq 0$ for all nodes $e$ inactive in the loss-reduction market
\item [(iii)] $lf_u Q_u=W^p_{du}$
\item [(iv)] $\sum_{r=1}^e n_r \geq r^{fc}_e$ for all inactive nodes $e$.
\item [(v)] $0<\frac{\alpha_u-c^{ops}_u}{\beta_u} \leq \sum_{r=1}^u n_r -r^{fc}_u$ for loss-reduction supplying node $u$.
\item [(vi)]$c^{cu}_u \geq \alpha_d-c^{ops}_d$
\item [(vii)]$\sum_{r=1}^d n_r-r^{fc}_d =-\frac{\alpha_u-c^{ops}_u}{\beta_u}lf_u<0$
\end{description}
where $u, d, e$ are respectively, the one loss-reduction market supplying node, the one loss-reduction receiving node, and any other node $e$ not active in the loss-reduction market.
\end{theorem}
See the Appendix for the proof.\\
Note that in (i), $\beta_i>0$ means that the demand function for each node $i$ has a strictly decreasing slope, which is quite reasonable. The accompanying assumption that all cost coefficients are positive is also quite realistic. Condition (ii) amounts to saying that the operational costs for an inactive node $e$ exceed the highest demand $\alpha_e$. Condition (iii) says that node $d \in D_u$ uses up 100\% of the loss reductions from node $u$ that is supplying it. Condition (iv) states that the inflows from upstream nodes are sufficient to cover the required flow constraints for each node $e$ that is inactive in the loss-reduction market. Condition (v) states that the extra supply at node $u$, after accounting for inflows and flow constraints, should be sufficiently large. Condition (vi) connects the loss-reduction costs for upstream node $u$ with the largest demand and operational costs for downstream node $d$. Lastly, (vii) states that the inflow to node $d$ from all upstream nodes and itself, less any flow restrictions, should be sufficiently negative to induce water purchases from upstream node $u$.
We now return to the original GCM model allowing for multiple time periods and loss-reduction classes. With this in mind, the next result concerns the prices at active nodes for the GCM model. Here we adopt the following notation: $U_{it}^+$ is the set of nodes upstream of node $i$ where node $i$ purchases loss-reduction measures at time $t$ from node $j$, i.e., $U_{it}^+=\{j \in I: j\in U_i, W^P_{ijt}>0\}$. Also, $D_{it}^+$ is the set of nodes downstream of node $i$ where node $i$ sells loss-reduction measures at time $t$ to node $k$, i.e., $D_{it}^+=\{k\in D_i: W^P_{kit}>0\}$. The result below implies that the upstream loss-reduction markets must equilibrate in prices so that there is no arbitrage opportunity among them relative to a fixed, downstream node $i$ that wants these loss-reduction measures.
\begin{theorem} \label{th:prices}
Consider node $i$ in the GCM formulation and all upstream nodes $j \in U_{it}^+$. Then, $\pi^{as}_{jt}=\pi_{it}, \forall j \in U_{it}^+$ where $\pi_{it}$ is a common loss-reduction market price for these nodes $j \in U_i$.
\end{theorem}
See the Appendix for the proof.
The next result is a statement about the uniqueness of the nodal prices for a fixed value of the dual variable $\gamma^{loss}_{i,c,t}$. This dual variable is associated with constraint (\ref{eqn:gamma_loss}) and can be construed as the incremental value of one more unit of loss reductions. The first part of the result, (\ref{eqn:part a}), states that the prices at a node $i$ are the (discounted) sum of future values of $\gamma^{loss}_{i,c,t}$ plus the unit cost for consumptive use class $c$. This can be understood as the sum of the future opportunity costs plus the current operating cost of loss reductions. The second part, (\ref{eqn:part b}), says that these nodal prices also equal the discounted shadow price for downstream nodes $k \in D_{it}^+$ of getting one more unit of flow, via the associated multiplier $\gamma^{flow}_{k,t}$ to constraint (\ref{eqn:gamma_flow}). Thus, these nodal prices balance the economic needs of node $i$ with those of its downstream loss-reduction market customers.
\begin{theorem} \label{th:prices2}
Consider the GCM formulation and a node $i$ for which there are loss reductions, i.e., $L^R_{c,i,t} >0$ for some $c \in C, t \in T$. Then,
\begin{subequations}
\begin{equation} \label{eqn:part a}
\pi^{as}_{it} =\frac{\sum_{t'=t} ^{|T|} \gamma^{loss}_{i,c,t'}}{d_t}+c^{cu}_{i,c,t}
\end{equation}
\begin{equation} \label{eqn:part b}
\pi^{as}_{it}=\frac{\gamma^{flow}_{k,t}}{d_t}, \forall k \in D_{it}^+
\end{equation}
\end{subequations}
\end{theorem}
See proof in the Appendix.
Given that there are multiple solutions to the river basin equilibrium problem as indicated above, some additional approach may be needed to refine the solution set. For example, one approach is to use equity-enforcing constraints or, more generally, logical constraints to filter out all but one solution, as presented in the discretely constrained mixed complementarity problem (DC-MCP) formulations from \cite{gabriel2017solving, djeumou2019applications}.
\subsection{Numerical Results}
\subsubsection{Three-Node Model}
\label{sec:3node_num}
A small but illustrative three-node model of the above two formulations was developed to illustrate the merits of the modeling approach and to compare the market structures. Three players, numbered sequentially in increasing order from upstream to downstream, represent producers, prosumers, and consumers of consumptive loss reductions, respectively. Two loss-reduction classes and two time periods give the model multi-dimensionality with respect to these parameters. In contrast, no capital projects are present to simplify the analysis.
The parameters for this system were chosen to represent a system with asymmetric access to water and intermediate water scarcity. Enough water is available in the first time period for two out of three players to completely satisfy their demand. In the second time period, the economic growth potential of all players is greater than the water available. To ensure asymmetry, all of the inflow into the river basin occurs at Player 1's node ($n_1$). Each player has the same regulatory flow constraint ($r^{fc}_{i,t}$) representing the minimum base flow necessary to preserve the aquatic habitat. The overall intent is to fully utilize the market in time period 2 while illustrating reasonable starting conditions in time period 1.
Within these guidelines, various scenarios were developed to consider different types of river basins. Certain parameters were kept constant from scenario to scenario to keep them relatively comparable. These include each player's operating costs ($c^{ops}_{i,t}$), the discount rate ($d_t$), and the inverse demand slope ($\beta_{i,t}$). The remaining parameters were varied across scenarios to ascertain the impact of different player configurations, supplies, and demands. These include the following four non-fixed parameters: maximum demand in time period 1, the maximum demand in time period 2, the loss fractions ($lf_{c,i,t}$), and the consumptive use reduction costs ($c^{cu}_{i,c,t}$).
The demands were incorporated into the model via the inverse demand intercept ($\alpha_{i,t}$). Specifically, a point of known price and quantity was assumed to exist at the current demand level and operating cost ($c^{ops}_{i,t}$). The intercept was then obtained by linearizing about this point for a given value of $\beta_{i,t}$. Equation \ref{eqn:alpha_linear} expresses this mathematically:
\begin{equation}
\alpha_{i,t} = \beta_{i,t}\,demand_{i,t}+c^{ops}_{i,t}
\label{eqn:alpha_linear}
\end{equation}
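For illustration, this linearization can be sketched in a few lines of Python (a minimal example using the baseline values from the scenario data: $\beta_{i,t}=3$, a time period 1 demand of 5, and $c^{ops}_{i,t}=1$):

```python
def inverse_demand_intercept(beta, demand, c_ops):
    """Linearize about the known (price, quantity) point:
    alpha = beta * demand + c_ops."""
    return beta * demand + c_ops

# Baseline values: beta = 3, time period 1 demand = 5, operating cost = 1
alpha = inverse_demand_intercept(3.0, 5.0, 1.0)
print(alpha)  # 16.0
```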
Table \ref{tab:toy_model_data} summarizes the input data used in the three-node model scenarios. Each row represents the unique value assigned to the parameter in the first column. The second, third, and fourth columns indicate the applicable indices for the given value assignment. For example, the inverse demand slope ($\beta_{i,t}$) is assigned a value of 3.0 for all three players in both time periods. The consumptive loss-reduction costs, time period 1 demands, time period 2 demands, and loss factors are baseline values to be varied in the different scenarios\footnote{For simplicity, only the baseline values themselves are depicted.}.
\begin{table}
\centering
\begin{tabular}{ccccc}
Parameter & Player(s) & Time Period(s) & Class(es) & Value \\ \hline
$d_t$ & 1, 2, 3 & 1, 2 & & 0.04 \\ \hline
$\beta_{i,t}$ & 1, 2, 3 & 1, 2 & & 3 \\ \hline
\multirow{2}{*}{$n_i$} & 1 & & & 9 \\
 & 2, 3 & & & 0 \\ \hline
$r^{fc}_{i,t}$ & 1, 2, 3 & & & 4 \\ \hline
$c^{ops}_{i,t}$ & 1, 2, 3 & & & 1 \\ \hline
\multirow{2}{*}{$c^{cu}_{i,c,t}$\footnotemark[\value{footnote}]} & 1, 2, 3 & & 1 & 1 \\
 & 1, 2, 3 & & 2 & 5 \\ \hline
\multirow{2}{*}{$demand_{i,t}$\footnotemark[\value{footnote}]} & 1, 2, 3 & 1 & & 5 \\
 & 1, 2, 3 & 2 & & 10 \\ \hline
$lf_{c,i,t}$\footnotemark[\value{footnote}] & 1, 2, 3 & 1, 2 & 1, 2 & 0.1 \\ \hline
\end{tabular}
\caption{Data used in the three-node model scenario analysis.}
\label{tab:toy_model_data}
\end{table}
The baseline values, which are the last three parameters in Table \ref{tab:toy_model_data}, are systematically varied among the players to investigate how player heterogeneity influences market structure. For a given scenario, each baseline value is multiplied by a low (0.67), medium (1.00), or high (1.33) scaling factor to obtain the actual value of the parameter. To create this heterogeneity, no two players have the same scaling factor for a given parameter in a given scenario. Mathematically, this yields $3! = 6$ ways to assign the scaling factors to the players for a given parameter. For example, consider one of the six ways to assign the time period 2 demands among the players. If the scaling factors for Players (1, 2, 3) are (medium, high, low), respectively, then the tuple of associated demand values is (10.00, 13.33, 6.67).
A given scenario consists of one of these player-specific tuples for each of the four baseline value parameters. Every combination of tuples for the four parameters was considered, which results in a sizable number of scenarios. Mathematically, applying the Cartesian product to the four non-fixed parameters yields $3!^4 = 1296$ total scenarios. Table \ref{tab:data_detailed} provides examples of final parameter values that correspond to scenarios analyzed closely in this section.
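The scenario enumeration can be sketched programmatically (a minimal illustration; the parameter names below are placeholders, not the model's actual identifiers):

```python
from itertools import permutations, product

# Low, medium, and high scaling factors (as rounded in the text).
scaling_factors = (0.67, 1.00, 1.33)

# Each non-fixed parameter admits 3! = 6 assignments of the factors
# to Players 1-3, since no two players share a factor.
assignments = list(permutations(scaling_factors))
assert len(assignments) == 6

# The Cartesian product over the four non-fixed parameters gives
# 6^4 = 1296 scenarios in total.
non_fixed = ["demand_t1", "demand_t2", "loss_fraction", "cu_cost"]
scenarios = list(product(assignments, repeat=len(non_fixed)))
print(len(scenarios))  # 1296
```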
Each scenario was compiled and solved in GAMS\footnote{www.gams.com/} using its Python application programming interface. Three separate models were solved for each scenario: the GCM, the CSM, and the no-market formulation. The solutions from these models were used to calculate the metrics detailed in Section \ref{sec:performance}, which quantify the benefit of the GCM and CSM market structures relative to no market.
Table \ref{tab:toy_summary_data} summarizes the results of the scenario runs in more detail. The first row depicts the percentage of scenarios in which the market structure in the corresponding column generated a higher social welfare improvement (i.e., $v^m(N)$) than the alternative (GCM or CSM). The second and third rows depict the mean and standard deviation, respectively, of the social welfare improvement as defined in Equation \ref{eqn:objfn}. Lastly, the fourth and fifth rows depict the mean and standard deviation of the social welfare improvement restricted to the scenarios where the given market structure is higher-performing (i.e., $v^{\Delta}(N)$).
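These row metrics can be reproduced directly from per-scenario results; a minimal sketch with hypothetical per-scenario values (not the actual model output):

```python
from statistics import mean, stdev

# Hypothetical per-scenario social welfare improvements v^m(N), in $.
vm_gcm = [42.1, 43.3, 42.5, 41.9]
vm_csm = [30.0, 45.2, 35.4, 44.0]

# Row 1: share of scenarios where the CSM outperforms the GCM.
csm_share = mean(c > g for c, g in zip(vm_csm, vm_gcm))

# Rows 2-3: mean and standard deviation of v^m(N).
gcm_mean, gcm_std = mean(vm_gcm), stdev(vm_gcm)

# Rows 4-5: v^Delta(N) restricts each structure to the scenarios it wins.
v_delta_gcm = [g for g, c in zip(vm_gcm, vm_csm) if g > c]
print(csm_share, round(gcm_mean, 2), len(v_delta_gcm))
```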
\begin{table}
\centering
\begin{tabular}{l|ll}
Metric & GCM & CSM \\ \hline
\% higher $v^m(N)$ & 3.70\% & 96.30\% \\
average $v^m(N)$ & \$42.64 & \$36.87 \\
std. dev $v^m(N)$ & \$0.74 & \$11.96 \\
average $v^{\Delta}(N)$ & \$42.64 & \$14.49 \\
std. dev $v^{\Delta}(N)$ & \$0.74 & \$14.44
\end{tabular}
\caption{Summary of Scenario Results}
\label{tab:toy_summary_data}
\end{table}
For every player in every scenario, at least one of the market structures improves social welfare as compared to no market at all (i.e., $r_i \ge 0$). Therefore, the markets demonstrate mutual interest among the players (i.e., always form an imputation). However, not all the market structures result in mutually beneficial interactions. For instance, in some of the scenarios, the lower-performing market structure will not form an imputation. This illustrates why choosing the proper market structure is important.
The CSM generates higher social welfare than the GCM in most scenarios, occurring in over 96\% of the scenarios analyzed. However, in the small number of scenarios where the GCM is preferred, it outperforms the CSM by a wide margin, reliably generating large social welfare improvements when it is the appropriate choice. The average and standard deviations for $v^m(N)$ and $v^{\Delta}(N)$ in Table \ref{tab:toy_summary_data} support this finding.
The furthest downstream player always undergoes dramatic economic growth for scenarios where the GCM has higher social welfare than the CSM. Specifically, Player 3 has a low demand in time period 1 and grows to have a high demand in time period 2. Under these conditions, Player 3 is most vulnerable to the decisions of Player 2. As will be explained, this vulnerability exists when the system-wide social welfare is most sensitive to the positional advantage of intermediate players along the network. This demand profile appears in all the cases where the GCM exceeds the CSM in social welfare.
The loss-reduction market, regardless of structure, is most effective relative to no market for a specific set of common scenarios. These occur when the time period 2 demands get progressively higher downstream and the losses get progressively higher upstream. This finding makes sense, because higher losses upstream lead to less water available downstream where it is most valuable. Interestingly, these scenarios were relatively insensitive to the consumptive use costs. The demand and loss parameters seem to matter the most to the viability of the market.
With the aggregate results in mind, three scenarios were selected for detailed analysis. The objective was to identify a scenario for each market structure where the social welfare differences are the most pronounced. Mathematically, these are scenarios where $v^m(N)$ and $v^{\Delta}(N)$ are both high. The first scenario, hereafter called the ``Large Prosumer Scenario,'' was chosen to illustrate the merits of the CSM. The second scenario, hereafter called the ``Downstream Economic Growth Scenario,'' was chosen to illustrate the merits of the GCM; it further reinforces the importance of the demand profile to the appropriate water-release market structure. The parameter values used in these scenarios are shown in Table \ref{tab:data_detailed}. Finally, a third scenario was selected to illustrate the non-uniqueness of prices as described in Section \ref{sec:theo_res}.
\begin{table}
\centering
\begin{tabular}{|l|l|lll|}
\hline
\multirow{2}{*}{Parameter} & \multirow{2}{*}{Scenario} & \multicolumn{3}{l|}{Player} \\ \cline{3-5}
& & \multicolumn{1}{l|}{i = 1} & \multicolumn{1}{l|}{i = 2} & i = 3 \\ \hline
\multirow{3}{*}{$demand_{i,1}$} & Large Prosumer & \multicolumn{1}{l|}{3.33} & \multicolumn{1}{l|}{6.67} & 5.00 \\ \cline{2-5}
& D.S. Econ. Growth & \multicolumn{1}{l|}{5.00} & \multicolumn{1}{l|}{6.67} & 3.33 \\ \cline{2-5}
& Multiple Prices & \multicolumn{1}{l|}{5.00} & \multicolumn{1}{l|}{3.33} & 6.67 \\ \hline
\multirow{3}{*}{$demand_{i,2}$} & Large Prosumer & \multicolumn{1}{l|}{6.67} & \multicolumn{1}{l|}{13.33} & 10.00 \\ \cline{2-5}
& D.S. Econ. Growth & \multicolumn{1}{l|}{6.67} & \multicolumn{1}{l|}{10.00} & 13.33 \\ \cline{2-5}
& Multiple Prices & \multicolumn{1}{l|}{10.00} & \multicolumn{1}{l|}{13.33} & 6.67 \\ \hline
$c^{cu}_{i,1}$ & All Three & \multicolumn{1}{l|}{0.67} & \multicolumn{1}{l|}{1.00} & 1.33 \\ \hline
$c^{cu}_{i,2}$ & All Three & \multicolumn{1}{l|}{3.33} & \multicolumn{1}{l|}{5.00} & 6.67 \\ \hline
$lf_{1,i,1}$ & All Three & \multicolumn{1}{l|}{0.13} & \multicolumn{1}{l|}{0.10} & 0.07 \\ \hline
$lf_{1,i,2}$ & All Three & \multicolumn{1}{l|}{0.13} & \multicolumn{1}{l|}{0.10} & 0.07 \\ \hline
$lf_{2,i,1}$ & All Three & \multicolumn{1}{l|}{0.13} & \multicolumn{1}{l|}{0.10} & 0.07 \\ \hline
$lf_{2,i,2}$ & All Three & \multicolumn{1}{l|}{0.13} & \multicolumn{1}{l|}{0.10} & 0.07 \\ \hline
\end{tabular}
\caption{Parameter values used in the detailed analysis}
\label{tab:data_detailed}
\end{table}
Figures \ref{fig:S47_GCM} and \ref{fig:S47_CSM} compare and contrast the market structures for the Large Prosumer Scenario, while Figures \ref{fig:S113_GCM} and \ref{fig:S113_CSM} do the same for the Downstream Economic Growth Scenario. All these figures depict the line graph of the river in plan view with the cooperative game theory metrics. Additionally, a profile view representing the flow levels in the river is depicted in millions of gallons per day (MGD). These plots have two key groups of elements warranting explanation.
The first group of elements includes the bars on the profile view. The stacked bar represents the actual inflow. It is subdivided into the freely-available inflow irrespective of the market structure plus additional inflow purchased from a market. The maximum usable inflow for each player is also plotted to gauge the level of scarcity of the actual inflow. In essence, it represents the inflow levels that the player would want to see in the river.
The second group of elements includes the line plots on the profile view. They represent how consumptive losses and the subsequent reductions impact the actual flows on the river. Specifically, the inflow without market represents the minimum river levels at a player's node if no consumptive-loss reductions were made. By contrast, the inflow with market represents the actual levels in the river including consumptive-loss reductions. If these two line plots are equal, then no consumptive-loss reductions were made. For additional clarity, Table \ref{tab:det_an_def} mathematically defines these elements and other terms used in the detailed analysis.
\begin{table}[]
\centering
\begin{tabular}{|l|l|}
\hline
Term [Reference] & Mathematical Definition \\ \hline
inflow with market (MGD) [D1] & $n_i+O^{min}_{i-1,t}+\sum_{j=1}^{|I|}\sum_{c=1}^{|C|}\delta^{all}_{us_{j,i}}L^R_{j,c,t}+W^S_{j,t}$ \\ \hline
inflow without market (MGD) [D2] & $n_i+O^{min}_{i-1,t}$ \\ \hline
freely available inflow (MGD) [D3] & $Q_{i,t}-W^P_{i,t}+r^{fc}_{i,t}$ \\ \hline
max-usable inflow (MGD) [D4] & $demand_{i,t}+r^{fc}_{i,t}$ \\ \hline
resource utilization (\%) [D5] & $1 - \min\left\{\frac{D1 - (Q_{i,t}+r^{fc}_{i,t})}{D1},\frac{D4 - (Q_{i,t}+r^{fc}_{i,t})}{D4}\right\}$ \\ \hline
\end{tabular}
\caption{Mathematical definitions for terms used in the detailed analysis.}
\label{tab:det_an_def}
\end{table}
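The resource-utilization metric (D5) can be computed directly from these definitions; a minimal sketch with hypothetical flow values (not model output):

```python
def resource_utilization(d1, d4, q, rfc):
    """D5 = 1 - min((D1 - (Q + rfc))/D1, (D4 - (Q + rfc))/D4),
    where D1 is the inflow with market and D4 the max-usable inflow."""
    withdrawn = q + rfc  # total water committed at the node
    return 1 - min((d1 - withdrawn) / d1, (d4 - withdrawn) / d4)

# Full utilization: withdrawals equal both the available and the
# max-usable inflow, so no usable water is left in either sense.
print(resource_utilization(9.0, 9.0, 5.0, 4.0))  # 1.0

# Under-utilization: available inflow (12) exceeds what is withdrawn.
print(resource_utilization(12.0, 9.0, 2.0, 4.0))
```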
In the Large Prosumer Scenario, the CSM generates higher social welfare than the GCM because the former has higher resource utilization than the latter. In both market structures, Player 2 purchases inflow up to the amount made available through the markets. However, as shown in Figure \ref{fig:S47_GCM}, Player 3's actual inflow in the GCM is less than the available inflow with the market. This difference between available and actual inflow used to meet demand represents resource under-utilization because all the available inflow with the market is within Player 3's maximum usable inflow. By contrast, Figure \ref{fig:S47_CSM} shows that both Players 2 and 3 are able to purchase all the available inflow in the CSM.
The resource under-utilization in the GCM is a consequence of the structure of the market-clearing conditions. In Equation \ref{eqn:gcm_mc}, water-release purchases (i.e., $W^P_{j,i,t}$) are treated as bilateral agreements between a supplier $i$ and a purchaser $j$. These agreements increase the actual inflow in the river for all players downstream of Player $i$ unless Player $j$'s consumptive losses are 100\%. However, the GCM structure only permits Player $j$ to withdraw this added inflow. By contrast, the CSM relaxes the bilateral purchasing assumption (i.e., $W^P_{i,t}$) in Equation \ref{eqn:MC} to allow multiple players to benefit from the water releases.
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.4\paperwidth}
\includegraphics[width = \linewidth]{S47_GCM.png}
\caption{General commodity market with resource under-utilization.}
\label{fig:S47_GCM}
\end{subfigure}
\begin{subfigure}[b]{0.4\paperwidth}
\includegraphics[width=\linewidth]{S47_CSM.png}
\caption{Cost sharing market.}
\label{fig:S47_CSM}
\end{subfigure}
\caption{Visualization of the Large Prosumer Scenario. The cooperative game-theoretic rewards to each player, $r_i$, are shown in the plan view of the network. Water usage is shown in the profile view.}
\label{fig:S47main}
\end{figure}
In the Downstream Economic Growth Scenario, the GCM generates higher social welfare than the CSM because the bilateral water purchases reduce the asymmetry in the network. As shown in Figure \ref{fig:S113_GCM}, the GCM decreases the loss reductions that Player 3 needs to purchase (2.07 MGD instead of 2.33 MGD) to maximize social welfare. This occurs because the water Player 3 purchases from Player 1 is inaccessible to Player 2. In the CSM, no such restriction exists, so Player 2 uses this additional water as shown in Figure \ref{fig:S113_CSM}. In doing so, Player 2 incurs additional losses because it has a non-zero loss factor.
This scenario illustrates that the overall decrease in social welfare from intermediate losses can be significant because Player 2 values an incremental increase in consumption less than Player 3. Numerically, Player 3 has a higher $\gamma^{flow}$ value than Player 2 (\$20.55 M vs. \$15.95 M), which represents the marginal value of additional water use. As alluded to earlier in this section, this phenomenon in the CSM market structure gives a positional advantage to intermediate players. Relative to the GCM, Player 2 increases water withdrawals and subsequently increases Player 3's consumptive loss-reduction purchases.
The Downstream Economic Growth Scenario also reveals an important nuance with regard to social welfare. The CSM technically has a higher characteristic function value, but its solution is not an imputation because the reward to Player 3 is negative. Thus, the basin's players would be unlikely to agree unanimously to implement this market structure. The GCM solution, by contrast, is an imputation, making the GCM a more reasonable alternative for improving social welfare over the no-market structure.
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.4\paperwidth}
\centering
\includegraphics[width=\linewidth]{S113_GCM.png}
\caption{General commodity market with purchase protection.}
\label{fig:S113_GCM}
\end{subfigure}
\begin{subfigure}[b]{0.4\paperwidth}
\centering
\includegraphics[width=\linewidth]{S113_CSM.png}
\caption{Cost-sharing market with intermediate positional advantage.}
\label{fig:S113_CSM}
\end{subfigure}
\caption{Visualization of the Downstream Economic Growth Scenario. The cooperative game-theoretic rewards to each player, $r_i$, are shown in the plan view of the network. Water usage is shown in the profile view.}
\label{fig:S113main}
\end{figure}
In both scenarios, the figures for time period 1 were omitted because no purchases of loss reductions are made. This occurs because the players making loss reductions realize a higher price by waiting for demand increases. This can be observed mathematically in the stationary KKT conditions for loss reductions (Equations \ref{eqn:gcm_stat_LR} and \ref{eqn:csm_stat_LR}). The tendency to withhold supply may offset some of the inherent advantages of market efficiencies.
Table \ref{tab:mult_price} illustrates the non-unique solutions in the GCM market. The term ``V'' represents the starting point used for all the variables in the model. Different solution starting points thus generate more than one valid solution that satisfies Theorem \ref{th:prices2}. An equity-enforcing set of constraints, as found in discretely constrained MCPs, would be one way to distinguish between multiple solutions \cite{djeumou2019applications, gabriel2017solving}.
\begin{table}[]
\centering
\begin{tabular}{c|c|c}
Variable / Parameter & V = 2 & V = 2000 \\ \hline
$\gamma^{loss}_{2,1,1}$ & 0.71 & 1.34 \\
$\gamma^{loss}_{2,1,2}$ & 6.84 & 6.52 \\
$c^{cu}_{2,1,1}$ & 1 & 1 \\
$\gamma^{loss}_{2,2,1}$ & 0 & 0.63 \\
$\gamma^{loss}_{2,2,2}$ & 3.55 & 3.24 \\
$c^{cu}_{2,2,1}$ & 5 & 5 \\
$\sum_{t=1}^2\gamma^{loss}_{2,1,t}+c^{cu}_{2,1,1}$ & 8.55 & 8.87 \\
$\sum_{t=1}^2\gamma^{loss}_{2,2,t}+c^{cu}_{2,2,1}$ & 8.55 & 8.87 \\
$\pi^{as}_{2,1}$ & 8.55 & 8.87 \\
$\gamma^{flow}_{3,1}$ & 8.55 & 8.87 \\
$W^P_{3,2,1}$ & 0.67 & 0.68
\end{tabular}
\caption{Multiple Prices Scenario in the GCM Market}
\label{tab:mult_price}
\end{table}
\subsubsection{Duck River Model}
A second model was generated for the Duck River in Tennessee. The purpose of this model is to evaluate the market approaches with real-world data and to explore the impact of capital projects. Whereas the three-node model considered no capital projects, the Duck River model seeks to understand how the water release market affects the optimal timing of water infrastructure investments with large fixed costs. This use case is relevant to river basins in general because water scarcity usually serves as a driver for significant supply expansion. For context, a similar real-options approach was considered for a single optimization problem in Chapter 18 of \cite{daniell2015understanding}.
Stakeholders in the Duck River watershed in central Tennessee have been navigating water resource-related challenges in earnest since the extreme drought of 2007. Seven water utilities in the Duck River watershed serve water to approximately 250,000 people and industries that include car manufacturers, food processing plants, and other businesses. In addition to these uses, the river provides a wide range of other values including recreation, an excellent fishery, and some of the most biologically-rich freshwater habitat in North America \cite{obg2011supply}.
The drought of 2007 highlighted the issue that, in extended dry weather conditions, the citizens of the Duck River region depend on the water stored in Normandy Reservoir to meet all designated uses, including drinking water, wastewater assimilation, recreation, and natural resource protection. The dramatic decrease in rainfall, combined with the multitude of uses of the reservoir and the river, caused record low water levels in Normandy Reservoir that resulted in temporary changes in dam operation to protect water uses. Weather patterns and growth projections, combined with the obligation to manage water resources responsibly for future generations, created the need for a comprehensive regional water supply plan for the Duck River Region \cite{obg2011supply}.
Development of the regional water supply plan \cite{obg2011supply} and a reservoir-river model of water budgets along the river highlighted the inequities in benefits between the upstream and downstream users in the basin\footnote{The OASIS software was used to build the reservoir-river model. More information can be found here: www.hazenandsawyer.com/publications/oasis-modeling-for-water-people/}. Higher-than-normal releases from the reservoir to the river during extreme dry events impacted the reservoir users through excessive drawdown of the reservoir. In contrast, the flow constraint of 100 cubic feet per second (cfs) on only the most downstream user ignored consumptive water use by upstream water systems, golf courses, and other users.
While the river-reservoir model provided insights into water withdrawals and discharges along the river (i.e., water balance), improved decision-making tools are needed to gain a better understanding of the following questions:
\begin{itemize}
\item How does the flow of water in the Duck River translate into the flow of economic and environmental benefits for the individual stakeholders and the region?
\item How can decision-makers overcome the strategic fragmentation that exists among stakeholders who individually may have an incomplete view of the “big-picture” problems for the region?
\item How can water supply agreements and permits incorporate flexibility to overcome changing conditions?
\end{itemize}
\begin{figure}
\centering
\includegraphics[scale = 0.75]{DRA_Schematic.png}
\caption{Schematic of the Duck River Agency's municipal users and relative positions along the river. Normandy reservoir is at the upstream end of the river. The flow direction is shown opposite of convention to reflect the east-to-west flow of the river.}
\label{fig:DRA_schem}
\end{figure}
Figure \ref{fig:DRA_schem} depicts the line-graph network structure for the Duck River Basin. In contrast to the previous example, six players are considered instead of three. Each player is a municipal water provider for the named city or county, with the exception of the Duck River Utility Commission (DRUC). DRUC serves the cities of Manchester and Tullahoma using water from Normandy Reservoir at the upstream end of the basin. The Normandy dam separates DRUC from the rest of the downstream water providers.
Several data sources were consulted to estimate the model parameter values. The published water rates for each of the municipalities were used to estimate water operating costs. The water supply plan \cite{obg2011supply} and the drought management plan \cite{obg2013drought} were used to estimate water supply volumes, capital recovery costs, capital supply augmentation capacities, and regulatory flow constraints. Water demand projections and consumptive loss fractions were estimated from analysis associated with a demand projection study in the basin \cite{maddaus2016demand}. Industry-available data were used to estimate consumptive use costs and the inverse demand slope \cite{alcubilla2006derived, pickard2007reducing, Ramboll_WS2020}.
The primary water supply source for most of the basin is water released from Normandy Reservoir. Historically, these releases have been at least 77.6 MGD at 97 percent reliability \cite{obg2013drought}\footnote{This is based on the criteria for Stage 3 drought conditions, which represent the drought threshold necessitating the reduction of Normandy Dam outflows to 77.6 MGD. This condition is expected to occur once every thirty years.}. Due to rapid growth in the region, water demands are expected to increase in the coming decades. Furthermore, consumptive use represents over a quarter of the water withdrawn from the basin in some cases \cite{maddaus2016demand}. Some users also discharge wastewater outside the basin.
This supply and demand profile of the basin creates the potential for inequities in economic growth. Columbia is expected to grow from 7.54 to 12.74 MGD between the years 2015 and 2050. Despite the growth potential, Columbia has the least favorable access to water. First, it is the only water system with a regulatory flow constraint in its water withdrawal permit. The permit states that Columbia must cease all water withdrawals if the flow in the river decreases to 64.6 MGD. Additionally, it is the farthest downstream player, so the consumptive losses of upstream water systems can exacerbate Columbia's inequity.
The water supply plan for the Duck River Agency \cite{obg2011supply} identified two projects as alternatives to address this and other water supply related inequities in the basin. Figure \ref{fig:DRA_schem} depicts the location of these proposed capital projects in the basin. The Normandy Dam improvements project would raise the Normandy dam to effectively increase the water stored in the Normandy Reservoir. Alternatively, the Williamsport project would create a new water intake at a less environmentally-sensitive location, which would allow for increased withdrawals from the river.
The latter alternative is incorporated into the present analysis because it has emerged as the higher-priority project for Columbia and the rest of the basin's stakeholders. The project enables Columbia to withdraw water without its regulatory flow constraint, significantly improving its access to water. It also decreases Columbia's dependence on releases from the Normandy reservoir, leaving more water available for upstream players. Thus, the scenarios considered for this model involve the installation year for the Williamsport Intake project.
Assuming Columbia finances the project, the optimal timing for the investment is the key question underlying the scenario analysis in this model. Deferring investment decreases project costs. However, the decreased costs must be weighed against the benefits of the water supply increase. Furthermore, the role of the water release market must also be considered.
To answer this question, separate model runs for each market structure were performed for the installation years 2015, 2020, \ldots, 2045. It was assumed that Columbia bore the majority of the direct benefits and associated costs. Accordingly, the required augmentation parameter, $a^{req}_{columbia,t}$, and the capital cost parameter, $c^{cap}_{columbia,t}$, were manipulated from scenario to scenario. The former was set to the project's capacity, which is 64.6 MGD, for the installation year onward. The latter was set according to the simple formula $c^{cap}_{columbia,t} = 5 \times [\text{annual payment}] / 64.6$, which converts the annual payment to a cost per MGD per five-year planning period.
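The cost conversion can be sketched as follows (a minimal illustration; the annual payment shown is a placeholder, not actual project data):

```python
def capital_cost_per_mgd(annual_payment, capacity_mgd=64.6, years_per_period=5):
    """Convert an annual payment into a cost per MGD per 5-year
    planning period: c_cap = 5 * annual_payment / 64.6."""
    return years_per_period * annual_payment / capacity_mgd

# Hypothetical annual payment of $1.292 M:
print(capital_cost_per_mgd(1.292))  # approx. 0.1 ($ M per MGD per period)
```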
\begin{figure}
\centering
\includegraphics[scale = 0.7]{WilliamsportScenarios.png}
\caption{Two plots of the social welfare for various installation years of the Williamsport Intake capital project for a water release market and no market.}
\label{fig:WI_Scenarios}
\end{figure}
Figure \ref{fig:WI_Scenarios} depicts the results of these scenarios. For each scenario, social welfare is plotted against the installation year for both the water release market and the no-market model. The water release market is referred to generically because the GCM and CSM models produce identical results; this arises because there are no intermediate players purchasing water. Also, based on Theorem \ref{th:prices}, the prices for all nodes upstream of Columbia that are active in the loss-reduction market must be equal (in the GCM formulation). Take, for example, the year 2025: Columbia purchases from DRUC, Shelbyville, and Spring Hill, and the resulting common market price for water is 0.874. The curves reveal competing trade-offs in the installation year for the project. Furthermore, these competing trade-offs occur at higher social welfare (i.e., benefits minus costs) for the water release market than for the no-market model. The water release market also enables the project to be deferred for longer.
As noted previously, the cost of the installation decreases the longer it is deferred. However, the cumulative opportunity costs, as measured through $\gamma^{flow}$, increase with deferment because less water is available to satisfy economic growth. These competing costs are jointly minimized between the years 2035 and 2040 for the water release market.
The water release market decreases the water opportunity cost to make continued capital deferment viable, thereby serving as a temporary water supply solution. This can be visualized in Figure \ref{fig:DRA_2030}, which depicts the estimated flow conditions five years prior to the optimal time frame for the Williamsport Intake installation. The inflow needed to meet water demand is only a limiting factor (i.e., a binding constraint) for Columbia because it is the only utility with a non-zero value for $r^{fc}_{i,t}$. To reduce the supply deficit, Columbia purchases a small amount of consumptive-loss reductions from each player, decreasing the opportunity cost of deferring the Williamsport Intake project another five years. This can be observed in the differences between the ``net inflow'' and ``min inflow'' curves.
\begin{figure}[htbp]
\centering
\includegraphics[scale = 1]{DRA_2030_Schematic.png}
\caption{This figure depicts the function of the water release market in 2030, which is five years before the optimal installation of the capital project.}
\label{fig:DRA_2030}
\end{figure}
The Duck River case study reveals a potential unintended consequence of the water release market. Market profits (i.e., $\pi^{as}_{i,t}-c^{cu}_{i,c,t}$) tend to increase water consumption beyond what it would otherwise be. As shown in Table \ref{tab:DRAWaterEfficiency}, each of the players upstream of Columbia occasionally uses more water than its nominal demand during the Williamsport Intake deferment period (i.e., 2015-2030). Such excess water consumption only occurs when $\lambda^{sup}_{i,t}$ is positive, which can be interpreted as market profit.
Thus, increasing consumption allows the upstream users to experience more consumptive losses and subsequently derive more consumptive loss-reduction revenue from Columbia. All the upstream players that make loss reductions have a positive value for $\lambda^{sup}$ in 2030. This is when Columbia's water demand opportunity cost is the highest in the deferment period. Several players also have positive values for $\lambda^{sup}$ in 2015 because consumptive losses occur at lower rates for these players in later time periods.
\begin{table}
\centering
\begin{tabular}{l|l|l|l|l}
Utility (i) & Time Period (t) & $Q_{i,t}$ (MGD) & Nominal Demand (MGD) & $\lambda^{sup}_{i,t}$ (\$ M) \\ \hline
DRUC & 2030 & 7.5 & 7.27 & 0.06 \\
Shelbyville & 2015 & 3.98 & 3.85 & 0.07 \\
Shelbyville & 2030 & 5 & 4.89 & 0.03 \\
Lewisburg & 2015 & 2.59 & 2.47 & 0.06 \\
Lewisburg & 2030 & 3.27 & 3.17 & 0.03 \\
Spring Hill & 2015 & 2.79 & 2.66 & 0.06 \\
Spring Hill & 2030 & 3.95 & 3.84 & 0.03
\end{tabular}
\caption{This table demonstrates how profit generated from the water release market (i.e., $\lambda^{sup}_{i,t}$) can increase water consumption. The nominal demand represents the projected water needs, and Q represents the total water withdrawn for consumption.}
\label{tab:DRAWaterEfficiency}
\end{table}
\section{Conclusions}
The river basin equilibrium formulations presented reveal a general framework for non-cooperative models on line graphs. In the three-node example, at least one market structure leads to non-negative rewards for all players; however, the models are needed to determine which market structure achieves these rewards. In the Duck River example, the water-release market acts as a temporary solution that helps optimally defer capital investments. In all cases, it was assumed that consumptive-loss reductions are practical and relatively cheap to implement. Water releases from storage become much more important when this assumption fails.
Future research will build on the water-release market approach to make it more versatile for more types of river basins. For example, the models in this paper assume that upstream players cannot completely control the water resource. This assumption could fail in situations with a small number of players or restricted water access. In such a case, treating the river basin as a Stackelberg game is perhaps more appropriate. An alternate bi-level formulation could consider the top-level player as a government entity that imposes regulatory flow constraints on the lower-level players. Lastly, considering the role of groundwater could be important in river basins with high agricultural usage.
\section*{Acknowledgements}
This research was funded by the National Science Foundation's Civil Infrastructure Systems Program (NSF Award \# 2113891). Aside from funding, the agency did not play any other role in the study design, data collection, analysis, interpretation, writing of the report, or publication decisions.
Thank you to Doug Murphy, the executive director of the Duck River Agency, for providing technical assistance, data, and advice featured in this paper.
\section*{Declaration of Interest}
There are no interests to declare.
\printbibliography
\section{Appendix}
\subsection*{Proof of Theorem \ref{th:imputation}}
\begin{proof}
In this game, $r_i$ equals the difference between the optimal objective functions of the Generalized Nash reformulation and the original line-graph game:
\begin{equation}
r_i = f^{GN}_i(x^*_i,y^*_i,z^*_i,\pi) - f^{LG}_i(x^*_i)
\label{eqn:rewards}
\end{equation}
Assume $r_i \ge 0 \quad \forall i \in I$. In this case, $f^{GN}_i(x^*_i,y^*_i,z^*_i,\pi) = f^{LG}_i(x^*_i) \quad \forall i \notin I^C$ by an optimality argument; otherwise, player $i$ would choose to be in the set $I^C$. Therefore, $v(I)=v(I^C)=\sum_{i \in I^C}r_i$ follows from substituting Equation \ref{eqn:rewards} into Equation \ref{eqn:line_graph_cf}, given that $r_i=0$ for all non-participating players. This satisfies Equation \ref{eqn:imput_cond_1}. The premise satisfies Equation \ref{eqn:imput_cond_2}.
\end{proof}
\subsection*{Proof of Theorem \ref{eq:existence}}
\begin{proof}
First note that the KKT conditions of (\ref{eq:short}) are necessary due to the linearity of the constraint functions and sufficient due to the concavity of the objective function since $\beta_i>0$. These conditions are the following.
\begin{subequations} \label{eq:short_all}
\begin{equation} \label{eq:short1}
0 \leq c^{ops}_{i}-\alpha_i+\beta_iQ_i-lf_i\gamma^{loss}_i+\gamma^{flow}_i \perp Q_i \ge 0
\end{equation}
\begin{equation}\label{eq:short2}
0 \leq c^{cu}_i -\pi^{as}_i + \gamma^{loss}_i \perp L^R_i \geq 0
\end{equation}
\begin{equation} \label{eq:short3}
0 \leq \pi^{as}_j -\gamma^{flow}_{i} \perp W^P_{ij} \geq 0, \forall j \in U_i
\end{equation}
\begin{equation} \label{eq:short4}
0 \leq lf_i Q_i-L^R_i \perp \gamma^{loss}_i \geq 0
\end{equation}
\begin{equation}\label{eq:short5}
0 \leq n_i + \sum_{j \in U_i} W^P_{ij} -r^{fc}_i -Q_i + O^{min}_{i-1} \perp \gamma^{flow}_i \geq 0
\end{equation}
\end{subequations}
Consider a node $e$ that is not active in the loss-reduction market. Then, the following are feasible values for the variables:
\begin{equation*}
Q_e=0, L^R_e=0, W^P_{ej}=0, \forall j \in U_e, \gamma^{loss}_e=0, \gamma^{flow}_e=0, \pi^{as}_e=0.
\end{equation*}
To see why, take each part of (\ref{eq:short_all}) separately. (\ref{eq:short1}) is feasible as long as $c^{ops}_e -\alpha_e \geq 0$, which is condition (ii) in the premise. (\ref{eq:short2}) holds as long as $c^{cu}_e \geq 0$, which is guaranteed since all cost coefficients are positive. Conditions (\ref{eq:short3}), (\ref{eq:short4}) and the market-clearing condition (\ref{eq:MCC_short}) are automatically satisfied for the given values. Note that for $j\in U_e$, (\ref{eq:short3}) holds whether $\pi^{as}_j >0$ or $\pi^{as}_j=0$. Under (iii), and considering the definition of $O^{min}_e$, we see from (\ref{eqn:omin_short}) that $O^{min}_e=\sum_{r=1}^{e} n_r$. Thus, in (\ref{eq:short5}), we need $n_e +O^{min}_{e-1}=\sum_{r=1}^{e} n_r \geq r^{fc}_e$, which is (iv).
Now consider node $u$ which alone supplies loss reduction. As $L^R_u >0$, this means by (\ref{eq:short4}) that $Q_u>0$. $L^R_u>0$ in (\ref{eq:short2}) means that $\pi^{as}_u=c^{cu}_u+\gamma^{loss}_u>0$. This in turn implies through (\ref{eq:MCC_short}) and (iii) that $W^P_{du}=L^R_u=lf_u Q_u>0$ is feasible. Since $Q_u>0$, choose $\gamma^{loss}_u=\gamma^{flow}_u=0$ so that in (\ref{eq:short1}) $Q_u=\frac{\alpha_u-c^{ops}_u}{\beta_u}$, which is positive by the left-most inequality in (v). (\ref{eq:short3}) is true since $\forall j \in U_u$, $W^P_{uj}=0$, $0=\gamma^{flow}_u \leq \pi^{as}_j=0$. Note that since $lf_uQ_u=L^R_u$ and $\gamma^{loss}_u \geq 0$, (\ref{eq:short4}) is feasible. (\ref{eq:short5}) reduces to
\begin{equation*}
0<Q_u\leq \sum_{r=1}^u n_r -r^{fc}_u \perp \gamma^{flow}_u \geq 0
\end{equation*}
which is true as long as $Q_u$ satisfies
\begin{equation*}
0<Q_u=\frac{\alpha_u-c^{ops}_u}{\beta_u} \leq \sum_{r=1}^u n_r -r^{fc}_u
\end{equation*}
which is valid by the right-most inequality in (v).
Lastly, consider node $d$, a sole downstream node in $D_u$, active in the loss-reduction market and receiving water purchases from node $u$. By construction, from (\ref{eq:short3}), $W^P_{du}>0 \Rightarrow \gamma^{flow}_d=\pi^{as}_u>0$, the right-most part coming from the analysis above. Now let the rest of the variables have the following values: $\gamma^{loss}_d =0, Q_d=0, L^R_d=0, \pi^{as}_d=0, \gamma^{flow}_d=\pi^{as}_u >0, W^P_{dj}=0, j \neq u, W^P_{dj}=L^R_u>0, j=u$.
With those values, note that (\ref{eq:short1}) is satisfied as long as $\pi^{as}_u\geq \alpha_d-c^{ops}_d$ which is guaranteed by condition (vi) and taking into account the analysis above that gave $\pi^{as}_u=c^{cu}_u+\gamma^{loss}_u=c^{cu}_u$. (\ref{eq:short2}) is automatically satisfied with those values since $c^{cu}_d \geq 0$. (\ref{eq:short3}) is satisfied since $W^P_{du}>0 \Rightarrow \gamma^{flow}_{d}=\pi^{as}_u$. (\ref{eq:short4}) is automatically satisfied for the given values as is (\ref{eq:MCC_short}), the latter since $W^p_{du}=L^R_u$ and $\pi^{as}_u>0$. Since $\gamma^{flow}_{d}>0$, and given that $W^p_{du}=L^R_u=Q_u lf_u$ the last condition, (\ref{eq:short5}) is satisfied as long as:
\begin{equation*}
Q_d=\sum_{r=1}^d n_r-r^{fc}_d+\left(\frac{\alpha_u-c^{ops}_u}{\beta_u} \right) lf_u\\
\end{equation*}
which is guaranteed by (vii).
\end{proof}
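The case analysis above repeatedly verifies scalar conditions of the form $0 \le F \perp x \ge 0$. As a minimal numerical sketch (hypothetical parameter values, not Duck River data), the following mirrors the loss-reduction condition $0 \leq c^{cu}_i -\pi^{as}_i + \gamma^{loss}_i \perp L^R_i \geq 0$:

```python
def check_comp(F_val, x_val, tol=1e-9):
    """Check one scalar complementarity condition 0 <= F(x) _|_ x >= 0:
    both sides non-negative and their product (near) zero."""
    return F_val >= -tol and x_val >= -tol and abs(F_val * x_val) <= tol

# Hypothetical numbers, not taken from the paper.
c_cu, gamma_loss = 0.5, 0.0

# Active supplier: the price exactly covers the marginal cost, so L_R > 0 is allowed.
assert check_comp(c_cu - 0.5 + gamma_loss, 2.0)

# Inactive node: a strictly positive reduced cost forces L_R = 0.
assert check_comp(c_cu - 0.2 + gamma_loss, 0.0)

# Violation: positive reduced cost together with L_R > 0 must fail.
assert not check_comp(c_cu - 0.2 + gamma_loss, 1.0)
```

Each branch of the existence proof amounts to checks of exactly this form, one per condition in (\ref{eq:short_all}).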
\subsection*{Proof of Theorem \ref{th:prices}}
\begin{proof}
Consider a node $j \in U_{it}^+$. Since $W^P_{ijt}>0$ it follows by complementarity that $\pi^{as}_{j,t}=\frac{\gamma^{flow}_{i,t}}{d_t}$. As this is true $\forall j \in U_{it}^+$ we see that the corresponding prices $\pi^{as}_{j,t}$ must be equal with $\pi_{it}=\frac{\gamma^{flow}_{i,t}}{d_t}$.
\end{proof}
\subsection*{Proof of Theorem \ref{th:prices2}}
\begin{proof}
Consider a node $i$ for which $L^R_{cit}>0$, for some $c \in C, t \in T$. Then, by (\ref{eqn:gcm_stat_LR}),
\begin{equation*}
\pi^{as}_{it} =\frac{\sum_{t'=t} ^{|T|} \gamma^{loss}_{i,c,t'}}{d_t}+c^{cu}_{cit}
\end{equation*}
Since $W^P_{k,i,t} >0$ for $k \in D_i^+$ by (\ref{eq:Wp}), we see that
\begin{equation*} \label{eqn:part b}
\pi^{as}_{it}=\frac{\gamma^{flow}_{k,t}}{d_t}, \forall k \in D_i^+
\end{equation*}
\end{proof}
\subsection*{LCP for the GCM Formulation}
\begin{subequations}\label{eq:KKT_GCM}
\begin{equation}
0 \le \sum^{|T|}_{t'=t}\lambda^{sup}_{i,t'} - \sum^{|T|}_{t'=t}\sum^{|C|}_{c=1}lf_{c,i,t}\gamma^{loss}_{c,i,t'} \perp W^{D}_{i,t} \ge 0 \quad \forall t
\label{eqn:gcm_stat_WD}
\end{equation}
\begin{equation}
0 \le d_t(c^{sr}_{i,t}-\pi^{as}_{i,t})-\gamma^{flow}_{i,t}+\gamma^{cap}_{i,t} \perp W^{S}_{i,t} \ge 0 \quad \forall t \label{eqn:b}
\end{equation}
\begin{equation}
0 \le d_t(c^{ops}_{i,t} - \theta_{i,t}(Q_{i,t})) - \lambda^{sup}_{i,t} + \gamma^{flow}_{i,t} \perp Q_{i,t} \ge 0 \quad \forall t
\label{eqn:gcm_stat_Q}
\end{equation}
\begin{equation}
0 \le d_tc^{cap}_{i,t} - \gamma^{cap}_{i,t} + \lambda^{aug}_{i,t}\perp K_{i,t} \ge 0 \quad \forall t \label{eqn:d}
\end{equation}
\begin{equation}
0 \le d_t(c^{cu}_{i,c,t} - \pi^{as}_{i,t})+\sum^{|T|}_{t'=t}\gamma^{loss}_{c,i,t'} \perp L^{R}_{i,c,t} \ge 0 \quad \forall c,t
\label{eqn:gcm_stat_LR}
\end{equation}
\begin{equation} \label{eq:Wp}
0 \le \delta^{all}_{us_{j,i}}(d_t\pi^{as}_{j,t}-\gamma^{flow}_{i,t}) \perp W^{P}_{ij,t} \ge 0 \quad \forall j,t \quad , \quad j \neq i
\end{equation}
\begin{equation}
0 \le \sum^t_{t'=1}\left(lf_{c,i,t'}W^D_{i,t'} - L^{R}_{c,i,t'}\right) \perp \gamma^{loss}_{i,c,t} \ge 0 \quad \forall c,t \label{eqn:g}
\end{equation}
\begin{equation}
0 \le n_{i} + W^S_{i,t} + \sum^{|I|}_{j=1}\delta^{all}_{us_{j,i}}W_{ij,t}^{P} - r^{fc}_{i,t} +
O^{min}_{i-1,t}-Q_{i,t} \perp \gamma^{flow}_{i,t} \ge 0 \quad \forall t \label{eqn:h}
\end{equation}
\begin{equation}
0 \le K_{i,t}-W^{S}_{i,t} \perp \gamma^{cap}_{i,t} \ge 0
\quad \forall t \label{eqn:i}
\end{equation}
\begin{equation}
\sum^t_{t'=1}W^D_{i,t'} - Q_{i,t} = 0 \quad , \quad \lambda^{sup}_{i,t} \quad free \quad \forall t \label{eqn:j}
\end{equation}
\begin{equation}
K_{i,t} - a^{req}_{i,t} = 0 \quad , \quad \lambda^{aug}_{i,t} \quad free
\quad \forall t \label{eqn:k}
\end{equation}
\begin{equation}
O^{min}_{i,t} = n_{i}-\sum^{|C|}_{c=1}\sum^t_{t'=1}lf_{c,i,t'}W^D_{i,t'}+\sum^{|C|}_{c=1}\sum^{t-1}_{t'=1}L^{R}_{c,i,t'}+O^{min}_{i-1,t} \quad \forall i,t \label{eqn:omin}
\end{equation}
\begin{equation}
\sum^{|I|}_{k=1}\delta^{all}_{ds_{k,i}} W_{ki,t}^{P} \leq \sum_{c \in C} L^{R}_{i,c,t}+W^{S}_{i,t} \perp \pi^{as}_{i,t} \geq 0 , \forall t \label{eqn:MC1}
\end{equation}
\begin{equation}
\theta_{i,t}(Q_{i,t}) = \alpha_{i,t} - \beta_{i,t}Q_{i,t} \quad \forall t \label{eqn:theta}
\end{equation}
\end{subequations}
The endogenous functions were substituted into Equation (\ref{eqn:objfn}) prior to solving for the KKT conditions.
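The system above is a mixed linear complementarity problem. For the purely complementary part, $0 \le Mx+q \perp x \ge 0$, a projected fixed-point iteration gives a minimal numerical sketch (toy $2\times 2$ data, not the GCM model; problems of the paper's size would normally be passed to a dedicated complementarity solver):

```python
import numpy as np

# Hypothetical 2x2 LCP: find x >= 0 with F(x) = M x + q >= 0 and x . F(x) = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([-4.0, -6.0])

x = np.zeros(2)
for _ in range(2000):
    # Projected fixed point: a contraction here since 0 < 0.2 < 2/lambda_max(M).
    x = np.maximum(0.0, x - 0.2 * (M @ x + q))

F = M @ x + q
assert np.all(x >= -1e-8) and np.all(F >= -1e-8)   # feasibility of x and F(x)
assert abs(x @ F) < 1e-8                           # complementarity
# Converges to x = (2/3, 8/3): the unconstrained stationary point is feasible here.
```

The same residual checks (non-negativity of each side and a vanishing inner product) apply verbatim to candidate solutions of the GCM, CSM, and no-market systems.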
\subsection*{LCP for the CSM Formulation}
\begin{subequations}
\begin{equation}
0 \le \sum^{|T|}_{t'=t}\lambda^{sup}_{i,t'} - \sum^{|T|}_{t'=t}\sum^{|C|}_{c=1}lf_{c,i,t}\gamma^{loss}_{c,i,t'}\perp W^{D}_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le d_t(c^{sr}_{i,t}-\sum^{|I|}_{k=1}\pi^{as}_{k,t}\delta^{all}_{ds_{k,i}})-\gamma^{flow}_{i,t}+\gamma^{cap}_{i,t} \perp W^{S}_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le d_t(c^{ops}_{i,t} - \theta_{i,t}(Q_{i,t})) - \lambda^{sup}_{i,t} + \gamma^{flow}_{i,t} \perp Q_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le d_tc^{cap}_{i,t} - \gamma^{cap}_{i,t} + \lambda^{aug}_{i,t}\perp K_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le d_t(c^{cu}_{i,c,t} - \sum^{|I|}_{k=1}\pi^{as}_{k,t}\delta^{all}_{ds_{k,i}})+\sum^{|T|}_{t'=t}\gamma^{loss}_{c,i,t'}\perp L^{R}_{i,c,t} \ge 0 \quad \forall c,t
\label{eqn:csm_stat_LR}
\end{equation}
\begin{equation}
0 \le d_t\pi^{as}_{i,t}-\gamma^{flow}_{i,t}\perp W^{P}_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le \sum^t_{t'=1}\left(lf_{c,i,t'}W^D_{i,t'} - L^{R}_{c,i,t'}\right) \perp \gamma^{loss}_{i,c,t} \ge 0 \quad \forall c,t
\end{equation}
\begin{equation}
0 \le n_{i} + W^S_{i,t} + W^{P}_{i,t} - r^{fc}_{i,t} +
O^{min}_{i-1,t}-Q_{i,t} \perp \gamma^{flow}_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le K_{i,t}-W^{S}_{i,t} \perp \gamma^{cap}_{i,t} \ge 0
\quad \forall t
\end{equation}
\begin{equation}
\sum^t_{t'=1}W^D_{i,t'} - Q_{i,t} = 0 \quad , \quad \lambda^{sup}_{i,t} \quad free \quad \forall t
\end{equation}
\begin{equation}
K_{i,t} - a^{req}_{i,t} = 0 \quad , \quad \lambda^{aug}_{i,t} \quad free
\quad \forall t
\end{equation}
\begin{equation}
O^{min}_{i,t} = n_{i}-\sum^{|C|}_{c=1}\sum^t_{t'=1}lf_{c,i,t'}W^D_{i,t'}+\sum^{|C|}_{c=1}\sum^{t-1}_{t'=1}L^{R}_{c,i,t'}+O^{min}_{i-1,t} \quad \forall t
\end{equation}
\begin{equation} \label{eq:pi}
\sum^{|I|}_{j=1}\delta^{all}_{us_{j,i}}(\sum^{|C|}_{c=1}L^{R}_{j,c,t}+W^{S}_{j,t})-W^{P}_{i,t} \ge 0 \perp \pi^{as}_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
\theta_{i,t}(Q_{i,t}) = \alpha_{i,t} - \beta_{i,t}Q_{i,t} \quad \forall t
\end{equation}
\end{subequations}
The endogenous functions were substituted into Equation (\ref{eqn:objfn}) prior to solving for the KKT conditions.
\subsection*{LCP for the No-Market Formulation}
\begin{subequations}
\begin{equation}
0 \le \sum^{|T|}_{t'=t}\lambda^{sup}_{i,t'} \perp W^{D}_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le d_tc^{sr}_{i,t}-\gamma^{flow}_{i,t}+\gamma^{cap}_{i,t} \perp W^{S}_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le d_t(c^{ops}_{i,t} - \theta_{i,t}(Q_{i,t})) - \lambda^{sup}_{i,t} + \gamma^{flow}_{i,t} \perp Q_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le d_tc^{cap}_{i,t} - \gamma^{cap}_{i,t} + \lambda^{aug}_{i,t}\perp K_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le n_{i} + W^S_{i,t}- r^{fc}_{i,t} +
O^{min}_{i-1,t}-Q_{i,t} \perp \gamma^{flow}_{i,t} \ge 0 \quad \forall t
\end{equation}
\begin{equation}
0 \le K_{i,t}-W^{S}_{i,t} \perp \gamma^{cap}_{i,t} \ge 0
\quad \forall t
\end{equation}
\begin{equation}
\sum^t_{t'=1}W^D_{i,t'} - Q_{i,t} = 0 \quad , \quad \lambda^{sup}_{i,t} \quad free \quad \forall t
\end{equation}
\begin{equation}
K_{i,t} - a^{req}_{i,t} = 0 \quad , \quad \lambda^{aug}_{i,t} \quad free
\quad \forall t
\end{equation}
\begin{equation}
O^{min}_{i,t} = n_{i}-\sum^{|C|}_{c=1}\sum^t_{t'=1}lf_{c,i,t'}W^D_{i,t'}+O^{min}_{i-1,t} \quad \forall t
\end{equation}
\begin{equation}
\theta_{i,t}(Q_{i,t}) = \alpha_{i,t} - \beta_{i,t}Q_{i,t} \quad \forall t
\end{equation}
\end{subequations}
\end{document}
At present both the GLAP [1] and BFKL [2] equations are used to describe the parton
distributions $n_i(x)$ at small values of the Bjorken variable $x$. The
next-to-leading corrections to the GLAP equation are well known. On the other hand,
the program of calculating the next-to-leading corrections to the BFKL
equation was formulated comparatively recently [3]. In this paper we
consider the corrections coming from real gluon and quark production.
In each order of perturbation theory the main contribution to the total
cross-section $\sigma _{tot}$ for the high energy collision of particles
with momenta $p_A$ and $p_B$ results from the multi-Regge kinematics for the
final state gluon momenta $k_0=p_{A^{\prime
}},k_1,...,k_n,k_{n+1}=p_{B^{\prime }}$ (see Fig.1):
\begin{eqnarray}
s \gg s_{i}=2k_{i-1}k_i \gg t_i=q_i^2\,,\,q_i=p_A-\sum\limits_{r=0}^{i-1}
k_r\,,\,\,
\prod\limits_{i=1}^{n+1} s_i=s\prod\limits_{i=1}^n {\bf k}_{i}^2 \,\,,\,\,
k_{\perp}^2=-{\bf k}^2\,,
\end{eqnarray}
where $k_{i\perp}$ are transverse components of momenta $k_i$.
In the leading logarithmic approximation (LLA) the $n$-gluon production
amplitude in this kinematics has the multi-Regge form [2]:
\begin{eqnarray}
A_{2+n}^{LLA}=A_{2+n}^{tree} \prod\limits_{i=1}^{n+1}
s_i^{\omega(t_i)}\,\,,
\end{eqnarray}
where $j=1+\omega (t)$ is the gluon Regge trajectory and
\begin{eqnarray}
\omega(t)=-\frac{g^2 N_c}{16\pi^3}\int d^2{\bf k}
\frac{{\bf q}^2}{{\bf k}^2 ({\bf q-k})^2} \,\,\,,\,\,t=-{\bf q}^2 \,\, .
\end{eqnarray}
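The transverse-momentum integral in $\omega(t)$ is infrared divergent. As a numerical sketch (our own check, assuming a gluon-mass regulator $\lambda$ inserted in both propagators and a large-$|{\bf k}|$ cutoff, with ${\bf q}^2=1$; none of these choices is taken from the text), the regulated integral $I(\lambda)=\int d^2{\bf k}\,{\bf q}^2/[({\bf k}^2+\lambda^2)(({\bf q}-{\bf k})^2+\lambda^2)]$ grows like $2\pi\ln({\bf q}^2/\lambda^2)$, so halving $\lambda$ should raise it by $2\pi\ln 4$ up to small regulator corrections:

```python
import numpy as np
from scipy.integrate import dblquad

def I_reg(lam, q=1.0, R=60.0):
    """Regulated transverse integral in polar coordinates, k = (r cos t, r sin t),
    with the momentum transfer q chosen along the x axis; R is a UV cutoff
    (the integrand falls like 1/r^4, so the tail is negligible and cancels
    in differences anyway)."""
    f = lambda th, r: q**2 * r / (
        (r**2 + lam**2) * (q**2 + r**2 - 2.0 * q * r * np.cos(th) + lam**2))
    val, _ = dblquad(f, 0.0, R, lambda r: 0.0, lambda r: 2.0 * np.pi,
                     epsabs=1e-6, epsrel=1e-6)
    return val

# Each propagator pole contributes pi*ln(q^2/lam^2); the difference isolates
# the coefficient of the infrared logarithm up to O(lam^2 ln lam) corrections.
diff = I_reg(0.05) - I_reg(0.10)
assert abs(diff - 2.0 * np.pi * np.log(4.0)) < 0.5
```

This is precisely the infrared logarithm whose cancellation against the real-gluon contributions is discussed next.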
The infrared divergencies in the Regge factors $s^{\omega (t_i)}$ cancel in
the total cross section with contributions from the real gluons. The
production amplitude for the gluon-gluon scattering in the tree
approximation has the following factorized form
\begin{eqnarray}
A_{2+n}^{tree}=2 s gT_{a'a}^{c_1}\Gamma_{A'A}\frac{1}{t_1}gT_{c_2 c_1}^{d_1}
\Gamma_{2,1}^1\frac{1}{t_2}....
gT_{c_{n+1}c_n}^{d_n}\Gamma_{n+1,n}^n
\frac{1}{t_{n+1}}gT_{b'b}^{c_{n+1}}\Gamma_{B'B}\,\,.
\end{eqnarray}
Here $a,b$ and $a^{\prime},b^{\prime},d_r $ ($r=1,2...n$) are colour indices
for initial and final gluons correspondingly. $T_{ab}^{c}=-if_{abc}$ are
generators of the gauge group $SU(N_c)$ in the adjoint representation, $g$
is the Yang-Mills coupling constant,
\begin{equation}
\Gamma _{A^{\prime }A}=\delta _{\lambda _A,\lambda _A^{\prime
}}\,\,,\,\,\,\Gamma _{r+1,r}^r=C_\mu (q_{r+1},q_r)\overline{e}_\mu (k_r)
\end{equation}
are the reggeon-particle-particle (RPP) and reggeon-reggeon-particle (RRP)
vertices correspondingly; $e(k_r)$ is the polarization vector of the
produced gluon. The quantity $\lambda _r=\pm 1$ is the helicity of the gluon
$r$ in the c.m.system. In LLA the $s$-channel helicities of colliding
particles are conserved. The effective nonlocal RRP vertex $C(q_2,q_1)$ is
given below [2]
\begin{eqnarray}
C(q_2,q_1) = -q_1-q_2 +
p_A(\frac{q_1^2}{k_1p_A}+2\frac{k_1p_B}{p_Ap_B}) -
p_B(\frac{q_2^2}{k_1p_B}+2\frac{k_1p_A}{p_Ap_B}).
\end{eqnarray}
It has the important property corresponding to the current conservation
\begin{eqnarray}
(k_1)_\mu C_\mu (q_2,q_1)=0,
\end{eqnarray}
which allows us to choose an arbitrary gauge for each of the
produced gluons. For example, in the left ($l$) light cone gauges, where $%
p_Ae^l(k)=0$ and $ke^l(k)=0$, one can use the following parametrisation of
the polarization vector $e^l(k)$
\begin{eqnarray}
e^l=e_{\perp}^l-\frac{k_{\perp}e_{\perp}^l}{kp_A} p_A
\end{eqnarray}
in terms of the two dimensional vector $e_{\perp }^l$. In this gauge the RRP
vertex takes an especially simple form, if we introduce the complex
components $e=\overline{e}_x+i\overline{e}_y\,,e^{*}=\overline{e}_x-i
\overline{e}_y$ and $k=k_x+ik_y\,,k^{*}=k_x-ik_y$ for transverse vectors $
\overline{e}_{\perp }^l,k_{\perp }$ [4]
\begin{eqnarray}
\Gamma_{2,1}^1=C e^* + C^* e ,\,\,\, C=\frac{q_1^* q_2}{k_1^*}.
\end{eqnarray}
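The conservation property (7) is in fact an algebraic identity: contracting (6) with $k_1$ gives $-k_1(q_1+q_2)+q_1^2-q_2^2$, which vanishes for any momenta with $q_1-q_2=k_1$, without using the mass-shell condition. A quick numerical check (generic momenta, our own illustrative numbers, metric ${\rm diag}(1,-1,-1,-1)$):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])           # Minkowski metric
dot = lambda a, b: a @ g @ b

E = 1.0
pA = np.array([E, 0.0, 0.0,  E])               # light-like beam momenta
pB = np.array([E, 0.0, 0.0, -E])

q1 = np.array([0.3, 0.7, -0.2, 0.1])           # generic momentum transfer
k1 = np.array([0.9, 0.4,  0.5, 0.6])           # produced-gluon momentum
q2 = q1 - k1                                   # momentum conservation

# Effective RRP vertex C(q2, q1), Eq. (6).
C = (-q1 - q2
     + pA * (dot(q1, q1) / dot(k1, pA) + 2.0 * dot(k1, pB) / dot(pA, pB))
     - pB * (dot(q2, q2) / dot(k1, pB) + 2.0 * dot(k1, pA) / dot(pA, pB)))

assert abs(dot(k1, C)) < 1e-12                 # current conservation, Eq. (7)
```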
The integral kernel of the BFKL equation in LLA is expressed in terms of the
product of the effective vertices (real contribution) and the gluon Regge
trajectory (virtual contribution). The above complex representation of the
effective vertex was used in ref. [4] to construct an effective scalar field
theory for the multi-Regge processes. It was derived recently from QCD by
integrating over the fields which correspond to the highly virtual particles
produced in the multi-Regge kinematics in the direct channels $s_i$ [5].
The total cross-section calculated in LLA using the above expressions for
production amplitudes grows very rapidly as $s^\omega $ ($\omega
=(g^2N_c/\pi ^2)\ln 2$) and violates the Froissart bound $\sigma _{tot}<c\ln
{}^2s$ [2]. One of the possible ways of the unitarization of the scattering
amplitudes corresponds to the solution of the BKP equations [6] for
multi-gluon compound states. These equations have a number of remarkable
properties, including conformal invariance [7], holomorphic separability
[8], and existence of nontrivial integrals of motion [9]. The Hamiltonian
for the corresponding Schr\"odinger equations coincides with the Hamiltonian
for a completely integrable Heisenberg model with the spins belonging to an
infinite dimensional representation of the noncompact M\"obius group [10].
All these results are based on calculations of effective Reggeon vertices
and the gluon Regge trajectory in the first nontrivial orders of
perturbation theory. Up to now we do not know the region of applicability of
LLA including the intervals of energies and momentum transfers fixing the
scale for the QCD coupling constant. The simple form of the BFKL equation
summing contributions of the Feynman diagrams describing the Pomeron as a
compound state of two reggeized gluons is valid also in the next-to-leading
approximation. To go beyond LLA the Born amplitudes for a quasi-multi-Regge
kinematics of produced gluons were calculated [3] and one-loop corrections
to the reggeon-reggeon-particle vertex were found [11,12]. Also two loop
contributions to the gluon Regge trajectory were calculated [13]. For the
total correction to the BFKL equation only the contribution related to the
production of a pair of gluons or quarks with a fixed invariant mass is not
known. This paper is devoted to the solution of this problem.
It is enough to consider the simplest process, in which in the final state
we have two gluons with momenta $p_{A^{\prime }}$ and $p_{B^{\prime }}$
almost coinciding with momenta $p_A$ and $p_B$ of initial gluons and a group
of particles in the central rapidity region (see Fig.2). The momenta $q_1$
and $q_2$ of virtual gluons in crossing channels $t_1$ and $t_2$ in this
kinematics can be decomposed as follows:%
\begin{eqnarray}
q_1=q_{1\perp}+\beta \,p_A\,,\,\,q_2=q_{2\perp}-\alpha \,p_B\,
\end{eqnarray}
where $\beta $ and $\alpha $ are the Sudakov parameters of the total
momentum $k=\sum\limits_{i=1}^nk_i$ for the produced particles:
\begin{eqnarray}
k=k_\perp + \beta \,p_A + \alpha \,p_B\,\, ,\,\, \kappa =k^2=s\alpha \beta +
(q_1-q_2)_\perp^2
\end{eqnarray}
and $\sqrt{\kappa }$ is their invariant mass, which is assumed to be fixed at
high energies: $\kappa \ll s$.
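The relation $\kappa = s\alpha\beta + (q_1-q_2)_{\perp}^2$ follows directly from the decomposition (11), using $p_A^2=p_B^2=0$, $2p_Ap_B=s$ and the orthogonality of $k_\perp$ to both beam momenta. A numerical sanity check (arbitrary illustrative values):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ g @ b

E = 5.0
pA = np.array([E, 0.0, 0.0,  E])
pB = np.array([E, 0.0, 0.0, -E])
s = 2.0 * dot(pA, pB)                     # s = (pA + pB)^2 for light-like beams

alpha, beta = 0.1, 0.1                    # Sudakov parameters of k
k_perp = np.array([0.0, 0.4, -0.3, 0.0])  # purely transverse: k_perp^2 = -|k|^2

k = k_perp + beta * pA + alpha * pB
kappa = dot(k, k)                         # kappa = 0.75 here
assert abs(kappa - (s * alpha * beta + dot(k_perp, k_perp))) < 1e-12
```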
The production amplitude in this kinematics has a factorised form and it is
expressed in terms of the scattering amplitude for two virtual gluons which
are reggeized after taking into account the radiative corrections. For its
gauge invariance in a general case of the multi gluon production one should
introduce an infinite number of the induced vertices for the interaction of
the reggeised gluon with usual gluons. The Feynman rules for these vertices
can be derived from the effective action [14] local in the particle
rapidities
\begin{eqnarray}
S_{eff}=-\int d^4 x \,\,tr \,\{\frac{1}{2}G_{\mu \nu}^2(v)+[A_-(v_-)-
A_-]\,\partial^2_\sigma A_++[A_+(v_+)-A_+]\,
\partial^2_\sigma A_-\}\,.
\end{eqnarray}
Here the fields $A_{\pm }$ and $v_\alpha $ are the anti-hermitian $SU(N_c)$%
-matrices describing the reggeised and usual gluons correspondingly; $v_{\pm
}$ are the light-cone components of $v_\alpha $. The quantities $A_{\pm
}(v_{\pm })$ are the composite fields expressed through the gluon fields $%
v_{\pm }$ entering in the Wilson $P$-ordered exponents:
\begin{equation}
A_{\pm }(v_{\pm })=-\frac 1g\partial _{\pm }P\,\exp (-\frac g2\int_{-\infty
}^{x^{\pm }}dx^{\prime \pm }v_{\pm })
\end{equation}
The reggeon fields obey the additional constraint $\partial _{\mp }A_{\pm
}=0$, which is important for the gauge invariance of the effective action.
We should add to the usual Yang-Mills action $\frac 12G_{\mu \nu }^2(v)$
also the term which describes the quark interactions. Similar to the pomeron
case [15] one can construct the Reggeon calculus in QCD for the Reggeon
fields $A_{\pm }$\thinspace starting from the above effective action and
integrating the functional integral over the gluon fields $v$ [14]. Because
$%
A_{\pm }(v_{\pm })$ has a linear term in $v_{\pm }$ the classical extremum
of $S_{eff}$ is situated at non-vanishing values of $v$ satisfying the
gauge-invariant Euler-Lagrange equations.
for the quantum fluctuations near this classical solution one could find
one-loop corrections to the BFKL kernel in an independent way in comparison
with the dispersion method of refs.[3],[11]. The possible advantage of this
approach could be a better infrared convergency of intermediate expressions.
In the next section we reproduce the gluon and quark production amplitudes
in the quasi-multi-Regge kinematics. In the third section the properties of
the amplitudes with the definite helicities of the final particles are
discussed. In the fourth section the real corrections to the BFKL equation
are constructed in terms of the integrals from bilinear combinations of
helicity amplitudes. In the fifth section the infrared divergencies in the
integrals over momenta of produced particles are extracted and regularized.
In Conclusion the obtained results are discussed.
\section{Gluon and quark pair production in the quasi-multi-Regge kinematics}
It is known (see [3],[14]) that the gluon production amplitude $%
A_{2\rightarrow 4}$ in the quasi-multi-Regge kinematics for final particles
is expressed through the tensor $A^{\alpha _1\alpha _2}$ describing the
transition of two reggeized gluons with momenta $q_1,-q_{2}$ into two real
gluons with momenta $k_1,k_2$:
\begin{equation}
A_{2\rightarrow 4}=-g\,p_A^{+}\,T_{a^{\prime }a}^{c_1}\,
\delta_{\lambda_{A^{\prime}}\lambda_A} \,\frac 1{t_1}\,\psi
_{d_1d_2c_2c_1}^{\alpha _1\alpha _2}\,\frac 1{t_2}\,g\,p_B^{-}\,T_{b^{\prime
}b}^{c_2}\, \delta_{\lambda_{B^{\prime }}\lambda_B}\,,
\end{equation}
\begin{equation}
\psi _{d_1d_2c_2c_1}^{\alpha _1\alpha _2}=2\,g^2\left[
T_{d_1d}^{c_1}\,T_{d_2d}^{c_2}\,A^{\alpha _1\alpha
_2}(k_1,k_2)+T_{d_2d}^{c_1}\,T_{d_1d}^{c_2}\,A^{\alpha _2\alpha
_1}(k_2,k_1)\right] .
\end{equation}
Here $p_A^{+}=n_\alpha ^{+}p_A^\alpha \,=\sqrt{s}$ and $p_B^{-}=n_\alpha
^{-}p_B^\alpha =\sqrt{s}$ are the light-cone components of the colliding
particle momenta. The invariants $t_i$ are expressed through momentum
transfers: $t_i=q_i^2=q_{i\perp }^2$. $a,b$ and $a^{\prime },b^{\prime }$
are the colour indices of initial and scattered gluons correspondingly. The
helicity $\lambda _i$ of each colliding particle is conserved ($\lambda
_i=\lambda _{i^{\prime }}$). The matrices $T$ are the colour group
generators in the adjoint representation with the commutation relations: $%
\left[ T^k,T^l\right] =if^{klr}T^r$. The produced gluons with the Sudakov
momenta $k_i^{\pm },k_i^{\perp }$ ($i=1,2$) have the colour and Lorentz
indices $d_i$ and $\alpha _i$ correspondingly. The tensor $A^{\alpha
_1\alpha _2}(k_1,k_2)$ can be written as the sum of contributions of several
Feynman diagrams (see Fig.3) [14]:%
$$
A^{\alpha _1\alpha _2}=-\frac{\Gamma ^{\alpha _1\beta
-}(k_1,k_1-q_1)\,\Gamma ^{\alpha _2\beta +}(k_2,k_2+q_2)}{2\,\,(q_1-k_1)^2}-
\frac{\gamma ^{\alpha _2\alpha _1\beta }(k_2,-k_1)\,\Gamma ^{+-\beta
}(q_2,q_1)}{2\,\,(k_1+k_2)^2}
$$
\begin{equation}
+n^{+\alpha _1}n^{-\alpha _2}-g^{\alpha _1\alpha _2}-\frac 12n^{+\alpha
_2}n^{-\alpha _1}+t_1\frac{n^{-\alpha _1}\,n^{-\alpha _2}}{%
k_1^{-}(k_1^{-}+k_2^{-})}+t_2\frac{n^{+\alpha _1}\,n^{+\alpha _2}}{%
k_2^{+}(k_1^{+}+k_2^{+})}\,,
\end{equation}
where we have $g^{00}=-g^{ii}=1$ for non-zero components of $g^{\alpha
_1\alpha _2}$and the light-cone vectors $n^{\pm }$ are $n^{+}=p_BE^{-1},%
\,n^{-}=p_AE^{-1},\,4E^2=s$\thinspace . The Yang-Mills vertex
\begin{equation}
\gamma ^{\alpha _2\alpha _1\beta }(k_2,-k_1)=(k_2-k_1)^\beta g^{\alpha
_2\alpha _1}+(2k_1+k_2)^{\alpha _2}g^{\alpha _1\beta }-(2k_2+k_1)^{\alpha
_1}g^{\alpha _2\beta }
\end{equation}
enters also in the effective vertices $\left[ 2\right] $:
$$
\Gamma ^{\alpha _1\beta -}(k_1,k_1-q_1)=\gamma ^{\alpha _1\beta
-}(k_1,k_1-q_1)-t_1n^{-\alpha _1}\frac 1{k_1^{-}}n^{-\beta},
$$
$$
\Gamma^{\alpha_2 \beta +}(k_2,k_2+q_2)=\gamma^{\alpha_2 \beta
+}(k_2,k_2+q_2)- t_2n^{+\alpha}\frac{1}{k_2^+}n^{+\beta},
$$
\begin{equation}
\Gamma ^{+-\beta }(q_2,q_1)=\gamma ^{+-\beta }(q_2,q_1)-2t_1\frac{n^{-\beta
} }{q_1^{-}-q_2^{-}}+2t_2\frac{n^{+\beta }}{q_1^{+}-q_2^{+}}\,.
\end{equation}
Let us consider the gauge properties of $A^{\alpha_1\alpha_2}$. Taking into
account that particles $1$ and $2$ are on the mass shell and using the
Ward-Slavnov identity for the production vertex:
\begin{equation}
(k_2+k_1)^\beta \Gamma ^{+-\beta }(q_2,q_1)=0,\,\,\,q_1-q_2=k_1+k_2 ,
\end{equation}
we can perform in $k_1^{\alpha_1}A^{\alpha_1 \alpha_2}$ the substitution:
\begin{equation}
k_1^{\alpha _1}\gamma ^{\alpha _2\alpha _1\beta }(k_2,-k_1)\rightarrow
k_2^{\alpha _2}k_1^\beta -(k_1+k_2)^2g^{\alpha _2\beta }.
\end{equation}
With the use of the expression for the light cone projection of the
Yang-Mills vertex
\begin{equation}
\gamma ^{\alpha _1\beta -}(k_1,k_1-q_1)=(2k_1-q_1)^{-}g^{\alpha _1\beta
}+(2q_1-k_1)^{\alpha _1}n^{-\beta }-(q_1+k_1)^\beta n^{-\alpha _1},
\end{equation}
one can derive the Ward-Slavnov identity for the scattering vertex:
\begin{equation}
k^{\alpha _1}\Gamma ^{\alpha _1\beta -}(k_1,k_1-q_1)=-(q_1-k_1)^2n^{-\beta
}+(k_1-q_1)^\beta k_1^{-}.
\end{equation}
From the above relations we obtain the following gauge property of $A^{\alpha_1
\alpha_2}$:
$$
k_1^{\alpha _1}A^{\alpha _1\alpha _2}=\frac 12(\Gamma ^{\alpha
_2-+}(k_2,k_2+q_2)+\Gamma ^{+-\alpha _2}(q_2,q_1))
$$
$$
+\frac 12\left( k_1^{-}\frac{(k_2+q_2)^\beta \Gamma ^{\alpha _2\beta
+}(k_2,k_2+q_2)}{(k_2+q_2)^2}-k_2^{\alpha _2}\frac{k_1^\beta \Gamma
^{+-\beta }(q_2,q_1)}{(k_1+k_2)^2}\right)
$$
$$
+k_1^{+}n^{-\alpha _2}-\frac 12k_1^{-}n^{+\alpha _2}-k_1^{\alpha _2}+t_1
\frac{n^{-\alpha _2}}{k_1^{-}+k_2^{-}}+t_2\frac{k_1^{+}}{k_2^{+}}\frac{%
n^{+\alpha _2}}{k_1^{+}+k_2^{+}}=
$$
\begin{equation}
=\frac 12k_2^{\alpha _2}\left( \frac{k_1^{-}k_2^{+}}{(k_2+q_2)^2}-\frac{%
k_1^\beta \Gamma ^{+-\beta }(q_2,q_1)}{(k_1+k_2)^2}\right) .
\end{equation}
This means that the production amplitude $A_{2\rightarrow 4}$ multiplied by
the gluon polarization vectors $e(k_i)$ is gauge invariant and the
contribution of the Faddeev-Popov ghosts to the final state is fixed.
However, instead of working with a covariant gauge we shall use
light-cone gauges, different for the two produced gluons $\left[ 14\right] $:
\begin{equation}
e^\alpha (k_1)k_1^\alpha =e^{-}(k_1)=0\,,\,\,\,e^\alpha (k_2)k_2^\alpha
=e^{+}(k_2)=0.
\end{equation}
The polarization vectors can be parametrized as follows
\begin{equation}
e(k_1)=e_{1\perp }-\frac{(k_1e_{1\perp })}{k_1^{-}}n^{-},\,\,e(k_2)=e_{2%
\perp }-\frac{(k_2e_{2\perp })}{k_2^{+}}n^{+},
\end{equation}
corresponding to the polarization matrices
\begin{equation}
\Lambda ^{\alpha _1\alpha _1^{\prime }}(k_1)=-g^{\alpha _1\alpha _1^{\prime
}}+\frac{k_1^{\alpha _1}n^{-\alpha _1^{\prime }}+k_1^{\alpha _1^{\prime
}}n^{-\alpha _1}}{k_1^{-}}\,,
\end{equation}
\begin{equation}
\Lambda ^{\alpha _2\alpha _2^{\prime }}(k_2)=-g^{\alpha _2\alpha _2^{\prime
}}+\frac{k_2^{\alpha _2}n^{+\alpha _2^{\prime }}+k_2^{\alpha _2^{\prime
}}n^{+\alpha _2}}{k_2^{+}}\,.
\end{equation}
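As a consistency check: the polarization matrices above are transverse both to the gluon momentum and to the corresponding gauge vector, e.g. $k_{1\alpha_1}\Lambda^{\alpha_1\alpha_1'}(k_1)=0$ on the mass shell and $n^-_{\alpha_1}\Lambda^{\alpha_1\alpha_1'}(k_1)=0$ identically. A numerical sketch with generic on-shell momenta (our own illustrative numbers):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])          # metric (upper-index components)
n_minus = np.array([1.0, 0.0, 0.0, 1.0])      # n^- = p_A / E

kx, ky, kz = 0.3, 0.4, 0.5
k = np.array([np.sqrt(kx**2 + ky**2 + kz**2), kx, ky, kz])  # on shell: k^2 = 0

k_minus = (g @ n_minus) @ k                   # k^- = n^-_alpha k^alpha
Lam = -g + (np.outer(k, n_minus) + np.outer(n_minus, k)) / k_minus

k_low = g @ k                                 # lower the index for contraction
assert np.allclose(k_low @ Lam, 0.0, atol=1e-12)          # k_a Lam^{a a'} = 0
assert np.allclose((g @ n_minus) @ Lam, 0.0, atol=1e-12)  # n^-_a Lam^{a a'} = 0
```

The first contraction leaves $k^2 n^{-\alpha'}/k^-$, which vanishes only on the mass shell; the second vanishes identically since $(n^-)^2=0$.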
In these gauges for $e(k_i)$ the matrix element of the tensor $A^{\alpha
_1\alpha _2}$ can be expressed in terms of a new tensor $a^{\alpha _1\alpha
_2}$ with pure transverse components using the definition:
\begin{equation}
e_{\alpha _1}^{*}(k_1)\,e_{\alpha _2}^{*}(k_2)\,A^{\alpha _1\alpha _2}\equiv
e_{\alpha _1}^{\perp *}(k_1)\, e_{\alpha _2}^{\perp *}(k_2)\,a^{\alpha
_1\alpha _2}.
\end{equation}
To find this tensor we should perform the following substitutions in the
above expressions:
\begin{equation}
\Gamma ^{\alpha _1\beta -}(k_1,k_1-q_1)\rightarrow 2k_1^{-}g_{\perp
}^{\alpha _1\beta }+2(q_1^{\perp }-k_1^{\perp })^{\alpha _1}n^{-\beta },
\end{equation}
\begin{equation}
\Gamma ^{\alpha _2\beta +}(k_2,k_2+q_2)\rightarrow 2k_2^{+}g_{\perp
}^{\alpha _2\beta }-2(q_2^{\perp }+k_2^{\perp })^{\alpha _2}n^{+\beta };
\end{equation}
\begin{equation}
n^{+\alpha _1}n^{-\alpha _2}-g^{\alpha _1\alpha _2}-\frac 12n^{-\alpha
_1}n^{+\alpha _2}\rightarrow 2\frac{k_1^{\perp \alpha _1}k_2^{\perp \alpha
_2}}{k_1^{-}k_2^{+}}-g_{\perp }^{\alpha _1\alpha _2}
\end{equation}
and
$$
\gamma ^{\alpha _2\alpha _1\beta }(k_2,-k_1)\rightarrow \widetilde{\gamma }%
^{\alpha _2\alpha _1\beta }=(k_2-k_1)^\beta \left( g_{\perp }^{\alpha
_2\alpha _1}+2\frac{k_1^{\perp \alpha _1}k_2^{\perp \alpha _2}}{%
k_1^{-}k_2^{+}}\right)
$$
\begin{equation}
+2\left( k_1^{\perp }-\frac{k_1^{+}}{k_2^{+}}k_2^{\perp }\right) ^{\alpha
_2}\left( g_{\perp }^{\alpha _1\beta }-\frac{n^{-\beta }}{k_1^{-}}k_1^{\perp
\alpha _1}\right) -2\left( k_2^{\perp }-\frac{k_2^{-}}{k_1^{-}}k_1^{\perp
}\right) ^{\alpha _1}\left( g_{\perp }^{\alpha _2\beta }-\frac{n^{+\beta }}{%
k_2^{+}}k_2^{\perp \alpha _2}\right) .
\end{equation}
Thus, one can rewrite $A^{\alpha _2\alpha _1}$ as follows
\begin{equation}
A^{\alpha _1\alpha _2}\rightarrow a^{\alpha _1\alpha _2}=-\frac 12\frac{
\widetilde{\gamma }^{\alpha _2\alpha _1\beta }\Gamma ^{+-\beta }(q_2,q_1)}{%
(k_1+k_2)^2}+b^{\alpha _2\alpha _1},
\end{equation}
where
\begin{equation}
b^{\alpha _1\alpha _2}=-2\,\frac{k_1^{-}k_2^{+}g_{\perp }^{\alpha _2\alpha
_1}-2Q_1^{\perp \alpha _1}Q_1^{\perp \alpha _2}}t+2\frac{k_1^{\perp \alpha
_1}k_2^{\perp \alpha _2}}{k_1^{-}k_2^{+}}-g_{\perp }^{\alpha _1\alpha _2}
\end{equation}
and
\begin{equation}
Q_1=q_1-k_1=q_2+k_2\,,\,\,t=Q_1^2.
\end{equation}
In an explicit form the tensor $a^{\alpha _1\alpha _2}$ after the suitable
renormalization is given below (see $\left[ 14\right] $):
$$
c^{\alpha _1\alpha _2}\equiv \frac 14a^{\alpha _1\alpha _2}=\frac{Q_{1\perp
}^{\alpha _1}Q_{1\perp }^{\alpha _2}}t-\frac{Q_{1\perp }^{\alpha _1}}\kappa
(k_{1\perp }^{\alpha _2}-\frac{k_1^{+}}{k_2^{+}}k_{2\perp }^{\alpha _2})+
\frac{Q_{1\perp }^{\alpha _2}}\kappa (k_{2\perp }^{\alpha _1}-\frac{k_2^{-
}}{%
k_1^{-}}k_{1\perp }^{\alpha _1})
$$
$$
+\frac{k_{1\perp }^{\alpha _1}k_{1\perp }^{\alpha _2}}\kappa \frac{q_{2\perp
}^2}{k_1^{-}(k_1^{+}+k_2^{+})}+\frac{k_{2\perp }^{\alpha _1}k_{2\perp
}^{\alpha _2}}\kappa \frac{q_{1\perp }^2}{k_2^{+}(k_1^{-}+k_2^{-})}-\frac{%
k_{1\perp }^{\alpha _1}k_{2\perp }^{\alpha _2}}\kappa \left( 1+\frac
t{k_1^{-}k_2^{+}}\right) +\frac{k_{2\perp }^{\alpha _1}k_{1\perp }^{\alpha
_2}}\kappa
$$
\begin{equation}
-\frac{g_{\perp }^{\alpha _1\alpha _2}}2\left( 1+\frac t\kappa +\frac{%
k_2^{+}k_1^{-}}t+\frac{k_2^{+}k_1^{-}-k_2^{-}k_1^{+}}\kappa -\frac{q_{1\perp
}^2}\kappa \frac{k_2^{-}}{k_1^{-}+k_2^{-}}-\frac{q_{2\perp }^2}\kappa \frac{%
k_1^{+}}{k_1^{+}+k_2^{+}}\right) ,
\end{equation}
where $\kappa =(k_1+k_2)^2.$
The amplitude for producing a pair of a massless quark and antiquark with
momenta $k_1 $ and $k_2$, respectively, can also be written as a sum of two
terms which are matrices in the spin and colour spaces:
\begin{equation}
\psi _{c_2c_1}=-g^2\left(
t^{c_1}t^{c_2}b(k_1,k_2)-t^{c_2}t^{c_1}b^T(k_2,k_1)\right) ,
\end{equation}
where $t^c$ are the colour group generators in the fundamental
representation and the expressions for $b(k_1,k_2)$ and $b^T(k_2,k_1)$ are
constructed according to the Feynman rules including the effective vertices
(see the diagrams of Fig.~4 describing $b(k_1,k_2)$) [14]:
$$
b(k_1,k_2)=\gamma ^{-}\frac{\widehat{q_1^{\perp }}-\widehat{k_1^{\perp }}}{%
(q_1-k_1)^2}\gamma ^{+}-\frac{\gamma ^\beta \Gamma ^{+-\beta }(q_2,q_1)}{%
(k_1+k_2)^2}\,,
$$
\begin{equation}
b^T(k_2,k_1)=\gamma ^{+}\frac{\widehat{q_1^{\perp }}-\widehat{k_2^{\perp
}}}{%
(q_1-k_2)^2}\gamma ^{-}-\frac{\gamma ^\beta \Gamma ^{+-\beta }(q_2,q_1)}{%
(k_1+k_2)^2}\,.
\end{equation}
We have the following relations valid for $q_1^{\perp }\rightarrow 0$:
\begin{equation}
\Gamma ^{+-\beta }(q_2,q_1)\rightarrow 2(k_1+k_2)^\beta +2(k_1+k_2)^2\frac{%
n^{+\beta }}{k_1^{+}+k_2^{+}}\,,\,\,(q_1-k_i)^2\rightarrow
-k_i^{-}(k_1^{+}+k_2^{+})\,.
\end{equation}
Therefore, using the Dirac equations for the spinors:
\begin{equation}
\overline{u}(k_1)\widehat{k_1}=\widehat{k_2}v(k_2)=0
\end{equation}
one obtains that the quark production amplitude vanishes in the limit of
small $q_1^{\perp }$ or $q_2^{\perp }$:
\begin{equation}
\overline{u}(k_1)\psi _{c_2c_1}v(k_2)\,\rightarrow 0\,
\end{equation}
analogously to the gluon case (see below). It is convenient to present the
wave functions $u(k_1)$ and $v(k_2)$ of the produced quark and antiquark,
respectively, as linear combinations of four definite spinors $\left[
5\right] $
\begin{equation}
u(k_1)=\sum_ic_i(k_1)u_i\,,v(k_2)=\sum_id_i(k_2)u_i\,,
\end{equation}
where
\begin{equation}
u_{--}=\frac 12\left(
\begin{array}{c}
0 \\
1 \\
0 \\
-1
\end{array}
\right) ,\,u_{+-}=\frac 12\left(
\begin{array}{c}
0 \\
1 \\
0 \\
1
\end{array}
\right) ,\,u_{-+}=-\frac 12\left(
\begin{array}{c}
1 \\
0 \\
1 \\
0
\end{array}
\right) ,\,u_{++}=\frac 12\left(
\begin{array}{c}
1 \\
0 \\
-1 \\
0
\end{array}
\right) \,.
\end{equation}
These spinors have the properties:
$$
\gamma _{+}u_{-k}=0,\,\gamma _{-}u_{+k}=0,\,\gamma \,u_{k+}=0,\,\gamma
^{*}u_{k-}=0\,,
$$
$$
\gamma \,u_{--}=-\gamma _{+}u_{++}=2u_{-+}\,,\,\gamma ^{*}u_{++}=-\gamma
_{-}u_{--}=-2u_{+-}\,,
$$
\begin{equation}
\gamma \,u_{+-}=-\gamma _{-}u_{-+}=2u_{++}\,,\,\gamma ^{*}u_{-+}=-\gamma
_{+}u_{+-}=-2u_{--}\,,
\end{equation}
where
\begin{equation}
\gamma _{\pm }=\gamma _0\pm \gamma _3\,,\,\gamma =\gamma ^1+i\gamma
^2\,,\,\gamma ^{*}=\gamma ^1-i\gamma ^2\,,\gamma _0=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right) \,,\,\gamma ^i=\left(
\begin{array}{cc}
0 & \sigma _i \\
-\sigma _i & 0
\end{array}
\right)
\end{equation}
and $\sigma _i$ are the Pauli matrices. Note that $\gamma^{\pm}=\gamma^0
\pm \gamma^3$ and $\gamma^0 =\gamma_0,\, \gamma^i=-\gamma_i$. The quark and
antiquark eigenstates of the matrix $\gamma _5$:
\begin{equation}
\gamma _5=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) =-\frac 14(\gamma _{+}\gamma _{-}-\gamma _{-}\gamma _{+})\,\frac
14(\gamma \gamma ^{*}-\gamma^*\gamma )
\end{equation}
are described by the spinors $u^{(\pm )}(k_1)$ and $v^{(\pm )}(k_2)$,
respectively, with positive and negative helicities $\lambda =\pm \frac
12$. They can be written as follows:
$$
u^{(+)}(k_1)=c_{-+}(k_1)\,u_{-+}+c_{+-}(k_1)\,u_{+-}\,,\,%
\,v^{(-)}(k_2)=d_{-+}(k_2)\,u_{-+}+d_{+-}(k_2)\,u_{+-}\,;
$$
\begin{equation}
u^{(-)}(k_1)=c_{++}(k_1)\,u_{++}+c_{--}(k_1)\,u_{--}\,,\,%
\,v^{(+)}(k_2)=d_{++}(k_1)\,u_{++}+d_{--}(k_2)\,u_{--}\,.
\end{equation}
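The spinor algebra quoted above is easy to check numerically. The sketch below is an illustration of mine, not part of the original derivation: it builds the $\gamma $ matrices and the basis spinors $u_{ij}$ exactly as defined in the text (with $\gamma _3=-\gamma ^3$, so $\gamma _{\pm }=\gamma ^0\mp \gamma ^3$) and verifies the stated properties; numpy is assumed to be available.

```python
import numpy as np

# Illustrative check (not from the paper) of the spinor identities above.
# Gamma matrices and spinors are taken verbatim from the text's definitions;
# gamma_3 = -gamma^3, hence gamma_+ = gamma^0 - gamma^3, gamma_- = gamma^0 + gamma^3.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])                           # gamma_0 = gamma^0
g3 = np.block([[Z2, s3], [-s3, Z2]])                           # gamma^3
gam_p = g0 - g3                                                # gamma_+
gam_m = g0 + g3                                                # gamma_-
gam = np.block([[Z2, s1 + 1j * s2], [-(s1 + 1j * s2), Z2]])    # gamma = gamma^1 + i gamma^2
gam_s = np.block([[Z2, s1 - 1j * s2], [-(s1 - 1j * s2), Z2]])  # gamma^* = gamma^1 - i gamma^2

u_mm = 0.5 * np.array([0, 1, 0, -1], dtype=complex)
u_pm = 0.5 * np.array([0, 1, 0, 1], dtype=complex)
u_mp = -0.5 * np.array([1, 0, 1, 0], dtype=complex)
u_pp = 0.5 * np.array([1, 0, -1, 0], dtype=complex)

ok = np.allclose
assert ok(gam_p @ u_mm, 0) and ok(gam_p @ u_mp, 0)             # gamma_+ u_{-k} = 0
assert ok(gam_m @ u_pm, 0) and ok(gam_m @ u_pp, 0)             # gamma_- u_{+k} = 0
assert ok(gam @ u_mp, 0) and ok(gam @ u_pp, 0)                 # gamma   u_{k+} = 0
assert ok(gam_s @ u_mm, 0) and ok(gam_s @ u_pm, 0)             # gamma^* u_{k-} = 0
assert ok(gam @ u_mm, 2 * u_mp) and ok(gam_s @ u_pp, -2 * u_pm)
assert ok(gam @ u_pm, 2 * u_pp) and ok(gam_s @ u_mp, -2 * u_mm)
print("spinor identities verified")
```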
Due to the Dirac equation for $u(k_1)$ and $v(k_2)$ the coefficients $c_i$
and $d_i$ are not independent:%
$$
c_{+-}(k_1)=-\frac{k_1^{-}}{k_1^{*}}c_{-+}=-\frac{k_1}{k_1^{+}}%
c_{-+}\,,\,d_{+-}(k_2)=-\frac{k_2^{-}}{k_2^{*}}d_{-+}=-\frac{k_2}{k_2^{+}}%
d_{-+}\,,
$$
\begin{equation}
c_{--}(k_1)=-\frac{k_1}{k_1^{-}}c_{++}=-\frac{k_1^{+}}{k_1^{*}}%
c_{++}\,,\,d_{--}(k_2)=-\frac{k_2}{k_2^{-}}d_{++}=-\frac{k_2^{+}}{k_2^{*}}%
d_{++}\,,
\end{equation}
where we use the complex components $k=k^1+ik^2,\,k^*=k^1-ik^2$ of the
two-dimensional vector $k^\alpha$.
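The two alternative forms quoted in each of these relations are consistent: as a check (not in the original text, but immediate from the definitions), the mass-shell condition for the massless quarks gives

```latex
k^{+}k^{-}=\overrightarrow{k}^{\,2}=k\,k^{*}\quad \Longrightarrow \quad
\frac{k^{-}}{k^{*}}=\frac{k}{k^{+}}\,,\qquad
\frac{k}{k^{-}}=\frac{k^{+}}{k^{*}}\,,
```

so the two expressions for each of $c_{+-}$, $c_{--}$, $d_{+-}$ and $d_{--}$ coincide.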
In accordance with the normalization condition for the wave functions
\begin{equation}
\overline{u}(k_1)\gamma ^\alpha u(k_1)=2k_1^\alpha \,,\,\overline{v}%
(k_2)\gamma ^\alpha v(k_2)=2k_2^\alpha
\end{equation}
we have the other constraints on these coefficients:
\begin{equation}
\mid c_{\pm i}(k_1)\mid ^2=2k_1^{\mp }\,,\,\mid d_{\pm i}(k_2)\mid
^2=2k_2^{\mp }\,.
\end{equation}
Using the above relations one can calculate the matrix elements of the
production amplitude:
$$
\overline{u}^{(+)}(k_1)b(k_1,k_2)v^{(-)}(k_2)=\frac
12c_{+-}^{*}(k_1)d_{-+}(k_2)\,b^{(+-)}(k_1,k_2)\,,
$$
$$
\overline{u}^{(+)}(k_1)b^T(k_2,k_1)v^{(-)}(k_2)=\frac{1}{2}c_{-+}^*(k_1)
d_{+-}(k_2)\,b^{(-+)}(k_2,k_1)\,,
$$
$$
\overline{u}^{(-)}(k_1)b(k_1,k_2)v^{(+)}(k_2)=\frac
12c_{++}^{*}(k_1)d_{--}(k_2)\,b^{(-+)}(k_1,k_2)\,,
$$
\begin{equation}
\overline{u}^{(-)}(k_1)b^T(k_2,k_1)v^{(+)}(k_2)=\frac{1}{2}%
c_{--}^*(k_1)d_{++} (k_2)\,b^{(+-)}(k_2,k_1)\,,
\end{equation}
where
\begin{equation}
b^{(+-)}(k_1,k_2)=(b^{(-+)}(k_1,k_2))^{*}=-4\,\frac{q_1-k_1}{(q_1-k_1)^2}-
\frac{j^\beta \Gamma ^{+-\beta }(q_2,q_1)}{(k_1+k_2)^2}\,,
\end{equation}
and the quark current $j$ is
\begin{equation}
j=n+n^{*}\,\frac{k_2^{-}}{k_2^{*}}\,\frac{k_1}{k_1^{-}}-n^{-}\,\frac{k_1}{%
k_1^{-}}-n^{+}\,\frac{k_2^{-}}{k_2^{*}}\,.
\end{equation}
By definition we have
$$
n^\alpha k^\alpha =k,\,n^{*\alpha }k^\alpha =k^{*}\,,\,n^{\pm \alpha
}k^\alpha =k^{\pm }.
$$
This current is conserved:
\begin{equation}
(k_1^\beta +k_2^\beta )\,j^\beta =0\,.
\end{equation}
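The conservation of the current $j$ can be verified numerically. The sketch below is my own illustration, not from the paper: it samples random on-shell massless momenta, represents transverse vectors by complex numbers, builds the components of $j$ in the basis $(n,n^{*},n^{-},n^{+})$ and contracts with $k_1+k_2$ using the contraction rules listed above.

```python
import random

# Illustrative check (not from the paper) of (k1 + k2)^beta j^beta = 0.
# Transverse vectors are complex numbers k = k^1 + i k^2, and the mass-shell
# condition k^+ k^- = k k^* fixes k^+.
def random_onshell(rng):
    k = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    while abs(k) < 0.3:                      # keep denominators safe
        k = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    km = rng.uniform(0.1, 3.0)               # k^-
    kp = abs(k) ** 2 / km                    # k^+ from the mass shell
    return k, kp, km

rng = random.Random(0)
for _ in range(100):
    k1, k1p, k1m = random_onshell(rng)
    k2, k2p, k2m = random_onshell(rng)
    # components of j in the basis (n, n^*, n^-, n^+), read off from the text
    A = 1.0
    B = (k2m / k2.conjugate()) * (k1 / k1m)
    C = -k1 / k1m
    D = -k2m / k2.conjugate()
    # contraction rules: n.k = k, n^*.k = k^*, n^{+-}.k = k^{+-}
    total = (A * (k1 + k2) + B * (k1 + k2).conjugate()
             + C * (k1m + k2m) + D * (k1p + k2p))
    assert abs(total) < 1e-9
print("current conservation verified")
```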
\section{Final state particles with definite helicities and the complex
transverse momenta}
Using the reality condition $s\alpha _i\beta _i=\overrightarrow{k_i}^2$ for
the produced gluons $i=1,2$ to exclude the Sudakov parameters $\alpha _i$,
one can present the tensor $c^{\alpha _1\alpha _2}$ only in terms
of the transverse momenta $\overrightarrow{k_i}\,,\,\overrightarrow{q_i}$ and
the relative parameter $x=\frac{\beta _1}{\beta _1+\beta _2}$:
$$
c^{\alpha _1\alpha _2}=\frac{\delta ^{\alpha _1\alpha _2}}{2Z}
\overrightarrow{q_1}^2x(1-x)-xk_1^{\alpha _1}\frac{\overrightarrow{q_2}%
^2k_1^{\alpha _2}+\overrightarrow{\Delta }^2Q_1^{\alpha _2}-(1-x)^{-1}(
\overrightarrow{Q_1}^2-\overrightarrow{k_1}^2)k_2^{\alpha _2}}{\kappa
\overrightarrow{\,k_1}^2}-
$$
$$
-\frac 1{\overrightarrow{k_1}^2}xk_1^{\alpha _1}Q_1^{\alpha _2}+\frac{\Delta
^{\alpha _1}q_1^{\alpha _2}+\delta ^{\alpha _1\alpha _2}\overrightarrow{q_1}%
( \overrightarrow{k_1}+x\overrightarrow{q_2})-(1-x)^{-1}q_1^{\alpha
_1}(k_1^{\alpha _2}-x\Delta ^{\alpha _2})}\kappa +
$$
\begin{equation}
+\frac{Q_1^{\alpha _1}Q_1^{\alpha _2}-\frac 12(1-x)(\overrightarrow{Q_1}^2-
\overrightarrow{k_1}^2)\delta ^{\alpha _1\alpha _2}}t+x\overrightarrow{q_1}%
^2\,\frac{k_2^{\alpha _1}k_2^{\alpha _2}+\delta ^{\alpha _1\alpha _2}(1-x)
\overrightarrow{k_1}\overrightarrow{\Delta }}{\kappa Z}
\end{equation}
where $\delta ^{\alpha _1\alpha _2}=-g_{\perp }^{\alpha _1\alpha _2}$ is the
Kronecker tensor ($\delta ^{11}=\delta ^{22}=1$) and
$$
x\equiv \frac{k_1^{+}}{k_1^{+}+k_2^{+}}\,,\,\overrightarrow{\Delta }\equiv
\overrightarrow{q_1}-\overrightarrow{q_2}=\overrightarrow{k_1}+
\overrightarrow{k_2}\,,\,\overrightarrow{Q_1}=\overrightarrow{q_1}-
\overrightarrow{k_1}\,,\,\kappa =\frac{(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}{x(1-x)}\,,
$$
\begin{equation}
Z=-x(1-x)(\overrightarrow{\Delta }^2+\kappa )\,,\,\,t=-\frac{(
\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2}%
x\,.
\end{equation}
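As a cross-check of these definitions (my illustration, not part of the text), one can verify numerically that the transverse-momentum expression for $\kappa $ reproduces the invariant mass $(k_1+k_2)^2$ of the two on-shell gluons, using light-cone components with $k^{+}k^{-}=\overrightarrow{k}^2$.

```python
import random

# Illustrative cross-check (not from the paper): the transverse formula
#   kappa = (k1 - x*Delta)^2 / (x*(1-x)),  x = k1^+/(k1^+ + k2^+),
# reproduces the invariant mass kappa = (k1 + k2)^2 of two on-shell
# massless gluons with k^+ k^- = |k_perp|^2.
rng = random.Random(1)
for _ in range(100):
    k1 = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    k2 = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    k1p, k2p = rng.uniform(0.1, 3.0), rng.uniform(0.1, 3.0)   # k_i^+
    k1m, k2m = abs(k1) ** 2 / k1p, abs(k2) ** 2 / k2p         # mass shell
    x = k1p / (k1p + k2p)
    delta = k1 + k2                                           # Delta = k1 + k2
    kappa_inv = (k1p + k2p) * (k1m + k2m) - abs(delta) ** 2   # (k1 + k2)^2
    kappa_def = abs(k1 - x * delta) ** 2 / (x * (1 - x))
    assert abs(kappa_inv - kappa_def) < 1e-9 * max(1.0, abs(kappa_inv))
print("kappa identity verified")
```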
The imaginary part of the elastic scattering amplitude calculated with the
use of the $s$-channel unitarity condition in terms of the squared
production amplitude $A_{2\rightarrow 4}$ contains infrared divergences
at small $k_{i\perp }^2$ and $\kappa $. To avoid such divergences the
dimensional regularization
\begin{equation}
\frac{d^4k}{(2\pi )^4}\,\longrightarrow \frac{d^Dk}{(2\pi )^D}
\end{equation}
is used in the gauge theories. In the previous papers $\left[ 11-13\right] $
we calculated in this regularization the next-to-leading corrections to the
production amplitude in the multi-Regge kinematics for the final particles.
In the $D$-dimensional space the gluon has $D-2$ degrees of freedom. In our
case the transverse momenta $\overrightarrow{q_1},\overrightarrow{q_2}$ of
the reggeized gluons form a plane in the $D-2$ -dimensional space for the
polarization vectors $\overrightarrow{e_{\perp }}$. Two states described by
the vectors $\overrightarrow{e_{\perp }}$ belonging to this plane can be
considered as the physical states. Other $D-4$ states are the auxiliary
states necessary for the regularization. For $D\rightarrow 4$ their
contribution in the region of fixed $k_{i\perp }$ and $\kappa $ becomes
negligible.
We begin with the physical contributions from the region of non-vanishing $%
k_{i\perp }$ and $\kappa $, where the transverse subspace can be considered
as a two-dimensional one. The singular region of small $k_{i\perp }$ and $%
\kappa $ will be discussed later. Let us introduce the complex components of
the tensor $c^{\alpha _1\alpha _2}$ directly related to the amplitudes for
producing the gluons with equal and opposite helicities, respectively
(this result was also obtained independently in ref. [16]):
$$
c^{+-}(k_1,k_2)\equiv c^{11}+ic^{21}-ic^{12}+c^{22}=-\frac{%
q_2\,(k_1^{*}-x\Delta ^{*})\,q_1^{*}}{\kappa \,k_1^{*}(1-x)}\,,
$$
$$
c^{++}(k_1,k_2)=c^{11}+ic^{21}+ic^{12}-c^{22}=\frac{Q_1Q_1}t-\frac{xQ_1}{%
k_1^{*}}+\frac{x\overrightarrow{q_1}^2k_2}\kappa \left( \frac{k_2}Z+\frac
1{k_1^{*}}\right) -
$$
\begin{equation}
-\frac{xq_1(k_1-x\Delta )}{\kappa \,(1-x)}-\frac 1{\kappa \,k_1^{*}}\left(
(k_1-x\Delta )\frac x{1-x}(\overrightarrow{Q_1}^2-\overrightarrow{k_1}%
^2)-(k_1^{*}-x\Delta ^{*})q_1k_2\right) ,
\end{equation}
where $B=B^1+i\,B^2,\,B^{*}=B^1-iB^2$ are complex coordinates of a
two-dimensional vector $\overrightarrow{B}$. The above expression for $%
c^{\alpha _1\alpha _2}$ can be presented in the following form:
$$
c^{+-}(k_1,k_2)=\overline{c^{-+}}(k_1,k_2)=-x\,\frac{q_2\,q_1^{*}}{%
(k_1-x\,\Delta )\,k_1^{*}}\,,
$$
$$
c^{++}(k_1,k_2)=\overline{c^{--}}(k_1,k_2)=
$$
$$
=-\frac{x\,(Q_1)^2}{\left( (\overrightarrow{k_1}-x\overrightarrow{q_1}%
)^2+x(1-x)\overrightarrow{q_1}^2\right) }+\frac{x\,\overrightarrow{q_1}%
^2(k_2)^2}{\overrightarrow{\Delta }^2\left( (\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }-
$$
\begin{equation}
-\frac{x(1-x)q_1k_2q_2^{*}}{\Delta ^{*}\,(k_1-x\Delta )k_1^{*}}-\frac{%
xq_1^{*}k_1q_2}{\overrightarrow{\Delta }^2(k_1^{*}-x\Delta ^{*})}+\frac{%
xq_2^{*}Q_1}{\Delta ^{*}\,k_1^{*}}\,.
\end{equation}
One can verify that these expressions have the following symmetry
properties:
\begin{equation}
c^{+-}(k_1,k_2) \leftrightarrow c^{-+}(k_1,k_2),\,c^{++}(k_1,k_2)
\leftrightarrow c^{++}(k_1,k_2)
\end{equation}
under the simultaneous substitutions
$$
k_1\leftrightarrow k_2\,,\,q_1\leftrightarrow -q_2\,,\,x\leftrightarrow
\frac{x\overrightarrow{k_2}^2}{(\overrightarrow{k_1}-x\overrightarrow{\Delta
})^2+x(1-x)\overrightarrow{\Delta }^2}\,,
$$
\begin{equation}
k_1-x\Delta \leftrightarrow \frac{k_1k_2(k_1^{*}-x\Delta ^{*})}{(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2},
\end{equation}
corresponding to the left-right symmetry:
\begin{equation}
k_1\leftrightarrow k_2\,,\,q_1\leftrightarrow -q_2\,,\,n^{+}\leftrightarrow
n^{-}.
\end{equation}
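This symmetry of $c^{+-}$ can be confirmed numerically. The sketch below is my own check, not from the paper: with randomly sampled complex transverse momenta it verifies that the quoted substitutions map $c^{+-}(k_1,k_2)$ into its complex conjugate $c^{-+}(k_1,k_2)$.

```python
import random

# Illustrative check (not from the paper) of the left-right symmetry:
# under the substitutions quoted above, c^{+-}(k1,k2) goes into its complex
# conjugate c^{-+}(k1,k2).  Transverse vectors are complex numbers and
# Delta = q1 - q2 = k1 + k2.
def c_pm(k1, k2, q1, x):
    delta = k1 + k2
    q2 = q1 - delta
    return -x * q2 * q1.conjugate() / ((k1 - x * delta) * k1.conjugate())

rng = random.Random(2)
for _ in range(100):
    k1 = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    k2 = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    q1 = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    x = rng.uniform(0.05, 0.95)
    delta = k1 + k2
    q2 = q1 - delta
    D = abs(k1 - x * delta) ** 2 + x * (1 - x) * abs(delta) ** 2
    x_new = x * abs(k2) ** 2 / D                  # substituted x
    lhs = c_pm(k2, k1, -q2, x_new)                # k1 <-> k2, q1 <-> -q2
    rhs = c_pm(k1, k2, q1, x).conjugate()
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
print("left-right symmetry of c^{+-} verified")
```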
In the Regge regime of small $1-x$ and fixed $k_i,q_i$ the amplitudes
$c^{\alpha _1\alpha _2}$ simplify as follows
\begin{equation}
c^{+-}(k_1,k_2)\longrightarrow \frac{q_1^{*}\,q_2}{k_1^{*}\,k_2}%
\,,\,\,c^{++}(k_1,k_2)\longrightarrow \frac{q_1^{*}}{k_1^{*}}\,\frac{q_1-k_1
}{q_1^{*}-k_1^{*}}\,\frac{q_2^{*}}{k_2^{*}}
\end{equation}
and are proportional to the product of the effective vertices $\Gamma
^{+-\beta }$ in the light-cone gauge $\left[ 4\right] $. For $x\rightarrow 0$
the amplitudes $c^{+-}(k_1,k_2)$ and $c^{++}(k_1,k_2)$ vanish, but for
simultaneously small $k_1$ or $k_1-x\Delta $ one obtains from them a nonzero
integral contribution because there are poles in this region:
\begin{equation}
c^{++}(k_1,k_2)\rightarrow \frac{q_1q_2^{*}\Delta }{q_1^{*}q_2\Delta ^{*}}%
\,c^{+-}(k_1,k_2)\rightarrow -\frac{xq_1q_2^{*}\Delta }{\Delta
^{*}(k_1-x\Delta )k_1^{*}}\,.
\end{equation}
For large $k_1$ and fixed $q_i\,,\,x$ we obtain
\begin{equation}
c^{+-}(k_1,k_2)\longrightarrow -x\,\frac{q_1^{*}q_2}{\overrightarrow{k_1}^2}%
\,,\,\,c^{++}(k_1,k_2)\longrightarrow -x(1-x)^2\,\frac{q_1q_2}{
\overrightarrow{k_1}^2}-x^3\frac{q_1^{*}q_2^{*}k_1}{(k_1^{*})^3}
\end{equation}
and therefore the integrals for the cross section of the gluon production do
not contain any ultraviolet divergence.
As is seen from the above formulas, $c^{\alpha _1\alpha _2}$ contains
infrared poles at small $\overrightarrow{k_1}$:
\begin{equation}
c^{+-}(k_1,k_2)\rightarrow \,\frac{q_1^{*}q_2}{\Delta \,k_1^{*}}%
\,,\,c^{++}(k_1,k_2)\rightarrow \,\frac{q_1q_2^{*}}{\Delta ^{*}\,k_1^{*}}\,\,
\end{equation}
and at small $\overrightarrow{k_1}-x\,\overrightarrow{\Delta }$:
\begin{equation}
c^{+-}\rightarrow -\,\frac{q_1^{*}q_2}{\Delta ^{*}(k_1-x\Delta )}%
\,,\,\,c^{++}\rightarrow -(1-x)^2\,\frac{q_1\Delta q_2^{*}}{(\Delta
^{*})^2(k_1-x\Delta )}-x^2\frac{q_1^{*}q_2}{\Delta ^{*}(k_1^{*}-x\Delta
^{*})%
}\,.
\end{equation}
It is obvious that the amplitude $c^{+-}(k_1,k_2)$ vanishes for $%
q_1\rightarrow 0$ or $q_2\rightarrow 0$. The amplitude $c^{++}(k_1,k_2)$
behaves as follows
\begin{equation}
c^{++}(k_1,k_2)\rightarrow \frac{xk_1}{k_1^*} \left(\frac{x^2q_1^*\Delta^*}{%
k_1^*(k_1^*-x\Delta^*)}+ \frac{(1-x)^2q_1\Delta}{k_1(k_1-x\Delta )}\right)
\end{equation}
for $q_1\rightarrow 0$ and as follows
\begin{equation}
c^{++}(k_1,k_2)\rightarrow -\frac{x}{(\overrightarrow{k_1}^2-2x
\overrightarrow{k_1} \overrightarrow{\Delta}+x\overrightarrow{\Delta}^2)^2}
\left(\frac{x^2q_2^*\Delta^*k_2^4}{k_1^*(k_1-x\Delta)}+ \frac{%
(1-x)^2q_2\Delta\overrightarrow{k_1}^2k_1^*}{k_1^*-x\Delta^*}\right)
\end{equation}
for $q_2\rightarrow 0$. The last expression can be obtained from the previous
one with the use of the above left-right symmetry of $c^{++}(k_1,k_2)$.
Let us return now to the quark-antiquark production and renormalize the
amplitude as follows
\begin{equation}
b^{(+-)}(k_1,k_2)=\frac 4{k_1^{*}}\,c(k_1,k_2)\,.
\end{equation}
Then the factor $c(k_1,k_2)$ is given below:
$$
c(k_1,k_2)=\frac{xQ_1k_1^{*}}{(\overrightarrow{k_1}-x\overrightarrow{q_1}%
)^2+x(1-x)\overrightarrow{q_1}^2}-\frac{x\overrightarrow{q_1}^2k_2k_1^{*}}{
\overrightarrow{\Delta }^2\left( (\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
\begin{equation}
+\frac{x(1-x)q_1q_2^{*}}{\Delta ^{*}(k_1-x\Delta )}-\frac{xq_1^{*}q_2k_1^{*}
}{\overrightarrow{\Delta }^2(k_1^{*}-x\Delta ^{*})}-\frac{xq_2^{*}}{\Delta
^{*}}.
\end{equation}
It can be written as follows
$$
\frac 1x\,c(k_1,k_2)=\frac{(1-x)q_1k_1^{*}-xq_1^{*}k_1+x\overrightarrow{q_1}%
^2}{(\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-
x)\overrightarrow{q_1}%
^2}
$$
\begin{equation}
-\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,\frac{%
(1-x)\Delta k_1^{*}-x\Delta ^{*}k_1+x\overrightarrow{\Delta }^2}{(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2}+\frac{(1-x)q_1q_2^{*}}{\Delta ^{*}(k_1-x\Delta )}-\frac{%
xq_1^{*}q_2}{\Delta (k_1^{*}-x\Delta ^{*})}\,.
\end{equation}
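The equivalence of the two forms of $c(k_1,k_2)$ can be checked numerically; the following sketch is my illustration, not from the paper, and evaluates both expressions at random complex transverse momenta.

```python
import random

# Consistency check (my illustration, not from the paper): the explicit
# expression for c(k1,k2) above and its rearranged form for c(k1,k2)/x
# agree identically.  Transverse vectors are complex; Delta = k1 + k2,
# q2 = q1 - Delta, Q1 = q1 - k1.
def c_explicit(k1, k2, q1, x):
    delta = k1 + k2
    q2 = q1 - delta
    Q1 = q1 - k1
    A = abs(k1 - x * q1) ** 2 + x * (1 - x) * abs(q1) ** 2
    B = abs(k1 - x * delta) ** 2 + x * (1 - x) * abs(delta) ** 2
    return (x * Q1 * k1.conjugate() / A
            - x * abs(q1) ** 2 * k2 * k1.conjugate() / (abs(delta) ** 2 * B)
            + x * (1 - x) * q1 * q2.conjugate()
              / (delta.conjugate() * (k1 - x * delta))
            - x * q1.conjugate() * q2 * k1.conjugate()
              / (abs(delta) ** 2 * (k1 - x * delta).conjugate())
            - x * q2.conjugate() / delta.conjugate())

def c_over_x(k1, k2, q1, x):
    delta = k1 + k2
    q2 = q1 - delta
    A = abs(k1 - x * q1) ** 2 + x * (1 - x) * abs(q1) ** 2
    B = abs(k1 - x * delta) ** 2 + x * (1 - x) * abs(delta) ** 2
    return (((1 - x) * q1 * k1.conjugate() - x * q1.conjugate() * k1
             + x * abs(q1) ** 2) / A
            - abs(q1) ** 2 / abs(delta) ** 2
              * ((1 - x) * delta * k1.conjugate() - x * delta.conjugate() * k1
                 + x * abs(delta) ** 2) / B
            + (1 - x) * q1 * q2.conjugate()
              / (delta.conjugate() * (k1 - x * delta))
            - x * q1.conjugate() * q2
              / (delta * (k1 - x * delta).conjugate()))

rng = random.Random(3)
for _ in range(50):
    k1 = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    k2 = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    q1 = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    x = rng.uniform(0.05, 0.95)
    v1 = c_explicit(k1, k2, q1, x)
    v2 = x * c_over_x(k1, k2, q1, x)
    assert abs(v1 - v2) < 1e-8 * max(1.0, abs(v1))
print("both forms of c(k1,k2) agree")
```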
For large values of $k_1$ this expression decreases rapidly:
\begin{equation}
\frac 1xc(k_1,k_2)\rightarrow x(1-x)\frac{q_1q_2}{k_1^2}-x^2\frac{%
q_1^{*}q_2^{*}}{(k_1^{*})^2}\,.
\end{equation}
The amplitude $c(k_1,k_2)$ vanishes
\begin{equation}
c(k_1,k_2)\rightarrow \frac{x^3q_1^{*}\Delta ^{*}}{k_1^{*}(k_1^{*}-x\Delta
^{*})}-\frac{x^2(1-x)q_1\Delta }{k_1(k_1-x\Delta )}
\end{equation}
for $\overrightarrow{q_1}\rightarrow 0$ and
\begin{equation}
c(k_1,k_2)\rightarrow \frac{x^3k_2^3q_2^{*}\Delta ^{*}(k_1^{*}-x\Delta
^{*})-x^2(1-x)(k_1^{*})^2k_2^{*}q_2\Delta (k_1-x\Delta )}{\left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right)^2 (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}
\end{equation}
for $\overrightarrow{q_2}\rightarrow 0$, respectively. The last expression
can be obtained from the previous one if we take into account that under the
above left-right symmetry transformations $c(k_1,k_2)$ is changed as follows
\begin{equation}
c(k_1,k_2)\leftrightarrow -\frac{k_2^{*}}{k_1^{*}}c(k_1,k_2)\,.
\end{equation}
\section{Squared production amplitudes}
The total cross-section of the gluon production is proportional to the
integral of the squared
amplitude $\psi _{d_1d_2c_2c_1}^{\alpha _1\alpha _2}$ summed over all
indices. It is expressed in terms of the quantity
\begin{equation}
R\equiv (c^{\alpha _1\alpha _2}(k_1,k_2))^2+(c^{\alpha _1\alpha
_2}(k_2,k_1))^2+c^{\alpha _1\alpha _2}(k_1,k_2)c^{\alpha _2^{\prime }\alpha
_1^{\prime }}(k_2,k_1)\,\Omega ^{\alpha _1\alpha _1^{\prime }}\, \Omega
^{\alpha_2\alpha _2^{\prime }},
\end{equation}
where the tensor $\Omega ^{\alpha _i\alpha _{i^{\prime }}}$ interchanges the
left and right gauges (see $\left[ 4\right] $):
$$
\Omega ^{\alpha _i\alpha _{i^{\prime }}}=g_{\perp }^{\alpha _i\alpha
_{i^{\prime }}}-2\frac{k_{i\perp }^{\alpha _i}k_{i\perp }^{\alpha
_{i^{\prime }}}}{k_{i\perp }^2}\,.
$$
Above we took into account that the colour factor for the interference term
is two times smaller than that for the direct contributions. The generalized
BFKL equation for the virtual gluon cross-section can be written in the
integral form as follows
\begin{equation}
\sigma (\overrightarrow{q_1},\,q_1^{+})=\sigma _0(\overrightarrow{q_1}%
,q_1^{+})+\int \frac{d\,q_2^{+}}{q_2^{+}}\int d^2\overrightarrow{q_2}%
\,K_\delta (\overrightarrow{q_1},\overrightarrow{q_2})\sigma (
\overrightarrow{q_2},\,q_2^{+})\,
\end{equation}
where the integration region over the longitudinal momentum $q_2^{+}$ is
restricted from above by the value proportional to $q_1^{+}$:
\begin{equation}
q_2^{+}\,<\,\delta \,q_1^{+}.
\end{equation}
The intermediate infinitesimal parameter $\delta >0$ is introduced to
arrange the particles in groups with strongly different rapidities (see
$\left[ 14\right] $). The integral kernel $K_\delta (\overrightarrow{q_1},
\overrightarrow{q_2})$ takes into account the interaction among the
particles inside each group, where $\delta $ plays the role of the
ultraviolet cut-off in their relative rapidities. The kernel $K_\delta $ can
be calculated in the perturbation theory:
\begin{equation}
K_\delta (\overrightarrow{q_1},\overrightarrow{q_2})=\sum_{r=1}^\infty
\left( \frac{g^2}{2(2\pi )^{D-1}}\right) ^rK_\delta ^{(r)}(\overrightarrow{%
q_1},\overrightarrow{q_2}).
\end{equation}
The real contribution to the kernel in the leading logarithmic approximation
is proportional to the square of the effective vertex:
\begin{equation}
K_{real}^{(1)}(\overrightarrow{q_1},\overrightarrow{q_2})=-\frac{N_c}{4\,
\overrightarrow{q_1}^2\,\overrightarrow{q_2}^2}\left( \Gamma ^{+-\beta
}(q_2,q_1)\right) ^2=N_c\,\frac 4{(\overrightarrow{q_1}-\overrightarrow{q_2}%
)^2}.
\end{equation}
The corresponding virtual contribution is proportional to the gluon Regge
trajectory $\omega (-\overrightarrow{q}_1^2)$.
The next-to-leading term in $K_\delta $ related to the two-gluon
production is given below
\begin{equation}
K_{gluons}^{(2)}=\int d\kappa \,d^Dk_1\,\delta (k_1^2)\,\delta (k_2^2)\,
\frac{16N_c^2R}{\overrightarrow{q_1}^2\overrightarrow{q}_2^2}=\frac{%
16\,N_c^2 }{2\overrightarrow{q_1}^2\overrightarrow{q_2}^2}\int_\delta
^{1-\delta } \frac{dx}{x(1-x)}\int d^{D-2}\overrightarrow{k_1}\,R\,.
\end{equation}
The limits in the integral over $x$ correspond to a restriction from above
on the invariant mass $\sqrt{\kappa }$ of the produced gluons. In the
solution of the generalized BFKL equation the dependence on $\delta $ should
disappear.
For the physical value $D=4$ of the space-time dimension one can express $R$
in terms of the contributions from the states with the definite helicities:
\begin{equation}
R=R(+-)+R(++)
\end{equation}
where
$$
R(+-)=\frac 12\left( \mid c^{+-}(k_1,k_2)\mid ^2+\mid c^{+-}(k_2,k_1)\mid
^2+Re\,c^{+-}(k_1,k_2)\,c^{-
+}(k_2,k_1)\frac{k_1^{*}}{k_1}\frac{k_2}{k_2^{*}}%
\right) \,,
$$
\begin{equation}
R(++)=\frac 12\left( \mid c^{++}(k_1,k_2)\mid ^2+\mid c^{++}(k_2,k_1)\mid
^2+Re\,c^{++}(k_1,k_2)\,c^{++}(k_2,k_1)\frac{k_1^{*}}{k_1}\frac{k_2^{*}}{k_2}
\right) .
\end{equation}
Here we have used the following relation between the polarization vectors in
the right and left gauges $\left[ 4\right] $:
\begin{equation}
e_{\perp }^r(k)=-(e_{\perp }^l(k))^{*}\frac k{k^{*}}
\end{equation}
to express all helicity amplitudes in terms of two complex functions $%
c^{+-}(k_1,k_2)$ and $c^{++}(k_1,k_2)$ given above. One should also take
into account the following relations:
$$
c^{+-}(k_1,k_2)\,\frac{k_1^{*}}{k_1}=-\frac{q_2q_1^{*}}{\Delta }\,\left(
\frac 1{k_1-x\Delta }-\frac 1{k_1}\right) \,,
$$
$$
c^{++}(k_1,k_2)\,\frac{k_1^{*}}{k_1}=-\frac{(1-x)^2q_1q_2^{*}}{\Delta
^{*}(k_1-x\Delta )}-\frac{x^2q_1^{*}q_2}{\Delta (k_1^{*}-x\Delta ^{*})}+
\frac{(1-x)^2q_1q_2^{*}}{\Delta ^{*}k_1}
$$
$$
+\frac{x^2k_1q_1^{*}\left( (3-x)q_1-k_1\right) -xq_1^2\left(
x(2-x)q_1^{*}+(1-x)^2k_1^{*}\right) }{k_1\left( (\overrightarrow{k_1}-x
\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) }
$$
\begin{equation}
+\frac{x^2\,\overrightarrow{q_1}^2\Delta ^{*}k_1\left( k_1-(3-x)\Delta
\right) +x\overrightarrow{q_1}^2(\Delta )^2\left( x(2-x)\Delta
^{*}+(1-x)^2k_1^{*}\right) }{\overrightarrow{\Delta }^2k_1\left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) }.
\end{equation}
The various bilinear combinations of the functions $c^{+-}$ and $c^{++}$
needed to calculate $R(+-)$ and $R(++)$ are given in the Appendix. The
ultraviolet divergences at large $k_1$ which appear in some contributions
are cancelled in the sum.
At small $k_1$ and fixed $x$ we obtain:
\begin{equation}
\mid c^{++}(k_1,k_2)\mid ^2\rightarrow \mid c^{+-}(k_1,k_2)\mid
^2\rightarrow \frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{
\overrightarrow{k_1}^2\overrightarrow{\Delta }^2}\,.\,\,
\end{equation}
In this region the interference terms are convergent. The quantity $R$ at
small $\overrightarrow{k_1}$ and fixed $x$ can be calculated in the $D$%
-dimensional space-time:
\begin{equation}
R\rightarrow \frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{
\overrightarrow{\Delta }^2\overrightarrow{k_1}^2}\,,
\end{equation}
which makes it possible to regularize the infrared divergence at $k_1
\rightarrow 0$.
At small $k_1-x\Delta $ and fixed $x$ we obtain for the bilinear
combinations of $c^{+-}$:
\begin{equation}
\mid c^{+-}(k_1,k_2)\mid ^2\rightarrow -c^{+-}(k_1,k_2)\,c^{-+}(k_2,k_1)\,
\frac{k_1^{*}}{k_1}\frac{k_2}{k_2^{*}}\rightarrow \frac{\overrightarrow{q_1}%
^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}\,.
\end{equation}
The bilinear combinations of $c^{++}$ in the same region are simplified
drastically:
\begin{eqnarray*}
\mid c^{++}(k_1,k_2)\mid ^2\rightarrow -c^{++}(k_1,k_2)c^{++}(k_2,k_1)\frac{%
k_1^{*}}{k_1}\frac{k_2^{*}}{k_2}\rightarrow \mid \frac{(1-x)^2q_1q_2^{*}}{%
\Delta ^{*}(k_1-x\Delta )}+\frac{x^2q_1^{*}q_2}{\Delta (k_1^{*}-x\Delta
^{*})%
}\mid ^2
\end{eqnarray*}
\begin{equation}
=\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2(1-2x)^2}
{\overrightarrow{\Delta }^2(\overrightarrow{k_1}-
x \overrightarrow{\Delta })^2}\,
+ \frac{4x^2(1-x)^2(\overrightarrow{q_1}^2\overrightarrow{\Delta }-
\overrightarrow{\Delta }^2
\overrightarrow{q_1}\,,\,\overrightarrow{k_1}-x\overrightarrow{\Delta })^2 }
{\overrightarrow{\Delta }^4(\overrightarrow{k_1}-x \overrightarrow{\Delta })
^4}.
\end{equation}
The quantity $R$ at small $\overrightarrow{k_1}-x\overrightarrow{\Delta }$
and fixed $x$ can be calculated for an arbitrary dimension $D$ of the
space-time if we take into account that the tensor $c^{\alpha _1\alpha
_2}(k_1,k_2)$ is simplified in this limit:%
$$
c^{\alpha _1\alpha _2}(k_1,k_2)\rightarrow \widetilde{c}^{\alpha _1\alpha
_2}(k_1,k_2)=-2\frac{(1-x)\Delta ^{\alpha _1}\Delta ^{\alpha _2}}{
\overrightarrow{\Delta }^4}\,\frac{\left( (1-x)\overrightarrow{q_1}^2
\overrightarrow{\Delta }+x\overrightarrow{\Delta }^2\overrightarrow{q_1}%
\,,\, \overrightarrow{k_1}-x\overrightarrow{\Delta }\right) }{(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}
$$
$$
+2\frac{(1-x)(\overrightarrow{\Delta },\,\overrightarrow{k_1}-x
\overrightarrow{\Delta })}{\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}\,\Delta ^{\alpha _1}q_1^{\alpha _2}-\frac{%
x(1-x)\,\delta ^{\alpha _1\alpha _2}}{\overrightarrow{\Delta }^2}\,\frac{(
\overrightarrow{q_1}^2\overrightarrow{\Delta }-\overrightarrow{\Delta }^2
\overrightarrow{q_1}\,,\,\overrightarrow{k_1}-x\overrightarrow{\Delta })}{(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}
$$
\begin{equation}
+\frac{(1-x)(k_1^{\alpha _1}-x\Delta ^{\alpha _1})(\overrightarrow{q_1}%
^2\Delta ^{\alpha _2}-\overrightarrow{\Delta }^2q_1^{\alpha _2})}{
\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta
})^2}%
-\frac{x(k_1^{\alpha _2}-x\Delta ^{\alpha _2})(\overrightarrow{q_2}^2\Delta
^{\alpha _1}+\overrightarrow{\Delta }^2q_2^{\alpha _1})}{\overrightarrow{%
\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\,.
\end{equation}
This expression has the important symmetry property:
\begin{equation}
\widetilde{c}^{\alpha _1\alpha _2}(k_1,k_2)=-\left( \delta ^{\alpha _1\alpha
_1^{\prime }}-2\frac{\Delta ^{\alpha _1}\Delta ^{\alpha _1^{\prime }}}{
\overrightarrow{\Delta }^2}\right) \left( \delta ^{\alpha _2\alpha
_2^{\prime }}-2\frac{\Delta ^{\alpha _2}\Delta ^{\alpha _2^{\prime }}}{
\overrightarrow{\Delta }^2}\right) \widetilde{c}^{\alpha _2^{\prime }\alpha
_1^{\prime }}(k_2,k_1)\,,
\end{equation}
which in particular means that at $k_1\rightarrow x\Delta $ the interference
is destructive:
\begin{equation}
\widetilde{c}^{\alpha _1\alpha _2}(k_1,k_2)\,\widetilde{c}^{\alpha
_2^{\prime }\alpha _1^{\prime }}(k_2,k_1)\,\Omega ^{\alpha _1\alpha
_1^{\prime }}(\Delta )\,\Omega ^{\alpha _2\alpha _2^{\prime }}(\Delta
)=-\mid \widetilde{c}^{\alpha _1\alpha _2}(k_1,k_2)\mid ^2
\end{equation}
and
\begin{equation}
\mid \widetilde{c}^{\alpha _1\alpha _2}(k_2,k_1)\mid ^2=\mid \widetilde{c}%
^{\alpha _1\alpha _2}(k_1,k_2)\mid ^2\,.
\end{equation}
Let us write $\widetilde{c}$ as a sum of three terms:
\begin{equation}
\widetilde{c}^{\alpha _1\alpha _2}(k_1,k_2)=\frac{\delta ^{\alpha _1\alpha
_2}}{D-2}\,\widetilde{c}^{\alpha \alpha }(k_1,k_2)+\widetilde{c}^{\left[
\alpha _1\alpha _2\right] }(k_1,k_2)+\widetilde{c}^{(\alpha _1\alpha
_2)}(k_1,k_2)\,,
\end{equation}
where the trace of the matrix $\widetilde{c}^{\alpha _1\alpha _2}(k_1,k_2)$
is
\begin{equation}
\widetilde{c}^{\alpha \alpha }(k_1,k_2)=\left( (4-D)\,\frac{x(1-x)}{
\overrightarrow{\Delta }^2}\,\,\frac{\overrightarrow{q_1}^2\overrightarrow{%
\Delta }-\overrightarrow{\Delta
}^2\overrightarrow{q_1}}{(\overrightarrow{k_1%
}-x\overrightarrow{\Delta })^2}-\frac{\overrightarrow{q_2}^2\overrightarrow{%
\Delta }+\overrightarrow{\Delta }^2\overrightarrow{q_2}}{\overrightarrow{%
\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\,,\,
\overrightarrow{k_1}-x\overrightarrow{\Delta }\right) \,
\end{equation}
and
\begin{equation}
\widetilde{c}^{\left[ \alpha _1\alpha _2\right] }=\frac 12\left(
\widetilde{c%
}^{\alpha _1\alpha _2}-\widetilde{c}^{\alpha _2\alpha _1}\right) \,,\,
\widetilde{c}^{(\alpha _1\alpha _2)}=\frac 12\left( \widetilde{c}^{\alpha
_1\alpha _2}+\widetilde{c}^{\alpha _2\alpha _1}\right) -\frac{\delta
^{\alpha _1\alpha _2}}{D-2}\,\widetilde{c}^{\alpha \alpha }\,.
\end{equation}
These terms do not interfere with each other:
\begin{equation}
R=R^{(0)}+R^{(1)}+R^{(2)}\,,
\end{equation}
and for $k_1\rightarrow x\Delta $ their contributions are
\begin{equation}
R^{(0)}=\frac 1{D-2}\left( (4-D)\frac{x(1-x)}{\overrightarrow{\Delta }^2}(
\overrightarrow{q_1}^2\overrightarrow{\Delta }-\overrightarrow{\Delta }^2
\overrightarrow{q_1})-\frac{\overrightarrow{q_2}^2}{\overrightarrow{\Delta }%
^2}\overrightarrow{\Delta }-\overrightarrow{q_2}\,,\,\frac{\overrightarrow{%
k_1}-x\overrightarrow{\Delta }}{(\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2}\right) ^2
\end{equation}
for the singlet term,
$$
R^{(1)}=\frac 12\,\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{
\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta
})^2}%
-\frac 12\left( \frac{(\overrightarrow{q_2}^2\overrightarrow{\Delta }+
\overrightarrow{\Delta }^2\overrightarrow{q_2}\,,\,\overrightarrow{k_1}-x
\overrightarrow{\Delta })}{\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}\right) ^2
$$
\begin{equation}
+2\frac{x(1-x)}{\overrightarrow{\Delta }^2}\left( \left( \frac{(q_1^\beta
\overrightarrow{\Delta }-\Delta ^\beta \overrightarrow{q_1}\,,\,
\overrightarrow{k_1}-x\overrightarrow{\Delta })}{(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}\right) ^2-\frac{\overrightarrow{q_1}^2
\overrightarrow{\Delta }^2-(\overrightarrow{\Delta },\overrightarrow{q_1})^2
}{(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\right) \,
\end{equation}
for the antisymmetric tensor and
$$
R^{(2)}=-\frac 1{D-2}\,\left( \frac{\overrightarrow{q_2}^2\overrightarrow{%
\Delta }+\overrightarrow{\Delta }^2\overrightarrow{q_2}}{\overrightarrow{%
\Delta }^2}-2\frac{x(1-x)}{\overrightarrow{\Delta }^2}\left(
\overrightarrow{%
q_1}^2\overrightarrow{\Delta }-\overrightarrow{\Delta
}^2\overrightarrow{q_1}%
\right) \,,\,\frac{\overrightarrow{k_1}-x\overrightarrow{\Delta }}{(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\right) ^2
$$
$$
+4x^2(1-x)^2\left( \frac{\overrightarrow{q_1}^2\overrightarrow{\Delta }}{
\overrightarrow{\Delta }^2}-\overrightarrow{q_1}\,,\,\frac{\overrightarrow{%
k_1}-x\overrightarrow{\Delta }}{(\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2}\right) ^2+\frac 12\left( \frac{\overrightarrow{q_2}^2
\overrightarrow{\Delta }+\overrightarrow{\Delta }^2\overrightarrow{q_2}}{
\overrightarrow{\Delta }^2}\,,\,\frac{\overrightarrow{k_1}-x\overrightarrow{%
\Delta }}{(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\right) ^2
$$
\begin{equation}
+\frac
12\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{%
\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}-2x(1-x)\left(
\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2(\overrightarrow{\Delta },
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}{\overrightarrow{\Delta }%
^4(\overrightarrow{k_1}-x\overrightarrow{\Delta
})^4}+\frac{(\overrightarrow{%
q_1},\overrightarrow{q_2})^2}{\overrightarrow{\Delta
}^2(\overrightarrow{k_1}%
-x\overrightarrow{\Delta })^2}\right) .
\end{equation}
for the symmetric traceless tensor, respectively. Note that the total sum
$R$ is especially simple:
\begin{equation}
R=(D-2)\left( x(1-x)\frac{\overrightarrow{q_1}^2\overrightarrow{\Delta }-
\overrightarrow{\Delta }^2\overrightarrow{q_1}}{\overrightarrow{\Delta }^2}%
\,,\,\frac{\overrightarrow{k_1}-x\overrightarrow{\Delta }}{(\overrightarrow{%
k_1}-x\overrightarrow{\Delta })^2}\right) ^2+\frac{\left( 1-2x(1-x)\right)
\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta }^2(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}.
\end{equation}
We shall use this fact in the next section.
In the physical case $D=4$ the quantities $R^{(i)}$ can be expressed in
terms of the contributions from the definite helicity states:
$$
R(+-)\rightarrow R^{(0)}+R^{(1)}\,,
$$
$$
R^{(0)}=\frac 18\left| \frac{q_1^{*}q_2}{\Delta (k_1-x\Delta )}+\frac{%
q_1q_2^{*}}{\Delta ^{*}(k_1^{*}-x\Delta ^{*})}\right| ^2,\,R^{(1)}=\frac
18\left| \frac{q_1^{*}q_2}{\Delta (k_1-x\Delta )}-\frac{q_1q_2^{*}}{\Delta
^{*}(k_1^{*}-x\Delta ^{*})}\right| ^2 \,,
$$
\begin{equation}
R(++)\rightarrow R^{(2)}=\frac 12\left| \frac{(1-x)^2q_1q_2^{*}}{\Delta
^{*}(k_1-x\Delta )}+\frac{x^2q_1^{*}q_2}{\Delta (k_1^{*}-x\Delta ^{*})}%
\right| ^2\,.
\end{equation}
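The decomposition above can be checked numerically (an added cross-check, not part of the derivation). In the complex parametrization $k=k_x+ik_y$ of the transverse vectors, with $(\overrightarrow{a},\overrightarrow{b})=Re\,\bar ab$ and the kinematical relation $\overrightarrow{q_2}=\overrightarrow{q_1}-\overrightarrow{\Delta }$ assumed here, the identity $R=R^{(0)}+R^{(1)}+R^{(2)}$ at $D=4$ holds point by point:

```python
# Numerical cross-check of R = R^(0) + R^(1) + R^(2) at D = 4.
# Transverse 2-vectors are encoded as complex numbers k = k_x + i*k_y;
# (a, b) = Re(conj(a)*b), vec(a)^2 = |a|^2, and q2 = q1 - Delta is assumed.
import random

def dot(a, b):
    return (a.conjugate() * b).real

def R_total(q1, delta, k1, x, D=4):
    q2 = q1 - delta
    kappa = k1 - x * delta
    V = (abs(q1) ** 2 * delta - abs(delta) ** 2 * q1) / abs(delta) ** 2
    W = kappa / abs(kappa) ** 2
    return ((D - 2) * (x * (1 - x) * dot(V, W)) ** 2
            + (1 - 2 * x * (1 - x)) * abs(q1) ** 2 * abs(q2) ** 2
              / (abs(delta) ** 2 * abs(kappa) ** 2))

def R_helicity(q1, delta, k1, x):
    q2 = q1 - delta
    kappa = k1 - x * delta
    A = q1.conjugate() * q2 / (delta * kappa)
    B = q1 * q2.conjugate() / (delta.conjugate() * kappa.conjugate())
    R0 = abs(A + B) ** 2 / 8
    R1 = abs(A - B) ** 2 / 8
    R2 = abs((1 - x) ** 2 * q1 * q2.conjugate() / (delta.conjugate() * kappa)
             + x ** 2 * q1.conjugate() * q2 / (delta * kappa.conjugate())) ** 2 / 2
    return R0 + R1 + R2

random.seed(1)
for _ in range(100):
    q1 = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    delta = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    k1 = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    x = random.uniform(0.05, 0.95)
    a, b = R_total(q1, delta, k1, x), R_helicity(q1, delta, k1, x)
    assert abs(a - b) <= 1e-9 * (1 + abs(a))
```

The agreement at random kinematical points confirms that the sum of the helicity contributions reproduces the total sum $R$ written above.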
This allows us to regularize the integrals for $K_{gluons}^{(2)}$ in this
kinematical region.
As is seen from the above formulas for the bilinear expressions containing $%
c^{\alpha _1\alpha _2}(k_1,k_2)$, simultaneously with the infrared
singularity in the integral over $k_1$ we have divergencies at $x=0$ and $%
x=1$. The divergency at $x=1$ is related to the Regge limit $\kappa
\rightarrow \infty $. The divergency at $x=0$ has an infrared origin. The
bilinear combinations of $c^{\alpha _1\alpha _2}$ can be simplified in this
infrared region of small $k_1$ and $x$:
$$
\mid c^{+-}(k_1,k_2)\mid ^2\rightarrow \mid c^{++}(k_1,k_2)\mid
^2\rightarrow \frac{x^2\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{
\overrightarrow{k_1}^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\,,
$$
\begin{equation}
c^{+-}(k_1,k_2)c^{-+}(k_2,k_1)\frac{k_1^{*}}{k_1}\frac{k_2}{k_2^{*}}%
\rightarrow c^{++}(k_1,k_2)c^{++}(k_2,k_1)\frac{k_1^{*}}{k_1}\frac{k_2^{*}}{%
k_2}\rightarrow -\frac{x\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\Delta
^{*}k_1(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\,.
\end{equation}
This singularity can also be regularized, because one can calculate $R$ in
the $D$-dimensional space:
\begin{equation}
R\rightarrow \frac{x^2\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{
\overrightarrow{k_1}^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}+
\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta }%
^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}-\frac{x(
\overrightarrow{\Delta }\overrightarrow{k_1})\,\overrightarrow{q_1}^2
\overrightarrow{q_2}^2}{\overrightarrow{\Delta }^2\overrightarrow{k_1}^2(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}
\end{equation}
using the approximate expression for $c^{\alpha _1\alpha _2}(k_1,k_2)$ and $%
c^{\alpha _2\alpha _1}(k_2,k_1)$ in the region of small $x$ and $k_1$:
$$
c^{\alpha _1\alpha _2}(k_1,k_2)\rightarrow x\,\frac{(\overrightarrow{k_1}%
^2\Delta ^{\alpha _1}-x\overrightarrow{\Delta }^2k_1^{\alpha _1})(
\overrightarrow{\Delta }^2q_1^{\alpha _2}-\overrightarrow{q_1}^2\Delta
^{\alpha _2})}{\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2\overrightarrow{k_1}^2}\,,
$$
\begin{equation}
c^{\alpha _2\alpha _1}(k_2,k_1)\rightarrow (k_1^{\alpha _1}-x\Delta ^{\alpha
_1})\,\frac{\overrightarrow{q_2}^2\Delta ^{\alpha _2}+\overrightarrow{\Delta
}^2q_2^{\alpha _2}}{\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}.
\end{equation}
We also write down the expression for $R$ in the Regge region of small $1-x$
and fixed $\overrightarrow{k_1}$ for arbitrary $D$:
\begin{equation}
R \rightarrow \frac{\overrightarrow{q_1}^2 \overrightarrow{q_2}^2} {
\overrightarrow{k_1}^2 \overrightarrow{k_2}^2}.
\end{equation}
Let us now return to the quark-antiquark production. The total cross-section
of this process, in accordance with the normalization condition for spinors,
is proportional to the integral of the expression:
\begin{equation}
K_{quarks}^{(2)}=\int_0^1\frac{dx}{x(1-x)}\int d^{D-2}\overrightarrow{k_1}
\frac{8\,L}{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}
\end{equation}
where we put $\delta =0$ because the integral over $x$ is convergent at $x=0$
and $x=1$. The expression for $L$ with the corresponding colour factors is
given below:
\begin{equation}
L=\frac{N_c^2-1}{4N_c}\left( \frac{1-x}x\mid c(k_1,k_2)\mid ^2+\frac
x{1-x}\mid c(k_2,k_1)\mid ^2\right) +\frac
1{2N_c}Re\;c(k_1,k_2)\,c(k_2,k_1).
\end{equation}
Here two equal contributions from the two different helicity states of the
quark-antiquark pair were taken into account. Note that in the limit of
large $N_c$ the quark contribution is smaller than the gluon one and the
interference term is suppressed by the factor $1/N_c^2 $ in comparison with
the direct contribution.
The bilinear combinations of the function $c(k_1,k_2)$ entering $L$ are
given in the Appendix. In the region $\overrightarrow{k_1}\rightarrow x
\overrightarrow{\Delta }$ the interference for quarks
is constructive (contrary to the gluon case):
\begin{equation}
\frac 1{x^2}\left| c(k_1,k_2)\right| ^2\rightarrow \frac
1{x(1-x)}c(k_1,k_2)c(k_2,k_1)\rightarrow \left| \frac{(1-x)q_1q_2^{*}}{%
\Delta ^{*}(k_1-x\Delta )}-\frac{xq_1^{*}q_2}{\Delta (k_1^{*}-x\Delta ^{*})}%
\right| ^2.
\end{equation}
We can find the asymptotic behaviour of the bilinear combinations of $c(k_1,k_2)$
at $\overrightarrow{k_1}\rightarrow x\overrightarrow{\Delta }$ for an
arbitrary dimension $D$ of space-time:
$$
\frac 1{x^2}\left| \widetilde{c}(k_1,k_2)\right| ^2\rightarrow \frac
1{x(1-x)}\widetilde{c}(k_1,k_2)\widetilde{c}(k_2,k_1)\rightarrow
$$
$$
\frac 1{32\,x(1-x)}\,\Gamma ^{+-\beta }(q_2,q_1)\Gamma ^{+-\beta ^{\prime
}}(q_2,q_1)\,\frac{tr\,\widehat{k_1}\gamma _\beta \widehat{k_2}\gamma
_{\beta ^{\prime }}}{(k_1+k_2)^4}\rightarrow
$$
\begin{equation}
\left( \frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{%
\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}-4x(1-x)\left(
\frac{\overrightarrow{\Delta }^2\overrightarrow{q_1}-\overrightarrow{q_1}^2
\overrightarrow{\Delta }}{\overrightarrow{\Delta }^2}\,,\,\frac{
\overrightarrow{k_1}-x\overrightarrow{\Delta }}{(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}\right) ^2\right) \frac 14tr\,(1).
\end{equation}
This gives us the possibility to regularize the infrared divergency. Further
we use the traditional prescription $tr \,(1) \,=\,4$ for the spinor space in
the $D$-dimensional coordinate space.
\section{Infrared and collinear divergencies}
The gluon and quark production cross-sections contain infrared divergencies
which should be cancelled with the virtual corrections to the multi-Regge
processes. Using the expressions for products of amplitudes $c^{+-}$
presented in Appendix, we can calculate $R(+-)$ for $D=4$:
\begin{equation}
R(+-)=\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}4\left( \frac 1{
\overrightarrow{k_1}^2(\overrightarrow{k_1}-\overrightarrow{\Delta })^2}+
\frac{x^2}{\overrightarrow{k_1}^2(\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2}+\frac{(1-x)^2}{(\overrightarrow{k_1}-\overrightarrow{\Delta }%
)^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\right) .
\end{equation}
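As an added numerical sanity check (in the complex parametrization $k=k_x+ik_y$): near the singular point $\overrightarrow{k_1}\rightarrow x\overrightarrow{\Delta }$ this expression approaches $\overrightarrow{q_1}^2\overrightarrow{q_2}^2/(2\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2)$, i.e. the regularizing form $R^{(0)}+R^{(1)}$ of the previous section:

```python
# Check that the explicit D = 4 expression for R(+-) approaches
# q1^2 q2^2 / (2 Delta^2 (k1 - x*Delta)^2) as k1 -> x*Delta.
def R_plus_minus(q1sq, q2sq, delta, k1, x):
    # transverse vectors as complex numbers; vec(a)^2 = |a|^2
    return q1sq * q2sq / 4 * (
        1 / (abs(k1) ** 2 * abs(k1 - delta) ** 2)
        + x ** 2 / (abs(k1) ** 2 * abs(k1 - x * delta) ** 2)
        + (1 - x) ** 2 / (abs(k1 - delta) ** 2 * abs(k1 - x * delta) ** 2))

delta, x, q1sq, q2sq = 1.3 - 0.4j, 0.35, 1.7, 2.2
for eps in (1e-2, 1e-3, 1e-4):
    k1 = x * delta + eps * (0.6 + 0.8j)          # approach k1 -> x*Delta
    kappa_sq = abs(k1 - x * delta) ** 2
    limit = q1sq * q2sq / (2 * abs(delta) ** 2 * kappa_sq)
    print(eps, R_plus_minus(q1sq, q2sq, delta, k1, x) / limit)  # ratio -> 1
```

The ratio tends to one linearly in the distance from the singular point, as expected from the structure of the three terms.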
As for the bilinear combinations of $c^{++}$, we write them as sums of
singular and regular terms:%
$$
\left| c^{++}(k_1,k_2)\right| ^2=\left| c^{++}(k_1,k_2)\right|
_{sing}^2+\left| c^{++}(k_1,k_2)\right| _{reg}^2\,,
$$
$$
\left| c^{++}(k_2,k_1)\right| ^2=\left| c^{++}(k_2,k_1)\right|
_{sing}^2+\left| c^{++}(k_2,k_1)\right| _{reg}^2\,,
$$
\begin{equation}
c^{++}(k_1,k_2)c^{++}(k_2,k_1)=\left( c^{++}(k_1,k_2)c^{++}(k_2,k_1)\right)
_{sing}+\left( c^{++}(k_1,k_2)c^{++}(k_2,k_1)\right) _{reg}\,,
\end{equation}
where the singular terms are chosen, in accordance with the previous section,
as follows:
$$
\left| c^{++}(k_1,k_2)\right| _{sing}^2=\left| c^{+-}(k_1,k_2)\right|
^2+r(k_1,x)\,,
$$
$$
Re\,\left[ \left( c^{++}(k_1,k_2)\,c^{++}(k_2,k_1)\right) _{sing}\frac{%
k_1^{*}}{k_1} \frac{k_2^{*}}{k_2}\right]=
$$
$$
=Re\,\left[ c^{+-}(k_1,k_2)\,c^{-+}(k_2,k_1)\frac{k_1^{*}}{k_1}\frac{k_2}{%
k_2^{*}} \right]-\frac 12\,\left( r(k_1,x)+r(\Delta -k_1,1-x)\right) \,,
$$
\begin{equation}
r(k_1,x)=\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{
\Delta }^2}\left( x(1-x)-2\right) \frac{2x^2(1-x)\overrightarrow{k_1}
\overrightarrow{\Delta }}{\overrightarrow{k_1}^2(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}+2\,Re\,\frac{x^3(1-x)^2q_1^2q_2^{*2}\Delta }{%
\Delta ^{*2}(k_1-x\Delta )^2k_1}\;.
\end{equation}
They are written in terms of products of the amplitudes $c^{+-}$ from the
Appendix, which allows us to take into account their singularities in a
simple form. For the total sum $R_{sing}=R(+-)+R_{sing}(++)$ we obtain for $%
D=4$:
\begin{equation}
R_{sing}=2\;R(+-)+\frac 14\left( r(k_1,x)+r(\Delta -k_1,1-x)\right) .
\end{equation}
The singular contribution $R_{sing}$ contains all infrared and Regge
singularities of $R$. It decreases rapidly at large $k_1$. Integrals of
the regular terms are convergent everywhere. At fixed $x$ the quantity $R$
has singularities at $\overrightarrow{k_1}\rightarrow
0,\,\overrightarrow{k_2%
}\rightarrow 0$\thinspace and $\overrightarrow{k_1}\rightarrow x
\overrightarrow{\Delta }$. According to the results of the previous section
one can find it for an arbitrary dimension $D$ of space-time at small $k_i$
and $k_1-x\Delta $:
\begin{equation}
R\rightarrow \frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{
\overrightarrow{\Delta }^2\overrightarrow{k_i}^2}\,,\,R\rightarrow \frac{%
\left( 1-x(1-x)\right) ^2\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{
\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}
\end{equation}
after averaging over angles. The integration over these infrared regions
gives the result:
\begin{equation}
\frac{\mu ^{4-D}}\pi \int_{\inf r}d^{D-2}k_1\,R=\frac{\overrightarrow{q_1}^2
\overrightarrow{q_2}^2}{\overrightarrow{\Delta }^2}\left( \frac{\pi ^{\frac{%
D-4}2}}{\Gamma (\frac{D-2}2)}\,\frac 2{D-4}+\ln \,\frac{\Lambda ^2}{\mu ^2}%
\right) \left( 2+\left( 1-x(1-x)\right) ^2\right) ,
\end{equation}
where $\mu $ is the renormalization point and $\Lambda \ll \left| \Delta
\right| $ is an intermediate cut-off parameter: $\overrightarrow{k_i}%
^2<\Lambda ^2,\,(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2<\Lambda ^2$%
. In the integral over the region $\overrightarrow{k_i}^2>\Lambda ^2,\,(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2>\Lambda ^2$ we can put $D=4$%
:
$$
\frac 1\pi \int d^2k_1\,R_{sing}=
$$
\begin{equation}
\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta }%
^2}\left( \ln \frac{x(1-x)\overrightarrow{\Delta }^4}{\Lambda ^4}+\left(
1-x(1-x)\right) ^2\ln \,\frac{x(1-x)\overrightarrow{\Delta }^2}{\Lambda ^2}%
+x^2(1-x)^2Re\,\frac{q_1q_2^{*}}{q_1^{*}q_2}\right) .
\end{equation}
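For orientation (an added remark), the divergent structure in the infrared
result above comes from the elementary dimensionally regularized integral
over a disc of radius $\Lambda $:
$$
\frac{\mu ^{4-D}}\pi \int_{\overrightarrow{k}^2<\Lambda ^2}\frac{d^{D-2}k}{
\overrightarrow{k}^2}=\frac{2\pi ^{\frac{D-4}2}}{\Gamma (\frac{D-2}2)}\,
\frac 1{D-4}\left( \frac \Lambda \mu \right) ^{D-4}=\frac{\pi ^{\frac{D-4}2}}{
\Gamma (\frac{D-2}2)}\,\frac 2{D-4}+\ln \,\frac{\Lambda ^2}{\mu ^2}+O(D-4)\,,
$$
the regions $\overrightarrow{k_i}^2<\Lambda ^2$ supplying the weight $2$ and
the region $(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2<\Lambda ^2$
the weight $\left( 1-x(1-x)\right) ^2$ quoted there.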
The total contribution for fixed $x$ is
$$
\frac{\mu ^{4-D}}\pi \int d^{D-2}k_1\,R_{sing}=x^2(1-x)^2Re\,\frac{%
q_1^2q_2^{*2}}{\overrightarrow{\Delta }^2}+\frac{\overrightarrow{q_1}^2
\overrightarrow{q_2}^2}{\overrightarrow{\Delta }^2}\left( 1+\left(
1-x(1-x)\right) ^2\right) \ln \left( x(1-x)\right)
$$
\begin{equation}
+\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta
}%
^2}\left( \frac{\pi ^{\frac{D-4}2}}{\Gamma (\frac{D-2}2)}\,\frac 2{D-4}+\ln
\,\frac{\overrightarrow{\Delta }^2}{\mu ^2}\right) \left( 2+\left(
1-x(1-x)\right) ^2\right) .
\end{equation}
In the regions of small $x$ or $1-x$ one should substitute $R_{sing}$ by $%
2R(+-)$ and integrate it over the $D-2$ dimensional transverse space:
$$
\frac{\mu ^{4-D}}\pi \int d^{D-2}k_1\,R_{sing}=
$$
\begin{equation}
\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta }%
^2}\frac{2^{4-D}\pi ^{\frac{D-1}2}}{\Gamma (\frac{D-3}2)\sin \,(\pi \frac{D-
4%
}2)}\left| \frac{\overrightarrow{\Delta }^2}{\mu ^2}\right| ^{\frac{D-4}%
2}\left( 1+x^{D-4}+(1-x)^{D-4}\right) .
\end{equation}
Using the above formulas we can also integrate the result over $x$, taking
into account that the intermediate parameter $\delta $ should be considered
as small as possible, because the virtual corrections to the amplitudes in
the multi-Regge kinematics were calculated under this condition. In
particular, it means that the contribution from the region of small $x$ and $%
1-x$ is proportional to the expression:
$$
2\int_\delta ^\sigma \frac{d\,x}{x(1-x)}\left( 1+x^{D-4}+(1-x)^{D-4}\right)
$$
\begin{equation}
=\frac 2{D-4}+4\,\ln \,\frac \sigma \delta \,+2\ln \,\sigma \,+(D-4)\left(
\ln \,\sigma \right) ^2
\end{equation}
for small $\sigma $ and $D\rightarrow 4$. Thus, we finally obtain the
following contribution from $R_{sing}$ after its dimensional regularization:%
$$
\frac{\mu ^{4-D}}\pi \int_\delta ^{1-\delta }\frac{d\,x}{x(1-x)}\int
d^{D-2}k_1R_{sing}=\frac 16\,Re\,\frac{q_1^2q_2^{*2}}{\overrightarrow{\Delta
}^2}+\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{%
\Delta }^2}\left( \frac{67}{18}-4\frac{\pi ^2}6\right)
$$
\begin{equation}
+\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta
}%
^2}\frac{2^{4-D}\pi ^{\frac{D-1}2}}{\Gamma (\frac{D-3}2)\sin \,(\pi \frac{D-
4%
}2)}\left| \frac{\overrightarrow{\Delta }^2}{\mu ^2}\right| ^{\frac{D-4}%
2}\left( \frac 2{D-4}+4\ln \frac 1\delta \,-\frac{11}6\right) ,
\end{equation}
where it is implied that terms of the order $D-4$ should be
omitted. The infrared divergencies at $D\rightarrow 4$ in the above formulas
should be cancelled with the contribution from one-loop corrections to the
Reggeon-Reggeon-particle vertex [11].
Let us consider the region of small $\Delta $:
\begin{equation}
\left| \Delta \right| \ll \left| q_1\right| \simeq \left| q_2\right|
\end{equation}
which can lead to an infrared divergency in the generalised BFKL equation,
as was the case in the LLA. Here the essential integration region
corresponds to soft gluon transverse momenta:
\begin{equation}
k_1\sim k_2\sim \Delta \ll q
\end{equation}
where $q$ means $q_1$ or $q_2$. The expressions for $c^{+-}(k_1,k_2)$ and $%
c^{++}(k_1,k_2)$ in the soft region are given below:
$$
c^{+-}(k_1,k_2)\rightarrow c_{soft}^{+-}(k_1,k_2)=-x\frac{\overrightarrow{q}%
^2}{k_1^{*}(k_1-x\Delta )}\,,
$$
$$
c^{++}(k_1,k_2)\rightarrow c_{soft}^{++}(k_1,k_2)=
$$
\begin{equation}
=\frac{x\overrightarrow{q}^2}{\overrightarrow{\Delta }^2}\left(
\frac{k_2^2}{%
(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2}-\frac{k_1}{k_1^{*}-x\Delta ^{*}}-\frac{(1-x)^2\Delta ^2}{%
(k_1-x\Delta )k_1^{*}}+\frac{(2-x)\Delta }{k_1^{*}}\right) .
\end{equation}
If we write down $c_{soft}^{++}(k_1,k_2)$ in the form:
\begin{equation}
\frac{\overrightarrow{\Delta }^2}{\overrightarrow{q}^2}%
\,c_{soft}^{++}(k_1,k_2)=\frac \Delta {k_1^{*}}+\chi (k_1,k_2)\,,
\end{equation}
by extracting its singularity at $\overrightarrow{k_1}\rightarrow 0$, the
following symmetry property can be verified:
\begin{equation}
\left( \frac \Delta {k_2^{*}}+\chi (k_2,k_1)\right) \frac{k_1^{*}}{k_1}\,
\frac{k_2^{*}}{k_2}=\frac{\Delta ^{*}}{k_2}-\chi ^{*}(k_1,k_2)\,.
\end{equation}
Using this relation, we obtain for the bilinear combinations of
$c_{soft}^{++}(k_1,k_2)$ the following expressions:
$$
\frac{1}{x^2\overrightarrow{q}^4}\left|
c_{soft}^{++}(k_1,k_2)\right| ^2=\left( \frac{(1-x)\left(
\overrightarrow{k_1}-x \overrightarrow{\Delta }\, , (1-
2x)\overrightarrow{k_1}
+x\overrightarrow{\Delta }\right) }{(
(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2 +x(1-x)
\overrightarrow{\Delta }^2)(\overrightarrow{k_1}-x\overrightarrow{\Delta })
^2}\right) ^2
$$
$$
+\frac{(\overrightarrow{k_1}-x\overrightarrow{\Delta })
^2-(1-x)(3-4x)\overrightarrow{k_1}^2}{((\overrightarrow{k_1}-
x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2)(\overrightarrow{k_1}-
x\overrightarrow{\Delta })^2 \overrightarrow{k_1}^2}
+\frac{1-x}{\overrightarrow{k_1}^2\left( \overrightarrow{k_1}
-x\overrightarrow{\Delta }\right) ^2}\,,
$$
$$
c_{soft}^{++}(k_1,k_2)\,c_{soft}^{++}(k_2,k_1)\,\frac{k_1^{*}}{k_1}\,\frac{%
k_2^{*}}{k_2}=
$$
\begin{equation}
=-\frac 12\left( \left| c_{soft}^{++}(k_1,k_2)\right| ^2+\left|
c_{soft}^{++}(k_2,k_1)\right| ^2\right)
+\frac{\overrightarrow{q}^2}{2k_1k_2}%
\left( c_{soft}^{++}(k_1,k_2)+c_{soft}^{++}(k_2,k_1)\right) .
\end{equation}
As was done in the general case of fixed $\Delta $, the bilinear
combinations of $c_{soft}^{++}(k_1,k_2)$ can be presented in the following
form:%
$$
\left| c_{soft}^{++}(k_1,k_2)\right| ^2=\left| c_{soft}^{++}(k_1,k_2)\right|
_{sing}^2+\left| c_{soft}^{++}(k_1,k_2)\right| _{reg}^2\,,\,\,
c_{soft}^{++}(k_1,k_2)c_{soft}^{++}(k_2,k_1)=
$$
\begin{equation}
=\left( c_{soft}^{++}(k_1,k_2)c_{soft}^{++}(k_2,k_1)\right) _{sing}+\left(
c_{soft}^{++}(k_1,k_2)c_{soft}^{++}(k_2,k_1)\right) _{reg}
\end{equation}
where the singular soft contributions are defined as follows:
$$
\left| c_{soft}^{++}(k_1,k_2)\right| _{sing}^2=\left|
c_{soft}^{+-}(k_1,k_2)\right| ^2+r_s(k_1,x)\,,
$$
$$
Re\,\left[\left( c_{soft}^{++}(k_1,k_2)\,c_{soft}^{++}(k_2,k_1)\right)
_{sing} \frac{k_1^{*}}{k_1}\frac{k_2^{*}}{k_2}\right]=
$$
\begin{equation}
=Re\,\left[ c_{soft}^{+-}(k_1,k_2)\,c_{soft}^{-+}(k_2,k_1)
\frac{k_1^{*}}{k_1%
}\frac{k_2}{k_2^{*}}\right] -\frac 12\left( r_s(k_1,x)+r_s(\Delta
-k_1,1-x)\right) \,.
\end{equation}
The bilinear combinations of the amplitudes $c_{soft}^{+-}$ are obtained from
the formulas of the Appendix:
$$
\mid c_{soft}^{+-}(k_1,k_2)\mid ^2=\frac{x^2\overrightarrow{q}^4}{
\overrightarrow{k_1}^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}%
\,\,,
$$
$$
Re\,\left[c_{soft}^{+-}(k_1,k_2)\,c_{soft}^{-
+}(k_2,k_1)\,\frac{k_1^{*}}{k_1}\,
\frac{k_2}{k_2^{*}}\right] =
$$
\begin{equation}
=\frac{\overrightarrow{q}^4}2\left( \frac 1{\overrightarrow{k_1}^2(
\overrightarrow{k_1}-\overrightarrow{\Delta })^2}-
\frac{x^2}{\overrightarrow{%
k_1}^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}-\frac{(1-x)^2}{(
\overrightarrow{k_1}-\overrightarrow{\Delta })^2(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}\right) .
\end{equation}
The function $r_s$ is derived from $r$ at $\Delta \rightarrow 0$:
\begin{equation}
r_s(k_1,x)=\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2}\left(
x(1-x)-2\right) \frac{2x^2(1-x)\overrightarrow{k_1}\overrightarrow{\Delta }}{
\overrightarrow{k_1}^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}+2
\overrightarrow{q}^4Re\,\frac{x^3(1-x)^2\Delta }{\Delta ^{*2}(k_1-x\Delta
)^2k_1}\;.
\end{equation}
The result of integrating the singular soft terms, taking into
account the contribution of $c^{+-}$ and the dimensional regularization, can
be obtained from the general case of fixed $\Delta $ by putting $\Delta
\rightarrow 0$:
$$
\frac{\mu ^{4-D}}\pi \int_\delta ^{1-\delta }\frac{d\,x}{x(1-x)}\int
d^{D-2}k_1\left( R_{soft}\right) _{sing}
$$
\begin{equation}
=\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2}\left( \frac{35}9-4
\frac{\pi ^2}6+\frac{2^{4-D}\pi ^{\frac{D-1}2}}{\Gamma (\frac{D-3}2)\sin
\,(\pi \frac{D-4}2)}\left| \frac{\overrightarrow{\Delta }^2}{\mu ^2}\right|
^{\frac{D-4}2}\left( \frac 2{D-4}+4\ln \frac 1\delta \,-\frac{11}6\right)
\right) \,.
\end{equation}
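As a simple consistency check (an added remark), this constant agrees with
the $\Delta \rightarrow 0$ limit of the fixed-$\Delta $ result of the
previous section, where $q_1,q_2\rightarrow q$ and
$$
\frac 16\,Re\,\frac{q_1^2q_2^{*2}}{\overrightarrow{\Delta }^2}+\frac{
\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta }^2}\,
\frac{67}{18}\rightarrow \frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }
^2}\left( \frac 16+\frac{67}{18}\right) =\frac{\overrightarrow{q}^4}{
\overrightarrow{\Delta }^2}\,\frac{35}9\,.
$$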
The regular soft terms do not contain any divergency.
Their contributions are proportional to $
\overrightarrow{q}^4/\overrightarrow{\Delta }^2$:
$$
\int_0^1\frac{d\,x}{x(1-x)}\int \frac{d^2k_1}\pi \left|
c_{soft}^{++}(k_1,k_2)\right| _{reg}^2=
$$
$$
=\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2}\int_0^1d\,x\,\left(
\frac 2{1-x}\,\ln \,\frac 1x\,+2\,(2-x+x^2)\ln \,\frac
x{1-x}\,+1-8x+8x^2\right) =\frac{\overrightarrow{q}^4}{\overrightarrow{%
\Delta }^2}\left( \frac{\pi ^2}3-\frac 13\right) .
$$
For the interference
term we use the following relation, which can be derived from the above equations:
\begin{equation}
Re\,\left[\left( c_{soft}^{++}(k_1,k_2) c_{soft}^{++}(k_2,k_1)\right) _{reg}
\frac{k_1^*}{k_1}\frac{k_2^*}{k_2}\right]=-\frac 12\left( \left|
c_{soft}^{++}(k_1,k_2)\right| _{reg}^2+\left| c_{soft}^{++}(k_2,k_1)\right|
_{reg}^2\right)\,.
\end{equation}
Due to this relation we have
$$
\int_0^1\frac{d\,x}{x(1-x)}\int \frac{d^2k_1}\pi Re\,\left[\left(
c_{soft}^{++}(k_1,k_2)c_{soft}^{++}(k_2,k_1)\right)_{reg} \frac{k_1^{*}}{k_1}
\frac{k_2^{*}}{k_2}\right]
$$
\begin{equation}
=-\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2}\int_0^1d\,x\,%
\left( \frac 1{1-x}\,\ln \,\frac 1x\,+\frac 1x\,\ln \,\frac
1{1-x}+1-8x+8x^2\right) =-\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta
}^2}\left( \frac{\pi ^2}3-\frac 13\right)
\end{equation}
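Both $x$-integrals above can be confirmed numerically (an added cross-check; the midpoint rule suffices, since the logarithmic endpoint singularities are integrable):

```python
# Numerical check of the two x-integrals above: each equals pi^2/3 - 1/3.
from math import log, pi

def midpoint(f, n=200000):
    # simple midpoint rule on (0, 1); endpoints are never evaluated
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

f1 = lambda x: 2/(1-x)*log(1/x) + 2*(2 - x + x*x)*log(x/(1-x)) + 1 - 8*x + 8*x*x
f2 = lambda x: 1/(1-x)*log(1/x) + 1/x*log(1/(1-x)) + 1 - 8*x + 8*x*x

target = pi**2/3 - 1.0/3
print(midpoint(f1) - target, midpoint(f2) - target)  # both ~ 0
```

The two integrands differ, but both integrate to $\pi ^2/3-1/3$, which is why the regular direct and interference contributions cancel in the sum.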
The total contribution of the singular and regular terms in the soft region
$%
\Delta \rightarrow 0$ is
$$
\frac{\mu ^{4-D}}\pi \int_\delta ^{1-\delta }\frac{d\,x}{x(1-x)}\int
d^{D-2}k_1\,R_{soft}
$$
\begin{equation}
=\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2} \left(
\frac{67}{18}%
- \frac{\pi ^2}2+\frac{2^{4-D}\pi ^{\frac{D-1}2}}{\Gamma (\frac{D-3}2)\sin
\,(\pi \frac{D-4}2)}\left| \frac{\overrightarrow{\Delta }^2}{\mu ^2}\right|
^{\frac{D-4}2}\left( \frac 2{D-4}+4\ln \frac 1\delta \,-\frac{11}6\right)
\right) \,.
\end{equation}
Since the integration over $\Delta$ in the generalised BFKL equation leads
to the infrared divergency at $\Delta \,=\,0$ for $D \rightarrow 4$, it
would be useful in the soft region $\Delta \rightarrow 0$ to obtain the
contribution of the real gluon production taking into account terms
vanishing at $D \rightarrow 4$. This can be done starting with the following
expression for the tensor $c^{\alpha_1\alpha_2}(k_1,k_2)$ (see Eq.(55)) in
the soft region:
$$
\frac{1}{x\overrightarrow{q}^2}c_{soft}^{\alpha_1\alpha_2}(k_1,k_2)=
- \frac{x k_1^{\alpha_1}((2-x)k_1^{\alpha_2}- \Delta^{\alpha_2})}
{\overrightarrow{k_1}^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}
$$
\begin{equation}
-(1-x)\frac{\delta^{\alpha_1\alpha_2}\left(\overrightarrow{k_1}-
x\overrightarrow{\Delta}\, , (1-2x)\overrightarrow{k_1}+
x\overrightarrow{\Delta}\right) +2x k_2^{\alpha_1}k_2^{\alpha_2}}
{2((\overrightarrow{k_1}-x\overrightarrow{\Delta })^2 +x(1-x)
\overrightarrow{\Delta }^2)(\overrightarrow{k_1}-x\overrightarrow{\Delta })
^2}\,.
\end{equation}
and
$$
\frac{1}{x\overrightarrow{q}^2}c_{soft}^{\alpha'_1\alpha_2}(k_1,k_2)
\Omega^{\alpha'_1\alpha_1}(k_1)=
- \frac{k_1^{\alpha_1}k_2^{\alpha_2}}
{((\overrightarrow{k_1}-x\overrightarrow{\Delta })^2 +x(1-x)
\overrightarrow{\Delta }^2)\overrightarrow{k_1}^2}
$$
\begin{equation}
-(1-x)\frac{\delta^{\alpha_1\alpha_2}\left(\overrightarrow{k_1}-
x\overrightarrow{\Delta}\, , (1-2x)\overrightarrow{k_1}+
x\overrightarrow{\Delta}\right) - 2(1-x)k_1^{\alpha_1}\Delta^{\alpha_2}+
2x\Delta^{\alpha_1}k_2^{\alpha_2}}
{2((\overrightarrow{k_1}-x\overrightarrow{\Delta })^2 +x(1-x)
\overrightarrow{\Delta }^2)(\overrightarrow{k_1}-x\overrightarrow{\Delta })
^2}\,.
\end{equation}
Using the above equations we obtain
$$
\frac{1}{x^2\overrightarrow{q}^4}(c_{soft}^{\alpha_1\alpha_2}(k_1,k_2))^2 =
(D-2)\left( \frac{(1-x)\left(
\overrightarrow{k_1}-x \overrightarrow{\Delta }\, , (1-
2x)\overrightarrow{k_1}
+x\overrightarrow{\Delta }\right) }{2(
(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2 +x(1-x)
\overrightarrow{\Delta }^2)(\overrightarrow{k_1}-x\overrightarrow{\Delta })
^2}\right) ^2
$$
\begin{equation}
+\frac{(\overrightarrow{k_1}-x\overrightarrow{\Delta })
^2-(1-x)(3-4x)\overrightarrow{k_1}^2}{2((\overrightarrow{k_1}-
x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2)
(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2 \overrightarrow{k_1}^2}
+\frac{2-x}{2\overrightarrow{k_1}^2( \overrightarrow{k_1}-
x\overrightarrow{\Delta }) ^2}\,,
\end{equation}
$$
c_{soft}^{\alpha_1\alpha_2}(k_1,k_2)c_{soft}^{\alpha'_2\alpha'_1}(k_2,k_1)
\Omega^{\alpha_1\alpha'_1}(k_1)\Omega^{\alpha_2\alpha'_2}(k_2)=
$$
\begin{equation}
-\frac{1}{2}\left((c_{soft}^{\alpha_1\alpha_2}(k_1,k_2))^2+
(c_{soft}^{\alpha_2\alpha_1}(k_2,k_1))^2\right)+\frac{\overrightarrow{q}^4}
{2\overrightarrow{k_1}^2\overrightarrow{k_2}^2}\,.
\end{equation}
Performing the integration we obtain for arbitrary $D$:
$$
\frac{\mu ^{4-D}}{\pi} \int_\delta ^{1-\delta }\frac{d\,x}{x(1-x)}\int
d^{D-2}k_1\, (c_{soft}^{\alpha_1\alpha_2}(k_1,k_2))^2
=\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2}
\left(\frac{\pi\overrightarrow{\Delta }^2}{\mu ^2}\right)^{\frac{D-4}{2}}
\Gamma (3-\frac{D}{2})\times
$$
$$
\times\int_\delta ^{1-\delta }d\,x \left[
\frac{(2-x)\Gamma^2({D\over 2}-2)x^{D-5}}{2(1-x)\Gamma(D-4)}
+\frac{1}{2(1-x)}\int_0 ^x \frac{d\,z}{(z(1-z))^{3-{D\over 2}}}\right.
$$
$$
\left.+(x(1-x))^{{D\over 2}-2}\left({{D-2}\over 4}(1-2x)^2 +{1\over{D-
4}}\left({1\over{1-x}}-4+(6-D)x(1-x)\right)\right)\right]
$$
\begin{equation}
=\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2}
\left(\frac{\pi\overrightarrow{\Delta }^2}{\mu ^2}\right)^{\frac{D-4}{2}}
\Gamma (3-\frac{D}{2})\frac{\Gamma^2({D\over 2}-1)}
{\Gamma(D-2)}
\end{equation}
$$
\times\left[\frac{4}{(D-4)^2}+\frac{D-2}{2(D-4)(D-1)}+\frac{2(D-3)}
{D-4}\left( 2 \ln \frac{1}{\delta} +\psi(1)+\psi({D\over 2}-1)-2\psi(D-3)
\right)\right]\, ,
$$
where $\psi(x) =\frac{\Gamma'(x)}{\Gamma(x)}$, and
$$
\frac{\mu ^{4-D}}{\pi} \int_\delta ^{1-\delta }\frac{d\,x}{x(1-x)}\int
d^{D-2}k_1\,c_{soft}^{\alpha_1\alpha_2}(k_1,k_2)
c_{soft}^{\alpha'_2\alpha'_1}(k_2,k_1)
\Omega^{\alpha_1\alpha'_1}(k_1)\Omega^{\alpha_2\alpha'_2}(k_2)
$$
$$
=\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2}
\left(\frac{\pi\overrightarrow{\Delta }^2}{\mu ^2}\right)^{\frac{D-4}{2}}
\Gamma (3-\frac{D}{2})\int_\delta ^{1-\delta }d\,x
\left[-\frac{\Gamma^2({D\over 2}-2)}
{4\Gamma(D-4)}\left(\frac{2-x}{1-x}x^{D-5}+\frac{1+x}{x}(1-x)^{D-5}\right)
\right.
$$
$$
\left.+(x(1-x))^{{D\over 2}-2}\left(-{(D-2)\over 4}(1-2x)^2 +{1\over {D-
4}}\left(4-{1\over {2(1-x)}}-(6-D)x(1-x)\right)\right)\right.
$$
$$
\left.+\frac{2-x}{4x(1-x)}\int_0 ^x \frac{d\,z}{(z(1-z))^{3-{D\over 2}}}
+\frac{1+x}{4x(1-x)}\int_0 ^{1-x} \frac{d\,z}{(z(1-z))^{3-{D\over 2}}}
\right]
$$
\begin{equation}
=\frac{\overrightarrow{q}^4}{\overrightarrow{\Delta }^2}
\left(\frac{\pi\overrightarrow{\Delta }^2}{\mu ^2}\right)^{\frac{D-4}{2}}
\Gamma (3-\frac{D}{2})\frac{\Gamma^2({D\over 2}-1)}
{\Gamma(D-2)}
\end{equation}
$$
\times\left[-\frac{4}{(D-4)^2}-\frac{D-2}{2(D-4)(D-1)}+\frac{2(D-3)}
{D-4}\left( -\psi(1)-\psi({D\over 2}-1)+2\psi(D-3)
\right)\right]\, .
$$
Note that it is possible to modify the definition of the soft and hard
parts of $c^{++}(k_1,k_2)$:
$$
c^{++}(k_1,k_2)=\widetilde{c}_{soft}^{++}(k_1,k_2)+\widetilde{c}%
_{hard}^{++}(k_1,k_2)\,,
$$
$$
\frac 1x\,\widetilde{c}_{soft}^{++}(k_1,k_2)=\frac{q_1^{*}q_2k_1(k_1-x\Delta
)-(2-x)q_1q_2^{*}\Delta k_1+q_1q_2^{*}\,\Delta ^2}{\overrightarrow{\Delta }%
^2\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }
$$
$$
-\frac{q_1^{*}q_2k_1}{\overrightarrow{\Delta }^2(k_1^{*}-x\Delta ^{*})}%
+(2-x) \frac{q_1q_2^{*}}{\Delta ^{*}k_1^{*}}-(1-x)^2\frac{\Delta
q_1q_2^{*}}{%
\Delta ^{*}(k_1-x\Delta )k_1^{*}}\,,
$$
$$
\frac 1x\,\widetilde{c}_{hard}^{++}(k_1,k_2)=-\,\frac{Q_1^2}{(
\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2}
$$
\begin{equation}
+\frac{q_1^{*}k_1(k_1-x\Delta )-(2-x)q_1\Delta ^{*}k_1+q_1\overrightarrow{%
\Delta }^2}{\Delta ^{*}\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta
}%
)^2+x(1-x)\overrightarrow{\Delta }^2\right) }-\frac{q_2^{*}k_1}{\Delta
^{*}k_1^{*}}
\end{equation}
in such a way as to include in the comparatively simple expression for $
\widetilde{c}_{soft}^{++}(k_1,k_2)$ all singular terms without losing
its good behaviour at large $k_1$. In the Regge limit $x\rightarrow 1$ and
fixed $k_i$ we obtain
\begin{equation}
\widetilde{c}_{soft}^{++}\rightarrow \frac{q_1q_2^{*}}{k_1^{*}k_2^{*}}
\end{equation}
and therefore in this region the contributions of the exact amplitude $%
c^{++} $ and the approximate one $\widetilde{c}_{soft}^{++}$ to the
differential cross-section coincide. However, as a result of the singularity
at $k_1=x\Delta $, after integration over $k_1$ these contributions to the
total cross-section in the Regge limit $x\rightarrow 1$ turn out to be
different (see below).
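The stated Regge limit can be checked numerically (an added cross-check, in the complex parametrization $k=k_x+ik_y$ with $\overrightarrow{q_2}=\overrightarrow{q_1}-\overrightarrow{\Delta }$ and $k_2=\Delta -k_1$; vector squares in the denominators are moduli squared):

```python
# Check of the Regge limit x -> 1 of c~_soft^{++}(k1, k2) at fixed k1, k2.
# Transverse vectors are complex numbers k = k_x + i*k_y; vec(a)^2 = |a|^2.
def c_soft_tilde(q1, q2, delta, k1, x):
    kappa2 = abs(k1 - x * delta) ** 2            # (k1 - x Delta)^2
    d2 = abs(delta) ** 2                          # vec Delta^2
    return x * ((q1.conjugate() * q2 * k1 * (k1 - x * delta)
                 - (2 - x) * q1 * q2.conjugate() * delta * k1
                 + q1 * q2.conjugate() * delta ** 2)
                / (d2 * (kappa2 + x * (1 - x) * d2))
                - q1.conjugate() * q2 * k1
                  / (d2 * (k1.conjugate() - x * delta.conjugate()))
                + (2 - x) * q1 * q2.conjugate()
                  / (delta.conjugate() * k1.conjugate())
                - (1 - x) ** 2 * delta * q1 * q2.conjugate()
                  / (delta.conjugate() * (k1 - x * delta) * k1.conjugate()))

q1, delta, k1 = 1.1 + 0.4j, 0.9 - 0.7j, 0.5 + 0.3j
q2, k2 = q1 - delta, delta - k1
regge = q1 * q2.conjugate() / (k1.conjugate() * k2.conjugate())
for x in (0.99, 0.9999, 1.0):
    print(x, abs(c_soft_tilde(q1, q2, delta, k1, x) / regge - 1))  # -> 0
```

At fixed $k_1$ the deviation vanishes linearly in $1-x$, confirming the pointwise coincidence in the Regge region.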
One can verify the following relation:
$$
\frac{k_1^{*}}{k_1}\,\frac{k_2^{*}}{k_2}\,\widetilde{c}%
_{soft}^{++}(k_2,k_1)= \frac{q_1q_2^{*}}{\Delta k_2}+\frac{q_1^{*}q_2}{%
\Delta k_1}+\frac{(k_1^{*}-x\Delta ^{*})(q_1q_2^{*}-q_1^{*}q_2)}{\Delta
\left( ( \overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }-\left( \widetilde{c}%
_{soft}^{++}(k_1,k_2)\right) ^{*}
$$
which can be used to simplify the interference terms after the subtraction
of the above singular terms:
$$
Re\,\left[ \left( \widetilde{c}_{soft}^{++}(k_1,k_2)\widetilde{c}%
_{soft}^{++}(k_2,k_1)\right) _{reg}\frac{k_1^{*}}{k_1}\frac{k_2^{*}}{k_2}%
\right] =-\frac 12\left( \left| \widetilde{c}_{soft}^{++}(k_1,k_2)\right|
_{reg}^2+\left| \widetilde{c}_{soft}^{++}(k_2,k_1)\right| _{reg}^2\right)
$$
$$
+Re\,\left[ \frac{q_1q_2^{*}-q_1^{*}q_2}{2\Delta }\,\left( \frac{k_1-k_2}{%
2k_1k_2}+\frac{k_1^{*}-x\Delta ^{*}}{(\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2}\right) \left( \widetilde{c}%
_{soft}^{++}(k_1,k_2)-\widetilde{c}_{soft}^{++}(k_2,k_1)\right) \right]
$$
\begin{equation}
-\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{2\,\overrightarrow{k_1}%
^2\overrightarrow{k_2}^2}+Re\,\left[ \frac{\overrightarrow{q_1}
\overrightarrow{q_2}}{2k_1k_2}\,\left( \widetilde{c}_{soft}^{++}(k_1,k_2)+
\widetilde{c}_{soft}^{++}(k_2,k_1)\right) \right] .
\end{equation}
The result of the integration of this contribution is%
$$
\int_0^{1-\widetilde{\delta }}\frac{d\,x}{x(1-x)}\int \frac{d^2k_1}\pi
\left| \widetilde{c}_{soft}^{++}(k_1,k_2)\right| _{reg}^2=
$$
$$
=\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta
}%
^2}\int_0^1d\,x\,\left( \frac 2{1-x}\,\ln \,\frac 1x\,+2\,(2-x+x^2)\ln
\,\frac x{1-x}\,+1-8x+8x^2\right)
$$
\begin{equation}
+\frac 4{\overrightarrow{\Delta }^2}\left( \overrightarrow{q_1}^2
\overrightarrow{q_2}^2-(\overrightarrow{q_1}\overrightarrow{q_2})^2\right)
\int_0^{1-\widetilde{\delta }}d\,x\,\frac x{1-x}(1-4x+2x^2)\,,
\end{equation}
$$
\int_{\widetilde{\delta }}^{1-\widetilde{\delta }}\frac{d\,x}{x(1-x)}\int
\frac{d^2k_1}\pi \,Re\,\left[ \left( \widetilde{c}_{soft}^{++}(k_1,k_2)
\widetilde{c}_{soft}^{++}(k_2,k_1)\right) _{reg}\frac{k_1^{*}}{k_1}\frac{%
k_2^{*}}{k_2}\right]=
$$
\begin{equation}
-\int_{\widetilde{\delta }%
}^{1-\widetilde{\delta }}\frac{d\,x}{x(1-x)}
\left[ 2\,\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2-
(\overrightarrow{q_1}\overrightarrow{q_2})^2}{\overrightarrow{\Delta}^2}
+ \int
\frac{d^2k_1}\pi \,\frac 12\left( \left| \widetilde{c}_{soft}^{++}(k_1,k_2)%
\right| _{reg}^2+\left| \widetilde{c}_{soft}^{++}(k_2,k_1)\right|
_{reg}^2\right)\right],
\end{equation}
where we introduced the infinitesimal parameter $\widetilde{\delta }$,
different from $\delta $, to show that the divergency at $x=1$ of the
regularized bilinear combinations of $\widetilde{c}$ should cancel in the
total regularized contribution of $c$. This cancellation will be
demonstrated in the next paper.
We now return to the quark production amplitude $c(k_1,k_2)$ and present it
in a form analogous to the gluon case:
\begin{equation}
c(k_1,k_2)=c_{sing}(k_1,k_2)+c_{reg}(k_1,k_2)\,,
\end{equation}
where the singular and regular terms are chosen as follows:
$$
\frac 1xc_{sing}(k_1,k_2)=-\frac{(1-x)\Delta q_1q_2^{*}k_1^{*}-x\Delta
^{*}q_1^{*}q_2k_1+x\overrightarrow{\Delta }^2q_1^{*}q_2}{\overrightarrow{%
\Delta }^2\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }+\frac{(1-x)q_1q_2^{*}}{\Delta
^{*}(k_1-x\Delta )}-\frac{xq_1^{*}q_2}{\Delta (k_1^{*}-x\Delta ^{*})}
$$
and
\begin{equation}
\frac 1xc_{reg}(k_1,k_2)=\frac{(1-x)q_1k_1^{*}-xq_1^{*}k_1+x\overrightarrow{%
q_1}^2}{\left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) }-\frac{(1-x)q_1k_1^{*}-xq_1^{*}k_1+xq_1^{*}%
\Delta }{\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }\,.
\end{equation}
The term $c_{reg}$ does not lead to any divergency and gives a regular
contribution to the BFKL kernel in the soft region $\Delta \rightarrow 0$.
The singular term $c_{sing}$ has the important symmetry property:
\begin{equation}
\frac 1{1-x}\,c_{sing}(k_2,k_1)=\frac 1x\,c_{sing}^{*}(k_1,k_2)\,,
\end{equation}
which is obvious from another representation of it:
$$
\frac 1xc_{sing}(k_1,k_2)=
$$
\begin{equation}
=x(1-x)\,\frac{(1-x)\Delta q_1q_2^{*}(k_1^{*}-x\Delta ^{*})-x\Delta
^{*}q_1^{*}q_2(k_1-x\Delta )-2\,(\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2\overrightarrow{q_1}\overrightarrow{q_2}}{(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2\left( (\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }\,.
\end{equation}
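As a sanity check, the symmetry property can be verified numerically from this second representation. In the sketch below, transverse vectors are encoded as complex numbers, and the identifications $k_2=\Delta-k_1$ and $x\to 1-x$ under exchange of the produced gluons are our reading of the conventions, not stated explicitly in the text:

```python
# Numerical spot-check of the symmetry property
#   (1/(1-x)) c_sing(k2, k1) = (1/x) c_sing^*(k1, k2),
# using the second representation of c_sing.  Transverse vectors are
# encoded as complex numbers (k = k_x + i k_y), so |v|^2 = abs(v)**2
# and v1.v2 = Re(conj(v1) v2).  The identifications k2 = Delta - k1 and
# x -> 1-x under exchange of the produced gluons are assumptions here.

def dot(a, b):
    """Euclidean scalar product of two transverse vectors."""
    return (a.conjugate() * b).real

def c_sing_over_x(x, k1, q1, q2, delta):
    """(1/x) c_sing(k1, k2) from the second representation above."""
    km = k1 - x * delta                       # k1 - x*Delta
    num = x * (1 - x) * (
        (1 - x) * delta * q1 * q2.conjugate() * km.conjugate()
        - x * delta.conjugate() * q1.conjugate() * q2 * km
        - 2 * abs(km) ** 2 * dot(q1, q2))
    den = abs(km) ** 2 * (abs(km) ** 2 + x * (1 - x) * abs(delta) ** 2)
    return num / den

# Arbitrary sample kinematics
x = 0.37
q1, q2, delta, k1 = 0.8 + 0.3j, -0.5 + 1.1j, 1.3 - 0.8j, 0.4 - 0.7j
k2 = delta - k1

lhs = c_sing_over_x(1 - x, k2, q1, q2, delta)           # (1/(1-x)) c_sing(k2, k1)
rhs = c_sing_over_x(x, k1, q1, q2, delta).conjugate()   # (1/x) c_sing^*(k1, k2)
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
print("symmetry property verified numerically")
```

The check passes for any choice of complex $q_1$, $q_2$, $\Delta$, $k_1$ and $0<x<1$, since the relation is an exact algebraic identity of the representation.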
Using this property, we obtain the following contributions from the bilinear
combinations of $c_{sing}$, taking into account the dimensional
regularization of the singularity at $\overrightarrow{k_1}\rightarrow x
\overrightarrow{\Delta }$ (using eq. (111) and averaging over angles):
$$
\frac{\mu ^{4-D}}{\pi}\int_0^1\frac{d\,x}{x^2}\int d^{D-2}\,k_1
\left|c_{sing}(k_1,k_2)\right| ^2=\frac{\mu ^{4-D}}{\pi}\int_0^1
\frac{d\,x}{x(1-x)}\int d^{D-2}\,k_1 \,c_{sing}(k_1,k_2)c_{sing}(k_2,k_1)\,
$$
$$
=\frac{\mu ^{4-D}}{\pi}\int_0^1 d\,x\,x^2(1-x)^2\int d^{D-2}\,k_1\,\frac{
\overrightarrow{\Delta }^2\overrightarrow{q_1}^2\overrightarrow{q_2}^2\left(
1-\frac{4x(1-x)}{D-2}\right) +4(\overrightarrow{k_1}-x\overrightarrow{\Delta
})^2\left( \overrightarrow{q_1},\overrightarrow{q_2}\right) ^2}{(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2\left(
(\overrightarrow{k_1}%
-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) ^2}
$$
$$
=\frac{\Gamma (3-\frac D2)}{\overrightarrow{\Delta }^2}\int_0^1d\,x\,
\left( \frac{\pi \overrightarrow{\Delta }^2}{\mu ^2}x(1-x)\right) ^{\frac
D2-2}\times
$$
$$
\times\left( 4x(1-x)(\overrightarrow{q_1},\overrightarrow{q_2})^2+\frac{6-
D}{%
D-4}\left( 1-\frac{4x(1-x)}{D-2}\right) \overrightarrow{q_1}^2
\overrightarrow{q_2}^2\right)
$$
\begin{equation}
=\frac 1{\overrightarrow{\Delta }^2}\left( \frac{\pi \overrightarrow{\Delta
}%
^2}{\mu ^2}\right) ^{\frac D2-2}\Gamma (3-\frac D2)\frac{\Gamma (\frac
D2)\Gamma (\frac D2)}{\Gamma (D)}\,4\,\left( \frac{6-D}{D-4}\,
\overrightarrow{q_1}^2\overrightarrow{q_2}^2+\left( \overrightarrow{q_1},
\overrightarrow{q_2}\right) ^2\right)
\end{equation}
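The $x$-integration that produces the $\Gamma$-function factors above is of Euler Beta type, $\int_0^1 x^{a-1}(1-x)^{b-1}\,dx=\Gamma(a)\Gamma(b)/\Gamma(a+b)$. A minimal numerical illustration of that identity for the exponents $(x(1-x))^{D/2-2}$ appearing here; the value $D=4.5$ is ours, chosen only so that everything is finite away from $D=4$:

```python
import math

def beta_numeric(a, b, n=200_000):
    """Midpoint-rule estimate of the Euler Beta integral B(a, b)."""
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
                   for i in range(n))

def beta_gamma(a, b):
    """B(a, b) expressed through Gamma functions."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Exponents of the x-dependence (x(1-x))^{D/2-2} of the regularized
# integrand: x^{a-1}(1-x)^{b-1} with a = b = D/2 - 1.
D = 4.5
a = b = D / 2 - 1
assert abs(beta_numeric(a, b) - beta_gamma(a, b)) < 1e-5
print(beta_gamma(a, b))
```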
At fixed $\overrightarrow{\Delta }$ the singularity of this expression at $D\rightarrow 4$ is cancelled by the contribution from the fermion
correction to the RRP vertex [12]. The integration over $\Delta $ in the generalised BFKL equation leads to an infrared divergence
at $\Delta =0$ for $D\rightarrow 4$:
$$
-16N_cn_f\pi ^{\frac D2-1}\left( \frac{g^2\,\mu ^{4-D}}{2(2\pi )^{D-1}}%
\right) ^2\Gamma (2-\frac D2)\frac{\Gamma (\frac D2)\Gamma (\frac D2)}{%
\Gamma (D)}\int d^{D-2}\Delta \left( \overrightarrow{\Delta }^2\right)
^{\frac D2-3}.
$$
The quark correction to the RRP vertex, expressed in terms of the bare
coupling constant, does not give a singular contribution at small $\Delta $.
This divergence is cancelled by the doubled contribution of the quark
correction to the gluon Regge trajectory [13]:
$$
\omega _q^{(2)}(-\overrightarrow{q}^2)=\frac{N_cn_f\pi ^{\frac D2-1}g^4}{%
(2\pi )^{2D-2}\mu ^{D-4}}\,\Gamma (2-\frac D2)\frac{\Gamma ^2(\frac D2)}{%
\Gamma (D)}\int \frac{d^{D-2}q_1\,\overrightarrow{q}^2}{\overrightarrow{q_1}%
^2\left( \overrightarrow{q}-\overrightarrow{q_1}\right) ^2}\left( 2(\frac{
\overrightarrow{q_1}}\mu )^{D-4}-(\frac{\overrightarrow{q}}\mu
)^{D-4}\right) .
$$
\section{Conclusion}
In this paper, using the helicity representation, we simplified the gluon and
quark production amplitudes in the quasi-multi-Regge kinematics for the
final states (see eqs. (59) and (72)). The corresponding next-to-leading
contributions to the integral kernel of the BFKL equation were expressed in
terms of the integrals of the squares of these helicity amplitudes over the
transverse and longitudinal momenta of the produced particles (see eqs. (82) and
(108)). All infrared divergences are extracted from these expressions in an
explicit form and are regularized in $D$-dimensional space. These
divergences should cancel against the analogous divergences from the virtual
corrections to the BFKL equation, which were calculated earlier in
refs. [11]-[13]. At the end of the previous section the cancellation of the
infrared divergences was demonstrated for the fermion contribution.
The total
next-to-leading correction to the integral kernel can be calculated in an
explicit form in terms of dilogarithm integrals. We shall do this in our
next publications.
\vspace{1cm} \noindent
{\large {\bf Acknowledgements}}\\
We want to thank Universit\"at Hamburg, DESY-Hamburg and DESY-Zeuthen for
their hospitality during our stay in Germany. One of us (L.N.L.) is indebted
to the Alexander-von-Humboldt Foundation for the award, which gave him the
possibility to work on this problem during the last year. The fruitful
discussions with J. Bartels, J. Bl\"umlein, V. Del Duca, E. Kuraev, H.
Lotter and T. Ohl were very helpful. \newpage
\noindent
{\Large {\bf Appendix}}
For the quantities entering $R(+-)$ we obtain the simple expressions:
$$
\mid c^{+-}(k_1,k_2)\mid ^2=\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}%
^2}{\overrightarrow{\Delta }^2}\mid \frac 1{k_1-x\Delta }-\frac 1{k_1}\mid
^2=\frac{x^2\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{k_1
}^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\,\,,
$$
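The second equality above rests on the algebraic identity $1/(k_1-x\Delta)-1/k_1 = x\Delta/\big(k_1(k_1-x\Delta)\big)$ for the complex momenta. A one-line numerical check (the sample values are arbitrary):

```python
# Check |1/(k1 - x*Delta) - 1/k1|^2 = x^2 |Delta|^2 / (|k1|^2 |k1 - x*Delta|^2),
# with transverse vectors encoded as complex numbers.
x = 0.42
k1, delta = 0.9 - 0.2j, -0.3 + 0.65j

lhs = abs(1 / (k1 - x * delta) - 1 / k1) ** 2
rhs = x ** 2 * abs(delta) ** 2 / (abs(k1) ** 2 * abs(k1 - x * delta) ** 2)
assert abs(lhs - rhs) < 1e-12
print(lhs)
```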
$$
Re\,c^{+-}(k_1,k_2)\,c^{-
+}(k_2,k_1)\,\frac{k_1^{*}}{k_1}\,\frac{k_2}{k_2^{*}%
}=
$$
$$
=\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta
}%
^2}Re\,\left( \frac 1{k_1-x\Delta }-\frac 1{k_1}\right) \left( \frac
1{x\,\Delta ^{*}-k_1^{*}}-\frac 1{\Delta ^{*}-k_1^{*}}\right)
$$
$$
=\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}2\left( \frac 1{
\overrightarrow{k_1}^2(\overrightarrow{k_1}-\overrightarrow{\Delta })^2}-
\frac{x^2}{\overrightarrow{k_1}^2(\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2}-\frac{(1-x)^2}{(\overrightarrow{k_1}-\overrightarrow{\Delta }%
)^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\right) .
$$
More complicated expressions are obtained for the second contribution. We
begin with the squared amplitude:
$$
x^{-2}\,\mid c^{++}(k_1,k_2)\mid ^2=4\frac{\overrightarrow{q_1}^2}{
\overrightarrow{\Delta }^2}\,\frac{[(\overrightarrow{q_1}-
\overrightarrow{k_1%
})\times (\overrightarrow{\Delta }-\overrightarrow{k_1})]^2}{(
\overrightarrow{k_1}^2-2x\overrightarrow{q_1}\overrightarrow{k_1}+x
\overrightarrow{q_1}^2)(\overrightarrow{k_1}^2-2x\overrightarrow{\Delta }
\overrightarrow{k_1}+x\overrightarrow{\Delta }^2)}
$$
$$
+\left[ 1-\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}%
+(1-x)\left( \frac{\overrightarrow{q_1}^2-2\overrightarrow{q_1}
\overrightarrow{k_1}}{\overrightarrow{k_1}^2-2x\overrightarrow{q_1}
\overrightarrow{k_1}+x\overrightarrow{q_1}^2}-\frac{\overrightarrow{q_1}^2}{
\overrightarrow{\Delta }^2}\frac{\overrightarrow{\Delta }^2-
2\overrightarrow{%
\Delta }\overrightarrow{k_1}}{\overrightarrow{k_1}^2-2x\overrightarrow{%
\Delta }\overrightarrow{k_1}+x\overrightarrow{\Delta }^2}\right) \right] ^2
$$
$$
+2\,Re\,\left[ \left( \frac{(q_1-k_1)^2}{\overrightarrow{k_1}^2-2x
\overrightarrow{q_1}\overrightarrow{k_1}+x\overrightarrow{q_1}^2}-\frac{
\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\frac{(\Delta -k_1)^2}{
\overrightarrow{k_1}^2-2x\overrightarrow{\Delta } \overrightarrow{k_1}+x
\overrightarrow{\Delta }^2}\right) \right.
$$
$$
\left. \left( \frac{q_2}{\Delta k_1}\left( \frac{(1-x)^2\Delta ^{*}q_1^{*}}{%
k_1^{*}-x\Delta ^{*}}-q_1^{*}(2-x)+k_1^{*}\right) +\frac{k_1^{*}q_1q_2^{*}}{
\overrightarrow{\Delta }^2(k_1-x\Delta )}\right) \right]
$$
$$
+\frac{(1-x)^2}{x^2}\,\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{
\overrightarrow{\Delta }^2}\mid \frac{1-x}{k_1-x\Delta }-\frac 1{k_1}\mid
^2+ \frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{%
\Delta }^4}\,\mid 1+\frac{x\Delta }{k_1-x\Delta }\mid ^2
$$
$$
+\frac{\overrightarrow{q_2}^2}{\overrightarrow{\Delta }^2}\mid \frac{q_1 }{%
k_1}-1\mid ^2+\frac{1-x}{\overrightarrow{\Delta }^2}\,2\,Re\,\,\frac{%
q_1^2(q_2^{*})^2}{\Delta ^{*}}\left( \frac{(1-x)\Delta }{(k_1-x\Delta )^2}%
-\frac 1{k_1-x\Delta }\right)
$$
$$
-\frac{2\overrightarrow{q_2}^2}{\overrightarrow{\Delta }^2}Re\left[ \frac{1-
x%
}xq_1\left( \frac{1-x}{k_1-x\Delta }-\frac 1{k_1}\right) \left( \frac{%
q_1^{*} }{k_1^{*}}-1\right) +\frac{q_1^{*}q_2}{\Delta q_2^{*}}\left( \frac{%
q_1^{*}-x\Delta ^{*}}{k_1^{*}-x\Delta ^{*}}-1\right) \right] .
$$
It can be presented in the following form, convenient for the subsequent
integration:
$$
x^{-2}\mid c^{++}(k_1,k_2)\mid ^2=
$$
$$
=(1-x)^2\left[ \frac{\overrightarrow{q_1}^2-2\overrightarrow{k_1}
\overrightarrow{q_1}}{(\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2}-\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta
}^2}\,\frac{\overrightarrow{\Delta }^2-2\overrightarrow{k_1}\overrightarrow{%
\Delta }}{(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2}\right] ^2
$$
$$
-2(1-x)\overrightarrow{q_1}^2\,\frac{(\overrightarrow{q_2},\overrightarrow{%
q_1}-\overrightarrow{k_1})^2+(\overrightarrow{q_2},\overrightarrow{\Delta }-
\overrightarrow{k_1})^2-\overrightarrow{q_2}^2(\overrightarrow{q_2}^2+
\overrightarrow{q_1}\overrightarrow{\Delta }-\overrightarrow{k_1}(
\overrightarrow{q_1}+\overrightarrow{\Delta }))}{\overrightarrow{\Delta }%
^2\left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
+2\overrightarrow{q_1}^2\overrightarrow{q_2}^2\,\frac{(\overrightarrow{q_1}-
\overrightarrow{k_1},\overrightarrow{\Delta }-\overrightarrow{k_1})}{
\overrightarrow{\Delta }^2\left( (\overrightarrow{k_1}-x\overrightarrow{q_1}%
)^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
+\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta
}%
^2}\left( \frac{2-x}{x\overrightarrow{k_1}^2}-\frac{2-x}{x(\overrightarrow{%
k_1}-x\overrightarrow{\Delta })^2}+\frac{4-4x+2x^2}{(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}-\frac{4-4x+2x^2}{(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2}\right)
$$
$$
+\frac{(1-x)^2\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{%
k_1}^2(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}-\frac{2(1-x+x^2)
\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\left( (\overrightarrow{k_1}-x
\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \overrightarrow{%
\Delta }^2}
$$
$$
+\frac{2(1-x)\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\left(
\frac{%
(3-2x)\overrightarrow{q_1}\overrightarrow{q_2}-(2-x)\overrightarrow{q_2}^2}{%
( \overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-
x)\overrightarrow{q_1}^2}%
- \frac{(3-2x)\overrightarrow{q_1}\overrightarrow{q_2}-(2-x)\overrightarrow{%
q_2}^2}{(\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2}\right)
$$
$$
+2\,Re\,\left( (1-x)\frac{(q_1)^2}{\overrightarrow{\Delta }^2}\,\frac{%
q_1^{*}(x\Delta q_2^{*}-2\Delta ^{*}q_2)+k_1^{*}(\Delta ^{*}q_2-x\Delta
q_2^{*})}{k_1\left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) }\right.
$$
$$
\left. +\frac{q_1q_2^{*}}{\overrightarrow{\Delta }^2}\,\frac{%
xq_1^{*}(q_1-x\Delta )^2-2(1-x)^2q_1^{*}\Delta
^2+(1-x)k_1^{*}q_1(q_1-x\Delta )+(1-x)^2k_1^{*}\Delta ^2}{(k_1-x\Delta
)\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }\right.
$$
$$
\left. +\frac{x^2(1-x)^2\overrightarrow{q_1}^2q_1q_2\Delta ^{*}}{\Delta
k_1(k_1^{*}-x\Delta ^{*})\left( (\overrightarrow{k_1}-x\overrightarrow{q_1}%
)^2+x(1-x)\overrightarrow{q_1}^2\right) }\right.
$$
$$
\left. +\frac{x^2\overrightarrow{q_1}^2q_2\Delta \left(
2(1-x)q_1^{*}+xq_2^{*}-(1-x)k_1^{*}\right) }{\overrightarrow{\Delta }%
^2k_1\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }\right.
$$
$$
\left. +\frac{x^2(1-x)^2\overrightarrow{q_1}^2q_1^{*}q_2\left(
2(1-x)k_1\Delta -2k_1^2-\Delta ^2\right) }{\Delta ^2 k_1(k_1^{*}-x\Delta
^{*})\left( ( \overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }\right.
$$
$$
\left. +\frac{x^2(1-x)q_1q_2^{*}\Delta ((1-x)q_2+xq_1)}{\overrightarrow{%
\Delta }^2k_1(k_1-x\Delta )}+\frac{x^2(1-x)^2(q_1^{*}q_2)^2}{\Delta
^2(k_1^{*}-x\Delta ^{*})^2}\,\right) .
$$
For the interference term we obtain an even more complicated expression:
$$
c^{++}(k_1,k_2)\,c^{++}(k_2,k_1)\,\frac{k_1^{*}}{k_1}\,\frac{k_2^{*}}{k_2}=
$$
$$
\frac{x(1-x)q_1^4\left( x(2-x)q_1^{*}+(1-x)^2k_1^{*}\right) \left(
q_1^{*}-x^2(q_2^{*}+k_1^{*})\right) }{k_1(\Delta -k_1)\left( (
\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) \left( (\overrightarrow{k_1}+\overrightarrow{q_2}-
x\overrightarrow{%
q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) }
$$
$$
-\frac{x(1-x)^2q_1\overrightarrow{q_1}^2\left(
x(2-x)q_1^{*}+(1-x)^2k_1^{*}\right) \left( (2+x)q_1-\Delta +k_1\right) }{%
k_1\left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}+\overrightarrow{%
q_2}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) }
$$
$$
-\frac{x^2(1-x)q_1\overrightarrow{q_1}^2\left( (3-x)q_1-k_1\right) \left(
q_1^{*}-x^2(q_2^{*}+k_1^{*})\right) }{(\Delta -k_1)\left( (\overrightarrow{%
k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (
\overrightarrow{k_1}+\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) }
$$
$$
+\frac{x^2(1-x)^2(q_1^{*})^2\left( (3-x)q_1-k_1\right) \left(
(2+x)q_1-\Delta +k_1\right) }{\left( (\overrightarrow{k_1}-x\overrightarrow{%
q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}+
\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) }
$$
$$
+\frac{x(1-x)\overrightarrow{q_1}^4\Delta ^2\left( x(2-x)\Delta
^{*}+(1-x)^2k_1^{*}\right) \left( \Delta ^{*}-x^2k_1^{*}\right) }{(\Delta
^{*})^2k_1(\Delta -k_1)\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta
}%
)^2+x(1-x)\overrightarrow{\Delta }^2\right) ^2}
$$
$$
-\frac{x(1-x)^2\overrightarrow{q_1}^4\left( x(2-x)\Delta
^{*}+(1-x)^2k_1^{*}\right) \left( (1+x)\Delta +k_1\right) }{k_1\Delta
^{*}\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) ^2}
$$
$$
-\frac{x^2(1-x)\overrightarrow{q_1}^4\left( (3-x)\Delta -k_1\right) \left(
\Delta ^{*}-x^2k_1^{*}\right) }{\Delta ^{*}(\Delta -k_1)\left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) ^2}
$$
$$
+\frac{x^2(1-x)^2\overrightarrow{q_1}^4\left( (3-x)\Delta -k_1\right) \left(
(1+x)\Delta +k_1\right) }{\Delta ^2\left( (\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) ^2}
$$
$$
-\frac{x(1-x)q_1^2\overrightarrow{q_1}^2\Delta \left(
x(2-x)q_1^{*}+(1-x)^2k_1^{*}\right) \left( \Delta ^{*}-x^2k_1^{*}\right) }{%
k_1\Delta ^{*}(\Delta -k_1)\left( (\overrightarrow{k_1}-
x\overrightarrow{q_1}%
)^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
+\frac{x(1-x)^2q_1^2\overrightarrow{q_1}^2\left(
x(2-x)q_1^{*}+(1-x)^2k_1^{*}\right) \left( (1+x)\Delta +k_1\right) }{%
k_1\Delta \left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
+\frac{x^2(1-x)q_1^{*}\overrightarrow{q_1}^2\Delta \left(
(3-x)q_1-k_1\right) \left( \Delta ^{*}-x^2k_1^{*}\right) }{\Delta
^{*}(\Delta -k_1)\left( (\overrightarrow{k_1}-x\overrightarrow{q_1}%
)^2+x(1-x) \overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
-\frac{x^2(1-x)^2q_1^{*}\overrightarrow{q_1}^2\left( (3-x)q_1-k_1\right)
\left( (1+x)\Delta +k_1\right) }{\Delta \left( (\overrightarrow{k_1}-x
\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) }
$$
$$
-\frac{x(1-x)q_1^2\overrightarrow{q_1}^2\Delta \left( x(2-x)\Delta
^{*}+(1-x)^2k_1^{*}\right) \left( q_1^{*}-x^2(q_2^{*}+k_1^{*})\right) }{%
\Delta ^{*}k_1(\Delta -k_1)\left( (\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) \left( (\overrightarrow{%
k_1}+\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-
x)\overrightarrow{q_1}%
^2\right) }
$$
$$
+\frac{x(1-x)^2q_1^{*}\overrightarrow{q_1}^2\Delta \left( x(2-x)\Delta
^{*}+(1-x)^2k_1^{*}\right) \left( (2+x)q_1-\Delta +k_1\right) }{\Delta
^{*}k_1\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) \left( (\overrightarrow{k_1}+
\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) }
$$
$$
+\frac{x^2(1-x)q_1^2\overrightarrow{q_1}^2\left( (3-x)\Delta -k_1\right)
\left( q_1^{*}-x^2(q_2^{*}+k_1^{*})\right) }{\Delta (\Delta -k_1)\left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) \left( (\overrightarrow{k_1}+\overrightarrow{q_2}-x
\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) }
$$
$$
-\frac{x^2(1-x)^2q_1^{*}\overrightarrow{q_1}^2\left( (3-x)\Delta -k_1\right)
\left( (2+x)q_1-\Delta +k_1\right) }{\Delta \left( (\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) \left( (
\overrightarrow{k_1}+\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) }
$$
$$
+x^3q_1q_2^{*}\,\frac{xq_1^{*}k_1\left( (3-x)q_1-k_1\right) -q_1^2\left(
x(2-x)q_1^{*}+(1-x)^2k_1^{*}\right) }{\Delta ^{*}k_1(k_1-x\Delta )\left( (
\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) }
$$
$$
-x^3q_1q_2^{*}\,\frac{xq_1^{*}k_1\left( (3-x)q_1-k_1\right) -q_1^2\left(
x(2-x)q_1^{*}+(1-x)^2k_1^{*}\right) }{\Delta ^{*}k_1(k_1-\Delta )\left( (
\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) }
$$
$$
+x(1-x)^2q_1^{*}q_2\,\frac{xq_1^{*}k_1\left( (3-x)q_1-k_1\right)
-q_1^2\left( x(2-x)q_1^{*}+(1-x)^2k_1^{*}\right) }{\Delta
k_1(k_1^{*}-x\Delta ^{*})\left( (\overrightarrow{k_1}-x\overrightarrow{q_1}%
)^2+x(1-x)\overrightarrow{q_1}^2\right) }
$$
$$
+x^3\overrightarrow{q_1}^2q_1q_2^{*}\,\frac{x\Delta ^{*}k_1\left(
(x-3)\Delta +k_1\right) +\Delta ^2\left( x(2-x)\Delta
^{*}+(1-x)^2k_1^{*}\right) }{\Delta ^{*}\overrightarrow{\Delta }%
^2k_1(k_1-x\Delta )\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta }%
)^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
-x^3\overrightarrow{q_1}^2q_1q_2^{*}\,\frac{x\Delta ^{*}k_1\left(
(x-3)\Delta +k_1\right) +\Delta ^2\left( x(2-x)\Delta
^{*}+(1-x)^2k_1^{*}\right) }{\Delta ^{*}\overrightarrow{\Delta }%
^2k_1(k_1-\Delta )\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta }%
)^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
+x(1-x)^2\overrightarrow{q_1}^2q_1^{*}q_2\,\frac{x\Delta ^{*}k_1\left(
(x-3)\Delta +k_1\right) +\Delta ^2\left( x(2-x)\Delta
^{*}+(1-x)^2k_1^{*}\right) }{\Delta \overrightarrow{\Delta }%
^2k_1(k_1^{*}-x\Delta ^{*})\left( (\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
+(1-x)^3q_1q_2^{*}\,\frac{(1-x)q_1^{*}(\Delta -k_1)\left( \Delta
-(2+x)q_1-k_1\right) +q_1^2\left( q_1^{*}-x^2(q_2^{*}+k_1^{*})\right) }{%
\Delta ^{*}(\Delta -k_1)(k_1-x\Delta )\left( (\overrightarrow{\Delta }-
\overrightarrow{k_1}-(1-x)\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) }
$$
$$
-(1-x)^3q_1q_2^{*}\,\frac{(1-x)q_1^{*}(\Delta -k_1)\left( \Delta
-(2+x)q_1-k_1\right) +q_1^2\left( q_1^{*}-x^2(q_2^{*}+k_1^{*})\right) }{%
\Delta ^{*}(\Delta -k_1)k_1\left( (\overrightarrow{\Delta }-\overrightarrow{%
k_1}-(1-x)\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) }
$$
$$
+x^2(1-x)q_1^{*}q_2\,\frac{(1-x)q_1^{*}(\Delta -k_1)\left( \Delta
-(2+x)q_1-k_1\right) +q_1^2\left( q_1^{*}-x^2(q_2^{*}+k_1^{*})\right) }{%
\Delta (\Delta -k_1)(k_1^{*}-x\Delta ^{*})\left( (\overrightarrow{\Delta }-
\overrightarrow{k_1}-(1-x)\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) }
$$
$$
+(1-x)^3\overrightarrow{q_1}^2q_1q_2^{*}\,\frac{(1-x)\Delta ^{*}(\Delta
-k_1)\left( (1+x)\Delta +k_1\right) -\Delta ^2\left( \Delta
^{*}-x^2k_1^{*}\right) }{\Delta ^{*}\overrightarrow{\Delta }^2(k_1-x\Delta
)(\Delta -k_1)\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta }%
)^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
-(1-x)^3\overrightarrow{q_1}^2q_1q_2^{*}\,\frac{(1-x)\Delta ^{*}(\Delta
-k_1)\left( (1+x)\Delta +k_1\right) -\Delta ^2\left( \Delta
^{*}-x^2k_1^{*}\right) }{\Delta ^{*}\overrightarrow{\Delta }^2k_1(\Delta
-k_1)\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }
$$
$$
+x^2(1-x)\overrightarrow{q_1}^2q_1^{*}q_2\,\frac{(1-x)\Delta ^{*}(\Delta
-k_1)\left( (1+x)\Delta +k_1\right) -\Delta ^2\left( \Delta
^{*}-x^2k_1^{*}\right) }{\Delta \overrightarrow{\Delta }^2(k_1^{*}-x\Delta
^{*})(\Delta -k_1)\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta }%
)^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
+\frac{(q_1q_2^{*})^2}{(\Delta ^{*})^2}\left( \frac{x^2}{k_1}-\frac{x^2}{%
k_1-x\Delta }\right) \left( \frac{(1-x)^2}{k_1-x\Delta }-\frac{(1-x)^2}{%
k_1-\Delta }\right) -\frac{x^2(1-x)^2(q_1^{*}q_2)^2}{\Delta
^2(k_1^{*}-x\Delta ^{*})^2}
$$
$$
+\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2}{\overrightarrow{\Delta
}%
^2}\left( \frac{(1-x)^4}{k_1(k_1^{*}-x\Delta ^{*})}+\frac{x^4}{(k_1-\Delta
)(k_1^{*}-x\Delta ^{*})}-\frac{x^4+(1-x)^2}{(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}\right) .
$$
Let us now consider quark-antiquark production. The square of the amplitude
$c(k_1,k_2)$ is given below:
$$
\frac 1{x^2}\mid c(k_1,k_2)\mid ^2=
$$
$$
\frac{\mid (1-x)q_1k_1^{*}-xq_1^{*}k_1+x\overrightarrow{q_1}^2\mid ^2}{%
\left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{%
q_1}^2\right) ^2}+\frac{\overrightarrow{q_1}^4}{\overrightarrow{\Delta }^4}%
\, \frac{\mid (1-x)\Delta k_1^{*}-x\Delta ^{*}k_1+x\overrightarrow{\Delta }%
^2\mid ^2}{\left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) ^2}
$$
$$
+2Re\,\left( -\,\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,
\frac{\left( (1-x)q_1k_1^{*}-xq_1^{*}k_1+x\overrightarrow{q_1}^2\right)
\left( (1-x)\Delta ^{*}k_1-x\Delta k_1^{*}+x\overrightarrow{\Delta }%
^2\right) }{\left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}-x\overrightarrow{%
\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }\right.
$$
$$
\left. +\frac{(1-x)q_1^{*}k_1-xq_1k_1^{*}+x\overrightarrow{q_1}^2}{(
\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2}%
\left( \frac{(1-x)q_1q_2^{*}}{\Delta ^{*}(k_1-x\Delta )}-\frac{xq_1^{*}q_2}{%
\Delta (k_1^{*}-x\Delta ^{*})}\right) \right.
$$
$$
\left. -\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,\frac{%
(1-x)\Delta ^{*}k_1-x\Delta k_1^{*}+x\overrightarrow{\Delta }^2}{(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2}\left( \frac{(1-x)q_1q_2^{*}}{\Delta ^{*}(k_1-x\Delta )}-\frac{%
xq_1^{*}q_2}{\Delta (k_1^{*}-x\Delta ^{*})}\right) \right.
$$
$$
\left. -\frac{x(1-x)q_1^2(q_2^{*})^2}{(\Delta ^{*})^2(k_1-x\Delta )^2}%
\,\right) \,+\frac{\overrightarrow{q_1}^2\overrightarrow{q_2}^2\left(
x^2+(1-x)^2\right) }{\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2}\,.
$$
This expression can be written in the following form, convenient for the
subsequent integration:
$$
\frac 1{x^2}\,\mid c(k_1,k_2)\mid ^2=
$$
$$
x(1-x)\left( \frac{\overrightarrow{q_1}^4\left( 6x(1-x)-1\right) +4(1-2x)
\overrightarrow{q_1}^2\overrightarrow{q_1}(\overrightarrow{k_1}-x
\overrightarrow{q_1})-2Re\,q_1^2(k_1^{*}-xq_1^{*})^2}{\left( (
\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) ^2}\right.
$$
$$
\left. +\frac{\overrightarrow{q_1}^4}{\overrightarrow{\Delta }^4}\,\frac{
\overrightarrow{\Delta }^4\left( 6x(1-x)-1\right) +4(1-2x)\overrightarrow{%
\Delta }^2\overrightarrow{\Delta }(\overrightarrow{k_1}-x\overrightarrow{%
\Delta })-2Re\,\Delta ^2(k_1^{*}-x\Delta ^{*})^2}{\left(
(\overrightarrow{k_1%
}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) ^2}%
\right)
$$
$$
+2\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,Re\,\frac{%
x(1-x)(2k_1^2 q_1^{*}\Delta ^{*}-k_1 \overrightarrow{q_1}^2 \Delta^* -q_1
\overrightarrow{\Delta}^2 k_1^{*})+x^2\Delta q_1^{*}\left( k_1\Delta
^{*}+q_1k_1^{*}-q_1\Delta ^{*}\right) }{\left( (\overrightarrow{k_1}-x
\overrightarrow{q_1})^2+x(1-x) \overrightarrow{q_1}^2\right) \left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) }
$$
$$
+\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\left(
x^2+(1-x)^2\right) \,\left( \frac{-2x\overrightarrow{q_1}\overrightarrow{%
\Delta }(2\overrightarrow{k_1}\overrightarrow{q_1}-\overrightarrow{q_1}^2)
}{%
\left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{%
q_1}^2\right) \left( (\overrightarrow{k_1}-x\overrightarrow{\Delta }%
)^2+x(1-x)\overrightarrow{\Delta }^2\right) }\right.
$$
$$
\left. +\frac{\overrightarrow{q_1}^2-\overrightarrow{q_2}^2}{(
\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2}-
\frac{\overrightarrow{q_1}^2}{(\overrightarrow{k_1}-x\overrightarrow{\Delta
}%
)^2+x(1-x)\overrightarrow{\Delta }^2}+\frac{\overrightarrow{q_2}^2}{(
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2}\right)
$$
$$
+2Re\,\left( \frac{xq_1q_2^{*}}{\Delta ^{*}(k_1-x\Delta )}\,\frac{%
(1-x)q_1(q_1^{*}-2k_1^{*})+\Delta q_1^{*}(x^2+(1-x)^2)-x\overrightarrow{q_1}%
^2}{(\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-
x)\overrightarrow{q_1}%
^2}\right.
$$
$$
\left. +\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,\,\frac{%
2\Delta k_1^{*}-2(1-x)\overrightarrow{\Delta }^2}{(\overrightarrow{k_1}-
x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2}\,\,\frac{%
x(1-x)q_1q_2^{*}}{\Delta ^{*}(k_1-x\Delta )}-\frac{x(1-x)(q_1q_2^{*})^2}{%
(\Delta ^{*})^2(k_1-x\Delta )^2}\right) \,.
$$
The interference term for quark-antiquark production is given below:
$$
\frac 1{x(1-x)}\,c(k_1,k_2)\,c(k_2,k_1)=
$$
$$
\frac{\overrightarrow{k_1}^2\overrightarrow{q_1}^2\left( x^2+(1-x)^2\right)
-x(1-x)\,2Re\,(k_1^2q_1^{*}\,^2)+x\overrightarrow{q_1}^2\left( xq_1\Delta
^{*}+(1-x)q_1^{*}q_2\right) }{\left( (\overrightarrow{k_1}-x\overrightarrow{%
q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}+
\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}%
^2\right) }
$$
$$
+\frac{xq_1^{*}k_1\left( (1-x)q_1^{*}\Delta -xq_1\Delta ^{*}\right)
+k_1^{*}\left( (1-2x)\overrightarrow{q_1}^2q_1+(1-x)q_1(xq_1\Delta
^{*}-(1-x)q_1^{*}\Delta )\right) }{\left( (\overrightarrow{k_1}-x
\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (
\overrightarrow{k_1}+\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) }
$$
$$
+\frac{\overrightarrow{q_1}^4}{\overrightarrow{\Delta }^4}\,\frac{
\overrightarrow{k_1}^2\overrightarrow{\Delta }^2\left( x^2+(1-x)^2\right)
-x(1-x)\,2Re\,(k_1^2\Delta ^{*}{}^2)+x(1-2x)\overrightarrow{\Delta }%
^2\,2Re\,(k_1\Delta ^{*})+x^2\overrightarrow{\Delta }^4}{\left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) ^2}
$$
$$
-\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,\frac{
\overrightarrow{k_1}^2\left( (1-x)^2\Delta q_1^{*}+x^2\Delta ^{*}q_1\right)
-x(1-x)\,2Re\,\left( k_1^2\Delta ^{*}q_1^{*}\right) +x\overrightarrow{\Delta
}^2\left( xq_1\Delta ^{*}+(1-x)q_1^{*}q_2\right) }{\left( (\overrightarrow{%
k_1}+\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-
x)\overrightarrow{q_1}%
^2\right) \left( (\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)
\overrightarrow{\Delta }^2\right) }
$$
$$
-\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,\frac{x\Delta
^{*}k_1\left( (1-x)q_1^{*}(2\Delta -q_1)-xq_1\Delta ^{*}\right) +\Delta
k_1^{*}\left( (1-x)(xq_1\Delta ^{*}+(1-x)q_1^{*}q_2)-x^2q_1\Delta
^{*}\right) }{\left( (\overrightarrow{k_1}+\overrightarrow{q_2}-x
\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (
\overrightarrow{k_1}-x \overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) }
$$
$$
-\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,\frac{
\overrightarrow{k_1}^2\left( x^2q_1^{*}\Delta +(1-x)^2q_1\Delta ^{*}\right)
-x(1-x)\,2Re\,\left( k_1^2q_1^{*}\Delta ^{*}\right) +x^2\overrightarrow{q_1}%
^2\overrightarrow{\Delta }^2}{\left( (\overrightarrow{k_1}-x\overrightarrow{%
q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (\overrightarrow{k_1}-x
\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
-\frac{\overrightarrow{q_1}^2}{\overrightarrow{\Delta }^2}\,\frac{%
xq_1^{*}\Delta ^{*}k_1\left( (1-x)q_1-x\Delta \right) +xq_1\Delta
k_1^{*}\left( (1-x)\Delta ^{*}-xq_1^{*}\right) } {\left(
(\overrightarrow{k_1%
}-x\overrightarrow{q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) \left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) }
$$
$$
+\frac{x^2\overrightarrow{q_1}^2q_2^{*}k_1-xq_1q_2^{*}\left(
(1-x)q_1k_1^{*}+x\overrightarrow{q_1}^2\right) }{\Delta ^{*}(k_1-x\Delta
)\left( (\overrightarrow{k_1}-x\overrightarrow{q_1})^2+x(1-
x)\overrightarrow{%
q_1}^2\right) }
$$
$$
+\frac{(1-x)^2\overrightarrow{q_1}^2q_2k_1^{*}-x(1-x)q_1^{*2}q_2(k_1-q_1)}{%
\Delta (k_1^{*}-x\Delta ^{*})\left( (\overrightarrow{k_1}-x\overrightarrow{%
q_1})^2+x(1-x)\overrightarrow{q_1}^2\right) }
$$
$$
+\frac{(1-x)^2\overrightarrow{q_1}^2q_2^{*}k_1+(1-x)q_1q_2^{*}\left(
xq_1(\Delta ^{*}-k_1^{*})+(1-x)q_1^{*}q_2\right) }{\Delta ^{*}(k_1-x\Delta
)\left( (\overrightarrow{k_1}+\overrightarrow{q_2}-x\overrightarrow{q_1}%
)^2+x(1-x)\overrightarrow{q_1}^2\right) }
$$
$$
+\frac{x^2\overrightarrow{q_1}^2q_2k_1^{*}-xq_1^{*}q_2\left( xq_1\Delta
^{*}+(1-x)q_1^{*}(k_1+q_2)\right) }{\Delta (k_1^{*}-x\Delta ^{*})\left( (
\overrightarrow{k_1}+\overrightarrow{q_2}-x\overrightarrow{q_1})^2+x(1-x)
\overrightarrow{q_1}^2\right) }
$$
$$
-\frac{\overrightarrow{q_1}^2q_1q_2^{*}}{\overrightarrow{\Delta }^2\Delta
^{*}}\,\frac{\Delta ^{*}k_1(x^2+(1-x)^2)-x\Delta \left(
2(1-x)k_1^{*}-(1-2x)\Delta ^{*}\right) }{(k_1-x\Delta )\left( (
\overrightarrow{k_1}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{%
\Delta }^2\right) }
$$
$$
-\frac{\overrightarrow{q_1}^2q_1^{*}q_2}{\overrightarrow{\Delta }^2\Delta }%
\, \frac{\Delta k_1^{*}((1-x)^2+x^2)-2x(1-x)\Delta ^{*}k_1+x(1-2x)
\overrightarrow{\Delta }^2}{(k_1^{*}-x\Delta ^{*})\left(
(\overrightarrow{k_1%
}-x\overrightarrow{\Delta })^2+x(1-x)\overrightarrow{\Delta }^2\right) }
$$
$$
+\left( x^2+(1-x)^2\right) \frac{\overrightarrow{q_1}^2\overrightarrow{q_2}%
^2 }{\overrightarrow{\Delta }^2(\overrightarrow{k_1}-x\overrightarrow{\Delta
})^2}-2x(1-x)\,Re\,\frac{q_1^2q_2^{*2}}{\Delta ^{*2}(k_1-x\Delta )^2}\,.
$$
\newpage
\noindent
In the limit of exact $SU(3)$ flavor symmetry, Coleman and Glashow \cite{coleman} first derived a set of relations among magnetic moments of the octet baryons. Their celebrated relations read
\begin{eqnarray}
\begin{array}{lcl}
\mu_{\Sigma^+}^{SU(3)} = \mu_p^{SU(3)}, & \qquad \qquad & \mu_{\Sigma^-}^{SU(3)} + \mu_n^{SU(3)} = -\mu_p^{SU(3)}, \\[4mm]
2\mu_\Lambda^{SU(3)} = \mu_n^{SU(3)}, & \qquad \qquad & \mu_{\Xi^-}^{SU(3)} = \mu_{\Sigma^-}^{SU(3)}, \\[4mm]
\mu_{\Xi^0}^{SU(3)} = \mu_n^{SU(3)}, & \qquad \qquad & 2\mu_{\Sigma^0\Lambda}^{SU(3)} = -\sqrt{3}\mu_n^{SU(3)}, \label{eq:cg}
\end{array}
\label{eq:treeval}
\end{eqnarray}
along with the isospin relation
\begin{equation}
\mu_{\Sigma^+}^{SU(3)} - 2 \mu_{\Sigma^0}^{SU(3)} + \mu_{\Sigma^-}^{SU(3)} = 0, \label{eq:isos}
\end{equation}
where $\mu_B^{SU(3)}$ represents the magnetic moment of the octet baryon $B$ in the $SU(3)$ symmetry limit.
Beyond the symmetry limit, various methods have been implemented for the evaluation of baryon magnetic moments. An important selection of such methods prior to 2009 can be found in Ref.~\cite{rfm09}; a more recent analysis in the context of covariant chiral perturbation theory was presented in Ref.~\cite{xiao}. One of the earliest methods is chiral perturbation theory. Caldi and Pagels pointed out that nonanalytic corrections of orders $\mathcal{O}(m_q^{1/2})$ and $\mathcal{O}(m_q \ln m_q)$ in the perturbative series are calculable \cite{caldi}. They tackled the former and found them to be as large as the lowest-order values, which would indicate a breakdown of the perturbative expansion. It was not until the arrival of heavy baryon chiral perturbation theory (HBCHPT), first introduced by Jenkins and Manohar \cite{jm91a,jm91b}, that some aspects of the theory were properly understood. When the method was applied to the renormalization of the baryon axial current, chiral logarithmic corrections to the axial couplings in hyperon semileptonic decays were found to be as large as the lowest-order values when \textit{only intermediate octet baryons were included in the loops} \cite{jm91a}. The inclusion of both octet and decuplet baryons in the loops considerably reduced the corrections with respect to the case with octet states alone \cite{jm91b}. The cancellation pointed out phenomenologically in Refs.~\cite{jm91a,jm91b} was later proved in the context of the $1/N_c$ expansion of QCD in Refs.~\cite{dm91a,dm91b,djm94,djm95,dai}, where $N_c$ is the number of color charges.
The earliest analysis of the magnetic moments of octet baryons in HBCHPT to orders $\mathcal{O}(m_q^{1/2})$ and $\mathcal{O}(m_q \ln m_q)$ was presented in Ref.~\cite{jen92}. There, it was concluded that, unlike the axial current case, the inclusion of intermediate decuplet baryons in the loops does not partially cancel the contribution from intermediate octet baryons. The use of the combined formalism of the $1/N_c$ and chiral expansions \cite{march,jen96} has shed light on the subject \cite{rfm09,rfm14} by allowing one to perform a rigorous analytical evaluation of the cancellations that follow from the large-$N_c$ spin-flavor symmetry of baryons. In Ref.~\cite{rfm09}, one-loop corrections to magnetic moments to relative order $1/N_c^3$ in the $1/N_c$ expansion were carried out in the limit $\Delta\to 0$, where $\Delta \equiv M_T-M_B$ is the average decuplet-octet mass difference. A more refined analysis was later presented in Ref.~\cite{rfm14}, where the assumption of degenerate intermediate baryons was lifted and explicit $SU(3)$ symmetry breaking (SB) effects were also included.
The aim of the present paper is to improve the analyses of Refs.~\cite{rfm09,rfm14} in a few aspects. Mainly, all $1/N_c$ corrections to the baryon magnetic moments allowed for $N_c=3$ will be evaluated, motivated by a recent calculation of the baryon axial couplings \cite{rfm21}. While corrections of order $\mathcal{O}(m_q^{1/2})$ will be carried out for nonzero $\Delta$, corrections of order $\mathcal{O}(m_q \ln m_q)$ will keep the $\Delta=0$ assumption for reasons that will become apparent later. Complete expressions for all 27 magnetic moments of octet and decuplet baryons and decuplet-octet transition moments are provided. Despite their length, the analytical forms are fairly simple and organized in a way that is easy to handle. Their main usefulness lies in the fact that they can be used to perform an \textit{analytical} comparison with the available expressions obtained in conventional HBCHPT (the effective field theory with no $1/N_c$ expansion) in Ref.~\cite{jen92}. Therefore, the main contribution of this paper is to show explicitly that baryon chiral perturbation theory in the large-$N_c$ limit and HBCHPT analyses of baryon octet magnetic moments fully agree at the physical value $N_c=3$ for $N_f=3$ flavors of light quarks.
The organization of the paper is as follows. Some introductory aspects of large-$N_c$ chiral perturbation theory are provided in Sec.~\ref{sec:introln}; in passing, notation and conventions are introduced. After a brief review of baryon magnetic moments at tree level in Sec.~\ref{sec:tree}, the discussion is focused on the computation of one-loop corrections in Sec.~\ref{sec:1l}; because of their different group theoretical properties, corrections of orders $\mathcal{O}(m_q^{1/2})$ and $\mathcal{O}(m_q\ln m_q)$ are studied separately in Secs.~\ref{sec:mq} and \ref{sec:mqlnmq}, respectively, followed by their corresponding analytical comparisons with HBCHPT results. The issue of explicit SB is reviewed in Sec.~\ref{sec:sb}, based on the analysis of Ref.~\cite{rfm14}. Gathering together all partial results allows one to carry out a numerical analysis to determine the free parameters of the theory, by making a least-squares fit to the available data \cite{part}. Results are presented in Sec.~\ref{sec:num} and some closing remarks are provided in Sec.~\ref{sec:con}. The paper is complemented by five appendixes where the complete although lengthy formulas of baryon magnetic moments are relegated.
\section{\label{sec:introln}Operator analysis in the $1/N_c$ expansion}
The $1/N_c$ expansion has been very useful in understanding the spin-flavor structure of baryons in QCD \cite{dm91a,dm91b,djm94,djm95}.
For the physically interesting case of three light flavors, $N_f=3$, the lowest-lying baryon states fall into a representation of the spin-flavor group $SU(6)$. At the physical value $N_c=3$, this is the usual $\mathbf{56}$ dimensional representation of $SU(6)$. The $J^P = 1/2^+$ octet containing the nucleon and the $J^P = 3/2^+$ decuplet containing the $\Delta(1232)$ together make up the ground-state 56-plet, in which the orbital angular momenta between the quark pairs are zero and the spatial part of the state function is symmetric.
The present analysis builds on the $1/N_c$ baryon chiral Lagrangian $\mathcal{L}_{\text{baryon}}$ introduced in Ref.~\cite{jen96}. This Lagrangian incorporates nonet symmetry and the contracted spin-flavor symmetry for baryons in the large-$N_c$ limit; its definite form reads
\begin{equation}
\mathcal{L}_{\text{baryon}} = i \mathcal{D}^0 - \mathcal{M}_{\text{hyperfine}} + \text{Tr} \left(\mathcal{A}^k \lambda^c \right) A^{kc} + \frac{1}{N_c} \text{Tr} \left(\mathcal{A}^k \frac{2I}{\sqrt 6}\right) A^k + \ldots, \label{eq:ncch}
\end{equation}
with
\begin{equation}
\mathcal{D}^0 = \partial^0 \openone + \text{Tr} \left(\mathcal{V}^0 \lambda^c\right) T^c. \label{eq:kin}
\end{equation}
The ellipses in Eq.~(\ref{eq:ncch}) represent higher partial wave meson couplings which occur at subleading orders in the $1/N_c$ expansion for $N_c > 3$. In the large-$N_c$ limit, all of these higher partial waves vanish so the meson coupling to baryons is purely $p$ wave.
Meson fields participate in $\mathcal{L}_{\text{baryon}}$ through the vector and axial-vector combinations
\begin{equation}
\mathcal{V}^0 = \frac12 \left[ \xi \partial^0 \xi^\dagger + \xi^\dagger \partial^0 \xi \right], \qquad
\mathcal{A}^k = \frac{i}{2} \left(\xi \nabla^k \xi^\dagger - \xi^\dagger \nabla^k \xi\right), \qquad \qquad
\xi(x)=\exp[i\Pi(x)/f],
\end{equation}
where $\Pi(x)$ represents the nonet of Goldstone boson fields and $f \approx 93$ $\mathrm{MeV}$ is the pion decay constant.
Each term in $\mathcal{L}_{\text{baryon}}$ involves a baryon operator. The baryon kinetic energy term involves the spin-flavor identity, $\mathcal{M}_{\text{hyperfine}}$ represents the hyperfine baryon mass operator which incorporates the spin splittings of the tower of baryon states with spins $1/2,\ldots, N_c/2$ in the flavor representations, and $A^k$ and $A^{kc}$ stand for the flavor singlet and flavor octet axial current operators, respectively. All these baryon operators have an expansion in operators whose coefficients are inverse powers of $N_c$ \cite{djm95}. To a given order in $1/N_c$, the expansions can be truncated and linked to physics by evaluating their matrix elements between $SU(6)$ symmetric states at $N_c = 3$.
For any representation of $SU(6)$, polynomials in the generators
\begin{equation}
J^k = q^\dagger \frac{\sigma^k}{2} q, \qquad T^c = q^\dagger \frac{\lambda^c}{2} q, \qquad G^{kc} = q^\dagger
\frac{\sigma^k}{2}\frac{\lambda^c}{2} q, \label{eq:su6gen}
\end{equation}
form a complete set of operators \cite{djm95}. In the above relations, $q^\dagger$ and $q$ represent $SU(6)$ operators that create and annihilate states in the fundamental representation of $SU(6)$, and $\sigma^k$ and $\lambda^c$ are the Pauli spin and Gell-Mann flavor matrices, respectively. The spin-flavor generators satisfy the commutation relations listed in Table \ref{tab:surel}.
\begingroup
\begin{table}
\caption{\label{tab:surel}$SU(2N_f)$ commutation relations.}
\bigskip
\label{tab:su2fcomm}
\centerline{\vbox{ \tabskip=0pt \offinterlineskip
\halign{
\strut\quad $ # $\quad\hfil&\strut\quad $ # $\quad \hfil\cr
\multispan2\hfil $\left[J^i,T^a\right]=0,$ \hfil \cr
\noalign{\medskip}
\left[J^i,J^j\right]=i\epsilon^{ijk} J^k,
&\left[T^a,T^b\right]=i f^{abc} T^c,\cr
\noalign{\medskip}
\left[J^i,G^{ja}\right]=i\epsilon^{ijk} G^{ka},
&\left[T^a,G^{ib}\right]=i f^{abc} G^{ic},\cr
\noalign{\medskip}
\multispan2\hfil$\displaystyle [G^{ia},G^{jb}] = \frac{i}{4}\delta^{ij}
f^{abc} T^c + \frac{i}{2N_f} \delta^{ab} \epsilon^{ijk} J^k + \frac{i}{2} \epsilon^{ijk} d^{abc} G^{kc}.$ \hfill\cr
}}}
\end{table}
\endgroup
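As an illustrative aside (not part of the original derivation), the commutation relations of Table~\ref{tab:surel} can be verified numerically in the 6-dimensional fundamental representation of $SU(6)$, where $J^i=\sigma^i/2\otimes\openone_3$, $T^a=\openone_2\otimes\lambda^a/2$, and $G^{ia}=\sigma^i/2\otimes\lambda^a/2$. The short \texttt{numpy} sketch below is a spot check under these assumptions:

```python
# Illustrative check (not from the paper) of the SU(2 N_f) commutation
# relations of Table I in the 6-dimensional fundamental of SU(6), where
# J^i = sigma^i/2 (x) 1_3, T^a = 1_2 (x) lambda^a/2, G^{ia} = sigma^i/2 (x) lambda^a/2.
import numpy as np

sig = [np.array(m, dtype=complex) for m in (
    [[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
lam = [np.array(m, dtype=complex) for m in (
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[1, 0, 0], [0, 1, 0], [0, 0, -2]])]
lam[7] /= np.sqrt(3.0)

J = [np.kron(s / 2, np.eye(3)) for s in sig]
T = [np.kron(np.eye(2), l / 2) for l in lam]
G = [[np.kron(s / 2, l / 2) for l in lam] for s in sig]

# SU(3) structure constants from traces of Gell-Mann matrices
f = np.zeros((8, 8, 8))
d = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        for c in range(8):
            f[a, b, c] = np.trace((lam[a] @ lam[b] - lam[b] @ lam[a]) @ lam[c]).imag / 4
            d[a, b, c] = np.trace((lam[a] @ lam[b] + lam[b] @ lam[a]) @ lam[c]).real / 4

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

def comm(A, B):
    return A @ B - B @ A

# [J^i, T^a] = 0
jt_ok = all(np.allclose(comm(J[i], T[a]), 0) for i in range(3) for a in range(8))

# [G^{ia}, G^{jb}] = (i/4) delta^{ij} f^{abc} T^c
#   + (i/2N_f) delta^{ab} eps^{ijk} J^k + (i/2) eps^{ijk} d^{abc} G^{kc}
Nf, gg_ok = 3, True
for i in range(3):
    for j in range(3):
        for a in range(8):
            for b in range(8):
                rhs = (0.25j * (i == j)) * sum(f[a, b, c] * T[c] for c in range(8)) \
                    + (0.5j / Nf) * (a == b) * sum(eps[i, j, k] * J[k] for k in range(3)) \
                    + 0.5j * sum(eps[i, j, k] * d[a, b, c] * G[k][c]
                                 for k in range(3) for c in range(8))
                gg_ok = gg_ok and np.allclose(comm(G[i][a], G[j][b]), rhs)
```

The relations are representation independent; the fundamental is used here only because the generators are easy to write down explicitly.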
The way in which large-$N_c$ dynamics enters can best be seen through some examples. The $1/N_c$ expansion of the baryon mass operator $\mathcal{M}$ can be written as \cite{djm94,djm95}
\begin{eqnarray}
\mathcal{M} = \tilde{m}_0 N_c \openone + \sum_{n=2,4}^{N_c-1} \tilde{m}_{n} \frac{1}{N_c^{n-1}} J^n, \label{eq:mop}
\end{eqnarray}
where $\tilde{m}_n$ are unknown coefficients. While the first summand on the right-hand side of Eq.~(\ref{eq:mop}) is the overall spin-independent mass of the baryon multiplet and is removed from the chiral Lagrangian by the heavy baryon field redefinition~\cite{jm91a}, the spin-dependent ones define $\mathcal{M}_{\text{hyperfine}}$ introduced in the chiral Lagrangian (\ref{eq:ncch}). In the large-$N_c$ limit, $\Delta=\langle \mathcal{M}\rangle_{\frac32}-\langle \mathcal{M}\rangle_{\frac12} \propto 1/N_c$, so decuplet and octet baryons become degenerate and form a single irreducible representation of the contracted spin-flavor symmetry of baryons in large-$N_c$ QCD \cite{djm95}.
At the physical value $N_c=3$ the hyperfine mass expansion reduces to
\begin{eqnarray}
\mathcal{M} _{\text{hyperfine}} = \frac{\tilde{m}_2}{N_c} J^2, \label{eq:smop}
\end{eqnarray}
so $\Delta$ becomes $\tilde{m}_2$.
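As a quick arithmetic check (the coefficient value below is a placeholder), Eq.~(\ref{eq:smop}) with $\langle J^2\rangle = j(j+1)$ indeed gives $\Delta=\tilde{m}_2$ at $N_c=3$:

```python
# Check that M_hyperfine = (m2t / N_c) J^2 yields Delta = m2t at N_c = 3,
# using <J^2> = j(j+1); m2t is an arbitrary placeholder coefficient.
from fractions import Fraction

Nc = 3
m2t = Fraction(7, 2)                        # placeholder \tilde{m}_2
M_hf = lambda j: m2t * j * (j + 1) / Nc     # Eq. (smop)
Delta = M_hf(Fraction(3, 2)) - M_hf(Fraction(1, 2))
# Delta == m2t exactly
```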
The baryon flavor singlet axial current $A^k$ is a spin-1 object and a singlet under $SU(3)$; its $1/N_c$ expansion reads \cite{djm95}
\begin{equation}
A^k = \sum_{n=1,3}^{N_c} b_n^{1,1} \frac{1}{N_c^{n-1}} \mathcal{D}_n^k, \label{eq:asin}
\end{equation}
where $\mathcal{D}_1^k = J^k$ and $\mathcal{D}_{2m+1}^k = \{J^2,\mathcal{D}_{2m-1}^k\}$ for $m\geq 1$. The superscript on the operator coefficients of $A^k$ denotes that they refer to the baryon singlet current. At $N_c=3$, Eq.~(\ref{eq:asin}) becomes
\begin{equation}
A^k = b_1^{1,1} J^k + b_3^{1,1} \frac{1}{N_c^2} \{J^2,J^k\}.
\end{equation}
The baryon flavor octet axial current $A^{kc}$ is a spin-1 object, an octet under $SU(3)$ and odd under time reversal; its $1/N_c$ expansion can be written as \cite{djm94,djm95}
\begin{equation}
A^{kc} = a_1 G^{kc} + \sum_{n=2,3}^{N_c} b_n \frac{1}{N_c^{n-1}} \mathcal{D}_n^{kc} + \sum_{n=3,5}^{N_c} c_n
\frac{1}{N_c^{n-1}} \mathcal{O}_n^{kc}, \label{eq:akcfull}
\end{equation}
where the unknown coefficients $a_1$, $b_n$, and $c_n$ have expansions in powers of $1/N_c$ and are order unity at leading order in the $1/N_c$ expansion. The basic operators in expansion (\ref{eq:akcfull}) are
\begin{eqnarray}
\mathcal{D}_2^{kc} & = & J^kT^c, \label{eq:d2kc} \\
\mathcal{D}_3^{kc} & = & \{J^k,\{J^r,G^{rc}\}\}, \label{eq:d3kc} \\
\mathcal{O}_3^{kc} & = & \{J^2,G^{kc}\} - \frac12 \{J^k,\{J^r,G^{rc}\}\}, \label{eq:o3kc}
\end{eqnarray}
so that $\mathcal{D}_n^{kc}=\{J^2,\mathcal{D}_{n-2}^{kc}\}$ and $\mathcal{O}_n^{kc}=\{J^2,\mathcal{O}_{n-2}^{kc}\}$ for $n\geq 4$. Notice that $\mathcal{D}_n^{kc}$ are diagonal operators with nonzero matrix elements only between states with the same spin, and the $\mathcal{O}_n^{kc}$ are purely off-diagonal operators with nonzero matrix elements only between states with different spin. At $N_c = 3$ the series (\ref{eq:akcfull}) can be truncated as
\begin{equation}
A^{kc} = a_1 G^{kc} + b_2 \frac{1}{N_c} \mathcal{D}_2^{kc} + b_3 \frac{1}{N_c^2} \mathcal{D}_3^{kc} + c_3 \frac{1}{N_c^2} \mathcal{O}_3^{kc}. \label{eq:akc}
\end{equation}
At leading order in the $1/N_c$ expansion, $A^{kc}$ is order $\mathcal{O}(N_c)$.
It should be emphasized that keeping all four terms in Eq.~(\ref{eq:akc}) allows for arbitrary values of the four possible $SU(3)$ symmetric couplings of pseudoscalar mesons to the octet and decuplet baryons $D$, $F$, $\mathcal{C}$, and $\mathcal{H}$ introduced in Refs.~\cite{jm91a,jm91b}. This is the reason why for $N_c=3$ it is not necessary to go beyond operator products of third order in the spin-flavor generators.
\section{\label{sec:tree}Baryon magnetic moment at tree level}
The starting point in the present analysis is the fact that in the large-$N_c$ limit, the baryon magnetic moments have the same kinematic properties as the baryon axial couplings so they can be expressed in terms of the very same operators \cite{dai}. Since much of the work has already been advanced in Refs.~\cite{rfm09,rfm14,rfm21}, some partial results presented in these references will be borrowed.
Accordingly, the $1/N_c$ expansion of the operator that yields the baryon magnetic moment operator becomes \cite{rfm09}
\begin{equation}
M^{kc} = m_1 G^{kc} + \frac{1}{N_c} m_2 \mathcal{D}_2^{kc} + \frac{1}{N_c^2} m_3 \mathcal{D}_3^{kc} + \frac{1}{N_c^2} m_4 \mathcal{O}_3^{kc}, \label{eq:mkc}
\end{equation}
which is also order $\mathcal{O}(N_c)$ at leading order in the $1/N_c$ expansion; the $m_i$ are unknown coefficients which also possess a $1/N_c$ expansion starting at order 1. Under the assumption of $SU(3)$ symmetry, the unknown coefficients $m_i$ do not depend on the spin index $k$, and they are unrelated to $a_1$, $b_2$, $b_3$, or $c_3$ \cite{rfm09}.
The baryon magnetic moment operator is thus defined as
\begin{equation}
M^k \equiv M^{kQ} = M^{k3} + \frac{1}{\sqrt{3}}M^{k8}, \label{eq:mQ}
\end{equation}
where the spin index will be fixed to 3 and the flavor index becomes $Q=3+(1/\sqrt{3})8$. Hereafter, any operators of the form $X^Q$ and $X^{\overline{Q}}$ should be understood as $X^3+(1/\sqrt{3})X^8$ and $X^3-(1/\sqrt{3})X^8$, respectively. The magnetic moments are proportional to the quark charge matrix $\mathrm{diag}(2/3,-1/3,-1/3)$, so they can be separated into isovector and isoscalar components, $M^{33}$ and $M^{38}$, respectively.
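The quark charge combination can be checked directly: with $T^c=\lambda^c/2$ in the fundamental, $T^3+(1/\sqrt{3})\,T^8$ reproduces $\mathrm{diag}(2/3,-1/3,-1/3)$. A minimal numerical sketch:

```python
# Verify that the flavor combination Q = 3 + (1/sqrt(3)) 8 reproduces the
# quark charge matrix diag(2/3, -1/3, -1/3), with T^c = lambda^c / 2.
import numpy as np

lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)
Q = lam3 / 2 + lam8 / (2 * np.sqrt(3.0))
# Q == diag(2/3, -1/3, -1/3)
```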
The baryon magnetic moments at tree level can be straightforwardly obtained by evaluating the matrix elements of the operators that appear in (\ref{eq:mQ}) between $SU(6)$ baryon symmetric states. The universality of operator (\ref{eq:mQ}) is such that it allows one to compute all $27$ possible magnetic moments: eight magnetic moments for the octet baryons, ten for the decuplet baryons, one octet-octet transition moment, and eight decuplet-octet transition moments. At tree level they will be denoted by $\mu_{B}^{(0)} = \langle B|M^Q|B \rangle$, $\mu_{T}^{(0)} = \langle T|M^Q|T\rangle$, $\mu_{BB^\prime}^{(0)} = \langle B|M^Q|B^\prime \rangle$, and $\mu_{TB}^{(0)} = \langle T|M^Q|B \rangle$, where $B$ and $T$ stand for octet and decuplet baryons, respectively. The theoretical expressions can generically be given by
\begin{equation}
\mu_{B}^{(0)} = \sum_{j=1}^4 \mu_j \langle B|S_j^{3Q}|B\rangle, \label{eq:mmtre}
\end{equation}
where the coefficients $\mu_j$ can be easily read off from Eq.~(\ref{eq:mQ}) and the operator basis $\{S_i\}$ used to describe tree-level (and the singlet contribution of) magnetic moments reads
\begin{eqnarray}
\label{eq:basis1}
\begin{tabular}{lllll}
$S_1^{kc} = G^{kc}$, &
$S_2^{kc} = \mathcal{D}_2^{kc}$, &
$S_3^{kc} = \mathcal{D}_3^{kc}$, &
$S_4^{kc} = \mathcal{O}_3^{kc}$, &
$S_5^{kc} = \mathcal{D}_4^{kc}$, \\
$S_6^{kc} = \mathcal{D}_5^{kc}$, &
$S_7^{kc} = \mathcal{O}_5^{kc}$, &
$S_8^{kc} = \mathcal{D}_6^{kc}$, &
$S_9^{kc} = \mathcal{D}_7^{kc}$, &
$S_{10}^{kc} = \mathcal{O}_7^{kc}$.
\end{tabular}
\end{eqnarray}
Of course, it should be recalled that $\mu^{(0)}$ also define $\mu^{SU(3)}$; both quantities will be used interchangeably hereafter.
Nontrivial matrix elements\footnote{A baryon operator $X_j^{kc}$ yields a trivial matrix element in two possible ways: Either by definition $\langle X_j^{3c} \rangle = 0$ or $\langle X_j^{3c} \rangle = \langle\{J^2,X_{j-2}^{3c}\} \rangle$ for $c=3,8$. Hereafter, trivial matrix elements will not be listed in tables.} of the baryon operators that constitute the basis (\ref{eq:basis1}) are listed in Tables \ref{t:mm1O}, \ref{t:mm1T}, and \ref{t:mm1TO}. The resultant expressions for the magnetic moments at tree level are thus listed in the column labeled (a) in Table \ref{t:treeandnum}.
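As an illustrative spot check (not part of the original derivation), the $\langle S_2^{3c}\rangle = \langle J^3 T^c\rangle$ rows of Table~\ref{t:mm1O} follow from elementary quantum numbers, since $T^3$ and $T^8$ are diagonal on the baryon states: for a spin-up octet baryon, $\langle J^3T^3\rangle=\tfrac12 I_3$ and $\sqrt{3}\,\langle J^3T^8\rangle=(n_u+n_d-2n_s)/4$, with $n_q$ the number of valence quarks of flavor $q$:

```python
# Spot check of the <S_2^{3c}> = <J^3 T^c> rows of the octet table, using
# J_z = 1/2, isospin projections I_3, and quark content (n_u + n_d, n_s).
from fractions import Fraction as F

content = {'n': (3, 0), 'p': (3, 0), 'Sigma-': (2, 1), 'Sigma0': (2, 1),
           'Sigma+': (2, 1), 'Xi-': (1, 2), 'Xi0': (1, 2), 'Lambda': (2, 1)}
iso3 = {'n': F(-1, 2), 'p': F(1, 2), 'Sigma-': F(-1), 'Sigma0': F(0),
        'Sigma+': F(1), 'Xi-': F(-1, 2), 'Xi0': F(1, 2), 'Lambda': F(0)}

Jz = F(1, 2)
S2_33 = {B: Jz * iso3[B] for B in content}            # <J^3 T^3>
S2_38 = {B: Jz * F(nud - 2 * ns, 2)                   # sqrt(3) <J^3 T^8>
         for B, (nud, ns) in content.items()}
# e.g. S2_33['p'] = 1/4 and S2_38['Xi-'] = -3/4, matching the table entries
```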
\begin{table*}
\caption{\label{t:mm1O}Nontrivial matrix elements of the operators involved in the magnetic moments of octet baryons at tree level. The entries for isoscalar components correspond to $\sqrt{3} \langle S_i^{38} \rangle$.}
\begin{ruledtabular}
\begin{tabular}{lccccccccc}
& $\displaystyle n$ & $\displaystyle p$ & $\displaystyle \Sigma^-$ & $\displaystyle \Sigma^0$ & $\displaystyle \Sigma^+$ & $\displaystyle \Xi^-$ & $\displaystyle \Xi^0$ & $\displaystyle \Lambda$ & $\displaystyle \Sigma^0\Lambda$ \\
\hline
$\langle S_1^{33} \rangle$ & $-\frac{5}{12}$ & $\frac{5}{12}$ & $-\frac13$ & $0$ & $\frac13$ & $\frac{1}{12}$ & $-\frac{1}{12}$ & $0$ & $\frac{1}{2 \sqrt{3}}$ \\
$\langle S_2^{33} \rangle$ & $-\frac14$ & $\frac14$ & $-\frac12$ & $0$ & $\frac12$ & $-\frac14$ & $\frac14$ & $0$ & $0$ \\
$\langle S_3^{33} \rangle$ & $-\frac54$ & $\frac54$ & $-1$ & $0$ & $1$ & $\frac14$ & $-\frac14$ & $0$ & $\frac{\sqrt{3}}{2}$ \\
\hline
$\langle S_1^{38} \rangle$ & $\frac14$ & $\frac14$ & $\frac12$ & $\frac12$ & $\frac12$ & $-\frac34$ & $-\frac34$ & $-\frac12$ & $0$ \\
$\langle S_2^{38} \rangle$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $0$ & $0$ \\
$\langle S_3^{38} \rangle$ & $\frac34$ & $\frac34$ & $\frac32$ & $\frac32$ & $\frac32$ & $-\frac94$ & $-\frac94$ & $-\frac32$ & $0$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}
\caption{\label{t:mm1T}Nontrivial matrix elements of the operators involved in the magnetic moments of decuplet baryons at tree level. The entries for isoscalar components correspond to $\sqrt{3} \langle S_i^{38} \rangle$.}
\begin{ruledtabular}
\begin{tabular}{lcccccccccc}
& $\displaystyle \Delta^{++}$ & $\displaystyle \Delta^+$ & $\displaystyle \Delta^0$ & $\displaystyle \Delta^-$ & $\displaystyle {\Sigma^*}^+$ & $\displaystyle {\Sigma^*}^0$ & $\displaystyle {\Sigma^*}^-$ & $\displaystyle {\Xi^*}^0$ & $\displaystyle {\Xi^*}^-$ & $\displaystyle \Omega^-$ \\
\hline
$\langle S_1^{33} \rangle$ & $\frac34$ & $\frac14$ & $-\frac14$ & $-\frac34$ & $\frac12$ & $0$ & $-\frac12$ & $\frac14$ & $-\frac14$ & $0$ \\
$\langle S_2^{33} \rangle$ & $\frac94$ & $\frac34$ & $-\frac34$ & $-\frac94$ & $\frac32$ & $0$ & $-\frac32$ & $\frac34$ & $-\frac34$ & $0$ \\
$\langle S_3^{33} \rangle$ & $\frac{45}{4}$ & $\frac{15}{4}$ & $-\frac{15}{4}$ & $-\frac{45}{4}$ & $\frac{15}{2}$ & $0$ & $-\frac{15}{2}$ & $\frac{15}{4}$ & $-\frac{15}{4}$ & $0$ \\
\hline
$\langle S_1^{38} \rangle$ & $\frac34$ & $\frac34$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $-\frac32$ \\
$\langle S_2^{38} \rangle$ & $\frac94$ & $\frac94$ & $\frac94$ & $\frac94$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $-\frac92$ \\
$\langle S_3^{38} \rangle$ & $\frac{45}{4}$ & $\frac{45}{4}$ & $\frac{45}{4}$ & $\frac{45}{4}$ & $0$ & $0$ & $0$ & $-\frac{45}{4}$ & $-\frac{45}{4}$ & $-\frac{45}{2}$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}
\caption{\label{t:mm1TO}Nontrivial matrix elements of the operators involved in the decuplet to octet transition moments at tree level. The entries for isovector and isoscalar components correspond to $\sqrt{2} \langle S_i^{33} \rangle$ and $\sqrt{6} \langle S_j^{38} \rangle$, respectively.}
\begin{ruledtabular}
\begin{tabular}{lcccccccc}
& $\displaystyle \Delta^+p$ & $\displaystyle \Delta^0n$ & $\displaystyle {\Sigma^*}^0\Lambda$ & $\displaystyle {\Sigma^*}^0\Sigma^0$ & $\displaystyle {\Sigma^*}^+\Sigma^+$ & $\displaystyle {\Sigma^*}^-\Sigma^-$ & $\displaystyle {\Xi^*}^0\Xi^0$ & $\displaystyle {\Xi^*}^-\Xi^-$ \\
\hline
$\langle S_1^{33} \rangle$ & $\frac23$ & $\frac23$ & $\frac{1}{\sqrt{3}}$ & $0$ & $\frac13$ & $-\frac13$ & $\frac13$ & $-\frac13$ \\
$\langle S_4^{33} \rangle$ & $3$ & $3$ & $\frac{3 \sqrt{3}}{2}$ & $0$ & $\frac32$ & $-\frac32$ & $\frac32$ & $-\frac32$ \\
\hline
$\langle S_1^{38} \rangle$ & $0$ & $0$ & $0$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$\langle S_4^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac92$ & $\frac92$ & $\frac92$ & $\frac92$ & $\frac92$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
The main goal of the present analysis is to carry out an \textit{analytical} comparison with HBCHPT results of Ref.~\cite{jen92}. The comparison can be made following a simple procedure. First, it is convenient to introduce the relations between the operator coefficients $m_i$ of Eq.~(\ref{eq:mkc}) and the $SU(3)$ invariants $\mu_D$, $\mu_F$, $\mu_C$, and $\mu_T$ used to parametrize the baryon magnetic moments in HBCHPT \cite{jen92}. At $N_c=3$, the relations read \cite{rfm09}
\begin{subequations}
\begin{eqnarray}
\mu_D & = & \frac12 m_1 + \frac16 m_3, \\
\mu_F & = & \frac13 m_1 + \frac16 m_2 + \frac19 m_3, \\
\mu_C & = & \frac12 m_1 + \frac12 m_2 + \frac56 m_3, \\
\mu_T & = & -2 m_1 - m_4,
\end{eqnarray}
\end{subequations}
so the inverse relations become
\begin{subequations}
\label{eq:su3inv}
\begin{eqnarray}
m_1 & = & \frac32 \mu_D + \frac32 \mu_F - \frac12 \mu_C, \\
m_2 & = & -4 \mu_D + 6 \mu_F, \\
m_3 & = & \frac32 \mu_D - \frac92 \mu_F + \frac32 \mu_C, \\
m_4 & = & -3 \mu_D - 3 \mu_F + \mu_C - \mu_T.
\end{eqnarray}
\end{subequations}
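As a consistency check (plain linear algebra, illustrative only), the two sets of relations are indeed inverse maps between $(m_1,\dots,m_4)$ and $(\mu_D,\mu_F,\mu_C,\mu_T)$:

```python
# Check that Eqs. (su3inv) invert the relations mu_{D,F,C,T}(m_i) given above.
import numpy as np

# mu = A m, with mu = (mu_D, mu_F, mu_C, mu_T)^T and m = (m_1, m_2, m_3, m_4)^T
A = np.array([[1/2,  0.0, 1/6, 0.0],
              [1/3,  1/6, 1/9, 0.0],
              [1/2,  1/2, 5/6, 0.0],
              [-2.0, 0.0, 0.0, -1.0]])
# m = B mu, the claimed inverse map, Eqs. (su3inv)
B = np.array([[ 3/2,  3/2, -1/2,  0.0],
              [-4.0,  6.0,  0.0,  0.0],
              [ 3/2, -9/2,  3/2,  0.0],
              [-3.0, -3.0,  1.0, -1.0]])
# B @ A and A @ B are both the 4x4 identity
```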
Second, by using the inverse relations (\ref{eq:su3inv}), the tree-level magnetic moments can be rewritten in terms of the $SU(3)$ invariants $\mu_D$, $\mu_F$, $\mu_C$, and $\mu_T$, which yields the expressions listed in the column labeled (b) in Table \ref{t:treeandnum}. These last expressions are the ones suitable for comparison with HBCHPT. For octet and decuplet baryons these expressions fully agree with the ones reported in Ref.~\cite{jen92}. Tree-level magnetic moments for octet baryons are given in terms of $\alpha_B$ of Eq.~(23) of this reference, whereas for decuplet baryons, they are normalized to be $\mu_C$ times the electric charge of the corresponding baryon. For decuplet-octet transition moments, no explicit theoretical expressions in the context of HBCHPT are available so no direct comparison is possible.
\begin{table*}
\caption{\label{t:treeandnum} Tree-level expressions of baryon magnetic moments. Expressions in (a) are evaluated in the context of the $1/N_c$ expansion; expressions in (b) follow from the ones given in (a) by using relations (\ref{eq:su3inv}) to compare with heavy baryon chiral perturbation theory results.}
\begin{ruledtabular}
\begin{tabular}{lcc}
\textrm{Baryon} & \multicolumn{2}{c}{Tree-level values, $\mu_B^{(0)}$} \\
& (a) & (b) \\
\hline
$n$ & $-\frac13 m_1-\frac19 m_3$ & $-\frac23 \mu_D$ \\
$p$ & $\frac12 m_1+\frac16 m_2+\frac16 m_3$ & $\frac13 \mu_D+\mu_F$ \\
$\Sigma^-$ & $-\frac16 m_1-\frac16 m_2-\frac{1}{18} m_3$ & $\frac13 \mu_D-\mu_F$ \\
$\Sigma^0$ & $\frac16 m_1+\frac{1}{18} m_3$ & $\frac13 \mu_D$ \\
$\Sigma^+$ & $\frac12 m_1+\frac16 m_2+\frac16 m_3$ & $\frac13 \mu_D+\mu_F$ \\
$\Xi^-$ & $-\frac16 m_1-\frac16 m_2-\frac{1}{18} m_3$ & $\frac13 \mu_D-\mu_F$ \\
$\Xi^0$ & $-\frac13 m_1-\frac19 m_3$ & $-\frac23 \mu_D$ \\
$\Lambda$ & $-\frac16 m_1-\frac{1}{18} m_3$ & $-\frac13 \mu_D$ \\
$\Sigma^0\Lambda$ & $\frac{1}{2\sqrt{3}} m_1+\frac{1}{6\sqrt{3}} m_3$ & $\frac{1}{\sqrt{3}} \mu_D$ \\
$\Delta^{++}$ & $m_1+m_2+\frac53 m_3$ & $2 \mu_C$ \\
$\Delta^+$ & $\frac12 m_1+\frac12 m_2+\frac56 m_3$ & $\mu_C$ \\
$\Delta^0$ & $0$ & $0$ \\
$\Delta^-$ & $-\frac12 m_1-\frac12 m_2-\frac56 m_3$ & $-\mu_C$ \\
${\Sigma^*}^+$ & $\frac12 m_1+\frac12 m_2+\frac56 m_3$ & $\mu_C$ \\
${\Sigma^*}^0$ & $0$ & $0$ \\
${\Sigma^*}^-$ & $-\frac12 m_1-\frac12 m_2-\frac56 m_3$ & $-\mu_C$ \\
${\Xi^*}^0$ & $0$ & $0$ \\
${\Xi^*}^-$ & $-\frac12 m_1-\frac12 m_2-\frac56 m_3$ & $-\mu_C$ \\
$\Omega^-$ & $-\frac12 m_1-\frac12 m_2-\frac56 m_3$ & $-\mu_C$ \\
$\Delta^+p$ & $\frac{1}{3\sqrt{2}}(2 m_1+m_4)$ & $-\frac{1}{3\sqrt{2}} \mu_T$ \\
$\Delta^0n$ & $\frac{1}{3\sqrt{2}}(2 m_1+m_4)$ & $-\frac{1}{3\sqrt{2}} \mu_T$ \\
${\Sigma^*}^0\Lambda$ & $\frac{1}{2\sqrt{6}}(2 m_1+m_4)$ & $-\frac{1}{2\sqrt{6}} \mu_T$ \\
${\Sigma^*}^0\Sigma^0$ & $\frac{1}{6\sqrt{2}}(2 m_1+m_4)$ & $-\frac{1}{6\sqrt{2}} \mu_T$ \\
${\Sigma^*}^+\Sigma^+$ & $\frac{1}{3\sqrt{2}}(2 m_1+m_4)$ & $-\frac{1}{3\sqrt{2}} \mu_T$ \\
${\Sigma^*}^-\Sigma^-$ & $0$ & $0$ \\
${\Xi^*}^0\Xi^0$ & $\frac{1}{3\sqrt{2}}(2 m_1+m_4)$ & $-\frac{1}{3\sqrt{2}} \mu_T$ \\
${\Xi^*}^-\Xi^-$ & $0$ & $0$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
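In the $SU(3)$ limit, the column (b) octet entries of Table~\ref{t:treeandnum} reproduce the Coleman--Glashow relations, Eqs.~(\ref{eq:treeval}) and (\ref{eq:isos}), as the following sketch verifies for arbitrary placeholder values of $\mu_D$ and $\mu_F$:

```python
# Check (with arbitrary placeholder values for the SU(3) invariants) that the
# column (b) octet expressions satisfy the Coleman-Glashow relations.
import math

muD, muF = 0.77, -1.31                    # placeholder test values

mu_n, mu_p = -2 * muD / 3, muD / 3 + muF
mu_Sm, mu_S0, mu_Sp = muD / 3 - muF, muD / 3, muD / 3 + muF
mu_Xm, mu_X0 = muD / 3 - muF, -2 * muD / 3
mu_L, mu_SL = -muD / 3, muD / math.sqrt(3)

cg = [mu_Sp - mu_p,                       # mu_{Sigma+} = mu_p
      mu_Sm + mu_n + mu_p,                # mu_{Sigma-} + mu_n = -mu_p
      2 * mu_L - mu_n,                    # 2 mu_Lambda = mu_n
      mu_Xm - mu_Sm,                      # mu_{Xi-} = mu_{Sigma-}
      mu_X0 - mu_n,                       # mu_{Xi0} = mu_n
      2 * mu_SL + math.sqrt(3) * mu_n,    # 2 mu_{Sigma0 Lambda} = -sqrt(3) mu_n
      mu_Sp - 2 * mu_S0 + mu_Sm]          # isospin relation
# every entry of cg vanishes
```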
Once tree-level values of baryon magnetic moments are obtained, one-loop corrections are discussed in the next sections.
\section{\label{sec:1l}One-loop corrections to baryon magnetic moments}
Baryon magnetic moments get corrections at one-loop order from the diagrams displayed in Figs.~\ref{fig:mmloop1} and \ref{fig:mmloop2}, which contribute to orders $\mathcal{O}(m_q^{1/2})$ and $\mathcal{O}(m_q \ln m_q)$, respectively. The group theoretical properties of these diagrams have been discussed in detail in previous works \cite{rfm09,rfm14} to a certain order in the $1/N_c$ expansion, so some partial results will be borrowed. A useful $1/N_c$ power counting scheme introduced in Ref.~\cite{rfm00} comes in handy for the purposes of the present analysis. On general grounds, the meson-baryon vertex is proportional to $g_A/f$; in the large-$N_c$ limit, $g_A \propto N_c$ and $f\propto \sqrt{N_c}$, so the meson-baryon vertex is of order $\mathcal{O}(\sqrt{N_c})$ and grows with $N_c$. The baryon propagator is $i/(k\cdot v)$ and is $N_c$ independent, and so is the meson propagator. Besides, in the $\overline{\mathrm{MS}}$ scheme, loop integrals are given by the pole structure of the propagators, so loop integrals are $N_c$ independent too. The tree-level matrix element of the baryon magnetic moment operator is thus of order $\mathcal{O}(N_c)$.
In this section, one-loop corrections will be evaluated to all orders allowed for $N_c=3$ in the $1/N_c$ expansion. Each correction is dealt with separately due to its inherent complexity.
\begin{figure}[ht]
\scalebox{0.32}{\includegraphics{Loop1mm.eps}}
\caption{\label{fig:mmloop1}Feynman diagrams that yield order $\mathcal{O}(m_q^{1/2})$ corrections to the magnetic moments of octet baryons. Dashed lines denote mesons and single and double solid lines denote octet and decuplet baryons, respectively. Similar diagrams arise for the magnetic moment of decuplet baryons and for decuplet-octet transition moments.}
\end{figure}
\begin{figure}[ht]
\scalebox{0.32}{\includegraphics{Loop2mm.eps}}
\caption{\label{fig:mmloop2}Feynman diagrams that yield order $\mathcal{O}(m_q \ln m_q)$ corrections to the magnetic moments of octet baryons. Dashed lines denote mesons and single and double solid lines denote octet and decuplet baryons, respectively. The wave function renormalization graphs are omitted in the figure but are nevertheless considered in the analysis. Similar diagrams arise for the magnetic moment of decuplet baryons and for decuplet-octet transition moments.}
\end{figure}
\subsection{\label{sec:mq}Order ${\mathcal O}(m_q^{1/2})$ correction}
The one-loop correction of order ${\mathcal O}(m_q^{1/2})$ to baryon magnetic moments arising from Fig.~\ref{fig:mmloop1} can be expressed as \cite{rfm09}
\begin{equation}
\delta M_{\mathrm{loop\, 1}}^k = \sum_{\textsf{j}} \epsilon^{ijk} A^{ia} \mathcal{P}_{\textsf{j}} A^{jb} \Gamma^{ab}(\Delta_{\textsf{j}}). \label{eq:corrloop1}
\end{equation}
This correction has been studied in Refs.~\cite{rfm09} and \cite{rfm14} to relative order $1/N_c^3$ in the $1/N_c$ expansion for $\Delta=0$ and $\Delta \neq 0$, respectively. For definiteness, in Eq.~(\ref{eq:corrloop1}) the sum over the intermediate spin $\mathsf{j}$ is indicated explicitly, while the sums over repeated spin and flavor indices are understood; the baryon axial current operators $A^{ia}$ and $A^{jb}$, Eq.~(\ref{eq:akc}), are used at the meson-baryon vertices, $\mathcal{P}_{\mathsf{j}}$ is a spin projection operator for spin $J=\mathsf{j}$, and $\Gamma^{ab}(\Delta_{\textsf{j}})$ is an antisymmetric tensor which depends on the difference between the hyperfine mass of the intermediate baryon with spin $J=\mathsf{j}$ and that of the external baryon. The most general form of $\mathcal{P}_{\mathsf{j}}$ for arbitrary $N_c$ can be found in Ref.~\cite{jen96}. The spin-$\frac12$ and spin-$\frac32$ projectors for $N_c=3$ required here reduce to
\begin{subequations}
\label{eq:projnc3}
\begin{eqnarray}
\mathcal{P}_\frac12 & = & -\frac13 \left[ J^2 - \frac{15}{4} \right], \\
\mathcal{P}_\frac32 & = & \frac13 \left[ J^2 - \frac34 \right],
\end{eqnarray}
\end{subequations}
with
\begin{subequations}
\begin{equation}
\Delta_\frac12 = \left\{
\begin{array}{ll}
\displaystyle 0, & \mathsf{j}_{\mathrm{ext}}=\frac12, \\[2mm]
\displaystyle -\Delta, & \mathsf{j}_{\mathrm{ext}}=\frac32,
\end{array}
\right.
\end{equation}
\begin{equation}
\Delta_\frac32 = \left\{
\begin{array}{ll}
\displaystyle \Delta, & \mathsf{j}_{\mathrm{ext}}=\frac12, \\[2mm]
\displaystyle 0, & \mathsf{j}_{\mathrm{ext}}=\frac32. \\[2mm]
\end{array}
\right.
\end{equation}
\end{subequations}
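A quick illustrative check that the $N_c=3$ projectors of Eqs.~(\ref{eq:projnc3}) select the intended spins when acting on $J^2$ eigenvalues $j(j+1)$:

```python
# Check of the N_c = 3 spin projectors, Eqs. (projnc3), on J^2 eigenvalues
# j(j+1): each projector is 1 on its own spin sector and 0 on the other.
from fractions import Fraction as F

def J2(j):
    return j * (j + 1)          # eigenvalue of J^2

P12 = lambda j: -F(1, 3) * (J2(j) - F(15, 4))   # spin-1/2 projector
P32 = lambda j:  F(1, 3) * (J2(j) - F(3, 4))    # spin-3/2 projector

vals = {j: (P12(j), P32(j)) for j in (F(1, 2), F(3, 2))}
# vals[1/2] == (1, 0), vals[3/2] == (0, 1), and P12 + P32 == 1 on both
```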
The $\Gamma^{ab}(\Delta_{\textsf{j}})$ tensor, in turn, can be decomposed as \cite{rfm14}
\begin{equation}
\Gamma^{ab}(\Delta_{\textsf{j}}) = A_0(\Delta_{\textsf{j}}) \Gamma_0^{ab} + A_1(\Delta_{\textsf{j}}) \Gamma_1^{ab} + A_2(\Delta_{\textsf{j}}) \Gamma_2^{ab},
\end{equation}
where the tensors $\Gamma_i^{ab}$ are written as \cite{dai}
\begin{subequations}
\begin{eqnarray}
& & \Gamma_0^{ab} = f^{abQ}, \\
& & \Gamma_1^{ab} = f^{ab\overline{Q}}, \\
& & \Gamma_2^{ab} = f^{aeQ}d^{be8} - f^{beQ}d^{ae8} - f^{abe}d^{eQ8}. \label{eq:tens}
\end{eqnarray}
\end{subequations}
$\Gamma_0^{ab}$ and $\Gamma_1^{ab}$ are both $SU(3)$ octets and transform as the electric charge, except that the latter is rotated by $\pi$ in isospin space. $\Gamma_2^{ab}$ breaks $SU(3)$ as $\mathbf{10}+\overline{\mathbf{10}}$ \cite{dai}.
The $A_i(\Delta_{\textsf{j}})$ coefficients, on the other hand, read
\begin{subequations}
\label{eq:ais}
\begin{eqnarray}
A_0(\Delta_{\textsf{j}}) & = & \frac13 [ I_1(m_\pi,\Delta_{\textsf{j}},\mu)+2I_1(m_K,\Delta_{\textsf{j}},\mu) ], \\
A_1(\Delta_{\textsf{j}}) & = & \frac13 [ I_1(m_\pi,\Delta_{\textsf{j}},\mu)-I_1(m_K,\Delta_{\textsf{j}},\mu) ], \\
A_2(\Delta_{\textsf{j}}) & = & \frac{1}{\sqrt{3}}[ I_1(m_\pi,\Delta_{\textsf{j}},\mu)-I_1(m_K,\Delta_{\textsf{j}},\mu) ],
\end{eqnarray}
\end{subequations}
which are expressed in terms of the loop integral \cite{jen92}
\begin{eqnarray}
\frac{8\pi^2 f^2}{M_N} I_1(m,\Delta,\mu) = -\Delta \ln \frac{m^2}{\mu^2} + \left\{ \begin{array}{ll} \displaystyle 2\sqrt{m^2-\Delta^2}\left[\frac{\pi}{2}-\tan^{-1} \frac{\Delta}{\sqrt{m^2-\Delta^2}} \right], & |\Delta| \leq m, \\ [6mm]
\displaystyle \sqrt{\Delta^2-m^2} \left[-2i\pi + \ln{\frac{\Delta-\sqrt{\Delta^2-m^2}}{\Delta+\sqrt{\Delta^2-m^2}}} \right], & |\Delta| > m, \end{array} \right. \label{eq:loopi}
\end{eqnarray}
where $M_N$ and $m$ denote the nucleon and meson masses, respectively, and $\mu$ is the scale of dimensional regularization. In the limit of vanishing $\Delta$, the integral reduces to
\begin{equation}
I_1(m,0,\mu) = \frac{1}{8\pi f^2} M_N m,
\end{equation}
where the $\mathcal{O}(m_q^{1/2})$ dependence now becomes evident. A close inspection of Eq.~(\ref{eq:corrloop1}) reveals that, according to the $1/N_c$ power counting scheme reviewed above, the diagram is actually $\mathcal{O}(m_q^{1/2}N_c)$, so it is leading order in $N_c$. In the limit of small $m_q$, this diagram should be the dominant source of SB.
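As a quick numerical sanity check (a sketch, not part of the derivation), the $|\Delta|\leq m$ branch of the loop integral in Eq.~(\ref{eq:loopi}) can be evaluated at vanishingly small $\Delta$ and compared with the closed-form degenerate limit $I_1(m,0,\mu)=M_N m/(8\pi f^2)$; the numerical values of $M_N$, $f$, and $\mu$ below are illustrative placeholders, not fitted inputs.

```python
import math

# Illustrative placeholder values in GeV (not fitted inputs).
M_N, f, mu = 0.939, 0.093, 1.0

def I1(m, Delta):
    """|Delta| <= m branch of the loop integral of Eq. (eq:loopi)."""
    bracket = (-Delta * math.log(m**2 / mu**2)
               + 2.0 * math.sqrt(m**2 - Delta**2)
               * (math.pi / 2.0 - math.atan(Delta / math.sqrt(m**2 - Delta**2))))
    return M_N / (8.0 * math.pi**2 * f**2) * bracket

m_pi = 0.138
limit = M_N * m_pi / (8.0 * math.pi * f**2)  # closed-form I_1(m, 0, mu)
print(I1(m_pi, 1e-9), limit)
```

At $\Delta\to 0$ the bracket reduces to $\pi m$, which reproduces the stated limit.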
Collecting all partial contributions, $\delta M_{\mathrm{loop\, 1}}^k$ can be expressed as \cite{rfm14}
\begin{equation}
\delta M_{\mathrm{loop\, 1}}^k = \sum_{\mathsf{j}} \left[A_0(\Delta_{\mathsf{j}}) M_{\mathbf{8},\mathrm{loop\, 1}}^{kQ}(\mathcal{P}_{\mathsf{j}}) + A_1(\Delta_{\mathsf{j}}) M_{\mathbf{8},\mathrm{loop\, 1}}^{k\overline{Q}}(\mathcal{P}_{\mathsf{j}}) + A_2(\Delta_{\mathsf{j}}) M_{\mathbf{10}+\overline{\mathbf{10}},\mathrm{loop\, 1}}^{kQ}(\mathcal{P}_{\mathsf{j}}) \right], \label{eq:loop1}
\end{equation}
where the flavor contributions $M_{\mathbf{rep},\mathrm{loop\, 1}}^{kc}$ transforming under representation $\mathbf{rep}$ of $SU(3)$ read
\begin{equation}
M_{\mathbf{8},\mathrm{loop\, 1}}^{kc}(\mathcal{P}_{\mathsf{j}}) = \epsilon^{ijk} f^{abc} A^{ia}\mathcal{P}_{\mathsf{j}}A^{jb}, \label{eq:m8l1}
\end{equation}
and
\begin{equation}
M_{\mathbf{10}+\overline{\mathbf{10}},\mathrm{loop\, 1}}^{kc}(\mathcal{P}_{\mathsf{j}}) = \epsilon^{ijk}(f^{aec}d^{be8} - f^{bec}d^{ae8} - f^{abe}d^{ec8})A^{ia} \mathcal{P}_{\mathsf{j}} A^{jb}. \label{eq:m10l1}
\end{equation}
Terms up to relative order $1/N_c^3$ in the $1/N_c$ expansion from the above expressions have been evaluated for spin-independent and spin-dependent contributions in Refs.~\cite{rfm09} and \cite{rfm14}, respectively. Terms that contribute at the next relative order, $1/N_c^4$, for instance $\mathcal{D}_3^{ia} \mathcal{O}_3^{ia}$ or $\mathcal{D}_3^{ia} J^2 \mathcal{D}_3^{ia}$, would complete the calculation for $N_c=3$, so they are evaluated and listed in Appendix \ref{app:rloop1} for the sake of completeness.
Order $\mathcal{O}(m_q^{1/2})$ corrections to baryon magnetic moments can be cast into the generic form
\begin{equation}
\delta \mu_{B}^{\mathrm{(loop\, 1)}} = \sum_{j=1}^{41} \mu_j^{\mathrm{(loop\, 1)}} \langle B|O_j^{3Q}|B\rangle, \label{eq:mml1}
\end{equation}
where $\mu_j^{\mathrm{(loop\, 1)}}$ are some coefficients and the operator basis $\{O_j\}$ reads
\begin{eqnarray}
\label{eq:basis8}
\begin{array}{ll}
O_{1}^{kc} = d^{c8e} G^{ke}, &
O_{2}^{kc} = \delta^{c8} J^k, \\
O_{3}^{kc} = d^{c8e} \mathcal{D}_2^{ke}, &
O_{4}^{kc} = \{G^{kc},T^8\}, \\
O_{5}^{kc} = \{G^{k8},T^c\}, &
O_{6}^{kc} = i f^{c8e} [J^2,G^{ke}], \\
O_{7}^{kc} = d^{c8e} \mathcal{D}_3^{ke}, &
O_{8}^{kc} = d^{c8e} \mathcal{O}_3^{ke}, \\
O_{9}^{kc} = \{G^{kc},\{J^r,G^{r8}\}\}, &
O_{10}^{kc} = \{G^{k8},\{J^r,G^{rc}\}\}, \\
O_{11}^{kc} = \{J^k,\{T^c,T^8\}\}, &
O_{12}^{kc} = \{J^k,\{G^{rc},G^{r8}\}\}, \\
O_{13}^{kc} = \delta^{c8} \{J^2,J^k\}, &
O_{14}^{kc} = d^{c8e} \mathcal{D}_4^{ke}, \\
O_{15}^{kc} = \{\mathcal{D}_2^{kc},\{J^r,G^{r8}\}\}, &
O_{16}^{kc} = \{\mathcal{D}_2^{k8},\{J^r,G^{rc}\}\}, \\
O_{17}^{kc} = \{J^2,\{G^{kc},T^8\}\}, &
O_{18}^{kc} = \{J^2,\{G^{k8},T^c\}\}, \\
O_{19}^{kc} = i f^{c8e} \{J^2,[J^2,G^{ke}]\}, &
O_{20}^{kc} = d^{c8e} \mathcal{D}_5^{ke}, \\
O_{21}^{kc} = d^{c8e} \mathcal{O}_5^{ke}, &
O_{22}^{kc} = \{J^2,\{G^{kc},\{J^r,G^{r8}\}\}\}, \\
O_{23}^{kc} = \{J^2,\{G^{k8},\{J^r,G^{rc}\}\}\}, &
O_{24}^{kc} = \{J^2,\{J^k,\{T^c,T^8\}\}\}, \\
O_{25}^{kc} = \{J^2,\{J^k,\{G^{rc},G^{r8}\}\}\}, &
O_{26}^{kc} = \{J^k,\{\{J^m,G^{mc}\},\{J^r,G^{r8}\}\}\}, \\
O_{27}^{kc} = \delta^{c8} \{J^2,\{J^2,J^k\}\}, &
O_{28}^{kc} = d^{c8e} \mathcal{D}_6^{ke}, \\
O_{29}^{kc} = \{J^2,\{\mathcal{D}_2^{kc},\{J^r,G^{r8}\}\}\}, &
O_{30}^{kc} = \{J^2,\{\mathcal{D}_2^{k8},\{J^r,G^{rc}\}\}\}, \\
O_{31}^{kc} = \{J^2,\{J^2,\{G^{kc},T^8\}\}\}, &
O_{32}^{kc} = \{J^2,\{J^2,\{G^{k8},T^c\}\}\}, \\
O_{33}^{kc} = i f^{c8e} \{J^2,\{J^2,[J^2,G^{ke}]\}\}, &
O_{34}^{kc} = d^{c8e} \mathcal{D}_7^{ke}, \\
O_{35}^{kc} = d^{c8e} \mathcal{O}_7^{ke}, &
O_{36}^{kc} = \{J^2,\{J^2,\{G^{kc},\{J^r,G^{r8}\}\}\}\}, \\
O_{37}^{kc} = \{J^2,\{J^2,\{G^{k8},\{J^r,G^{rc}\}\}\}\}, &
O_{38}^{kc} = \{J^2,\{J^2,\{J^k,\{T^c,T^8\}\}\}\}, \\
O_{39}^{kc} = \{J^2,\{J^2,\{J^k,\{G^{rc},G^{r8}\}\}\}\}, &
O_{40}^{kc} = \{J^2,\{J^k,\{\{J^m,G^{mc}\},\{J^r,G^{r8}\}\}\}\}, \\
O_{41}^{kc} = \delta^{c8} \{J^2,\{J^2,\{J^2,J^k\}\}\}. & \\
\end{array}
\end{eqnarray}
Nontrivial matrix elements for the baryon operators contained in the operator basis (\ref{eq:basis8}) are listed in Tables \ref{t:mm8O}, \ref{t:mm8T}, and \ref{t:mm8TO}.
\begin{table*}
\caption{\label{t:mm8O}Nontrivial matrix elements of the operators involved in the magnetic moments of octet baryons: flavor octet representation.
The entries for isovector components correspond to $\sqrt{3} \langle O_i^{33} \rangle$.}
\begin{ruledtabular}
\begin{tabular}{lccccccccc}
& $\displaystyle n$ & $\displaystyle p$ & $\displaystyle \Sigma^-$ & $\displaystyle \Sigma^0$ & $\displaystyle \Sigma^+$ & $\displaystyle \Xi^-$ & $\displaystyle \Xi^0$ & $\displaystyle \Lambda$ & $\displaystyle \Sigma^0\Lambda$ \\[2mm]
\hline
$\langle O_{1}^{33} \rangle$ & $-\frac{5}{12}$ & $\frac{5}{12}$ & $-\frac13$ & $0$ & $\frac13$ & $\frac{1}{12}$ & $-\frac{1}{12}$ & $0$ & $\frac{1}{2 \sqrt{3}}$ \\
$\langle O_{2}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle O_{3}^{33} \rangle$ & $-\frac14$ & $\frac14$ & $-\frac12$ & $0$ & $\frac12$ & $-\frac14$ & $\frac14$ & $0$ & $0$ \\
$\langle O_{4}^{33} \rangle$ & $-\frac54$ & $\frac54$ & $0$ & $0$ & $0$ & $-\frac14$ & $\frac14$ & $0$ & $0$ \\
$\langle O_{5}^{33} \rangle$ & $-\frac14$ & $\frac14$ & $-1$ & $0$ & $1$ & $\frac34$ & $-\frac34$ & $0$ & $0$ \\
$\langle O_{7}^{33} \rangle$ & $-\frac54$ & $\frac54$ & $-1$ & $0$ & $1$ & $\frac14$ & $-\frac14$ & $0$ & $\frac{\sqrt{3}}{2}$ \\
$\langle O_{9}^{33} \rangle$ & $-\frac58$ & $\frac58$ & $-1$ & $0$ & $1$ & $-\frac38$ & $\frac38$ & $0$ & $0$ \\
$\langle O_{10}^{33} \rangle$ & $-\frac58$ & $\frac58$ & $-1$ & $0$ & $1$ & $-\frac38$ & $\frac38$ & $0$ & $0$ \\
$\langle O_{11}^{33} \rangle$ & $-\frac32$ & $\frac32$ & $0$ & $0$ & $0$ & $\frac32$ & $-\frac32$ & $0$ & $0$ \\
$\langle O_{12}^{33} \rangle$ & $-\frac58$ & $\frac58$ & $-2$ & $0$ & $2$ & $-\frac{11}{8}$ & $\frac{11}{8}$ & $0$ & $-\frac{\sqrt{3}}{2}$ \\
$\langle O_{15}^{33} \rangle$ & $-\frac38$ & $\frac38$ & $-\frac32$ & $0$ & $\frac32$ & $\frac98$ & $-\frac98$ & $0$ & $0$ \\
$\langle O_{16}^{33} \rangle$ & $-\frac{15}{8}$ & $\frac{15}{8}$ & $0$ & $0$ & $0$ & $-\frac38$ & $\frac38$ & $0$ & $0$ \\
$\langle O_{26}^{33} \rangle$ & $-\frac{15}{8}$ & $\frac{15}{8}$ & $-3$ & $0$ & $3$ & $-\frac98$ & $\frac98$ & $0$ & $0$ \\
\hline
$\langle O_{1}^{38} \rangle$ & $-\frac{1}{12}$ & $-\frac{1}{12}$ & $-\frac16$ & $-\frac16$ & $-\frac16$ & $\frac14$ & $\frac14$ & $\frac16$ & $0$ \\
$\langle O_{2}^{38} \rangle$ & $\frac12$ & $\frac12$ & $\frac12$ & $\frac12$ & $\frac12$ & $\frac12$ & $\frac12$ & $\frac12$ & $0$ \\
$\langle O_{3}^{38} \rangle$ & $-\frac14$ & $-\frac14$ & $0$ & $0$ & $0$ & $\frac14$ & $\frac14$ & $0$ & $0$ \\
$\langle O_{4}^{38} \rangle$ & $\frac14$ & $\frac14$ & $0$ & $0$ & $0$ & $\frac34$ & $\frac34$ & $0$ & $0$ \\
$\langle O_{5}^{38} \rangle$ & $\frac14$ & $\frac14$ & $0$ & $0$ & $0$ & $\frac34$ & $\frac34$ & $0$ & $0$ \\
$\langle O_{7}^{38} \rangle$ & $-\frac14$ & $-\frac14$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $\frac34$ & $\frac34$ & $\frac12$ & $0$ \\
$\langle O_{9}^{38} \rangle$ & $\frac18$ & $\frac18$ & $\frac12$ & $\frac12$ & $\frac12$ & $\frac98$ & $\frac98$ & $\frac12$ & $0$ \\
$\langle O_{10}^{38} \rangle$ & $\frac18$ & $\frac18$ & $\frac12$ & $\frac12$ & $\frac12$ & $\frac98$ & $\frac98$ & $\frac12$ & $0$ \\
$\langle O_{11}^{38} \rangle$ & $\frac32$ & $\frac32$ & $0$ & $0$ & $0$ & $\frac32$ & $\frac32$ & $0$ & $0$ \\
$\langle O_{12}^{38} \rangle$ & $\frac18$ & $\frac18$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac{17}{8}$ & $\frac{17}{8}$ & $\frac12$ & $0$ \\
$\langle O_{15}^{38} \rangle$ & $\frac38$ & $\frac38$ & $0$ & $0$ & $0$ & $\frac98$ & $\frac98$ & $0$ & $0$ \\
$\langle O_{16}^{38} \rangle$ & $\frac38$ & $\frac38$ & $0$ & $0$ & $0$ & $\frac98$ & $\frac98$ & $0$ & $0$ \\
$\langle O_{26}^{38} \rangle$ & $\frac38$ & $\frac38$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac{27}{8}$ & $\frac{27}{8}$ & $\frac32$ & $0$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}
\caption{\label{t:mm8T}Nontrivial matrix elements of the operators involved in the magnetic moments of decuplet baryons: flavor octet representation. The entries for isovector components correspond to $\sqrt{3} \langle O_i^{33} \rangle$.}
\begin{ruledtabular}
\begin{tabular}{lcccccccccc}
& $\displaystyle \Delta^{++}$ & $\displaystyle \Delta^+$ & $\displaystyle \Delta^0$ & $\displaystyle \Delta^-$ & $\displaystyle {\Sigma^*}^+$ & $\displaystyle {\Sigma^*}^0$ & $\displaystyle {\Sigma^*}^-$ & $\displaystyle {\Xi^*}^0$ & $\displaystyle {\Xi^*}^-$ & $\displaystyle \Omega^-$ \\[2mm]
\hline
$\langle O_{1}^{33} \rangle$ & $\frac34$ & $\frac14$ & $-\frac14$ & $-\frac34$ & $\frac12$ & $0$ & $-\frac12$ & $\frac14$ & $-\frac14$ & $0$ \\
$\langle O_{2}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle O_{3}^{33} \rangle$ & $\frac94$ & $\frac34$ & $-\frac34$ & $-\frac94$ & $\frac32$ & $0$ & $-\frac32$ & $\frac34$ & $-\frac34$ & $0$ \\
$\langle O_{4}^{33} \rangle$ & $\frac94$ & $\frac34$ & $-\frac34$ & $-\frac94$ & $0$ & $0$ & $0$ & $-\frac34$ & $\frac34$ & $0$ \\
$\langle O_{5}^{33} \rangle$ & $\frac94$ & $\frac34$ & $-\frac34$ & $-\frac94$ & $0$ & $0$ & $0$ & $-\frac34$ & $\frac34$ & $0$ \\
$\langle O_{7}^{33} \rangle$ & $\frac{45}{4}$ & $\frac{15}{4}$ & $-\frac{15}{4}$ & $-\frac{45}{4}$ & $\frac{15}{2}$ & $0$ & $-\frac{15}{2}$ & $\frac{15}{4}$ & $-\frac{15}{4}$ & $0$ \\
$\langle O_{9}^{33} \rangle$ & $\frac{45}{8}$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{45}{8}$ & $0$ & $0$ & $0$ & $-\frac{15}{8}$ & $\frac{15}{8}$ & $0$ \\
$\langle O_{10}^{33} \rangle$ & $\frac{45}{8}$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{45}{8}$ & $0$ & $0$ & $0$ & $-\frac{15}{8}$ & $\frac{15}{8}$ & $0$ \\
$\langle O_{11}^{33} \rangle$ & $\frac{27}{2}$ & $\frac92$ & $-\frac92$ & $-\frac{27}{2}$ & $0$ & $0$ & $0$ & $-\frac92$ & $\frac92$ & $0$ \\
$\langle O_{12}^{33} \rangle$ & $\frac{45}{8}$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{45}{8}$ & $\frac32$ & $0$ & $-\frac32$ & $-\frac38$ & $\frac38$ & $0$ \\
$\langle O_{15}^{33} \rangle$ & $\frac{135}{8}$ & $\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{135}{8}$ & $0$ & $0$ & $0$ & $-\frac{45}{8}$ & $\frac{45}{8}$ & $0$ \\
$\langle O_{16}^{33} \rangle$ & $\frac{135}{8}$ & $\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{135}{8}$ & $0$ & $0$ & $0$ & $-\frac{45}{8}$ & $\frac{45}{8}$ & $0$ \\
$\langle O_{26}^{33} \rangle$ & $\frac{675}{8}$ & $\frac{225}{8}$ & $-\frac{225}{8}$ & $-\frac{675}{8}$ & $0$ & $0$ & $0$ & $-\frac{225}{8}$ & $\frac{225}{8}$ & $0$ \\
\hline
$\langle O_{1}^{38} \rangle$ & $-\frac14$ & $-\frac14$ & $-\frac14$ & $-\frac14$ & $0$ & $0$ & $0$ & $\frac14$ & $\frac14$ & $\frac12$ \\
$\langle O_{2}^{38} \rangle$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$
\\
$\langle O_{3}^{38} \rangle$ & $-\frac34$ & $-\frac34$ & $-\frac34$ & $-\frac34$ & $0$ & $0$ & $0$ & $\frac34$ & $\frac34$ & $\frac32$ \\
$\langle O_{4}^{38} \rangle$ & $\frac34$ & $\frac34$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $\frac34$ & $\frac34$ & $3$ \\
$\langle O_{5}^{38} \rangle$ & $\frac34$ & $\frac34$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $\frac34$ & $\frac34$ & $3$ \\
$\langle O_{7}^{38} \rangle$ & $-\frac{15}{4}$ & $-\frac{15}{4}$ & $-\frac{15}{4}$ & $-\frac{15}{4}$ & $0$ & $0$ & $0$ & $\frac{15}{4}$ & $\frac{15}{4}$ & $\frac{15}{2}$ \\
$\langle O_{9}^{38} \rangle$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $0$ & $0$ & $0$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{15}{2}$ \\
$\langle O_{10}^{38} \rangle$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $0$ & $0$ & $0$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{15}{2}$ \\
$\langle O_{11}^{38} \rangle$ & $\frac92$ & $\frac92$ & $\frac92$ & $\frac92$ & $0$ & $0$ & $0$ & $\frac92$ & $\frac92$ & $18$ \\
$\langle O_{12}^{38} \rangle$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{15}{8}$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac{27}{8}$ & $\frac{27}{8}$ & $\frac{15}{2}$ \\
$\langle O_{15}^{38} \rangle$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $0$ & $0$ & $0$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{2}$ \\
$\langle O_{16}^{38} \rangle$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $0$ & $0$ & $0$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{2}$ \\
$\langle O_{26}^{38} \rangle$ & $\frac{225}{8}$ & $\frac{225}{8}$ & $\frac{225}{8}$ & $\frac{225}{8}$ & $0$ & $0$ & $0$ & $\frac{225}{8}$ & $\frac{225}{8}$ & $\frac{225}{2}$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}
\caption{\label{t:mm8TO}Nontrivial matrix elements of the operators involved in the decuplet to octet transition magnetic moments: flavor octet representation. The entries for isovector and isoscalar components correspond to $\sqrt{6} \langle O_i^{33} \rangle$ and $\sqrt{2} \langle O_i^{38} \rangle$, respectively.}
\begin{ruledtabular}
\begin{tabular}{lcccccccc}
& $\displaystyle \Delta^+p$ & $\displaystyle \Delta^0n$ & $\displaystyle {\Sigma^*}^0\Lambda$ & $\displaystyle {\Sigma^*}^0\Sigma^0$ & $\displaystyle {\Sigma^*}^+\Sigma^+$ & $\displaystyle {\Sigma^*}^-\Sigma^-$ & $\displaystyle {\Xi^*}^0\Xi^0$ & $\displaystyle {\Xi^*}^-\Xi^-$ \\[2mm]
\hline
$\langle O_{1}^{33} \rangle$ & $\frac23$ & $\frac23$ & $\frac{1}{\sqrt{3}}$ & $0$ & $\frac13$ & $-\frac13$ & $\frac13$ & $-\frac13$ \\
$\langle O_{4}^{33} \rangle$ & $2$ & $2$ & $0$ & $0$ & $0$ & $0$ & $-1$ & $1$ \\
$\langle O_{5}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $2$ & $-2$ & $1$ & $-1$ \\
$\langle O_{8}^{33} \rangle$ & $3$ & $3$ & $\frac{3 \sqrt{3}}{2}$ & $0$ & $\frac32$ & $-\frac32$ & $\frac32$ & $-\frac32$ \\
$\langle O_{9}^{33} \rangle$ & $3$ & $3$ & $-\frac{\sqrt{3}}{2}$ & $0$ & $\frac12$ & $-\frac12$ & $-2$ & $2$ \\
$\langle O_{10}^{33} \rangle$ & $0$ & $0$ & $-\frac{\sqrt{3}}{2}$ & $0$ & $\frac72$ & $-\frac72$ & $1$ & $-1$ \\
\hline
$\langle O_{1}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac13$ & $-\frac13$ & $-\frac13$ & $-\frac13$ & $-\frac13$ \\
$\langle O_{4}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-1$ & $-1$ \\
$\langle O_{5}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-1$ & $-1$ \\
$\langle O_{8}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ \\
$\langle O_{9}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac12$ & $\frac12$ & $\frac12$ & $-2$ & $-2$ \\
$\langle O_{10}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac12$ & $\frac12$ & $\frac12$ & $-2$ & $-2$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
The resultant expressions are, for instance,
\begin{eqnarray}
\delta \mu_{\Sigma^-}^{\mathrm{(loop\, 1)}} & = & \left[ \frac{7}{18} a_1^2 + \frac29 a_1b_2 + \frac{1}{18} b_2^2 + \frac{7}{27} a_1b_3 + \frac{2}{27} b_2b_3 + \frac{7}{162} b_3^2 \right] I_1(m_\pi,0,\mu) \nonumber \\
& & \mbox{} + \left[ \frac{1}{36} a_1^2 - \frac{1}{18} a_1b_2 + \frac{1}{36} b_2^2 + \frac{1}{54} a_1b_3 - \frac{1}{54} b_2b_3 + \frac{1}{324} b_3^2 \right] I_1(m_K,0,\mu) \nonumber \\
& & \mbox{} + \left[ -\frac{1}{18} a_1^2 - \frac{1}{18} a_1c_3 - \frac{1}{72} c_3^2 \right] I_1(m_\pi,\Delta,\mu) + \left[ -\frac19 a_1^2 - \frac19 a_1c_3 - \frac{1}{36} c_3^2 \right] I_1(m_K,\Delta,\mu), \label{eq:case1}
\end{eqnarray}
and
\begin{eqnarray}
\delta \mu_{{\Sigma^*}^-}^{\mathrm{(loop\, 1)}} & = & \left[ \frac16 a_1^2 + \frac13 a_1b_2 + \frac16 b_2^2 + \frac59 a_1b_3 + \frac59 b_2b_3 + \frac{25}{54} b_3^2 \right] I_1(m_\pi,0,\mu) \nonumber \\
& & \mbox{} + \left[ \frac{1}{12} a_1^2 + \frac16 a_1b_2 + \frac{1}{12} b_2^2 + \frac{5}{18} a_1b_3 + \frac{5}{18} b_2b_3 + \frac{25}{108} b_3^2 \right] I_1(m_K,0,\mu) \nonumber \\
& & \mbox{} + \left[ \frac13 a_1^2 + \frac13 a_1c_3 + \frac{1}{12} c_3^2 \right] I_1(m_\pi,-\Delta,\mu) + \left[ \frac16 a_1^2 + \frac16 a_1c_3 + \frac{1}{24} c_3^2 \right] I_1(m_K,-\Delta,\mu). \label{eq:case2}
\end{eqnarray}
All 27 resultant expressions are listed in full in Appendix \ref{app:Loop1}.
It can easily be verified that the Coleman--Glashow relations are satisfied when the order $\mathcal{O}(m_q^{1/2})$ corrections are included in the baryon magnetic moments, even for $\Delta\neq 0$. For decuplet baryons, the $I=2$ and $I=3$ sum rules introduced in Ref.~\cite{lebed} are also satisfied. For $I=2$
\begin{equation}
\mu_{\Delta^{++}}^{\mathrm{(loop\, 1)}} - \mu_{\Delta^+}^{\mathrm{(loop\, 1)}} - \mu_{\Delta^0}^{\mathrm{(loop\, 1)}} + \mu_{\Delta^-}^{\mathrm{(loop\, 1)}} = 0,
\end{equation}
\begin{equation}
\mu_{{\Sigma^*}^+}^{\mathrm{(loop\, 1)}} - 2 \mu_{{\Sigma^*}^0}^{\mathrm{(loop\, 1)}} + \mu_{{\Sigma^*}^-}^{\mathrm{(loop\, 1)}} = 0,
\end{equation}
whereas for $I=3$
\begin{equation}
\mu_{\Delta^{++}}^{\mathrm{(loop\, 1)}} - 3 \mu_{\Delta^+}^{\mathrm{(loop\, 1)}} + 3 \mu_{\Delta^0}^{\mathrm{(loop\, 1)}} - \mu_{\Delta^-}^{\mathrm{(loop\, 1)}} = 0.
\end{equation}
For transition magnetic moments, the isotensor combinations for $I=2$ read \cite{lebed}
\begin{equation}
\mu_{\Delta^{+}p}^{\mathrm{(loop\, 1)}} - \mu_{\Delta^{0}n}^{\mathrm{(loop\, 1)}} = 0,
\end{equation}
and
\begin{equation}
\mu_{{\Sigma^{*}}^+\Sigma^+}^{\mathrm{(loop\, 1)}} - 2 \mu_{{\Sigma^{*}}^0\Sigma^0}^{\mathrm{(loop\, 1)}} + \mu_{{\Sigma^{*}}^-\Sigma^-}^{\mathrm{(loop\, 1)}} = 0, \label{eq:is6}
\end{equation}
where $\mu_X^{\mathrm{(loop\, 1)}}$ should be understood as $\mu_X + \delta \mu_X^{\mathrm{(loop\, 1)}}$ for baryon $X$.
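Since every contribution in Eq.~(\ref{eq:mml1}) is a linear combination of operator matrix elements, the sum rules above hold provided each operator column satisfies them separately; the isoscalar entries are equal across an isomultiplet and drop out automatically. A spot check (a sketch) against a few isovector rows of Table \ref{t:mm8T}, using exact rational arithmetic (the overall $\sqrt{3}$ normalization cancels in the linear combinations):

```python
from fractions import Fraction as F

# sqrt(3)<O_i^{33}> entries from Table t:mm8T for a few operators.
delta33 = {  # (Delta++, Delta+, Delta0, Delta-)
    "O1": (F(3, 4), F(1, 4), F(-1, 4), F(-3, 4)),
    "O3": (F(9, 4), F(3, 4), F(-3, 4), F(-9, 4)),
    "O7": (F(45, 4), F(15, 4), F(-15, 4), F(-45, 4)),
}
sigma33 = {  # (Sigma*+, Sigma*0, Sigma*-)
    "O1": (F(1, 2), 0, F(-1, 2)),
    "O12": (F(3, 2), 0, F(-3, 2)),
}

for pp, p, z, m in delta33.values():
    assert pp - p - z + m == 0        # I = 2 sum rule
    assert pp - 3*p + 3*z - m == 0    # I = 3 sum rule
for sp, sz, sm in sigma33.values():
    assert sp - 2*sz + sm == 0        # I = 2 sum rule
print("sum rules hold operator by operator")
```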
\subsubsection{\label{sec:comL1}Comparison with heavy chiral perturbation theory results}
The full expressions (\ref{eq:mun}) to (\ref{eq:muxixi}) can be rewritten in terms of the flavor octet baryon-meson couplings $D$, $F$, $\mathcal{C}$, and $\mathcal{H}$ introduced in Refs.~\cite{jm91a,jm91b}, which are related to the coefficients of the $1/N_c$ expansion $a_1$, $b_2$, $b_3$, and $c_3$ at $N_c=3$. The relations are \cite{jen96}
\begin{subequations}
\label{eq:rel1}
\begin{eqnarray}
& & D = \frac12 a_1 + \frac16 b_3, \\
& & F = \frac13 a_1 + \frac16 b_2 + \frac19 b_3, \\
& & \mathcal{C} = - a_1 - \frac12 c_3, \\
& & \mathcal{H} = - \frac32 a_1 - \frac32 b_2 - \frac52 b_3.
\end{eqnarray}
\end{subequations}
The inverse relations then read
\begin{subequations}
\label{eq:rel1inv}
\begin{eqnarray}
& & a_1 = \frac32 D + \frac32 F + \frac16 \mathcal{H}, \\
& & b_2 = -4D + 6F, \\
& & b_3 = \frac32 D - \frac92 F - \frac12 \mathcal{H}, \\
& & c_3 = - 3D - 3F -2 \mathcal{C} - \frac13 \mathcal{H}.
\end{eqnarray}
\end{subequations}
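That relations (\ref{eq:rel1}) and (\ref{eq:rel1inv}) are mutual inverses can be verified directly; the following sketch builds both linear maps on the coefficient vector $(a_1, b_2, b_3, c_3)$ and checks that their product is the identity, in exact rational arithmetic.

```python
from fractions import Fraction as Fr

M = [  # (D, F, C, H) in terms of (a1, b2, b3, c3), Eq. (eq:rel1)
    [Fr(1, 2), 0, Fr(1, 6), 0],
    [Fr(1, 3), Fr(1, 6), Fr(1, 9), 0],
    [-1, 0, 0, Fr(-1, 2)],
    [Fr(-3, 2), Fr(-3, 2), Fr(-5, 2), 0],
]
N = [  # (a1, b2, b3, c3) in terms of (D, F, C, H), Eq. (eq:rel1inv)
    [Fr(3, 2), Fr(3, 2), 0, Fr(1, 6)],
    [-4, 6, 0, 0],
    [Fr(3, 2), Fr(-9, 2), 0, Fr(-1, 2)],
    [-3, -3, -2, Fr(-1, 3)],
]
prod = [[sum(N[i][k] * M[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)]
assert prod == [[int(i == j) for j in range(4)] for i in range(4)]
print("relations (eq:rel1) and (eq:rel1inv) are mutual inverses")
```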
Using the inverse relations (\ref{eq:rel1inv}), expressions (\ref{eq:mun}) to (\ref{eq:muxixi}) now become (\ref{eq:munch}) to (\ref{eq:muxixich}), respectively. In particular, for magnetic moments in the case study, Eqs.~(\ref{eq:case1}) and (\ref{eq:case2}) can be rewritten as
\begin{equation}
\delta \mu_{\Sigma^-}^{\mathrm{(loop\, 1)}} = \frac23(D^2+3F^2) I_1(m_\pi,0,\mu) + (D-F)^2 I_1(m_K,0,\mu) - \frac{1}{18} \mathcal{C}^2 I_1(m_\pi,\Delta,\mu) - \frac19 \mathcal{C}^2 I_1(m_K,\Delta,\mu),
\end{equation}
and
\begin{equation}
\delta \mu_{{\Sigma^*}^-}^{\mathrm{(loop\, 1)}} = \frac{2}{27} \mathcal{H}^2 I_1(m_\pi,0,\mu) + \frac{1}{27} \mathcal{H}^2 I_1(m_K,0,\mu) + \frac13 \mathcal{C}^2 I_1(m_\pi,-\Delta,\mu) + \frac16 \mathcal{C}^2 I_1(m_K,-\Delta,\mu).
\end{equation}
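The equivalence between the $1/N_c$ coefficient form of Eqs.~(\ref{eq:case1}), (\ref{eq:case2}) and the compact $D$, $F$, $\mathcal{C}$, $\mathcal{H}$ form above is a polynomial identity; a sketch of a spot check, evaluating both sides at random rational points after the substitution (\ref{eq:rel1}):

```python
from fractions import Fraction as Fr
import random

random.seed(1)
for _ in range(5):
    a1, b2, b3, c3 = (Fr(random.randint(-9, 9), random.randint(1, 9))
                      for _ in range(4))
    # Substitution (eq:rel1)
    D = a1/2 + b3/6
    F = a1/3 + b2/6 + b3/9
    C = -a1 - c3/2
    H = -Fr(3, 2)*a1 - Fr(3, 2)*b2 - Fr(5, 2)*b3

    # Coefficients of I_1(m_pi,0,mu), I_1(m_K,0,mu), I_1(m_pi,Delta,mu)
    # in Eq. (eq:case1), and of I_1(m_pi,0,mu) in Eq. (eq:case2).
    pi_coeff = (Fr(7, 18)*a1**2 + Fr(2, 9)*a1*b2 + Fr(1, 18)*b2**2
                + Fr(7, 27)*a1*b3 + Fr(2, 27)*b2*b3 + Fr(7, 162)*b3**2)
    K_coeff = (Fr(1, 36)*a1**2 - Fr(1, 18)*a1*b2 + Fr(1, 36)*b2**2
               + Fr(1, 54)*a1*b3 - Fr(1, 54)*b2*b3 + Fr(1, 324)*b3**2)
    piD_coeff = -Fr(1, 18)*a1**2 - Fr(1, 18)*a1*c3 - Fr(1, 72)*c3**2
    piH_coeff = (Fr(1, 6)*a1**2 + Fr(1, 3)*a1*b2 + Fr(1, 6)*b2**2
                 + Fr(5, 9)*a1*b3 + Fr(5, 9)*b2*b3 + Fr(25, 54)*b3**2)

    assert pi_coeff == Fr(2, 3)*(D**2 + 3*F**2)
    assert K_coeff == (D - F)**2
    assert piD_coeff == -Fr(1, 18)*C**2
    assert piH_coeff == Fr(2, 27)*H**2
print("coefficient rewriting consistent")
```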
In the context of HBCHPT, order $\mathcal{O}(m_q^{1/2})$ corrections to the magnetic moments of octet baryons can be organized as \cite{jen92},
\begin{equation}
\delta \mu_i^{\mathrm{(loop\, 1)}} = \sum_{P=\pi, K}\beta_i^{(P)}I_1(m_P,0,\mu) + \sum_{P=\pi, K}\beta_i^{\prime (P)}I_1(m_P,\Delta,\mu), \label{eq:l1ch}
\end{equation}
where $\beta_i^{(P)}$ and $\beta_i^{\prime (P)}$ are the contributions arising from loop graphs of Fig.~\ref{fig:mmloop1} with intermediate octet and decuplet baryons, respectively. In the limit of vanishing $\Delta$, expressions (\ref{eq:munch}) to (\ref{eq:mul}) and (\ref{eq:musl}) agree in full with the corresponding ones attainable from Eq.~(\ref{eq:l1ch}).
\subsection{\label{sec:mqlnmq}Order $\mathcal{O}(m_q \ln m_q)$ correction}
The one-loop corrections to baryon magnetic moments from the Feynman diagrams depicted in Fig.~\ref{fig:mmloop2} have a nonanalytic dependence on the quark mass of the form $m_q \ln m_q$. The computation of these diagrams requires a rather formidable effort to reduce the operator structures involved. In Refs.~\cite{rfm09,rfm14}, relative corrections to order $1/N_c^4$ in the $1/N_c$ expansion were included. Incorporating all the structures present for $N_c=3$ requires terms of up to relative order $1/N_c^6$. Again, a great deal of computational ease is gained by using some of the operator structures already reduced in the renormalized baryon axial current computed in Ref.~\cite{rfm21}. Other structures appear here for the first time and need to be reduced.
Diagrams \ref{fig:mmloop2}(a)-\ref{fig:mmloop2}(d) present a few interesting features, so they are studied first.
\subsubsection{Diagrams \ref{fig:mmloop2}(a)-\ref{fig:mmloop2}(d)}
The Feynman diagrams depicted in Figs.~\ref{fig:mmloop2}(a)-\ref{fig:mmloop2}(d) contribute to the baryon magnetic moment operator, for $\Delta=0$, as \cite{rfm09,rfm14}
\begin{equation}
\delta M_{\textrm{loop 2ad}}^k = \frac12 \left[A^{ja},\left[A^{jb},M^k \right] \right] \Pi^{ab}. \label{eq:corrloop2}
\end{equation}
The double commutator structure in Eq.~(\ref{eq:corrloop2}) involves three axial current operators, so naively this structure should be order $\mathcal{O}(N_c^3)$. However, it has been explicitly shown \cite{rfm00} that there are large-$N_c$ cancellations in the sum over intermediate baryon states in the loop. The cancellations are a consequence of the spin-flavor symmetry of large-$N_c$ QCD \cite{dm91a,dm91b,djm95} and only occur when the ratios of $F$, $D$, $\mathcal{C}$, and $\mathcal{H}$ are close to their $SU(6)$ values. Therefore, the double commutator structure is at most of order $\mathcal{O}(N_c)$.
On the other hand, $\Pi^{ab}$ is a symmetric tensor which contains meson-loop integrals and decomposes into flavor singlet, flavor $\mathbf{8}$, and flavor $\mathbf{27}$ representations as \cite{jen96}
\begin{equation}
\Pi^{ab} = F_\mathbf{1} \delta^{ab} + F_\mathbf{8} d^{ab8} + F_\mathbf{27} \left[ \delta^{a8} \delta^{b8} - \frac18 \delta^{ab} - \frac35 d^{ab8} d^{888}\right], \label{eq:pisym}
\end{equation}
where
\begin{equation}
F_\mathbf{1} = \frac18 \left[3I_2(m_\pi,0,\mu) + 4I_2(m_K,0,\mu) + I_2(m_\eta,0,\mu) \right], \label{eq:F1}
\end{equation}
\begin{equation}
F_\mathbf{8} = \frac{2\sqrt 3}{5} \left[\frac32 I_2(m_\pi,0,\mu) - I_2(m_K,0,\mu) - \frac12 I_2(m_\eta,0,\mu) \right], \label{eq:F8}
\end{equation}
and
\begin{equation}
F_\mathbf{27} = \frac13 I_2(m_\pi,0,\mu) - \frac43 I_2(m_K,0,\mu) + I_2(m_\eta,0,\mu). \label{eq:F27}
\end{equation}
Equations (\ref{eq:F1})-(\ref{eq:F27}) are linear combinations of $I_2(m_\pi,0,\mu)$, $I_2(m_K,0,\mu)$, and $I_2(m_\eta,0,\mu)$, where $I_2(m,\Delta,\mu)$ represents the loop integral, which can be found in Ref.~\cite{rfm14}. In the degeneracy limit $\Delta\to 0$, this function reduces to
\begin{equation}
I_2(m,0,\mu) = - \frac{m^2}{16\pi^2f^2} \ln{\frac{m^2}{\mu^2}}, \label{eq:fprime}
\end{equation}
where $\mu$ is the scale of dimensional regularization and only nonanalytic terms in $m$ have been retained.
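The flavor decomposition (\ref{eq:pisym}) can be cross-checked on the diagonal entries of $\Pi^{ab}$: for $a=b=3,4,8$ the singlet, octet, and $\mathbf{27}$ pieces must recombine into the single-meson integrals $I_2(m_\pi,0,\mu)$, $I_2(m_K,0,\mu)$, and $I_2(m_\eta,0,\mu)$, as expected for pure pion, kaon, and eta loops. The sketch below represents each $F_{\mathbf{rep}}$ as a vector of $(I_2^\pi, I_2^K, I_2^\eta)$ coefficients, using the standard $SU(3)$ values $d^{338}=1/\sqrt{3}$, $d^{448}=-1/(2\sqrt{3})$, $d^{888}=-1/\sqrt{3}$, with all $\sqrt{3}$ factors combined so the arithmetic stays rational.

```python
from fractions import Fraction as Fr

def lincomb(*terms):
    """Linear combination of (coefficient, 3-vector) pairs."""
    return tuple(sum(c * v[i] for c, v in terms) for i in range(3))

F1 = (Fr(3, 8), Fr(4, 8), Fr(1, 8))     # Eq. (eq:F1)
F8s = (Fr(3, 5), Fr(-2, 5), Fr(-1, 5))  # F_8 / sqrt(3), from Eq. (eq:F8)
F27 = (Fr(1, 3), Fr(-4, 3), Fr(1, 1))   # Eq. (eq:F27)

# Diagonal entries of Eq. (eq:pisym); the 27-plet bracket is
# delta^{a8}delta^{b8} - 1/8 delta^{ab} - 3/5 d^{ab8} d^{888}.
Pi33 = lincomb((1, F1), (1, F8s), (Fr(-1, 8) + Fr(1, 5), F27))
Pi44 = lincomb((1, F1), (Fr(-1, 2), F8s), (Fr(-1, 8) - Fr(1, 10), F27))
Pi88 = lincomb((1, F1), (-1, F8s), (1 - Fr(1, 8) - Fr(1, 5), F27))

assert Pi33 == (1, 0, 0)  # only the pion loop for a = b = 3
assert Pi44 == (0, 1, 0)  # only the kaon loop for a = b = 4
assert Pi88 == (0, 0, 1)  # only the eta loop for a = b = 8
print("Pi^{ab} flavor decomposition consistent")
```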
Expression (\ref{eq:corrloop2}) can be organized in terms of the flavor $\mathbf{1}$, $\mathbf{8}$, and $\mathbf{27}$ contributions as \cite{rfm09}
\begin{equation}
\delta M_{\textrm{loop 2ad}}^k = F_\mathbf{1} M_{\mathbf{1},\textrm{loop 2ad}}^{kQ} + F_\mathbf{8} M_{\mathbf{8},\textrm{loop 2ad}}^{kQ} + F_\mathbf{27} M_{\mathbf{27},\textrm{loop 2ad}}^{kQ}. \label{eq:loop2ad}
\end{equation}
The matrix elements of the operator structures $M_{\mathbf{rep},\textrm{loop 2ad}}^{kQ}$ have the generic forms
\begin{equation}
\delta \mu_{B,\mathbf{1}}^{(\mathrm{loop\, 2ad})} = \sum_{j=1}^{10} \mu_{j,\mathbf{1}}^{(\mathrm{loop\, 2ad})} \langle B|S_j^{3Q}|B\rangle, \label{eq:mmsl2}
\end{equation}
\begin{equation}
\delta \mu_{B,\mathbf{8}}^{(\mathrm{loop\, 2ad})} = \sum_{j=1}^{41} \mu_{j,\mathbf{8}}^{(\mathrm{loop\, 2ad})} \langle B|O_j^{3Q}|B\rangle, \label{eq:mmol2}
\end{equation}
\begin{equation}
\delta \mu_{B,\mathbf{27}}^{(\mathrm{loop\, 2ad})} = \sum_{j=1}^{167} \mu_{j,\mathbf{27}}^{(\mathrm{loop\, 2ad})} \langle B|T_j^{3Q}|B\rangle, \label{eq:mmtl2}
\end{equation}
where, as before, $\mu_{j,\mathbf{rep}}^{(\mathrm{loop\, 2ad})}$ are some coefficients, the operator bases $\{S_i\}$ and $\{O_j\}$ are listed in (\ref{eq:basis1}) and (\ref{eq:basis8}), respectively, and the operator basis $\{T_k\}$ is given by
\begin{eqnarray}
\nonumber
\begin{array}{ll}
T_{1}^{kc} = f^{c8e} f^{8eg} G^{kg}, &
T_{2}^{kc} = d^{c8e} d^{8eg} G^{kg}, \\
T_{3}^{kc} = \delta^{c8} G^{k8}, &
T_{4}^{kc} = d^{c88} J^k, \\
T_{5}^{kc} = f^{c8e} f^{8eg} \mathcal{D}_2^{kg}, &
T_{6}^{kc} = d^{c8e} d^{8eg} \mathcal{D}_2^{kg}, \\
T_{7}^{kc} = d^{ceg} d^{88e} \mathcal{D}_2^{kg}, &
T_{8}^{kc} = \delta^{c8} \mathcal{D}_2^{k8}, \\
T_{9}^{kc} = d^{c8e} \{G^{ke},T^8\}, &
T_{10}^{kc} = d^{88e} \{G^{ke},T^c\}, \\
T_{11}^{kc} = i \epsilon^{kim} f^{c8e} f^{8eg} \{J^i,G^{mg}\}, &
T_{12}^{kc} = i f^{c8e} [G^{ke},\{J^r,G^{r8}\}], \\
T_{13}^{kc} = i f^{c8e} [G^{k8},\{J^r,G^{re}\}], &
T_{14}^{kc} = f^{c8e} f^{8eg} \mathcal{D}_3^{kg}, \\
T_{15}^{kc} = d^{c8e} d^{8eg} \mathcal{D}_3^{kg}, &
T_{16}^{kc} = d^{ceg} d^{88e} \mathcal{D}_3^{kg}, \\
T_{17}^{kc} = i f^{c8e} d^{8eg} \mathcal{D}_3^{kg}, &
T_{18}^{kc} = i d^{c8e} f^{8eg} \mathcal{D}_3^{kg}, \\
T_{19}^{kc} = \delta^{c8} \mathcal{D}_3^{k8}, &
T_{20}^{kc} = f^{c8e} f^{8eg} \mathcal{O}_3^{kg}, \\
T_{21}^{kc} = d^{c8e} d^{8eg} \mathcal{O}_3^{kg}, &
T_{22}^{kc} = d^{ceg} d^{88e} \mathcal{O}_3^{kg}, \\
T_{23}^{kc} = \delta^{c8} \mathcal{O}_3^{k8}, &
T_{24}^{kc} = d^{c88} \{J^2,J^k\}, \\
T_{25}^{kc} = \{G^{kc},\{T^8,T^8\}\}, &
T_{26}^{kc} = \{G^{k8},\{T^c,T^8\}\}, \\
T_{27}^{kc} = \{G^{kc},\{G^{r8},G^{r8}\}\}, &
T_{28}^{kc} = \{G^{k8},\{G^{rc},G^{r8}\}\}, \\
T_{29}^{kc} = d^{c8e} \{J^k,\{G^{re},G^{r8}\}\}, &
T_{30}^{kc} = d^{88e} \{J^k,\{G^{rc},G^{re}\}\}, \\
T_{31}^{kc} = d^{c8e} \{G^{ke},\{J^r,G^{r8}\}\}, &
T_{32}^{kc} = d^{c8e} \{G^{k8},\{J^r,G^{re}\}\}, \\
T_{33}^{kc} = d^{88e} \{G^{kc},\{J^r,G^{re}\}\}, &
T_{34}^{kc} = d^{88e} \{G^{ke},\{J^r,G^{rc}\}\}, \\
T_{35}^{kc} = \epsilon^{kim} f^{c8e} \{T^e,\{J^i,G^{m8}\}\}, &
T_{36}^{kc} = \epsilon^{kim} f^{c8e} \{T^8,\{J^i,G^{me}\}\}, \\
T_{37}^{kc} = f^{c8e} f^{8eg} \mathcal{D}_4^{kg}, &
T_{38}^{kc} = d^{c8e} d^{8eg} \mathcal{D}_4^{kg}, \\
T_{39}^{kc} = d^{ceg} d^{88e} \mathcal{D}_4^{kg}, &
T_{40}^{kc} = i f^{c8e} d^{8eg} \mathcal{D}_4^{kg}, \\
T_{41}^{kc} = \delta^{c8} \mathcal{D}_4^{k8}, &
T_{42}^{kc} = d^{c8e} \{J^2,\{G^{ke},T^8\}\}, \\
T_{43}^{kc} = d^{88e} \{J^2,\{G^{ke},T^c\}\}, &
T_{44}^{kc} = i \epsilon^{kim} f^{c8e} f^{8eg} \{J^2,\{J^i,G^{mg}\}\}, \\
T_{45}^{kc} = i \epsilon^{kim} \delta^{c8} \{J^2,\{J^i,G^{m8}\}\}, &
T_{46}^{kc} = \{\mathcal{D}_2^{kc},\{T^8,T^8\}\}, \\
T_{47}^{kc} = \{\mathcal{D}_2^{kc},\{G^{r8},G^{r8}\}\}, &
T_{48}^{kc} = \{\mathcal{D}_2^{k8},\{G^{rc},G^{r8}\}\}, \\
T_{49}^{kc} = d^{c8e} \{\mathcal{D}_2^{k8},\{J^r,G^{re}\}\}, &
T_{50}^{kc} = d^{88e} \{\mathcal{D}_2^{kc},\{J^r,G^{re}\}\}, \\
T_{51}^{kc} = i f^{c8e} \{\mathcal{D}_2^{ke},\{J^r,G^{r8}\}\}, &
T_{52}^{kc} = \{\{J^r,G^{rc}\},\{G^{k8},T^8\}\}, \\
T_{53}^{kc} = \{\{J^r,G^{r8}\},\{G^{kc},T^8\}\}, &
T_{54}^{kc} = \{\{J^r,G^{r8}\},\{G^{k8},T^c\}\}, \\
T_{55}^{kc} = i \epsilon^{kim} \{\{J^i,G^{m8}\},\{G^{r8},G^{rc}\}\}, &
T_{56}^{kc} = i \epsilon^{kim} \{\{J^i,G^{mc}\},\{G^{r8},G^{r8}\}\}, \\
T_{57}^{kc} = i \epsilon^{rim} \{G^{k8},\{J^r,\{G^{ic},G^{m8}\}\}\}, &
T_{58}^{kc} = i \epsilon^{rim} d^{c8e} \{J^k,\{J^r,\{G^{i8},G^{me}\}\}\}, \\
T_{59}^{kc} = i \epsilon^{kim} f^{cae} f^{8eb} \{\{J^i,G^{m8}\},\{T^a,T^b\}\}, &
T_{60}^{kc} = i f^{c8e} \{J^k,[\{J^i,G^{ie}\},\{J^r,G^{r8}\}]\}, \\
T_{61}^{kc} = i f^{c8e} \{\{J^r,G^{re}\},[J^2,G^{k8}]\}, &
T_{62}^{kc} = i f^{c8e} \{\{J^r,G^{r8}\},[J^2,G^{ke}]\}, \\
T_{63}^{kc} = i f^{c8e} \{J^2,[G^{ke},\{J^r,G^{r8}\}]\}, &
T_{64}^{kc} = i f^{c8e} \{J^2,[G^{k8},\{J^r,G^{re}\}]\}, \\
T_{65}^{kc} = d^{c8e} \{J^2,[G^{ke},\{J^r,G^{r8}\}]\}, &
T_{66}^{kc} = d^{c8e} \{J^2,[G^{k8},\{J^r,G^{re}\}]\}, \\
T_{67}^{kc} = [G^{kc},\{\{J^m,G^{m8}\},\{J^r,G^{r8}\}\}], &
T_{68}^{kc} = [G^{k8},\{\{J^m,G^{m8}\},\{J^r,G^{rc}\}\}], \\
T_{69}^{kc} = \{\{J^m,G^{mc}\},[G^{k8},\{J^r,G^{r8}\}]\}, &
T_{70}^{kc} = i \epsilon^{kim} f^{cea} f^{e8b} \{\{J^i,G^{m8}\},\{G^{ra},G^{rb}\}\}, \\
T_{71}^{kc} = f^{c8e} f^{8eg} \mathcal{D}_5^{kg}, &
T_{72}^{kc} = d^{c8e} d^{8eg} \mathcal{D}_5^{kg}, \\
T_{73}^{kc} = d^{ceg} d^{88e} \mathcal{D}_5^{kg}, &
T_{74}^{kc} = i f^{c8e} d^{8eg} \mathcal{D}_5^{kg}, \\
T_{75}^{kc} = i d^{c8e} f^{8eg} \mathcal{D}_5^{kg}, &
T_{76}^{kc} = \delta^{c8} \mathcal{D}_5^{k8}, \\
T_{77}^{kc} = f^{c8e} f^{8eg} \mathcal{O}_5^{kg}, &
T_{78}^{kc} = d^{c8e} d^{8eg} \mathcal{O}_5^{kg}, \\
T_{79}^{kc} = d^{ceg} d^{88e} \mathcal{O}_5^{kg}, &
T_{80}^{kc} = \delta^{c8} \mathcal{O}_5^{k8}, \\
T_{81}^{kc} = d^{c88} \{J^2,\{J^2,J^k\}\}, &
T_{82}^{kc} = \{J^2,\{G^{kc},\{T^8,T^8\}\}\}, \\
T_{83}^{kc} = \{J^2,\{G^{k8},\{T^c,T^8\}\}\}, &
T_{84}^{kc} = \{J^2,\{G^{kc},\{G^{r8},G^{r8}\}\}\}, \\
T_{85}^{kc} = \{J^2,\{G^{k8},\{G^{rc},G^{r8}\}\}\}, &
T_{86}^{kc} = d^{c8e} \{J^2,\{J^k,\{G^{re},G^{r8}\}\}\}, \\
T_{87}^{kc} = d^{88e} \{J^2,\{J^k,\{G^{rc},G^{re}\}\}\}, &
T_{88}^{kc} = d^{c8e} \{J^2,\{G^{ke},\{J^r,G^{r8}\}\}\}, \\
\end{array}
\end{eqnarray}
\begin{eqnarray}
\label{eq:basis27}
\begin{array}{ll}
T_{89}^{kc} = d^{c8e} \{J^2,\{G^{k8},\{J^r,G^{re}\}\}\}, &
T_{90}^{kc} = d^{88e} \{J^2,\{G^{kc},\{J^r,G^{re}\}\}\}, \\
T_{91}^{kc} = d^{88e} \{J^2,\{G^{ke},\{J^r,G^{rc}\}\}\}, &
T_{92}^{kc} = \epsilon^{kim} f^{c8e} \{J^2,\{T^e,\{J^i,G^{m8}\}\}\}, \\
T_{93}^{kc} = \epsilon^{kim} f^{c8e} \{J^2,\{T^8,\{J^i,G^{me}\}\}\}, &
T_{94}^{kc} = \{G^{kc},\{\{J^m,G^{m8}\},\{J^r,G^{r8}\}\}\}, \\
T_{95}^{kc} = \{G^{k8},\{\{J^m,G^{m8}\},\{J^r,G^{rc}\}\}\}, &
T_{96}^{kc} = \{J^k,\{\{J^m,G^{mc}\},\{G^{r8},G^{r8}\}\}\}, \\
T_{97}^{kc} = \{J^k,\{\{J^m,G^{m8}\},\{G^{r8},G^{rc}\}\}\}, &
T_{98}^{kc} = \{\mathcal{D}_2^{kc},\{T^8,\{J^r,G^{r8}\}\}\}, \\
T_{99}^{kc} = \{\mathcal{D}_2^{k8},\{T^8,\{J^r,G^{rc}\}\}\}, &
T_{100}^{kc} = d^{c8e} \{\mathcal{D}_3^{ke},\{J^r,G^{r8}\}\}, \\
T_{101}^{kc} = d^{88e} \{\mathcal{D}_3^{kc},\{J^r,G^{re}\}\}, &
T_{102}^{kc} = \epsilon^{kim} f^{ab8} \{\{J^i,G^{m8}\},\{T^a,\{G^{rb},G^{rc}\}\}\}, \\
T_{103}^{kc} = i \epsilon^{kim} d^{c8e} \{J^2,\{T^e,\{J^i,G^{m8}\}\}\}, &
T_{104}^{kc} = i \epsilon^{kil} [\{J^i,G^{l8}\},\{\{J^m,G^{m8}\},\{J^r,G^{rc}\}\}], \\
T_{105}^{kc} = f^{c8e} f^{8eg} \mathcal{D}_6^{kg}, &
T_{106}^{kc} = d^{c8e} d^{8eg} \mathcal{D}_6^{kg}, \\
T_{107}^{kc} = d^{ceg} d^{88e} \mathcal{D}_6^{kg}, &
T_{108}^{kc} = i f^{c8e} d^{8eg} \mathcal{D}_6^{kg}, \\
T_{109}^{kc} = \delta^{c8} \mathcal{D}_6^{k8}, &
T_{110}^{kc} = d^{c8e} \{J^2,\{J^2,\{G^{ke},T^8\}\}\}, \\
T_{111}^{kc} = d^{88e} \{J^2,\{J^2,\{G^{ke},T^c\}\}\}, &
T_{112}^{kc} = i \epsilon^{kim} \delta^{c8} \{J^2,\{J^2,\{J^i,G^{m8}\}\}\}, \\
T_{113}^{kc} = \{J^2,\{\mathcal{D}_2^{kc},\{G^{r8},G^{r8}\}\}\}, &
T_{114}^{kc} = \{J^2,\{\mathcal{D}_2^{k8},\{G^{rc},G^{r8}\}\}\}, \\
T_{115}^{kc} = d^{c8e} \{J^2,\{\mathcal{D}_2^{k8},\{J^r,G^{re}\}\}\}, &
T_{116}^{kc} = d^{88e} \{J^2,\{\mathcal{D}_2^{kc},\{J^r,G^{re}\}\}\}, \\
T_{117}^{kc} = i f^{c8e} \{J^2,\{\mathcal{D}_2^{ke},\{J^r,G^{r8}\}\}\}, &
T_{118}^{kc} = \{J^2,\{\{J^r,G^{rc}\},\{G^{k8},T^8\}\}\}, \\
T_{119}^{kc} = \{J^2,\{\{J^r,G^{r8}\},\{G^{kc},T^8\}\}\}, &
T_{120}^{kc} = \{J^2,\{\{J^r,G^{r8}\},\{G^{k8},T^c\}\}\}, \\
T_{121}^{kc} = i \epsilon^{kim} \{J^2,\{\{T^c,T^8\},\{J^i,G^{m8}\}\}\}, &
T_{122}^{kc} = i \epsilon^{kim} \{J^2,\{\{G^{rc},G^{r8}\},\{J^i,G^{m8}\}\}\}, \\
T_{123}^{kc} = i \epsilon^{kim} \{J^2,\{\{G^{r8},G^{r8}\},\{J^i,G^{mc}\}\}\}, &
T_{124}^{kc} = i \epsilon^{rim} \{J^2,\{G^{k8},\{J^r,\{G^{ic},G^{m8}\}\}\}\}, \\
T_{125}^{kc} = i \epsilon^{rim} d^{c8e} \{J^2,\{J^k,\{J^r,\{G^{i8},G^{me}\}\}\}\}, &
T_{126}^{kc} = i \epsilon^{kim} f^{cae} f^{8eb} \{J^2,\{\{J^i,G^{m8}\},\{T^a,T^b\}\}\}, \\
T_{127}^{kc} = i f^{c8e} \{J^2,\{J^k,[\{J^i,G^{ie}\},\{J^r,G^{r8}\}]\}\}, &
T_{128}^{kc} = i f^{c8e} \{J^2,\{\{J^r,G^{re}\},[J^2,G^{k8}]\}\}, \\
T_{129}^{kc} = i f^{c8e} \{J^2,\{\{J^r,G^{r8}\},[J^2,G^{ke}]\}\}, &
T_{130}^{kc} = i f^{c8e} \{J^2,\{J^2,[G^{ke},\{J^r,G^{r8}\}]\}\}, \\
T_{131}^{kc} = i f^{c8e} \{J^2,\{J^2,[G^{k8},\{J^r,G^{re}\}]\}\}, &
T_{132}^{kc} = \{\mathcal{D}_2^{kc},\{\{J^m,G^{m8}\},\{J^r,G^{r8}\}\}\}, \\
T_{133}^{kc} = \{\mathcal{D}_2^{k8},\{\{J^m,G^{mc}\},\{J^r,G^{r8}\}\}\}, &
T_{134}^{kc} = i \epsilon^{kim} [\{T^8,\{J^r,G^{r8}\}\},\{J^2,\{J^i,G^{mc}\}\}], \\
T_{135}^{kc} = d^{c8e} \{J^2,\{J^2,[G^{ke},\{J^r,G^{r8}\}]\}\}, &
T_{136}^{kc} = d^{c8e} \{J^2,\{J^2,[G^{k8},\{J^r,G^{re}\}]\}\}, \\
T_{137}^{kc} = \{J^2,[G^{kc},\{\{J^m,G^{m8}\},\{J^r,G^{r8}\}\}]\}, &
T_{138}^{kc} = \{J^2,[G^{k8},\{\{J^m,G^{m8}\},\{J^r,G^{rc}\}\}]\}, \\
T_{139}^{kc} = \{J^2,\{\{J^m,G^{mc}\},[G^{k8},\{J^r,G^{r8}\}]\}\}, &
T_{140}^{kc} = f^{c8e} f^{8eg} \mathcal{D}_7^{kg}, \\
T_{141}^{kc} = d^{c8e} d^{8eg} \mathcal{D}_7^{kg}, &
T_{142}^{kc} = d^{ceg} d^{88e} \mathcal{D}_7^{kg}, \\
T_{143}^{kc} = \delta^{c8} \mathcal{D}_7^{k8}, &
T_{144}^{kc} = f^{c8e} f^{8eg} \mathcal{O}_7^{kg}, \\
T_{145}^{kc} = d^{c8e} d^{8eg} \mathcal{O}_7^{kg}, &
T_{146}^{kc} = d^{ceg} d^{88e} \mathcal{O}_7^{kg}, \\
T_{147}^{kc} = \delta^{c8} \mathcal{O}_7^{k8}, &
T_{148}^{kc} = d^{c88} \{J^2,\{J^2,\{J^2,J^k\}\}\}, \\
T_{149}^{kc} = \{J^2,\{J^2,\{G^{kc},\{G^{r8},G^{r8}\}\}\}\}, &
T_{150}^{kc} = \{J^2,\{J^2,\{G^{k8},\{G^{rc},G^{r8}\}\}\}\}, \\
T_{151}^{kc} = d^{c8e} \{J^2,\{J^2,\{J^k,\{G^{re},G^{r8}\}\}\}\}, &
T_{152}^{kc} = d^{88e} \{J^2,\{J^2,\{J^k,\{G^{rc},G^{re}\}\}\}\}, \\
T_{153}^{kc} = d^{c8e} \{J^2,\{J^2,\{G^{ke},\{J^r,G^{r8}\}\}\}\}, &
T_{154}^{kc} = d^{c8e} \{J^2,\{J^2,\{G^{k8},\{J^r,G^{re}\}\}\}\}, \\
T_{155}^{kc} = d^{88e} \{J^2,\{J^2,\{G^{kc},\{J^r,G^{re}\}\}\}\}, &
T_{156}^{kc} = d^{88e} \{J^2,\{J^2,\{G^{ke},\{J^r,G^{rc}\}\}\}\}, \\
T_{157}^{kc} = \epsilon^{kim} f^{c8e} \{J^2,\{J^2,\{T^e,\{J^i,G^{m8}\}\}\}\}, &
T_{158}^{kc} = \{J^2,\{G^{kc},\{\{J^m,G^{m8}\},\{J^r,G^{r8}\}\}\}\}, \\
T_{159}^{kc} = \{J^2,\{G^{k8},\{\{J^m,G^{m8}\},\{J^r,G^{rc}\}\}\}\}, &
T_{160}^{kc} = \{J^2,\{J^k,\{\{J^m,G^{mc}\},\{G^{r8},G^{r8}\}\}\}\}, \\
T_{161}^{kc} = \{J^2,\{J^k,\{\{J^m,G^{m8}\},\{G^{r8},G^{rc}\}\}\}\}, &
T_{162}^{kc} = d^{c8e} \{J^2,\{\mathcal{D}_3^{ke},\{J^r,G^{r8}\}\}\}, \\
T_{163}^{kc} = d^{88e} \{J^2,\{\mathcal{D}_3^{kc},\{J^r,G^{re}\}\}\}, &
T_{164}^{kc} = \epsilon^{kim} f^{ab8} \{J^2,\{\{J^i,G^{m8}\},\{T^a,\{G^{rb},G^{rc}\}\}\}\}, \\
T_{165}^{kc} = i \epsilon^{kil} \{J^2,[\{J^i,G^{l8}\},\{\{J^m,G^{m8}\},\{J^r,G^{rc}\}\}]\}, &
T_{166}^{kc} = \{\mathcal{D}_3^{kc},\{\{J^m,G^{m8}\},\{J^r,G^{r8}\}\}\}, \\
T_{167}^{kc} = i \epsilon^{kil} \{J^2,\{J^i,\{J^r,[G^{l8},\{G^{r8},\{J^m,G^{mc}\}\}]\}\}\}. &
\end{array}
\end{eqnarray}
The corresponding nontrivial matrix elements of the operators in basis (\ref{eq:basis27}) are listed in Tables \ref{t:mm2733O}--\ref{t:mm2738TO}.
\begin{table*}
\caption{\label{t:mm2733O}Nontrivial matrix elements of the operators involved in the magnetic moments of octet baryons: flavor $\mathbf{27}$ representation.}
\begin{ruledtabular}
\begin{tabular}{lccccccccc}
& $\displaystyle n$ & $\displaystyle p$ & $\displaystyle \Sigma^-$ & $\displaystyle \Sigma^0$ & $\displaystyle \Sigma^+$ & $\displaystyle \Xi^-$ & $\displaystyle \Xi^0$ & $\displaystyle \Lambda$ & $\displaystyle \Lambda\Sigma^0$ \\[2mm]
\hline
$\langle T_{2}^{33} \rangle$ & $-\frac{5}{36}$ & $\frac{5}{36}$ & $-\frac19$ & $0$ & $\frac19$ & $\frac{1}{36}$ & $-\frac{1}{36}$ & $0$ & $\frac{1}{6 \sqrt{3}}$ \\
$\langle T_{3}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{4}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{6}^{33} \rangle$ & $-\frac{1}{12}$ & $\frac{1}{12}$ & $-\frac16$ & $0$ & $\frac16$ & $-\frac{1}{12}$ & $\frac{1}{12}$ & $0$ & $0$ \\
$\langle T_{7}^{33} \rangle$ & $\frac{1}{12}$ & $-\frac{1}{12}$ & $\frac16$ & $0$ & $-\frac16$ & $\frac{1}{12}$ & $-\frac{1}{12}$ & $0$ & $0$ \\
$\langle T_{8}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{9}^{33} \rangle$ & $-\frac{5}{12}$ & $\frac{5}{12}$ & $0$ & $0$ & $0$ & $-\frac{1}{12}$ & $\frac{1}{12}$ & $0$ & $0$ \\
$\langle T_{10}^{33} \rangle$ & $\frac{1}{12}$ & $-\frac{1}{12}$ & $\frac13$ & $0$ & $-\frac13$ & $-\frac14$ & $\frac14$ & $0$ & $0$ \\
$\langle T_{15}^{33} \rangle$ & $-\frac{5}{12}$ & $\frac{5}{12}$ & $-\frac13$ & $0$ & $\frac13$ & $\frac{1}{12}$ & $-\frac{1}{12}$ & $0$ & $\frac{1}{2 \sqrt{3}}$ \\
$\langle T_{16}^{33} \rangle$ & $\frac{5}{12}$ & $-\frac{5}{12}$ & $\frac13$ & $0$ & $-\frac13$ & $-\frac{1}{12}$ & $\frac{1}{12}$ & $0$ & $-\frac{1}{2 \sqrt{3}}$ \\
$\langle T_{19}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{25}^{33} \rangle$ & $-\frac54$ & $\frac54$ & $0$ & $0$ & $0$ & $\frac14$ & $-\frac14$ & $0$ & $0$ \\
$\langle T_{26}^{33} \rangle$ & $-\frac14$ & $\frac14$ & $0$ & $0$ & $0$ & $-\frac34$ & $\frac34$ & $0$ & $0$ \\
$\langle T_{27}^{33} \rangle$ & $-\frac{5}{48}$ & $\frac{5}{48}$ & $-1$ & $0$ & $1$ & $\frac{17}{48}$ & $-\frac{17}{48}$ & $0$ & $\frac{1}{\sqrt{3}}$ \\
$\langle T_{28}^{33} \rangle$ & $-\frac{5}{48}$ & $\frac{5}{48}$ & $-\frac23$ & $0$ & $\frac23$ & $\frac{11}{16}$ & $-\frac{11}{16}$ & $0$ & $0$ \\
$\langle T_{29}^{33} \rangle$ & $-\frac{5}{24}$ & $\frac{5}{24}$ & $-\frac23$ & $0$ & $\frac23$ & $-\frac{11}{24}$ & $\frac{11}{24}$ & $0$ & $-\frac{1}{2 \sqrt{3}}$ \\
$\langle T_{30}^{33} \rangle$ & $\frac{5}{24}$ & $-\frac{5}{24}$ & $\frac23$ & $0$ & $-\frac23$ & $\frac{11}{24}$ & $-\frac{11}{24}$ & $0$ & $\frac{1}{2 \sqrt{3}}$ \\
$\langle T_{31}^{33} \rangle$ & $-\frac{5}{24}$ & $\frac{5}{24}$ & $-\frac13$ & $0$ & $\frac13$ & $-\frac18$ & $\frac18$ & $0$ & $0$ \\
$\langle T_{32}^{33} \rangle$ & $-\frac{5}{24}$ & $\frac{5}{24}$ & $-\frac13$ & $0$ & $\frac13$ & $-\frac18$ & $\frac18$ & $0$ & $0$ \\
$\langle T_{33}^{33} \rangle$ & $\frac{5}{24}$ & $-\frac{5}{24}$ & $\frac13$ & $0$ & $-\frac13$ & $\frac18$ & $-\frac18$ & $0$ & $0$ \\
$\langle T_{34}^{33} \rangle$ & $\frac{5}{24}$ & $-\frac{5}{24}$ & $\frac13$ & $0$ & $-\frac13$ & $\frac18$ & $-\frac18$ & $0$ & $0$ \\
$\langle T_{46}^{33} \rangle$ & $-\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $\frac34$ & $0$ & $0$ \\
$\langle T_{47}^{33} \rangle$ & $-\frac{1}{16}$ & $\frac{1}{16}$ & $-\frac32$ & $0$ & $\frac32$ & $-\frac{17}{16}$ & $\frac{17}{16}$ & $0$ & $0$ \\
$\langle T_{48}^{33} \rangle$ & $-\frac{5}{16}$ & $\frac{5}{16}$ & $0$ & $0$ & $0$ & $\frac{11}{16}$ & $-\frac{11}{16}$ & $0$ & $0$ \\
$\langle T_{49}^{33} \rangle$ & $-\frac58$ & $\frac58$ & $0$ & $0$ & $0$ & $-\frac18$ & $\frac18$ & $0$ & $0$ \\
$\langle T_{50}^{33} \rangle$ & $\frac18$ & $-\frac18$ & $\frac12$ & $0$ & $-\frac12$ & $-\frac38$ & $\frac38$ & $0$ & $0$ \\
$\langle T_{52}^{33} \rangle$ & $-\frac58$ & $\frac58$ & $0$ & $0$ & $0$ & $\frac38$ & $-\frac38$ & $0$ & $0$ \\
$\langle T_{53}^{33} \rangle$ & $-\frac58$ & $\frac58$ & $0$ & $0$ & $0$ & $\frac38$ & $-\frac38$ & $0$ & $0$ \\
$\langle T_{54}^{33} \rangle$ & $-\frac18$ & $\frac18$ & $-1$ & $0$ & $1$ & $-\frac98$ & $\frac98$ & $0$ & $0$ \\
$\langle T_{58}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{\sqrt{3}}{2}$ \\
$\langle T_{65}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{\sqrt{3}}{4}$ \\
$\langle T_{66}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\frac{\sqrt{3}}{4}$ \\
$\langle T_{94}^{33} \rangle$ & $-\frac{5}{16}$ & $\frac{5}{16}$ & $-1$ & $0$ & $1$ & $\frac{9}{16}$ & $-\frac{9}{16}$ & $0$ & $\frac{\sqrt{3}}{2}$ \\
$\langle T_{95}^{33} \rangle$ & $-\frac{5}{16}$ & $\frac{5}{16}$ & $-1$ & $0$ & $1$ & $\frac{9}{16}$ & $-\frac{9}{16}$ & $0$ & $0$ \\
$\langle T_{96}^{33} \rangle$ & $-\frac{5}{16}$ & $\frac{5}{16}$ & $-3$ & $0$ & $3$ & $\frac{17}{16}$ & $-\frac{17}{16}$ & $0$ & $\sqrt{3}$ \\
$\langle T_{97}^{33} \rangle$ & $-\frac{5}{16}$ & $\frac{5}{16}$ & $-2$ & $0$ & $2$ & $\frac{33}{16}$ & $-\frac{33}{16}$ & $0$ & $0$ \\
$\langle T_{98}^{33} \rangle$ & $-\frac38$ & $\frac38$ & $0$ & $0$ & $0$ & $-\frac98$ & $\frac98$ & $0$ & $0$ \\
$\langle T_{99}^{33} \rangle$ & $-\frac{15}{8}$ & $\frac{15}{8}$ & $0$ & $0$ & $0$ & $\frac38$ & $-\frac38$ & $0$ & $0$ \\
$\langle T_{100}^{33} \rangle$ & $-\frac58$ & $\frac58$ & $-1$ & $0$ & $1$ & $-\frac38$ & $\frac38$ & $0$ & $0$ \\
$\langle T_{101}^{33} \rangle$ & $\frac58$ & $-\frac58$ & $1$ & $0$ & $-1$ & $\frac38$ & $-\frac38$ & $0$ & $0$ \\
$\langle T_{120}^{33} \rangle$ & $-\frac{3}{16}$ & $\frac{3}{16}$ & $-\frac32$ & $0$ & $\frac32$ & $-\frac{27}{16}$ & $\frac{27}{16}$ & $0$ & $0$ \\
$\langle T_{132}^{33} \rangle$ & $-\frac{3}{16}$ & $\frac{3}{16}$ & $-\frac32$ & $0$ & $\frac32$ & $-\frac{27}{16}$ & $\frac{27}{16}$ & $0$ & $0$ \\
$\langle T_{133}^{33} \rangle$ & $-\frac{15}{16}$ & $\frac{15}{16}$ & $0$ & $0$ & $0$ & $\frac{9}{16}$ & $-\frac{9}{16}$ & $0$ & $0$ \\
$\langle T_{166}^{33} \rangle$ & $-\frac{15}{16}$ & $\frac{15}{16}$ & $-3$ & $0$ & $3$ & $\frac{27}{16}$ & $-\frac{27}{16}$ & $0$ & $\frac{3\sqrt{3}}{2}$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
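As an illustrative aside (not part of the original analysis), the $\langle T_i^{33} \rangle$ entries in the table above exhibit the antisymmetry expected of an isovector operator: within each isospin multiplet the $n$/$p$, $\Sigma^-$/$\Sigma^+$, and $\Xi^-$/$\Xi^0$ entries occur with opposite signs. A minimal sketch that spot-checks this pattern on a few rows transcribed from the table (the operator indices and values are taken verbatim from it):

```python
from fractions import Fraction as F

# Selected <T_i^{33}> octet entries transcribed from the table above;
# column order: (n, p, Sigma-, Sigma+, Xi-, Xi0)
entries = {
    2:  (F(-5, 36), F(5, 36), F(-1, 9), F(1, 9), F(1, 36), F(-1, 36)),
    6:  (F(-1, 12), F(1, 12), F(-1, 6), F(1, 6), F(-1, 12), F(1, 12)),
    27: (F(-5, 48), F(5, 48), F(-1, 1), F(1, 1), F(17, 48), F(-17, 48)),
    96: (F(-5, 16), F(5, 16), F(-3, 1), F(3, 1), F(17, 16), F(-17, 16)),
}

# The flavor index c=3 picks out the isovector component, so the matrix
# elements within each isomultiplet should come in sign-flipped pairs.
for i, (n, p, sm, sp, xm, x0) in entries.items():
    assert n == -p and sm == -sp and xm == -x0, f"T_{i} breaks the pattern"
print("isovector antisymmetry holds for the sampled operators")
```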
\begin{table*}
\caption{\label{t:mm2738O}Nontrivial matrix elements of the operators involved in the magnetic moments of octet baryons: flavor $\mathbf{27}$ representation. The entries correspond to $\sqrt{3} \langle T_i^{38} \rangle$.}
\begin{ruledtabular}
\begin{tabular}{lccccccccc}
& $\displaystyle n$ & $\displaystyle p$ & $\displaystyle \Sigma^-$ & $\displaystyle \Sigma^0$ & $\displaystyle \Sigma^+$ & $\displaystyle \Xi^-$ & $\displaystyle \Xi^0$ & $\displaystyle \Lambda$ & $\displaystyle \Lambda\Sigma^0$ \\[2mm]
\hline
$\langle T_{2}^{38} \rangle$ & $\frac{1}{12}$ & $\frac{1}{12}$ & $\frac16$ & $\frac16$ & $\frac16$ & $-\frac14$ & $-\frac14$ & $-\frac16$ & $0$ \\
$\langle T_{3}^{38} \rangle$ & $\frac14$ & $\frac14$ & $\frac12$ & $\frac12$ & $\frac12$ & $-\frac34$ & $-\frac34$ & $-\frac12$ & $0$ \\
$\langle T_{4}^{38} \rangle$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $0$ \\
$\langle T_{6}^{38} \rangle$ & $\frac14$ & $\frac14$ & $0$ & $0$ & $0$ & $-\frac14$ & $-\frac14$ & $0$ & $0$ \\
$\langle T_{7}^{38} \rangle$ & $\frac14$ & $\frac14$ & $0$ & $0$ & $0$ & $-\frac14$ & $-\frac14$ & $0$ & $0$ \\
$\langle T_{8}^{38} \rangle$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $0$ & $0$ \\
$\langle T_{9}^{38} \rangle$ & $-\frac14$ & $-\frac14$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $0$ & $0$ \\
$\langle T_{10}^{38} \rangle$ & $-\frac14$ & $-\frac14$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $0$ & $0$ \\
$\langle T_{15}^{38} \rangle$ & $\frac14$ & $\frac14$ & $\frac12$ & $\frac12$ & $\frac12$ & $-\frac34$ & $-\frac34$ & $-\frac12$ & $0$ \\
$\langle T_{16}^{38} \rangle$ & $\frac14$ & $\frac14$ & $\frac12$ & $\frac12$ & $\frac12$ & $-\frac34$ & $-\frac34$ & $-\frac12$ & $0$ \\
$\langle T_{19}^{38} \rangle$ & $\frac34$ & $\frac34$ & $\frac32$ & $\frac32$ & $\frac32$ & $-\frac94$ & $-\frac94$ & $-\frac32$ & $0$ \\
$\langle T_{25}^{38} \rangle$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $0$ & $0$ \\
$\langle T_{26}^{38} \rangle$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $0$ & $0$ \\
$\langle T_{27}^{38} \rangle$ & $\frac{1}{16}$ & $\frac{1}{16}$ & $\frac32$ & $\frac32$ & $\frac32$ & $-\frac{51}{16}$ & $-\frac{51}{16}$ & $-\frac12$ & $0$ \\
$\langle T_{28}^{38} \rangle$ & $\frac{1}{16}$ & $\frac{1}{16}$ & $\frac32$ & $\frac32$ & $\frac32$ & $-\frac{51}{16}$ & $-\frac{51}{16}$ & $-\frac12$ & $0$ \\
$\langle T_{29}^{38} \rangle$ & $-\frac18$ & $-\frac18$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac{17}{8}$ & $-\frac{17}{8}$ & $-\frac12$ & $0$ \\
$\langle T_{30}^{38} \rangle$ & $-\frac18$ & $-\frac18$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac{17}{8}$ & $-\frac{17}{8}$ & $-\frac12$ & $0$ \\
$\langle T_{31}^{38} \rangle$ & $-\frac18$ & $-\frac18$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac98$ & $-\frac98$ & $-\frac12$ & $0$ \\
$\langle T_{32}^{38} \rangle$ & $-\frac18$ & $-\frac18$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac98$ & $-\frac98$ & $-\frac12$ & $0$ \\
$\langle T_{33}^{38} \rangle$ & $-\frac18$ & $-\frac18$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac98$ & $-\frac98$ & $-\frac12$ & $0$ \\
$\langle T_{34}^{38} \rangle$ & $-\frac18$ & $-\frac18$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $-\frac98$ & $-\frac98$ & $-\frac12$ & $0$ \\
$\langle T_{46}^{38} \rangle$ & $\frac94$ & $\frac94$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $0$ & $0$ \\
$\langle T_{47}^{38} \rangle$ & $\frac{3}{16}$ & $\frac{3}{16}$ & $0$ & $0$ & $0$ & $-\frac{51}{16}$ & $-\frac{51}{16}$ & $0$ & $0$ \\
$\langle T_{48}^{38} \rangle$ & $\frac{3}{16}$ & $\frac{3}{16}$ & $0$ & $0$ & $0$ & $-\frac{51}{16}$ & $-\frac{51}{16}$ & $0$ & $0$ \\
$\langle T_{49}^{38} \rangle$ & $-\frac38$ & $-\frac38$ & $0$ & $0$ & $0$ & $-\frac98$ & $-\frac98$ & $0$ & $0$ \\
$\langle T_{50}^{38} \rangle$ & $-\frac38$ & $-\frac38$ & $0$ & $0$ & $0$ & $-\frac98$ & $-\frac98$ & $0$ & $0$ \\
$\langle T_{52}^{38} \rangle$ & $\frac38$ & $\frac38$ & $0$ & $0$ & $0$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $0$ & $0$ \\
$\langle T_{53}^{38} \rangle$ & $\frac38$ & $\frac38$ & $0$ & $0$ & $0$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $0$ & $0$ \\
$\langle T_{54}^{38} \rangle$ & $\frac38$ & $\frac38$ & $0$ & $0$ & $0$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $0$ & $0$ \\
$\langle T_{58}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{65}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{66}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{94}^{38} \rangle$ & $\frac{3}{16}$ & $\frac{3}{16}$ & $\frac32$ & $\frac32$ & $\frac32$ & $-\frac{81}{16}$ & $-\frac{81}{16}$ & $-\frac32$ & $0$ \\
$\langle T_{95}^{38} \rangle$ & $\frac{3}{16}$ & $\frac{3}{16}$ & $\frac32$ & $\frac32$ & $\frac32$ & $-\frac{81}{16}$ & $-\frac{81}{16}$ & $-\frac32$ & $0$ \\
$\langle T_{96}^{38} \rangle$ & $\frac{3}{16}$ & $\frac{3}{16}$ & $\frac92$ & $\frac92$ & $\frac92$ & $-\frac{153}{16}$ & $-\frac{153}{16}$ & $-\frac32$ & $0$ \\
$\langle T_{97}^{38} \rangle$ & $\frac{3}{16}$ & $\frac{3}{16}$ & $\frac92$ & $\frac92$ & $\frac92$ & $-\frac{153}{16}$ & $-\frac{153}{16}$ & $-\frac32$ & $0$ \\
$\langle T_{98}^{38} \rangle$ & $\frac98$ & $\frac98$ & $0$ & $0$ & $0$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $0$ & $0$ \\
$\langle T_{99}^{38} \rangle$ & $\frac98$ & $\frac98$ & $0$ & $0$ & $0$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $0$ & $0$ \\
$\langle T_{100}^{38} \rangle$ & $-\frac38$ & $-\frac38$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $-\frac32$ & $0$ \\
$\langle T_{101}^{38} \rangle$ & $-\frac38$ & $-\frac38$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $-\frac32$ & $0$ \\
$\langle T_{120}^{38} \rangle$ & $\frac{9}{16}$ & $\frac{9}{16}$ & $0$ & $0$ & $0$ & $-\frac{81}{16}$ & $-\frac{81}{16}$ & $0$ & $0$ \\
$\langle T_{132}^{38} \rangle$ & $\frac{9}{16}$ & $\frac{9}{16}$ & $0$ & $0$ & $0$ & $-\frac{81}{16}$ & $-\frac{81}{16}$ & $0$ & $0$ \\
$\langle T_{133}^{38} \rangle$ & $\frac{9}{16}$ & $\frac{9}{16}$ & $0$ & $0$ & $0$ & $-\frac{81}{16}$ & $-\frac{81}{16}$ & $0$ & $0$ \\
$\langle T_{166}^{38} \rangle$ & $\frac{9}{16}$ & $\frac{9}{16}$ & $\frac92$ & $\frac92$ & $\frac92$ & $-\frac{243}{16}$ & $-\frac{243}{16}$ & $-\frac92$ & $0$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}
\caption{\label{t:mm2733T}Nontrivial matrix elements of the operators involved in the magnetic moments of decuplet baryons: flavor $\mathbf{27}$ representation.}
\begin{ruledtabular}
\begin{tabular}{lcccccccccc}
& $\displaystyle \Delta^{++}$ & $\displaystyle \Delta^+$ & $\displaystyle \Delta^0$ & $\displaystyle \Delta^-$ & $\displaystyle {\Sigma^*}^+$ & $\displaystyle {\Sigma^*}^0$ & $\displaystyle {\Sigma^*}^-$ & $\displaystyle {\Xi^*}^0$ & $\displaystyle {\Xi^*}^-$ & $\displaystyle \Omega^-$ \\[2mm]
\hline
$\langle T_{2}^{33} \rangle$ & $\frac14$ & $\frac{1}{12}$ & $-\frac{1}{12}$ & $-\frac14$ & $\frac16$ & $0$ & $-\frac16$ & $\frac{1}{12}$ & $-\frac{1}{12}$ & $0$ \\
$\langle T_{3}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{4}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{6}^{33} \rangle$ & $\frac34$ & $\frac14$ & $-\frac14$ & $-\frac34$ & $\frac12$ & $0$ & $-\frac12$ & $\frac14$ & $-\frac14$ & $0$ \\
$\langle T_{7}^{33} \rangle$ & $-\frac34$ & $-\frac14$ & $\frac14$ & $\frac34$ & $-\frac12$ & $0$ & $\frac12$ & $-\frac14$ & $\frac14$ & $0$ \\
$\langle T_{8}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{9}^{33} \rangle$ & $\frac34$ & $\frac14$ & $-\frac14$ & $-\frac34$ & $0$ & $0$ & $0$ & $-\frac14$ & $\frac14$ & $0$ \\
$\langle T_{10}^{33} \rangle$ & $-\frac34$ & $-\frac14$ & $\frac14$ & $\frac34$ & $0$ & $0$ & $0$ & $\frac14$ & $-\frac14$ & $0$ \\
$\langle T_{15}^{33} \rangle$ & $\frac{15}{4}$ & $\frac54$ & $-\frac54$ & $-\frac{15}{4}$ & $\frac52$ & $0$ & $-\frac52$ & $\frac54$ & $-\frac54$ & $0$ \\
$\langle T_{16}^{33} \rangle$ & $-\frac{15}{4}$ & $-\frac54$ & $\frac54$ & $\frac{15}{4}$ & $-\frac52$ & $0$ & $\frac52$ & $-\frac54$ & $\frac54$ & $0$ \\
$\langle T_{19}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{25}^{33} \rangle$ & $\frac94$ & $\frac34$ & $-\frac34$ & $-\frac94$ & $0$ & $0$ & $0$ & $\frac34$ & $-\frac34$ & $0$ \\
$\langle T_{26}^{33} \rangle$ & $\frac94$ & $\frac34$ & $-\frac34$ & $-\frac94$ & $0$ & $0$ & $0$ & $\frac34$ & $-\frac34$ & $0$ \\
$\langle T_{27}^{33} \rangle$ & $\frac{15}{16}$ & $\frac{5}{16}$ & $-\frac{5}{16}$ & $-\frac{15}{16}$ & $\frac12$ & $0$ & $-\frac12$ & $\frac{9}{16}$ & $-\frac{9}{16}$ & $0$ \\
$\langle T_{28}^{33} \rangle$ & $\frac{15}{16}$ & $\frac{5}{16}$ & $-\frac{5}{16}$ & $-\frac{15}{16}$ & $0$ & $0$ & $0$ & $\frac{1}{16}$ & $-\frac{1}{16}$ & $0$ \\
$\langle T_{29}^{33} \rangle$ & $\frac{15}{8}$ & $\frac58$ & $-\frac58$ & $-\frac{15}{8}$ & $\frac12$ & $0$ & $-\frac12$ & $-\frac18$ & $\frac18$ & $0$ \\
$\langle T_{30}^{33} \rangle$ & $-\frac{15}{8}$ & $-\frac58$ & $\frac58$ & $\frac{15}{8}$ & $-\frac12$ & $0$ & $\frac12$ & $\frac18$ & $-\frac18$ & $0$ \\
$\langle T_{31}^{33} \rangle$ & $\frac{15}{8}$ & $\frac58$ & $-\frac58$ & $-\frac{15}{8}$ & $0$ & $0$ & $0$ & $-\frac58$ & $\frac58$ & $0$ \\
$\langle T_{32}^{33} \rangle$ & $\frac{15}{8}$ & $\frac58$ & $-\frac58$ & $-\frac{15}{8}$ & $0$ & $0$ & $0$ & $-\frac58$ & $\frac58$ & $0$ \\
$\langle T_{33}^{33} \rangle$ & $-\frac{15}{8}$ & $-\frac58$ & $\frac58$ & $\frac{15}{8}$ & $0$ & $0$ & $0$ & $\frac58$ & $-\frac58$ & $0$ \\
$\langle T_{34}^{33} \rangle$ & $-\frac{15}{8}$ & $-\frac58$ & $\frac58$ & $\frac{15}{8}$ & $0$ & $0$ & $0$ & $\frac58$ & $-\frac58$ & $0$ \\
$\langle T_{46}^{33} \rangle$ & $\frac{27}{4}$ & $\frac94$ & $-\frac94$ & $-\frac{27}{4}$ & $0$ & $0$ & $0$ & $\frac94$ & $-\frac94$ & $0$ \\
$\langle T_{47}^{33} \rangle$ & $\frac{45}{16}$ & $\frac{15}{16}$ & $-\frac{15}{16}$ & $-\frac{45}{16}$ & $\frac32$ & $0$ & $-\frac32$ & $\frac{27}{16}$ & $-\frac{27}{16}$ & $0$ \\
$\langle T_{48}^{33} \rangle$ & $\frac{45}{16}$ & $\frac{15}{16}$ & $-\frac{15}{16}$ & $-\frac{45}{16}$ & $0$ & $0$ & $0$ & $\frac{3}{16}$ & $-\frac{3}{16}$ & $0$ \\
$\langle T_{49}^{33} \rangle$ & $\frac{45}{8}$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{45}{8}$ & $0$ & $0$ & $0$ & $-\frac{15}{8}$ & $\frac{15}{8}$ & $0$ \\
$\langle T_{50}^{33} \rangle$ & $-\frac{45}{8}$ & $-\frac{15}{8}$ & $\frac{15}{8}$ & $\frac{45}{8}$ & $0$ & $0$ & $0$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $0$ \\
$\langle T_{52}^{33} \rangle$ & $\frac{45}{8}$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{45}{8}$ & $0$ & $0$ & $0$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $0$ \\
$\langle T_{53}^{33} \rangle$ & $\frac{45}{8}$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{45}{8}$ & $0$ & $0$ & $0$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $0$ \\
$\langle T_{54}^{33} \rangle$ & $\frac{45}{8}$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{45}{8}$ & $0$ & $0$ & $0$ & $\frac{15}{8}$ & $-\frac{15}{8}$ & $0$ \\
$\langle T_{94}^{33} \rangle$ & $\frac{225}{16}$ & $\frac{75}{16}$ & $-\frac{75}{16}$ & $-\frac{225}{16}$ & $0$ & $0$ & $0$ & $\frac{75}{16}$ & $-\frac{75}{16}$ & $0$ \\
$\langle T_{95}^{33} \rangle$ & $\frac{225}{16}$ & $\frac{75}{16}$ & $-\frac{75}{16}$ & $-\frac{225}{16}$ & $0$ & $0$ & $0$ & $\frac{75}{16}$ & $-\frac{75}{16}$ & $0$ \\
$\langle T_{96}^{33} \rangle$ & $\frac{225}{16}$ & $\frac{75}{16}$ & $-\frac{75}{16}$ & $-\frac{225}{16}$ & $\frac{15}{2}$ & $0$ & $-\frac{15}{2}$ & $\frac{135}{16}$ & $-\frac{135}{16}$ & $0$ \\
$\langle T_{97}^{33} \rangle$ & $\frac{225}{16}$ & $\frac{75}{16}$ & $-\frac{75}{16}$ & $-\frac{225}{16}$ & $0$ & $0$ & $0$ & $\frac{15}{16}$ & $-\frac{15}{16}$ & $0$ \\
$\langle T_{98}^{33} \rangle$ & $\frac{135}{8}$ & $\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{135}{8}$ & $0$ & $0$ & $0$ & $\frac{45}{8}$ & $-\frac{45}{8}$ & $0$ \\
$\langle T_{99}^{33} \rangle$ & $\frac{135}{8}$ & $\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{135}{8}$ & $0$ & $0$ & $0$ & $\frac{45}{8}$ & $-\frac{45}{8}$ & $0$ \\
$\langle T_{100}^{33} \rangle$ & $\frac{225}{8}$ & $\frac{75}{8}$ & $-\frac{75}{8}$ & $-\frac{225}{8}$ & $0$ & $0$ & $0$ & $-\frac{75}{8}$ & $\frac{75}{8}$ & $0$ \\
$\langle T_{101}^{33} \rangle$ & $-\frac{225}{8}$ & $-\frac{75}{8}$ & $\frac{75}{8}$ & $\frac{225}{8}$ & $0$ & $0$ & $0$ & $\frac{75}{8}$ & $-\frac{75}{8}$ & $0$ \\
$\langle T_{120}^{33} \rangle$ & $\frac{675}{16}$ & $\frac{225}{16}$ & $-\frac{225}{16}$ & $-\frac{675}{16}$ & $0$ & $0$ & $0$ & $\frac{225}{16}$ & $-\frac{225}{16}$ & $0$ \\
$\langle T_{132}^{33} \rangle$ & $\frac{675}{16}$ & $\frac{225}{16}$ & $-\frac{225}{16}$ & $-\frac{675}{16}$ & $0$ & $0$ & $0$ & $\frac{225}{16}$ & $-\frac{225}{16}$ & $0$ \\
$\langle T_{133}^{33} \rangle$ & $\frac{675}{16}$ & $\frac{225}{16}$ & $-\frac{225}{16}$ & $-\frac{675}{16}$ & $0$ & $0$ & $0$ & $\frac{225}{16}$ & $-\frac{225}{16}$ & $0$ \\
$\langle T_{166}^{33} \rangle$ & $\frac{3375}{16}$ & $\frac{1125}{16}$ & $-\frac{1125}{16}$ & $-\frac{3375}{16}$ & $0$ & $0$ & $0$ & $\frac{1125}{16}$ & $-\frac{1125}{16}$ & $0$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}
\caption{\label{t:mm2738T}Nontrivial matrix elements of the operators involved in the magnetic moments of decuplet baryons: flavor $\mathbf{27}$ representation. The entries correspond to $\sqrt{3} \langle T_i^{38} \rangle$.}
\begin{ruledtabular}
\begin{tabular}{lcccccccccc}
& $\displaystyle \Delta^{++}$ & $\displaystyle \Delta^+$ & $\displaystyle \Delta^0$ & $\displaystyle \Delta^-$ & $\displaystyle {\Sigma^*}^+$ & $\displaystyle {\Sigma^*}^0$ & $\displaystyle {\Sigma^*}^-$ & $\displaystyle {\Xi^*}^0$ & $\displaystyle {\Xi^*}^-$ & $\displaystyle \Omega^-$ \\[2mm]
\hline
$\langle T_{2}^{38} \rangle$ & $\frac14$ & $\frac14$ & $\frac14$ & $\frac14$ & $0$ & $0$ & $0$ & $-\frac14$ & $-\frac14$ & $-\frac12$ \\
$\langle T_{3}^{38} \rangle$ & $\frac34$ & $\frac34$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $-\frac32$ \\
$\langle T_{4}^{38} \rangle$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac32$ \\
$\langle T_{6}^{38} \rangle$ & $\frac34$ & $\frac34$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $-\frac32$ \\
$\langle T_{7}^{38} \rangle$ & $\frac34$ & $\frac34$ & $\frac34$ & $\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $-\frac32$ \\
$\langle T_{8}^{38} \rangle$ & $\frac94$ & $\frac94$ & $\frac94$ & $\frac94$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $-\frac92$ \\
$\langle T_{9}^{38} \rangle$ & $-\frac34$ & $-\frac34$ & $-\frac34$ & $-\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $-3$ \\
$\langle T_{10}^{38} \rangle$ & $-\frac34$ & $-\frac34$ & $-\frac34$ & $-\frac34$ & $0$ & $0$ & $0$ & $-\frac34$ & $-\frac34$ & $-3$ \\
$\langle T_{15}^{38} \rangle$ & $\frac{15}{4}$ & $\frac{15}{4}$ & $\frac{15}{4}$ & $\frac{15}{4}$ & $0$ & $0$ & $0$ & $-\frac{15}{4}$ & $-\frac{15}{4}$ & $-\frac{15}{2}$ \\
$\langle T_{16}^{38} \rangle$ & $\frac{15}{4}$ & $\frac{15}{4}$ & $\frac{15}{4}$ & $\frac{15}{4}$ & $0$ & $0$ & $0$ & $-\frac{15}{4}$ & $-\frac{15}{4}$ & $-\frac{15}{2}$ \\
$\langle T_{19}^{38} \rangle$ & $\frac{45}{4}$ & $\frac{45}{4}$ & $\frac{45}{4}$ & $\frac{45}{4}$ & $0$ & $0$ & $0$ & $-\frac{45}{4}$ & $-\frac{45}{4}$ & $-\frac{45}{2}$ \\
$\langle T_{25}^{38} \rangle$ & $\frac94$ & $\frac94$ & $\frac94$ & $\frac94$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $-18$ \\
$\langle T_{26}^{38} \rangle$ & $\frac94$ & $\frac94$ & $\frac94$ & $\frac94$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $-18$ \\
$\langle T_{27}^{38} \rangle$ & $\frac{15}{16}$ & $\frac{15}{16}$ & $\frac{15}{16}$ & $\frac{15}{16}$ & $0$ & $0$ & $0$ & $-\frac{27}{16}$ & $-\frac{27}{16}$ & $-\frac{15}{2}$ \\
$\langle T_{28}^{38} \rangle$ & $\frac{15}{16}$ & $\frac{15}{16}$ & $\frac{15}{16}$ & $\frac{15}{16}$ & $0$ & $0$ & $0$ & $-\frac{27}{16}$ & $-\frac{27}{16}$ & $-\frac{15}{2}$ \\
$\langle T_{29}^{38} \rangle$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $-\frac{15}{2}$ \\
$\langle T_{30}^{38} \rangle$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac32$ & $-\frac32$ & $-\frac32$ & $-\frac{27}{8}$ & $-\frac{27}{8}$ & $-\frac{15}{2}$ \\
$\langle T_{31}^{38} \rangle$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $0$ & $0$ & $0$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{2}$ \\
$\langle T_{32}^{38} \rangle$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $0$ & $0$ & $0$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{2}$ \\
$\langle T_{33}^{38} \rangle$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $0$ & $0$ & $0$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{2}$ \\
$\langle T_{34}^{38} \rangle$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $0$ & $0$ & $0$ & $-\frac{15}{8}$ & $-\frac{15}{8}$ & $-\frac{15}{2}$ \\
$\langle T_{46}^{38} \rangle$ & $\frac{27}{4}$ & $\frac{27}{4}$ & $\frac{27}{4}$ & $\frac{27}{4}$ & $0$ & $0$ & $0$ & $-\frac{27}{4}$ & $-\frac{27}{4}$ & $-54$ \\
$\langle T_{47}^{38} \rangle$ & $\frac{45}{16}$ & $\frac{45}{16}$ & $\frac{45}{16}$ & $\frac{45}{16}$ & $0$ & $0$ & $0$ & $-\frac{81}{16}$ & $-\frac{81}{16}$ & $-\frac{45}{2}$ \\
$\langle T_{48}^{38} \rangle$ & $\frac{45}{16}$ & $\frac{45}{16}$ & $\frac{45}{16}$ & $\frac{45}{16}$ & $0$ & $0$ & $0$ & $-\frac{81}{16}$ & $-\frac{81}{16}$ & $-\frac{45}{2}$ \\
$\langle T_{49}^{38} \rangle$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $0$ & $0$ & $0$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{45}{2}$ \\
$\langle T_{50}^{38} \rangle$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $0$ & $0$ & $0$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-\frac{45}{2}$ \\
$\langle T_{52}^{38} \rangle$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $0$ & $0$ & $0$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-45$ \\
$\langle T_{53}^{38} \rangle$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $0$ & $0$ & $0$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-45$ \\
$\langle T_{54}^{38} \rangle$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $\frac{45}{8}$ & $0$ & $0$ & $0$ & $-\frac{45}{8}$ & $-\frac{45}{8}$ & $-45$ \\
$\langle T_{94}^{38} \rangle$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $0$ & $0$ & $0$ & $-\frac{225}{16}$ & $-\frac{225}{16}$ & $-\frac{225}{2}$ \\
$\langle T_{95}^{38} \rangle$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $0$ & $0$ & $0$ & $-\frac{225}{16}$ & $-\frac{225}{16}$ & $-\frac{225}{2}$ \\
$\langle T_{96}^{38} \rangle$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $0$ & $0$ & $0$ & $-\frac{405}{16}$ & $-\frac{405}{16}$ & $-\frac{225}{2}$ \\
$\langle T_{97}^{38} \rangle$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $\frac{225}{16}$ & $0$ & $0$ & $0$ & $-\frac{405}{16}$ & $-\frac{405}{16}$ & $-\frac{225}{2}$ \\
$\langle T_{98}^{38} \rangle$ & $\frac{135}{8}$ & $\frac{135}{8}$ & $\frac{135}{8}$ & $\frac{135}{8}$ & $0$ & $0$ & $0$ & $-\frac{135}{8}$ & $-\frac{135}{8}$ & $-135$ \\
$\langle T_{99}^{38} \rangle$ & $\frac{135}{8}$ & $\frac{135}{8}$ & $\frac{135}{8}$ & $\frac{135}{8}$ & $0$ & $0$ & $0$ & $-\frac{135}{8}$ & $-\frac{135}{8}$ & $-135$ \\
$\langle T_{100}^{38} \rangle$ & $-\frac{225}{8}$ & $-\frac{225}{8}$ & $-\frac{225}{8}$ & $-\frac{225}{8}$ & $0$ & $0$ & $0$ & $-\frac{225}{8}$ & $-\frac{225}{8}$ & $-\frac{225}{2}$ \\
$\langle T_{101}^{38} \rangle$ & $-\frac{225}{8}$ & $-\frac{225}{8}$ & $-\frac{225}{8}$ & $-\frac{225}{8}$ & $0$ & $0$ & $0$ & $-\frac{225}{8}$ & $-\frac{225}{8}$ & $-\frac{225}{2}$ \\
$\langle T_{120}^{38} \rangle$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $0$ & $0$ & $0$ & $-\frac{675}{16}$ & $-\frac{675}{16}$ & $-\frac{675}{2}$ \\
$\langle T_{132}^{38} \rangle$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $0$ & $0$ & $0$ & $-\frac{675}{16}$ & $-\frac{675}{16}$ & $-\frac{675}{2}$ \\
$\langle T_{133}^{38} \rangle$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $\frac{675}{16}$ & $0$ & $0$ & $0$ & $-\frac{675}{16}$ & $-\frac{675}{16}$ & $-\frac{675}{2}$ \\
$\langle T_{166}^{38} \rangle$ & $\frac{3375}{16}$ & $\frac{3375}{16}$ & $\frac{3375}{16}$ & $\frac{3375}{16}$ & $0$ & $0$ & $0$ & $-\frac{3375}{16}$ & $-\frac{3375}{16}$ & $-\frac{3375}{2}$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}
\caption{\label{t:mm2733TO}Nontrivial matrix elements of the operators involved in the decuplet-octet transition magnetic moments: flavor $\mathbf{27}$ representation. The entries correspond to $\sqrt{2} \langle T_i^{33} \rangle$.}
\begin{ruledtabular}
\begin{tabular}{lcccccccc}
& $\displaystyle \Delta^+p$ & $\displaystyle \Delta^0n$ & $\displaystyle {\Sigma^*}^0\Lambda$ & $\displaystyle {\Sigma^*}^0\Sigma^0$ & $\displaystyle {\Sigma^*}^+\Sigma^+$ & $\displaystyle {\Sigma^*}^-\Sigma^-$ & $\displaystyle {\Xi^*}^0\Xi^0$ & $\displaystyle {\Xi^*}^-\Xi^-$ \\[2mm]
\hline
$\langle T_{2}^{33} \rangle$ & $\frac29$ & $\frac29$ & $\frac{1}{3 \sqrt{3}}$ & $0$ & $\frac19$ & $-\frac19$ & $\frac19$ & $-\frac19$ \\
$\langle T_{3}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{9}^{33} \rangle$ & $\frac23$ & $\frac23$ & $0$ & $0$ & $0$ & $0$ & $-\frac13$ & $\frac13$ \\
$\langle T_{10}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $-\frac23$ & $\frac23$ & $-\frac13$ & $\frac13$ \\
$\langle T_{21}^{33} \rangle$ & $1$ & $1$ & $\frac{\sqrt{3}}{2}$ & $0$ & $\frac12$ & $-\frac12$ & $\frac12$ & $-\frac12$ \\
$\langle T_{22}^{33} \rangle$ & $-1$ & $-1$ & $-\frac{\sqrt{3}}{2}$ & $0$ & $-\frac12$ & $\frac12$ & $-\frac12$ & $\frac12$ \\
$\langle T_{23}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{25}^{33} \rangle$ & $2$ & $2$ & $0$ & $0$ & $0$ & $0$ & $1$ & $-1$ \\
$\langle T_{26}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-1$ & $1$ \\
$\langle T_{27}^{33} \rangle$ & $\frac12$ & $\frac12$ & $\frac{1}{\sqrt{3}}$ & $0$ & $\frac23$ & $-\frac23$ & $\frac{13}{12}$ & $-\frac{13}{12}$ \\
$\langle T_{28}^{33} \rangle$ & $0$ & $0$ & $\frac{1}{2 \sqrt{3}}$ & $0$ & $\frac56$ & $-\frac56$ & $\frac{5}{12}$ & $-\frac{5}{12}$ \\
$\langle T_{31}^{33} \rangle$ & $1$ & $1$ & $-\frac{1}{2 \sqrt{3}}$ & $0$ & $\frac16$ & $-\frac16$ & $-\frac23$ & $\frac23$ \\
$\langle T_{32}^{33} \rangle$ & $0$ & $0$ & $-\frac{1}{2 \sqrt{3}}$ & $0$ & $\frac76$ & $-\frac76$ & $\frac13$ & $-\frac13$ \\
$\langle T_{33}^{33} \rangle$ & $-1$ & $-1$ & $\frac{1}{2 \sqrt{3}}$ & $0$ & $-\frac16$ & $\frac16$ & $\frac23$ & $-\frac23$ \\
$\langle T_{34}^{33} \rangle$ & $0$ & $0$ & $\frac{1}{2 \sqrt{3}}$ & $0$ & $-\frac76$ & $\frac76$ & $-\frac13$ & $\frac13$ \\
$\langle T_{45}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{52}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-1$ & $1$ \\
$\langle T_{53}^{33} \rangle$ & $3$ & $3$ & $0$ & $0$ & $0$ & $0$ & $2$ & $-2$ \\
$\langle T_{54}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $1$ & $-1$ & $-2$ & $2$ \\
$\langle T_{55}^{33} \rangle$ & $0$ & $0$ & $-\frac{\sqrt{3}}{2}$ & $0$ & $-\frac52$ & $\frac52$ & $-\frac54$ & $\frac54$ \\
$\langle T_{56}^{33} \rangle$ & $-\frac32$ & $-\frac32$ & $-\sqrt{3}$ & $0$ & $-2$ & $2$ & $-\frac{13}{4}$ & $\frac{13}{4}$ \\
$\langle T_{57}^{33} \rangle$ & $0$ & $0$ & $-\frac{\sqrt{3}}{2}$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{59}^{33} \rangle$ & $0$ & $0$ & $-\frac{3 \sqrt{3}}{2}$ & $0$ & $\frac92$ & $-\frac92$ & $3$ & $-3$ \\
$\langle T_{65}^{33} \rangle$ & $-3$ & $-3$ & $-\frac{3 \sqrt{3}}{4}$ & $0$ & $\frac34$ & $-\frac34$ & $\frac34$ & $-\frac34$ \\
$\langle T_{66}^{33} \rangle$ & $0$ & $0$ & $-\frac{3 \sqrt{3}}{4}$ & $0$ & $-\frac94$ & $\frac94$ & $-\frac94$ & $\frac94$ \\
$\langle T_{67}^{33} \rangle$ & $-6$ & $-6$ & $\frac{\sqrt{3}}{2}$ & $0$ & $\frac12$ & $-\frac12$ & $-2$ & $2$ \\
$\langle T_{68}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $1$ & $-1$ & $\frac72$ & $-\frac72$ \\
$\langle T_{69}^{33} \rangle$ & $0$ & $0$ & $-\frac{\sqrt{3}}{4}$ & $0$ & $\frac74$ & $-\frac74$ & $\frac12$ & $-\frac12$ \\
$\langle T_{70}^{33} \rangle$ & $0$ & $0$ & $\frac{3 \sqrt{3}}{8}$ & $0$ & $\frac{39}{8}$ & $-\frac{39}{8}$ & $\frac{15}{4}$ & $-\frac{15}{4}$ \\
$\langle T_{94}^{33} \rangle$ & $\frac{13}{2}$ & $\frac{13}{2}$ & $\frac{\sqrt{3}}{2}$ & $0$ & $\frac12$ & $-\frac12$ & $\frac{17}{4}$ & $-\frac{17}{4}$ \\
$\langle T_{95}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $1$ & $-1$ & $-\frac{11}{4}$ & $\frac{11}{4}$ \\
$\langle T_{103}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $-9$ & $9$ & $-\frac92$ & $\frac92$ \\
$\langle T_{104}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $-3$ & $3$ & $-\frac{21}{2}$ & $\frac{21}{2}$ \\
$\langle T_{120}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $\frac92$ & $-\frac92$ & $-9$ & $9$ \\
$\langle T_{121}^{33} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\frac{27}{2}$ & $-\frac{27}{2}$ \\
$\langle T_{122}^{33} \rangle$ & $0$ & $0$ & $-\frac{9 \sqrt{3}}{4}$ & $0$ & $-\frac{45}{4}$ & $\frac{45}{4}$ & $-\frac{45}{8}$ & $\frac{45}{8}$ \\
$\langle T_{123}^{33} \rangle$ & $-\frac{27}{4}$ & $-\frac{27}{4}$ & $-\frac{9 \sqrt{3}}{2}$ & $0$ & $-9$ & $9$ & $-\frac{117}{8}$ & $\frac{117}{8}$ \\
$\langle T_{134}^{33} \rangle$ & $-27$ & $-27$ & $0$ & $0$ & $0$ & $0$ & $-\frac{27}{4}$ & $\frac{27}{4}$ \\
$\langle T_{167}^{33} \rangle$ & $0$ & $0$ & $\frac{9 \sqrt{3}}{8}$ & $0$ & $-\frac{81}{8}$ & $\frac{81}{8}$ & $-\frac{351}{8}$ & $\frac{351}{8}$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}
\caption{\label{t:mm2738TO}Nontrivial matrix elements of the operators involved in the magnetic moments of decuplet baryons: flavor $\mathbf{27}$ representation. The entries correspond to $\sqrt{6} \langle T_i^{38} \rangle$.}
\begin{ruledtabular}
\begin{tabular}{lcccccccc}
& $\displaystyle \Delta^+p$ & $\displaystyle \Delta^0n$ & $\displaystyle {\Sigma^*}^0\Lambda$ & $\displaystyle {\Sigma^*}^0\Sigma^0$ & $\displaystyle {\Sigma^*}^+\Sigma^+$ & $\displaystyle {\Sigma^*}^-\Sigma^-$ & $\displaystyle {\Xi^*}^0\Xi^0$ & $\displaystyle {\Xi^*}^-\Xi^-$ \\[2mm]
\hline
$\langle T_{2}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac13$ & $\frac13$ & $\frac13$ & $\frac13$ & $\frac13$ \\
$\langle T_{3}^{38} \rangle$ & $0$ & $0$ & $0$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$\langle T_{9}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $1$ & $1$ \\
$\langle T_{10}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $1$ & $1$ \\
$\langle T_{21}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ \\
$\langle T_{22}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac32$ \\
$\langle T_{23}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac92$ & $\frac92$ & $\frac92$ & $\frac92$ & $\frac92$ \\
$\langle T_{25}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $3$ & $3$ \\
$\langle T_{26}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $3$ & $3$ \\
$\langle T_{27}^{38} \rangle$ & $0$ & $0$ & $0$ & $2$ & $2$ & $2$ & $\frac{13}{4}$ & $\frac{13}{4}$ \\
$\langle T_{28}^{38} \rangle$ & $0$ & $0$ & $0$ & $2$ & $2$ & $2$ & $\frac{13}{4}$ & $\frac{13}{4}$ \\
$\langle T_{31}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $2$ & $2$ \\
$\langle T_{32}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $2$ & $2$ \\
$\langle T_{33}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $2$ & $2$ \\
$\langle T_{34}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac12$ & $-\frac12$ & $-\frac12$ & $2$ & $2$ \\
$\langle T_{45}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac{27}{2}$ & $-\frac{27}{2}$ & $-\frac{27}{2}$ & $-\frac{27}{2}$ & $-\frac{27}{2}$ \\
$\langle T_{52}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $6$ & $6$ \\
$\langle T_{53}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $6$ & $6$ \\
$\langle T_{54}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $6$ & $6$ \\
$\langle T_{55}^{38} \rangle$ & $0$ & $0$ & $0$ & $-6$ & $-6$ & $-6$ & $-\frac{39}{4}$ & $-\frac{39}{4}$ \\
$\langle T_{56}^{38} \rangle$ & $0$ & $0$ & $0$ & $-6$ & $-6$ & $-6$ & $-\frac{39}{4}$ & $-\frac{39}{4}$ \\
$\langle T_{57}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
$\langle T_{59}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac{45}{2}$ & $\frac{45}{2}$ & $\frac{45}{2}$ & $27$ & $27$ \\
$\langle T_{65}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $-\frac94$ & $-\frac94$ & $-\frac94$ \\
$\langle T_{66}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac94$ & $-\frac94$ & $-\frac94$ & $-\frac94$ & $-\frac94$ \\
$\langle T_{67}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac32$ & $\frac32$ & $\frac32$ & $-6$ & $-6$ \\
$\langle T_{68}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac32$ & $\frac32$ & $\frac32$ & $-6$ & $-6$ \\
$\langle T_{69}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac34$ & $\frac34$ & $\frac34$ & $-3$ & $-3$ \\
$\langle T_{70}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac{171}{8}$ & $\frac{171}{8}$ & $\frac{171}{8}$ & $\frac{99}{4}$ & $\frac{99}{4}$ \\
$\langle T_{94}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac{51}{4}$ & $\frac{51}{4}$ \\
$\langle T_{95}^{38} \rangle$ & $0$ & $0$ & $0$ & $\frac32$ & $\frac32$ & $\frac32$ & $\frac{51}{4}$ & $\frac{51}{4}$ \\
$\langle T_{103}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{27}{2}$ & $-\frac{27}{2}$ \\
$\langle T_{104}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac92$ & $-\frac92$ & $-\frac92$ & $18$ & $18$ \\
$\langle T_{120}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $27$ & $27$ \\
$\langle T_{121}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{81}{2}$ & $-\frac{81}{2}$ \\
$\langle T_{122}^{38} \rangle$ & $0$ & $0$ & $0$ & $-27$ & $-27$ & $-27$ & $-\frac{351}{8}$ & $-\frac{351}{8}$ \\
$\langle T_{123}^{38} \rangle$ & $0$ & $0$ & $0$ & $-27$ & $-27$ & $-27$ & $-\frac{351}{8}$ & $-\frac{351}{8}$ \\
$\langle T_{134}^{38} \rangle$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $-\frac{81}{4}$ & $-\frac{81}{4}$ \\
$\langle T_{167}^{38} \rangle$ & $0$ & $0$ & $0$ & $-\frac{189}{8}$ & $-\frac{189}{8}$ & $-\frac{189}{8}$ & $\frac{621}{8}$ & $\frac{621}{8}$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
Collecting all partial results, order $\mathcal{O}(m_q \ln m_q)$ corrections to baryon magnetic moments from diagrams \ref{fig:mmloop2}(a)-\ref{fig:mmloop2}(d), for the usual examples, read
\begin{eqnarray}
\delta \mu_{\Sigma^-}^{\mathrm{(loop\, 2ad)}} & = & \left[ \left( - \frac{1}{12} a_1^2 - \frac{13}{108} a_1b_2 - \frac{5}{81} a_1b_3 + \frac{1}{108} a_1c_3 - \frac{1}{36} b_2^2 - \frac{1}{36} b_2b_3 - \frac{1}{54} b_2c_3 - \frac{1}{81} b_3^2 + \frac{1}{162} b_3c_3 - \frac{1}{432} c_3^2 \right) m_1 \right. \nonumber \\
& & \mbox{} + \left( - \frac{13}{72} a_1^2 - \frac{7}{54} a_1b_2 - \frac{37}{324} a_1b_3 - \frac{1}{108} a_1c_3 - \frac{7}{216} b_2^2 - \frac{7}{162} b_2b_3 - \frac{37}{1944} b_3^2 - \frac{1}{432} c_3^2 \right) m_2 \nonumber \\
& & \mbox{} + \left( \frac{7}{324} a_1^2 - \frac{1}{36} a_1b_2 - \frac{2}{81} a_1b_3 + \frac{19}{324} a_1c_3 - \frac{1}{108} b_2^2 - \frac{1}{108} b_2b_3 - \frac{1}{243} b_3^2 + \frac{19}{1296} c_3^2 \right) m_3 \nonumber \\
& & \mbox{} + \left. \left( \frac{1}{54} a_1^2 - \frac{1}{54} a_1b_2 + \frac{1}{162} a_1b_3 + \frac{1}{108} a_1c_3 - \frac{1}{108} b_2c_3 + \frac{1}{324} b_3c_3 \right) m_4 \right] I_2(m_\pi,0,\mu) \nonumber \\
& & \mbox{} + \left[ \left( - \frac{11}{144} a_1^2 - \frac{31}{216} a_1b_2 - \frac{89}{648} a_1b_3 + \frac{7}{54} a_1c_3 - \frac{1}{48} b_2^2 - \frac{5}{216} b_2b_3 - \frac{1}{27} b_2c_3 - \frac{35}{1296} b_3^2 + \frac{1}{81} b_3c_3 + \frac{5}{216} c_3^2 \right) m_1 \right. \nonumber \\
& & \mbox{} + \left( - \frac{7}{48} a_1^2 - \frac{17}{216} a_1b_2 - \frac{103}{648} a_1b_3 + \frac{5}{54} a_1c_3 - \frac{7}{432} b_2^2 - \frac{17}{648} b_2b_3 - \frac{103}{3888} b_3^2 + \frac{5}{216} c_3^2 \right) m_2 \nonumber \\
& & \mbox{} + \left( \frac{575}{1296} a_1^2 - \frac{5}{216} a_1b_2 - \frac{35}{648} a_1b_3 + \frac{85}{162} a_1c_3 - \frac{1}{144} b_2^2 - \frac{5}{648} b_2b_3 - \frac{35}{3888} b_3^2 + \frac{85}{648} c_3^2 \right) m_3 \nonumber \\
& & \mbox{} + \left. \left( \frac{1}{27} a_1^2 - \frac{1}{27} a_1b_2 + \frac{1}{81} a_1b_3 + \frac{1}{54} a_1c_3 - \frac{1}{54} b_2c_3 + \frac{1}{162} b_3c_3 \right) m_4 \right] I_2(m_K,0,\mu) \nonumber \\
& & \mbox{} + \left[ \left( - \frac{1}{27} a_1b_3 + \frac{1}{18} a_1c_3 - \frac{1}{162} b_3^2 + \frac{1}{72} c_3^2 \right) m_1 + \left( - \frac{1}{27} a_1b_3 + \frac{1}{18} a_1c_3 - \frac{1}{162} b_3^2 + \frac{1}{72} c_3^2 \right) m_2 \right. \nonumber \\
& & \mbox{} + \left. \left( \frac{5}{27} a_1^2 - \frac{1}{81} a_1b_3 + \frac{11}{54} a_1c_3 - \frac{1}{486} b_3^2 + \frac{11}{216} c_3^2 \right) m_3 \right] I_2(m_\eta,0,\mu),
\end{eqnarray}
and
\begin{eqnarray}
\delta \mu_{{\Sigma^*}^-}^{\mathrm{(loop\, 2ad)}} & = & \left[ \left( - \frac18 a_1^2 - \frac{11}{36} a_1b_2 - \frac{55}{108} a_1b_3 + \frac{1}{36} a_1c_3 - \frac{19}{72} b_2^2 - \frac{95}{108} b_2b_3 + \frac19 b_2c_3 - \frac{475}{648} b_3^2 + \frac{5}{27} b_3c_3 - \frac{1}{48} c_3^2 \right) m_1 \right. \nonumber \\
& & \mbox{} + \left( - \frac{11}{24} a_1^2 - \frac{19}{36} a_1b_2 - \frac{95}{108} a_1b_3 - \frac{7}{36} a_1c_3 - \frac{19}{72} b_2^2 - \frac{95}{108} b_2b_3 - \frac{475}{648} b_3^2 - \frac{7}{144} c_3^2 \right) m_2 \nonumber \\
& & \mbox{} + \left( - \frac{161}{216} a_1^2 - \frac{95}{108} a_1b_2 - \frac{475}{324} a_1b_3 - \frac{11}{36} a_1c_3 - \frac{95}{216} b_2^2 - \frac{475}{324} b_2b_3 - \frac{2375}{1944} b_3^2 - \frac{11}{144} c_3^2 \right) m_3 \nonumber \\
& & \mbox{} + \left. \left( \frac19 a_1^2 + \frac19 a_1b_2 + \frac{5}{27} a_1b_3 + \frac{1}{18} a_1c_3 + \frac{1}{18} b_2c_3 + \frac{5}{54} b_3c_3 \right) m_4 \right] I_2(m_\pi,0,\mu) \nonumber \\
& & \mbox{} + \left[ \left( - \frac{13}{48} a_1^2 - \frac{35}{72} a_1b_2 - \frac{175}{216} a_1b_3 - \frac{1}{36} a_1c_3 - \frac{43}{144} b_2^2 - \frac{215}{216} b_2b_3 + \frac{1}{18} b_2c_3 - \frac{1075}{1296} b_3^2 + \frac{5}{54} b_3c_3 - \frac{1}{48} c_3^2 \right) m_1 \right. \nonumber \\
& & \mbox{} + \left( - \frac{7}{16} a_1^2 - \frac{43}{72} a_1b_2 - \frac{215}{216} a_1b_3 - \frac{5}{36} a_1c_3 - \frac{43}{144} b_2^2 - \frac{215}{216} b_2b_3 - \frac{1075}{1296} b_3^2 - \frac{5}{144} c_3^2 \right) m_2 \nonumber \\
& & \mbox{} + \left( - \frac{323}{432} a_1^2 - \frac{215}{216} a_1b_2 - \frac{1075}{648} a_1b_3 - \frac14 a_1c_3 - \frac{215}{432} b_2^2 - \frac{1075}{648} b_2b_3 - \frac{5375}{3888} b_3^2 - \frac{1}{16} c_3^2 \right) m_3 \nonumber \\
& & \mbox{} + \left. \left( \frac{1}{18} a_1^2 + \frac{1}{18} a_1b_2 + \frac{5}{54} a_1b_3 + \frac{1}{36} a_1c_3 + \frac{1}{36} b_2c_3 + \frac{5}{108} b_3c_3 \right) m_4 \right] I_2(m_K,0,\mu) \nonumber \\
& & \mbox{} + \left[ \left( - \frac{1}{12} a_1^2 - \frac{1}{12} a_1c_3 - \frac{1}{48} c_3^2 \right) m_1 + \left( - \frac{1}{12} a_1^2 - \frac{1}{12} a_1c_3 - \frac{1}{48} c_3^2 \right) m_2 \right. \nonumber \\
& & \mbox{} + \left. \left( - \frac{7}{36} a_1^2 - \frac{7}{36} a_1c_3 - \frac{7}{144} c_3^2 \right) m_3 \right] I_2(m_\eta,0,\mu).
\end{eqnarray}
All 27 allowed magnetic moments are listed in Appendix \ref{sec:Loop2ad}, Eqs.~(\ref{eq:mmnloop2ad}) to (\ref{eq:mmxsmxmloop2ad}).
The use of relations (\ref{eq:su3inv}) and (\ref{eq:rel1inv}) yields the magnetic moments expressed in terms of the $SU(3)$ invariants $\mu_D$, $\mu_F$, $\mu_C$, $\mu_T$, $D$, $F$, $\mathcal{C}$, and $\mathcal{H}$, namely,
\begin{eqnarray}
\delta \mu_{\Sigma^-}^{\mathrm{(loop\, 2ad)}} & = & \left[ \left( \frac29 D^2 + \frac23 D F + \frac83 F^2 + \frac19 \mathcal{C}^2 \right) \mu_ D + \left( - D^2 - 7 F^2 - \frac13 \mathcal{C}^2 \right) \mu_ F + \frac{5}{54} \mathcal{C}^2 \mu_C + \frac19 (D - F) \mathcal{C} \mu_T \right] I_2(m_\pi,0,\mu) \nonumber \\
& & \mbox{} + \left[ \left( \frac56 D^2 + D F + \frac56 F^2 + \frac59 \mathcal{C}^2 \right) \mu_ D + \left( - \frac72 D^2 - D F - \frac72 F^2 - \frac53 \mathcal{C}^2 \right) \mu_ F \right. \nonumber \\
& & \mbox{} + \left. \frac{20}{27} \mathcal{C}^2 \mu_C + \frac29 (D - F) \mathcal{C} \mu_T \right] I_2(m_K,0,\mu) \nonumber \\
& & \mbox{} + \left[ \left( \frac49 D^2 + \frac16 \mathcal{C}^2 \right) \mu_ D + \left( - \frac43 D^2 - \frac12 \mathcal{C}^2 \right) \mu_ F + \frac{5}{18} \mathcal{C}^2 \mu_C \right] I_2(m_\eta,0,\mu),
\end{eqnarray}
and
\begin{eqnarray}
\delta \mu_{{\Sigma^*}^-}^{\mathrm{(loop\, 2ad)}} & = & \left[ \frac{7}{36} \mathcal{C}^2 \mu_ D + \frac{1}{12} \mathcal{C}^2 \mu_ F + \left( - \frac{5}{12} \mathcal{C}^2 - \frac{19}{81} \mathcal{H}^2 \right) \mu_C - \frac{2}{27} \mathcal{C} \mathcal{H} \mu_T \right] I_2(m_\pi,0,\mu) \nonumber \\
& & \mbox{} + \left[ \frac{1}{18} \mathcal{C}^2 \mu_ D + \frac16 \mathcal{C}^2 \mu_ F + \left( - \frac13 \mathcal{C}^2 - \frac{43}{162} \mathcal{H}^2 \right) \mu_C - \frac{1}{27} \mathcal{C} \mathcal{H} \mu_T \right] I_2(m_K,0,\mu) \nonumber \\
& & \mbox{} + \left[ - \frac{1}{12} \mathcal{C}^2 \mu_ D + \frac14 \mathcal{C}^2 \mu_ F - \frac14 \mathcal{C}^2 \mu_C \right] I_2(m_\eta,0,\mu).
\end{eqnarray}
Equations (\ref{eq:mmnloop2adch}) to (\ref{eq:mmxsmxmloop2adch}) of Appendix \ref{sec:Loop2ad} are the counterparts of (\ref{eq:mmnloop2ad}) to (\ref{eq:mmxsmxmloop2ad}), respectively.
\subsection{\label{sec:mqlnmqe}Diagram \ref{fig:mmloop2}(e)}
Corrections to magnetic moments from the diagram \ref{fig:mmloop2}(e) are straightforwardly evaluated as \cite{rfm09,rfm14}
\begin{equation}
\delta M_{\textrm{loop 2e}}^k = - \frac12 \left[T^a,\left[T^b,M^k \right] \right] \Pi^{ab}, \label{eq:corrloop3}
\end{equation}
where $\Pi^{ab}$ is the symmetric tensor already displayed in Eq.~(\ref{eq:pisym}), except for the fact that the corresponding loop integral is now $I_3(m,\mu)$. Retaining only the nonanalytic pieces of that integral, it turns out that
\begin{equation}
I_3(m,\mu) = - I_2(m,0,\mu),
\end{equation}
where $I_2(m,0,\mu)$ is given in Eq.~(\ref{eq:fprime}).
Explicit results for the case study are thus
\begin{equation}
\delta \mu_{\Sigma^-}^{\mathrm{(loop\, 2e)}} = \left[ \frac13 m_1 + \frac16 m_2 + \frac19 m_3 \right] I_2(m_\pi,0,\mu) + \left[ - \frac{1}{12} m_1 + \frac{1}{12} m_2 - \frac{1}{36} m_3 \right] I_2(m_K,0,\mu),
\end{equation}
and
\begin{equation}
\delta \mu_{{\Sigma^*}^-}^{\mathrm{(loop\, 2e)}} = \left[ \frac12 m_1 + \frac12 m_2 + \frac56 m_3 \right] I_2(m_\pi,0,\mu) + \left[ \frac14 m_1 + \frac14 m_2 + \frac{5}{12} m_3 \right] I_2(m_K,0,\mu),
\end{equation}
or equivalently, in terms of the $SU(3)$ invariants
\begin{equation}
\delta \mu_{\Sigma^-}^{\mathrm{(loop\, 2e)}} = \mu_F I_2(m_\pi,0,\mu) - \frac12 (\mu_D - \mu_F ) I_2(m_K,0,\mu),
\end{equation}
and
\begin{equation}
\delta \mu_{{\Sigma^*}^-}^{\mathrm{(loop\, 2e)}} = \mu_C I_2(m_\pi,0,\mu) + \frac12 \mu_C I_2(m_K,0,\mu).
\end{equation}
All allowed expressions are listed in Appendix \ref{app:Loop2}, Eqs.~(\ref{eq:mmnloop2e}) to (\ref{eq:mmxsmxmloop2e}), and their corresponding expressions in terms of the $SU(3)$ invariants are listed in Eqs.~(\ref{eq:mmnloop2ech}) to (\ref{eq:mmxsmxmloop2ech}).
\subsubsection{\label{sec:comL2}Comparison with heavy chiral perturbation theory results}
In HBCHPT, the corrections to magnetic moments from the Feynman diagrams displayed in Fig.~\ref{fig:mmloop2} can be organized as \cite{jen92}
\begin{equation}
\delta \mu_i^{\mathrm{(loop\, 2)}} = \sum_{P=\pi,K,\eta} -\frac12 (\overline{\gamma}_i^{(P)}-2\overline{\lambda}_i^{(P)}\alpha_i) \left[ -\frac{1}{16\pi^2 f^2}m_P^2 \ln\frac{m_P^2}{\mu^2} \right], \label{eq:l2ch}
\end{equation}
where the coefficients $\overline{\gamma}_i^{(P)}$, $\overline{\lambda}_i^{(P)}$, and $\alpha_i$ are listed in that reference.
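As a rough numerical cross-check, the bracketed chiral-log factor in Eq.~(\ref{eq:l2ch}) can be evaluated directly. The sketch below uses $f=93$ MeV and $\mu = 1$ GeV as quoted in the numerical analysis; the physical meson masses are assumed values inserted for illustration, not numbers taken from the text.

```python
import math

def chiral_log(m_P, f=0.093, mu=1.0):
    """Nonanalytic factor -(1/(16 pi^2 f^2)) m_P^2 ln(m_P^2/mu^2)
    appearing in the HBCHPT bracket; all masses in GeV."""
    return -(m_P**2) * math.log(m_P**2 / mu**2) / (16.0 * math.pi**2 * f**2)

# Assumed physical meson masses in GeV (illustrative inputs):
m_pi, m_K, m_eta = 0.138, 0.496, 0.548
for name, m in [("pi", m_pi), ("K", m_K), ("eta", m_eta)]:
    print(f"{name:>3}: {chiral_log(m):.4f}")
```

The kaon and eta logarithms come out several times larger than the pion one, which illustrates why the $\mathcal{O}(m_q \ln m_q)$ corrections carry sizable $SU(3)$ breaking.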
The expressions extracted from Eq.~(\ref{eq:l2ch}) fully agree with the ones found here for octet baryons, listed in Appendix \ref{app:Loop2}, once a missing factor of $-5/2$ in the contribution from graph \ref{fig:mmloop2}(b) and the additional corrections noted in the erratum to \cite{jen92} are taken into account.
\section{\label{sec:sb}Explicit $SU(3)$ symmetry breaking}
As has already been discussed in Ref.~\cite{rfm14}, in the conventional chiral momentum counting scheme, tree diagrams involving higher-order vertices also contribute to the magnetic moments along with the one-loop contributions of orders $\mathcal{O}(m_q^{1/2})$ and $\mathcal{O}(m_q \ln m_q)$. These higher-order contributions are needed as counterterms for the divergent parts of the loop integrals. The leading $SU(3)$ breaking effects on the magnetic moments thus also receive contributions from the effective Lagrangian of order $p^4$, which yield contributions linear in the quark mass \cite{krause}. In the combined formalism, a convenient way of accounting for terms of order $\mathcal{O}(m_q)$ springs from the fact that flavor $SU(3)$ SB transforms as a flavor octet. Neglecting isospin breaking and including first-order $SU(3)$ SB, $M^{kc}$ thus has pieces transforming according to all $SU(3)$ representations contained in the tensor product $(1,\mathbf{8}\otimes \mathbf{8})=(1,\mathbf{1}) \oplus (1,\mathbf{8}_S) \oplus (1,\mathbf{8}_A) \oplus (1,\mathbf{10}+\overline{\mathbf{10}}) \oplus (1,\mathbf{27})$, namely,
\begin{equation}
\delta M_{\mathrm{SB}}^{kc} = \delta M_{\mathrm{SB},\mathbf{\mathbf{1}}}^{kc} + \delta M_{\mathrm{SB},\mathbf{\mathbf{8}}}^{kc} + \delta M_{\mathrm{SB},\mathbf{\mathbf{10}+\overline{\mathbf{10}}}}^{kc} + \delta M_{\mathrm{SB},\mathbf{\mathbf{27}}}^{kc}. \label{eq:akcsb}
\end{equation}
Following the detailed analysis presented in Ref.~\cite{rfm14}, explicit SB to the baryon magnetic operator can be cast into the form
\begin{eqnarray}
\delta M_{\mathrm{SB}}^{kc} & = & \left[ m_1^{1,\mathbf{1}} \delta^{c8}J^k + m_3^{1,\mathbf{1}} \frac{1}{N_c^2} \delta^{c8} \{J^2,J^k\} \right] \nonumber \\
& & \mbox{} + \left[ n_1^{1,\mathbf{8}} d^{ce8} G^{ke} + n_2^{1,\mathbf{8}} \frac{1}{N_c} d^{ce8} \mathcal{D}_2^{ke} + n_3^{1,\mathbf{8}} \frac{1}{N_c^2} d^{ce8} \mathcal{D}_3^{ke} + \bar{n}_3^{1,\mathbf{8}} \frac{1}{N_c^2} d^{ce8} \mathcal{O}_3^{ke} \right] \nonumber \\
& & \mbox{} + \left[ m_2^{1,\mathbf{10}+\overline{\mathbf{10}}} \frac{1}{N_c} \left( \{G^{kc},T^8\}-\{G^{k8},T^c\} \right)
+ m_3^{1,\mathbf{10}+\overline{\mathbf{10}}} \frac{1}{N_c^2} \left( \{G^{kc},\{J^r,G^{r8}\}\}-\{G^{k8},\{J^r,G^{rc}\}\} \right) \right] \nonumber \\
& & \mbox{} + \left[ m_2^{1,\mathbf{27}} \frac{1}{N_c} \left( \{G^{kc},T^8\}+\{G^{k8},T^c\} \right) + m_3^{1,\mathbf{27}} \frac{1}{N_c^2} \{J^k,\{T^c,T^8\}\} \right. \nonumber \\
& & \mbox{\hglue0.5truecm} \left. + \, \bar{m}_3^{1,\mathbf{27}} \frac{1}{N_c^2} \left( \{G^{kc},\{J^r,G^{r8}\}\}+\{G^{k8},\{J^r,G^{rc}\}\} \right) \right], \label{eq:sb}
\end{eqnarray}
where the superscripts attached to the eleven unknown coefficients $m_i^{1,\mathbf{rep}}$ and $n_j^{1,\mathbf{rep}}$ indicate the spin-flavor representation $\mathbf{rep}$ they fall in. Although the series has been truncated at the $3$-body level, higher-order terms can be obtained by anticommuting the operators retained with $J^2$.
Equation (\ref{eq:sb}) is the one to be used in the numerical analysis. By using the appropriate matrix elements listed in Tables \ref{t:mm8O}-\ref{t:mm8TO}, the explicit SB contributions to magnetic moments in the usual examples read
\begin{equation}
\sqrt{3} \delta \mu_{\Sigma^-}^{\mathrm{SB}} = \frac12 m_1^{1,\mathbf{1}} + \frac{1}{12} m_3^{1,\mathbf{1}} - \frac12 n_1^{1,\mathbf{8}} - \frac16 n_2^{1,\mathbf{8}} - \frac16 n_3^{1,\mathbf{8}} + \frac13 m_2^{1,\mathbf{10}+\overline{\mathbf{10}}} - \frac13 m_2^{1,\mathbf{27}} - \frac19 \bar{m}_3^{1,\mathbf{27}},
\end{equation}
and
\begin{equation}
\sqrt{3} \delta \mu_{{\Sigma^*}^-}^{\mathrm{SB}} = \frac32 m_1^{1,\mathbf{1}} + \frac54 m_3^{1,\mathbf{1}} - \frac12 n_1^{1,\mathbf{8}} - \frac12 n_2^{1,\mathbf{8}} - \frac56 n_3^{1,\mathbf{8}}.
\end{equation}
The complete list of expressions is given in Appendix \ref{app:SB}.
\section{\label{sec:num}Numerical analysis}
A number of different fits to the experimental data can now be performed. These fits, however, are not intended to be definitive; instead, they can be useful in testing the working assumptions. The theoretical formulas are not as accurate as the experimental measurements, so a theoretical error has to be included to obtain a meaningful $\chi^2$. Thus, the dominant error in all the fits is theoretical.
On the experimental front, the Review of Particle Physics \cite{part} lists values for only ten magnetic moments: seven out of the eight octet baryons ($\mu_{\Sigma^0}$ remains unknown), $\mu_{\Omega^-}$, and the transition moments $\mu_{\Sigma^0\Lambda}$ and $\mu_{\Delta^+p}$. The latter can be obtained from the $\Delta\to N \gamma$ helicity amplitudes $A_{\frac12}$ and $A_{\frac32}$. A consistent extraction of $\mu_{\Delta^{++}}$ can be used \cite{lopez}, together with two more pieces of information, namely, $\mu_{{\Sigma^*}^0\Lambda}$ and $\mu_{{\Sigma^*}^+\Sigma^+}$, which can be extracted from Refs.~\cite{clas1} and \cite{clas2}, respectively. Additional inputs are the physical masses of the $\pi$, $K$, and $\eta$ pseudoscalar mesons and the average decuplet-octet mass difference $\Delta=0.231$ GeV, which follows from the average baryon decuplet and octet masses, $M_T=1.382$ GeV and $M_B=1.151$ GeV, respectively. Similarly, the pion decay constant is set to $f=93$ MeV and the scale of dimensional regularization used is $\mu = 1$ GeV.
The standard $\chi^2$ function to be minimized is written as
\begin{equation}
\chi^2 = \sum_{i=1}^N \left[ \frac{\mu_i^\mathrm{exp} - \mu_i^\mathrm{th}}{\Delta \mu_i^\mathrm{exp}} \right]^2, \label{eq:stchi2}
\end{equation}
where $\mu_i^\mathrm{exp}$ and $\Delta \mu_i^\mathrm{exp}$ are the available measured magnetic moments and their corresponding uncertainties, respectively, and $\mu_i^\mathrm{th}$ are their theoretical counterparts, which are constituted by the sum of tree-level values $\mu_i^{(0)}$, one-loop corrections $\delta \mu^{(\mathrm{loop}\, n)}$, and explicit SB corrections $\delta \mu^\mathrm{SB}$, i.e.,
\begin{equation}
\mu_i^\mathrm{th} = \mu_i^{(0)} + \delta \mu^{\mathrm{(loop\, 1)}} + \delta \mu^{\mathrm{(loop\, 2ad)}} + \delta \mu^{\mathrm{(loop\, 2e)}} + \delta \mu^\mathrm{SB}.
\end{equation}
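Since the dominant uncertainty is theoretical, the minimization of Eq.~(\ref{eq:stchi2}) is carried out with a theoretical error added in quadrature to each experimental one. A minimal sketch of that $\chi^2$ construction follows; the numerical inputs are illustrative placeholders, not the actual magnetic-moment data.

```python
def chi2(mu_th, mu_exp, dmu_exp, sigma_th):
    """Standard chi^2 of Eq. (stchi2) with a common theoretical
    error sigma_th combined in quadrature with each experimental
    uncertainty dmu_exp."""
    total = 0.0
    for th, exp, dexp in zip(mu_th, mu_exp, dmu_exp):
        sigma2 = dexp**2 + sigma_th**2  # quadrature combination
        total += (exp - th)**2 / sigma2
    return total

# Illustrative placeholder values (units of mu_N):
print(chi2(mu_th=[1.3], mu_exp=[1.0], dmu_exp=[0.1], sigma_th=0.30))
```

With $\sigma_\mathrm{th} = 0.30\,\mu_N$, as in Fit 1, the theoretical error dominates the experimental uncertainties of most measured moments, which is why the resulting $\chi^2$ mainly reflects that choice.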
The free parameters in the theory are the operator coefficients $a_1$, $b_2$, $b_3$, and $c_3$ from the baryon axial current operator $A^{kc}$ (\ref{eq:akc}). Four additional parameters $m_k$ are introduced in the definition of the baryon magnetic moment operator $M^k$ (\ref{eq:mQ}). There are eleven additional parameters coming from explicit $SU(3)$ SB. In total, there are 19 free parameters to be determined and only $N=13$ pieces of experimental information.
The simplest possibility is an $SU(3)$ symmetric fit neglecting all $SU(3)$ breaking effects, which involves only the four parameters $m_i$. Keeping in mind that in most hadronic quantities $SU(3)$ breaking is around 20\%--30\% and that the theoretical errors are of order $\epsilon/N_c$, where $\epsilon$ is a measure of $SU(3)$ breaking, a fair estimate of the theoretical error to be added in quadrature to the experimental ones is $\pm 0.30 \, \mu_N$ [recall that baryon magnetic moments are order $\mathcal{O}(N_c)$ at leading order in $N_c$]. The results are listed in the column labeled Fit 1 in Table \ref{t:bestfitp}. In this case, $\chi^2 = 12.22$ for nine degrees of freedom, but this particular value only reflects the choice of theoretical error. Using smaller theoretical errors reduces the parameter uncertainties at the expense of increasing $\chi^2$ and, except for $m_3$, the central values of the remaining coefficients change only slightly. The closeness of $\chi^2/\mathrm{dof}$ to one might be interpreted as a sign that $SU(3)$ SB is indeed around 30\%.
To proceed further, in order to gain predictive power, a few assumptions on the unknown parameters should be made. First, the values of the operator coefficients $a_1$, $b_2$, $b_3$, and $c_3$ can be borrowed from the recent analysis of the baryon axial current presented in Ref.~\cite{rfm21}, namely,
\begin{equation}
a_1=1.20 \pm 0.07, \quad b_2=-1.60 \pm 0.18, \quad b_3=1.25 \pm 0.07, \quad c_3=0.46 \pm 0.09,
\end{equation}
which are extracted from Table II of Ref.~\cite{rfm21}, labeled as Fit B.
The relevant parameters $m_k$ should be determined in full, so a few restrictions can be imposed on the parameters from explicit SB. The simplest one is to keep terms up to relative order $1/N_c$, so the relevant parameters become $m_1^{1,\mathbf{1}}$, $n_1^{1,\mathbf{8}}$, $n_2^{1,\mathbf{8}}$, $m_2^{1,\mathbf{10}+\overline{\mathbf{10}}}$, and $m_2^{1,\mathbf{27}}$.
In order to get a consistent least-squares fit, a theoretical uncertainty of $\pm 1/N_c^2 = \pm 0.11$ will be added in quadrature to the experimental errors to account for the omitted terms mentioned above. Without further ado, the fit yields the best-fit parameters listed in Table \ref{t:bestfitp} under the label Fit 2. In this case, $\chi^2 = 14.55$ for four degrees of freedom and, although it exceeds expectations, the best-fit parameters are fairly of order 1 (except for $m_1$) and yield reasonable predictions, as can be verified in the predicted magnetic moments listed in Table \ref{t:numbers}. Explicit SB and one-loop corrections to tree-level values roughly represent 30\%-40\%, which is in accordance with first-order SB.
\begin{table*}
\caption{\label{t:bestfitp}Best-fit parameters from least-squares fits: Fit 1 is an $SU(3)$ fit; Fit 2 includes one-loop and partial SB corrections (see the text); Fit 3 constitutes the so-called prior fit. The resulting values of the corresponding $SU(3)$ couplings $\mu_D$, $\mu_F$, $\mu_C$, and $\mu_T$ are also shown.}
\begin{ruledtabular}
\begin{tabular}{lrrr}
Parameter & Fit 1 & Fit 2 & Fit 3 \\
\hline
$m_1$ & $ 5.07 \pm 0.42$ & $ 7.86 \pm 0.09$ & $ 7.86 \pm 0.09$ \\
$m_2$ & $ 0.73 \pm 1.28$ & $-0.01 \pm 0.18$ & $ 0.01 \pm 0.19$ \\
$m_3$ & $-0.41 \pm 0.82$ & $-1.01 \pm 0.13$ & $-1.01 \pm 0.13$ \\
$m_4$ & $ 4.05 \pm 1.27$ & $ 1.67 \pm 0.23$ & $ 1.67 \pm 0.24$ \\
$m_1^{1,\mathbf{1}}$ & & $ 0.16 \pm 0.20$ & $0.16 \pm 0.20$ \\
$m_3^{1,\mathbf{1}}$ & & & $0.021 \pm 0.100$ \\
$n_1^{1,\mathbf{8}}$ & & $-0.71 \pm 0.38$ & $-0.69 \pm 0.38$ \\
$n_2^{1,\mathbf{8}}$ & & $-2.61 \pm 0.89$ & $-2.65 \pm 0.90$ \\
$n_3^{1,\mathbf{8}}$ & & & $0.010 \pm 0.100$ \\
$\bar{n}_3^{1,\mathbf{8}}$ & & & $0.006 \pm 0.100$ \\
$m_2^{1,\mathbf{10}+\overline{\mathbf{10}}}$ & & $-2.35 \pm 0.23$ & $-2.35 \pm 0.23$ \\
$m_3^{1,\mathbf{10}+\overline{\mathbf{10}}}$ & & & $0.011 \pm 0.100$ \\
$m_2^{1,\mathbf{27}}$ & & $ 0.71 \pm 0.33$ & $0.68 \pm 0.35$ \\
$m_3^{1,\mathbf{27}}$ & & & $0.025 \pm 0.100$ \\
$\bar{m}_3^{1,\mathbf{27}}$ & & & $0.017 \pm 0.100$ \\
$\chi^2$ & $12.22$ & $14.56$ & $14.38$ \\
\hline
$\mu_D$ & $ 2.47 \pm 0.23$ & $ 3.76 \pm 0.05$ & $3.76 \pm 0.02$ \\
$\mu_F$ & $ 1.77 \pm 0.15$ & $ 2.51 \pm 0.03$ & $2.30 \pm 0.03$ \\
$\mu_C$ & $ 2.56 \pm 0.21$ & $ 3.08 \pm 0.08$ & $2.50 \pm 0.06$ \\
$\mu_T$ & $-14.18 \pm 0.95$ & $-17.38 \pm 0.33$ & $-17.39 \pm 0.24$ \\
$\mu_D/\mu_F$ & $1.40 \pm 0.13$ & $ 1.50 \pm 0.02$ & $1.62 \pm 0.02$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
In general, predictions are consistent with data and with other determinations. For instance, in the context of the $1/N_c$ expansion alone \cite{lebed}, there is an overall agreement. In the context of heavy baryon chiral perturbation theory \cite{m97} and relativistic baryon chiral perturbation theory \cite{k00}, there is a reasonable agreement with calculations for octet baryons to third order. These references, however, present more refined calculations to fourth order. At the level of precision presented in this work, no comparison is possible yet. Theoretical expressions need to be improved, for instance, by lifting the $\Delta=0$ assumption in graphs \ref{fig:mmloop2}(a)-\ref{fig:mmloop2}(d). This could improve the determinations of $\mu_C$ and $\mu_T$ to a reasonable extent. Actually, the analysis of Ref.~\cite{rfm14}, where partial terms containing a nonzero $\Delta$ were retained in loop diagrams \ref{fig:mmloop2}(a)-\ref{fig:mmloop2}(d), seems to point in the right direction.
\begin{table*}
\caption{\label{t:numbers}Predicted baryon magnetic moments using the best-fit parameters from Fit 2.}
\begin{ruledtabular}
\begin{tabular}{lcrrrrrr}
& $\displaystyle \mu^{\mathrm{exp}}$ & $\displaystyle \mu^{\mathrm{th}}$ & $\displaystyle \mu^{(0)}$ & $\displaystyle \delta \mu^\mathrm{SB}$ & $\displaystyle \delta \mu^{\mathrm{(loop\, 1)}}$ & $\displaystyle \delta \mu^{\mathrm{(loop\, 2ad)}}$ & $\displaystyle \delta \mu^{\mathrm{(loop 2e)}}$ \\
\hline
$n$ & $-1.9130 \pm 0.000$ & $-2.079$ & $-2.507$ & $ 0.818$ & $ 0.804$ & $-0.861$ & $-0.334$ \\
$p$ & $ 2.7928 \pm 0.000$ & $ 2.852$ & $ 3.760$ & $-0.266$ & $-2.064$ & $ 0.616$ & $ 0.807$ \\
$\Sigma^-$ & $-1.160 \pm 0.025$ & $-1.108$ & $-1.253$ & $-0.085$ & $ 0.487$ & $-0.275$ & $ 0.017$ \\
$\Sigma^0$ & & $ 0.702$ & $ 1.253$ & $ 0.116$ & $-1.531$ & $ 0.390$ & $ 0.474$ \\
$\Sigma^+$ & $ 2.458 \pm 0.010$ & $ 2.512$ & $ 3.760$ & $ 0.317$ & $-3.550$ & $ 1.055$ & $ 0.930$ \\
$\Xi^-$ & $-0.6507 \pm 0.0025$ & $-0.602$ & $-1.253$ & $ 0.637$ & $ 1.059$ & $-0.449$ & $-0.596$ \\
$\Xi^0$ & $-1.250 \pm 0.014$ & $-1.279$ & $-2.507$ & $-0.587$ & $ 3.263$ & $-0.661$ & $-0.788$ \\
$\Lambda$ & $-0.613 \pm 0.004$ & $-0.487$ & $-1.253$ & $-0.021$ & $ 1.531$ & $-0.765$ & $ 0.021$ \\
$\Sigma^0\Lambda$ & $ 1.61 \pm 0.08$ & $ 1.239$ & $ 2.171$ & $-0.119$ & $-1.464$ & $ 0.255$ & $ 0.395$ \\
$\Delta^{++}$ & $ 6.14 \pm 0.51$\footnote{Value reported in Ref.~\cite{lopez}}& $ 5.695$ & $ 6.170$ & $ 0.007$ & $-3.273$ & $ 1.366$ & $ 1.426$ \\
$\Delta^+$ & & $ 2.821$ & $ 3.085$ & $ 0.554$ & $-2.278$ & $ 0.596$ & $ 0.864$ \\
$\Delta^0$ & & $-0.156$ & $ 0.000$ & $ 1.101$ & $-1.283$ & $-0.277$ & $ 0.302$ \\
$\Delta^-$ & & $-3.082$ & $-3.085$ & $ 1.649$ & $-0.288$ & $-1.098$ & $-0.260$ \\
${\Sigma^*}^+$ & & $ 2.044$ & $ 3.085$ & $-0.818$ & $-0.995$ & $ 0.210$ & $ 0.562$ \\
${\Sigma^*}^0$ & & $-0.361$ & $ 0.000$ & $ 0.142$ & $ 0.000$ & $-0.503$ & $ 0.000$ \\
${\Sigma^*}^-$ & & $-2.766$ & $-3.085$ & $ 1.101$ & $ 0.995$ & $-1.216$ & $-0.562$ \\
${\Xi^*}^0$ & & $-0.518$ & $ 0.000$ & $-0.818$ & $ 1.283$ & $-0.681$ & $-0.302$ \\
${\Xi^*}^-$ & & $-2.475$ & $-3.085$ & $ 0.554$ & $ 2.278$ & $-1.358$ & $-0.864$ \\
$\Omega^-$ & $-2.02 \pm 0.05$ & $-2.053$ & $-3.085$ & $ 0.007$ & $ 3.560$ & $-1.370$ & $-1.166$ \\
$\Delta^+ p$ & $ 3.51 \pm 0.09$ & $ 3.381$ & $ 4.097$ & $-0.638$ & $-3.071$ & $ 2.247$ & $ 0.746$ \\
$\Delta^0 n$ & & $ 3.381$ & $ 4.097$ & $-0.638$ & $-3.071$ & $ 2.247$ & $ 0.746$ \\
${\Sigma^*}^0\Lambda$ & $ 2.73 \pm 0.25$\footnote{Value extracted from Ref.~\cite{clas1}}& $ 2.885$ & $ 3.548$ & $-0.168$ & $-3.089$ & $ 2.071$ & $ 0.522$ \\
${\Sigma^*}^0\Sigma^0$ & & $ 1.284$ & $ 2.049$ & $ 0.097$ & $-3.048$ & $ 1.413$ & $ 0.774$ \\
${\Sigma^*}^+\Sigma^+$ & $ 3.17 \pm 0.36$\footnote{Value extracted from Ref.~\cite{clas2}} & $ 3.456$ & $ 4.097$ & $ 0.833$ & $-5.327$ & $ 2.705$ & $ 1.147$ \\
${\Sigma^*}^-\Sigma^-$ & & $-0.888$ & $ 0.000$ & $-0.640$ & $-0.769$ & $ 0.121$ & $ 0.401$ \\
${\Xi^*}^0\Xi^0$ & & $ 3.064$ & $ 4.097$ & $ 0.444$ & $-5.327$ & $ 2.702$ & $ 1.147$ \\
${\Xi^*}^-\Xi^-$ & & $-0.892$ & $ 0.000$ & $-0.640$ & $-0.769$ & $ 0.116$ & $ 0.401$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
An alternative approach to get at least an estimate of the size of the omitted free parameters of Fit 2 above can be achieved following the lines of the fitting procedure implemented in Ref.~\cite{severt}. The approach, adapted to the present analysis, consists in using the prior fit \cite{sch} to extend the standard $\chi^2$ of Eq.~(\ref{eq:stchi2}) to
\begin{equation}
\chi^2_\mathrm{prior} = \chi^2 + \left[ \frac{m_3^{1,\mathbf{1}}}{\Delta m_3^{1,\mathbf{1}}} \right]^2 +
\left[ \frac{n_3^{1,\mathbf{8}}}{\Delta n_3^{1,\mathbf{8}}} \right]^2 +
\left[ \frac{\bar{n}_3^{1,\mathbf{8}}}{\Delta \bar{n}_3^{1,\mathbf{8}}} \right]^2 +
\left[ \frac{m_3^{1,\mathbf{10}+\overline{\mathbf{10}}}}{\Delta m_3^{1,\mathbf{10}+\overline{\mathbf{10}}}} \right]^2 +
\left[ \frac{m_3^{1,\mathbf{27}}}{\Delta m_3^{1,\mathbf{27}}} \right]^2 +
\left[ \frac{\bar{m}_3^{1,\mathbf{27}}}{\Delta \bar{m}_3^{1,\mathbf{27}}} \right]^2,
\end{equation}
where $m_3^{1,\mathbf{rep}}$ and $n_3^{1,\mathbf{rep}}$ are the unknown coefficients that accompany the three-body operators from explicit SB, weighted by their respective errors. While the extra terms added to $\chi^2$ guarantee that these six parameters take values around zero (approximately Gaussian distributed \cite{severt}), the remaining nine parameters are the ones actually fitted to the experimental data. For definiteness, the nominal theoretical errors $\Delta m_3^{1,\mathbf{rep}} = \Delta n_3^{1,\mathbf{rep}} = 0.100$ have been used and the corresponding best-fit parameters are listed in Table \ref{t:bestfitp} under the label Fit 3. It is convenient to point out that nominal errors of $\pm 0.200$ and $\pm 0.050$ produce $\chi_\mathrm{prior}^2=13.97$ and $\chi_\mathrm{prior}^2=14.51$, respectively. In all cases, the six parameters referred to above are small compared to the ones retained in the standard fit, which suggests that neglecting them in the analysis is justified.
\section{\label{sec:con}Concluding remarks}
Baryon magnetic moments to orders $\mathcal{O}(m_q^{1/2})$ and $\mathcal{O}(m_q \ln m_q)$ are evaluated in the present paper in the context of chiral perturbation theory in the large-$N_c$ limit. All the operator structures that appear for $N_c=3$ are accounted for in the analysis. Regrettably, the expressions obtained are rather long; however, including them in full is necessary to make the paper self-contained.
The approach presented here is twofold. On the one hand, previous analyses \cite{rfm09,rfm14} get improved with the addition of new terms not considered before, and, on the other hand, the complete structures presented allow one to carry out a full comparison with the conventional chiral perturbation theory results by using the relations between the operator coefficients $a_1$, $b_2$, $b_3$, and $c_3$ and the $SU(3)$ invariants $\mu_D$, $\mu_F$, $\mu_C$, and $\mu_T$.
The main conclusion obtained is that theoretical expressions of baryon magnetic moments agree in both theories at the physical value $N_c=3$ for $N_f=3$ flavors of light quarks.
A preliminary numerical analysis via a least-squares fit is also conducted to explore the free parameters in the theory. Although a stable fit is observed, the best-fit parameters are not entirely satisfactory under the assumptions made. The main issue is the lack of experimental data needed for a detailed determination of all the free parameters. Improving the theoretical expressions will also require the effects of a nonzero decuplet-octet baryon mass difference in the diagrams of order $\mathcal{O}(m_q \ln m_q)$. The calculation of these contributions, however, involves a non-negligible effort that can be attempted elsewhere. The approach discussed here will constitute useful guidance for that enterprise. Of course, new and/or improved measurements of baryon magnetic moments will be welcome in the future.
\begin{acknowledgments}
The authors are grateful to Consejo Nacional de Ciencia y Tecnolog{\'\i}a (Mexico) for partial support.
\end{acknowledgments}
\section{Introduction}
Substantial efforts have been made to describe the center-outward ordering of data. While it is straightforward for univariate measurements, developing such a measure is challenging for multivariate data due to the lack of total ordering in $\mathbb{R}^p$ for $p>1$, not to speak of random objects, which refer to data that take values in general metric spaces.
Statistical depth has emerged as the main device to determine the centrality of multivariate data with respect to an underlying probability distribution, and has a long history and rich literature including halfspace depth \citep{tuke:75}, convex hull peeling depth \citep{barn:76}, simplicial volume depth \citep{oja:83}, simplicial depth \citep{liu:90}, zonoid depth \citep{kosh:97}, projection depth \citep{zuo:00}, and spatial depth \citep{serf:02},
as well as families of depths \citep[][]{pain:13,yang:18}, with a recent review in
\citet{mosl:21}.
Choosing among different notions of depths, \citet{zuo:00} and \citet{dyck:04} discussed two sets of desirable postulates regarding invariance, monotonicity, continuity and convexity; see \citet{mosl:13} for a more recent discussion.
Consequently, depths constitute a versatile tool for descriptive and inferential statistics that has been employed, for example, in quantification of outlyingness and deepest points, data visualization, regression, classification, clustering and testing \citep[][among others]{liu:99,rous:99:2, rous:99,jorn:04, li:04:2, ghos:05,li:12,liu:13,lang:14}.
Notions of depths have also met with growing popularity in the analysis of functional data that consist of random functions. Various definitions of depths have been proposed for functional data that include two major classes of depths \citep{mosl:21}, integrated depths \citep{nagy:16:2} and infimal depths \citep{mosl:12,mosl:13}. Integrated depths include integrals of cross-sectional univariate depths \citep{frai:01}, modified band depth \citep{lope:09}, modified half-region depth \citep{lope:11} and multivariate functional integrated depths \citep{clae:14}, while depths of infimal type include
band depth \citep{lope:09}, functional Tukey depth \citep{dutt:11}, and half-region depth \citep{lope:11}.
We refer to \citet{gijb:17} for a comprehensive review and discussion of functional depths and their desirable properties; see also \citet{niet:16}.
In this paper we go beyond multivariate and functional data, which are usually viewed as random elements taking values in a vector space, and consider more
complex data that we refer to as random objects. Random objects reside in a space equipped with a metric or dissimilarity measure, which generally lacks a linear structure and is not a vector space. These include data on finite- and infinite-dimensional manifolds such as the Wasserstein space of distributions or data in general Hadamard spaces \citep{lin:19}.
Such data are increasingly becoming available in the course of the evolving framework of data science.
Notions of depths have been generalized to data lying in a finite-dimensional manifold, including circular and spherical data \citep{liu:92:2}, covariance or correlation matrices \citep{zhan:02:2,chen:18:2,pain:18} and Hermitian positive definite matrices \citep{chau:19}.
Lens depth, which is based on the notion of distance, was first introduced in multivariate settings \citep{liu:11} and has recently been extended to data taking values in Riemannian manifolds and semimetric/metric spaces \citep{klei:17,chol:20,geen:21}. Other very recent parallel developments, which appeared on arXiv while this article was being finalized, include an extension of Tukey's depth to the case of random objects \citep{dai:21:2} and an approach to nonparametric inference in metric spaces using a distributional approach \citep{wang:21:1}, extending recent work on ball distance correlation \citep{pan:20}.
Exploring the geometry of random objects as shaped by the underlying metric $d$ is crucial for modern data science applications. Our primary goal is thus to develop a toolbox for this purpose. It turns out that these tools can be harnessed to obtain a notion of data depth that differs substantially from existing notions of depth and thus offers a new perspective. Previous approaches that focused on depth for data that are either not finite-dimensional or do not reside in a linear space adopted classical notions of depth for multivariate data as a starting point and then extended them to more general spaces in various ways. In contrast, from the get-go we develop tools directly aimed at random objects in metric spaces; the key notion of transport depth has so far not been considered even for classical structured linear spaces. Our starting point is the notion of depth profiles, defined for each element in the space $\Omega$, which lend themselves to visualizing random objects and, in combination with optimal transport, to quantifying centrality and outlyingness. Depth profiles emerge as basic tools for the exploratory analysis of samples of random objects. Our methods are grounded in principled modeling and supported by theory. We develop sample based estimators of the proposed notions of depth profiles, transport ranks and transport medians and establish their convergence to the corresponding population targets. For the theory, challenges arise both from the nonlinearity of the underlying metric space and from the fact that the estimated depth profiles are not independent. We overcome these challenges by employing tools from empirical processes.
The \emph{depth profiles} that are key to our approach are indexed by the elements $\omega \in \Omega$. Given $\omega$, the depth profile at $\omega$ is the distribution of the distances between $\omega$ and the other elements of $\Omega$, where this distribution is determined by the probability measure on $\Omega$ that governs the distribution of the random objects across $\Omega$ and generates the observed data. The depth profile thus associates a one-dimensional distribution to each $\omega \in \Omega$. Depth profiles indicate the relative location of $\omega$ within $\Omega$, and for the case of a data sample the relative location of $\omega$ within the data point cloud of random objects. A second key idea is to harness optimal transports between the distributions that constitute the depth profiles at different elements $\omega$. These optimal transports then lead to the definition of a \emph{transport rank} that constitutes a notion of depth and provides an outwards ordering for random objects. We will show that transport ranks can also be used to identify the most central random objects, the \emph{transport median set}.
In Section~\ref{sec:method}, we introduce the key ingredients of our methodology, i.e. depth profiles, transport ranks and transport median sets. In Section~\ref{sec:prop}, we describe their properties and fundamental features, for example, invariance under measure preserving and distance preserving maps. We propose sample based estimators and establish asymptotic guarantees in Section~\ref{sec:est} and then proceed to illustrate the efficacy and visualization of depth profiles and transport ranks with simulated multivariate and distribution-valued data in Section~\ref{sec:simu}. The potential of the new notions for data science will be demonstrated through data applications in Section~\ref{sec:app}, featuring human mortality distributions, U.S. electricity generation compositions, and Manhattan Yellow Taxi trip records. Proofs and auxiliary results are in the Supplement.
\section{Depth Profiles, Transport Ranks and Transport Median Set}\label{sec:method}
In this section we introduce and motivate the key notions for our approach. We assume that data and random objects of interest are situated in a totally bounded separable metric space $(\Omega,d)$.
Consider a probability space $(S,\mathcal{S},\mathbb{P})$,
where $\mathcal{S}$ is the Borel sigma algebra on a domain $S$ and $\mathbb{P}$ is a probability measure.
A random object $X$ is a measurable map, $X\colon S\rightarrow\Omega$ and $P$ is a Borel probability measure that generates the law of $X$, i.e.
$P(A) = \mathbb{P} (\{s\in S: X(s) \in A\}) \eqqcolon \mathbb{P}(X\in A) = \mathbb{P}(X^{-1}(A)) \eqqcolon \mathbb{P} X^{-1} (A)$, for any Borel measurable $A\subseteq\Omega$.
For any $\omega \in \Omega$, let $F_\omega$ denote the cumulative distribution function (cdf) of the distribution of the distance between $\omega$ and a random element $X$ that is distributed according to $P$. We suppress the dependence of $F_\omega$ on $P$ to keep the notation simple.
Formally, for any $t \in \mathbb{R}$, we define the \emph{depth profile} at $\omega$ as
\begin{equation}
F_\omega(t) = \mathbb{P} \left( d(\omega,X) \leq t \right), \label{dp}
\end{equation}
so that $F_\omega$ is a one-dimensional distribution that captures the probability masses enclosed by a ball in $\Omega$ that has center $\omega$ and radius $t$, for all $t \ge 0.$
Another interpretation is that the depth profile at $\omega$ is the distribution of the distances that need to be covered to reach other elements of $\Omega$ when starting out at $\omega$, as dictated by the distribution $P$ of random elements $X$ that take values in $\Omega$.
An element $\omega$ that is centrally located, i.e. close to most other elements, will have a depth profile with more mass near 0, in contrast to a distantly located or outlying element whose depth profile will assign mass farther away from 0.
If depth profiles have densities, for a centrally located $\omega$, this density will have a mode near 0, while the density near 0 will be small for a distantly located $\omega$.
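To make this concrete, the empirical counterpart of the depth profile in \eqref{dp} for a finite point cloud is simply the empirical cdf of the observed distances to $\omega$. The following sketch (in Python; the function names and the bivariate Gaussian example are our own illustrative choices, not part of the formal development) computes it for an arbitrary metric:

```python
import numpy as np

def empirical_depth_profile(omega, sample, metric):
    """Empirical depth profile at omega: the empirical cdf of the
    distances d(omega, X_i) over the observed point cloud."""
    dists = np.sort([metric(omega, x) for x in sample])
    def F_omega(t):
        # fraction of sample points enclosed by the ball of radius t
        return np.searchsorted(dists, t, side="right") / len(dists)
    return F_omega

# illustrative data: a bivariate Gaussian with covariance diag(2, 1),
# as also used for illustration later in this section
rng = np.random.default_rng(0)
sample = rng.multivariate_normal([0.0, 0.0], np.diag([2.0, 1.0]), size=500)
euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

F_center = empirical_depth_profile((0.0, 0.0), sample, euclid)
F_outlier = empirical_depth_profile((6.0, 0.0), sample, euclid)
# the central point encloses more mass at any moderate radius
print(F_center(2.0), F_outlier(2.0))
```

A centrally located $\omega$ yields an empirical profile that rises quickly near $0$, while an outlying $\omega$ yields a profile that stays near $0$ until larger radii.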
A central notion will be the $\omega$-indexed stochastic process $\{d(\omega,X)\}_{\omega \in \Omega}$.
For each $k \in \mathbb{N}$ and collection of indices $i_1,i_2,\dots,i_k$, we consider the $\mathbb{R}^k$ valued random variable $(d(\omega_{i_1},X),d(\omega_{i_2},X),\dots, d(\omega_{i_k},X))$
defining a
probability measure $\dveck_{i_1,i_2,\dots,i_k}$ such that
\begin{equation*}
\dveck_{i_1,i_2,\dots,i_k} \left(A_1 \times A_2 \times \dots \times A_k\right) \coloneqq \mathbb{P}\left(d(\omega_{i_1},X) \in A_1, d(\omega_{i_2},X) \in A_2,\dots, d(\omega_{i_k},X) \in A_k \right)
\end{equation*}
for any Borel sets $A_1, A_2, \dots, A_k \subseteq \mathbb{R}$.
Suppose that $\dveck_{i_1,i_2,\dots,i_k}$ satisfies the following conditions:
\begin{enumerate}[label = (\roman*)]
\item for any permutation $\pi=(\pi(1),\dots,\pi(k))$ of $\{1,\dots,k\}$ and measurable sets $A_j \subseteq \mathbb{R}$,
\begin{equation*}
\dveck_{i_{\pi(1)},i_{\pi(2)},\dots,i_{\pi(k)}} \left(A_{\pi(1)} \times A_{\pi(2)} \times \dots \times A_{\pi(k)}\right)=\dveck_{i_1,i_2,\dots,i_k} \left(A_1 \times A_2 \times\dots\times A_k\right).
\end{equation*}
\item for all measurable sets $A_j \subseteq \mathbb{R}$ and for any $m \in \mathbb{N}$
\begin{align*}
&\dveck_{i_1,i_2,\dots,i_k} \left(A_1 \times A_2 \times \dots \times A_k\right)\\
&=\dveck_{i_1,i_2,\dots,i_k,i_{k+1},\dots,i_{k+m}} \left(A_1 \times A_2 \times \dots \times A_k \times \mathbb{R} \times\dots\times \mathbb{R}\right).
\end{align*}
\end{enumerate}
Then by Kolmogorov's extension theorem, there exists a unique probability measure $\nu$ on $\mathbb{R}^\Omega \coloneqq \{\omega\mapsto g(\omega): \omega\in\Omega,\, g(\omega)\in\mathbb{R}\}$, the underlying law of the stochastic process $\{d(\omega,X)\}_{\omega \in \Omega}$, whose finite dimensional marginals are given by $\dveck_{i_1,i_2,\dots,i_k}$, whence the stochastic process $\{d(\omega,X)\}_{\omega \in \Omega}$ is well defined.
For $\omega \in \Omega$ and $r > 0$, define the open ball $O_{\omega,r}=\{x \in \Omega: d(\omega,x) < r \}$ with radius $r$ centered at $\omega$. Starting with the open balls $\{O_{\omega,r}\}_{\omega \in \Omega, r>0}$, we form an algebra $\mathcal{B}_0$ of subsets of $\Omega$, which includes the empty set and open balls and is closed under complements, finite unions and finite intersections.
On $\mathcal{B}_0$, we define a pre-measure $P_0$, given by the marginals of the law of $\{d(\omega,X)\}_{\omega \in \Omega}$ such that $P_0(B)=P(B)=\mathbb{P}(X^{-1}(B))$ for all $B \in \mathcal{B}_0$. When $\Omega$ is separable, $\mathcal{B}_0$ generates the Borel sigma algebra on $\Omega$ since it is an algebra containing the open balls.
Hence by the Hahn--Kolmogorov theorem, a version of Carath\'eodory's extension theorem, there exists a unique extension of $P_0$ to the Borel sigma algebra of $\Omega$ whose restriction to $\mathcal{B}_0$ coincides with $P_0$. By uniqueness, the extension of $P_0$ is $P$. Hence the marginals of the law of $\{d(\omega,X)\}_{\omega \in \Omega}$ uniquely characterize the underlying Borel probability measure of $X$ on separable metric spaces.
The collection of depth profiles\xspace $\{F_\omega: \omega \in \Omega\}$ represents the one-dimensional marginals of the stochastic process $\{d(\omega,X)\}_{\omega \in \Omega}$. Our goal is to use these simple marginals to obtain information about the very complex distribution of the $\Omega$-valued random variable $X$. The empirical versions of these marginals are the estimated depth profiles introduced below that we utilize to explore the geometry of point clouds of random objects.
Considering the depth profile\xspace $F_X$ of $X$, for each $\omega$, the push-forward map of $F_\omega$ to $F_X$, given by $F_X^{-1}(F_\omega(\cdot))$, outlines the shape of the optimal transport from the depth profile\xspace $F_\omega$ to the depth profile\xspace $F_X$.
We will use the optimal transport map minus the identity map \citep{ambr:08},
\begin{eqnarray} \label{TM} H_{X,\omega} (u) =F_X^{-1}(F_\omega(u))-u,\quad u \ge 0,\end{eqnarray} to assign a measure of centrality to an element $\omega \in \Omega$ with respect to $P$.
When $F_\omega$ is continuous, by change of variable, the integral
\begin{eqnarray} \label{Habs} \int |H_{X,\omega} (u)|\mathrm{d} F_\omega(u)=\int_0^1\left|F_{\obj}^{-1}(u)-F_\omega^{-1}(u)\right|\mathrm{d} u \end{eqnarray} quantifies the amount of mass that needs to be transferred when transporting $F_\omega$ to $F_X$;
\begin{eqnarray} \label{Hsign} \text{sign} \left(\int H_{X,\omega} (u) \mathrm{d} F_\omega(u)\right)=\mathrm{sign}\left(\int_0^1[F_{\obj}^{-1}(u)-F_\omega^{-1}(u)]\mathrm{d} u\right)\end{eqnarray}
summarizes the overall direction of mass transfer, which is from right to left when the sign is negative and from left to right when it is positive.
The intuition about the utility of these notions is that if $\omega$ is more centrally located than $X$ in regard to the measure $P$, we expect the sign to be positive as the mass transfer is predominantly from left to right and the amount of transferred mass to reflect the outlyingness differential between $\omega$ and $X$.
For example, for distributions $P$ that are symmetric around a central point $\omega_\oplus \in \Omega$ and decline when moving away from it, we expect that the sign in \eqref{Hsign} with $\omega=\omega_\oplus$ is positive almost surely and that, given $X$, the amount of mass transfer increases as $d(\omega_\oplus,X)$ increases.
To determine the degree of centrality or outlyingness it makes therefore sense to take the expected value of the mass transferred for a random $X$, calibrated by the sign indicating the transfer direction.
This is illustrated in Figure~\ref{fig:ex_depthprfl} for the case where $X$ is a bivariate Gaussian random variable with mean zero and covariance ${\rm diag}(2,1)$.
For $x\in\{(0,0),(2,0),(4,0),(6,0)\}$ and $\omega = (2,2)$, their corresponding depth profiles are depicted as densities $f_{x}$ and $f_{\omega}$ in the left panel, where the distances from $x$ to the rest of the data are increasing as $x$ moves away from the origin.
For $x\in\{(0,0),(2,0),(4,0),(6,0)\}$, the right panel displays the maps $H_{x,\omega}$ as per \eqref{TM} for the fixed element $\omega=(2,2)$.
For $\omega=(2,2)$, mass moves to the left when transporting $F_\omega$ to $F_{x}$ for $x\in\{(0,0),(2,0)\}$, which are closer to the origin, and moves to the right for $x\in\{(4,0),(6,0)\}$, which are farther away from the origin.
\begin{figure}[!htb]
\centering
\includegraphics[width=.75\linewidth]{one_sample_plots/ex_depthprfl.pdf}
\caption{Left: Depth profiles, represented by the corresponding densities, of five points as indicated, with respect to a bivariate Gaussian distribution with mean zero and covariance ${\rm diag}(2,1)$.
Right: Transport maps $H_{x,\omega}$ as per \eqref{TM} for $x\in\{(0,0),(2,0),(4,0),(6,0)\}$ and $\omega = (2,2)$, where positive (negative) values indicate transport of mass to the right (left).}
\label{fig:ex_depthprfl}
\end{figure}
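For empirical depth profiles supported on the same number of observed distances, the quantile-function integrals in \eqref{Habs} and \eqref{Hsign} reduce to averages of differences of order statistics. A minimal sketch (the helper name is an illustrative choice of ours, assuming equally many atoms in both profiles):

```python
import numpy as np

def signed_transport_cost(dists_x, dists_omega):
    """Sign of the overall transfer direction times the amount of
    transported mass, for two empirical depth profiles given by
    equally long vectors of observed distances."""
    # sorted distances are the empirical quantile functions evaluated
    # on the common grid u = 1/n, 2/n, ..., 1
    diff = np.sort(dists_x) - np.sort(dists_omega)
    return np.sign(diff.mean()) * np.abs(diff).mean()

# when omega's distances are stochastically smaller, omega is the more
# central element: mass moves to the right and the sign is positive
print(signed_transport_cost([3.0, 4.0, 5.0], [1.0, 2.0, 3.0]))  # 2.0
```

Swapping the two arguments flips the sign, mirroring the skew-symmetry of the pairwise comparisons introduced below.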
This motivates \emph{transport ranks} to measure centrality (which can be viewed as a notion of depth) of an element $\omega\in \Omega$ with respect to $P$ as the expected signed amount of mass
transfer when transporting $F_\omega$ to $F_X$, where $P = \probX^{-1}$. Formally,
\begin{equation}\aligned\label{eq:rank}
\rank_{\omega} = \mathbb{E}\left\{\mathrm{sign}\left(\int_0^1[F_{\obj}^{-1}(u)-F_\omega^{-1}(u)]\mathrm{d} u\right) \int_0^1\left|F_{\obj}^{-1}(u)-F_\omega^{-1}(u)\right|\mathrm{d} u\right\}. \endaligned\end{equation}
Given a sample of independent observations $X_1, \dots, X_n$ from $P$, we define a pairwise comparison graph $\mathcal{V}$, where \begin{eqnarray} \label{V} \mathcal{V}_{ij}=\left\{\mathrm{sign}\left(\int_0^1[F_{X_i}^{-1}(u)-F_{X_j}^{-1}(u)]\mathrm{d} u\right) \int_0^1\left|F_{X_i}^{-1}(u)-F_{X_j}^{-1}(u)\right|\mathrm{d} u\right\}, \, 1 \le i,j \le n,\end{eqnarray}
and $\mathcal{V}_{ij}$
expresses a relative order of centrality between pairs $X_i$ and $X_j$ by using their depth profiles $F_{X_i}$ and $F_{X_j}$, where
$X_i$ is more central than $X_j$ if
$\mathcal{V}_{ij}<0$ and $X_j$ is more central than $X_i$ if
$\mathcal{V}_{ij}>0$, and the size of the difference in centrality is indicated by $|\mathcal{V}_{ij}|$.
This is similar to the notion of \emph{HodgeRank} \citep{jian:11:2}. The matrix $\mathcal{V}$ is skew-symmetric. The empirical transport rank of $X_i$, with regard to the empirical measure corresponding to $\{X_1, \dots, X_n\}$, is given, in line with the population definition \eqref{eq:rank}, by \begin{eqnarray} \label{er} \mathbb{E}_{X_j}\left(\left\{\mathrm{sign}\left(\int_0^1[F_{X_j}^{-1}(u)-F_{X_i}^{-1}(u)]\mathrm{d} u\right) \int_0^1\left|F_{X_j}^{-1}(u)-F_{X_i}^{-1}(u)\right|\mathrm{d} u\right\}\right). \end{eqnarray}
It expresses the aggregated preference of $X_i$ with respect to the rest of the data cloud. The more positive the transport rank of $X_i$ is, the more centered it is relative to the other sample elements,
because the optimal transports from $X_i$ to the other points in the data cloud tend to move mass to the right.
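Assembling the pairwise comparison matrix $\mathcal{V}$ of \eqref{V} and the empirical transport ranks from a sample reduces, for empirical profiles, to operations on the sorted rows of the pairwise distance matrix. A rough sketch (variable names and the Gaussian example are ours; all quantile functions are evaluated on the common grid of order statistics):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], np.diag([2.0, 1.0]), size=200)
n = len(X)

# pairwise distances; sorted row i is the empirical quantile function
# of the depth profile of X_i
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Q = np.sort(D, axis=1)

# V[i, j] = sign(mean(Q_i - Q_j)) * mean(|Q_i - Q_j|): a skew-symmetric
# matrix of pairwise centrality comparisons
diff = Q[:, None, :] - Q[None, :, :]
V = np.sign(diff.mean(axis=-1)) * np.abs(diff).mean(axis=-1)

# empirical transport rank of X_i: average comparison of the other
# profiles against the profile of X_i; large values indicate centrality
ranks = V.mean(axis=0)
median_idx = int(np.argmax(ranks))  # sample transport median candidate
print(X[median_idx])
```

The maximizer of the empirical ranks lies near the center of the data cloud, in line with the interpretation above.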
Equipped with an ordering of the elements of $\Omega$ by means of their transport ranks, we define the \emph{transport median} set of $P$ as the collection of points in the support $\Omega_P\subset\Omega$ of $P$ that have maximal transport rank and are therefore most central.
Formally, the transport median set $\mathcal{M}_\oplus$ is defined as
\begin{equation} \label{R}
\mathcal{M}_\oplus = \argmax_{\omega \in \Omega_P} \rank_\omega.
\end{equation}
The depth profiles\xspace of the data objects, together with the transport ranks and the transport median set, are the key ingredients of the proposed toolkit and notion of centrality for the exploration of samples of random objects. These devices lend themselves to depth-profile-based methods for clustering, classification and outlier detection, all challenging tasks when one deals with random objects.
\section{Properties of Depth Profiles and Transport Ranks} \label{sec:prop}
We discuss here some desirable properties of depth profiles\xspace, transport ranks and the transport median set. By Lemma~\ref{lma:UC} established in Section~\ref{sec:est} under Assumption \ref{ass:dpfctn}, the depth profiles $F_\omega(\cdot)$ and the associated quantile function representations $F_\omega^{-1}(\cdot)$ are uniformly Lipschitz in $\omega$ provided that the depth profiles\xspace have uniformly upper bounded densities with respect to the Lebesgue measure.
This means that $F_{\omega_1}$ and $F_{\omega_2}$ are uniformly close to each other as long as $\omega_1$ and $\omega_2$ are close, and the distance between $F_{\omega_1}$ and $F_{\omega_2}$ is upper bounded by a constant factor of $d(\omega_1,\omega_2)$.
Moreover, under the well-separateness Assumption \ref{ass:separation} in Section~\ref{sec:est}
the transport rank $\rank_\omega$ is uniformly continuous in $\omega$ as shown in Lemma~\ref{lma:LUC}.
Let $(\tilde{\Omega},\tilde{d})$ be a metric space. A map $h\colon \Omega \rightarrow \tilde{\Omega}$ is isometric if $d(\omega_1,\omega_2)=\tilde{d}(h(\omega_1),h(\omega_2))$ for all $\omega_1,\omega_2 \in \Omega$. Proposition~\ref{prop:properties}(a) below establishes the invariance of the depth profiles, and thereby the transport rank, under isometric transformations. In Euclidean spaces this ensures that the depth profiles are invariant under affine transformations and rotations.
Next we consider situations when the distribution $P$ of $X$ concentrates around a point $\omega_\oplus \in \Omega$, specifically, when there exists $\omega_\oplus\in\Omega$ such that $d(\omega_\oplus,\omega_1) \leq d(\omega_\oplus,\omega_2)$ implies \begin{equation}
\label{center}
F_{\omega_1}(u) \geq F_{\omega_2}(u)
\end{equation}
for $u$ almost everywhere in $\mathbb{R}$, with a strict inequality $d(\omega_\oplus,\omega_1) < d(\omega_\oplus,\omega_2)$ implying that $F_{\omega_1}(u) > F_{\omega_2}(u)$ on a set with positive Lebesgue measure.
We call such $\omega_\oplus$ an $\Omega$-valued \emph{mode} of $P$. Note that the definition of modes implies the uniqueness of $\omega_\oplus$.
This condition says that a $d$-ball of radius $u$ around $\omega_\oplus$ contains more mass under $P$ than a similar ball around any other point in $\Omega$ and the mass contained around a point $\omega$ decreases as distance from $\omega_\oplus$ increases.
In the above definition, if for any two points $\omega_1,\omega_2$, $F_{\omega_1}(u)=F_{\omega_2}(u)$ almost everywhere under the Lebesgue measure, then $d(\omega_\oplus,\omega_1)=d(\omega_\oplus,\omega_2)$, which implies that the mode $\omega_\oplus$, if it exists, is unique. Proposition~\ref{prop:properties}(b) states that if $P$ has a mode, then the transport rank of the mode is positive and the transport median is unique and corresponds to the mode.
In fact the transport rank decreases as the distance from the mode increases and stays constant for all $\omega$ which are equidistant from $\omega_\oplus$. For distributions that concentrate around their unique Fr\'{e}chet\xspace mean \citep{frec:48}, the Fr\'{e}chet\xspace mean is the mode and hence the transport median \citep{luna:20}.
\begin{Proposition}
\label{prop:properties}
For a separable metric space $(\Omega,d)$ the depth profiles $F_\omega$ and the transport ranks $\rank_\omega$ satisfy the following properties
\begin{enumerate}[label=(\alph*)]
\item\label{prop1a} Let $h\colon\Omega \rightarrow \tilde{\Omega}$ be a bijective isometric measurable map between $(\Omega,d)$ and $(\tilde{\Omega},\tilde{d})$ and $P_h(\cdot)=P(h^{-1}(\cdot))$ the push-forward measure on $\tilde{\Omega}$.
Then $F_{h(\omega)}^{P_h}(u)=F_\omega^P(u)$ for all $u \in \mathbb{R}$, and hence $\rank_{h(\omega)}^{P_h}=\rank_\omega^P$.
Here, $F_\omega^P(u)=\mathbb{P}(d(\omega,X)\leq u)$, where $X$ is a $\Omega$-valued random element such that $P = \mathbb{P} X^{-1}$ (and hence $P_h = \mathbb{P} (h(X))^{-1}$), $F_{h(\omega)}^{P_h}(u)=\mathbb{P}(\tilde{d}(h(\omega),h(X))\leq u)$,
$\rank_\omega^P$ is the transport rank of $\omega$ with respect to $P$ and
$\rank_{h(\omega)}^{P_h}$ is the transport rank of $h(\omega)$ with respect to $P_h$.
\item\label{prop1b} Let $\omega_\oplus$ be a mode as per \eqref{center} of $P$. Then $\rank_{\omega_\oplus} \geq 0$ and $\rank_\omega$ is a non-increasing function of $d(\omega_\oplus,\omega)$. Moreover $\rank_{\omega_1}=\rank_{\omega_2}$ if $d(\omega_\oplus,\omega_1)=d(\omega_\oplus,\omega_2)$. Hence for such $P$ the transport median $\mathcal{M}_\oplus$ is unique and $\mathcal{M}_\oplus=\omega_\oplus$.
\end{enumerate}
\end{Proposition}
Proposition~\ref{prop:properties}(b) provides a characterization of the radial
ordering induced by the transport rank for the special case where the data distribution on $\Omega$ has a mode.
Consider a curve $\gamma\colon I \rightarrow \Omega$ in the space $(\Omega,d)$ where $I \subset \mathbb{R}$ is a non-empty interval. The length of the curve $L(\gamma)$ is defined as
\begin{equation} \label{sup}
L(\gamma)=\sup \sum_{i=1}^k d(\gamma(t_i),\gamma(t_{i-1})),
\end{equation}
where the supremum in (\ref{sup}) is taken over all $t_0 \leq t_1 \leq \dots \leq t_k$ in $I$ and all $k \in \mathbb{N}$; $\gamma$ is called a rectifiable curve if $L(\gamma)<\infty$. A length metric $d_L$ associated with the metric $d$ of $\Omega$ is then
\begin{equation} \label{L}
d_L(x,y)=\inf_{\gamma\in\mathcal{L}} L(\gamma),
\end{equation}
where the infimum in (\ref{L}) is taken over the class of curves $\mathcal{L}=\{\gamma\colon [0,1] \rightarrow \Omega\, | \, \gamma(0)=x, \gamma(1)=y,\,\text{and } \gamma \text{ is rectifiable} \}$.
Then $(\Omega,d)$ is a length space if $d(x,y)=d_L(x,y)$ for all $x,y \in \Omega$. A curve $\gamma\colon I \rightarrow \Omega$ has constant speed if there exists $\lambda > 0$, the speed of $\gamma$, such that $L(\gamma|_{[t,t']})=\lambda |t-t'|$ for all $t \leq t'$ contained in $I$. A constant speed curve $\gamma$ is a geodesic if $L(\gamma|_{[t,t']})=d(\gamma(t),\gamma(t'))=\lambda |t-t'|$ for all $t \leq t'$ contained in $I$, and $(\Omega,d)$ is a geodesic space if for all pairs of points $x,y \in \Omega$, there exists a geodesic $\gamma\colon[0,1]\rightarrow \Omega$ such that $\gamma(0)=x$ and $\gamma(1)=y$. Corollary~\ref{geodesic} shows the monotonicity of the transport rank along geodesics starting from a mode when $\Omega$ is a geodesic space.
\begin{Corollary}
\label{geodesic}
Let $(\Omega,d)$ be a geodesic space and $P$ a distribution on $\Omega$ such that $\omega_\oplus$ is the mode of $P$. Let $\gamma\colon [0,1]\rightarrow\Omega$ be a geodesic such that $\gamma(0)=\omega_\oplus$ and $d_L(\gamma(0),\gamma(1)) = d(\gamma(0),\gamma(1))$. Then for any $t,t' \in [0,1]$ such that $t \leq t'$, $\rank_{\gamma(t)} \geq \rank_{\gamma(t')}$, with strict inequality if $t < t'$.
\end{Corollary}
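The length definition in (\ref{sup}) can be illustrated numerically: the chordal sum over any partition lower-bounds $L(\gamma)$ and converges to it under refinement. A minimal Python sketch (the quarter-circle example and function names are ours, purely illustrative):

```python
import math

def curve_length(gamma, t0=0.0, t1=1.0, k=10_000):
    """Approximate L(gamma) by the sum of chord lengths over a uniform
    partition t0 = s_0 < s_1 < ... < s_k = t1; by the supremum
    definition this is a lower bound for L(gamma)."""
    ts = [t0 + (t1 - t0) * i / k for i in range(k + 1)]
    pts = [gamma(t) for t in ts]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# A quarter of the unit circle, parametrized with constant speed pi/2;
# its length is pi/2.
quarter_circle = lambda t: (math.cos(math.pi * t / 2), math.sin(math.pi * t / 2))
```

For the constant-speed parametrization above, the restriction $\gamma|_{[t,t']}$ has approximate length $(\pi/2)|t-t'|$, matching the definition of speed.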
The set of maximizers of $\rank_\omega$ in $\Omega_P$ constitutes the transport median set defined in (\ref{R}). The function $\rank_\omega$ is uniformly continuous under Assumption \ref{ass:separation}. Therefore, when Assumption \ref{ass:separation} holds, the transport median set is guaranteed to be non-empty whenever $\Omega_P$ is compact.
If $\Omega$ is a length space which is complete and locally compact, then by the Hopf--Rinow theorem, $\Omega$ is a geodesic space, and if $\Omega_P$ is any bounded closed subset of $\Omega$, then it is guaranteed to be compact.
Equipped with an ordering of the elements of $\Omega$ in terms of their transport ranks,
it is natural to consider level sets of the form $L_\alpha = \{\omega \in \Omega: \rank_\omega = \alpha\}$ and nested superlevel sets $L^{+}_\alpha = \{\omega \in \Omega: \rank_\omega \geq \alpha\}$.
By definition, $L^{+}_{\alpha_1} \subseteq L^{+}_{\alpha_2}$ whenever $\alpha_1 \geq \alpha_2$. By the continuity of $\rank_\omega$ under Assumption \ref{ass:separation} the sets $L_\alpha$ and $L^{+}_\alpha$ are closed. Moreover when $(\Omega,d)$ is a bounded, complete and locally compact length space, by the Hopf--Rinow theorem $L_\alpha$ and $L^{+}_\alpha$ are compact as well. There are numerous important applications of level sets and superlevel sets of random objects, for example
superlevel sets $L^{+}_\alpha$ can be viewed as depth regions and
level sets can be used to define \emph{depth quantile sets}. These can be viewed as a generalization of univariate quantiles to the case of random objects. Specifically, a $\zeta$-level depth quantile set can be defined as $L_{\alpha}$ such that $P(X \in L^{+}_{\alpha}) = \zeta$, for $\zeta\in(0,1)$.
Complements of superlevel sets can be used to identify potential outliers by highlighting observations with low transport ranks and can be also employed for data trimming by excluding points which have transport ranks lower than a threshold $\alpha$; one then might consider maximizers of $\rank_{\omega}$ over trimmed versions of $\Omega_P$ to obtain trimmed analogues of the transport median set $\mathcal{M}_\oplus$.
\section{Estimation and Theory}\label{sec:est}
While so far we have introduced the notions of profile depth, transport rank and transport median sets at the population level, in practice one needs to estimate these quantities from a data sample of random objects $\{X_{i}\}_{i=1}^{n}$, i.e. a sample of $n$ independent realizations of $X$.
For estimating the depth profiles $F_\omega$, $\omega \in \Omega_P$, we consider the empirical estimates
\begin{equation}
\label{eq:FhatO}
{\widehat{F}_\omega}(t) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I} \{d(\omega, X_{i}) \leq t\},\quad t \in \mathbb{R}.
\end{equation}
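The estimate \eqref{eq:FhatO} is simply the empirical distribution function of the $n$ distances from $\omega$; sorting the distances once makes any later evaluation a binary search. A hedged Python sketch (the function names and the Gaussian example data are ours, not from the paper):

```python
import numpy as np

def depth_profile(omega, X, dist):
    """Empirical depth profile: t -> (1/n) * #{i : d(omega, X_i) <= t}."""
    d = np.sort([dist(omega, x) for x in X])
    return lambda t: np.searchsorted(d, t, side="right") / len(d)

# Illustrative data: 500 points from a standard Gaussian in R^2,
# with d the Euclidean metric.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
F0 = depth_profile(np.zeros(2), X, lambda a, b: float(np.linalg.norm(a - b)))
```

The returned function is a right-continuous step function, nondecreasing in $t$, with values in $\{0, 1/n, \dots, 1\}$.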
Replacing expectations with empirical means and using estimated depth profiles $\widehat{F}_{\obj_{\subidx}}$ as surrogates of $F_{\obj_{\subidx}}$, we obtain estimates for the transport rank of $\omega\in\Omega$ defined in \eqref{eq:rank} as
\begin{equation}\aligned\label{eq:hrank}
\widehat{\rank}_{\omega} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{sign}\left(\int_0^1\left[\widehat{F}_{\obj_{\subidx}}^{-1}(u)-{\widehat{F}_\omega}^{-1}(u)\right]\mathrm{d} u\right) \int_0^1\left|\widehat{F}_{\obj_{\subidx}}^{-1}(u)-{\widehat{F}_\omega}^{-1}(u)\right|\mathrm{d} u. \endaligned\end{equation}
The term $\int_0^1|\widehat{F}_{\obj_{\subidx}}^{-1}(u)-{\widehat{F}_\omega}^{-1}(u)|\mathrm{d} u$ quantifies the discrepancy between $\widehat{F}_{\obj_{\subidx}}$ and ${\widehat{F}_\omega}$, while the sign of $\int_0^1[\widehat{F}_{\obj_{\subidx}}^{-1}(u)-{\widehat{F}_\omega}^{-1}(u)]\mathrm{d} u$ provides a comparison of the outlyingness of $\omega$ and $X_{i}$; a positive or negative sign indicates that $\omega$ is more central or outlying than $X_{i}$, respectively. Finally we define the estimated transport median set $\widehat{\mathcal{M}}_\oplus$ as
\begin{equation*}
\widehat{\mathcal{M}}_\oplus = \argmax_{\omega \in \{X_1,X_2, \dots, X_n\}} \widehat{\rank}_\omega.
\end{equation*}
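When the depth profiles are the empirical distributions of the $n$ pairwise distances, the quantile functions in \eqref{eq:hrank} are step functions with jumps at $j/n$, so the integrals over $u\in[0,1]$ reduce to averages over sorted distances. A minimal sketch under this assumption (the matrix interface is ours):

```python
import numpy as np

def transport_ranks(D):
    """Estimated transport ranks from an n x n pairwise distance
    matrix D.  Sorting row i gives the jump points of the empirical
    quantile function of the depth profile of X_i, so the integrals
    in the estimator become means over the sorted entries."""
    Q = np.sort(D, axis=1)
    n = Q.shape[0]
    ranks = np.empty(n)
    for i in range(n):
        diff = Q - Q[i]  # quantile differences on the common grid
        ranks[i] = np.mean(np.sign(diff.mean(axis=1)) * np.abs(diff).mean(axis=1))
    return ranks

# The estimated transport median set consists of the sample points
# attaining the maximal estimated rank.
```

For a symmetric one-dimensional sample the maximal estimated rank is attained at the most central point, as expected.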
To obtain asymptotic properties of these estimators and convergence towards their population targets, we require the following assumptions.
\begin{enumerate}[label = (A\arabic*)]
\item\label{ass:entropy} Let $N(\varepsilon, \Omega, d)$ be the covering number of the space $\Omega$ with balls of radius $\varepsilon$ and $\log N(\varepsilon, \Omega, d)$ the corresponding metric entropy, which satisfies
\begin{equation} \label{entropy}
\varepsilon \log N(\varepsilon, \Omega, d) \rightarrow 0\quad \text{as} \quad \varepsilon \rightarrow 0.
\end{equation}
\item\label{ass:dpfctn}
For every $\omega \in \Omega$ assume that $F_\omega$ is absolutely continuous with continuous density $f_\omega$ and let
$\underline{\Delta}_{\omega} = \inf_{t \in \mathrm{supp}(f_\omega)} f_\omega(t) \ \text{and} \ \overline{\Delta}_{\omega} = \sup_{t \in \mathbb{R}} f_\omega(t).$
Assume that $\underline{\Delta}_{\omega}>0$ for each $\omega \in \Omega$. Moreover assume
there exists
$\overline{\Delta} > 0$ such that
\begin{equation*}
\sup_{\omega \in \Omega} \overline{\Delta}_\omega \leq \overline{\Delta}.
\end{equation*}
\item\label{ass:separation} {For some $\eta,K > 0$ and for all $0 < \varepsilon < \eta$, there exists $ \tau(\varepsilon) > K\varepsilon $} such that
\begin{equation*}
\inf_{d(\omega_1,\omega_2)>\varepsilon} \left| \int_{0}^1 F_{\omega_1}^{-1}(u) \mathrm{d}{u}- \int_{0}^1 F_{\omega_2}^{-1}(u) \mathrm{d}{u}\right| \geq \tau(\varepsilon).
\end{equation*}
\item\label{ass:separation2} For some $\eta' > 0$, for any $0 < \varepsilon < \eta'$, $\alpha(\varepsilon)= \inf_{\omega_\oplus \in \mathcal{M}_\oplus}\inf_{d(\omega,\omega_\oplus)>\varepsilon} \left| \rank_{\omega}-\rank_{\omega_\oplus}\right| > 0$.
\end{enumerate}
Assumptions \ref{ass:entropy} and \ref{ass:dpfctn} are necessary for Theorem~\ref{thm:fhat}, which provides the uniform convergence of ${\widehat{F}_\omega}$ to $F_\omega$. This is the primary device to overcome the dependence between the summands in the estimator of the transport rank and to establish its uniform convergence to the population transport rank. For any $t \in \mathbb{R}$, $F_\omega (t)= \mathbb{E}({\widehat{F}_\omega} (t))$. We define functions $y_{\omega,t}\colon \Omega \rightarrow \mathbb{R}$, $y_{\omega,t}(x) = \mathbb{I}\{d(\omega,x) \leq t\}$, and the function class $\mathcal{F}=\{ y_{\omega,t} : \omega \in \Omega, t \in \mathbb{R} \}$. Theorem~\ref{thm:fhat} establishes that under assumptions \ref{ass:entropy} and \ref{ass:dpfctn} the function class $\mathcal{F}$ is $P$-Donsker.
\begin{Theorem}
\label{thm:fhat}
Under assumptions \ref{ass:entropy} and \ref{ass:dpfctn}, $\{ \sqrt{n}({\widehat{F}_\omega} (t)-F_\omega (t)): \omega \in \Omega,\, t \in \mathbb{R} \}$ converges weakly to a zero-mean Gaussian process $\mathbb{G}_P$ with covariance given by
\begin{equation*}
\mathcal{C}_{(\omega_1,t_1),(\omega_2,t_2)} = {\rm Cov}\left(y_{\omega_1,t_1}(X), y_{\omega_2,t_2}(X)\right).
\end{equation*}
\end{Theorem}
Assumption \ref{ass:entropy} is a restriction on the complexity of the metric space $(\Omega,d)$ which is satisfied for a broad class of spaces. Assumption \ref{ass:dpfctn} is a mild regularity condition on the depth profiles. Assumption \ref{ass:separation} guarantees that in a small neighborhood of any $\omega \in \Omega$, the ``expectations'' of the depth profiles, i.e. $\int_{0}^1 F_\omega^{-1}(u)\mathrm{d}{u}=\int t f_{\omega}(t) \mathrm{d}{t}$, are well separated. Assumption \ref{ass:separation2}, which is needed primarily to guarantee convergence of the estimated transport median set $\widehat{\mathcal{M}}_\oplus$, guarantees that the transport ranks of points outside of $\varepsilon$-neighbourhoods of any $\omega_\oplus \in \mathcal{M}_\oplus$ are well separated from $\rank_{\omega_\oplus}$ as long as $\varepsilon$ is small enough.
Any space $(\Omega,d)$ such that $\log N(\varepsilon, \Omega, d) = O\left(\frac{1}{\varepsilon^\alpha}\right)$ for some $\alpha < 1$ satisfies Assumption \ref{ass:entropy}. This is true for any $(\Omega,d)$ which can be represented as a subset of elements in a finite dimensional Euclidean space, for example the space of graph Laplacians or network adjacency matrices with fixed number of nodes \citep{kola:20,gine:17}, SPD matrices of a fixed size \citep{dryd:09,than:21}, simplex valued objects in a fixed dimension \citep{jeon:20,chen:12} and the space of phylogenetic trees with the same number of tips \citep{kim:20,bill:01}.
It holds that $\log N(\varepsilon, \Omega, d) = O\left(\varepsilon^{-\alpha}\right)$ for any $\alpha < 1$ when $\Omega$ is a VC-class of sets or a VC-class of functions \citep[Theorems~2.6.4 and 2.6.7,][]{well:96}.
Assumption \ref{ass:entropy} holds for $p$-dimensional smooth function classes $C_1^\alpha(\mathcal{X})$ \citep[page 155,][]{well:96} on bounded convex sets $\mathcal{X}$ in $\mathbb{R}^p$ equipped with the $\|\cdot\|_\infty$-norm \citep[Theorem~2.7.1,][]{well:96} and the $\|\cdot\|_{r,Q}$-norm, i.e. the $L_r(Q)$-norm for any probability measure $Q$ on $\mathbb{R}^p$ \citep[Corollary~2.7.2,][]{well:96}, provided $\alpha \geq p+1$.
Of particular interest for many applications is the case when $\Omega$ is the space of one-dimensional distributions on some compact interval $I \subset \mathbb{R}$ with the underlying metric $d=d_W$ with $d_W$ being the 2-Wasserstein metric \citep{mull:19:5}, defined in (\ref {eq:dwass}). If $\Omega$ is represented using the quantile function of the distributions then, without any further assumptions, $\log N(\varepsilon, \Omega, d_W)$ is upper and lower bounded by a factor of $1/\varepsilon$ \citep[Proposition~2.1,][]{blei:07} which does not meet the criterion in \ref{ass:entropy}. However, if we assume that the distributions in $\Omega$ are absolutely continuous with respect to the Lebesgue measure on $I$ with smooth densities uniformly taking values in some interval $[l_\Omega,u_\Omega]$, then $\Omega$ equipped with $d_W$ satisfies \ref{ass:entropy}. To see this, observe that with the above characterization of $\Omega$ the quantile functions corresponding to the distributions in $\Omega$ have smooth derivatives that are uniformly bounded.
If we let $\mathcal{Q}_{deriv}$ denote the space of the uniformly bounded derivatives of the quantile functions in $\Omega$, then $\log N(\varepsilon, \mathcal{Q}_{deriv}, \|\cdot\|_1) = O\left(\varepsilon^{-1}\right)$, where $\|\cdot\|_1$ is the $L_1$ norm under the Lebesgue measure on $I$ \citep[Corollary~2.7.2,][]{well:96}.
Using Lemma~1 in \cite{gao:09}, with $\mathcal{F} \equiv \mathcal{Q}_{deriv}$, $\mathcal{G} \equiv \Omega$, $\alpha(x)=x$ and $\phi(\varepsilon)=K/\varepsilon$ for some constant $K$, $\log N(\varepsilon, \Omega, d_W) = O\left(\varepsilon^{-1/2}\right)$ which meets the requirement of \ref{ass:entropy}.
If $\Omega$ is the space of $p$-dimensional distributions on a compact convex set $I \subset \mathbb{R}^p$, represented using their distribution functions endowed with the {$L_{r}$} metric with respect to the Lebesgue measure on $I$, then \ref{ass:entropy} is satisfied if $\Omega \subset C_1^\alpha(I)$ for $\alpha \geq p+1$ (see above discussion).
Next we discuss the asymptotic convergence of the estimates $\widehat{\rank}_\omega$ of transport rank. For this we need Assumption \ref{ass:separation} together with assumptions \ref{ass:entropy} and \ref{ass:dpfctn}. Theorem~\ref{thm:Rhat} establishes a $\sqrt{n}$-rate of convergence uniformly in $\omega$ for $\widehat{\rank}_\omega$. The proof of Theorem~\ref{thm:Rhat} relies on Corollary~\ref{thm:qhat}, which follows from the proof of Theorem~\ref{thm:fhat} and Lemma~\ref{lma:UC}, which shows that the population depth profiles $F_\omega$ and the corresponding quantile representations $F_\omega^{-1}$ and ${\widehat{F}_\omega}^{-1}$ are (almost surely) Lipschitz in $\omega$.
\begin{Corollary}
\label{thm:qhat}
Under assumptions \ref{ass:entropy} and \ref{ass:dpfctn},
\begin{equation*}
{\sqrt{n}} \sup_{\omega \in \Omega} \sup_{u \in [0,1]} \left| {\widehat{F}_\omega}^{-1} (u)-F_\omega^{-1} (u) \right| = O_{\mathbb{P}}(1).
\end{equation*}
\end{Corollary}
\begin{Lemma}
\label{lma:UC}
For any $\omega_1,\omega_2 \in \Omega$ under Assumption \ref{ass:dpfctn},
\begin{eqnarray*}
&&\sup_{u \in [0,1]} |F^{-1}_{\omega_1}(u)-F^{-1}_{\omega_2}(u)| \leq d(\omega_1,\omega_2),\\
&&\sup_{u \in [0,1]} |\widehat{F}^{-1}_{\omega_1}(u)-\widehat{F}^{-1}_{\omega_2}(u)| \leq d(\omega_1,\omega_2) \ \text{(almost surely)} \ \text{and}\\
&&\sup_{u \in [0,1]} |F_{\omega_1}(u)-F_{\omega_2}(u)| \leq \overline{\Delta} d(\omega_1,\omega_2).
\end{eqnarray*}
\end{Lemma}
\begin{Theorem}
\label{thm:Rhat}
Under assumptions \ref{ass:entropy}--\ref{ass:separation}
\begin{equation*}
\sqrt{n} \sup_{\omega \in \Omega} |\widehat{\rank}_\omega - \rank_\omega| = O_{\mathbb{P}}(1).
\end{equation*}
\end{Theorem}
Next we establish the convergence of the estimated transport median set $\widehat{\mathcal{M}}_\oplus$ to $\mathcal{M}_\oplus$. Define the Hausdorff metric between $\widehat{\mathcal{M}}_\oplus$ and $\mathcal{M}_\oplus$ as $\rho_H\left(\widehat{\mathcal{M}}_\oplus,\mathcal{M}_\oplus\right)$ where
\begin{equation}
\rho_H\left(\widehat{\mathcal{M}}_\oplus,\mathcal{M}_\oplus\right) = \max \left( \sup_{\omega \in \widehat{\mathcal{M}}_\oplus} d(\omega,\mathcal{M}_\oplus), \sup_{\omega \in \mathcal{M}_\oplus} d(\omega,\widehat{\mathcal{M}}_\oplus) \right)
\end{equation}
where for any $\omega \in \Omega$ and any subset $\mathcal{S} \subset \Omega$, $d(\omega,\mathcal{S}) = \inf_{s \in \mathcal{S}} d(\omega,s)$. Theorem~\ref{thm:Mhat} shows the asymptotic closeness in the Hausdorff metric of the estimated transport median set and the true transport median set. The proof of Theorem~\ref{thm:Mhat} relies on assumptions \ref{ass:entropy}--\ref{ass:separation2} and Lemma~\ref{lma:LUC} that gives local uniform continuity within $\eta$-neighborhoods of the transport rank $\rank_\omega$ in $\omega$.
\begin{Lemma}
\label{lma:LUC}
Under Assumption \ref{ass:separation}, for any $\omega_1,\omega_2 \in \Omega$ and $\,0 < \varepsilon < \eta$, with $\eta$ as in Assumption \ref{ass:separation}, $d(\omega_1,\omega_2) < \varepsilon$ implies that
\begin{equation*}
|\rank_{\omega_1}-\rank_{\omega_2}| \leq \varepsilon \left(1+2/K\right)
\end{equation*}
where $K>0$ is as in Assumption \ref{ass:separation}.
\end{Lemma}
\begin{Theorem}
\label{thm:Mhat}
Assume that the distribution $P$ is such that $\mathcal{M}_\oplus$ is non-empty. Under assumptions \ref{ass:entropy}--\ref{ass:separation2}
\begin{equation*}
\rho_H\left(\widehat{\mathcal{M}}_\oplus,\mathcal{M}_\oplus\right) = o_{\mathbb{P}}(1).
\end{equation*}
\end{Theorem}
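For finite sets, such as the estimated transport median set, the Hausdorff metric defined above can be evaluated directly from pairwise distances. A short illustrative sketch (function names are ours):

```python
def hausdorff(A, B, dist):
    """Hausdorff distance between finite sets A and B, using
    d(a, B) = min over b in B of dist(a, b) as in the definition."""
    d_AB = max(min(dist(a, b) for b in B) for a in A)
    d_BA = max(min(dist(b, a) for a in A) for b in B)
    return max(d_AB, d_BA)
```

Note the asymmetry of the two directed terms: a point of $B$ far from all of $A$ inflates $\rho_H$ even if every point of $A$ has a close neighbor in $B$.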
\section{Simulations}\label{sec:simu}
For numerical experiments, we perform clustering analysis of the observations $X_{i}$ based on their depth profiles $\widehat{F}_{\obj_{\subidx}}$ as per \eqref{eq:FhatO} with $\omega = X_{i}$.
Note that $\widehat{F}_{\obj_{\subidx}}$ are distributions and hence do not lie in a linear/Euclidean space.
Extending $k$-means clustering to distribution-valued data, we consider the Wasserstein space $(\mathcal{W},d_W)$ of absolutely continuous distributions on $\mathbb{R}$ with finite second moments, where $d_W$ denotes the Wasserstein metric given by
\begin{equation}\aligned\label{eq:dwass}
d_W(F_1,F_2) = \left(\int_0^1\left[F_1^{-1}(u)-F_2^{-1}(u)\right]^2\mathrm{d} u\right)^{1/2} = d_{L^2}(F_1^{-1},F_2^{-1}),\ \text{for }F_1,F_2\in\mathcal{W}. \endaligned\end{equation}
Here, $F_l^{-1}$ denotes the quantile function corresponding to $F_l$, for $l=1,2$; specifically, $F_l^{-1}(u) = \inf\{x\in\mathbb{R}: F_l(x)\ge u\}$, for $u\in(0,1)$.
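For empirical distributions with the same number of atoms, the quantile functions in \eqref{eq:dwass} are step functions taking the sorted sample values, so $d_W$ reduces to an $L^2$ distance between matched order statistics. A minimal sketch under this equal-size assumption:

```python
import numpy as np

def wasserstein2(x, y):
    """2-Wasserstein distance between two empirical distributions on R
    with equally many atoms: the integral over u in (0, 1) becomes a
    mean over matched order statistics of the sorted samples."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    return float(np.sqrt(np.mean((x - y) ** 2)))

# Translating a sample by c shifts it by exactly |c| in d_W.
```

This monotone matching of order statistics is the optimal transport coupling on the real line.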
The $n$ observations are partitioned into $k$ subsets $S_1,\dots,S_k$ so as to minimize the within-cluster sums of squared Wasserstein distances,
\begin{equation}\aligned\label{eq:kwmeans}
\argmin_{\{S_1,\dots,S_k\}} \sum_{j=1}^k\sum_{i:X_{i}\in S_{j}} d_W^2(\widehat{F}_{\obj_{\subidx}},\widehat{F}_{\clust_{\clidx}\oplus}), \endaligned\end{equation}
where
\begin{equation}\aligned\label{eq:wclMeanDepthPrfl}
\widehat{F}_{\clust_{\clidx}\oplus} = \argmin_{q\in\mathcal{W}}\sum_{i:X_{i}\in S_{j}}d_W^2(\widehat{F}_{\obj_{\subidx}},q) \endaligned\end{equation}
is the empirical Wasserstein barycenter of depth profiles of observations within $S_{j}$, for $j=1,\dots,k$.
We refer to this method as $k$-W-means clustering.
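Because $d_W$ coincides with the $L^2$ distance between quantile functions, $k$-W-means reduces to ordinary $k$-means after representing each depth profile by its quantile function on a common grid; the Wasserstein barycenter \eqref{eq:wclMeanDepthPrfl} of a cluster is then the pointwise mean of its quantile rows. A hedged Python sketch of this reduction with Lloyd-style iterations (function names are ours; the paper's experiments use an R implementation):

```python
import numpy as np

def k_w_means(Q, k, n_iter=50, seed=0):
    """Lloyd iterations for k-W-means.  Q: (n, m) array whose row i is
    the quantile function of the i-th depth profile on a common grid
    in (0, 1).  Since d_W is the L2 distance between quantile
    functions, cluster barycenters are pointwise means of rows."""
    rng = np.random.default_rng(seed)
    centers = Q[rng.choice(len(Q), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((Q[:, None, :] - centers[None, :, :]) ** 2).mean(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):  # keep an empty cluster at its old center
            if np.any(labels == j):
                centers[j] = Q[labels == j].mean(axis=0)
    return labels, centers
```

On well-separated groups of quantile functions the iterations recover the partition regardless of which rows seed the centers.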
The number of clusters $k$ is chosen as the maximizer of the ratio of the adjusted between-group sum of squares (BGSS) to the adjusted within-group sum of squares (WGSS) \citep{cali:74},
$k^{*}=\argmax_{k} [{\mathrm{BGSS}/(k-1)}]/[{\mathrm{WGSS}/(n-k)}],$ where
\begin{equation}\aligned\nonumber
\mathrm{WGSS} &= \sum_{j=1}^{k} \frac{1}{|S_{j}|}
\sum_{i<i':X_{i},X_{i'}\in S_{j}} d_W^2(\widehat{F}_{\obj_{\subidx}},\widehat{F}_{\obj_{\vsubidx}}),\,
\mathrm{BGSS} = \frac{1}{n}
\sum_{1\le i<i'\le n} d_W^2(\widehat{F}_{\obj_{\subidx}},\widehat{F}_{\obj_{\vsubidx}}) - \mathrm{WGSS}.\endaligned\end{equation}
We implement the $k$-W-means clustering using the R package \texttt{NbClust} \citep{nbclust}.
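The WGSS and BGSS quantities above can be computed directly from the matrix of squared pairwise Wasserstein distances between depth profiles. A small illustrative sketch of the resulting ratio (our own helper, not the \texttt{NbClust} implementation):

```python
import numpy as np

def ch_ratio(D2, labels, k):
    """Ratio [BGSS/(k-1)] / [WGSS/(n-k)] computed from an n x n matrix
    D2 of squared pairwise distances, following the WGSS and BGSS
    formulas above (sums over unordered pairs)."""
    n = len(labels)
    total = D2[np.triu_indices(n, 1)].sum() / n
    wgss = 0.0
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        if len(idx) > 1:
            sub = D2[np.ix_(idx, idx)]
            wgss += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
    bgss = total - wgss
    return (bgss / (k - 1)) / (wgss / (n - k))
```

A partition that respects the true group structure yields a much larger ratio than one that mixes the groups, which is what drives the choice of $k^{*}$.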
In what follows, we order clusters identified by $k$-W-means clustering according to the average transport ranks within each cluster in a descending order. Specifically, the first cluster having the largest average transport rank is the most central, while the last cluster having the smallest average transport rank corresponds to the most outlying cluster.
\subsection{Simulations for Multivariate Gaussians}
For $p=2$ and $50$, we sample $n=500$ observations $\{X_{i}\}_{i=1}^n$ independently from a $p$-dimensional Gaussian distribution $N(\bm\mu,\bm\Sigma)$, where $\bm\mu=\bm{0}$ and $\bm\Sigma = {\rm diag}(p,p-1,\dots,1)$. The depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} and transport ranks $\widehat{\rank}_{X_{i}}$ \eqref{eq:hrank} are computed for each observation with $d$ being the Euclidean metric in $\mathbb{R}^p$.
For $p=2$, the transport ranks \eqref{eq:hrank} based on depth profiles capture the center-outward ordering of the $2$-dimensional Gaussian data and are indeed highly correlated with the squared Mahalanobis distance from each observation $X_{i}$ to the mean $\bm\mu$, i.e. $(X_{i}-\bm\mu)^\top\bm\Sigma^{-1}(X_{i}-\bm\mu)$ (Figure~\ref{fig:2dGauss}).
The $k$-W-means clustering based on the depth profiles partitions the data into $k=7$ clusters, lying from the closest to the farthest from the center $\bm\mu$ (Figure~\ref{fig:2dGauss}); specifically, the Wasserstein barycenters of the depth profiles within each cluster \eqref{eq:wclMeanDepthPrfl} shift to the right from cluster 1 to cluster 7, indicating increased distances from the other observations.
Figure~\ref{fig:50dGauss} demonstrates similar findings for a simulated sample of $n=500$ observations from a $50$-dimensional Gaussian distribution $N(\bm\mu,\bm\Sigma)$ with $\bm\mu=\bm{0}$ and $\bm\Sigma = {\rm diag}(50,49,\dots,1)$.
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/2dGauss_raw_HodgeRank.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/2dGauss_HodgeRank_vs_logDensity.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/2dGauss_raw_kmeans7rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/2dGauss_withinClustMeanOfDepthFctn_kmeans7rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\caption{Scatter plots of a sample of $n=500$ observations generated from a $2$-dimensional Gaussian distribution $N(\bm\mu,\bm\Sigma)$ with $\bm\mu=\bm{0}$ and $\bm\Sigma = {\rm diag}(2,1)$, where the points are colored according to their transport ranks \eqref{eq:hrank} (top left) and $k$-W-means clustering \eqref{eq:kwmeans} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ and $d$ being the Euclidean metric (bottom left).
Top right: Transport ranks versus logarithms of the density function of $N(\bm\mu,\bm\Sigma)$ evaluated at each observation; the Pearson correlation is 0.959.
Bottom right: Wasserstein barycenters of the depth profiles within each cluster \eqref{eq:wclMeanDepthPrfl}.
} \label{fig:2dGauss}
\end{figure}
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/50dGauss_dpMds_HodgeRank.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/50dGauss_HodgeRank_vs_logDensity.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/50dGauss_dpMds_kmeans8rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/50dGauss_withinClustMeanOfDepthFctn_kmeans8rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\caption{Two-dimensional MDS with respect to the Wasserstein metric $d_W$ in \eqref{eq:dwass} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ of a sample of $n=500$ observations generated from a $50$-dimensional Gaussian distribution $N(\bm\mu,\bm\Sigma)$ with $\bm\mu=\bm{0}$ and $\bm\Sigma = {\rm diag}(50,49,\dots,1)$, where the points are colored according to their transport ranks \eqref{eq:hrank} (top left) and $k$-W-means clustering \eqref{eq:kwmeans} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ (bottom left).
Top right: Transport ranks versus logarithms of the density function of $N(\bm\mu,\bm\Sigma)$ evaluated at each observation, with Pearson correlation 0.874.
Bottom right: Wasserstein barycenters of the depth profiles within each cluster \eqref{eq:wclMeanDepthPrfl}.
}
\label{fig:50dGauss}
\end{figure}
\subsection{Simulations for Distributional Data}
We consider a sample of $n=500$ one-dimensional Gaussian distributions $\{N(\mu_{i},\sigma_{i}^2)\}_{i=1}^{n}$ that are generated from a mixture of two distributions of distributions, i.e. there exist two groups of distributions.
Specifically, we first generate $Z_{i}\sim\text{Bernoulli}(p)$ and then sample $\mu_{i}\sim N(-2,0.5^2)$ if $Z_{i}=1$ and $\mu_{i}\sim N(2,0.5^2)$ if $Z_{i}=0$; $\sigma_{i}\sim\mathrm{Gamma}(2,4)$.
Here, $\mathrm{Gamma}(\alpha,\beta)$ denotes a gamma distribution with shape $\alpha$ and rate $\beta$. In addition, we consider balanced and unbalanced designs with $p=0.5$ and $0.2$, respectively.
When the design is balanced, distributions sampled from the two groups have similar depth profiles, and
distributions lying closer to the empirical barycenter of all $500$ distributions have larger transport ranks. As expected, the $k$-W-means clustering of the depth profiles therefore cannot identify distributions sampled from the two different groups in this case.
In contrast, when $p=0.2$ and the design is unbalanced, the deepest observations in the sample lie in the larger subsample and the distributions in the smaller subsample have smaller transport ranks as compared to the other group.
In addition, the $k$-W-means clustering almost perfectly distinguishes the two groups in the unbalanced case (Figure~\ref{fig:distnSimu}).
\def\insertFigwoBrckts{
\includegraphics[width=0.4\linewidth]{one_sample_plots/1dGausDistn_p=0.5_diffMeanSetup2_raw_HodgeRank.pdf}
\includegraphics[width = 0.4\linewidth]{one_sample_plots/1dGausDistn_p=0.2_diffMeanSetup2_raw_HodgeRank.pdf}\\
\includegraphics[width=0.4\linewidth]{one_sample_plots/1dGausDistn_p=0.5_diffMeanSetup2_raw_kmeans22rankOfMeanHodgeRankWithinClust.pdf}
\includegraphics[width = 0.4\linewidth]{one_sample_plots/1dGausDistn_p=0.2_diffMeanSetup2_raw_kmeans2rankOfMeanHodgeRankWithinClust.pdf}\\
\includegraphics[width = 0.4\linewidth]{one_sample_plots/1dGausDistn_p=0.5_diffMeanSetup2_withinClustMeanOfDepthFctn_kmeans22rankOfMeanHodgeRankWithinClust.pdf}
\includegraphics[width = 0.4\linewidth]{one_sample_plots/1dGausDistn_p=0.2_diffMeanSetup2_withinClustMeanOfDepthFctn_kmeans2rankOfMeanHodgeRankWithinClust.pdf}
}
\def\insertFigwBrckts{
\includegraphics[width=0.4\linewidth]{one_sample_plots/{1dGausDistn_p=0.5_diffMeanSetup2_raw_HodgeRank}.pdf}
\includegraphics[width = 0.4\linewidth]{one_sample_plots/{1dGausDistn_p=0.2_diffMeanSetup2_raw_HodgeRank}.pdf}\\
\includegraphics[width=0.4\linewidth]{one_sample_plots/{1dGausDistn_p=0.5_diffMeanSetup2_raw_kmeans22rankOfMeanHodgeRankWithinClust}.pdf}
\includegraphics[width = 0.4\linewidth]{one_sample_plots/{1dGausDistn_p=0.2_diffMeanSetup2_raw_kmeans2rankOfMeanHodgeRankWithinClust}.pdf}\\
\includegraphics[width = 0.4\linewidth]{one_sample_plots/{1dGausDistn_p=0.5_diffMeanSetup2_withinClustMeanOfDepthFctn_kmeans22rankOfMeanHodgeRankWithinClust}.pdf}
\includegraphics[width = 0.4\linewidth]{one_sample_plots/{1dGausDistn_p=0.2_diffMeanSetup2_withinClustMeanOfDepthFctn_kmeans2rankOfMeanHodgeRankWithinClust}.pdf}
}
\begin{figure}[htbp]
\centering
\if10{
\insertFigwoBrckts
}\fi
\if00{
\insertFigwBrckts
}\fi
\vspace{-.35cm}
\caption{
Analysis of samples of $n=500$ one-dimensional Gaussian distributions $\{N(\mu_{i},\sigma_{i}^2)\}_{i=1}^{n}$ with $p=0.5$ (left) and $0.2$ (right), respectively.
Top two rows: Scatterplots of mean $\mu_{i}$ and standard deviation (SD) $\sigma_{i}$, where the points are colored according to the transport ranks \eqref{eq:hrank} (top) and the $k$-W-means clustering \eqref{eq:kwmeans} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ (middle).
Bottom: Wasserstein barycenters of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ for the distributions within each cluster.
}
\label{fig:distnSimu}
\end{figure}
\section{Data Applications} \label{sec:app}
\subsection{Human Mortality Data}
Understanding human longevity has been of long-standing interest. A quantification is given by age-at-death distributions. We consider age-at-death distributions for different countries, which are obtained from the Human Mortality Database (\url{http://www.mortality.org}) for the year 2000 for $n=34$ countries (or areas), separately for males and females. For this sample of distributional data we adopt the Wasserstein metric $d_W$ (\ref{eq:dwass}). To analyze the data geometry of this sample of distributions $\{X_{i}\}_{i=1}^{34}$
we obtained depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ for each country.
Applying $k$-W-means clustering to the depth profiles for age-at-death distributions of females and males, the results are shown together with the transport ranks in Figures~\ref{fig:mort2000female_dpMds} and \ref{fig:mort2000male_dpMds}, respectively.
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/mort2000female_dpMds_HodgeRank.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/mort2000female_dpMds_kmeans12rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/mort2000female_withinClustMeanOfDepthFctn_kmeans12rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/mort2000female_mdsCirclePlot.pdf}
\end{subfigure}
\caption{Top: Two-dimensional MDS with respect to the Wasserstein metric $d_W$ in \eqref{eq:dwass} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ of the age-at-death distributions of females in 2000 for the 34 countries, where the points are colored according to the $k$-W-means clustering \eqref{eq:kwmeans} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ (left) and their transport ranks \eqref{eq:hrank} (right).
Bottom left: Wasserstein barycenters of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ for the age-at-death distributions of females in 2000 within each cluster. Bottom right: Nested circle plot for the age-at-death distributions of females in 2000.} \label{fig:mort2000female_dpMds}
\end{figure}
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/mort2000male_dpMds_HodgeRank.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/mort2000male_dpMds_kmeans12rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/mort2000male_withinClustMeanOfDepthFctn_kmeans12rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/mort2000male_mdsCirclePlot.pdf}
\end{subfigure}
\caption{Top: Two-dimensional MDS with respect to the Wasserstein metric $d_W$ in \eqref{eq:dwass} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ of the age-at-death distributions of males in 2000 for the 34 countries, where the points are colored according to the $k$-W-means clustering \eqref{eq:kwmeans} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ (left) and their transport ranks \eqref{eq:hrank} (right).
Bottom left: Wasserstein barycenters of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ for the age-at-death distributions of males in 2000 within each cluster. Bottom right: Nested circle plot for the age-at-death distributions of males in 2000.} \label{fig:mort2000male_dpMds}
\end{figure}
We visualize the results with \emph{nested circle plots}, where points representing observations $\{X_{i}\}_{i=1}^{n}$ are placed on nested circles such that each cluster takes up one circle.
In addition, the locations of each observation on a circle are determined by two-dimensional classical/metric multidimensional scaling (MDS) \citep{mard:78} of observations, i.e., MDS with respect to the Wasserstein metric $d_W$ \eqref{eq:dwass} of the age-at-death distributions in this case. We will refer to this application of MDS as \emph{object MDS}.
Specifically, we implement MDS using the function \texttt{cmdscale()} in the built-in R package \texttt{stats} \citep{R}.
In terms of polar coordinates, angles of points on a nested circle plot are equal to the angles of the corresponding observations on the MDS plot, and radii are determined by the clusters to which the observations belong.
Hence, nested circle plots reflect both the similarity of individual observations in terms of the distance of the metric space where the observations take values and also the center-outward ordering as provided by their transport ranks.
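The construction just described, with the angle taken from the two-dimensional MDS embedding and the radius determined by the cluster ordering, is straightforward to code. A hedged sketch (names are ours):

```python
import numpy as np

def nested_circle_coords(mds_xy, labels, cluster_radius):
    """Cartesian coordinates for a nested circle plot: each point keeps
    its polar angle from the 2-d MDS embedding mds_xy, and its radius
    is the radius of the circle assigned to its cluster,
    cluster_radius[label]."""
    theta = np.arctan2(mds_xy[:, 1], mds_xy[:, 0])
    r = np.array([cluster_radius[l] for l in labels], float)
    return r * np.cos(theta), r * np.sin(theta)
```

Assigning smaller radii to clusters with larger average transport rank places central clusters on inner circles and outlying clusters on outer ones.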
We note that
in the top two panels of Figures~\ref{fig:mort2000female_dpMds}--\ref{fig:mort2000male_dpMds} MDS is applied to the estimated depth profiles $\widehat{F}_{\obj_{\subidx}}$ with metric $d_W$, referred to as \emph{depth MDS}, in contrast to object MDS, which is applied to the observations $X_{i}$ themselves, with metric $d$ in $\Omega$; in this particular example $d=d_W$.
For both females and males, two groups of countries stand out in terms of depth profiles as compared to the others:
One group includes Japan, Switzerland and Sweden; and the other one Eastern European countries, such as Russia, Ukraine, Belarus, Latvia and Estonia.
While these two groups are both outlying,
the former group is characterized by enhanced longevity and the latter by reduced longevity. Luxembourg and Belgium belong to the most central cluster for both females and males.
Iceland and Spain are among the most outlying countries for one gender only, with highest longevity for males in Iceland and for females in Spain.
France and Israel belong to the most central cluster for males and females, respectively, but are more outlying for females and males, respectively.
Overall, as shown in the bottom left panels of Figures~\ref{fig:mort2000female_dpMds}--\ref{fig:mort2000male_dpMds}, the age-at-death distributions for males for the outlying countries are much farther away from the others than those of females.
In particular, the empirical Fr\'{e}chet\xspace variance of the depth profiles, $n^{-1}\sum_{i=1}^nd_W^2(\widehat{F}_{\obj_{\subidx}},\widehat{F}_{\oplus})$, of age-at-death distributions for females and males of different countries is 2.08 and 8.22, respectively, where $\widehat{F}_{\oplus} = \argmin_{\omega\in\mathcal{W}}\sum_{i=1}^nd_W^2(\widehat{F}_{\obj_{\subidx}},\omega)$ is the empirical Fr\'{e}chet\xspace mean of the depth profiles.
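Since depth profiles are distributions on the real line, under the 2-Wasserstein metric the empirical Fr\'{e}chet\xspace mean is the pointwise average of the quantile functions, which gives a simple way to sketch the computation of $\widehat{F}_{\oplus}$ and the Fr\'{e}chet\xspace variance. The following is an illustrative sketch on a discretized quantile grid, not the authors' code.

```python
import numpy as np

def frechet_mean_and_variance(quantile_fns):
    # Each row of quantile_fns is one depth profile's quantile
    # function, evaluated on a common grid in (0, 1).
    Q = np.asarray(quantile_fns, dtype=float)   # shape (n, grid)
    mean_q = Q.mean(axis=0)                     # barycenter quantile fn
    # d_W^2 to the mean, approximated by the average squared
    # quantile difference over the grid
    d2 = ((Q - mean_q) ** 2).mean(axis=1)
    return mean_q, d2.mean()                    # Frechet mean, variance

mean_q, var = frechet_mean_and_variance([[0.0, 1.0, 2.0],
                                         [2.0, 3.0, 4.0]])
```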
\subsection{U.S. Electricity Generation Data}
Compositional data comprise another type of data that do not lie in a vector space. Such data are commonly encountered and consist of vectors of nonnegative elements that sum up to 1. Examples include geochemical compositions and microbiome data. Various approaches to handle the nonlinearity that is inherent in such data have been developed \citep{aitc:86,scea:14}. We consider here the U.S. electricity generation data which are publicly available on the website of the U.S. Energy Information Administration (\url{http://www.eia.gov/electricity}).
The data consist of net generation of electricity from different sources for each state. Here, we consider the data for the year 2000. In preprocessing, we excluded the ``pumped storage'' category due to errors in these data and then
merged the energy sources into three categories: Natural Gas, consisting of ``natural gas'' alone; Other Fossil, consisting of ``coal'', ``petroleum'' and ``other gases''; Renewables and Nuclear, combining the remaining sources ``hydroelectric conventional'', ``solar thermal and photovoltaic'', ``geothermal'', ``wind'', ``wood and wood derived fuels'', ``other biomass'', ``nuclear'' and ``other''.
Hence, we have a sample of $n=50$ observations $\{X_{i}\}_{i=1}^{n}$, each of which takes values in a 2-simplex $\Delta^2 =\{ \bm{x}\in\mathbb{R}^3: \bm{x}^\top\mathbf{1}_3 = 1 \}$, where $\mathbf{1}_3 = (1,1,1)^\top$.
Since the component-wise square root $\sqrt{\bm{x}} = (\sqrt{x_1},\sqrt{x_2},\sqrt{x_3})^\top$ of an element $\bm{x}\in\Delta^2$ lies in the sphere $\mathcal{S}^2$,
we adopt the geodesic metric on this sphere
\begin{equation}\aligned\label{eq:dsphe}
d_S(\bm{x},\bm{y}) = \arccos(\sqrt{\bm{x}}^\top \sqrt{\bm{y}}),\text{ for } \bm{x},\bm{y}\in\Delta^2. \endaligned\end{equation}
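The metric $d_S$ in \eqref{eq:dsphe} is straightforward to compute. The following illustrative sketch evaluates it for two compositions; the clamping of the inner product is our own addition, guarding against floating-point rounding outside $[-1,1]$.

```python
import math

def d_sphere(x, y):
    # geodesic distance between compositions on the 2-simplex,
    # via the component-wise square-root map to the sphere
    dot = sum(math.sqrt(a) * math.sqrt(b) for a, b in zip(x, y))
    return math.acos(min(1.0, max(-1.0, dot)))  # clamp rounding error

# identical compositions are at distance 0;
# compositions on different vertices are at distance pi/2
d0 = d_sphere((0.2, 0.3, 0.5), (0.2, 0.3, 0.5))
d1 = d_sphere((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```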
We then compared the estimated transport ranks \eqref{eq:hrank} for each state with the angular Tukey depths \citep[ATDs,][]{liu:92:2} of $\{\sqrt{X_{i}}\}_{i=1}^{n}$.
Overall, the proposed transport ranks and ATDs yield similar center-outward ordering of the 50 states for these data (Figure~\ref{fig:energy2000_ternary_rankAndATD}).
Maryland emerges as the transport median and is also at the median for ATDs.
On closer inspection one finds some interesting discrepancies between transport ranks and the ATD measure of outlyingness, especially for the states that are relatively close or far away from the center Maryland in terms of their outlyingness.
The states near Maryland, as shown in orange and light violet in the bottom panels of Figure~\ref{fig:energy2000_ternary_rankAndATD}, all have high transport ranks, while their ATDs vary widely.
In particular, Montana, with an electricity generation pattern very similar to that of Maryland, has the lowest ATD level while it has a high transport rank.
A subset of states that are colored in turquoise and light violet in the bottom panels of Figure~\ref{fig:energy2000_ternary_rankAndATD} have the lowest ATDs among all states but have a much wider range of transport ranks.
For example, Hawaii and Delaware, which are not far from Maryland in terms of energy generation, have high transport ranks but low ATD levels. The overall conclusion is that transport ranks are much better suited than ATDs for studying the geometry of this data set and for quantifying outlyingness.
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/energy2000_ternary_HodgeRank.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/energy2000_ternary_ATD.pdf}
\end{subfigure}\\\vspace{1em}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/energy2000_HodgeRank_vs_ATD.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/energy2000_ternary_highlight.pdf}
\end{subfigure}
\caption{Ternary plot of compositions of electricity generation in the year 2000 for the 50 states in the U.S. Color indicates transport ranks \eqref{eq:hrank} (top left) and ATD levels (top right). The Pearson correlation between the transport ranks and the ATD levels is 0.723, and the straight line in the bottom left panel shows the least squares fit (just to provide a perspective). In the bottom two panels, a subset of states with similarly high transport ranks but different ATDs is highlighted in orange, and another subset with similarly small ATDs but different transport ranks is highlighted in turquoise; the intersection of the two subsets is colored in light violet.}
\label{fig:energy2000_ternary_rankAndATD}
\end{figure}
Figure~\ref{fig:energy2000_ternary_dpMdsClust} shows the $k$-W-means clustering results for the depth profiles of the electricity generation compositions; the 50 states are divided into three clusters.
Most states mainly use the Other Fossil source in electricity generation rather than Natural Gas; the latter is considered to be the cleanest fossil fuel source \citep{fara:16}.
The states with highest centrality in terms of the proposed transport ranks utilize little Natural Gas and similar amounts of Other Fossil and Renewables and Nuclear.
In contrast, California, Idaho, Rhode Island, and Vermont are grouped into the most outlying transport rank cluster, as they utilize the smallest fraction of the (undesirable) Other Fossil source.
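Because depth profiles are one-dimensional distributions, representing each profile by its quantile function on a common grid turns the 2-Wasserstein distance into a Euclidean distance, so $k$-W-means can be sketched with a standard Lloyd iteration. This is an illustrative sketch under this discretization, not the clustering code used for the figures.

```python
import numpy as np

def k_w_means(quantile_fns, k, n_iter=50, seed=0):
    # rows of quantile_fns: quantile functions on a common grid
    Q = np.asarray(quantile_fns, dtype=float)
    rng = np.random.default_rng(seed)
    centers = Q[rng.choice(len(Q), size=k, replace=False)]
    for _ in range(n_iter):
        # squared W2 distance = mean squared quantile difference
        d = ((Q[:, None, :] - centers[None, :, :]) ** 2).mean(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                # the W2 barycenter of 1-D distributions is the
                # pointwise mean of their quantile functions
                centers[j] = Q[labels == j].mean(axis=0)
    return labels, centers

labels, centers = k_w_means(
    [[0.0, 1.0, 2.0], [0.1, 1.1, 2.1],
     [5.0, 6.0, 7.0], [5.2, 6.1, 7.0]], k=2)
```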
\begin{figure}[hbt!]
\centering
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/energy2000_ternary_kmeans3rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/energy2000_withinClustMeanOfDepthFctn_kmeans3rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\caption{Left: Ternary plot of compositions of electricity generation in 2000 for the 50 U.S. states; points are colored according to $k$-W-means clustering \eqref{eq:kwmeans} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ and $d=d_S$ in \eqref{eq:dsphe}.
Right: Wasserstein barycenters of the depth profiles within each cluster \eqref{eq:wclMeanDepthPrfl}.
}\label{fig:energy2000_ternary_dpMdsClust}
\end{figure}
\subsection{Manhattan Yellow Taxi Data}
Yellow taxi trip records in New York City (NYC) including pick-up and drop-off dates/times, pick-up and drop-off locations, and driver-reported passenger counts are available at
\url{http://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page}.
We focus on the data pertaining to Manhattan and, excluding Governor's Island, Ellis Island, and Liberty Island, divide the remaining 66 zones of Manhattan into 13 regions (Table~\ref{tab:taxiRgns} in the Supplement).
Of interest are networks that represent how many people traveled between these areas during a day.
To this end, we construct networks for yellow taxi trips between the 13 regions for each day in the year 2019, obtaining a 13-dimensional graph adjacency matrix for each day, where each entry holds the edge weight given by the total number of passengers traveling between the two corresponding regions within the given day. The edge weights are then normalized by the maximum edge weight for each day so that they lie in $[0,1]$. We choose the Frobenius metric $d_F$ as metric between the graph adjacency matrices,
\begin{equation}\aligned\label{eq:dfrob}
d_F(\mathbf{R}_1,\mathbf{R}_2) = \left\{{\rm trace}\left[(\mathbf{R}_1-\mathbf{R}_2)(\mathbf{R}_1-\mathbf{R}_2)^\top\right]\right\}^{1/2}, \text{ for } \mathbf{R}_1, \mathbf{R}_2 \in\mathbb{R}^{13\times 13}. \endaligned\end{equation}
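The Frobenius metric \eqref{eq:dfrob} is simply the entry-wise Euclidean distance between the adjacency matrices; a minimal sketch (shown on $2\times 2$ matrices for brevity, though the networks here are $13\times 13$) is:

```python
import math

def d_frobenius(R1, R2):
    # Frobenius distance between two adjacency matrices,
    # i.e. the entry-wise Euclidean distance
    return math.sqrt(sum((a - b) ** 2
                         for row1, row2 in zip(R1, R2)
                         for a, b in zip(row1, row2)))

d = d_frobenius([[0.0, 1.0], [1.0, 0.0]],
                [[0.0, 0.5], [0.5, 0.0]])
```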
Weekdays are found to have lower transport ranks and are more central, while weekends have higher transport ranks and are more outlying (Figure~\ref{fig:taxi_dpMds}).
The $k$-W-means clustering of the depth profiles yields two clusters, which almost entirely correspond to weekdays and weekends, respectively.
Only twelve days are included in ``opposite'' clusters. Among these, Independence Day, July 5, Veterans Day, and New Year's Eve are weekdays but also holidays or a Friday after a holiday.
Among them are also September 23--26: every weekday between September 23 and September 30 was designated a ``gridlock alert day'' by the NYC Department of Transportation, due to the UN General Assembly meetings held from September 24 through 30, i.e., these were likely the days featuring the heaviest traffic of the year.
May 12 was Japan Day, an annual event hosted by the Japanese community of New York that includes a wide range of activities throughout the day, and December 21 and 22 likely had traffic congestion due to the impending holidays. It is thus not surprising that these three days were assigned to the weekday cluster.
\begin{figure}[hbt!]
\centering
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/nyctaxi2019_grpRgns_normEdgeWts_dpMds_HodgeRank_isWeekend.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/nyctaxi2019_grpRgns_normEdgeWts_dpMds_kmeans2rankOfMeanHodgeRankWithinClust_isWeekend.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{one_sample_plots/nyctaxi2019_grpRgns_normEdgeWts_withinClustMeanOfDepthFctn_kmeans2rankOfMeanHodgeRankWithinClust.pdf}
\end{subfigure}
\caption{Two-dimensional MDS of the depth profiles of the daily Manhattan Yellow Taxi transport networks in 2019 with normalized edge weights, where the points are colored according to their transport ranks \eqref{eq:hrank} (top left) and $k$-W-means clustering \eqref{eq:kwmeans} of the depth profiles $\widehat{F}_{\obj_{\subidx}}$ \eqref{eq:FhatO} with $\omega=X_{i}$ and $d=d_F$ in \eqref{eq:dfrob} (top right). Bottom: Wasserstein barycenters of the depth profiles within each cluster.} \label{fig:taxi_dpMds}
\end{figure}
\section{Discussion}
In this paper we introduce a new toolbox for the analysis of metric space valued data or random objects. Key tools are the depth profiles, transport ranks and transport median sets. Depth profiles are canonical and straightforward and their combination with optimal transport leads to transport ranks in a natural way. Transport ranks can
then be harnessed to arrive at a notion of data depth. Depth profiles along with $k$-W-means clustering also lend themselves for the construction of level sets that correspond to these clusters and ultimately a notion of quantiles for random object data.
Apart from opening a new arena for future explorations of general metric-space valued data, the proposed toolbox may also be of interest for revisiting data analysis in classical Euclidean spaces; for example it leads to a notion of depth that has not been hitherto explored in these more traditional spaces.
While the notions we introduce are population based, sample based estimators can be easily and efficiently constructed. Theoretical analysis shows that they converge to their population targets.
This new approach is supported by theory for a wide class of metric spaces in conjunction with a probability measure defined on the space, as long as an entropy condition is satisfied and the spaces are totally bounded and separable. While this does place a restriction on the spaces where the proposed tools are supported by theory, we show that many complex spaces of practical interest such as distributional data, networks and tree spaces are covered under mild regularity conditions. From a practical perspective, as long as one can compute pairwise distances, the proposed tools can be applied for all kinds of exploratory data analysis tasks for metric space valued data, for example the identification of data medians or outliers. The data examples clearly demonstrate the utility of the proposed tools for distributional, compositional and network data.
Various extensions will be of interest for future research. For example, to understand the interplay between depth profiles\xspace and distances between objects in $\Omega$ one may work with generalizations of the transport rank $\rank_{\omega}$, the \emph{generalized transport rank} $R^{\psi}_\omega$, as
\begin{equation*}
R^{\psi}_{\omega} = \mathbb{E}\left\{\psi(d(\omega,X))\mathrm{sign}\left(\int_0^1[F_{\obj}^{-1}(u)-F_\omega^{-1}(u)]\mathrm{d} u\right) \int_0^1\left|F_{\obj}^{-1}(u)-F_\omega^{-1}(u)\right|\mathrm{d} u\right\},
\end{equation*}
where $\psi(\cdot)$ is a weight function which can be tuned depending on the goal of the data analysis. Through $\psi(d(\omega,X))$ one may emphasize either observations closer to $\omega$ or those farther from $\omega$. One may also consider other distributional metrics $m(F_X,F_\omega)$ to gauge the distance between depth profiles and to derive transport ranks, level sets and quantiles.
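An empirical version of $R^{\psi}_{\omega}$ replaces the expectation with a sample average and discretizes the quantile functions on a common grid in $(0,1)$. The following sketch is illustrative only; all names and the default weight $\psi\equiv 1$ are our own choices.

```python
import numpy as np

def generalized_transport_rank(F_omega_q, F_x_qs, d_omega_x,
                               psi=lambda t: 1.0):
    # F_omega_q: discretized quantile function of F_omega
    # F_x_qs:    discretized quantile functions of the F_{X_i}
    # d_omega_x: distances d(omega, X_i), fed into the weight psi
    F_omega_q = np.asarray(F_omega_q, dtype=float)
    terms = []
    for Fq, dist in zip(F_x_qs, d_omega_x):
        diff = np.asarray(Fq, dtype=float) - F_omega_q
        signed = diff.mean()            # int (F_X^{-1} - F_omega^{-1}) du
        absolute = np.abs(diff).mean()  # int |F_X^{-1} - F_omega^{-1}| du
        terms.append(psi(dist) * np.sign(signed) * absolute)
    return float(np.mean(terms))

# two observations, quantile differences +1 and -0.5, psi identically 1
rank = generalized_transport_rank(
    [0.0, 1.0, 2.0],                      # discretized F_omega^{-1}
    [[1.0, 2.0, 3.0], [-0.5, 0.5, 1.5]],  # discretized F_{X_i}^{-1}
    d_omega_x=[1.0, 1.0])
```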
Other extensions of interest concern modes and medians of random objects, extending implementations and theory for transport median sets.
\bibliography{depth}
\end{document}
\section{Introduction}
Continuous Integration (CI) is a software development practice by which developers integrate code into a shared repository several times a day \cite{fowler2006continuous}.
However, as CI gains adoption in practice, difficulties \textit{e.g.,}\xspace \cite{pinto2017inadequate} and pain points \textit{e.g.,}\xspace \cite{widder2019conceptual} have been discovered about it.
As software companies adopt CI, they execute builds for many of their projects, and they do so very frequently.
As workload increases, two main problems appear:
(1) the time to receive feedback from the build process increases, since software builds often outnumber the available computational resources and must wait in build queues,
and
(2) the computational cost of running builds also becomes very high.
Previous studies \textit{e.g.,}\xspace \cite{memon2017taming} have highlighted the long time that developers have to wait to receive feedback about their builds.
For example, at Google, developers must wait 45 minutes to 9 hours to receive testing results \cite{liang2018redefining}.
Even just the dependency-retrieval step of CI can take up to an hour per build \cite{celik2016build}.
The high cost of running builds is also highlighted in other studies \cite{herzig2015art, hilton2017trade, hilton2016usage, pinto2017inadequate, widder2019conceptual}.
The cost of CI reaches millions of dollars, \textit{e.g.,}\xspace at Google \cite{hilton2016usage} and Microsoft \cite{herzig2015art}.
While other problems exist for CI, we focus on these two because they are the ones that most existing techniques have focused on addressing.
They are also interrelated, since cost-reduction techniques may also reduce time-to-feedback --- \textit{e.g.,}\xspace skipping some tests may cause other tests to fail earlier.
Multiple techniques have been proposed to improve CI.
Most of them have the goal of reducing either its \textbf{time-to-feedback} or its \textbf{computational cost}.
All such techniques consider the observation of build failures to be more valuable than build passes, because failures provide actionable feedback, \textit{i.e.,}\xspace they point to a problem that needs to be addressed.
\textbf{Time-to-feedback-reduction} techniques aim to observe \textbf{failures earlier} --- by \textbf{prioritizing} failing executions over passing ones.
These techniques may operate in two different levels of granularity, by prioritizing: test executions \textit{e.g.,}\xspace \cite{elbaum2014techniques}, or build executions \textit{e.g.,}\xspace \cite{liang2018redefining}.
\textbf{Computational-cost-reduction} techniques aim to observe \textbf{failures only} --- by \textbf{selectively executing} failing builds only, saving the cost of executing passing ones.
They also may operate at two different levels of granularity, selecting: test executions \textit{e.g.,}\xspace \cite{Machalica2019predictive}, or build executions \textit{e.g.,}\xspace \cite{abdalkareem2019commits}.
To the extent of our knowledge, the existing techniques to improve CI have been evaluated under different settings, making it hard to compare them.
Previous studies used different software projects, different metrics, and rarely compared one technique to another.
However, we expect that different choices of goal, granularity, and technique design will bring different trade-offs.
For example, a build-granularity cost-reduction technique may be more \emph{risky} than a test-granularity one, \textit{i.e.,}\xspace it may save more cost when it skips all the tests in a build, but it may also make more mistakes if it skips many failing tests in a build.
However, the opposite may be true, if test-granularity cost-reduction techniques also skip a large ratio of full builds (\textit{i.e.,}\xspace all the tests in the build).
As another example, test-selection techniques may be a good alternative to test-prioritization techniques that also saves cost as an added benefit, or they may instead delay the observation of test failures if they mispredict too many of them.
To the best of our knowledge, how these trade-offs manifest in practice is still mostly unknown.
Empirically understanding these trade-offs will have valuable practical implications for the design of future techniques and for practitioners adopting them.
In this paper, we perform the first evaluation of the existing strategies to improve CI.
We aim to understand the trade-offs between these techniques
for three dimensions:
(D1) computational-cost reduction,
(D2) missed failure observation, and
(D3) early feedback.
For this goal, we performed a large-scale evaluation.
We replicated and evaluated all the existing 10 CI-improving techniques from the research literature,
representing the two goals (time-to-feedback and computational-cost reduction) and the two levels of granularity (build-level and test-level) for which such techniques have been proposed.
We evaluated these techniques under the same settings, using the state-of-the-art dataset of continuous-integration data: TravisTorrent \cite{msr17challenge}.
To be able to study all techniques, we extended TravisTorrent in multiple ways, mining additional Travis logs, Github commits, and building dependency graphs for all our studied projects.
Finally, we measured the effectiveness of all techniques with 10 metrics in 3 dimensions.
We included every metric that any previous evaluation of our studied techniques used (7), refitted 2 others and designed an additional one.
We analyzed the results obtained by all techniques on all metrics across all 3 dimensions, and we synthesized our observations, to understand which design decisions helped and which ones did not for each dimension.
Finally, we further reflect on our results to provide a wide set of recommendations for the design of future techniques in this research area.
The main contributions of this paper are:
(1) the first comprehensive evaluation of CI-improving techniques;
(2) a collection of metrics to measure the performance of CI-improving techniques over various dimensions;
(3) an extended Travis Torrent dataset with: detailed test and commit, and dependencies information;
(4) the replication of 14 variants of 10 CI-improving techniques;
(5) evidence for researchers to design future CI-improving techniques.
\section{Approaches to Improve Continuous Integration}
\label{sec:taxonomy}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{pdfs/introduction.pdf}
\caption{
Example timeline.
Failing tests in gray.
Build-selection runs builds fully when it predicts a failing build.
Test-selection runs builds partially (for tests that would fail).
Build-prioritization changes the build sequence.
Test-prioritization changes the test sequence within a build.
}
\label{fig:builds}
\end{figure*}
We summarize technique families in \cref{tab:techniques} and discuss each technique in detail in \cref{sec: Techniques}.
Figure~\ref{fig:builds} depicts a non-interventional example timeline of builds, along with the timelines produced when a build-selection, build-prioritization, test-selection, or test-prioritization technique is applied.
The example timeline shows a chronological numbered sequence of builds in CI.
Each build is made up of at least one test.
We depict each test suite as a rectangle with a test number (e.g., t1).
Failing tests are then highlighted in gray.
The length of each rectangle represents the duration of the test's execution.
We depict skipped tests with a dashed rectangle.
In the most ideal cost-saving scenario, all of the passing tests would be skipped and all of the failing tests would be observed as soon as possible.
\subsection{Computational-cost Reduction}
\subsubsection{Test-level granularity}
Test-selection techniques \cite{gligoric2015practical,herzig2015art, Machalica2019predictive, memon2017taming, shi2017optimizing, zhang2018hybrid, zhu2019framework} aim to automatically detect and label tests that are not going to fail.
These test-level approaches collect information from the test history and project dependencies, along with the current commit, and use heuristic models to predict failing tests and skip the others.
Figure~\ref{fig:builds} also illustrates how this type of technique works in the simulation timeline.
After a test-selection approach is activated, it selects a subset of tests (e.g., t2 in build \#2, t4 in build \#4) that it predicts to have a possibility to fail and decides to skip the others (e.g., t3 in build \#1, t1 in build \#5).
For the tests that are not selected in the timeline and thus get skipped, we use dashed rectangles.
In this paper, we consider that a test-selection technique skips a build entirely when it selects no tests in that build.
\subsubsection{Build-level granularity}
Build-selection techniques \cite{abdalkareem_tse2020, abdalkareem2019commits, hassan2017change, jin2020_icse, ni2017cost} aim to automatically detect and label commits and builds that can be skipped in CI.
Some approaches \cite{hassan2017change, jin2020_icse, ni2017cost} try to detect failing builds and skip the passing ones to achieve cost savings.
Others \cite{abdalkareem_tse2020, abdalkareem2019commits} aim at identifying commits that can be CI skipped.
\cref{fig:builds} illustrates how they work in the simulation timeline.
As build-level techniques, when a build-selection approach decides to skip a build (e.g., builds \#2, \#4, \#6), it normally skips all of the tests in that build.
The inner test sequence is not changed, and all tests are run within each executed build.
\subsection{Time-to-feedback Reduction}
\subsubsection{Test-level granularity}
Test-prioritization techniques \cite{elbaum2014techniques, luo2018assessing, marijan2013test, mostafa2017perfranker, thomas2014static} give high priority to tests that are predicted to fail, so that developers can be informed in a shorter time.
These approaches normally rearrange the execution order of tests within a build, running predicted-to-fail tests earlier, by analyzing information such as test failure history and test context.
Figure~\ref{fig:builds} depicts an example of how this type of technique works in the simulation timeline.
When a test-prioritization approach is activated, the CI system assigns different priorities to different tests, executing high-priority tests first (e.g., t4 in build \#2, t2 in build \#3) and delaying low-priority tests (e.g., t1 in build \#3, t2 in build \#6).
The sequence of test executions in this timeline is rearranged, and the start time of tests that are more likely to fail moves earlier.
Note that all tests are still eventually executed.
\subsubsection{Build-level granularity}
Build-prioritization techniques \cite{liang2018redefining} aim to automatically prioritize builds that are waiting to be executed.
They favor builds with a larger percentage of test suites that have been found to fail recently and, as a secondary criterion, builds including test suites that have not been executed recently.
Figure~\ref{fig:builds} also shows how this family of techniques works in the simulation timeline.
Build-prioritization techniques are only activated when there is a collision of builds (i.e., multiple builds are waiting to occupy the limited resources).
Since these techniques operate at the build level, they do not change the inner order of test executions within a build; instead, they change the sequence in which builds, and hence their tests, are executed (e.g., builds \#4, \#5).
No tests are dashed in this timeline because all of them eventually execute.
\section{Research Method}
In this paper, we replicated and evaluated 14 variants of 10 CI-improving techniques, covering their two goals (time-to-feedback and computational-cost reduction) and their two levels of granularity (build-level and test-level), together with 1 perfect (oracle) technique representing the ideal timeline.
We evaluate them over 100 software projects in TravisTorrent, which we extended to be able to run all such kinds of techniques.
Our goal is to understand the trade-offs between existing CI-improving techniques, and between the metrics that have been used to evaluate them.
We perform 2 empirical studies to analyze these trade-offs for the following 3 dimensions of CI-improving techniques, using 10 metrics.
We only include selection techniques in Empirical Study 1, since prioritization techniques by nature cannot save computational cost.
We include both selection and prioritization techniques in Empirical Study 2, because both can have an impact on fault detection; \textit{e.g.,}\xspace failing builds wrongly skipped by selection approaches delay fault detection.
\begin{smalldescriptionrq}
\item[Empirical Study 1: Cost Saving]
\item[\ \ D1:] \textbf{Computational-cost Reduction}
\item[\ \ D2:] \textbf{Missed Failure Observation}
\item[Empirical Study 2: Time-to-feedback Reduction]
\item[\ \ D3:] \textbf{Early Feedback}
\end{smalldescriptionrq}
For each dimension, we study:
\begin{smalldescriptionrq}
\item[RQ1:] \textbf{What design decisions helped this dimension?}
\item[RQ2:] \textbf{What design decisions did not help this dimension?}
\end{smalldescriptionrq}
\subsection{Data Set}
We perform our study over the Travis Torrent dataset \cite{beller2017oops}, which includes 1,359 projects (402 Java projects and 898 Ruby projects) with data for 2,640,825 build instances.
We remove ``toy projects'' from the data set by studying only projects that are more than one year old, have at least 200 builds, and have at least 1000 lines of source code, a criterion applied in multiple other works \cite{ni2017cost,islam2017insights}.
To be able to evaluate test-granularity techniques, we also filter out those projects whose build logs do not contain any test information.
We focused our study on builds with a passing or failing result, rather than errored or canceled ones, since the latter can result from exceptions or may be aborted during initialization, before the real build starts.
Besides, in Travis a single push or pull-request can trigger a build with multiple jobs, and each job corresponds to a configuration of the building step.
We did a preliminary investigation of these builds and found that these jobs with the same build identifier normally share the same build result and build duration.
Thus, as many existing papers have done \cite{gallaba2018noise,rebouccas2017does,jain2019brief}, we considered these jobs as a single build.
After this filtering process, we obtained 82,427 builds from 100 projects (13,464 failing builds).
To be able to execute all our studied techniques, we extended the information in TravisTorrent of these 100 projects in multiple ways.
First of all, we needed to know the duration of each individual test for the comparison and replication.
Also, to replicate some techniques, \textit{e.g.,}\xspace \cite{herzig2015art,elbaum2014techniques}, we needed to capture the historical failure ratio for each individual test.
To obtain this information, we built scripts to download the raw build logs from Travis and parse them to extract all of the information about test executions, such as test name, duration and outcome.
Some techniques, \textit{e.g.,}\xspace \cite{Machalica2019predictive, abdalkareem2019commits}, require additional information that TravisTorrent does not provide for builds, such as the content of commit messages, changed source lines and changed file names.
For that, we also mined additional information about commits in the projects' code repositories through Github.
Then, we matched each test with its corresponding test file in the project.
Finally, to be able to run other techniques, \textit{e.g.,}\xspace \cite{gligoric2015practical,Machalica2019predictive}, we built a dependency graph for the source code of each project using a static code analysis tool (Scitool Understand \cite{SciTool}) to determine the paths between the source files and test files.
\subsection{Evaluation Process}
\label{evaluation_processs}
We evaluate the techniques in a real-world scenario, to understand as best as possible the behavior that the techniques would show in practice.
We take two measures for that.
First, we respect the original chronological order of build and test operations when training techniques.
We achieve that by using an 11-fold, chronological variant of cross-validation.
For each project, we split its chronological timeline into 11 folds.
We use the first chronological fold only for testing, and we iteratively test the other 10 folds.
For each testing fold, we train on all the folds that precede it chronologically.
This approach has been used in previous works \textit{e.g.,}\xspace \cite{bettenburg2008duplicate,servant12icse} to avoid training with information that would not be available in practice, \textit{i.e.,}\xspace it happens in the future.
We follow this approach for all the techniques based on machine learning, \textit{e.g.,}\xspace \cite{Machalica2019predictive}.
For techniques that do not require training, \textit{e.g.,}\xspace \cite{abdalkareem2019commits}, we simply execute them over the same last 10 folds.
For techniques that train on data from other projects, \textit{i.e.,}\xspace for cross-project technique variants, we also executed them over the same last-10-fold timeline --- and we divided the projects into 10 \emph{project} folds to do cross-project cross-validation, \textit{i.e.,}\xspace for each project, the technique is trained on 90 other projects and tested on that project's last 10 folds.
Second, we respect the real-world availability of information.
That is, for selection-based techniques, when a build or test is skipped, the technique will not know its outcome.
For techniques that rely on the last build or test outcome \textit{e.g.,}\xspace \cite{hassan2017automatic}, we only inform them of the outcome of the last \emph{executed} build or test.
Additionally, when builds are skipped, we accumulate their code changes into the subsequent build.
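The change-accumulation rule can be sketched as below (an illustrative sketch; the build records and field names are assumptions, not the studied techniques' data model):

```python
def accumulate_changes(builds, skipped):
    """When a build is skipped, fold its changed files into the next
    executed build, so techniques see the full pending change set."""
    pending, executed = set(), []
    for b in builds:
        if b["id"] in skipped:
            pending |= set(b["changes"])
        else:
            executed.append({"id": b["id"],
                             "changes": sorted(pending | set(b["changes"]))})
            pending = set()
    return executed

builds = [
    {"id": 1, "changes": ["a.py"]},
    {"id": 2, "changes": ["b.py"]},
    {"id": 3, "changes": ["c.py"]},
]
result = accumulate_changes(builds, skipped={2})
```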
\begin{table}
\caption{
Studied Techniques.
}
\label{tab:techniques}
\label{tab:overview}
\small
\begin{tabular}{|m{1cm}|p{1.6cm}|p{1.6cm}|p{2.6cm}|}
\hline
\multicolumn{1}{|c|}{\textbf{Goal}} & \multicolumn{1}{c|}{\textbf{Approach}} & \multicolumn{1}{c|}{\textbf{Granularity}} & \multicolumn{1}{c|}{\textbf{Studied Technique}} \\ \hline
\multirow{4}{1cm}{Time to Feed\-back} & \multirow{4}{*}{Prioritization} & \multirow{3}{*}{Test} & PT\_Marijan13 \cite{marijan2013test} \\ \cline{4-4}
& & & PT\_Elbaum14 \cite{elbaum2014techniques} \\ \cline{4-4}
& & & PT\_Thomas14 \cite{thomas2014static} \\ \cline{3-4}
& & Build & PB\_Liang18 \cite{liang2018cost} \\ \hline
\multirow{6}{1cm}{\shortstack[l]{Comput-\\ational \\ \\ Cost}} & \multirow{6}{*}{Selection} & \multirow{3}{*}{Test} & ST\_Gligoric15 \cite{gligoric2015practical} \\ \cline{4-4}
& & & ST\_Herzig15 \cite{herzig2015art} \\ \cline{4-4}
& & & ST\_Mach19 \cite{Machalica2019predictive} \\ \cline{3-4}
& & \multirow{3}{*}{Build} & SB\_Hassan17 \cite{hassan2017change} \\ \cline{4-4}
& & & SB\_Abd19 \cite{abdalkareem2019commits} \\ \cline{4-4}
& & & SB\_Jin20 \cite{jin2020_icse} \\ \hline
\end{tabular}
\vspace{-.2in}
\end{table}
\subsection{Replicated Techniques}
\label{sec: Techniques}
We replicated and studied all the techniques that have been proposed to improve CI by reducing the time to feedback or reducing its cost.
In addition to these, there are other techniques that were proposed before CI and that could also be applied for these two goals: test prioritization techniques, and test selection techniques.
Therefore, we also replicated and studied a state-of-the-art technique in each of these two categories that were not originally proposed for CI.
We summarize all our studied techniques in Table~\ref{tab:overview}.
In total, we studied 10 techniques, across two goals (reducing time to feedback and cost) and two granularities (test and build levels).
Since we also studied multiple variants of some techniques, our evaluation included 14 total technique variants.
To provide a reference point, we also studied a perfect technique:
\textit{Perfect Technique}.
It achieves the goal of each metric perfectly --- it predicts which tests or builds will fail with 100\% accuracy, prioritizing or selecting them perfectly.
We include the detailed description for each technique in \cref{sec:tech1} and \cref{sec:tech2}.
\section{Empirical Study 1: Cost Saving}
\subsection{Studied Techniques}
\label{sec:tech1}
\subsubsection{Test-selection Techniques}
We replicated all the test-selection techniques that were proposed for improving CI: ST\_Mach19 \cite{Machalica2019predictive} and ST\_Herzig15 \cite{herzig2015art}.
To provide even more context for our study, we also evaluate a state-of-the-art test-selection technique: ST\_Gligoric15 \cite{gligoric2015practical} --- since test-selection techniques have also been proposed outside the context of CI, \textit{e.g.,}\xspace \cite{zhang2018hybrid,gligoric2015practical,yoo2012regression,yoo2007pareto,rothermel1997safe,rothermel1996analyzing}.
\noindent\textbf{ST\_Gligoric15 \cite{gligoric2015practical}} skips tests that cannot reach the changed files, by tracking dynamic dependencies of tests on files.
A test can be skipped in the new revision if none of its dependent files changed.
The rationale is that tests that cannot reach changed files cannot detect faults in them.
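The selection rule reduces to an intersection check between each test's dependency set and the changed files; a minimal sketch (the dependency map and test names are hypothetical, and the real technique tracks dependencies dynamically):

```python
def select_tests(test_deps, changed_files):
    """Keep only tests whose dependency set intersects the changed files;
    tests that cannot reach any changed file are safe to skip."""
    changed = set(changed_files)
    return [t for t, deps in test_deps.items() if deps & changed]

deps = {
    "test_parser": {"parser.py", "lexer.py"},
    "test_cli": {"cli.py"},
    "test_docs": {"README.md"},
}
selected = select_tests(deps, ["parser.py"])
```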
\noindent\textbf{ST\_Herzig15 \cite{herzig2015art}} is based on a cost model, which dynamically skips tests when the expected cost of running the test exceeds the expected cost of removing it, considering both the machine cost and human inspection cost \cite{bell2018deflaker, herzig2015empirically}.
This technique tends to skip tests that mostly passed in the past or that have long runtime.
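One way to express this cost comparison is sketched below; the cost terms and the example rates are illustrative, not the exact published model:

```python
def should_skip(exec_cost, inspect_cost, fail_rate, escape_cost):
    """Skip a test when the expected cost of running it (machine time plus
    expected human inspection of a failure) exceeds the expected cost of
    removing it (the risk-weighted cost of a fault escaping detection)."""
    cost_of_running = exec_cost + fail_rate * inspect_cost
    cost_of_skipping = fail_rate * escape_cost
    return cost_of_running > cost_of_skipping

# A slow test that almost never fails tends to get skipped;
# a cheap, frequently failing test tends to be kept.
skip_slow_stable = should_skip(600, 30, 0.001, 1000)
keep_cheap_flaky = should_skip(1, 30, 0.5, 1000)
```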
\noindent\textbf{ST\_Mach19 \cite{Machalica2019predictive}} proposes a Machine Learning algorithm with combined features of commit changes and test historical information.
We studied two variants of it: one trained on past builds of the same project in which it is applied (\emph{ST\_Mach19\_W}), and one trained on builds of software projects other than the one in which it is applied (\emph{ST\_Mach19\_C}).
It uses the following features: file extensions, change history, failure rates, project name, number of tests and minimal distance.
\subsubsection{Build-selection Techniques}
We then replicated all build-selection techniques that have been proposed for improving CI: SB\_Abd19 \cite{abdalkareem2019commits}, and SB\_Jin20 \cite{jin2020_icse}.
To provide even more context for our study, we also replicated a state-of-the-art build-prediction technique: SB\_Hassan17 \cite{hassan2017change}.
\noindent\textbf{SB\_Hassan17 \cite{hassan2017change}} predicts every build's outcome based on the information from last build.
Builds can be skipped when they are predicted to pass.
In our study, information from the previous build is blinded if the build does not get executed.
We study two variants of this technique (\emph{SB\_Hassan17\_W} and \emph{SB\_Hassan17\_C}) as we did for \emph{ST\_Mach19}.
\noindent\textbf{SB\_Abd19 \cite{abdalkareem2019commits}} uses a rule-based approach to skip commits that only have \emph{safe} changes, \textit{e.g.,}\xspace changes on configuration or document files.
This technique is expected to capture most failing builds since it only skips builds considered safe to skip.
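The rule-based check reduces to requiring every changed file to match a safe pattern; a minimal sketch (the suffix list is an illustrative assumption, not the published rule set):

```python
SAFE_SUFFIXES = (".md", ".txt", ".rst", ".gitignore")  # illustrative rules

def can_skip_build(changed_files):
    """A build is skipped only when every changed file is considered
    safe (documentation, formatting, and similar non-executable files)."""
    return all(f.endswith(SAFE_SUFFIXES) for f in changed_files)
```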
\noindent\textbf{SB\_Jin20 \cite{jin2020_icse}} aims at saving CI cost by skipping passing builds.
Their strategy is to capture the first failing build in a subsequence of failing builds and continuously build until a passing build appears.
We replicated this technique under the configuration that provided the optimal effectiveness \cite{jin2020_icse}.
We studied three variants of this technique: \emph{SB\_Jin20\_W} \& \emph{SB\_Jin20\_C} as we did previously, and also a rule-of-thumb variant (\emph{SB\_Jin20\_S}) that skips builds with $<4$ changed files.
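The rule-of-thumb variant can be sketched as follows (a simplified sketch: the full technique uses a learned predictor to catch the first failure, here approximated by the keep-building-while-failing rule plus the change-size threshold):

```python
def decide(history, num_changed, threshold=4):
    """Sketch of SB_Jin20_S: while the last executed build is failing,
    keep building so the whole failing streak is observed; otherwise
    skip builds that change fewer than `threshold` files."""
    if history and history[-1] == "fail":
        return "build"
    return "skip" if num_changed < threshold else "build"
```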
\subsection{D1: Computational-cost Reduction}
\label{sec:d1}
We studied four metrics for D1.
We plot the result of each metric in a box plot where each box represents the distribution of values for all the studied projects.
\subsubsection{Studied Metrics}
\label{sec:rq1_metric}
\noindent\textbf{Build time saved}
measures, per project, the proportion of total build time that a technique skips.
It was covered in SB\_Abd19 \cite{abdalkareem2019commits}.
\noindent\textbf{Test time saved}
measures the same as the previous metric but in terms of test time.
The previous work ST\_Gligoric15 \cite{gligoric2015practical} used this metric in its evaluation.
It shows how much time applying a technique could save during the phase of test executions.
\noindent\textbf{Builds number saved}
measures the proportion
of builds that are saved among all builds.
It was studied by SB\_Abd19 \cite{abdalkareem2019commits} and SB\_Jin20 \cite{jin2020_icse}.
It represents how many resources could be saved as the number of builds.
\noindent\textbf{Tests number saved}
measures the same as the previous metric but in terms of tests.
Previous papers \cite{gligoric2015practical,herzig2015art} studied this metric.
It represents how many resources could be saved during test executions.
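All four metrics reduce to the same two computations over different units (builds vs. tests, counts vs. durations); a minimal sketch with assumed identifiers and durations:

```python
def saved_ratio(items, skipped):
    """Proportion of items (builds or tests) that a technique skipped."""
    return len(skipped) / len(items) if items else 0.0

def saved_time_ratio(durations, skipped):
    """Proportion of total build/test time belonging to skipped items."""
    total = sum(durations.values())
    return sum(durations[i] for i in skipped) / total if total else 0.0

# Hypothetical builds with their durations in minutes.
durations = {"b1": 10.0, "b2": 30.0, "b3": 60.0}
```

Note how skipping one build out of three ("b3") saves 33\% of builds but 60\% of build time --- the gap between count-based and time-based metrics discussed below.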
\subsubsection{Analysis of Results}
\label{sec:d1rq1}
\mysubsection{Comparing Metrics}
When we compare the techniques'
test number vs. test time saved,
\obs{most of them saved a very similar ratio of test time as of tests (except ST\_Herzig15)}.
When comparing
build number vs. build time,
\obs{build-granularity techniques saved a very similar ratio of build time as of builds}.
Also, \obs{test-granularity techniques saved a larger ratio of build time than of builds}.
This means that \interp{test-granularity techniques save build time when they skip builds partially --- when they skipped some of their tests}.
When comparing
test number vs. build number,
\obs{build-granularity techniques saved a very similar ratio of builds and tests}.
Also, \obs{test-granularity techniques saved a much lower ratio of builds than of tests} --- some dramatically so (ST\_Herzig15 and ST\_Mach19\_C).
This means that \interp{test-granularity techniques saved a low ratio of full builds}.
When comparing
test time vs. build time,
\obs{build-granularity techniques saved very similar ratios of test time and build time}.
Also, \obs{test-granularity techniques saved a much lower ratio of build time than of test time}.
This observation extends our earlier one: every build that these techniques did not skip fully --- and whose build-preparation time they thus could not skip --- significantly reduced their ability to save build time.
\mysubsection{Comparing Granularities}
By comparing test vs. build-granularity techniques,
\obs{build-granularity techniques generally saved higher build-time cost} --- except for SB\_Abd19.
Build-granularity techniques have the advantage of skipping both test-execution and build-preparation time, while test-granularity techniques have the advantage of skipping tests spread over many builds, not only on those that get fully skipped.
Our observation implies that skipping full builds was a better strategy for saving cost.
\mysubsection{Comparing Techniques}
We first observed that \obs{ST\_Mach19\_C and SB\_Jin20\_C skipped fewer builds than their counterparts that were trained only with data within the same project (ST\_Mach19\_W, SB\_Jin20\_W)}.
After having been trained with a more diverse set of build and tests (across many projects), these techniques became less confident to skip them.
ST\_Herzig15 saved very low ratio of build time despite saving a large ratio of tests.
This is because it very rarely skips tests that failed many times in the past --- regardless of the code changes in the build.
So, within each build, it very rarely skipped the tests with the most past failures --- thus very rarely skipping builds fully.
SB\_Abd19 saved a median 21\% build time, which is a relatively high amount, considering that it only skipped builds with non-executable changes, \textit{e.g.,}\xspace that only changed formatting or comments.
ST\_Mach19\_W and ST\_Gligoric15 skipped a relatively high ratio of build time (competitively with build selection techniques) because they skipped many full builds.
This is because they analyze the relationship between code changes and tests inside a build.
ST\_Gligoric15 skips all tests that cannot execute the code changes, and ST\_Mach19\_W considers the distance between the changes and the tests in its predictor.
This allows both techniques to fully skip those builds in which no test can execute the code changes --- \textit{i.e.,}\xspace when only non-executable code was changed, or when no tests exist to execute the changes.
SB\_Jin20\_W and SB\_Jin20\_S saved high ratios of build time, since they both focused on skipping full builds.
While SB\_Jin20\_S provided higher savings, we expect it to also skip a higher ratio of failing builds (see \cref{sec:d2}) --- SB\_Jin20\_S simply skips builds with $<4$ changed files.
Finally, SB\_Hassan17\_W and SB\_Hassan17\_C skipped too much build time (higher than the perfect baseline).
This is because they mostly rely on the status of the previous build, which is unknown if skipped.
So, as soon as they observe a passing build, they recurrently skip all subsequent builds.
\begin{figure*}%
\centering
\subfloat{{\includegraphics[width=0.4\linewidth]{pdfs/Tests_number_saved_new.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=0.4\linewidth]{pdfs/Test_time_saved_new.pdf} }}%
\centering
\subfloat{{\includegraphics[width=0.4\linewidth]{pdfs/Builds_number_saved_new.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=0.4\linewidth]{pdfs/Build_time_saved_new.pdf} }}%
\caption{Results for Cost Saving Metrics. Prioritization techniques not included, since they do not skip tests/builds.}
\vspace{-.1in}
\label{fig:amount_save}%
\end{figure*}
\begin{figure}%
\centering
\subfloat{{\includegraphics[width=0.8\linewidth]{pdfs/Skipped_failing_test_proportion_new.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=0.8\linewidth]{pdfs/Skipped_failing_build_proportion_new.pdf} }}%
\vspace{-.1in}
\caption{Results for Missed Failure Observation Metrics. Prioritization techniques not included, since they do not skip tests/builds.}
\vspace{-.2in}
\label{fig:skip_proportion}%
\end{figure}
\subsection{D2: Missed Failure Observation}
\label{sec:d2}
\subsubsection{Studied Metrics}
\mysubsection{Proportion of skipped failing tests}
This metric measures the undesired side effect of cost-saving techniques skipping some of the failing test cases.
It was used by ST\_Herzig15 \cite{herzig2015art}.
\mysubsection{Proportion of skipped failing builds}
This metric measures the proportion of failing builds that are skipped among all failing builds.
It was covered in SB\_Jin20 \cite{jin2020_icse}.
\subsubsection{Analysis of Results}
\mysubsection{Comparing Metrics}
All techniques generally skipped a very similar ratio of failing tests as of failing builds, with small differences.
\obs{ST\_Mach19\_C, ST\_Herzig15, ST\_Gligoric15, SB\_Jin20\_S skipped a slightly higher ratio of failing tests than builds.}
This is explained by test-granularity techniques skipping partial builds in addition to full builds, and thus they also skipped a higher ratio of failing tests.
The case of SB\_Jin20\_S is different: it skipped a higher ratio of failing tests because it skipped fewer of the builds with no failing tests --- few of them changed $<4$ files.
SB\_Abd19, SB\_Jin20\_C, ST\_Mach19\_W and SB\_Jin20\_W skipped a slightly higher ratio of failing builds than tests.
This means that these techniques skipped failing builds with lower than average (or no) failing tests, \textit{e.g.,}\xspace failing due to configuration or compilation errors (which amount to 35\% of failing builds).
Finally, SB\_Hassan17\_C and SB\_Hassan17\_W skipped most failing (and passing) tests and builds.
\mysubsection{Comparing Granularities}
\obs{Build-granularity techniques generally skipped higher ratios of failing builds and tests than test-granularity techniques --- except for SB\_Abd19}.
They generally skipped a higher ratio of all tests and builds.
\mysubsection{Comparing Techniques}
If we rank techniques on these two metrics of side-effect, we observe that they rank almost exactly in the opposite order as they would according to build time saved (for D1).
This shows a clear trade-off between cost-saving and its side effect of skipping failures.
\begin{figure*}%
\centering
\subfloat{{\includegraphics[width=0.4\linewidth]{pdfs/Position_changed_builds.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=0.4\linewidth]{pdfs/Position_all_builds.pdf} }}%
\centering
\subfloat{{\includegraphics[width=0.4\linewidth]{pdfs/Build_queue_length_saved.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=0.4\linewidth]{pdfs/Position_all_tests.pdf} }}%
\vspace{-.1in}
\caption{Results for Time-to-feedback Reduction Metrics.}
\label{fig:position}%
\vspace{-.2in}
\end{figure*}
\section{Empirical Study 2. D3: Time-to-feedback Reduction}
In D3, we study how much prioritization techniques advance the observation of failures and how much the side effect in D2 will influence it.
So, we study all the time-to-feedback and computational-cost reduction techniques.
\subsection{Studied Techniques}
\label{sec:tech2}
We only describe here the techniques that we did not describe in earlier sections: prioritization techniques.
\subsubsection{Test-prioritization Techniques}
For this family of techniques, we replicated all the test-prioritization techniques that were proposed for improving CI: PT\_Elbaum14 \cite{elbaum2014techniques} and PT\_Marijan13 \cite{marijan2013test}.
To further extend this study, we also replicated the state-of-the-art test case prioritization (TCP) technique.
We chose the technique that provided the highest effectiveness in the most recent evaluation of TCP techniques \cite{luo2018assessing}: PT\_Thomas14 \cite{thomas2014static}.
TCP was a rich research area before CI became a common practice, \textit{e.g.,}\xspace \cite{mostafa2017perfranker, thomas2014static, elbaum2002test, rothermel2001prioritizing}.
We apply these techniques to prioritize tests within each build.
\noindent\textbf{PT\_Marijan13 \cite{marijan2013test}} prioritizes tests that failed recently or have a shorter duration.
Tests are ordered based on their historical failure data, test execution time and domain-specific heuristics.
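This ordering can be sketched as a two-key sort (the field names and weights are illustrative, and the domain-specific heuristics are omitted):

```python
def prioritize(tests):
    """Order tests so that recently failing ones come first, breaking
    ties in favor of shorter execution time."""
    return sorted(tests, key=lambda t: (-t["recent_failures"], t["duration"]))

tests = [
    {"name": "slow_stable", "recent_failures": 0, "duration": 120},
    {"name": "fast_flaky", "recent_failures": 3, "duration": 5},
    {"name": "fast_stable", "recent_failures": 0, "duration": 5},
]
order = [t["name"] for t in prioritize(tests)]
```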
\noindent\textbf{PT\_Elbaum14 \cite{elbaum2014techniques}} favors tests that failed either recently or a long time ago.
\noindent\textbf{PT\_Thomas14 \cite{thomas2014static}} uses topic modeling to diversify the tests that get executed earlier.
Each next test is selected as the one whose identifiers and comments contain the topics most different from those of the previously selected test.
The rationale behind this is that similar tests often find similar problems.
\subsubsection{Build-Prioritization Techniques}
To the extent of our knowledge, only one technique has been proposed to prioritize software builds, PB\_Liang18 \cite{liang2018redefining}.
\noindent\textbf{PB\_Liang18 \cite{liang2018redefining}} executes earlier those builds in a collision queue that contain a recently-failing and recently-non-executed test.
We apply PB\_Liang18 to prioritize builds within a build waiting queue, as its previous evaluation did \cite{liang2018redefining}.
Queues form when build executions overlap in time.
\subsection{Studied Metrics}
\subsubsection{Positions shifted for observed failing tests within a build}
measures the shifted positions for all observed failing tests (prioritized or not).
A similar metric to this one was used in the evaluations of PT\_Marijan13 \cite{marijan2013test}, PT\_Elbaum14 \cite{elbaum2014techniques}, and PT\_Thomas14 \cite{thomas2014static}.
For test-selection techniques, we measure the average number of shifted positions for all remaining tests --- when a test is skipped, the next one can now run one position earlier.
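A simplified sketch of this position-shift computation (test identifiers are illustrative; a positive shift means the failure is observed earlier):

```python
def positions_shifted(original_order, new_order, failing):
    """For each failing test still present in new_order, the number of
    positions it moved earlier (positive) or later (negative)."""
    return {
        t: original_order.index(t) - new_order.index(t)
        for t in failing if t in new_order
    }

orig = ["a", "b", "c", "d"]
# Selection skipped "a"; the remaining tests each advance by one position.
shifts = positions_shifted(orig, ["b", "c", "d"], failing=["c", "d"])
```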
\subsubsection{Positions shifted for treated failing builds}
measures the number of builds between every treated (delayed/advanced) failing build's original observation position and its new position.
This metric was studied by SB\_Jin20 \cite{jin2020_icse}.
For test-granularity techniques, this metric is not impacted, since the build is still executed in the same position.
For build-selection techniques, we consider that when a build is skipped, it will run as the next build (its tests will run on it).
\subsubsection{Positions shifted for all failing builds}
measures the same as the previous one, but now across all failing builds.
PB\_Liang18 used a similar metric in its evaluation \cite{liang2018redefining}.
This metric shows the overall impact of the previous metric across all builds.
\subsubsection{Build-queue-length saved}
This is a metric designed by us to measure how applying a technique could relieve the collision problem: when multiple builds are waiting to be executed within a limited resource.
We follow the same configuration in PB\_Liang18's paper.
The build-queue-length refers to the median number of builds waiting ahead for each build in each project.
In a pre-experiment on all projects, we found that only one project --- ``Rails/Rails'' --- has a median build-waiting-queue length greater than 0.
Thus, we only report the result for this metric on that project.
\subsection{Analysis of Results}
\mysubsection{Comparing Metrics}
When comparing positions shifted for
treated failing builds vs. all failing builds,
\obs{for all techniques, the advance (PB\_Liang18) or delay (others) that they introduce in the observation of failing builds is much lower when measured across the whole population of failing builds.}
The upside of this is that the undesired effect of most techniques (\textit{i.e.,}\xspace delay of failure observation) is very low across all failing builds (median 0--2 builds).
The downside is that the desired effect of PB\_Liang18 (\textit{i.e.,}\xspace advance of failure observation) is also very low across all failing builds (median 0 builds).
Next, we compare the performance of test selection techniques (\textit{i.e.,}\xspace the only overlapping technique family)
in
the positions that observed failing tests shifted within a build vs.
the positions that failing builds shifted across all builds.
We observe that \obs{test selection techniques provided some advancement in the observation of test failures (lower than most test prioritization techniques), while introducing a very low delay in observation of build failures (median 0--2)}.
\mysubsection{Comparing Granularities}
We did not observe a substantial difference when comparing granularities --- we observed stronger differences when comparing techniques.
\mysubsection{Comparing Technique Strategies}
When comparing technique strategies (prioritization vs. selection),
test-selection techniques provided some advancement in the observation of failing tests within a build,
but test-prioritization techniques provided better results overall (except PT\_Elbaum14).
\mysubsection{Comparing Techniques}
PT\_Marijan13 and PT\_Thomas14 behave very similarly --- despite their different approaches to prioritization --- and they are both close to perfect, prioritizing most tests correctly.
PT\_Elbaum14 provides a lower advancement of test failures (also lower than many test-selection techniques), since it uses a simpler criterion --- prioritizing tests that were executed very recently or a long time ago.
All test-selection techniques provided a very similar advancement of test-failure observation, except ST\_Herzig15 which was slightly better.
Interestingly, ST\_Herzig15 was one of the techniques with the lowest delay in build-failure observation (median 0 for all failing builds).
At the build-granularity, PB\_Liang18 had a very low impact in prioritizing builds because builds very rarely occurred concurrently in our dataset --- only the Rails project had a meaningful number of concurrent builds.
An important metric in PB\_Liang18's original evaluation was the savings in the build-queue length.
We plot the results for all techniques for this metric in \cref{fig:position}.
Interestingly, we also observed that test-selection and build-selection techniques also had a strong impact in this metric --- less so for test-selection techniques and SB\_Abd19 because they skip fewer full builds (see \cref{sec:d1rq1}).
Regarding build-selection techniques, those that saved more builds (see \cref{sec:d1rq1}) also saved more in the build-queue-length metric, but also introduced higher delays in build-failure observation.
\section{Answers for Research Questions and Implications}
\label{sec:discussion}
We synthesize our observations and we lay out their implications to advance this area of research.
\subsection{D1: Computational-cost Reduction}
\label{sec:discussion-d1}
\subsubsection{RQ1: What design decisions did not help?}
First, we report on \textbf{missed opportunities} for saving more computational cost.
Cost-saving techniques focused on skipping passing builds and tests, but they \textbf{did not specifically target those that would provide the highest savings}, \textit{i.e.,}\xspace slower tests, slower builds, or all tests in a build (in the case of test selection).
This is demonstrated by the fact that build-granularity techniques saved similar ratios of test number, test time, build number, and build time; and that test-granularity techniques saved similar ratios of test number and test time, and lower ratios of build time than test time.
We also learned that \textbf{training cost-saving techniques across projects} harmed their predictions.
In other fields, training with data from multiple projects is considered to increase the accuracy of predictors.
For cost-saving techniques, though, this exposed the techniques to more diverse sets of failures, making more builds/tests ``look like a failure'', resulting in the predictors saving less cost (being less inclined to skip builds and tests).
Test-selection techniques were also limited in the cost that they could save when \textbf{they did not target saving full builds} --- ST\_Mach19\_C and ST\_Herzig15 saved very low build time despite saving a high ratio of tests.
An additional aspect that contributed to ST\_Herzig15 saving limited build time (despite saving high number of tests) is that \textbf{it only used features characterizing the tests}, but not the code changes in the build --- \textit{e.g.,}\xspace missing the opportunity to skip full builds for no-code changes.
\subsubsection{RQ2: What design decisions helped?}
Other design decisions allowed techniques to save high cost.
A particularly useful design decision was \textbf{trying to predict seemingly-safe builds and tests} --- SB\_Abd19 saved 21\% of builds simply by skipping builds with no-code changes, and ST\_Gligoric15 saved 36\% of builds by skipping tests that did not cover the code changed in the build.
Another decision that provided high cost savings was to \textbf{skip full builds instead of individual tests} --- thus also saving build-preparation time.
Skipping all tests in a build allows to skip the time to prepare the build (\textit{i.e.,}\xspace compilation and other overhead like virtual machine preparation), and we observed that \textbf{build-preparation takes a large portion of build time}.
An illustrative example is how ST\_Gligoric15 and ST\_Herzig15 saved about the same ratio of test time, but ST\_Gligoric15 saved much higher build time because it saved a much higher ratio of full builds.
Test-selection techniques, however, performed really well in terms of saving a high ratio of tests (84\% by ST\_Herzig15 and 80\% by ST\_Mach19\_W).
This is because they could save some cost spread out across many builds --- \textit{i.e.,}\xspace \textbf{skipping partial builds achieved high cost savings}.
However, the test-selection \textbf{techniques that skipped full builds also achieved high savings}.
Intentionally or not, ST\_Gligoric15 saved many full builds by simply skipping all tests that did not cover the changed code.
ST\_Mach19\_W also saved many full builds by approximating the same idea: one of its predictor's features is the distance between the changed code and the test.
\subsubsection{Implications for Future Techniques}
Our results have multiple implications for the design of future techniques.
First, we encourage future techniques to consider \textbf{hybrid approaches} to save both full builds and also partial builds, \textit{i.e.,}\xspace to save cost at both build and test granularity.
Future techniques should also leverage the beneficial factors that we already observed, such as \textbf{skipping full builds with no-code changes or no tests to cover them}.
To save more full builds, novel prediction features could be designed, \textbf{targeting slower builds} if possible --- which no existing technique attempts.
To save more tests, existing techniques already provide very useful features (saving a high ratio of tests), but other new features could be designed to target \textbf{saving more and slower tests}, and considering the \textbf{relationships between the tests and the code changes} in the build.
Finally, our observations also show that \textbf{build time saved} is the metric that most comprehensively shows the cost saved by all existing techniques --- even though cross-referencing multiple metrics allows for additional observations, as we did in this study.
\subsection{D2: Missed Failure Observation}
\label{sec:discussion-d2}
\subsubsection{RQ1: What design decisions did not help?}
In terms of the proportion of builds and tests that were skipped by cost-saving techniques, we generally observe that \textbf{the decisions that made techniques save higher cost also made them make more mistakes}, \textit{i.e.,}\xspace skip higher ratios of failing builds and tests.
It was also particularly interesting that \textbf{seemingly-safe techniques} --- SB\_Abd19 and ST\_Gligoric15 --- still \textbf{showed pretty high ratios of skipped failing builds and tests}.
Our study thus shows that skipping builds with no-code changes or without tests to execute them is not enough to guarantee that they will not fail.
A quick inspection revealed that the builds and tests skipped by these techniques failed for other reasons, such as configuration or compilation errors (present in 35\% of failing builds).
\subsubsection{RQ2: What design decisions helped?}
One design decision that reduced the skipped failing tests and builds was \textbf{training techniques across projects}.
All the \_C variants skipped lower ratios than their \_W counterparts (except SB\_Hassan17\_C).
Also \textbf{test-granularity techniques generally skipped lower ratios of failing tests} than build-granularity techniques did of builds.
\subsubsection{Implications for Future Techniques}
These results imply multiple recommendations for future techniques.
First, future techniques should design \textbf{novel features to predict failures that are caused by no-code changes}, \textit{e.g.,}\xspace configuration changes, to avoid assuming that seemingly-safe builds will not fail.
Second, future techniques should attempt to \textbf{break this trade-off between saving cost and skipping failures}.
Existing techniques generally increase cost savings by also increasing missed failure observations.
Future techniques should attempt to improve one of the two dimensions by keeping the other one fixed (or optimal).
Finally, future studies should propose \textbf{new metrics to better assess the trade-off between cost-saving and skipped-failures} of various techniques --- since most techniques succeed in one at the expense of the other.
SB\_Jin20 \cite{jin2020_icse} proposed the harmonic mean of the two as a balanced metric, but further study is warranted to understand whether both should be valued equally or in a weighted manner --- particularly considering the much higher ratio of passes to failures in CI datasets.
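A minimal sketch of such a combined metric with equal weighting (the argument names are ours; both inputs are ratios in $[0,1]$):

```python
def harmonic_mean(cost_saved, failures_observed):
    """Harmonic mean of two ratios: high only when a technique both
    saves substantial cost and still observes most failures."""
    if cost_saved + failures_observed == 0:
        return 0.0
    return 2 * cost_saved * failures_observed / (cost_saved + failures_observed)
```

Unlike the arithmetic mean, this score collapses to 0 when either dimension is 0, which penalizes techniques that optimize one goal by sacrificing the other entirely.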
\subsection{D3: Time-to-feedback Reduction}
\label{sec:discussion-d3}
\subsubsection{RQ1: What design decisions did not help?}
Unsurprisingly, \textbf{build-selection techniques did not advance the observation of build failures at all}, but at least they introduced very low delays in the observation of failing builds (and also saved some computational cost).
Similarly, \textbf{test-selection techniques also introduced a small delay in the observation of test failures}.
\textbf{Build-prioritization also showed very limited advancement in observing failing builds}, but that was mainly because only one of our studied projects (open-source) had some contention in the build queue.
We expect that industrial software projects would obtain a much higher benefit from this approach.
Finally, we also observed that the build-selection techniques that produced \textbf{higher cost savings also introduced higher delays in build-failure observation}, showing again the tension between both goals.
\subsubsection{RQ2: What design decisions helped?}
\textbf{The best techniques to provide early feedback were test-prioritization techniques}.
In fact, PT\_Thomas14 provided near perfect results.
We also found that \textbf{test-selection techniques provided lower, but competitive advancement of test failure observation}, while also providing some cost savings.
For example, ST\_Herzig15 provided high advancement of test-failure observation within a build, with very low delay of build-failure observation, while also saving some computational cost.
Similarly, we observed that \textbf{build-selection techniques could also provide reductions in build-queue-length that were competitive with build prioritization}.
\subsubsection{Implications for Future Techniques}
For future techniques, we recommend to \textbf{combine test prioritization with test selection techniques} --- since prioritization techniques could stop after the first failure is identified, and save the cost of running the remaining tests.
We found that test-prioritization techniques already reached very high results (PT\_Thomas14 is near perfect), so the features that they use could be also very useful for test selection to save cost.
Conversely, existing test-selection techniques that already perform very well for cost-savings (\textit{e.g.,}\xspace ST\_Herzig15) could be improved in their ability to advance failure observation.
Similarly, we recommend to further study the application of \textbf{build-selection techniques to provide early observation of build failures}, by skipping builds to shorten the build queue, in industrial projects in which parallel build requests are a larger issue.
Finally, there is also space to develop new metrics that could capture the balance that techniques provide across all dimensions D1--D3.
\subsection{Standing on the Shoulders of Giants}
\label{sec:extended-observations}
Our findings confirm and extend previous work:
\subsubsection{D1}
Beller \textit{et al.}\xspace \cite{beller2017oops} observed that test time is a low proportion of build time.
We extend this observation by finding that our studied test-selection techniques infrequently skipped full (all tests within) builds, which strongly limited their cost-saving ability.
We thus recommend test-selection to incentivize skipping full builds to save higher cost in CI.
\subsubsection{D2}
Jin and Servant \cite{jin2020_icse} observed a trade-off of higher cost savings incurring more missed build failures in their technique.
We extend this observation by finding that all our studied techniques were affected by that trade-off (techniques ranked equally by cost savings as by missed failures).
We additionally identified clear strategies that made techniques miss fewer failures: training across projects, and operating at test granularity.
We also observed that a seemingly-safe technique \cite{abdalkareem2019commits} still missed a high ratio of failures.
Finally, we elicited the need for better prediction of safe builds, and new metrics to compare trade-offs.
\subsubsection{D3}
Herzig \textit{et al.}\xspace \cite{herzig2015art} found that their test-granularity technique incurs low delay in build-failure observations.
We extend this observation by finding that all our other studied test-granularity techniques also incur low build-failure-observation delay, measured across all failing builds.
\HIDDEN
{
\section{Implications}
\mysubsection{For researchers}
Our work provides a set of metrics for the three dimensions when evaluating performances of computational-cost reduction and early feedback time.
Among those metrics, we find that build time seems to be the best metric for evaluating cost reduction, even for test-based techniques.
Other metrics are useful in addition, to observe specific effects, \textit{e.g.,}\xspace test time can provide detailed information for techniques' preference on test duration.
For evaluating side effects, we find that both metrics have some advantages.
In that case, a new metric should be defined to be able to make both kinds of observations.
It should measure the proportion of skipped failures by combining skipped builds and tests.
We also find the existing metrics not comprehensive enough for evaluating the dimension of early feedback time.
New metrics should be defined to measure the cumulative delay (or advancement) for early-feedback techniques, and to balance cost savings against cumulative delay for cost-saving techniques.
We also find some opportunities for improving existing techniques.
Regarding cost saving, future techniques may want to bridge the gap between test and build selection: a hybrid technique could take advantage of both.
Also, new techniques should aim to skip builds partially (like test-based techniques) while also considering build-specific factors (like predicting errors not revealed by test failures).
Besides, new techniques could focus on either the best balance, the best fully-safe technique, or the best savings with the lowest delay.
Finally, there is a high potential to improve the safe-skip build selection approaches.
We also make some interesting observations that could guide future research.
We find that in our data set, 35\% of failing builds have no failing test.
This value is similar to the results of previous work \cite{beller2017oops} (30\% for Java and 33\% for Ruby).
However, neither test- nor build-selection techniques show a specific preference for this kind of build.
Maybe future techniques should split these builds from the population and analyse them separately.
Besides, we observe that test-selection techniques do produce some builds in which no test is selected, but they save a lower proportion of build time than of tests.
This is because a build is made up of several phases, of which test execution is only one, taking a moderate proportion of build time (about 40\%).
Phases such as installation or release can also take substantial time, and they remain out of reach for test-selection techniques in the CI context.
As a result, there is a large potential to save additional build time.
\mysubsection{For practitioners}
Our findings can help practitioners decide which technique is best for them to maximize the benefit by understanding the data in their project.
For example, if your project has many build-queue collision problems, applying PB\_Liang18 can help you advance feedback time.
Or if your project has redundant builds caused by safe modifications, such as changes to Readme files, you can probably take advantage of SB\_Abd19 to skip some safe builds.
Or if your tests have a strong relationship to your source code, ST\_Gligoric15 may help save some passing tests and builds when changes do not touch any of them.
\mysubsection{For tool-builders}
We find that there could be some changes on CI frameworks, \textit{e.g.,}\xspace Travis CI.
From the results for Dimension 3, we find that test-prioritization techniques perform very similarly on the positions advanced for all failing tests within a build and on the positions advanced for prioritized failing tests within a build.
This suggests that nearly all of the failing tests get prioritized.
We then ran an extra experiment and found that, on average, more than 95\% of failing tests get prioritized and are no longer at their original positions.
This means there is potential to improve the original order of test executions.
In that case, maybe Travis could include test prioritization technique into the build workflow automatically.
}
\section{Threats to Validity}
\subsection{Internal Validity}
To guard internal validity, we carefully tested our evaluation tools on subsets of our dataset while developing them.
Our analysis could also be influenced by incorrect information in our analyzed dataset.
For this, we studied a popular dataset that is prevalent among continuous integration studies: TravisTorrent \cite{beller2017travistorrent}.
Furthermore, many of our studied techniques \cite{abdalkareem2019commits,hassan2017change,jin2020_icse,liang2018redefining} were originally evaluated on TravisTorrent projects.
Additionally, we extensively curated TravisTorrent, removing: toy projects following standard practice \cite{islam2017insights, ni2017cost}, unusable projects for test-granularity techniques, and cancelled builds as in past work \cite{gallaba2018noise,jain2019brief,rebouccas2017does}.
Finally, we also followed the advice in Gallaba \textit{et al.}\xspace's study \cite{gallaba2018noise} to consider the nuance in the TravisTorrent dataset.
We did so in the following ways:
(1) We considered passing builds with ignored failures as passing.
Developers manually flag such failures to be ignored when they cannot officially support them \cite{gallaba2018noise}, and thus should not represent the status of the build.
(2) We considered builds that fail after another failure as correctly labeled, because they flag an unsolved problem, being informative for developers.
(3) We considered failing builds with passing jobs as failing builds.
If at least one job fails, it signals a problem, informing developers.
Our results may also be affected by flaky tests causing spurious failing builds.
However, CI systems are expected to function even in the presence of flaky tests, since most companies do not consider it economically viable to remove them, e.g., \cite{Machalica2019predictive, micco2017state}.
Besides, standard cross validation may make unrealistic use of chronological information, training on builds that occur after the ones being predicted.
To address this problem, we used time-based cross validation \cite{bettenburg2008duplicate}.
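As a minimal sketch of what time-based cross validation looks like (the function name and fold scheme below are illustrative, not the exact procedure of \cite{bettenburg2008duplicate}): builds are kept in chronological order, and each fold trains only on builds that occurred strictly before the builds it is evaluated on.

```python
def time_based_folds(n_builds, n_folds):
    """Yield (train, test) index pairs over chronologically ordered builds
    0..n_builds-1, so that every training build precedes every test build."""
    fold_size = n_builds // (n_folds + 1)
    for k in range(1, n_folds + 1):
        split = k * fold_size
        train = list(range(split))
        test = list(range(split, min(split + fold_size, n_builds)))
        yield train, test

for train, test in time_based_folds(10, 4):
    assert max(train) < min(test)  # no training on "future" builds
```

This avoids the leakage of standard $k$-fold cross validation, in which a model may be trained on builds that chronologically follow the ones it predicts.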
Our observed build and test runtimes may have been influenced by the load experienced in the build server at the time.
However, we consider this potential impact to be very low, since we observed that the standard deviation in test duration across builds was 0.5 seconds.
\subsection{External Validity}
To increase external validity, we selected the popular dataset TravisTorrent, which has been analyzed by many other research works.
The projects we chose were all Java or Ruby projects, because there are no projects with other programming languages in the data set.
Although these two programming languages are popular, different CI habits in other languages may provide slightly different results to the ones in this study.
Our observations may slightly vary for separate software projects, but our goal was to derive general observations for a real-world population of software projects.
\subsection{Construct Validity}
A threat to construct validity is whether we studied software projects that are similar to those that suffer most acutely from high CI cost and delays in failure observation, e.g., the projects at Google \cite{hilton2016usage} and Microsoft \cite{herzig2015art}.
We studied the TravisTorrent dataset, which is the standard dataset used in the literature to evaluate techniques to save cost in CI \cite{abdalkareem2019commits,jin2020_icse,liang2018redefining,chen2020buildfast}.
One of our studied projects (Rails) is particularly similar to industrial software projects.
Rails was used alongside two other Google datasets to evaluate PB\_Liang18 \cite{liang2018redefining}, and it had similar magnitudes of test suites (thousands), test executions (millions) and test execution time (millions of seconds).
Nevertheless, early observation (or prediction) of build failures is beneficial, regardless of how much load a project's CI system experiences.
It allows developers to not have to wait for builds to finish, which is the motivation of multiple previous works, e.g., \cite{hassan2017change,abdalkareem2019commits}.
In particular, Abdalkareem \textit{et al.}\xspace \cite{abdalkareem2019commits} found that developers from small projects --- as small as 168 commits --- also chose to manually skip commits in CI to save time.
These savings can be substantial for the projects in our studied dataset: test-suite runtime varies from project to project (median 2.3 mins, 75th percentile 26 mins) but, more importantly, saving full builds could save much higher cost (median 14 mins, 75th percentile 52 mins).
Also, many builds (20\%) take longer than 30 minutes \cite{beller2017oops}.
Test-selection could save higher cost if it leaned harder towards skipping full builds, but we found in this study that this incentive is not yet strongly leveraged by our studied test selection techniques.
\section{Related Work}
\subsection{Empirical Studies of CI and its Cost and Benefit}
Multiple researchers focused on understanding the practice of CI, studying both practitioners \textit{e.g.,}\xspace \cite{hilton2016usage} and software repositories \cite{vasilescu2015quality}.
Vasilescu \textit{et al.}\xspace studied CI as a tool in social coding \cite{vasilescu2014continuous}, and later studied its impact on software quality and productivity \cite{vasilescu2015quality}.
Zhao \textit{et al.}\xspace studied the impact of CI in other development practices, like bug-fixing and testing \cite{zhao2017impact}.
Stahl \textit{et al.}\xspace \cite{staahl2013experienced} and Hilton \textit{et al.}\xspace \cite{hilton2016usage} studied the benefits and costs of using CI, and the trade-offs between them \cite{hilton2017trade}.
Lepannen \textit{et al.}\xspace similarly studied the costs and benefits of continuous delivery \cite{leppanen2015highways}.
Felidr\'e \textit{et al.}\xspace \cite{felidre19} studied the adherence of projects to the original CI rules \cite{fowler2006continuous}.
Other recent studies analyzed testing practices \cite{gautam2017empirical}, difficulties \cite{pinto2017inadequate} and pain points \cite{widder2019conceptual} in CI.
The high cost of running builds is highlighted by many empirical studies as an important problem in CI \cite{hilton2016usage,hilton2017trade,pinto2017inadequate,widder2019conceptual,herzig2015art} --- which reaches millions of dollars in large companies, \textit{e.g.,}\xspace at Google \cite{hilton2016usage} and Microsoft \cite{herzig2015art}.
Some \cite{hilton2017trade,vasilescu2015quality} believe that the benefit of CI mainly lies in early fault detection.
Others \cite{hilton2016usage,leppanen2015highways} find that projects adopting CI are able to integrate pull requests and release in a shorter time.
Some also find that CI can help development teams in other areas, such as providing a common build environment \cite{hilton2017trade} and increasing team communication \cite{staahl2013experienced}.
\subsection{Approaches to Reduce Time-to-feedback in CI}
A related effort for improving CI aims at speeding up its feedback by prioritizing its tasks.
The most common approach in this direction is to apply test case prioritization (TCP) techniques \textit{e.g.,}\xspace \cite{luo2018assessing,mostafa2017perfranker,elbaum2014techniques,marijan2013test,elbaum2002test,rothermel2001prioritizing,zhu2018test} so that builds fail faster.
These techniques, even though not designed for the CI environment, have been claimed to have the potential to provide CI users with earlier fault observation.
Another similar approach achieves faster feedback by prioritizing builds instead of tests \cite{liang2018redefining}.
This technique grants higher priority to builds that are more likely to fail according to historical failure information, and it works well for projects with many build-queue collision issues.
Naturally, these kinds of techniques provide no cost-saving benefit.
In this paper, we study both test-prioritization techniques as well as build-prioritization techniques in terms of advancement of failure observation and compare them with selection techniques.
\subsection{Approaches to Reduce Cost of CI}
A popular effort to reduce the cost of CI focuses on understanding what causes long build durations \textit{e.g.,}\xspace \cite{ghaleb2019empirical,tufano_icse_2019}.
Thus, most of the approaches that reduce the cost of CI aim at making builds faster by running fewer test cases on each build.
It has been found that many passing tests can be saved in this way \cite{labuschagne2017measuring}.
Some approaches use historical test failures to select tests \cite{herzig2015art,elbaum2014techniques}.
Others run tests with a small distance to code changes \cite{memon2017taming} or skip testing unchanged modules \cite{shi2017optimizing}.
Recently, Machalica \textit{et al.}\xspace predicted test case failures using a machine learning classifier \cite{Machalica2019predictive}.
These techniques are based on the broader field of regression test-selection (RTS) \textit{e.g.,}\xspace \cite{zhu2019framework,zhang2018hybrid,gligoric2015practical,yoo2012regression,yoo2007pareto,rothermel1997safe,rothermel1996analyzing}.
While these techniques focus on making every build cheaper, other work addresses the cost of CI differently: by reducing the total number of builds that get executed.
A related recent technique saves cost in CI by not building when builds only include non-code changes \cite{abdalkareem2019commits,abdalkareem_tse2020}.
They first created a rule-based selection technique and then leveraged a machine-learning algorithm to improve its accuracy.
Jin and Servant \cite{jin2020_icse} then proposed a build strategy in which development teams skip less informative passing builds through build-outcome prediction.
Finally, other complementary efforts to reduce build duration have targeted speeding up the compilation process \textit{e.g.,}\xspace \cite{celik2016build} or the initiation of testing machines \textit{e.g.,}\xspace \cite{gambi2015improving}.
In this paper, we refer to cost-reduction techniques as selection techniques.
We pick both build-selection and test-selection techniques and examine their performance on different cost-saving and fault-observation metrics.
\subsection{Evaluation frameworks for similar techniques}
Multiple research works focus on comparing cross-tool performance with an evaluation framework.
Zhu \textit{et al.}\xspace \cite{zhu2019framework} propose a regression test selection framework to check the output against rules inspired by existing test suites for three techniques.
Leong \textit{et al.}\xspace \cite{leong2019assessing} propose a test selection algorithm evaluation method and evaluate five potential regression test selection algorithms, finding that the test selection problem remains largely open.
Najafi \textit{et al.}\xspace \cite{najafi2019improving} studied the impact of test execution history on test selection and prioritization techniques.
Luo \textit{et al.}\xspace \cite{luo2018assessing} conduct the first empirical study comparing the performance of eight test prioritization techniques applied to both real-world and mutation faults and find that the relative performance of the studied test prioritization techniques on mutants may not strongly correlate with performance on real faults.
Lou \textit{et al.}\xspace \cite{lou2019survey} systematically created a taxonomy of existing works in test-case prioritization, classifying them in: algorithms, criteria, measurements, constraints, scenarios, and empirical studies.
Differently to these works, our study in this paper specifically targets the context of CI, and it has a broader focus than test prioritization or selection.
Our study is the first to compare all the techniques proposed to reduce time-to-feedback or cost in CI, including prioritization and selection techniques, at test and build granularities.
We performed observations comparing across 2 goals, 3 dimensions, 10 metrics, 2 granularities, and 10 techniques.
Most of our observations required comparisons at a broad scope.
For example: we revealed the need for a new incentive in test selection to skip full test suites (to also save build-preparation time), which would not be relevant in studies outside the scope of CI.
\section{Conclusions and Future work}
In this article, we performed the most exhaustive evaluation of CI-improving techniques to date.
We evaluated 14 variants of 10 CI-improving approaches from 4 families on 100 real-world projects.
We compared their results across 10 metrics in 3 dimensions.
We derived many observations from this evaluation, which we then synthesized to understand the design decisions that helped each dimension of metrics, as well as those that had a negative impact on it.
Finally, we provide a set of recommendations for future techniques in this research area to take advantage of the factors that we observe were beneficial, and we lay out also future directions to improve on those factors that were not.
We lay out plans to combine approaches at test and build granularities to save further costs, and to combine selection and prioritization approaches to improve on the early observation of failures while also saving some cost.
Such techniques could consider additional history-based prediction features, such as the project's code-change history, \textit{e.g.,}\xspace \cite{servant2011history,servant2012history,servant2013supporting,servant2013chronos,servant2017fuzzy}, since test-execution history was beneficial for some techniques, \textit{e.g.,}\xspace \cite{herzig2015art}.
We also discuss the need for future metrics to capture the various characteristics of these techniques in a more holistic way.
In the future, we will work on designing a comprehensive technique that combines selection and prioritization as well as build and test granularities to maximize the benefit of CI while reducing its cost as much as possible.
\section{Replication}
We include a replication package for our paper \cite{xianhao_jin_2020_3696084}.
\bibliographystyle{abbrv}
\section{Introduction}
Absorption spectroscopy is an important tool for studying the electronic
properties of materials.
For semiconductors and insulators, the low energy part of the absorption spectrum is typically
dominated by excitonic effects, which originate from electron-hole interactions that may be
screened by the other electrons.
The most common computational methods currently used for simulating absorption spectra
are time-dependent density functional theory (TDDFT)~\cite{runge1984,reining2002,ullrich2016} and the Bethe-Salpeter equation
based on the GW approximation to the self-energy (GW-BSE)~\cite{sham1966,hanke1980,albrecht1998,rohlfing2000,onida2002}.
In TDDFT, it has long been
recognized that the inclusion of nonlocal exchange is critical for the description of
excitons~\cite{Bruneval2006,Botti2007,Paier2008,izmaylov2008}, and promising recent work has applied
screened or dielectric-dependent
range-separated hybrids~\cite{Wing2019,Tal2020}.
The GW-BSE approach is based on time-dependent many-body perturbation theory and
typically includes screening at the level of the random-phase approximation. The
predictions are reasonably accurate when compared to experiments, although
implementation details, starting point dependence, and
the absence of finite-temperature or vibrational effects make a rigorous
evaluation challenging. For example, benchmark studies on
molecules~\cite{bruneval2015,jacquemin2016,jacquemin2017,gui2018} have found
that the GW-BSE results depend strongly on the reference
functional and the optimal functionals are very different than those typically used
for solid-state calculations.
Recent years have seen rapid development of wavefunction-based quantum chemistry techniques
for periodic solids~\cite{hirata2001,hirata2004,katagiri2005,gruneis2011,mcclain2017,gruber2018,zhang2019}.
In the present context of neutral excitation energies, equation-of-motion
coupled-cluster theory with single and double excitations (EOM-CCSD)
is a promising alternative to TDDFT or GW-BSE. For example, our group has
applied EOM-CCSD to
study plasmons in models of metals~\cite{Lewis2019,Lau2020}, as well as singly- and
doubly-excited states in a molecular crystal~\cite{Lewis2020}. Recently, the two
of us presented a systematic study of EOM-CCSD for a range of semiconductors and
insulators, finding an accuracy of about 0.3~eV for the first singlet excitation
energy~\cite{Wang2020}. Although these preliminary results are encouraging,
the optical response of solids is encoded
in the full energy-dependent absorption spectrum, which depends on all excited
states in the energy range of interest and their oscillator strengths. Here, we
extend our previous work and study the absorption spectra of semiconductors
and insulators predicted by EOM-CCSD.
The remainder of the paper is organized as follows. In Sec.~\ref{sec:theory},
we briefly describe the theory underlying the calculation of solid-state
absorption spectra with periodic EOM-CCSD. In Sec.~\ref{sec:comput}, we provide
computational details about the basis sets used, integral
evaluation, and $k$-point sampling. In Sec.~\ref{sec:results}, we first
present and discuss our final EOM-CCSD optical absorption spectra for six
solids, before demonstrating detailed studies of the impact of various
approximations. Finally, in Sec.~\ref{sec:conc}, we summarize our results and
conclude with future directions.
\section{Theory}
\label{sec:theory}
Within EOM-CC theory~\cite{emrich1981,emrich1981a,koch1990,stanton1993,bartlett2007,krylov2008,bartlett2012},
excited states with momentum ${\bm{q}}$ are
given by
\begin{equation}
|\Psi({\bm{q}})\rangle = \hat{R}({\bm{q}}) e^{\hat{T}} |\Phi_0\rangle
\end{equation}
where $\hat{T}$ creates momentum-conserving particle-hole excitations
and $\hat{R}$ creates particle-hole excitations with momentum~${\bm{q}}$.
The $\hat{T}$ operator is determined by the solution of the nonlinear
CC amplitude equations and the $\hat{R}$ operator is determined by a
non-Hermitian matrix eigenvalue problem.
In crystals, the density of excited states prohibits their
direct enumeration.
Instead, the absorption (or scattering) spectrum $S_{\bm{q}}(\omega)$ can be obtained
directly at arbitrary frequency by using the solution to a system of linear
equations,
\begin{subequations}
\label{eq:spectrum_final}
\begin{align}
&S_{{\bm{q}}}(\omega) = -\pi^{-1} \mathrm{Im} \langle \Phi_0 | (1 + \hat{\Lambda})
\hat{\bar{\mu}}_{\bm{q}}^\dagger |x_{\bm{q}}(\omega) \rangle \\
\label{eq:spectrum_lineq}
&[\omega - (\hat{\bar{H}} - E_0) + i\eta] |x_{\bm{q}}(\omega) \rangle
= \hat{\bar{\mu}}_{\bm{q}} |\Phi_0 \rangle.
\end{align}
\end{subequations}
where $\hat{\bar{O}} = e^{-\hat{T}} \hat{O} e^{\hat{T}}$ are similarity-transformed operators,
$\hat{\Lambda}$ is the deexcitation operator needed for expectation values
in CC theory,
$\hat{\mu}_{\bm{q}} = \sum_{cv{\bm{k}}} \mu_{c{\bm{k}}+{\bm{q}},v{\bm{k}}} \hat{a}^\dagger_{c{\bm{k}}+{\bm{q}}} \hat{a}_{v{\bm{k}}}$
is the transition operator with momentum ${\bm{q}}$,
and $\eta$ is a numerical Lorentzian linewidth.
Here, we study the performance of EOM-CC with single and double
excitations (EOM-CCSD), i.e.~$\hat{T}=\hat{T}_1+\hat{T}_2$,
$\hat{\Lambda} = \hat{\Lambda}_1 + \hat{\Lambda}_2$, and $\hat{R}=\hat{R}_1+\hat{R}_2$, and focus on
absorption spectra with ${\bm{q}}=0$.
For each frequency $\omega$, the cost of iteratively solving the system of
linear equations~(Eq.~\ref{eq:spectrum_lineq}) scales as $O(N_k^4 N_o^2 N_v^4)$,
where $N_k$ is the number of $k$-points sampled
in the Brillouin zone and $N_o,N_v$ are the number of occupied
and virtual orbitals in the unit cell.
In practice, the iterative solution converges slowly for some
values of $\omega$. Therefore, in this work we also test
and apply so-called partitioned EOM-CCSD~\cite{nooijen1995,stanton1995,gwaltney1996}, where the double
excitation block of the similarity transformed Hamiltonian is
approximated by a diagonal matrix of orbital energy differences.
This reduces the iterative scaling of the EOM step to
$O(N_k^3 N_o^2 N_v^3)$.
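The frequency-dependent linear solve in Eq.~\ref{eq:spectrum_lineq} can be illustrated with a small model (a minimal NumPy sketch, assuming a Hermitian toy matrix in place of the non-Hermitian $\hat{\bar{H}}$, a real transition vector in place of the left-hand $(1+\hat{\Lambda})$ state, and $E_0=0$; all names and values are illustrative):

```python
import numpy as np

def model_spectrum(H, mu, omegas, eta=0.1):
    """S(w) = -1/pi * Im <mu| (w - H + i*eta)^{-1} |mu>, obtained by solving a
    shifted linear system at each frequency (cf. Eq. 2b, with E_0 = 0)."""
    dim = H.shape[0]
    spec = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        a_mat = (w + 1j * eta) * np.eye(dim) - H
        x = np.linalg.solve(a_mat, mu)        # |x(w)> of Eq. 2b
        spec[i] = -np.imag(mu.conj() @ x) / np.pi
    return spec

# Toy example: a diagonal "Hamiltonian" with two excitation energies at 1.0 and 2.5.
H = np.diag([1.0, 2.5])
mu = np.array([1.0, 0.5])
omegas = np.linspace(0.0, 4.0, 401)
S = model_spectrum(H, mu, omegas, eta=0.05)
```

For this toy diagonal Hamiltonian, $S(\omega)$ consists of Lorentzian peaks of width $\eta$ at the two model excitation energies, with heights set by the transition moments.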
As an alternative to EOM-CC, one can use the linear-response coupled cluster (LR-CC) theory
to calculate
excited-state properties. When no truncation is carried out in the excitation
levels, LR-CC and EOM-CC both give exact results. At a truncated excitation level,
the methods yield identical excitation energies but different
excited-state properties, such as transition dipole moments, and only properties
predicted by LR-CC are properly size extensive~\cite{kobayashi1994,koch1994}.
Although this finding calls into question the applicability of EOM-CCSD for solid-state
absorption spectra, the violation of size extensivity is
strongly mitigated when large basis sets are used~\cite{Caricato2009}.
In this work, we observe no problems associated with this deficiency of
EOM-CCSD for spectra, perhaps because of the near completeness of the basis set in
periodic solids.
\section{Computational Details}
\label{sec:comput}
The relatively high cost of periodic CCSD calculations makes it challenging to achieve
convergence to the complete basis set and thermodynamic limits. We have tested convergence with
respect to Brillouin zone sampling, basis sets, frozen orbitals, and the partitioned EOM approximation,
which will be discussed in Sec.~\ref{ssec:approx}.
Based on our studies, our final calculations presented here are performed in the
following way. We use GTH pseudopotentials~\cite{goedecker1996,hartwigsen1998} and the corresponding
polarized double-zeta basis set (DZVP)~\cite{vandevondele2005}. Two-electron repulsion integrals
were treated by Gaussian density fitting with an even-tempered auxiliary basis
(see ref~\citenum{sun2017} for more details). In the CCSD calculations,
we correlate the highest four occupied and the lowest four virtual orbitals at
each $k$-point, while all of the other orbitals are frozen. The partitioning
approximation is made to the similarity transformed Hamiltonian whereby the
dense doubles block is replaced by a diagonal matrix of orbital
energy differences~\cite{nooijen1995,stanton1995,gwaltney1996}. The Brillouin
zone was sampled with a uniform mesh of up to $N_k=5\times5\times5$ $k$-points.
The $k$-point mesh is shifted to include
either the $\Gamma$ point or the random symmetry-breaking point
${\bm{k}}=(0.11, 0.21, 0.31)$ (in fractions of the reciprocal lattice vectors).
Such random shifts have been previously shown to yield absorption spectra that
converge more quickly to the thermodynamic limit~\cite{ahmadpourmonazam2013}.
Lastly, we separately treat the convergence
of the first excitation energy and extrapolate to the thermodynamic limit by
assuming finite-size errors that scale as $N_k^{-1}$, as discussed in our previous
work~\cite{Wang2020}. We then rigidly shift the entire absorption spectrum by
this finite-size correction, which is 0.1--0.4~eV for the materials and $k$-point
meshes considered here.
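The extrapolation step can be sketched as a linear fit in $N_k^{-1}$ (a minimal sketch; the function name and the energies below are made-up illustrative values, not our computed data):

```python
import numpy as np

def extrapolate_tdl(n_k_list, energies):
    """Fit E(N_k) = E_TDL + a / N_k and return the N_k -> infinity intercept,
    i.e. the thermodynamic-limit estimate of the first excitation energy."""
    x = 1.0 / np.asarray(n_k_list, dtype=float)
    a, e_tdl = np.polyfit(x, np.asarray(energies, dtype=float), 1)
    return e_tdl

# Illustrative (made-up) first excitation energies for 3x3x3, 4x4x4, 5x5x5 meshes:
n_k = [27, 64, 125]
e_first = [4.0 + 8.0 / n for n in n_k]    # linear in 1/N_k by construction
e_inf = extrapolate_tdl(n_k, e_first)     # recovers 4.0 for this toy data
shift = e_inf - e_first[-1]               # rigid shift applied to the spectrum
```

The difference between the fitted intercept and the largest-mesh value is the rigid shift applied to the entire spectrum.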
All calculations were performed with PySCF~\cite{sun2018,sun2020a}.
\section{Results and discussion}
\label{sec:results}
\subsection{EOM-CCSD absorption spectrum for six solids}
In Fig.~\ref{fig:spectra}, we show our best and final results for the EOM-CCSD
absorption spectra of six three-dimensional semiconducting and insulating materials:
Si, SiC, C, MgO, BN, and LiF. The experimental lattice constants were used for all systems:
Si (5.431~\AA), SiC (4.350~\AA), C (3.567~\AA), MgO (4.213~\AA), BN (3.615~\AA), and LiF (4.035~\AA).
Spectra were obtained using a $5\times 5\times 5$ $k$-point mesh;
in order to give some sense of possible finite-size
errors, we show EOM-CCSD results obtained with the two $k$-point shifts mentioned above.
Our EOM-CCSD absorption spectra are compared to experimental ones and
to those obtained by configuration interaction with single excitations (CIS),
which was performed with a denser $7\times 7\times 7$ $k$-point mesh.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig-spec.pdf}
\caption{Absorption spectra of Si, SiC, C, MgO, BN, and LiF in the DZVP basis set.
A \mesh{5} $k$-point mesh is used for all CCSD spectra (green) and a \mesh{7} $k$-point
mesh for all CIS spectra (blue).
CCSD spectra are shown with $k$-point meshes that are shifted to include the
$\Gamma$ point (dashed green) and a random, symmetry-breaking $k$-point
(solid green). The corresponding EOM-CCSD first excitation energies are
indicated by green vertical lines.
A broadening of $\eta=0.54$~eV is used in all calculations except for silicon
and MgO where a smaller broadening of $\eta=0.08$~eV is used to resolve the
sharp peaks. For BN, the shaded region of the experimental spectrum should be
ignored as it has been attributed to defects and polymorphism~\cite{tararan2018}.
}
\label{fig:spectra}
\end{figure*}
As seen in Fig.~\ref{fig:spectra}, the EOM-CCSD spectra are in reasonably good agreement with experiment.
Different $k$-point shifts give similar spectra for large gap insulators (like LiF and BN) and
different spectra for smaller gap semiconductors (like Si and SiC), whose main features are shifted
from one another by as much as 1~eV. When compared to experiment, the EOM-CCSD spectra are shifted
to higher energies by about 1~eV, but otherwise have very similar lineshapes, indicating an accurate
description of excitonic interactions and concomitant redistribution of spectral weight. By comparison,
CIS spectra massively overestimate the excitation energies of solids by 3~eV or more, as shown in
our previous work~\cite{Wang2020}, and often have qualitatively incorrect spectral structure.
Because Hartree-Fock-based CIS is identical to unscreened GW-BSE, these results emphasize
the well-known importance of screening, especially in semiconductors.
We believe that the shift to higher energies that is exhibited by EOM-CCSD when compared
to experiment is mostly attributable to the missing correlation due to the neglect of
triple (and higher) excitations and the absence of vibrational and finite-temperature effects,
which are of course present in experiments and absent in the calculations. With regard to
electron correlation, it is interesting to note that our previous work, which
did not study spectra, found that EOM-CCSD overestimated the first excitation
energy by about 0.3~eV, which is noticeably smaller than the
deviations seen in the spectra in Fig.~\ref{fig:spectra}. This discrepancy (i.e.~overestimation
by 0.3~eV versus 1~eV or more) is because
the first excited state, especially in indirect gap materials, is typically
weakly absorbing and contributes to the gradual onset of absorption. However,
experimentally reported first excitation energies are typically those of
spectral peaks or intense features, which occur at higher energies than the
onset of absorption. In Fig.~\ref{fig:spectra}, the green vertical lines indicate the calculated
first excitation energies, which are corrected for finite-size effects and
other approximations mentioned in Sec.~\ref{sec:comput}.
In contrast to the apparent
differences in spectra, the use of differently shifted $k$-point meshes produces first excitation
energies that agree reasonably well with each other, with a difference of 0.04--0.4~eV.
The first excitation energies differ from
our previously reported values~\cite{Wang2020} (by 0.2~eV or less) due to a slightly different treatment of the
finite-size effects in the current work, i.e.~(a) we extrapolate the data using a function of the form
$E_\infty + aN_k^{-1}$,
(b) here, the $5\times 5\times 5$ result is included in extrapolation, and (c) the partitioning
approximation is corrected by a constant shift deduced from the $3\times 3\times 3$ result.
Naturally, we believe that the comparison of spectra, as done here,
enables the fairest evaluation of the accuracy. However, this overestimation of excitation energies
by 1~eV or more is significantly larger than the errors known for EOM-CCSD in molecules.
This difference is surprising, especially given that the excitonic states
contributing to absorption are all predominantly single-excitation in character and
that the EOM-CCSD polarizability has most of the diagrammatic content of the GW-BSE polarizability,
plus more~\cite{Lange2018,Berkelbach2018,Lewis2019,Lewis2020}.
Among the six solids in Fig.~\ref{fig:spectra}, a few show noticeable
differences between the EOM-CCSD and experimental spectra.
The worst agreement is for silicon, which has the smallest gap of all materials considered.
In its experimental spectrum, the main features are the two peaks at 3.5~eV and 4.3~eV with similar intensity.
While the CCSD spectra with both $k$-point shifts predict the position of the first peak reasonably well,
the randomly shifted $k$-point mesh severely underestimates its intensity
relative to that of the higher-lying peaks.
In contrast, the $\Gamma$-inclusive $k$-point mesh correctly gives similar intensity
for the two-peak structure, although the intensity between the two peaks is strongly underestimated.
We believe that the poor agreement between theory and experiment is due to the large remaining finite-size effects,
which are expected to be largest for this small-gap semiconductor.
\subsection{Approximations and error corrections}
\label{ssec:approx}
Finite-size errors of excited-state properties like absorption spectra
have been widely discussed in the TDDFT and GW-BSE literature~\cite{rohlfing2000,laskowski2005,fuchs2008,sander2015,Wing2019},
in part due to the relative maturity and low computational cost of these methods.
In contrast, the finite-size errors of wavefunction-based methods such
as CCSD have been studied significantly less, especially for spectra.
In the following, we will use diamond as an example to study the finite-size errors
of the spectra predicted by EOM-CCSD.
As a warm-up to EOM-CCSD, we first consider CIS, which forms a minimal theory for
electronic excited states in the condensed phase and is qualitatively comparable to TDDFT and GW-BSE.
Importantly, the relatively low cost of CIS allows us to study the convergence with respect to
Brillouin zone sampling up to relatively large $k$-point meshes.
In the upper panels of Fig.~\ref{fig:kcenter}, we show the CIS absorption spectra computed with various
$k$-point meshes centered at $\Gamma$ (right column) or randomly shifted (left column), including up to \mesh{7} $k$-point meshes.
At low mesh densities (like \mesh{3}), the spectra computed with different $k$-point shifts
show a large discrepancy in both peak positions and intensities.
This discrepancy is due to insufficient Brillouin zone sampling and largely depends on details of the
band structure.
As the mesh density is increased, the spectra converge to a similar result, but
the convergence is much more rapid with the randomly shifted $k$-point mesh.
Even for this insulator, the CIS spectra are not graphically converged with a
\mesh{7} mesh. This must be kept in mind when analyzing the EOM-CCSD spectra in
Fig.~\ref{fig:spectra}, which are limited to \mesh{5} meshes.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig-kcenter-gamma-vs-rand.pdf}
\caption{Convergence of the CIS (top) and EOM-CCSD (bottom) spectra of diamond
using various $k$-point shifts and sampling densities.}
\label{fig:kcenter}
\end{figure}
In the lower panels of Fig.~\ref{fig:kcenter}, the EOM-CCSD spectra of diamond with the same two $k$-point shifts are shown, for mesh
densities ranging from \mesh{3} to \mesh{5}.
As for CIS, we again see that the randomly shifted mesh provides significantly faster convergence towards the thermodynamic
limit. In fact, the \mesh{4} and \mesh{5} spectra are very similar and suggest semiquantitative convergence, especially at low
energies.
In addition to the spectral intensities, the excitation energies also exhibit
large finite-size errors. These latter finite-size errors are simpler to remove
by extrapolation. Our final EOM-CCSD spectra shown in Fig.~\ref{fig:spectra}
have been rigidly shifted by the finite-size error of the first excitation
energy. This finite-size error is determined by extrapolation, assuming that the
finite-size error decays as $O(N_k^{-1})$. Raw data and extrapolation fits are
shown in Fig.~\ref{fig:extrap} for four of the solids studied here. As expected,
we see that the convergence is erratic for indirect gap materials (C and Si) but
significantly smoother for direct gap materials (LiF and MgO). At the largest $k$-point
meshes, the finite-size errors are in the range of 0.1--0.4~eV.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig-extrap.pdf}
\caption{Extrapolation of first excitation energies to the thermodynamic limit for C, Si, LiF, and MgO. Frozen virtual orbitals and partitioning are used in all cases.}
\label{fig:extrap}
\end{figure}
Beyond the finite-size errors, we have studied the effects of three other approximations:
incomplete basis set, frozen orbitals, and the partitioning of EOM-CCSD, as shown
in Fig.~\ref{fig:approx} for diamond with the same randomly-shifted $k$-point mesh as above.
In Fig.~\ref{fig:approx}(a), we show that the basis set incompleteness error is negligible by
comparing the spectra
obtained with two types of pseudopotentials, GTH~\cite{goedecker1996,hartwigsen1998} and ccECP~\cite{bennett2017,bennett2018,annaberdiyev2018,wang2019a}, combined with their
corresponding double- and triple-zeta basis sets. These calculations were performed with a
\mesh{2} $k$-point mesh and without freezing any orbitals.
Additionally, we see that the use of two distinct pseudopotentials
does not introduce a noticeable difference in the calculated spectra.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig-diff-c.pdf}
\caption{EOM-CCSD absorption spectra of diamond with various basis sets
and approximations as indicated. A random $k$-point shift and
a Lorentzian broadening of $\eta=0.54$~eV is used in all cases.
(a) Comparison of EOM-CCSD spectra (without partitioning and
with all orbitals correlated) using four different basis
set/pseudopotential combinations as indicated. The Brillouin zone was sampled
with a $2\times2\times2$ $k$-point mesh.
(b) Comparison of the full EOM-CCSD spectra and various approximations as
indicated using the DZVP basis set and GTH pseudopotential.
The Brillouin zone was sampled with a $3\times3\times3$
$k$-point mesh.}
\label{fig:approx}
\end{figure}
In Fig.~\ref{fig:approx}(b), we test the impact of orbital freezing and partitioning
by showing four spectra, all performed with a \mesh{3} $k$-point mesh:
(1) the EOM-CCSD spectrum without approximations,
(2) the EOM-CCSD spectrum with only
4 occupied and 4 virtual orbitals correlated,
(3) the partitioned EOM-CCSD spectrum without any frozen orbitals, and
(4) the partitioned EOM-CCSD spectrum with 4 occupied and 4 virtual orbitals correlated.
We see that freezing orbitals causes a roughly rigid shift of the spectrum to
higher energy by about 0.2--0.5~eV. The shift is not perfectly rigid and, as
expected, the discrepancy is worst at high energies. In contrast, the
partitioning error causes a roughly rigid shift to lower energy by a similar
amount. When both approximations are applied, we obtain a spectrum close
to the one without approximations due to fortuitous cancellation of error, justifying
our use of this affordable approach when scaling up to larger $k$-point meshes.
The effect of all errors discussed in this subsection can be approximated with a rigid
spectral shift according to the error in the first excitation energy.
These corrections for both $\Gamma$-centered and randomly shifted $k$-point
meshes are summarized in Table~\ref{tab:correction} for all six materials studied.
The base result ($E_{555}$) is obtained with partitioned EOM-CCSD using frozen
orbitals and a \mesh{5} $k$-point mesh.
To this, we apply two composite-style
corrections: $\Delta_{\mathrm{TDL}}$ is the difference between the excitation
energy in the thermodynamic limit obtained by extrapolation and $E_{555}$, where
all calculations are performed with partitioned EOM-CCSD/DZVP with frozen
orbitals, and
$\Delta_{\mathrm{frz+part}}$ is the difference between EOM-CCSD without approximations
and partitioned EOM-CCSD with frozen orbitals, using a \mesh{3} $k$-point mesh.
These two corrections are roughly comparable in magnitude but strongly system dependent.
Although each correction alone may shift the energy by up to 0.5~eV,
the final correction is typically quite small due to error cancellation.
To reiterate, the final spectra presented in Fig.~\ref{fig:spectra} were obtained with
a \mesh{5} $k$-point mesh using
partitioned EOM-CCSD, correlating 4 occupied and 4 virtual orbitals per $k$-point,
and then rigidly shifted according to the corrections given in Tab.~\ref{tab:correction}
to approximately correct for finite-size errors, frozen orbitals, and the partitioning
approximation applied to the dense doubles block of the Hamiltonian.
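The composite correction just described is a simple sum of the base energy and the two shifts. The sketch below reproduces $E_{\mathrm{final}} = E_{555} + \Delta_{\mathrm{TDL}} + \Delta_{\mathrm{frz+part}}$ from the randomly shifted entries of Table~\ref{tab:correction}; since the tabulated values are rounded to 0.01~eV, the sums can differ from the printed $E_{\mathrm{final}}$ by up to 0.01~eV.

```python
# (material, E_555, Delta_TDL, Delta_frz+part, E_final) in eV, from the
# randomly shifted block of Table I.
table_random = [
    ("Si",   3.52, -0.18,  0.20,  3.53),
    ("SiC",  5.83,  0.14,  0.53,  6.50),
    ("C",    7.59, -0.43,  0.37,  7.53),
    ("MgO",  8.88, -0.11,  0.28,  9.05),
    ("BN",  11.06, -0.16,  0.24, 11.14),
    ("LiF", 13.57,  0.09, -0.06, 13.61),
]

def final_energy(e_555, d_tdl, d_frz_part):
    """Composite-style correction of the base 5x5x5 excitation energy."""
    return e_555 + d_tdl + d_frz_part

residuals = {m: final_energy(e, d1, d2) - ef
             for (m, e, d1, d2, ef) in table_random}
# Every sum matches the tabulated E_final to within the 0.01 eV rounding.
```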
\begin{table}[t]
\caption{EOM-CCSD first excitation energies and corrections (in eV) for Si, SiC, C, MgO, BN, and LiF.}\label{tab:correction}
\begin{ruledtabular}
\begin{tabular}{l d{-1} d{-1} d{-1} d{-1} }
\toprule
& \multicolumn{1}{c}{$E_{555}$} & \multicolumn{1}{c}{$\Delta_{\mathrm{TDL}}$} & \multicolumn{1}{c}{$\Delta_{\mathrm{frz+part}}$} & \multicolumn{1}{c}{$E_{\mathrm{final}}$} \\
& \multicolumn{4}{c}{randomly shifted $k$-point mesh} \\
\cline{2-5}
Si & 3.52 & -0.18 & 0.20 & 3.53 \\
SiC & 5.83 & 0.14 & 0.53 & 6.50 \\
C & 7.59 & -0.43 & 0.37 & 7.53 \\
MgO & 8.88 & -0.11 & 0.28 & 9.05 \\
BN & 11.06 & -0.16 & 0.24 & 11.14 \\
LiF & 13.57 & 0.09 & -0.06 & 13.61 \\
& \multicolumn{4}{c}{$\Gamma$-inclusive $k$-point mesh} \\
\cline{2-5}
Si & 3.45 & -0.08 & -0.12 & 3.25 \\
SiC & 6.10 & -0.28 & 0.24 & 6.06 \\
C & 7.01 & -0.05 & 0.29 & 7.25 \\
MgO & 8.03 & 0.20 & 0.41 & 8.64 \\
BN & 10.82 & 0.07 & 0.16 & 11.06 \\
LiF & 13.51 & 0.28 & 0.00 & 13.79 \\
\bottomrule
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Conclusions and outlook}
\label{sec:conc}
We have presented the first absorption spectra of atomistic, three-dimensional
solids using periodic EOM-CCSD, focusing on Si, SiC, C, MgO, BN, and LiF. With
increasing Brillouin zone sampling, we observe no problems associated with the
lack of size-extensivity of EOM-CCSD spectral intensities~\cite{kobayashi1994,koch1994}.
This may be due to the reasonably complete basis set~\cite{Caricato2009}
provided by a solid-state environment, but further study is warranted.
After accounting for a number of sources
of error, our best and
final spectra show reasonably good agreement with experimental spectra,
indicating that EOM-CCSD is a promising and tractable approach for the
study of excitations in solids.
In many materials, we find that spectral shapes are well reproduced but are shifted
to higher energies with respect to experiment by about 1~eV. We attribute this
discrepancy to a combination of incomplete electron correlation (i.e., the
impact of triple and higher excitations) and the neglect of zero-point and
finite-temperature vibrational effects~\cite{noffsinger2012,patrick2014,lambrecht2017}.
Unlike TDDFT and GW-BSE, CCSD offers little freedom in the choice of starting point
because of its weak sensitivity to the reference determinant.
Overall, the agreement between EOM-CCSD and experimental spectra is best for
large-gap insulators and worst for small-gap semiconductors, which we attribute
to finite-size errors, i.e.~incomplete Brillouin zone sampling, and the increasing
importance of correlation in small-gap materials. Whereas
extrapolation of isolated energies to the thermodynamic limit is largely
successful, doing the same for spectral intensities is not straightforward.
The high cost of EOM-CCSD calculations precludes brute force convergence
and future work will explore the use of interpolation~\cite{rohlfing2000}, twist
averaging~\cite{lin2001}, and double-grid schemes~\cite{Kammerlander2012}, which
have been very successful at providing converged GW-BSE spectra at reduced
computational cost.
\section*{Acknowledgments}
X.W.~thanks Alan Lewis for helpful discussion. This work was supported in part
by the National Science Foundation under Grant No.~OAC-1931321. All
calculations were performed using resources provided by the Flatiron Institute.
The Flatiron Institute is a division of the Simons Foundation.
\section*{Introduction}
A standard combustion process is sustained by the heat produced in the course of the combustion reaction \cite{Lewis1987,Law2006}. When the volume in which the process proceeds becomes small, the reaction stops because heat escapes quickly through the volume boundaries \cite{Veser2001,Fernandez2002}. For this reason the minimal size of microcombustors cannot be much smaller than $1\:$mm \cite{Maruta2011,Chou2011} and their volume is at least a few cm$^3$. In spite of these facts, spontaneous combustion of hydrogen and oxygen was observed in microsystems in nanobubbles \cite{Svetovoy2011} and microbubbles \cite{Postnikov2016} (see also a recent review \cite{Svetovoy2016}). The mechanism of the combustion in such small volumes is not clear, but observations suggest that the bubble surface plays a role similar to that of a catalyst.
Nanobubbles (NBs) containing a stoichiometric mixture of H$_2$ and O$_2$ gases were produced in microsystems using voltage pulses of alternating polarity (AP) \cite{Svetovoy2016}. For a sufficiently large amplitude $U > 4-5\:$V and pulse repetition frequency $f\sim 100\:$kHz, the local concentration of both gases above the same electrode reaches very high values. It can be so large that bubbles containing a mixture of gases nucleate homogeneously. Due to the homogeneous formation, a large number of small bubbles emerges instead of a small number of large bubbles, as happens in normal electrolysis \cite{Svetovoy2013}. These small bubbles do not grow larger than $200\:$nm in size because the reaction between the gases is initiated spontaneously \cite{Svetovoy2011}. Although the combustion was not observed directly due to the short lifetime and small size of the objects, a series of signatures strongly suggests that the reaction occurs.
Not all the gas produced electrochemically is burned in NBs. Some bubbles contain only oxygen or only hydrogen. These NBs survive, resulting in a gradual pressure increase in a closed microchamber \cite{Svetovoy2014}. When the pulses are switched off, the pressure relaxes in $100\:\mu$s, i.e., much faster than the time needed to dissolve the gases in the liquid. The fast relaxation of the pressure was explained by the merging of H$_2$ and O$_2$ nanobubbles with the formation of a bubble containing a stoichiometric mixture of gases. The latter rapidly disappears due to the combustion reaction.
A new phenomenon emerges when the concentration of the H$_2$ and O$_2$ nanobubbles becomes so large that the bubbles touch each other and merge during the process \cite{Postnikov2016}. In this case microbubbles (MBs) with a typical size of $10\:\mu$m appear in the chamber, as can be seen in stroboscope snapshots. The dynamics of the bubbles was too fast to observe optically. The appearance of a MB is accompanied by a significant pressure surge in the closed chamber, which lasts for a few microseconds, as was measured by a vibrometer. The effect was explained by the merging of many NBs with the subsequent combustion of gases in the formed MB. The energy produced in one event was estimated as $3\:$nJ.
Observation of the combustion in nano- and microbubbles shows that it is possible in principle to overcome the fundamental limit for scaling down internal combustion engines \cite{Svetovoy2016}, which could be used to power different kinds of microdevices \cite{Abhari2012,Ashraf2011,Weiss2011,Volder2010}. However, the energy produced by the combustion in NBs turns mostly into heat \cite{Svetovoy2014}. Only 5\% of the combustion energy in MBs is transformed into mechanical work done by a flexible membrane covering the microchamber \cite{Postnikov2016}.
In this paper we report the formation of exploding bubbles and their dynamics in a millimeter-sized system. The combustion reaction between hydrogen and oxygen in these bubbles is ignited spontaneously at room temperature. The formation of the bubbles is accompanied by audible sounds, the energy released in an explosion is two orders of magnitude larger than was observed previously, and a significant part of this energy is transformed into useful mechanical work. On the other hand, the reaction is not initiated in bubbles generated from an external source of the gas mixture.
\section*{Results}
Figure \ref{fig:setup} shows a device used to generate gas in the system. A circular shape of the electrodes helps to better localize the produced gas. When a DC potential is applied to the electrodes, we observe intense formation of clearly visible bubbles on both electrodes. The gases are hydrogen and oxygen since a small concentration of dissolved Cu$^{2+}$ ions does not play a significant role. When the AP pulses are applied at a frequency of $100\:$kHz or more, no gas is visible. However, the Faraday component of the current increases in comparison with that in the DC regime. This component can be separated by fitting each pulse with the function of time $I(t)=I_F+I_1e^{-t/\tau}$ (see \cite{Svetovoy2013} for details), where $I_F$ is the Faraday current and the second term is responsible for the charging-discharging effects. For driving pulses with the amplitude $U=6.75\:$V and the frequency $f=200\:$kHz we have found $I_F\approx 120\:$mA and $\tau\approx 1\:\mu$s.
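The decomposition of each current pulse into a Faraday component and a charging-discharging transient can be sketched as a least-squares fit of $I(t)=I_F+I_1e^{-t/\tau}$. The snippet below uses a simple grid search over $\tau$ on synthetic data mimicking the reported values ($I_F=120\:$mA, $\tau=1\:\mu$s); the fitting procedure is our illustration, not necessarily the one used in the original analysis.

```python
import numpy as np

def fit_pulse_current(t, i_total):
    """Fit I(t) = I_F + I_1 * exp(-t / tau) to a single-pulse transient.

    Grid search over tau; at each tau the amplitudes (I_F, I_1) follow
    from linear least squares. Returns (I_F, I_1, tau).
    """
    best = None
    for tau in np.linspace(0.1e-6, 5e-6, 200):
        basis = np.vstack([np.ones_like(t), np.exp(-t / tau)]).T
        coef, res, *_ = np.linalg.lstsq(basis, i_total, rcond=None)
        r = float(res[0]) if res.size else 0.0
        if best is None or r < best[0]:
            best = (r, coef[0], coef[1], tau)
    return best[1], best[2], best[3]

# Synthetic pulse with the parameters reported in the text (A, s):
t = np.linspace(0.0, 5e-6, 400)
i_total = 0.120 + 0.3 * np.exp(-t / 1e-6)   # I_F = 120 mA, tau = 1 us
i_f, i_1, tau = fit_pulse_current(t, i_total)
```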
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{Fig1.jpg}
\vspace{-0.3cm}
\caption{(a) Schematic representation of the PCB. (b) Ready-to-use device in a Petri dish. A piece of silicon is floating above one of the structures. (c) Scanning electron microscope image of the electrodes. (d) Profilometer image of the electrodes. Colors from blue to red correspond to increase of the height. \label{fig:setup}}
\vspace{-0.5cm}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Fig2.jpg}
\vspace{-0.3cm}
\caption{Sound and current at $U=6.75\:$V, $f=200\:$kHz. (a) Sound (red, right axis) from eight clicks and total current (blue, left axis) as recorded synchronously by the PicoScope. Only enveloping lines for the current are shown. A region around the red arrow is zoomed in (b), where the sound signal is averaged over $50\:\mu$s. Panel (c) shows the power spectrum as a function of sound frequency $f_s$. \label{fig:current_sound}}
\vspace{-0.5cm}
\end{figure}
\subsection*{Sound and current}
Although no visible bubbles are produced in the AP regime, the process is accompanied by clicking sounds, which are repeated every $50-100\:$ms.
The sound generated by the process and the total current in the system
are shown in Fig.$\:$\ref{fig:current_sound}. The amplitude of the driving voltage and its frequency were $U=6.75\:$V and $f=200\:$kHz, respectively. Only the enveloping lines of the current are shown since the fast oscillations cannot be resolved on the timescale of the figure. As one can see in panels (a) and (b), each click is related to a current drop lasting about $500\:\mu$s. An expected delay of $0.4\:$ms is observed between the beginning of the current drop and the starting moment of the sound because the microphone was positioned at a distance of $12-15\:$cm from the electrodes. The clicks are nearly but not exactly periodic. Their amplitude and the interval between the clicks correlate with the depth of the current drop: the deeper the current drop, the higher the amplitude and the longer the interval to the next click. The frequency spectrum of the sound is shown in panel (c). It varies with the dish size and shape, and depends on the proximity of the objective and other geometrical characteristics of the setup. As demonstrated in Supplementary Information (Fig.$\:$1S), the separate lines of the click are essentially defined by the eigenmodes of the system. After the drop the current slowly returns to the value it had before the drop. This is because the liquid near the structure is heated by a few degrees by the current, while bubble formation and termination bring colder liquid to the structure (see Fig.$\:$2S and explanations in Supplementary Information).
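The quoted $0.4\:$ms delay is consistent with the acoustic travel time over the microphone distance. A quick check, assuming a speed of sound in air of $343\:$m/s (our assumption; the text states only the distance):

```python
SPEED_OF_SOUND_AIR = 343.0   # m/s, assumed value at room temperature

def travel_distance(delay_s, c=SPEED_OF_SOUND_AIR):
    """Distance corresponding to an acoustic delay."""
    return c * delay_s

d = travel_distance(0.4e-3)      # 0.4 ms delay -> about 13.7 cm
in_range = 0.12 <= d <= 0.15     # inside the stated 12-15 cm distance
```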
\subsection*{Fast video}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Fig3.jpg}
\vspace{-0.3cm}
\caption{The current drop at $U=6.75\:$V, $f=150\:$kHz and the synchronized position of the frames (dots) in the fast video. A few frames indicated by the arrows are shown as insets. The central electrode and the emerging bubble are in the field of view. In frame 4 the initial bubble is circled. \label{fig:current_frame}}
\vspace{-0.5cm}
\end{figure}
Fast video (see Supplementary Information, file V1) reveals the reason for the current drop. A bubble growing at the periphery of the central electrode blocks the current. Figure \ref{fig:current_frame} shows the current as a function of time. Red dots indicate the positions of the frames in the video, and a few frames are shown as insets. The initial size of the bubble can be roughly estimated from frame 4, where the bubble appears for the first time (surrounded by a circle of diameter $100\:\mu$m). The bubble diameter is in the range of $30-50\:\mu$m. The large uncertainty is due to the difficulty of defining the bubble edge in this frame. The low quality of the optical images is mostly due to the poor reflectivity of the structure. Note that when the initial bubble appeared the current had not yet changed. The initial inflation rate is estimated from frames 4 and 5 as $6\:$m/s, but it slows down with time. The bubble reaches a size of $300\:\mu$m in $50\:\mu$s (frame 9). At this point the inflation becomes slow, and after frame 23 the bubble gets out of focus. The disappearance of the bubble cannot be seen in this video, but for smaller bubbles, which stay in focus, one can see that the bubble shrinks and disappears (see Supplementary Information, file V2).
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Fig4.jpg}
\vspace{-0.3cm}
\caption{Movement of the floating Si-sample at $U=6.75\:$V, $f=200\:$kHz. (a) Current (left axis) and the vibrometer signal (right axis) recorded synchronously as functions of time. The arrow indicates the point where the signal is three times larger than the noise level. This point is taken as $t=0$. (b) The same vibrometer signal (velocity) on a larger time scale (right axis). The green curve (left axis) shows the displacement of the silicon piece in its center, which is just the signal integrated over time. \label{fig:current_velocity}}
\vspace{-0.5cm}
\end{figure}
\subsection*{Vibrometer measurement}
The sound produced by the process is a sign that highly energetic events are happening in the system. To characterize these events we use the vibrometer. A piece of silicon with dimensions of $17\times 8\times 0.5\:$mm$^3$ floats in the Petri dish with its center just above the electrodes. The laser beam is positioned on the center of the piece. The vibrometer registers the velocity of the Si sample in its center. The result is presented in Fig.$\:$\ref{fig:current_velocity}(a) synchronously with the current in the system. The velocity signal is in agreement with the fast camera observations. The growing bubble moves the plate up but the shrinking bubble pulls it down. However, what is striking is the magnitude of the effect. A non-zero signal appears about $10\:\mu$s before the current starts to decrease. The initial acceleration is estimated as $710\:$m/s$^2$. The velocity of the sample reaches a maximal value of $v\approx 6\:$cm/s at the moment $t=146\:\mu$s. The signal is zero again in the middle of the current drop, where inflation of the bubble changes to deflation. The latter corresponds to negative values of the Si-sample velocity. When the bubble disappears the signal changes its behavior and oscillates with a frequency of about $10\:$kHz, as one can see in panel (b).
The oscillations are related to the lowest flexural mode of the Si-sample. For a free sample its frequency is estimated as $15\:$kHz, but the floating sample is not actually free and its frequency is reduced. Since the main wavelength $\lambda=52\:$mm of the sound is much larger than the liquid thickness $H$ between the electrodes and the floating silicon (estimated as $H=2-0.5=1.5\:$mm), the liquid moves together with the sample and the frequency reduction factor can be expressed via the added mass as $\left[\rho_s h/(\rho_l H+\rho_s h)\right]^{1/2}\approx 0.66$, where $h=0.5\:$mm is the thickness of the piece and $\rho_l=1\:$g/cm$^3$ and $\rho_s=2.33\:$g/cm$^3$ are the densities of the liquid and solid, respectively. Thus, the flexural frequency of the Si sample is estimated as $9.9\:$kHz.
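The added-mass estimate above is easy to verify numerically:

```python
def flexural_freq_with_added_mass(f_free_khz, rho_s, h, rho_l, H):
    """Flexural frequency of a floating plate reduced by the added mass
    of the liquid layer (thickness H) moving together with it.

    Densities in g/cm^3 and thicknesses in mm; only ratios matter.
    """
    factor = (rho_s * h / (rho_l * H + rho_s * h)) ** 0.5
    return factor, f_free_khz * factor

factor, f_red = flexural_freq_with_added_mass(
    f_free_khz=15.0, rho_s=2.33, h=0.5, rho_l=1.0, H=1.5)
# factor ~ 0.66 and f_red ~ 9.9 kHz, close to the observed ~10 kHz
```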
Figure \ref{fig:current_velocity}(b) shows the velocity signal and displacement of the Si-sample on a larger timescale. The bulky piece of silicon is moved up to $9\:\mu$m by the process. The velocity of the sample measured by the vibrometer gives all the mechanical information on its movement. Using the work-energy principle we can estimate the work $W$ done by the inflating bubble on the piece as its maximal kinetic energy $W=mv^2_{max}/2\approx 0.28\:\mu$J, where $m=156\:$mg is the sample mass. This work is a lower limit for the energy of the event $E_{ev}$ because some energy dissipates and some escapes via the liquid due to the longitudinal movement. Nevertheless, we can take $E_{ev}\approx 0.3\:\mu$J as a good estimate since the quality factor is not small as oscillations in Fig.$\:$\ref{fig:current_velocity}(b) demonstrate and only a small part of the energy escapes in the longitudinal direction because the sample size is much larger than the liquid layer underneath it.
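The work-energy estimate is a one-line calculation:

```python
def max_kinetic_energy(mass_kg, v_max_ms):
    """Work-energy lower bound: W = m * v_max^2 / 2."""
    return 0.5 * mass_kg * v_max_ms**2

# m = 156 mg = 156e-6 kg, v_max = 6 cm/s
W = max_kinetic_energy(156e-6, 0.06)
# W is about 2.8e-7 J = 0.28 uJ, the value quoted in the text
```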
\subsection*{Microfluidic generation}
We performed a dedicated experiment to compare MBs produced by the electrochemical process with MBs generated from an external source of the stoichiometric mixture of gases. In the latter case the bubbles were produced by a microfluidic bubble generator \cite{Hettiarachchi2007}. The stoichiometric gas mixture was fed into one channel of the generator while the electrolyte or clean water was fed into the other channel. The bubbles produced by the generator have a size of $10-12\:\mu$m (see Fig.$\:$\ref{fig:b_generator}). These bubbles survive at least $900\:$ms before going out of the field of view.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Fig5.jpg}
\vspace{-0.3cm}
\caption{Generation of MBs from an external source of gas mixture. (a) Supplying channels for gas and liquid and the diffuser, where the bubbles are generated from breaking the meniscus. (b) Train of MBs containing H$_2$ and O$_2$ mixture in the electrolyte produced by the bubble generator. \label{fig:b_generator}}
\vspace{-0.5cm}
\end{figure}
\section*{Discussion}
As follows from Fig.$\:$\ref{fig:current_velocity}(a), the velocity of the Si-sample builds up in less than $10\:\mu$s. Since the initial acceleration is large, $\sim 70g$, where $g$ is the free-fall acceleration, we conclude that the process has an explosive character. The presence of sound is an additional characteristic feature of an explosion. These signatures can be produced by the combustion reaction between H$_2$ and O$_2$ gases in the initial bubble. If the bubble is filled with a stoichiometric mixture of gases, the combustion energy is estimated as $E_{com}=2N|\Delta H|/3$, where $N$ is the amount of gas (in moles) in the bubble and $\Delta H\approx -242\:$kJ/mol is the enthalpy of water formation. The amount $N$ can be expressed using the equation of state at room temperature before the combustion. Thus, we find that $E_{com}\approx 0.3\:\mu$J for an initial bubble radius $R_{in}\approx 22\:\mu$m. This size is in agreement with the rough estimate from the video. The ratio of the useful work to the explosion energy, $W/E_{com}\sim 1$, can be compared to the value $W/E_{com}\simeq 0.05$ achieved in \cite{Postnikov2016}, where the work was done by the flexible membrane closing the microchamber.
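The combustion-energy estimate can be reproduced from the ideal gas law. We assume ambient pressure $P=101325\:$Pa and $T=300\:$K (the text states only room temperature) and take $N$ in moles so that $|\Delta H|$ applies per mole:

```python
import math

P_ATM = 101325.0   # Pa, assumed ambient pressure
T_ROOM = 300.0     # K, assumed room temperature
R_GAS = 8.314      # J/(mol K)
DH_WATER = 242e3   # J/mol, |enthalpy| of water formation

def combustion_energy(radius_m, p=P_ATM, t=T_ROOM):
    """Energy released by burning a stoichiometric 2H2:O2 bubble.

    Of the n_tot moles of gas at (p, t), 2/3 are H2, and each mole
    of H2 releases |Delta H| on combustion.
    """
    volume = 4.0 / 3.0 * math.pi * radius_m**3
    n_tot = p * volume / (R_GAS * t)
    return (2.0 / 3.0) * n_tot * DH_WATER, volume

E_com, V_in = combustion_energy(22e-6)   # R_in = 22 um
energy_density = E_com / V_in            # independent of bubble size
# E_com is about 0.29 uJ and the density about 6.5 MJ/m^3, matching
# the estimates quoted in the text.
```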
Underwater combustion and explosions were investigated in many papers (see, for example, \cite{Kumakura1992,Kumakura1996,Wu2005,Klaseboer2005,Teslenko2014}), but it is difficult to compare those results with our case due to the much larger scale of the events and the different geometrical configurations of the experiments and modelling. Combustion in bubbles as small as $10\:\mu$m in diameter was considered in \cite{Nguyen2005}, but it was supported by the high temperature inside a bubble rapidly collapsing in an acoustic field.
Direct comparison can be done with the experiment \cite{Teslenko2010}, where combustion was observed in a bubble (in water) of $2\:$mm in diameter containing an acetylene-oxygen mixture. A significant energy of $20\:$mJ was supplied by a spark to ignite normal combustion in the bubble. In our case the reaction is initiated spontaneously in a smaller bubble, but the rates of inflation are comparable: $6\:$m/s in our case vs $8\:$m/s in \cite{Teslenko2010}. An additional relevant quantity is the energy density. In our case it can be estimated directly from the experiment as $E_{ev}/V_{in}=5-21\:$MJ/m$^3$, where $V_{in}=4\pi R_{in}^3/3$ is the initial volume of the bubble. The wide interval here is due to the uncertainty in the initial bubble size. If we use $E_{com}$ instead of the experimental value $E_{ev}$, the volume $V_{in}$ drops out and we find the theoretical value $6.5\:$MJ/m$^3$ for the hydrogen-oxygen combustion. This energy density can be compared with $15\:$MJ/m$^3$ for the acetylene-oxygen mixture. It differs from that for hydrogen-oxygen because the enthalpy of the acetylene combustion is rather large, $\Delta H_{C_2H_2}=-1318\:$kJ/mol. From the comparison one can conclude that we indeed observe combustion of hydrogen in microbubbles.
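The quoted energy densities can likewise be recomputed. The sketch below assumes room conditions ($T_0\approx 300\:$K, $P_0\approx 1\:$bar) and the stoichiometries $2\mathrm{H}_2+\mathrm{O}_2$ and $\mathrm{C}_2\mathrm{H}_2+2.5\,\mathrm{O}_2$:

```python
R_gas = 8.314                      # gas constant, J/(mol K)
T0, P0 = 300.0, 1.0e5              # assumed room conditions
n = P0 / (R_gas * T0)              # molar density of the gas, mol/m^3

# 2H2 + O2: two thirds of the molecules are H2, |dH| = 242 kJ/mol of H2
u_h2 = (2.0 / 3.0) * n * 242e3
# C2H2 + 2.5 O2: one molecule in 3.5 is C2H2, |dH| = 1318 kJ/mol of C2H2
u_c2h2 = n * 1318e3 / 3.5
print(f"H2/O2: {u_h2 / 1e6:.1f} MJ/m^3, C2H2/O2: {u_c2h2 / 1e6:.1f} MJ/m^3")
```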
The initial bubble containing the stoichiometric mixture of gases is formed by merging of the NBs produced by the alternating-polarity electrochemical process \cite{Postnikov2016}. For this, the density of NBs must be so high that the bubbles are nearly touching. This is why the events are separated by a time of about $50\:$ms, as can be seen in Fig.$\:$\ref{fig:current_sound}. After an explosion the system waits until new NBs are collected near the structure. Fluctuations of the local concentration of nanobubbles explain why the clicks do not occur completely regularly.
The reason for the spontaneous initiation of the reaction in a relatively large microbubble is not clear. In NBs containing a mixture of gases the reaction could be initiated as a surface-assisted process \cite{Svetovoy2016}, but for larger bubbles this mechanism should be suppressed due to the smaller surface-to-volume ratio. The stoichiometric bubbles produced by the microfluidic generator survive for at least $900\:$ms. On the other hand, in the bubbles formed by coalescence of NBs the reaction is initiated spontaneously in less than $10\:\mu$s. We do not know the reason for this difference, but it could be related to nanodroplets left in the bubble after the merging of many NBs. Densely packed NBs fill only 74\% of the volume; the rest is the liquid trapped in between the bubbles. After coalescence of the bubbles the trapped liquid forms nanodroplets distributed inside the MB. In such a MB any gas molecule is separated from the liquid by a nanoscopic distance, which may play a role in the initiation of the reaction.
When the explosion happens in the initial MB, the pressure and temperature in the bubble rise. If heat exchange with the walls is slow in comparison with the reaction, the combustion energy $E_{com}$ is spent only on heating the water vapor formed in the reaction. Assuming that the bubble size changes insignificantly during the combustion phase, we can find the pressure in the bubble from the equation of state as $P/P_0 \approx 2|\Delta H|/3RT_0=64.7$, where $R$ is the gas constant, $P_0\approx 1\:$bar is the ambient pressure, and $T_0$ is the room temperature. The most efficient channel for the heat exchange is water vaporization from the walls by energetic molecules. For such molecules the time needed to reach the wall is estimated as $\tau_h\sim R_{in}^2/D_{gg}$, where $D_{gg}$ is the self-diffusion coefficient of water vapor. Under normal conditions $D_{gg}\approx 2.8\times 10^{-5}\:$ m$^2$/s and we find $\tau_h \sim 10\:\mu$s; this time scales with temperature as $(T/T_0 )^{1/2}$. Spontaneous combustion in micro- and nanobubbles happens in a few microseconds \cite{Svetovoy2016}. Since the heat-exchange and combustion times are comparable, the pressure and temperature in the initial bubble immediately after the combustion are defined by the dynamics, but in any case they are expected to be high. The source of the sound is the pressure surge.
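Both the pressure surge and the heat-exchange time can be verified directly. The sketch below assumes $T_0\approx 300\:$K and writes the ratio as $2|\Delta H|/3RT_0$, with $R$ the gas constant:

```python
R_gas = 8.314        # gas constant, J/(mol K)
T0 = 300.0           # assumed room temperature, K
dH = 242e3           # |Delta H| of water formation, J/mol
R_in = 22e-6         # initial bubble radius, m
D_gg = 2.8e-5        # self-diffusion coefficient of water vapor, m^2/s

p_ratio = 2 * dH / (3 * R_gas * T0)   # P/P0 if all heat stays in the vapor
tau_h = R_in**2 / D_gg                # diffusive heat-exchange time scale, s
print(f"P/P0 = {p_ratio:.1f}, tau_h = {tau_h * 1e6:.0f} us")
```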
During the inflation phase the pressure decreases and, by inertia, drops below the atmospheric pressure, as happens, for example, in cavitation. At the final stage the bubble shrinks and disappears. The time scale for heat exchange explains why the useful work observed in this study is much larger than that in the microsystem \cite{Postnikov2016}. In the latter case the initial bubble had a radius of $R_{in}\approx 5\:\mu$m. Due to the smaller size the heat exchange happens much faster and the pressure and temperature reach smaller values.
Alternatively, one could try to explain the observed phenomena by heating of the electrolyte by the Faraday current (Joule heating). A vapor bubble could be formed due to this heating. However, heating of the electrolyte would result in an increase of the current. This effect can be used as a built-in thermometer \cite{Svetovoy2014}. What is observed is only a small temperature increase between the clicks, as explained in Supplementary Information, Fig. 2S. In this scenario there is no driving force for the bubble inflation, since the pressure in the bubble cannot reach a high value. As an additional scenario one could imagine that the current passing through a small liquid volume vaporizes this volume, producing the initial bubble with a high pressure inside. The number of molecules in the bubble is estimated as $N_v < E_{ev}/\Delta H_v\approx 4.4\times 10^{12}$, where $E_{ev}\approx 0.3\:\mu$J is the observed energy and $\Delta H_v=41\:$kJ/mol is the enthalpy of water vaporization. We take only the upper limit because the heating of the molecules was neglected. For this $N_v$ the radius of the vaporized liquid sphere must be $R_l<3.2\:\mu$m. The experimental value of the Faraday current density is estimated as $j_F\approx 200\:$A/cm$^2$. It is similar to that used for normally functioning microdevices \cite{Svetovoy2013,Svetovoy2014}. This current can transfer the energy $E_{ev}\approx 0.3\:\mu$J to $N_v$ molecules in a time $\tau_E$, which can be found from the relation $\pi R_l^2 j_F U\tau_E = E_{ev}$, where $U=6.75\:$V. It gives $\tau_E > 700\:\mu$s, which is considerably larger than the time needed to produce the initial bubble, which is smaller than $10\:\mu$s.
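The arithmetic behind this alternative scenario can be checked as follows (water density and molar mass are standard values; the rest are taken from the text):

```python
import math

N_A = 6.02214076e23       # Avogadro constant, 1/mol
E_ev = 0.3e-6             # observed energy, J
dH_v = 41e3               # enthalpy of water vaporization, J/mol
rho, M = 1000.0, 18e-3    # water density (kg/m^3) and molar mass (kg/mol)
j_F = 200.0 * 1e4         # Faraday current density, A/m^2 (200 A/cm^2)
U = 6.75                  # applied voltage, V

N_v = E_ev * N_A / dH_v                      # upper bound on vaporized molecules
V_l = N_v * M / (N_A * rho)                  # liquid volume holding N_v molecules
R_l = (3 * V_l / (4 * math.pi)) ** (1 / 3)   # radius of that liquid sphere
tau_E = E_ev / (math.pi * R_l**2 * j_F * U)  # time for the current to supply E_ev
print(f"N_v = {N_v:.1e}, R_l = {R_l * 1e6:.1f} um, tau_E = {tau_E * 1e6:.0f} us")
```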
For application in microcombustors it is not efficient to produce the exploding gas electrochemically. Instead, one has to provide appropriate conditions in a microvolume with premixed gases. If the volume is closed by a flexible membrane, the explosion will move the membrane up. Therefore, a critical step is to understand why the gas ignites spontaneously in MBs produced electrochemically and to ensure the right conditions in the microvolume.
In conclusion, we observed highly energetic events in water electrolysis produced by short voltage pulses of alternating polarity. The process is accompanied by sound clicks, which occur synchronously with the current drops in the system. The observations were explained by spontaneous combustion of H$_2$ and O$_2$ gases in an initial bubble with a diameter of about $40\:\mu$m. The combustion produces a pressure jump, which generates the sound and drives the bubble inflation. An unusual reaction mechanism must drive the process to provide spontaneous ignition at room temperature. A significant part of the combustion energy was transformed into mechanical work instead of heat. This opens a practical way to build a truly microscopic internal combustion engine.
\section*{Methods}
Standard printed circuit boards (PCBs) with Cu foil are used for bubble generation in the AP regime. One PCB contains four pairs of electrodes and contact lines, as shown in Fig.$\:$\ref{fig:setup}. The typical size of the circular structure is $1\:$mm, the line width is $150\:\mu$m, and the line thickness is $50\:\mu$m. Only the circular structures are in electrical contact with the electrolyte; the rest is covered with a standard insulating coating used for PCBs.
The PCB device is placed in a Petri dish and covered by 2-3 mm of the electrolyte (a $1\:$mol/l solution of Na$_2$SO$_4$ in water). One electrode is kept grounded while the other is at an alternating potential with an amplitude of $6-7\:$V. A relatively high potential is used to produce a significant supersaturation with H$_2$ and O$_2$ gases. The voltage pulses were generated by a PicoScope 5000 and amplified 15 times. The current in the system and the produced sounds were recorded using different channels of the PicoScope. On a long time scale ($\geq 1\:$s) Windows Sound Recorder was used. The sound was recorded in air with a microphone installed $10-12\:$cm from the Petri dish. One of the sound files is provided in Supplementary Information. The visual dynamics of the system was analysed with a fast camera (Photron Fastcam SA1.1) at frame rates up to $100\,000\:$fps. A detailed analysis of the process was performed with a Polytec MSA-400 vibrometer.
\section{\label{sec:intr}Introduction}
A large number of interesting dynamic systems can be studied and
modeled by first representing them as networks and then considering
specific dynamic models. Because the latter depend greatly on the
connectivity of the network, it becomes critical to obtain good
characterizations of the respective connectivity structure. Such a
characterization is even more important in cases when the dynamics is
not considered, e.g.\ while analyzing a frozen instance of systems
such as the Internet and protein-protein interaction
networks. Therefore, it is hardly surprising that a great deal of
effort (e.g.\ \cite{Costa:2007:survey}) has been invested in
developing new measurements capable of providing meaningful and
comprehensive characterization of the connectivity structure of
complex networks.
Traditional measurements of the topology of complex networks include
the classical vertex degree and the clustering coefficient (e.g.\
\cite{Newman:2003:survey}). Both these features are defined for each
vertex in the network and express the connectivity only at the
immediate neighborhood of that reference vertex. Other measurements
such as the minimum shortest path and betweenness centrality reflect
the connectivity of broader portions of the network. Hierarchical
measurements (e.g.\ \cite{Costa04:PRL, Costa06:EPJ, Costa06:JSP,
Andrade05:PRL}) such as the hierarchical vertex degree and
hierarchical clustering coefficient, also applicable to individual
reference vertices, have been proposed in order to reflect the
connectivity properties along successive hierarchical neighborhoods
around the reference vertex. Another interesting family of
measurements of the topological properties of complex networks
involves the quantification of the frequency of basic
\emph{motifs} in the network (e.g.\ \cite{ShenOrr:2002, Milo:2002,
Alon:2007:book, Lodato2007}). Motifs are subgraphs corresponding to
the simplest structural elements found in networks, in the sense of
involving small number of vertices and edges. Examples of motifs
include feed-forward loops, cycles of order three and bi-fans.
Chains of nodes in networks have received some preliminary attention.
Costa~\cite{Costa2004vaf} studied the effect of chains on the fractal
dimension as revealed by dilations along
networks. Kaiser and Hilgetag~\cite{kaiser2004evn} studied the
vulnerability of networks involving linear chains with an open
extremity. In another work~\cite{kaiser2004sgr}, they addressed the
presence of this same type of motifs in a sparse model of spatial
network. More recently, Levnaji\'c and Tadi\'c~\cite{Levnajic}
investigated the dynamics in simple networks including linear chains
of nodes.
Although several measurements are now available in the literature,
their application will always be strongly related to each specific
problem. In other words, there is no definitive or complete set of
measurements for the characterization of the topology of complex
networks. For instance, in case one is interested in the community
structures, measurements such as the modularity are more likely to
provide valuable and meaningful information~\cite{Newman04:PRE}. In
this sense, specific new problems will likely continue to motivate
novel, especially suited, measurements. The reader is referred to the
survey~\cite{Costa:2007:survey} for a more extensive discussion of
measurements choice and applications.
The current work proposes a new, complementary way to characterize the
connectivity of complex networks in terms of a special class of motifs
defined by \emph{chains} of vertices, which are motifs composed by
vertices connected in a sequential way, where the internal vertices
have degree two. These motifs include \emph{cords}, \emph{tails},
\emph{rings} and \emph{handles}. While tails and handles have at least
one extremity connected to the remainder of the network, cords and
rings are disconnected, being composed by groups of vertices connected
in a sequential way. Additional motifs such as two or more handles
connected to the remainder of the network, namely $n$-handles with $n
\ge 2$, can also be defined, but they are not considered in this
work.
Figure~\ref{Fig:typechains} illustrates six types of chains, namely
(a) a cord, (b) a tail, (c) a two-tail, (d) a ring, (e) a handle and
(f) an $n$-handle. The main difference between the traditional motifs
and those defined and characterized in this article is that the latter
may involve large number of vertices and edges.
\begin{figure}
\centerline{\includegraphics[width=0.95\linewidth]{typechains.eps}}
\caption{The chains can be classified into different types, depending
on the connections among their external vertices. Six types of
chains are shown (dark gray vertices): (a) a cord, (b) a tail, (c)
a two-tail, (d) a ring, (e) a handle and (f) an $n$-handle.}
\label{Fig:typechains}
\end{figure}
The main motivation behind the introduction of the concept of chains
in complex networks provided in this article is that such a structure
is odd in the sense that it can be conceptualized as an edge
containing a series of intermediate vertices which make no
branches. In several aspects, such as in flow, the incorporation of
such intermediate vertices along an edge will imply virtually no
change on the overall dynamics of that substructure of the network. In
other words, the same flow capacity will be offered by either the
isolated edge or its version incorporating a series of intermediate
vertices. Interestingly, vertices with only two neighbors ---
henceforth called \emph{articulations} --- seem to have a rather
distinct nature and role in complex networks, which suggests that they
may have distinct origins. For instance, as explored further in this
work, articulations seem to appear in networks generated by sequential
processes (e.g.\ word adjacency in books), but can also be a
consequence of incompleteness of the building process of networks.
The latter possibility is experimentally investigated in this work by
considering incompletely sampled versions of network models.
In addition to introducing the concept and a theory of chains and
articulations in complex networks and presenting means for their
identification, the present work also illustrates the potential of
considering the statistics of cords, tails, and handles for
characterizing real-world networks (social, information,
technological, word adjacency in books, and biological networks).
This article starts by presenting the definition of chains and their
categories (i.e.\ cords, tails, and handles), and proceeds by
developing an analytical investigation of the density of chains in
random and scale free models. Next, an algorithm for the
identification of such motifs is described, followed by a discussion
of the obtained chain statistics. The application of such a
methodology considers the characterization of real-world complex
networks in terms of chain motifs.
\section{Chains, cords, tails, handles, and rings}
\label{sec:def}
Given a network with $N$ vertices, consider a sequence
$(n_1,n_2,\ldots,n_{m+1})$ of $m+1$ vertices $n_i.$ If the sequence
has the following properties:
\begin{enumerate}
\item There is an edge between vertices $n_i$ and $n_{i+1}$, $1 \le i
\le m$;
\item Vertices $n_1$ and $n_{m+1}$ have degree not equal to 2; and
\item Intermediate vertices $n_i$, $2\le i\le m$, if any, have degree $2$;
\end{enumerate}
we call the sequence a \emph{chain} of length $m$. Vertices $n_1$ and
$n_{m+1}$ are called the \emph{extremities} of the chain.
Chains can be classified in four categories ($k_{n_i}$ is the degree
of vertex $n_i$):
\begin{description}
\item[Cords] are chains with $k_{n_1}=1$ and $k_{n_{m+1}}=1$.
\item[Handles] are chains with $k_{n_1}>2$ and $k_{n_{m+1}}>2$.
\item[Tails] are chains with $k_{n_1}=1$ and $k_{n_{m+1}}>2$ (or
equivalently $k_{n_1}>2$ and $k_{n_{m+1}}=1$).
\item[Rings] (of length $m$) are sequences $(n_1,n_2,\ldots,n_{m})$ of
$m$ vertices where the degree of each vertex is $k_{n_i}=2,\,\, 1\le
i \le m$, $n_i$ is adjacent to $n_{i+1}$ (for $1\le i \le m-1$), and
$n_m$ is adjacent to $n_1$.
\end{description}
Rings are a special case of chains in which there are no extremities;
they are included in the chain classification only for completeness.
Including the trivial cases with $m=1$, it is easy to see that each
vertex of degree $1$ is at an extremity of a cord or a tail and each
vertex of degree greater than $2$ is at an extremity of a tail or a
handle. Note that the definition of handles includes the degenerate
case where the extremities are the same vertex: $n_1 = n_{m+1}.$
With these definitions and writing $N_C$, $N_H$, $N_T$, and $N_R$ for
the total number of cords, handles, tails, and rings, respectively,
$N(k)$ for the number of vertices of degree $k$ we have:
\begin{eqnarray}
N(1) & = & 2 N_C + N_T, \label{eq:deg1}\\
\sum_{k>2}kN(k) & = & 2 N_H + N_T. \label{eq:degk}
\end{eqnarray}
To evaluate the number of vertices of degree $2$, we introduce the
notation $N_C(m)$ for the number of cords of length $m$, and similarly
$N_H(m)$ for handles, $N_T(m)$ for tails, and $N_R(m)$ for rings. Each
chain of length $m$ has $m-1$ vertices of degree $2$, and each ring of
length $m$ has $m$, giving:
\begin{small}
\begin{equation}
\label{eq:deg2}
N(2) = \sum_{m=1}^{\infty} \left[ m N_R(m) +
(m-1)\left( N_C(m)+N_H(m)+N_T(m)\right) \right]
\end{equation}
\end{small}
Isolated vertices (vertices with degree $0$) have no effect on such
structures, and we assume hereafter that the network has no
isolated nodes.
The chains can also be classified according to the nature of their
connections as in Figure~\ref{fig:directions}. In undirected networks,
the chains are said \emph{undirected}
(Figure~\ref{fig:directions}). In directed networks, on the other
hand, the chains can be classified into three types:
\begin{enumerate}
\item \emph{Directed chains} are those whose arcs of inner vertices
follow just one direction, i.e.\ there is a directed path from one
extremity to the other (Figure~\ref{fig:directions}(b)).
\item \emph{Undirected chains} are defined as for undirected networks,
which have undirected arcs between inner vertices
(Figure~\ref{fig:directions}(a)). An undirected arc between vertices
$i$ and $j$ exists if there is an arc from $i$ to $j$ and another from
$j$ to $i$.
\item \emph{Mixed chains} are those with any other combination of arc
directions like in Figure~\ref{fig:directions}(c).
\end{enumerate}
\begin{figure}
\centerline{\includegraphics[width=0.6\linewidth]{figdirections.eps}}
\caption{The chain can be (a) undirected, (b) directed and (c)
mixed. Mixed chains have arcs in any direction. Note that (c) and
(d) are equivalent.}
\label{fig:directions}
\end{figure}
In our analysis we consider only undirected networks, but the extension
to directed networks is straightforward.
\section{Algorithm for chain identification}
\begin{figure*}
\includegraphics[scale=1]{algorithm.eps}
\caption{The main steps to identify handles of size greater than 2
in networks include: (i) choose a vertex of degree 2 and add it to a
list (dark gray vertex); (ii) go to its neighbors and also add them
if they have degree 2; (iii)
go to the next neighbors, excluding the vertices already added in
the list, and also add them if they have degree 2; (iv) stop adding
vertices to the list after finding two vertices of degree greater
than 2. In this case, the size of the obtained handle is 6. The same
procedure can also be applied to find cords and tails, but at least
one extremity should have degree equal to 1.}
\label{fig:alg}
\end{figure*}
The algorithm to identify chains of vertices includes two steps, one
for finding chains of size greater than 1 and the other for finding
chains of unit size. The first step is illustrated in
Figure~\ref{fig:alg} and described as follows:
\begin{small}
\begin{itemize}
\item input: graph G
\item output: list containing all chains of size greater than 1
\item compute the degree of the vertices in G and store them in a list K
\item find vertices $i$ such that $k_i = 2$, $k_i \in K$, and store them in a
  list Q2
\item while Q2 is not empty do
\begin{itemize}
\item remove a vertex (A) from Q2 and then insert its first
neighboring vertex (B), A, and its second neighboring vertex (C)
in a queue P (in this order)
  \item while the first or the last element of P has degree equal
    to 2 and these two elements are not the same vertex do
\begin{itemize}
\item let D be the neighboring node of the first element in P.
In case D is not already in P, include it into that queue in
the first position.
\item if D is in Q2, remove it.
\item let E be the neighboring node of the last element in P.
In case E is not already in P, include it into that queue in
the last position.
\item if E is in Q2, remove it.
\end{itemize}
\item insert P in a list L and clear P
\end{itemize}
\end{itemize}
\end{small}
The list L contains all chains of size greater than 1. They can now
be classified into cords, tails, and handles according to the degree
of the first and last element of the corresponding queue.
The second step, required for identifying the chains of unit length,
is as follows:
\begin{small}
\begin{itemize}
\item input: graph G, list K and list L
\item output: list of cords, tails, and handles of unit size
\item find all vertices of degree equal to 1 which were not in L and
store them in a list Q1
\item while Q1 is not empty do
\begin{itemize}
 \item remove a vertex (A) from Q1 and insert it in a queue P
 \item if the neighboring node of A also has degree equal to 1,
   remove it from Q1, insert it in P, and insert P in a list C1
 \item else insert the neighbor of A in P and insert P in a list T1
\end{itemize}
\item include all pairs of connected vertices which are not in L,
C1 or T1 to a list H1
\end{itemize}
\end{small}
The lists C1, T1, and H1 contain, respectively, all cords, tails, and
handles of unit size in the network.
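The two steps above can be combined into a compact reference implementation. The following Python function is a hypothetical sketch (not the authors' original code): step 1 walks outward from every unvisited degree-2 vertex until the extremities (or a closed ring) are found, and step 2 collects the unit-length chains; the resulting paths are classified into cords, tails, handles, and rings by the degrees of their extremities.

```python
from collections import defaultdict

def find_chains(edges):
    """Return the cords, tails, handles, and rings of an undirected graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nb) for v, nb in adj.items()}

    def classify(path):
        a, b = deg[path[0]], deg[path[-1]]
        if a == 1 and b == 1:
            return 'cord'
        if a > 2 and b > 2:
            return 'handle'
        return 'tail'

    out = {'cord': [], 'tail': [], 'handle': [], 'ring': []}
    seen = set()                      # degree-2 vertices already assigned
    for s in list(adj):
        if deg[s] != 2 or s in seen:
            continue
        n1, n2 = adj[s]
        # walk forward from s through n2 until an extremity or a ring closure
        path, prev, cur, ring = [s], s, n2, False
        while True:
            path.append(cur)
            if cur == s:              # walk closed on itself: a ring
                ring = True
                break
            if deg[cur] != 2:         # reached an extremity
                break
            prev, cur = cur, (adj[cur] - {prev}).pop()
        if ring:
            out['ring'].append(path[:-1])
            seen.update(path[:-1])
            continue
        # walk backward from s through n1
        back, prev, cur = [], s, n1
        while True:
            back.append(cur)
            if deg[cur] != 2:
                break
            prev, cur = cur, (adj[cur] - {prev}).pop()
        path = back[::-1] + path
        seen.update(v for v in path if deg[v] == 2)
        out[classify(path)].append(path)

    # step 2: unit-length chains (both endpoints of degree different from 2)
    for u, v in edges:
        if deg[u] != 2 and deg[v] != 2:
            out[classify([u, v])].append([u, v])
    return out

# toy network: a cord 1-2-3-4, a triangle 10-11-12 with a tail 10-20-21,
# a handle 11-30-12, and a ring 40-41-42-43
edges = [(1, 2), (2, 3), (3, 4),
         (10, 11), (10, 12), (11, 12), (10, 20), (20, 21),
         (11, 30), (30, 12),
         (40, 41), (41, 42), (42, 43), (43, 40)]
res = find_chains(edges)
print({k: len(v) for k, v in res.items()})
# {'cord': 1, 'tail': 1, 'handle': 4, 'ring': 1}  (the three triangle edges
# are unit handles, since all their extremities have degree greater than 2)
```

Note that the counts satisfy the identity $N(1) = 2N_C + N_T$ of Section~\ref{sec:def}: the toy graph has three vertices of degree 1, one cord, and one tail.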
\section{Statistics} \label{sec:stat}
Consider an ensemble of networks completely determined by the
degree-degree correlations $P(k,k')$~\footnote{For such an ensemble to
be possible, connections from a vertex to itself (self-connections)
and multiple connections between two vertices must be allowed, in
contrast to many network models. Such self- and multiple connections
will be rare provided the network is sufficiently large.}. Given
$P(k,k')$ and the number of vertices in the network, we want to
evaluate the number of each chain type and rings. The degree
distribution $P(k)$ and the conditional neighbor degree distribution
$P(k'|k)$, i.e.\ the probability that a neighbor of a vertex with
degree $k$ has degree $k'$, are easily computed:
\begin{eqnarray}
P(k) & = & \frac{\sum_{k'}P(k,k')/k}{\sum_{k',k''}P(k',k'')/k'}, \label{eq:deg}\\
P(k'|k) & = & \frac{{\langle k \rangle} P(k,k')}{k P(k)}, \label{eq:cond}
\end{eqnarray}
where ${\langle k \rangle} = \sum_k k P(k)$ is the average degree of the network.
\subsection{Rings} \label{sec:rings}
For a ring of length $m$, we start at a vertex of degree $2$, go
through $m-1$ vertices of degree $2$ and reach back the original
vertex. Each transition from a vertex of degree $2$ to the other,
with the exception of the last one that closes the ring, has
probability $P(2|2);$ the closing of the ring requires reaching one of
the vertices of degree $2$ (probability $P(2|2)$) and among them,
exactly the start one (probability $1/(NP(2)$). If we start from all
vertices of degree $2$, each ring will be counted $m$ times, resulting
in:
\begin{equation}
\label{eq:rings}
N_R(m) = \frac{1}{m} P(2|2)^m.
\end{equation}
This expression is valid only for the case of small $m$ and large $N$,
such that the vertices already included in the ring do not affect
significantly the conditional probabilities. Such an approximation is
used throughout this work. Note that, under this circumstance, when
computing Eq.~(\ref{eq:deg2}), $N_R(m)$ is of the order of the
approximation error in the expressions of $N_C(m), N_T(m),$ and
$N_H(m).$
\subsection{Cords}
\label{sec:cords}
Starting from a vertex of degree $1$, a cord is traversed by following
through a set of vertices of degree $2$ until reaching a vertex of
degree $1$ that ends the cord. A cord of length $1$ has no
intermediate vertices; starting in a vertex of degree $1$, the
probability of finding a cord of length 1 is therefore given by
$P(1|1).$ For a cord of length $2$, the edge from the initial vertex
should go through a vertex of degree $2$ before arriving at a new
vertex of degree $1$, giving $P(2|1)P(1|2).$ For lengths greater than
$2$, each new intermediate vertex is reached with probability
$P(2|2)$, and therefore we have $P(2|1)P(2|2)^{m-2}P(1|2)$\footnote{In
these expressions and the following, we assume that the network is
sufficiently large, such that the inclusion of some vertices in the
chain does not affect the probabilities of reaching new vertices in
the next step.} for a cord of length $m$. Considering that there are
$NP(1)$ vertices of degree $1$ in the network, and that each cord is
counted twice when starting from every such vertex, we arrive at:
\begin{equation}
\label{eq:cords}
N_C(m) = \left\{
\begin{array}{ll}
\frac{1}{2}NP(1)P(1|1) & \mbox{if $m = 1$,}\\
\frac{1}{2}NP(1)P(2|1)P(2|2)^{m-2}P(1|2) & \mbox{if $m > 1$.}\\
\end{array}
\right.
\end{equation}
\subsection{Tails}
\label{sec:tails}
The number of tails can be computed similarly. We need either to start
at a vertex with degree $1$ and reach a vertex of degree greater than
$2$ or vice versa; only one of these possibilities must be
considered. We arrive at:
\begin{equation}
\label{eq:tails}
N_T(m) = \left\{
\begin{array}{ll}
NP(1)P(>2|1) & \mbox{if $m = 1$,}\\
NP(1)P(2|1)P(2|2)^{m-2}P(>2|2) & \mbox{if $m > 1$,}\\
\end{array}
\right.
\end{equation}
where the notation $P(>2|k) = \sum_{k'>2}P(k'|k)$ is used.
\subsection{Handles}
\label{sec:handles}
A handle starts in a vertex of degree $k>2$ and ends in a vertex of
degree $k'>2.$ Starting from one of the $NP(k)$ vertices of degree
$k>2$ of the network, there are $k$ possibilities to follow a chain,
each characterized by a sequence of vertices of degree $2$ until
reaching a vertex of degree $k'>2.$ This gives a total of
$NkP(k)P(>2|k)$ handles of length $1$ and
$NkP(k)P(2|k)P(2|2)^{m-2}P(>2|2)$ handles of length $m>1.$ Summing up
for all values of $k>2$, using $\sum_{k}kP(k)P(k'|k) = k'P(k'),$ which
can be deduced from relations~(\ref{eq:deg}) and~(\ref{eq:cond}), and
considering that each handle is counted twice when starting from all
nodes of degree greater than 2, we have:
\begin{widetext}
\begin{equation}
\label{eq:handles}
N_H(m) = \left\{
\begin{array}{ll}
\frac{1}{2}N\left\{{\langle k \rangle} -
P(1)[2-P(1|1)-P(2|1)]- 2P(2)[2-P(1|2)-P(2|2)]\right\}
& \mbox{if $m = 1$,} \\
\frac{1}{2}N[2P(2)-P(1)P(2|1)-2P(2)P(2|2)]P(2|2)^{m-2}P(>2|2)
& \mbox{if $m > 1$.}\\
\end{array}
\right.
\end{equation}
\end{widetext}
Using Equations~(\ref{eq:cords}), (\ref{eq:tails}),
and~(\ref{eq:handles}) we have
\begin{displaymath}
\sum_{m=1}^{\infty} \left[ (m-1) \left( N_C(m)+N_H(m)+N_T(m)\right)
\right] = N(2).
\end{displaymath}
Comparing this result with Equation~(\ref{eq:deg2}) we see that the
rings are already counted in the number of chains, as hinted at the
end of Section~\ref{sec:rings}. This happens because, while computing
the probability of chains, we ignore the fact that the presence of
rings decreases the number of possible chains. For a large enough
network, the number of rings should be small compared with the number
of the other structures, validating the approximation.
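This consistency can be verified numerically. The sketch below evaluates Equations~(\ref{eq:cords}), (\ref{eq:tails}), and~(\ref{eq:handles}) for an uncorrelated Poisson ensemble (an illustrative choice with ${\langle k \rangle}=2$, using the uncorrelated form $P(k'|k)=k'P(k')/{\langle k \rangle}$) and checks that the weighted sum reproduces $N(2)/N = P(2)$:

```python
import math

kavg = 2.0                           # example average degree

def P(k):                            # Poisson degree distribution
    return math.exp(-kavg) * kavg**k / math.factorial(k)

def Pc(kp, k):                       # uncorrelated P(k'|k) = k'P(k')/<k>
    return kp * P(kp) / kavg

P1, P2 = P(1), P(2)
Pgt2_2 = 1 - Pc(1, 2) - Pc(2, 2)     # P(>2|2)

def n_c(m):                          # cords per vertex (N = 1), m >= 2
    return 0.5 * P1 * Pc(2, 1) * Pc(2, 2) ** (m - 2) * Pc(1, 2)

def n_t(m):                          # tails per vertex, m >= 2
    return P1 * Pc(2, 1) * Pc(2, 2) ** (m - 2) * Pgt2_2

def n_h(m):                          # handles per vertex, m >= 2
    return (0.5 * (2 * P2 - P1 * Pc(2, 1) - 2 * P2 * Pc(2, 2))
            * Pc(2, 2) ** (m - 2) * Pgt2_2)

# the m = 1 terms carry weight (m - 1) = 0, so the sum starts at m = 2
total = sum((m - 1) * (n_c(m) + n_t(m) + n_h(m)) for m in range(2, 200))
print(total, P(2))                   # both equal N(2)/N
```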
Note that all expressions are proportional to $P(2|2)^m$, and
therefore large chains should be exponentially rare, if they are not
favored by the network growth.
\section{Theoretical analysis for uncorrelated networks} \label{sec:uncorr}
For uncorrelated networks, where the degree at one side of an edge is
independent of the degree at the other side of the edge, $P(k,k')$ can
be factored as
\begin{equation}
\label{eq:pkkuncorr}
P(k,k') = \frac{kP(k)k'P(k')}{{\langle k \rangle}^2}.
\end{equation}
The conditional probability is simplified to
\begin{equation}
\label{eq:conduncorr}
P(k'|k) = \frac{k'P(k')}{{\langle k \rangle}}.
\end{equation}
Using this last expression, we have for uncorrelated networks
\begin{eqnarray}
\label{eq:nruncorr}
N_R(m) & = & \frac{1}{m} \left[\frac{2P(2)}{{\langle k \rangle}}\right]^m\\
\label{eq:ncuncorr}
N_C(m) & = & \frac{2^{m-2}NP(1)^2 P(2)^{m-1}}{{\langle k \rangle}^m} \\
\label{eq:ntuncorr}
N_T(m) & = & NP(1)\left[\frac{2P(2)}{{\langle k \rangle}}\right]^{m-1}
\alpha\\
\label{eq:nhuncorr}
N_H(m) & = & \frac{N{\langle k \rangle}}{2}\left[ \frac{2P(2)}{{\langle k \rangle}}\right]^{m-1}
\alpha^2.
\end{eqnarray}
where $\alpha = \left[1 - \frac{P(1)}{{\langle k \rangle}} - \frac{2P(2)}{{\langle k \rangle}}\right]$.
\subsubsection{Erd\H{o}s-R\'{e}nyi networks} \label{sec:er}
Erd\H{o}s-R\'{e}nyi networks have no degree correlations and a
Poissonian degree distribution:
\begin{equation}
\label{eq:pker}
P(k) = \frac{e^{-{\langle k \rangle}}{\langle k \rangle}^k}{k!}.
\end{equation}
This gives the following expressions for the number of rings, cords,
tails and handles:
\begin{eqnarray}
\label{eq:ernr}
N_R(m) & = & \frac{{\langle k \rangle}^m e^{-m {\langle k \rangle}}}{m}\\
\label{eq:ernc}
N_C(m) & = & \frac{N}{2} {\langle k \rangle}^m e^{-(m+1){\langle k \rangle}}\\
\label{eq:ernt}
N_T(m) & = & N {\langle k \rangle}^m e^{-(m+1){\langle k \rangle}} \varepsilon\\
\label{eq:ernh}
N_H(m) & = & \frac{N}{2} {\langle k \rangle}^m e^{-(m+1){\langle k \rangle}} \varepsilon^2
\end{eqnarray}
where $\varepsilon=\left(e^{\langle k \rangle} - {\langle k \rangle} - 1\right)$.
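As a consistency check (a numerical sketch, not part of the original analysis), these Poisson closed forms can be compared term by term with the uncorrelated expressions of the previous subsection:

```python
import math

K = 1.947                        # average degree, as in the experiments
N = 1e6                          # number of vertices
P1 = K * math.exp(-K)            # Poisson P(1)
P2 = K**2 * math.exp(-K) / 2     # Poisson P(2)
eps = math.exp(K) - K - 1
alpha = 1 - P1 / K - 2 * P2 / K  # equals exp(-K) * eps for a Poisson P(k)

for m in range(1, 10):
    x = (2 * P2 / K) ** (m - 1)
    nc_u = 2 ** (m - 2) * N * P1**2 * P2 ** (m - 1) / K**m   # uncorrelated form
    nc_er = 0.5 * N * K**m * math.exp(-(m + 1) * K)          # ER closed form
    nt_u = N * P1 * x * alpha
    nt_er = N * K**m * math.exp(-(m + 1) * K) * eps
    nh_u = 0.5 * N * K * x * alpha**2
    nh_er = 0.5 * N * K**m * math.exp(-(m + 1) * K) * eps**2
    assert math.isclose(nc_u, nc_er)
    assert math.isclose(nt_u, nt_er)
    assert math.isclose(nh_u, nh_er)
print("Poisson closed forms agree with the uncorrelated expressions")
```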
Figure~\ref{fig:poisson} shows the comparison of the results for
networks with $N=10^6$ vertices and $L=972\,941$ edges (this number of
edges was chosen to give the same average degree as for the scale-free
network discussed below). A total of 1\,000 realizations of the
model were used to compute the averages and standard deviations.
\begin{figure}
\includegraphics[scale=0.75]{cords-poisson.eps} (a) \vspace{0.2cm}\\
\includegraphics[scale=0.75]{tails-poisson.eps} (b) \vspace{0.2cm}\\
\includegraphics[scale=0.75]{handles-poisson.eps} (c)\\
\caption{Number of cords (a), tails (b), and handles (c) of
different sizes in the model with Poisson degree distribution.
The points are the averaged measured values (each of the error bars
corresponds to one standard deviation), the lines are the values computed
analytically. Note that the abrupt increase of the width of the
error bars is a consequence of the logarithmic scale.}
\label{fig:poisson}
\end{figure}
\subsubsection{Scale-free networks} \label{sec:sf}
We now proceed to uncorrelated scale-free networks with degree
distribution given as
\begin{equation}
\label{eq:pksf}
P(k) = \frac{k^{-\gamma}}{\zeta(\gamma)},
\end{equation}
where $\gamma$ is the power law coefficient and $\zeta(x)$ is the
Riemann zeta function. This distribution describes a strictly
scale-free network, with the power law valid for all values of $k$ and
a minimum $k_{\mathrm{min}} = 1.$ The results are therefore not
directly applicable to scale-free real networks or models. The
average degree is ${\langle k \rangle} = \zeta(\gamma-1)/\zeta(\gamma).$ The resulting
expressions are:
\begin{eqnarray}
\label{eq:sfnr}
N_R(m) & = & \frac{2^{-m(\gamma-1)}}{m\zeta(\gamma-1)^m}\\
\label{eq:sfnc}
N_C(m) & = & \frac{N}{2} \frac{2^{-(m-1)(\gamma-1)}}
{\zeta(\gamma)\zeta(\gamma-1)^m}\\
\label{eq:sfnt}
N_T(m) & = & N
\frac{2^{-(m-1)(\gamma-1)}}{\zeta(\gamma)\zeta(\gamma-1)^m}
\beta \\
\label{eq:sfnh}
N_H(m) & = & \frac{N}{2}
\frac{2^{-(m-1)(\gamma-1)}}{\zeta(\gamma)\zeta(\gamma-1)^m}
\beta^2
\end{eqnarray}
where $\beta=\zeta(\gamma-1)-1-2^{-(\gamma-1)}$.
Figure~\ref{fig:sf} shows the comparison of the results for networks
with $N=10^6$ vertices and $\gamma=2.5$. A total of 1\,000
realizations of the model were used to compute the averages and
standard deviations. A comparison with Figure~\ref{fig:poisson} shows
that the Poisson degree distribution with the same average degree
presents larger chains. This is due to the relation between the
constants in the exponential dependency with $m$: $\langle k
\rangle/e^{\langle k \rangle} \approx 0.278$ for the Poisson model and
$2^{1-\gamma}/\zeta(\gamma-1)\approx 0.135$ for the scale-free model.
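The two decay constants quoted above can be checked numerically. The partial-sum estimate of the Riemann zeta function below is our own helper (with an Euler-Maclaurin tail correction), not a library routine:

```python
import math

def zeta(s, n=2000):
    """Partial-sum estimate of the Riemann zeta function for s > 1,
    with a two-term Euler-Maclaurin tail correction (ample accuracy
    here: the neglected term is of order s * n**(-s-1) / 12)."""
    return (sum(k ** -s for k in range(1, n + 1))
            + n ** (1 - s) / (s - 1) + 0.5 * n ** -s)

g = 2.5
kmean = zeta(g - 1) / zeta(g)          # <k> = zeta(1.5)/zeta(2.5)
c_poisson = kmean * math.exp(-kmean)   # decay constant, Poisson case
c_sf = 2 ** (1 - g) / zeta(g - 1)      # decay constant, scale-free case
```

With $\gamma=2.5$ this reproduces $\langle k \rangle \approx 1.947$ and the constants $0.278$ and $0.135$ of the text.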
\begin{figure}
\includegraphics[scale=0.75]{cords-sf.eps} (a) \vspace{0.2cm}\\
\includegraphics[scale=0.75]{tails-sf.eps} (b) \vspace{0.2cm}\\
\includegraphics[scale=0.75]{handles-sf.eps} (c)\\
\caption{Number of cords (a), tails (b), and handles (c) of
different sizes in the model with scale-free degree distribution.
The points are the averaged measured values (each of the
error bars corresponds to one standard deviation), the lines
are the values computed analytically.}
\label{fig:sf}
\end{figure}
The results presented in this section addressed the issue of
validating the theory for analytical models. In
Section~\ref{sec:appl}, we will evaluate the theory while
considering real-world networks.
\section{\label{sec:netdata}Real-world networks}
It is known that networks belonging to the same class may share
similar structural properties~\cite{Milo:2002,Newman:2003}. So, to
study the presence of handles in networks, we considered five types of
complex networks, namely social networks, information networks, word
adjacency networks in books, technological networks, and biological
networks.
\subsection{Social networks}
Social networks are formed by people or groups of people (firms, teams,
economic classes) connected by some type of interaction, such as
friendship, business relationships between companies, collaboration in
science, and participation in movies or sports
teams~\cite{Newman:2003:survey}, to cite just a few examples. Below we
describe the social networks considered in our analysis.
\begin{trivlist}
\item \textbf{Scientific collaboration networks} are formed by
scientists who are connected if they have authored a paper together. In
our investigations, we considered the astrophysics collaboration
network, the condensed matter collaboration network, the high-energy
theory collaboration network, all collected by Mark Newman from
\texttt{http://www.arxiv.org}, and the scientific collaboration of
complex networks researchers, also compiled by Mark Newman from the
bibliographies of two review articles on networks (by
Newman~\cite{Newman:2003:survey} and Boccaletti et
al.~\cite{Boccaletti06}). The astrophysics collaboration network is
formed by scientists who posted preprints on the astrophysics archive
between 1995 and 1999~\cite{Newman-PNAS01}. The condensed matter
collaboration network, on the other hand, is composed of scientists
who posted preprints on the condensed matter archive from 1995 to
2005~\cite{Newman-PNAS01}. Finally, the high-energy theory
collaboration network is composed of scientists who posted preprints
on the high-energy theory archive from 1995 to
1999~\cite{Newman00:PRE64:I,Newman00:PRE64:II}.
\end{trivlist}
\subsection{Information networks}
\begin{trivlist}
\item \textbf{Roget's Thesaurus network} is constructed by associating
each vertex of the network with one of the 1022 categories in the
1879 edition of Peter Mark Roget's Thesaurus of English Words and
Phrases, edited by John Lewis Roget~\cite{Roget82}. Two categories $i$
and $j$ are linked if Roget gave a reference to $j$ among the words
and phrases of $i$, or if the two categories are directly related to
each other by their positions in Roget's book~\cite{Roget82}. This
network is available in the Pajek datasets~\cite{pajek-data}.
\item \textbf{Wordnet} is a semantic network which is often used as a
form of knowledge representation. It is a directed graph consisting of
concepts connected by semantic relations. We collected the network
from the Pajek datasets~\cite{pajek-data}.
\item \textbf{The World Wide Web} is a network of Web pages belonging
to the nd.edu domain, linked together by hyperlinks from one page to
another~\cite{Albert99:Nature}. The data considered in our paper are
available at the Center for Complex Network Research~\cite{CCNR}.
\subsection{Word adjacency in books}
Word adjacency in books can be represented as a network of words
connected by proximity~\cite{Antiqueira2007}. A directed edge is
established between two adjacent words, and its weight is the
number of times the two words appear adjacently in the text. Before
constructing a network, the text must be preprocessed: all stop words
(e.g.\ articles, prepositions, and conjunctions) are removed, and the
remaining words are lemmatized~\cite{Antiqueira2007}. In our analysis,
we considered the books David Copperfield by Charles Dickens, Night
and Day by Virginia Woolf, and On the Origin of Species by Charles
Darwin, compiled by Antiqueira~\emph{et al.}~\cite{Antiqueira2006}.
\subsection{Technological networks}
\begin{trivlist}
\item \textbf{Internet}, or the autonomous systems (AS) network, is a
collection of IP networks and routers under the control of one entity
that presents a common routing policy to the Internet. Each AS is a
large domain of IP addresses that usually belongs to one organization
such as a university, a business enterprise, or an Internet Service
Provider. In this type of network, two vertices are connected
according to BGP tables. The network considered in our analysis was
collected by Newman in July 2006~\cite{Newman:data}.
\item \textbf{The US Airlines Transportation Network} is formed by US
airports in 1997, connected by flights. This network is available in
the Pajek datasets~\cite{pajek-data}.
\item \textbf{The Western States Power Grid} represents the topology
of the electrical distribution grid~\cite{Watts:1998}. Vertices
represent generators, transformers and substations, and edges the
high-voltage transmission lines that connect them.
\end{trivlist}
\subsection{Biological networks}
Some biological systems can be modeled in terms of networks, such as
the brain, genetic interactions, and the interactions between proteins.
\begin{trivlist}
\item \textbf{The neural network of \emph{Caenorhabditis elegans}} is
composed of neurons connected according to their
synapses~\cite{White86,Watts:1998}.
\item \textbf{The transcriptional regulation network of
\emph{Escherichia coli}} is formed by operons (an operon is a group of
contiguous genes that are transcribed into a single mRNA
molecule). Each edge is directed from an operon that encodes a
transcription factor to another operon which is regulated by that
transcription factor. This kind of network plays an important role in
controlling gene expression~\cite{ShenOrr:2002}.
\item \textbf{The protein-protein interaction network of
\emph{Saccharomyces cerevisiae}} is formed by proteins connected
according to identified directed physical interactions~\cite{Jeong01}.
\end{trivlist}
\section{\label{sec:appl}Results and Discussion}
We analyzed the real-world networks by comparing their numbers of
cords, tails, and handles with those of random networks generated by
the rewiring procedure described in~\cite{milo2003}, and with the
theory proposed in Section~\ref{sec:stat}.
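The identification of these motifs can be sketched as follows (a minimal implementation of our own, not the authors' code): group the vertices of degree at most two into connected components and classify each component by its number of free extremities. Here we take the size of a chain to be its number of edges, counting the edges that attach it to the rest of the network, which is the convention consistent with the analytical expressions of Section~\ref{sec:stat}; handles consisting of a single edge between two high-degree vertices are not captured by this sketch.

```python
def chain_motifs(adj):
    """Classify chains (connected sets of vertices of degree <= 2) as
    cords, rings, tails or handles.  `adj` maps vertex -> set of
    neighbours (undirected simple graph).  Returns the list of chain
    sizes (edge counts, attaching edges included) for each motif."""
    deg = {v: len(ns) for v, ns in adj.items()}
    chain = lambda v: 1 <= deg[v] <= 2          # chain (articulation) vertex
    seen = set()
    motifs = {"cord": [], "ring": [], "tail": [], "handle": []}
    for s in adj:
        if not chain(s) or s in seen:
            continue
        comp, stack = set(), [s]                # component of chain vertices
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            seen.add(v)
            stack.extend(u for u in adj[v] if chain(u))
        inner = sum(1 for v in comp for u in adj[v] if u in comp) // 2
        attach = sum(1 for v in comp for u in adj[v] if not chain(u))
        free = sum(1 for v in comp if deg[v] == 1)
        size = inner + attach
        if attach == 0 and free == 0:
            motifs["ring"].append(size)         # isolated cycle
        elif free == 2:
            motifs["cord"].append(size)         # both extremities free
        elif free == 1:
            motifs["tail"].append(size)         # one free extremity
        else:
            motifs["handle"].append(size)       # both ends attached
    return motifs
```

For instance, two hubs joined directly and through a two-vertex chain, each carrying a pendant vertex, plus an isolated edge and an isolated triangle, yield one handle of size 3, two tails of size 1, one cord of size 1, and one ring of size 3.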
\subsection{Comparison between real-world networks and their
randomized counterparts}
For each considered real-world network, we generated 1\,000 randomized
versions (100 for the WWW) by the rewiring process described
in~\cite{milo2003}. The generated networks have the same degree
distribution as the original, but no degree-degree
correlations. In order to compare the chain statistics obtained for the
real-world networks and their respective randomized versions, we
evaluated the Z-score values for each size of cords, tails, and
handles. The Z-score is given by
\begin{equation}
Z = \frac{X_{\mathrm{Real}}-\langle X \rangle}{\sigma},
\end{equation}
where $X_{\mathrm{Real}}$ is the number of cords, tails, or handles
with a specific size of the original (real-world) analyzed network,
and $\langle X \rangle$ and $\sigma$ are, respectively, the average
and the standard deviation of the corresponding values of its
randomized counterparts. A null value of the Z-score indicates that
there is no statistical difference between the number of occurrences
of cords, tails, or handles in the considered network and in its
randomized versions.
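A direct transcription of this definition reads as follows (our own sketch; the paper does not specify whether the sample or the population standard deviation was employed, and we use the sample estimator here):

```python
from statistics import mean, stdev

def z_score(x_real, x_random):
    """Z-score of a real-network chain count against the counts of its
    randomized ensemble.  Returns None when sigma = 0, the cases that
    are disregarded in the text."""
    mu = mean(x_random)
    sigma = stdev(x_random)          # sample standard deviation (our choice)
    if sigma == 0:
        return None
    return (x_real - mu) / sigma
```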
The results of the Z-scores for all considered networks can be seen in
Figure~\ref{fig:zscore}. The cases in which the Z-score values are not
defined ($\sigma = 0$) were disregarded.
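The randomized counterparts themselves can be produced by degree-preserving double-edge swaps, which is a common reading of the rewiring process of~\cite{milo2003}; the sketch below is our own minimal version:

```python
import random

def rewire(edges, nswap, rng=random.Random(0), max_tries=100000):
    """Degree-preserving randomization by double-edge swaps: two edges
    (a,b) and (c,d) are replaced by (a,d) and (c,b) whenever this
    creates no self-loop or duplicate edge.  The degree sequence is
    preserved; degree-degree correlations are destroyed."""
    edges = [tuple(e) for e in edges]
    eset = {frozenset(e) for e in edges}
    done = tries = 0
    while done < nswap and tries < max_tries:
        tries += 1
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:            # would create a self-loop
            continue
        n1, n2 = frozenset((a, d)), frozenset((c, b))
        if n1 in eset or n2 in eset:         # would create a duplicate edge
            continue
        eset -= {frozenset((a, b)), frozenset((c, d))}
        eset |= {n1, n2}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degree_sequence(edges):
    """Sorted list of vertex degrees of an edge list."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return sorted(deg.values())
```

Rewiring the 8-cycle, for instance, changes the wiring but leaves the degree sequence and the number of edges untouched.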
\begin{figure*}
\includegraphics[width=0.99\linewidth]{zscore.eps}
\caption{Z-scores of the number of cords, tails, and handles for each
size. The number of generated random networks was 1\,000 for all
considered networks, except for WWW, which was 100 (because of the
substantially larger size of this network).}
\label{fig:zscore}
\end{figure*}
Most of the results presented in Figure~\ref{fig:zscore} can be
explained by the fact that the rewiring process tends to homogenize
the size distributions of cords, tails, and handles. In this way, the
excess of these structures in the real networks is reduced in the
random counterparts. For instance, if a network has many large
handles, its random version will present few large handles but many
small ones. The following discussion will not take into account the shape
of the distribution of chains, but only the most important results.
In the case of collaboration networks, there is a large quantity of
cords. This fact suggests that many researchers published papers with
just one, two, or three other scientists. Cords may appear because many
researchers also publish in other areas and, therefore, such papers are
not included in the network. If other research areas had been
considered, this effect would not occur and the number of small cords
would be less significant. Thus, the presence of cords in
collaboration networks may be the result of database incompleteness.
Another possible cause of cords in such networks concerns
authors who publish only among themselves.
The information networks do not present a well-defined pattern as
observed in the collaboration networks. The Roget thesaurus network is
different from the others, but the results obtained for this network
are not expressive enough to be discussed. It is important to note
that in the Wordnet and the WWW there is a large occurrence of tails of
size one. In the case of Wordnet, this happens because specific words
have connections with more common words, which in turn have connections
with the remainder of the network. In the case of the WWW, this
structure is a consequence of characteristic URL documents which have
just one link. In addition to small tails, the WWW has long tails and
handles. This fact can be associated with the way in which the network
was constructed, by considering a \emph{web
crawler}~\cite{Albert99:Nature} --- a program designed to visit URL
documents inside a given domain and to collect the links between them in
a recursive fashion. When pages are visited by the crawler, the wandered
path can originate chains. If the program is not executed for a long
time interval, long chains can appear. Thus, this effect can result
from incomplete sampling (see
Subsection~\ref{sec:incompleteness}). Besides, as the process of
network construction is recursive, isolated components do not occur
in the database, and therefore there are no cords and rings.
The book adjacency networks present a characteristic pattern of
chains: no cords, the same quantity of tails of sizes 1, 2, and 3 as
observed in the random counterparts, and many handles of sizes 1, 3, 4,
and 5. The increase in the quantity of handles of size 2 in the random
versions is a consequence of the fact that, when the rewiring process
is performed, many handles of size one can be put together. This
fact explains why book networks present more handles of size one than
their random counterparts. On the other hand, the long handles are a
consequence of the sequential process considered to obtain the
network.
In technological networks, the chain patterns are most significant in
the power grid. This network presents a high quantity of tails of size
one and handles of size 11. While the former appears to be
related to a geographical effect, where new vertices needed to cover
a new region tend to connect with nearby vertices, the latter can
result from geographical constraints (e.g.\ transmission lines may be
laid out strategically in order to contour a mountain, a lake, or
other geographical features).
The results obtained for biological networks are not so
expressive. However, the protein interaction network of the yeast
\emph{S. cerevisiae} has many cords of sizes one and two. The presence
of small cords in this network is a consequence of isolated chains of
proteins which interact only with a small number of other
proteins. This fact can be due to incompleteness~\cite{han2005est},
where many real connections may not be included, or to highly
specialized proteins, which have lost many connections through the
mutation process --- protein interaction networks evolve through two
basic processes: duplication and mutation~\cite{Vazquez03:complexus}.
\subsection{Theoretical analysis of the real-world networks}
Going back to the analysis presented in Section~\ref{sec:stat}, we
applied those theoretical developments to the considered real-world
networks. We obtained their degree-degree correlations and computed
the expected numbers of cords, tails, and handles as a function of
their sizes by Equations (\ref{eq:cords}),~(\ref{eq:tails}),
and~(\ref{eq:handles}), respectively. The number of rings was not
taken into account because of their very low probability of appearing
in real-world networks. The results of the theoretical analysis
are shown in Figure~\ref{fig:theory}. The cases not shown are those
in which all chains are smaller than 2. Due to the low probability of
finding cords in networks, only three networks are shown
(Figure~\ref{fig:theory}(a)), namely the cond-mat and high-energy
collaborations and the Wordnet. The theoretical prediction does not
work well for these networks, except for the Wordnet, predicting fewer
cords than those found in the real networks. The opposite situation
was found for the number of tails and handles, shown in
Figures~\ref{fig:theory}(b) and (c), respectively. However, there are
more large tails and handles in the real-world networks than
predicted by the theory, except for the astrophysics, cond-mat, and
high-energy collaboration networks.
\begin{figure*}
\subfigure[\,Number of
cords.]{\includegraphics[width=0.54\linewidth]{cords_theory.eps}}
\vspace{0.3cm} \subfigure[\,Number of
tails.]{\includegraphics[width=0.9\linewidth]{tails_theory.eps}}
\vspace{0.3cm} \subfigure[\,Number of
handles.]{\includegraphics[width=0.9\linewidth]{handles_theory.eps}}
\caption{The distributions shown in (a), (b), and (c) correspond to
the most significant data (each distribution has at least three
points). Points correspond to the real data, and the solid lines
correspond to the theoretical predictions.} \label{fig:theory}
\end{figure*}
Despite the fact that, for some cases, the numbers of small cords,
tails, and handles of the real-world networks were far from the values
obtained for their respective randomized counterparts (see
Figure~\ref{fig:zscore}), the theoretical results were accurate in
several cases, the exceptions being astrophysics (handles), netscience
(tails), cond-mat (cords and handles), high-energy (cords, tails, and
handles), WWW (tails and handles), the book On the Origin of Species
(handles), and the power grid (handles) (see Figure~\ref{fig:theory}).
\subsection{Analysis of incomplete networks} \label{sec:incompleteness}
In order to investigate the possibility that incomplete networks
present many tails and handles, we sampled two theoretical network
models, namely the Erd\H{o}s-R\'{e}nyi model (ER)~\cite{Erdos-Renyi:1959}
and the Barab\'asi-Albert scale-free model (BA)~\cite{Barabasi:99}, by
performing random walks~\cite{noh2004,costa2007ecn} and analyzing the
corresponding distributions of tails and handles. The ER and BA
models included 100\,000 vertices with average degree 6. The results
of the random walks on these theoretical networks are shown in
Figure~\ref{fig:incompleteness}. Each point of the mesh grid is the
average value over 1\,000 realizations.
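A minimal sketch of this sampling procedure (our own version; the start vertex, seed, and sizes are arbitrary choices): the sampled subnetwork consists of the visited vertices together with the traversed edges, so that short walks leave chain-like remnants that disappear as the walk length grows.

```python
import random

def random_walk_sample(adj, steps, rng=random.Random(3)):
    """Subnetwork seen by a random walk of `steps` steps on the graph
    `adj` (vertex -> set of neighbours): visited vertices plus the
    traversed edges.  A crude model of incomplete, crawler-like
    sampling."""
    v = min(adj)                         # deterministic start vertex
    sub = {v: set()}
    for _ in range(steps):
        u = rng.choice(sorted(adj[v]))   # step to a uniform neighbour
        sub.setdefault(u, set())
        sub[v].add(u)
        sub[u].add(v)
        v = u
    return sub
```

By construction the sample contains at most one new edge per step, and every sampled edge exists in the original network.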
\begin{figure}
\includegraphics[width=0.8\linewidth]{er-tails.eps} (a) \vspace{0.6cm} \\
\includegraphics[width=0.8\linewidth]{er-handles.eps} (b) \vspace{0.6cm}\\
\includegraphics[width=0.8\linewidth]{ba-tails.eps} (c) \vspace{0.6cm} \\
\includegraphics[width=0.8\linewidth]{ba-handles.eps} (d)\\
\caption{Figures (a) and (b) present the number of tails and handles
of different sizes in the Erd\H{o}s-R\'{e}nyi model,
respectively. Figures (c) and (d), on the other hand, present the
number of tails and handles for the Barab\'asi and Albert scale-free
model, respectively. Each point in the mesh grid is the average
considering 1\,000 realizations of each random walk.}
\label{fig:incompleteness}
\end{figure}
For the ER and BA models the results are very similar, with the
difference that the tails tend to vanish for longer random walks
(almost $10^7$ steps) in the BA model. This is not the case for the ER
network because its original structure already had vertices with unit
degree; therefore, this network already had small tails (sizes 1 and
2). Conversely, BA networks with average vertex degree 6 do not have
tails, and with long random walks these structures tend to vanish.
The results in Figure~\ref{fig:incompleteness} clearly indicate
that there are many large tails and handles for both models when the
random walks are relatively short. As the size of the random walks is
increased, the number of large tails and handles tends to decrease, but
the number of small tails and handles increases, because with longer
random walks the probability of breaking large tails and handles into
smaller parts is increased. As the length of the random walks increases
further, the large tails and handles tend to vanish, and the original
networks are recovered.
\section{\label{sec:conc}Conclusions}
One of the most important aspects characterizing different types of
complex networks concerns the distribution of specific connecting
patterns, such as the traditionally investigated motifs. In the
present work we considered specific connecting patterns, namely
chains of articulations, i.e.\ linear sequences of interconnected
vertices with only two neighbors. This new type of motif has been
subdivided into cords (i.e.\ chains with two free extremities), rings
(i.e.\ chains with no free extremities, disconnected from the
remainder of the network), tails (i.e.\ chains with only one free
extremity), and handles (i.e.\ chains with no free extremities,
attached to the remainder of the network at both ends). By
considering a large number of representative theoretical and
real-world networks, we identified that many specific types of such
networks tend to exhibit specific distributions of cords, tails, and
handles. We provided an algorithm to identify such motifs in generic
networks. Also, we developed an analytical framework to predict the
number of chains in random network models, scale-free network models,
and real-world networks, which provided accurate approximations for
several of the considered networks. Finally, we investigated the
presence of chains by considering Z-score values (i.e.\ comparing the
presence of chains in real networks and in the respective random
counterparts). The specific origin of handles and tails is likely
related to the evolution of each type of network, or to incompleteness
arising from sampling. In the first case, the handles and tails in
geographical networks may be mainly a consequence of the chaining
effect obtained by connecting vertices which are spatially near one
another. In the second, we showed that incomplete sampling of networks
by random walks can produce specific types of chains.
All in all, the results obtained in our analysis indicate that handles
and tails are present in several important real-world networks, while
being largely absent in the randomized versions and in the considered
theoretical models. The study of such motifs is particularly important
because they can provide clues about the way in which each type of
network was grown. Several future investigations are possible,
including the proposal of models for generation of networks with
specific distribution of handles and tails, as well as additional
experiments aimed at studying the evolution of handles and tails in
growing networks such as the WWW and the Internet.
\begin{acknowledgments}
The authors thank Lucas Antiqueira for providing the books
networks. Luciano da F. Costa thanks CNPq (301303/06-1) and FAPESP
(05/00587-5); Francisco A. Rodrigues is grateful to FAPESP
(07/50633-9); Paulino R. Villas Boas is grateful to CNPq
(141390/2004-2); and Gonzalo Travieso is grateful to FAPESP (03/08269-7).
\end{acknowledgments}
\section{Introduction.}
\noindent
Most of the life of stars (at any stage of evolution) may be
described on the basis of the quasi-static approximation (slowly
evolving regime). This is so because most relevant processes in
stellar interiors take place on time scales that are usually much
larger than the hydrostatic time scale \cite{1},\cite{2}.
\noindent
However, during their evolution, self-gravitating objects may pass
through phases of intense dynamical activity for which the quasi-static
approximation is clearly not reliable (e.g., the quick collapse phase
preceding neutron star formation). All these phases of stellar evolution
(``slow'' and ``quick'') are generally accompanied by intense dissipative
processes, usually described in the diffusion approximation. This
assumption, in its turn, is justified by the fact that frequently
the mean free path of the particles responsible for the propagation of
energy in stellar interiors is very small compared with the typical
length of the star.
\noindent
In this work we shall study the influence of thermal conduction on the
evolution of a self-gravitating system out of hydrostatic equilibrium
(in the ``quick'' phase).
\noindent
However, instead of following its evolution a long time
after its departure from equilibrium, we
shall evaluate the system immediately after such departure. Here
``immediately'' means on a time scale of the order of the thermal
relaxation time, before the establishment of the steady-state
resistive flow.
\noindent
Doing so, we shall avoid the introduction of numerical procedures
which might lead to model-dependent conclusions.
\noindent
On the other hand, however, we shall obtain only indications about
the tendency of the object, and not a complete description of its evolution.
\noindent
As we shall see, there appears a local parameter formed by a specific
combination of thermal relaxation time, thermal conductivity, proper
energy density and pressure, which critically affects the evolution
of the object and which is constrained by causality requirements.
\noindent
The paper is organized as follows.
\noindent
In the next section the
field equations, the conventions and other useful formulae are introduced.
In section 3 we briefly present the equation for the heat conduction.
The central problem is analysed in section 4 and a discussion of
results is given in the last section.
\section{Field Equations and Conventions.}
\noindent
We consider spherically symmetric distributions of collapsing
fluid which, for the sake of completeness, we assume to be anisotropic,
undergoing dissipation in the form of heat flow, and bounded by a
spherical surface $\Sigma$.
\noindent
The line element is given in Schwarzschild-like coordinates by
\begin{equation}
ds^2=e^{\nu} dt^2 - e^{\lambda} dr^2 -
r^2 \left( d\theta^2 + sin^2\theta d\phi^2 \right)
\label{metric}
\end{equation}
\noindent
where $\nu(t,r)$ and $\lambda(t,r)$ are functions of their arguments. We
number the coordinates: $x^0=t; \, x^1=r; \, x^2=\theta; \, x^3=\phi$.
\noindent
The metric (\ref{metric}) has to satisfy Einstein field equations
\begin{equation}
G^\nu_\mu=-8\pi T^\nu_\mu
\label{Efeq}
\end{equation}
\noindent
which in our case read \cite{3}:
\begin{equation}
-8\pi T^0_0=-\frac{1}{r^2}+e^{-\lambda}
\left(\frac{1}{r^2}-\frac{\lambda'}{r} \right)
\label{feq00}
\end{equation}
\begin{equation}
-8\pi T^1_1=-\frac{1}{r^2}+e^{-\lambda}
\left(\frac{1}{r^2}+\frac{\nu'}{r}\right)
\label{feq11}
\end{equation}
\begin{eqnarray}
-8\pi T^2_2 = - 8\pi T^3_3 = & - &\frac{e^{-\nu}}{4}\left(2\ddot\lambda+
\dot\lambda(\dot\lambda-\dot\nu)\right) \nonumber \\
& + & \frac{e^{-\lambda}}{4}
\left(2\nu''+\nu'^2 -
\lambda'\nu' + 2\frac{\nu' - \lambda'}{r}\right)
\label{feq2233}
\end{eqnarray}
\begin{equation}
-8\pi T_{01}=-\frac{\dot\lambda}{r}
\label{feq01}
\end{equation}
\noindent
where dots and primes stand for partial differentiation with respect
to t and r
respectively.
\noindent
In order to give physical significance to the $T^{\mu}_{\nu}$ components
we apply the Bondi approach \cite{3}.
\noindent
Thus, following Bondi, let us introduce purely locally Minkowski
coordinates ($\tau, x, y, z$)
$$d\tau=e^{\nu/2}dt\,\qquad\,dx=e^{\lambda/2}dr\,\qquad\,
dy=rd\theta\,\qquad\, dz=rsin\theta d\phi$$
\noindent
Then, denoting the Minkowski components of the energy tensor by a bar,
we have
$$\bar T^0_0=T^0_0\,\qquad\,
\bar T^1_1=T^1_1\,\qquad\,\bar T^2_2=T^2_2\,\qquad\,
\bar T^3_3=T^3_3\,\qquad\,\bar T_{01}=e^{-(\nu+\lambda)/2}T_{01}$$
\noindent
Next, we suppose that when viewed by an observer moving relative to these
coordinates with velocity $\omega$ in the radial direction, the physical
content of space consists of an anisotropic fluid of energy density $\rho$,
radial pressure $P_r$, tangential pressure $P_\bot$ and radial heat flux
$\hat q$. Thus, when viewed by this moving observer the covariant tensor in
Minkowski coordinates is
\[ \left(\begin{array}{cccc}
\rho & -\hat q & 0 & 0 \\
-\hat q & P_r & 0 & 0 \\
0 & 0 & P_\bot & 0 \\
0 & 0 & 0 & P_\bot
\end{array} \right) \]
\noindent
Then a Lorentz transformation readily shows that
\begin{equation}
T^0_0=\bar T^0_0= \frac{\rho + P_r \omega^2 }{1 - \omega^2} +
\frac{2 Q \omega e^{\lambda/2}}{(1 - \omega^2)^{1/2}}
\label{T00}
\end{equation}
\begin{equation}
T^1_1=\bar T^1_1=-\frac{ P_r + \rho \omega^2}{1 - \omega^2} -
\frac{2 Q \omega e^{\lambda/2}}{(1 - \omega^2)^{1/2}}
\label{T11}
\end{equation}
\begin{equation}
T^2_2=T^3_3=\bar T^2_2=\bar T^3_3=-P_\bot
\label{T2233}
\end{equation}
\begin{equation}
T_{01}=e^{(\nu + \lambda)/2} \bar T_{01}=
-\frac{(\rho + P_r) \omega e^{(\nu + \lambda)/2}}{1 - \omega^2} -
\frac{Q e^{\nu/2} e^{\lambda}}{(1 - \omega^2)^{1/2}} (1 + \omega^2)
\label{T01}
\end{equation}
\noindent
with
\begin{equation}
Q \equiv \frac{\hat q e^{-\lambda/2}}{(1 - \omega^2)^{1/2}}
\label{defq}
\end{equation}
\noindent
Note that the velocity in the ($t,r,\theta,\phi$) system, $dr/dt$,
is related to $\omega$ by
\begin{equation}
\omega=\frac{dr}{dt}\,e^{(\lambda-\nu)/2}
\label{omega}
\end{equation}
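The Lorentz transformation quoted above can be verified numerically. The sketch below (our own check, with arbitrary sample values) works directly with the barred, locally Minkowskian components, i.e.\ it sets $e^{\nu}=e^{\lambda}=1$, so that $Q e^{\lambda/2}=\hat q/(1-\omega^2)^{1/2}$, and compares the boosted comoving tensor with Equations~(\ref{T00}), (\ref{T11}), and (\ref{T01}):

```python
# Numerical check of the boosted energy-momentum tensor.
rho, pr, pperp, qh, w = 2.0, 0.5, 0.4, 0.3, 0.4   # arbitrary test values
g = 1.0 / (1.0 - w * w) ** 0.5                     # Lorentz factor

# Covariant components in the comoving frame, signature (+,-,-,-).
Tc = [[rho, -qh, 0, 0],
      [-qh,  pr, 0, 0],
      [0, 0, pperp, 0],
      [0, 0, 0, pperp]]

# Jacobian dx'^a/dx^m of the radial boost t' = g(t - w x), x' = g(x - w t).
L = [[g, -g * w, 0, 0],
     [-g * w, g, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

# Lab-frame covariant components T_{mn} = L^a_m L^b_n T'_{ab}.
Tlab = [[sum(L[a][m] * L[b][n] * Tc[a][b]
             for a in range(4) for b in range(4))
         for n in range(4)] for m in range(4)]

# The paper's expressions: mixed T^0_0, T^1_1 and covariant (barred) T_{01}.
T00 = (rho + pr * w * w + 2 * qh * w) / (1 - w * w)
T11 = -(pr + rho * w * w + 2 * qh * w) / (1 - w * w)
T01 = -(rho + pr) * w / (1 - w * w) - qh * (1 + w * w) / (1 - w * w)
```

With the signature $(+,-,-,-)$ one has $T^0_0 = T_{00}$ and $T^1_1 = -T_{11}$, which is what the comparison uses.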
\noindent
At the outside of the fluid distribution, the spacetime is that of Vaidya,
given by
\begin{equation}
ds^2= \left(1-\frac{2M(u)}{R}\right) du^2 + 2dudR -
R^2 \left(d\theta^2 + sin^2\theta d\phi^2 \right)
\label{Vaidya}
\end{equation}
\noindent
where $u$ is a time-like coordinate such that $u=constant$ is
(asymptotically) a
null cone open to the future and $R$ is a null coordinate
($g_{RR}=0$). It should
be remarked, however, that strictly speaking, the radiation can be considered
in radial free streaming only at radial infinity.
\noindent
The two coordinate systems ($t,r,\theta,\phi$) and ($u,R,\theta,\phi$) are
related at the boundary surface and outside it by
\begin{equation}
u=t-r-2M\,ln \left(\frac{r}{2M}-1\right)
\label{u}
\end{equation}
\begin{equation}
R=r
\label{R}
\end{equation}
\noindent
In order to match smoothly the two metrics above on the boundary surface
$r=r_\Sigma(t)$, we have to require the continuity of the first fundamental
form across that surface. As a result of this matching we obtain
\begin{equation}
\left[P_r\right]_\Sigma=\left[Q\,e^{\lambda/2}\left(1-\omega^2\right)^
{1/2}\right]_\Sigma = \left[\hat q\right]_\Sigma
\label{PQ}
\end{equation}
\noindent
expressing the discontinuity of the radial pressure in the presence
of heat flow, which is a well known result \cite{4}.
\noindent
Next, it will be useful to calculate the radial components of the
conservation law
\begin{equation}
T^\mu_{\nu;\mu}=0
\label{dTmn}
\end{equation}
\noindent
After tedious but simple calculations we get
\begin{equation}
\left(-8\pi T^1_1\right)'=\frac{16\pi}{r} \left(T^1_1-T^2_2\right)
+ 4\pi \nu' \left(T^1_1-T^0_0\right) +
\frac{e^{-\nu}}{r} \left(\ddot\lambda + \frac{\dot\lambda^2}{2}
- \frac{\dot\lambda \dot\nu}{2}\right)
\label{T1p}
\end{equation}
\noindent
which in the static case becomes
\begin{equation}
P'_r=-\frac{\nu'}{2}\left(\rho+P_r\right)+
\frac{2\left(P_\bot-P_r\right)}{r}
\label{Prp}
\end{equation}
\noindent
representing the generalization of the Tolman-Oppenheimer-Volkof equation
for anisotropic fluids \cite{5}.
\section{Heat Conduction Equation.}
\noindent
As we mentioned in the introduction, in the study of stellar interiors
it is usually assumed that the energy flux of radiation (and
thermal conduction) is proportional to the gradient of temperature
(the Maxwell-Fourier law, or Eckart-Landau in general relativity).
\noindent
However, it is well known that the Maxwell-Fourier law for the radiation
flux leads to a parabolic (diffusion) equation, which predicts the
propagation of perturbations with infinite speed (see \cite{6}--\cite{8} and
references therein). This simple fact is at the origin of the pathologies
\cite{9} found in the approaches of Eckart \cite{10} and Landau \cite{11}
to relativistic dissipative processes.
\noindent
To overcome such difficulties, different relativistic
theories with non-vanishing relaxation times have been proposed
in the past \cite{12}--\cite{15}. The important point is that all these
theories provide a heat transport equation which is not of
Maxwell-Fourier type but of Cattaneo type \cite{18}, leading thereby to a
hyperbolic equation for the propagation of thermal perturbation.
\noindent
Accordingly, we shall describe the heat transport by means of a
relativistic Israel-Stewart equation \cite{8}, which reads
\begin{equation}
\tau \frac{Dq^\alpha}{Ds} + q^\alpha =
\kappa P^{\alpha \beta} \left(T_{,\beta} - T a_\beta\right) -
\tau u^\alpha q_\beta a^\beta-
\frac{1}{2} \kappa T^2
\left(\frac{\tau}{\kappa T^2} u^\beta\right)_{;\beta} q^\alpha
\label{Catrel}
\end{equation}
\noindent
where $\kappa$, $\tau$, $T$, $q^\beta$ and $a^\beta$ denote thermal
conductivity,
thermal relaxation time, temperature, the heat flow vector and the
components of the four
acceleration, respectively. Also, $P^{\alpha \beta}$ is the projector
onto the hypersurface orthogonal to the four velocity $u^\alpha$.
\noindent
In our case this equation has only two independent components,
which read, for $\alpha=0$
\[
\tau e^{(\lambda-\nu)/2}
\left(
Q \dot\omega + \dot Q \omega + Q \omega \dot\lambda
\right) +
\tau \left(
Q' \omega^2 + Q \omega \omega' + \frac{Q \omega^2 \lambda'}{2}
\right)
\]
\[
+ \frac{\tau Q \omega^2}{r}
+ Q \omega e^{\lambda/2} \left(1 - \omega^2\right)^{1/2} =
- \, \frac{\kappa \omega^2 \dot T e^{-\nu/2}}
{\left(1 - \omega^2\right)^{1/2}}
- \, \frac{\kappa \omega T' e^{-\lambda/2}}
{\left(1 - \omega^2\right)^{1/2}}
\]
\[
- \, \frac{\nu'}{2}
\frac{\kappa T \omega e^{-\lambda/2}}{\left(1 - \omega^2\right)^{1/2}}
- \frac12 Q \omega \left(e^{(\lambda-\nu)/2} \dot{\tau}+\omega \tau'\right)
\]
\[
-\frac12 \tau Q \omega \left[e^{(\lambda-\nu)/2}
\left(\frac{\omega \dot{\omega}}{1-\omega^2}+\frac{\dot{\lambda}}{2}\right)
+\left(\frac{\omega'}{1-\omega^2}+\frac{\nu'\omega}{2}\right)\right]
\]
\[
+\frac12 \tau Q \omega \left[\frac1\kappa
\left(e^{(\lambda-\nu)/2}\dot{\kappa}+\omega\kappa'\right)
+\frac2T\left(e^{(\lambda-\nu)/2}\dot{T}+\omega T'\right)
\right]
\]
\begin{eqnarray}
+ \left(\tau Q e^{(\lambda-\nu)/2} - \,
\frac{\kappa T \omega e^{-\nu/2}}{\left(1 - \omega^2\right)^{1/2}}\right)
& \times &
\left(\frac{\omega \dot\lambda}{2} + \frac{\dot\omega}{1 - \omega^2}\right)
\nonumber \\
+ \left(\tau Q - \,
\frac{\kappa T \omega e^{-\lambda/2}}{\left(1 - \omega^2\right)^{1/2}}\right)
& \times &
\frac{\omega \omega'}{1 - \omega^2}
\label{com0}
\end{eqnarray}
\noindent
and for $\alpha=1$
\[
\tau e^{(\lambda-\nu)/2}
\left(
\dot Q + \frac{Q \dot\lambda}{2} + \frac{Q \omega^2 \dot\lambda}{2}
\right) + \tau \omega \left(
Q' + \frac{Q \lambda'}{2} \right)
\]
\[
+\frac{\tau Q \omega}{r} + Q e^{\lambda/2} \left(1 - \omega^2\right)^{1/2} =
- \, \frac{\kappa \omega \dot T e^{-\nu/2}}
{\left(1 - \omega^2\right)^{1/2}}
- \, \frac{\kappa T' e^{-\lambda/2}}
{\left(1 - \omega^2\right)^{1/2}}
\]
\[
-\, \frac{\nu'}{2}
\frac{\kappa T e^{-\lambda/2}}{\left(1 - \omega^2\right)^{1/2}}
-\, \frac{1}{2} Q \left(e^{(\lambda-\nu)/2}\dot{\tau}+\omega\tau'\right)
\]
\[
-\frac12 \tau Q \left[e^{(\lambda-\nu)/2}
\left(\frac{\omega \dot{\omega}}{1-\omega^2}+\frac{\dot{\lambda}}{2}\right)
+\left(\frac{\omega'}{1-\omega^2}+\frac{\nu'\omega}{2}\right)\right]
\]
\[
+\frac12 \tau Q \left[\frac1\kappa
\left(e^{(\lambda-\nu)/2}\dot{\kappa}+\omega\kappa'\right)
+\frac2T\left(e^{(\lambda-\nu)/2}\dot{T}+\omega T'\right)
\right]
\]
\begin{eqnarray}
+ \left(\tau Q \omega e^{(\lambda-\nu)/2} - \,
\frac{\kappa T e^{-\nu/2}}{\left(1 - \omega^2\right)^{1/2}}\right)
& \times &
\left(\frac{\omega \dot\lambda}{2} + \frac{\dot\omega}{1 - \omega^2}\right)
\nonumber \\
+ \left(\tau Q \omega - \,
\frac{\kappa T e^{-\lambda/2}}{\left(1 - \omega^2\right)^{1/2}}\right)
& \times &
\frac{\omega \omega'}{1 - \omega^2}
\label{com1}
\end{eqnarray}
\noindent
where the expressions
\begin{equation}
u^\mu=\left(\frac{e^{-\nu/2}}{\left(1-\omega^2\right)^{1/2}},\,
\frac{\omega\, e^{-\lambda/2}}{\left(1-\omega^2\right)^{1/2}},\,0,\,0\right)
\label{umu}
\end{equation}
\begin{equation}
q^\mu=Q\,\left(\omega\,e^{(\lambda-\nu)/2},\,1,\,0,\,0\right)
\label{qmu}
\end{equation}
\noindent
have been used.
\noindent
We are now ready to get into the central problem of this work.
\section{Thermal Conduction and Departure from Hydrostatic Equilibrium.}
\noindent
Let us now consider a spherically symmetric fluid distribution which
initially may be in both hydrostatic and thermal equilibrium (i.e.
$\omega = Q = 0$), or slowly evolving and dissipating energy through
a radial heat flow vector.
\noindent
Before proceeding further with the treatment of our problem, let us
clearly specify the meaning of ``slowly evolving''. That means that
our sphere changes on a time scale which is very large as compared to
the typical time in which it reacts to a slight perturbation of
hydrostatic equilibrium. This typical time is called hydrostatic
time scale. Thus a slowly evolving system is always in hydrostatic
equilibrium (very close to), and its evolution may be regarded as
a sequence of static models linked by (\ref{feq01}).
\noindent
As we mentioned before, this assumption is very sensible, since
the hydrostatic time scale is usually very small.
\noindent
Indeed, it is of the order of $27$ minutes for the sun, $4.5$ seconds
for a white dwarf and $10^{-4}$ seconds for a neutron star of one
solar mass and $10$ km radius \cite{2}.
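These orders of magnitude can be checked with the standard dynamical estimate $t_{\rm hyd}\sim\sqrt{R^3/GM}$. The radii below are illustrative round numbers chosen for this sketch (not values from the text), so agreement is only expected up to an order-unity prefactor:

```python
# Order-of-magnitude check of the hydrostatic time scales quoted in the
# text, using the dynamical estimate t_hyd ~ sqrt(R^3 / (G M)).
# The radii are illustrative round numbers, not values from the paper.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

def t_hydrostatic(radius_m, mass_kg):
    """Dynamical (free-fall) time scale sqrt(R^3 / (G M)) in seconds."""
    return math.sqrt(radius_m**3 / (G * mass_kg))

t_sun = t_hydrostatic(6.96e8, M_SUN)   # the Sun
t_wd = t_hydrostatic(6.4e6, M_SUN)     # Earth-sized white dwarf, 1 M_sun
t_ns = t_hydrostatic(1.0e4, M_SUN)     # 10 km neutron star, 1 M_sun

print(f"Sun:          {t_sun / 60:.0f} min")   # ~27 min
print(f"White dwarf:  {t_wd:.1f} s")
print(f"Neutron star: {t_ns:.1e} s")           # ~1e-4 s
```

The Sun and neutron-star values land on the figures quoted above; the white-dwarf value agrees to within the order-unity prefactor absorbed in the estimate.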
\noindent
In terms of $\omega$ and metric functions, slow evolution means
that the radial velocity $\omega$ measured by the Minkowski observer,
as well as time derivatives are so small that their products and
second order time derivatives may be neglected (an invariant
characterization of slow evolution may be found in \cite{16}).
\noindent
Thus \cite{17}
\begin{equation}
\ddot\nu\approx\ddot\lambda\approx\dot\lambda \dot\nu\approx
\dot\lambda^2\approx\dot\nu^2\approx
\omega^2\approx\dot\omega=0
\label{neg}
\end{equation}
\noindent
As it follows from (\ref{feq01}) and (\ref{T01}), $Q$ is of the
order $O(\omega)$.
Thus in the slowly evolving regime, relaxation terms may be neglected
and (\ref{Catrel}) becomes the usual Landau-Eckart transport equation
\cite{17}.
\noindent
Then, using (\ref{neg}) and (\ref{T1p}) we obtain (\ref{Prp}),
which as mentioned before is the equation of hydrostatic equilibrium
for an anisotropic fluid. This is in agreement with what was mentioned
above, in the sense that a slowly evolving system is in hydrostatic
equilibrium.
\noindent
Let us now return to our problem. Before perturbation, the two
possible initial states of our system are characterized by:
\begin{enumerate}
\item Static
\begin{equation}
\dot \omega = \dot Q = \omega = Q = 0
\label{eqdt}
\end{equation}
\item Slowly evolving
\begin{equation}
\dot \omega = \dot Q = 0
\label{evlen}
\end{equation}
\begin{equation}
Q \approx O(\omega) \not = 0 \; \qquad (small)
\label{Qorom}
\end{equation}
\end{enumerate}
\noindent
where the meaning of ``small'' is given by (\ref{neg}).
\noindent
Let us now assume that our system is submitted to perturbations
which force it to depart from hydrostatic equilibrium but keeping the
spherical symmetry.
\noindent
We shall study the perturbed system on a time scale which is
small as compared to the thermal adjustment time.
\noindent
Then, immediately after perturbation (``immediately'' understood
in the sense above), we have for the first initial condition
(static)
\begin{equation}
\omega = Q = 0
\label{omyQ0}
\end{equation}
\begin{equation}
\dot\omega \approx \dot Q \not = 0 \; \qquad (small)
\label{chiq}
\end{equation}
\noindent
whereas for the second initial condition (slowly evolving)
\begin{equation}
Q \approx O(\omega) \not = 0 \; \qquad (small)
\label{Qseg}
\end{equation}
\begin{equation}
\dot Q \approx \dot\omega \not = 0 \; \qquad (small)
\label{pomQ2}
\end{equation}
\noindent
As we shall see below, both initial conditions lead to the same final
equations.
\noindent
Let us now write explicitly eq.(\ref{T1p}). With the help of
(\ref{T00})--(\ref{T01}), we find after long but trivial calculations
\begin{eqnarray}
& & \frac{P_r'}{1-\omega^2} +
\frac{\rho' \omega^2}{1-\omega^2} +
\frac{2 \omega \omega' \rho}{1-\omega^2} +
\frac{2 \omega \omega' P_r}{\left(1-\omega^2\right)^2}
\nonumber \\
& + &
\frac{2 \omega^3 \omega' \rho}{\left(1-\omega^2\right)^2} +
\frac{2 Q' \omega e^{\lambda/2}}{\left(1-\omega^2\right)^{1/2}} +
\frac{2 Q \omega' e^{\lambda/2}}{\left(1-\omega^2\right)^{1/2}} +
\frac{2 Q \omega^2 \omega' e^{\lambda/2}}{\left(1-\omega^2\right)^{3/2}}
\nonumber \\
& + &
\frac{2}{r} \, [ \,
\frac{4 \pi r^3}{r-2m} \,
\left(\rho + P_r \omega^2\right) \,
\frac{Q \omega e^{\lambda/2}}{\left(1-\omega^2\right)^{3/2}}
+ \frac{12 \pi r^3}{r-2m} \,
\left(\frac{Q \omega e^{\lambda/2}}{\left(1-\omega^2\right)^{1/2}}\right)^2
\nonumber \\
& + &
\left(\rho + P_r\right) \, \frac{\omega^2}{1-\omega^2}
+ \left(P_r - P_\bot\right) +
\frac{2 Q \omega e^{\lambda/2}}{\left(1-\omega^2\right)^{1/2}} +
\frac{\left(\rho+P_r\right)}{2} \,
\frac{1+\omega^2}{1-\omega^2} \, \frac{m}{r-2m}
\nonumber \\
& + &
\frac{Q \omega e^{\lambda/2}}{\left(1-\omega^2\right)^{1/2}} \,
\frac{m}{r-2m} +
\frac{2 \pi r^3}{r-2m} \,
\left(P_r+\rho\omega^2\right) \left(\rho+P_r\right) \,
\frac{1+\omega^2}{\left(1-\omega^2\right)^2}
\nonumber \\
& + &
\frac{8 \pi r^3}{r-2m} \,
\left(P_r+\rho\omega^2\right) \,
\frac{ Q \omega e^{\lambda/2}}{\left(1-\omega^2\right)^{3/2}} +
\frac{4 \pi r^3}{r-2m} \,
Q \omega e^{\lambda/2} \left(\rho+P_r\right) \,
\frac{1+\omega^2}{\left(1-\omega^2\right)^{3/2}} \, ]
\nonumber \\
& = &
\frac{e^{-\nu}}{8 \pi r}
\left(\ddot\lambda +
\frac{\dot\lambda^2}{2} -
\frac{\dot\lambda \dot\nu}{2}\right)
\label{horror}
\end{eqnarray}
\noindent
which, when evaluated immediately after perturbation, reduces to
\begin{equation}
P'_r + \frac{\left(\rho + P_r\right) m}{r^2 \left(1 - 2m/r\right)}
+ \frac{4 \pi r}{\left(1 - 2m/r\right)} \left(P_r \rho + P_r^2\right)
+ \frac{2 \left(P_r - P_\bot\right)}{r}
= \frac{e^{- \nu}}{8 \pi r} \ddot \lambda
\label{menho}
\end{equation}
\noindent
for both initial states.
\noindent
On the other hand, an expression for $\ddot\lambda$ may be obtained by
taking the time derivative of (\ref{feq01})
\begin{eqnarray}
\ddot\lambda & = & - 8 \pi r e^{(\nu + \lambda)/2}
[\,
\left(\rho + P_r\right) \frac{\omega}{1-\omega^2}
\frac{\dot\nu}{2} +
Q e^{\lambda/2} \frac{1+\omega^2}{\left(1-\omega^2\right)^{1/2}}
\frac{\dot\nu}{2} \nonumber \\
& + &
\frac{\left(\rho+P_r\right) \omega}{1-\omega^2}
\frac{\dot\lambda}{2} +
Q e^{\lambda/2} \frac{1+\omega^2}{\left(1-\omega^2\right)^{1/2}}
\dot\lambda +
\left(\dot\rho + \dot P_r\right)
\frac{\omega}{1-\omega^2} \nonumber \\
& + &
\left(\rho+P_r\right) \dot\omega
\frac{1+\omega^2}{\left(1-\omega^2\right)^{2}} +
\dot Q e^{\lambda/2} \frac{1+\omega^2}{\left(1-\omega^2\right)^{1/2}}
\nonumber \\
& + &
Q e^{\lambda/2} \frac{\omega \dot\omega \left(3-\omega^2\right)}
{\left(1-\omega^2\right)^{3/2}}
\,]
\label{pplex}
\end{eqnarray}
\noindent
which, in its turn, when evaluated after perturbation, reads
\begin{equation}
\ddot\lambda = - 8 \pi r e^{(\nu+\lambda)/2}
\left[\left(\rho+P_r\right) \dot\omega +
\dot Q e^{\lambda/2}\right]
\label{ddl12}
\end{equation}
\noindent
Replacing $\ddot \lambda$ by (\ref{ddl12}) in (\ref{menho}),
we obtain
\begin{equation}
- e^{(\nu-\lambda)/2} R = \left(\rho+P_r\right) \dot\omega +
\dot Q e^{\lambda/2}
\label{pfR}
\end{equation}
\noindent
where $R$ denotes the left-hand side of the TOV equation, i.e.
\begin{eqnarray}
R & \equiv & \frac{dP_r}{dr} + \frac{4\pi r P_r^2}{1-2m/r} +
\frac{P_r m}{r^2 \left(1-2m/r\right)} +
\frac{4\pi r \rho P_r}{1-2m/r} + \nonumber \\
& & + \frac{\rho m}{r^2 \left(1-2m/r\right)} -
\frac{2\left(P_\bot - P_r\right)}{r} \nonumber \\
& = & P'_r + \frac{\nu'}{2} \left(\rho + P_r\right) -
\frac{2}{r} \left(P_\bot - P_r\right)
\label{Rfr}
\end{eqnarray}
\noindent
The physical meaning of $R$ is clearly inferred from (\ref{Rfr}).
It represents the total force (gravitational + pressure gradient +
anisotropic term) acting on a given fluid element. Obviously,
$R>0/R<0$ means that the total force is directed \textit{inward/outward} of
the sphere.
\noindent
Let us now turn back to the thermal conduction equation (\ref{Catrel}).
Evaluating its $t$-component (given by Eq.(\ref{com0}))
immediately after perturbation, we obtain, for the first initial
configuration (static), an identity, whereas the second case
(slowly evolving) leads to
\begin{equation}
\omega \left(T' + T \frac{\nu'}{2}\right) = 0
\label{cs2}
\end{equation}
\noindent
which is to be expected, since before perturbation, in the
slowly evolving regime, we have according to Eckart-Landau
(valid in this regime)
\begin{equation}
Q = - \kappa e^{-\lambda} \left(T' + \frac{T \nu'}{2}\right)
\label{EL}
\end{equation}
\noindent
Therefore, the quantity in brackets is of order $Q$. Then
immediately after perturbation this quantity is still of
order $O(\omega)$, which implies (\ref{cs2}).
\noindent
The corresponding evaluation of the $r$-component of equation
(\ref{Catrel}) yields, for the initially static configuration,
\begin{equation}
\tau \dot Q e^{\lambda/2} = - \kappa T \dot\omega
\label{Cat1}
\end{equation}
\noindent
where we have used the fact that, after perturbation,
\begin{equation}
Q = 0 \quad \Longrightarrow \quad T' = - \, \frac{T \nu'}{2}
\label{impl}
\end{equation}
\noindent
For the second case, the $r$-component of heat transport equation
yields also (\ref{Cat1}), since after perturbation the value of $Q$
is still given by (\ref{EL}), up to $O(\omega)$ terms.
\noindent
Finally, combining (\ref{pfR}) and (\ref{Cat1})
\begin{equation}
\dot\omega = - \frac{e^{(\nu-\lambda)/2} R}{\left(\rho+P_r\right)}
\times
\frac{1}{\left(1 - \frac{\kappa T}{\tau \left(\rho+P_r\right)}\right)}
\label{exmin}
\end{equation}
\noindent
or, defining the parameter $\alpha$ by
\begin{equation}
\alpha \equiv \frac{\kappa T}{\tau \left(\rho + P_r\right)}
\label{alfa}
\end{equation}
\begin{equation}
- e^{(\nu-\lambda)/2} R =
\left(\rho + P_r\right) \dot \omega \left(1 - \alpha\right)
\label{Ralfa}
\end{equation}
\noindent
Let us first consider the $\alpha=0$ case. Then, last expression
has the obvious ``Newtonian'' form
\centerline{Force $=$ mass $\times$ acceleration}
\noindent
since, as is well known, $\left(\rho + P_r\right)$ represents
the inertial mass density and by ``acceleration'' we mean the time derivative
of $\omega$ and not $(a_\mu a^\mu)^{1/2}$. In this case ($\alpha=0$), an
\textit{outward/inward} acceleration ($\dot \omega>0/\dot \omega<0$) is
associated with an \textit{outwardly/inwardly} ($R<0/R>0$) directed total
force (as one expects!).
\noindent
However, in the general case ($\alpha \not = 0$) the situation
becomes quite different. Indeed, the inertial mass term is now
multiplied by ($1-\alpha$), so that if $\alpha=1$, we obtain that
$\dot \omega \not = 0$ even though $R=0$. Still worse, if
$\alpha>1$, then an \textit{outward/inward} acceleration is associated with an
\textit{inwardly/outwardly} directed total force! However, as we shall see in
the next section, causality requirements constrain $\alpha$ to be less than 1.
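The sign reversal is easy to illustrate numerically by solving (\ref{Ralfa}) for $\dot\omega$. All numbers below are illustrative placeholders in geometrized units, not values taken from the text:

```python
# Numerical illustration of Eq. (Ralfa): the effective inertial mass
# density is (rho + P_r)(1 - alpha), so for alpha > 1 the acceleration
# opposes the applied force. All numbers are illustrative placeholders.

def omega_dot(R, rho_plus_Pr, alpha, metric_factor=1.0):
    """Solve -e^{(nu-lambda)/2} R = (rho + P_r)(1 - alpha) * wdot for wdot."""
    return -metric_factor * R / (rho_plus_Pr * (1.0 - alpha))

R = 1.0e-3            # total force directed inward (R > 0)
rho_plus_Pr = 1.0

a0 = omega_dot(R, rho_plus_Pr, alpha=0.0)     # "Newtonian" case
a_sub = omega_dot(R, rho_plus_Pr, alpha=0.5)  # causal regime, alpha < 1
a_sup = omega_dot(R, rho_plus_Pr, alpha=1.5)  # alpha > 1: sign reversal

print(a0, a_sub, a_sup)
```

For $\alpha<1$ an inward force still produces an inward acceleration (only the effective inertia is reduced), whereas for $\alpha>1$ the same inward force yields an outward acceleration, which is the pathology that the causality bound excludes.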
\noindent
The last term in (\ref{Catrel}) is frequently omitted (the so-called
``truncated'' theory) \cite{19}. In the context of this work both
components of this term vanish and therefore all results found above
are independent of the adopted theory (Israel-Stewart or truncated).
\section{Discussion.}
\noindent
Restrictions based on stability and causality were derived in \cite{9} for
Israel-Stewart thermodynamics. According to equation (134) of \cite{9},
it follows that we must have $\alpha<1$ in order to guarantee that thermal
pulses are subluminal.
\noindent
In fact, there is a similar parameter in the case of bulk viscous
perturbations
\cite{8}, which, due to causality and stability limits, should be
smaller than one.
\noindent
Before concluding we would like to make the following remarks:
\begin{enumerate}
\item Observe the formal similarity between the critical point and
the equation of state for an inflationary scenario ($\rho = - P_r$).
\item It should be clear that in the context of the perturbation scheme
used here, we get information only about the tendency of the system. To
find out the real influence of the critical point on the evolution of
the object, the full integration of the equations is required. Calculations
involving such integrations have been performed in the past
\cite{26}--\cite{28}, however in neither one of the examples examined there,
the system reaches the critical point. Furthermore, our configurations are
initially in global thermal equilibrium, which is a highly idealized
situation. In this sense, it should be stressed that our aim here is not to
model a real star but to study some specific aspects of relativistic
diffusion.
\item It should be clear that the analysis presented here depends strictly
on the validity of the diffusion approximation, which in turn depends on
the assumption of local thermodynamical equilibrium (LTE). Therefore, only
small deviations from LTE can be considered in the context of this work.
\item For the sake of completeness we have considered an anisotropic fluid
(instead of an isotropic one), leaving the origin of such anisotropy
completely
unspecified. As is apparent, anisotropy does not affect the most important
result obtained here (Eq.(\ref{Ralfa})). However, should anisotropy be related
to viscosity, then for consistency the anisotropic pressure tensor should be
subjected to the Israel-Stewart causal evolution equation for shear viscosity.
\end{enumerate}
\section*{Acknowledgments.}
\noindent
This work has been partially supported by the Spanish Ministry of
Education under grant No. PB94-0718.
\section{Introduction}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{pictures/teaser.jpg}
\end{center}
\caption{B-Rep segmentation into CAD operations types and steps.}
\label{fig:seg-group}
\end{figure}
In today’s digital era, Computer-Aided Design (CAD) is the standard option for designing objects ahead of manufacturing~\cite{drawing,importance,cadhistory}.
The parametric nature of CAD models allows engineers and designers to iterate over the parameters of existing CAD models to edit and adapt them to new contexts, such as customizing dental prostheses~\cite{solaberrieta2014computer}, or modifying mechanical parts~\cite{thompson1999feature}. However, this is only possible if the final shape of the CAD model comes with its design history. Unfortunately, this is rarely the case as the design history is often not available for generic 3D shapes~\cite{inversecsg} or lost when CAD models are exchanged between different CAD applications~\cite{kim2007integration,brepnet}. Consequently, the research community has put a lot of effort into relating the geometry of 3D shapes to the CAD design history~\cite{brepnet,uvnet,csgnet,inversecsg,zonegraph,deepcad}. This process is known as \textit{3D~reverse~engineering}.
Prior works attempted to recover the CAD design history, considering \textit{Constructive Solid Geometry} (CSG) based models~\cite{inversecsg,csgnet} for simplicity. In CSG, a CAD model is represented by a set of rigidly transformed solid primitives (\eg cube, sphere, cylinder) and combined using Boolean operations such as union, intersection, and difference~\cite{csg-brep}.
However, modern CAD workflows use \textit{feature-based} modeling, in which solids are created by iteratively adding features such as holes, slots, or bosses~\cite{zonegraph,FeatureNet}. These high-level features are sequentially created through drawing~\textit{sketches} and applying \textit{CAD operations} such as `\textit{extrusion}', `\textit{revolution}', etc. Figure~\ref{fig:seg-group}\textcolor{red}{a} illustrates an example of feature-based simple CAD model creation. Using this type of CAD modeling, the final model is stored in a data structure called \textit{Boundary-Representation} (B-Rep). The B-Rep describes the geometry and the topology of the CAD model through faces, edges, loops, co-edges and vertices~\cite{brepnet}. However, it does not include information about how these entities are designed.
Accordingly, recent state-of-the-art efforts have focused on relating B-Reps to the design history~\cite{zonegraph,brepnet,uvnet}. In particular, two main directions have been followed: (1)~segmenting the B-Rep faces into CAD operation types (\eg `\textit{extrusion}', `\textit{revolution}')~\cite{uvnet,brepnet} or higher-level machining features (\eg `\textit{holes}', `\textit{slots}')~\cite{cadnet} that allowed their creation; (2)~inferring a sequence of parametric sketches and extrusions that allowed the design of the B-Rep~\cite{zonegraph,fusion360,point2cyl}. While the first group of works has the advantage of relating each face of the B-Rep to various types of CAD operations, it describes neither the relationship between the faces nor the steps of the construction. On the other hand, the works taking the second direction reconstruct the ordered sequence of the design history, including sketches, but they are usually limited to only one CAD operation type (\ie `\textit{extrusion}') as a simplification of the search space.
In this work, we combine both directions by segmenting the faces of the B-Reps into various CAD operation types and further decomposing them into steps of construction as shown in~Figure~\ref{fig:seg-group}. These two aspects are jointly learned using an end-to-end neural network, allowing the recovery of further information about the design history such as CAD sketches. The proposed method is evaluated on the publicly available Fusion360 dataset~\cite{fusion360}, and a newly introduced dataset that is closer to real-world challenges. The key \textbf{contributions} can be summarized as follows:
\vspace{-0.15cm}
\begin{itemize}
\item A neural network, \emph{\hbox{CADOps-Net}}, that operates on B-Reps is proposed to learn the segmentation of faces into CAD operation types and steps. We introduce a joint learning method within an end-to-end model.
\vspace{-0.15cm}
\item We create a novel dataset, \emph{\hbox{CC3D-Ops}}, that builds on top of the existing CC3D~dataset~\cite{cc3d} by extending it with B-Reps and their corresponding per-face CAD operation type and step annotations. Compared to existing datasets~\cite{fusion360,mfcad,abc_dataset}, \emph{\hbox{CC3D-Ops}}~better reflects real-world industrial challenges thanks to the complexity of its CAD models. This dataset can be found at \url{https://cvi2.uni.lu/cc3d-ops/}.
\vspace{-0.15cm}
\item The proposed approach is evaluated on two datasets and compared to recent state-of-the-art methods. We further showcase some preliminary results on a possible downstream application consisting of CAD sketch recovery from B-Reps.
\end{itemize}
The rest of the paper is organized as follows. Related works are discussed in Section~\ref{sec:related-works}, followed by the problem formulation in Section~\ref{sect:pblmstatement}. Section~\ref{sec:approach} describes the proposed \emph{\hbox{CADOps-Net}}. The proposed \emph{\hbox{CC3D-Ops}}~dataset is introduced in Section~\ref{dataset}. The experimental results are reported and analyzed in Section~\ref{sec:expt}. Finally, Section~\ref{sec:conclusion} concludes this work and presents directions for future work.
\section{Related Works}
\label{sec:related-works}
Learning representations for 3D shape modeling~\cite{ahmed2018survey} is an important research topic that aims at finding the best deep feature encoding method. For instance, while a group of works leverages feature embedding for unordered and irregular point clouds~\cite{pointnet, pointnet++, dgcnn, PointConv, DensePoint} or regular grids of voxels~\cite{voxnet, pvcnn, cc3d, RPSRNet, o-cnn}, another group of works~\cite{meshCNN, spiralnet++, GeodscCNN} defines convolution kernels and feature embedding techniques for meshes and manifolds. Other works~\cite{inversecsg,brepnet,uvnet,csgnet} focused on learning from high-level 3D shape representations such as CAD models. These methods either assume that the CAD models are obtained using CSG or feature-based modeling. In particular, the recovery of the CAD design history considering these two types of modeling has attracted a lot of attention~\cite{fusion360,uvnet,brepnet, csgnet, inversecsg}.
\vspace{0.12cm}
\noindent\textbf{CSG-based Approaches.} Several approaches~\cite{csgnet, csg-solid, csg-brep, constructingCSG} attempt to infer the design history of CAD models using CSG representation. For instance, when the input shape is a 3D point cloud, \cite{inversecsg} and~\cite{constructingCSG} convert it to the CSG tree (mainly binary-tree) of solid bodies, which is a volumetric representation of simple geometrical primitives. Similarly, when the input is a B-Rep or a solid body,~\cite{BRep2CSG} and~\cite{csg-solid} describe unique CSG conversion steps (or vice-versa in~\cite{csg-brep}).
The conversion reveals hierarchical steps involved in modeling solid bodies, whereas CAD models appear more as connected surface patches than volumetric solids~\cite{netgen}. Therefore, predicting CSG construction history may not reveal the actual CAD construction steps used in modern CAD workflows~\cite{zonegraph}. The latter mostly consider B-Reps instead of CSG and rely on feature-based modeling, which is addressed in our work.
\vspace{0.12cm}
\noindent\textbf{Feature-based Approaches.} The methods that either directly learn the B-Rep structure of a CAD model~\cite{uvnet, brepnet, solidgen, zonegraph, cadnet} or predict sketches and CAD operations~\cite{deepcad, sketchgen, sketchgraphs, cad-as-a-lang} are closely related to our work. The works in~\cite{sketchgen, cad-as-a-lang} propose generative models for CAD sketches with a focus on the constraints of sketch entities. Therefore, they do not consider the connection between constrained CAD sketches and operations. On the other hand, methods like SolidGen~\cite{solidgen}, BRepNet~\cite{brepnet}, UV-Net~\cite{uvnet}, CADNet~\cite{cadnet} put more emphasis on how to use the B-Rep data structure to obtain face embeddings followed by face segmentation, but obscure the relation between the segmented faces and design steps. DeepCAD~\cite{deepcad}, Fusion360~\cite{fusion360} and Zone-graph~\cite{zonegraph} are the first set of methods, to the best of our knowledge, that relate parametric sketches and CAD operations by proposing a generative model for CAD design. However, their models were restricted to only one type of CAD operation, namely extrusion. Finally, Point2Cyl~\cite{point2cyl} operates on point clouds to detect 2D sketches but is also limited to the CAD extrusion operation.
\vspace{0.12cm}
\noindent\textbf{CAD Modeling Datasets.} Besides Fusion360~\cite{fusion360}, there are no datasets that provide both B-Reps and a fully explicit construction history in a standard format. For example, the ABC dataset~\cite{abc_dataset} provides $1M+$ CAD models with sparse construction history provided in Onshape's proprietary format~\cite{fusion360}. On the other hand, the SketchGraphs dataset~\cite{sketchgraphs} contains a large number of sketch construction sequences but not the B-Reps. Both MFCAD~\cite{mfcad} and MFCAD++~\cite{cadnet} datasets contain B-Reps and machining feature labels. However, the samples are synthetic models and too simple for industrial modeling tasks. The CC3D dataset~\cite{cc3d} offers $50k+$ pairs of industrial CAD models as triangular meshes and their corresponding 3D scans, but without construction steps and B-Reps. \emph{\hbox{CC3D-Ops}}~supplements the CC3D dataset with these elements.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.87\linewidth]{pictures/Presentation-10.png}
\caption{The \emph{\hbox{CADOps-Net}}~joint learning network architecture. The input B-Rep, $\mathcal{B}$, is first passed through a BrepNet backbone, $\mathbf{\Delta}$, to obtain face embeddings, $\mathbf{F}^{\Delta}$. These embeddings are then fed to an MLP layer, $\mathbf{\sigma}$, to predict the face \textit{\hbox{op.step}}~segmentation, $\mathbf{\widehat{S}}$. Using these predictions, the face embeddings, $\mathbf{F}^{\Delta}$, are aggregated with a function $\mathcal{A}$ into step embeddings, $\mathbf{S}^{\mathcal{A}}$. Finally, the concatenation, $\oplus$, of the face embeddings, $\mathbf{F}^{\Delta}$, and their corresponding step embeddings, $\mathbf{S}^{\mathcal{A}}$, is passed through an MLP layer, $\mathbf{\rho}$, to predict the \textit{\hbox{op.type}}~face labels.}
\label{fig:pipeline}
\end{center}
\end{figure*}
\section{Problem Statement}
\label{sect:pblmstatement}
A B-Rep $\mathcal{B}$ can be defined as a tuple of three sets of entities -- \ie,
a set of $N_f$ faces \hbox{$\{f_1, f_2, \dots, f_{N_f}\}$},
a set of $N_e$ edges \hbox{$\{e_1, e_2, \dots, e_{N_e}\}$},
and
a set of $N_c$ co-edges (also known as directed half-edges) \hbox{$\{c_1, c_2, \dots, c_{N_c}\}$}.
Our main goal is to relate each face $f$ in $\mathcal{B}$ with its construction history using three different types of features
\hbox{$\mathbf{F}\in\mathbb{R}^{N_f \times d_f}$}, \hbox{$\mathbf{E}\in\mathbb{R}^{N_e \times d_e}$},
and
\hbox{$\mathbf{C}\in\mathbb{R}^{N_c \times d_c}$}
extracted for the three entities, namely, faces, edges, and co-edges, respectively\footnote{The considered features are described in Section~\ref{sect:exp-setup}.}.
The CAD construction history is defined as a sequential combination of sketches followed by CAD operations. In this work, we are interested in learning (1) the type of CAD operation
that allowed for the creation of each face, through face segmentation, and (2) the CAD operation step to which each segmented face belongs.
\subsection{CAD Operation Types}
The choice of CAD operation types is crucial for constructing CAD models. For notation simplicity, let us denote them as \textbf{\textit{\hbox{op.types}}}.
The geometry of the final CAD model, usually stored as a B-Rep, is obtained through these operations, which makes each face of the B-Rep directly related to a type of operation. In Figure~\ref{fig:seg-group}\textcolor{red}{c}, we show some intermediate steps of CAD construction and how the faces of the corresponding B-Rep are obtained using different \textit{\hbox{op.types}}. For example, the B-Rep of a cube that was obtained by sketching a 2D square and applying an extrusion operation, as in Figure~\ref{fig:seg-group}\textcolor{red}{a}, would result in two faces with \textit{`extrude~end'} labels and four faces with \textit{`extrude~side'} labels. The ability to automatically infer the \textit{\hbox{op.type}}~that allowed for the creation of each face of the B-Rep constitutes a first, yet essential, step towards relating the geometry of the CAD model to its construction history. Recently introduced models~\cite{brepnet,uvnet} proposed to learn the segmentation of B-Rep faces into \textit{\hbox{op.types}}.\\
Formally, let us consider a B-Rep $\mathcal{B}$ labelled with the per-face \textit{\hbox{op.types}}~$\mathbf{T}~=~[\mathbf{t}_1;\mathbf{t}_2;\dots;\mathbf{t}_{N_f}]~\in~{\{0,1\}}^{N_f \times k_t}$, where $k_t$ is the number of possible \textit{\hbox{op.types}}. Here, $\mathbf{T}~\in~{\{0,1\}}^{N_f \times k_t}$ is an $N_f \times k_t$ matrix with binary entries, where each row $\mathbf{t}_j \in \{0,1\}^{k_t}$ can have only one element equal to $1$, representing the \textit{\hbox{op.type}}~of the face $f_j$. The task of \textit{\hbox{op.type}}~segmentation consists of learning a mapping~$\mathbf{\Phi}$, such that,
\setlength\abovedisplayskip{0pt}
\begin{align}
\mathbf{\Phi} : \mathbb{R}^{N_f \times d_f} \times \mathbb{R}&^{N_e \times d_e} \times \mathbb{R}^{N_c \times d_c} \rightarrow {\{0,1\}}^{N_f \times k_t} \ , \\
&\mathbf{\Phi}(\mathbf{F},\mathbf{E},\mathbf{C})=\mathbf{T} \ .
\end{align}
\setlength\abovedisplayskip{0pt}
It is important to highlight that the segmentation task of \textit{\hbox{op.types}}~uses the features of faces, edges and co-edges, but assigns a unique \textit{\hbox{op.type}}, among a fixed number of possible types, to each face of the B-Rep. Despite its usefulness for reconstructing the CAD construction history of B-Reps, the segmentation into \textit{\hbox{op.types}}~is not sufficient as it does not describe the relationship between the faces nor the steps of the construction.
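As a minimal illustration of the mapping $\mathbf{\Phi}$, the sketch below treats \textit{\hbox{op.type}}~segmentation as per-face classification on the toy cube discussed earlier, with only the two extrusion labels ($k_t=2$) for brevity. The random linear classifier stands in for a trained network and is purely an assumption of this illustration, not the model proposed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy B-Rep: a cube with N_f = 6 faces, each described by a d_f-dim feature.
# In the cube example, two faces are 'extrude end' (class 0) and four are
# 'extrude side' (class 1); here k_t = 2 op.types for brevity.
N_f, d_f, k_t = 6, 8, 2
F = rng.standard_normal((N_f, d_f))   # face features
W = rng.standard_normal((d_f, k_t))   # stand-in for a learned classifier

logits = F @ W                        # (N_f, k_t) per-face scores
pred = logits.argmax(axis=1)          # one op.type per face

# One-hot matrix T as in the problem statement: each row has a single 1.
T = np.zeros((N_f, k_t), dtype=int)
T[np.arange(N_f), pred] = 1

print(T)
```

The key structural property is that each row of $\mathbf{T}$ contains exactly one $1$, i.e. every face receives a unique \textit{\hbox{op.type}}.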
\subsection{CAD Operation Steps} \label{sec:group-memb}
In addition to the operation types that are assigned to the faces of the B-Reps, our aim is to relate them further to the construction history. Accordingly, we propose a novel task consisting of segmenting the faces of B-Reps into CAD operation steps. For notation simplicity, they will be denoted as \textbf{\textit{\hbox{op.steps}}} in what follows.
While the segmentation into \textit{\hbox{op.types}}~aims at identifying the operation that was used to create each face, the purpose of the segmentation into \textit{\hbox{op.steps}}~is to group faces that were created at the same time step. An example is shown in Figure~\ref{fig:seg-group}\textcolor{red}{b}. \\
Formally, let us consider a B-Rep $\mathcal{B}$ labelled with the per-face \textit{\hbox{op.steps}}~$\mathbf{S} \in \{0,1\}^{N_f \times k_{s}}$, where $k_s$ denotes the number of \textit{\hbox{op.steps}}~in $\mathcal{B}$. Similarly to the \textit{\hbox{op.types}}~$\mathbf{T}$, the \textit{\hbox{op.steps}}~are represented by an $N_f \times k_s$ binary matrix $\mathbf{S}~=~[\mathbf{s}_1; \mathbf{s}_2; \dots ; \mathbf{s}_{N_f}]~\in~\{0,1\}^{N_f \times k_{s}}$, where each row $\mathbf{s}_j \in \{0,1\}^{k_{s}}$ is one-hot, its single nonzero entry denoting the \textit{\hbox{op.step}}~of the face $f_j$. Segmenting the faces of B-Reps into \textit{\hbox{op.steps}}~would then require learning a mapping~$\mathbf{\Psi}$,
\setlength\abovedisplayskip{0pt}
\begin{align}
\mathbf{\Psi} : \mathbb{R}^{N_f \times d_f} \times \mathbb{R}&^{N_e \times d_e} \times \mathbb{R}^{N_c \times d_c} \rightarrow {\{0,1\}}^{N_f \times k_s} \ , \\
&\mathbf{\Psi}(\mathbf{F},\mathbf{E},\mathbf{C})=\mathbf{S} \ .
\end{align}
\setlength\abovedisplayskip{0pt}
The proposed segmentation into \textit{\hbox{op.steps}}~is a challenging task for two main reasons: (1) unlike the \textit{\hbox{op.type}}~segmentation where the possible types are predefined, the labels of \textit{\hbox{op.steps}}~$\textbf{S}$ are arbitrary and any combination of labels, in which faces belonging to the same step have identical labels, can be considered as correct; (2) predicting \textit{\hbox{op.steps}}~aims at grouping B-Rep faces according to the design history. Therefore, it requires learning the relationship between the different faces of the B-Rep in addition to its geometry and topology.
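To make challenge (1) concrete: two per-face step labelings are equivalent whenever they induce the same partition of the faces. The short Python sketch below (purely illustrative, not part of the proposed model) checks this equivalence up to a renaming of the step labels.

```python
def same_grouping(s1, s2):
    """True iff two per-face step labelings induce the same partition of
    the faces, i.e. they agree up to a bijective renaming of the labels."""
    fwd, bwd = {}, {}
    for a, b in zip(s1, s2):
        # each label in s1 must map to exactly one label in s2, and back
        if fwd.setdefault(a, b) != b or bwd.setdefault(b, a) != a:
            return False
    return True

# [0, 0, 1, 2] and [2, 2, 0, 1] describe the same groups of faces,
# while [0, 1, 1, 2] merges faces that belong to different steps.
assert same_grouping([0, 0, 1, 2], [2, 2, 0, 1])
assert not same_grouping([0, 1, 1, 2], [0, 0, 1, 2])
```

Any such relabeling must be treated as a correct prediction, which is what motivates the matching-based loss introduced later.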
\section{Proposed CADOps-Net}
\label{sec:approach}
The proposed \emph{\hbox{CADOps-Net}}~jointly learns the \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~segmentation within the same model. In practice, the mappings $\mathbf{\Psi}$ and $\mathbf{\Phi}$, introduced in Section~\ref{sect:pblmstatement}, are learnt using an end-to-end neural network. BRepNet~\cite{brepnet} is used as the backbone of our model, as it has been shown to effectively operate on B-Reps. BRepNet uses the face, edge, and co-edge features $(\mathbf{F},\mathbf{E},\mathbf{C})$ of a B-Rep $\mathcal{B}$ to learn per-face embeddings using a succession of convolutions defined through specific topological walks and Multilayer Perceptron (MLP) layers. For more details about this backbone, readers are referred to~\cite{brepnet}. In what follows, the BRepNet backbone will be denoted by $\mathbf{\Delta}~:~\mathbb{R}^{N_f \times d_f}~\times~\mathbb{R}^{N_e \times d_e}~\times~\mathbb{R}^{N_c \times d_c}~\rightarrow~\mathbb{R}^{N_f \times d_{emb}}$ and $\mathbf{f}^{\Delta}$ will be used as a notation for the embedding extracted using this backbone from a face $f$ of a B-Rep $\mathcal{B}$. The proposed network is composed of two modules that are described below.
\subsection{CAD Operation Step Segmentation} \label{section:step_module}
The CAD operation step module has two roles. Firstly, it predicts the per-face \textit{\hbox{op.step}}~labels. Secondly, it is used to aggregate the embeddings of faces belonging to the same step and produce embeddings for each group of faces obtained in a single \textit{\hbox{op.step}}.
\vspace{0.12cm}
\noindent\textbf{Learning CAD operation steps: }
The mapping $\mathbf{\Psi}$ introduced in Section~\ref{sec:group-memb} consists of two components, \ie, $\mathbf{\Psi}~:=~\mathbf{\sigma}~\circ~\mathbf{\Delta}$, where $\mathbf{\Delta}$ uses the features of the B-Rep $(\mathbf{F},\mathbf{E},\mathbf{C})$ and extracts per-face embeddings $\mathbf{F}^{\Delta}~=~[\mathbf{f}_1^{\Delta};~\mathbf{f}_2^{\Delta}~;~\dots~;~\mathbf{f}_{N_f}^{\Delta}]~\in~\mathbb{R}^{N_f \times d_{emb}}$. $\sigma$ is an MLP followed by a softmax that maps the face embeddings $\mathbf{F}^{\Delta}$ into probabilities of predicted \textit{\hbox{op.steps}}~$\mathbf{\widehat{S}}~=~[\mathbf{\widehat{s}}_1; \mathbf{\widehat{s}}_2;~\dots~;~\mathbf{\widehat{s}}_{N_f}]~\in~{[0,1]}^{N_f \times k_{s}}$. Here, each face $f_j$ has a vector $\mathbf{\widehat{s}}_j \in [0,1]^{k_{s}}$ specifying its membership probabilities over the $k_s$ \textit{\hbox{op.steps}}. Since the number of \textit{\hbox{op.steps}}~in a CAD model is not known in advance, we set $k_s$ to the largest number of steps per model observed in the training dataset.
As mentioned in Section~\ref{sec:group-memb}, a particular challenge for predicting the \textit{\hbox{op.steps}}~is that the ground truth labels $\mathbf{S}$ are arbitrary. Therefore, the task consists of predicting the combination of steps that matches the ground truth labels. Inspired by~\cite{spfn,point2cyl}, we use a Hungarian matching~\cite{hungarian} to find the best one-to-one correspondences between the predicted \textit{\hbox{op.steps}}~$\mathbf{\widehat{S}}$ and ground truth labels $\mathbf{S}$. Even though the Hungarian matching is not differentiable, it is only used to find the correspondences in the training phase, allowing for the computation of a \textit{Relaxed Intersection over Union} (RIoU)~\cite{param_learning} metric between pairs of predictions $\mathbf{\widehat{s}}$ and ground truth $\mathbf{s}$ as follows,
\begin{equation}
\mbox{RIoU}(\mathbf{s}, \mathbf{\widehat{s}}) = \frac{\mathbf{s}^{\text{T}}\mathbf{\widehat{s}}}{||\mathbf{s}||_1+||\mathbf{\widehat{s}} ||_1 -\mathbf{s}^{\text{T}}\mathbf{\widehat{s}} } \ ,
\end{equation}
where $||.||_1$ denotes the $\ell_1$ norm, and $^{\text{T}}$ the vector transpose. The RIoU metric is further used to define the following \textit{\hbox{op.step}}~loss function,
\begin{equation}
\mathcal{L}_{step}= \frac{1}{N_f}\displaystyle \sum_{j=1} ^{N_f} (1- \mbox{RIoU}(\mathbf{s}_j, \mathbf{\widehat{s}}_{j})) \ .
\end{equation}
\noindent For inference, the Hungarian matching is not used and the predicted \textit{\hbox{op.steps}}~are given by taking the maximum probability over each $\mathbf{\widehat{s}}$.
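As a concrete sketch of this training-time procedure, the snippet below builds a column-wise RIoU cost matrix, finds the best one-to-one step correspondence with SciPy's Hungarian solver, and evaluates the per-face loss on the matched prediction. This is one plausible NumPy reading of the matching step, not the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian matching

def riou(a, b):
    """Relaxed IoU between a binary vector a and a probability vector b."""
    inter = float(a @ b)
    return inter / (a.sum() + b.sum() - inter + 1e-8)

def step_loss(S, S_hat):
    """S: (N_f, k_s) one-hot ground truth; S_hat: (N_f, k_s) softmax scores.
    Returns the Hungarian-matched op.step loss."""
    k_s = S.shape[1]
    # Cost of assigning predicted step j to ground-truth step i, measured
    # between the per-step membership columns.
    cost = np.array([[1.0 - riou(S[:, i], S_hat[:, j])
                      for j in range(k_s)] for i in range(k_s)])
    _, pred_idx = linear_sum_assignment(cost)  # best one-to-one matching
    S_hat_matched = S_hat[:, pred_idx]         # reorder predicted columns
    # Average per-face (1 - RIoU) over the matched rows, as in the loss above.
    return float(np.mean([1.0 - riou(S[f], S_hat_matched[f])
                          for f in range(S.shape[0])]))
```

A prediction that is a perfect permutation of the ground-truth steps yields a loss near zero, since the matching absorbs the arbitrary relabeling.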
\vspace{0.12cm}
\noindent\textbf{CAD operation step embedding}: In addition to predicting the per-face \textit{\hbox{op.steps}}~given a B-Rep, the same module is used to extract CAD step embeddings
$\{\mathbf{s}^{\mathcal{A}}_1,\mathbf{s}^{\mathcal{A}}_2,\dots,\mathbf{s}^{\mathcal{A}}_{k_s}\}$. This is achieved by aggregating the embeddings of faces predicted to belong to the same \textit{\hbox{op.step}}. Specifically, each \textit{\hbox{op.step}}~$\varphi$ would have an embedding $\mathbf{s}^{\mathcal{A}}_{\varphi} \in \mathbb{R}^{d_{emb}}$, such that
\begin{equation}
\mathbf{s}^{\mathcal{A}}_{\varphi} = \underset{j=\argmax\mathbf{\widehat{S}}_{:, \varphi}}{\mathcal{A}} \mathbf{f}^{\Delta}_j \ ,
\label{eq:aggregation}
\end{equation}
where $\mathbf{\widehat{S}}_{:,\varphi}$ denotes the per-face predicted \textit{\hbox{op.step}}~labels for $\varphi$, and $\mathcal{A}$ is an aggregation function that preserves the dimension of the input embeddings, such as the average or the maximum. Each face of the B-Rep is then assigned the \textit{\hbox{op.step}}~embedding $\mathbf{s}^{\mathcal{A}}$ corresponding to its predicted \textit{\hbox{op.step}}~label. These embeddings are finally stacked in a matrix $\mathbf{S}^{\mathcal{A}} \in \mathbb{R}^{N_f \times d_{emb}}$.
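This aggregation amounts to a scatter-style pooling over the hard step assignments. The snippet below is an illustrative NumPy version, assuming average pooling for the aggregation function; it broadcasts each step embedding back to the member faces.

```python
import numpy as np

def step_embeddings(F_delta, S_hat, agg=np.mean):
    """F_delta: (N_f, d_emb) face embeddings from the backbone.
    S_hat:   (N_f, k_s) predicted op.step probabilities.
    Returns S_A: (N_f, d_emb), each face carrying the aggregated
    embedding of the op.step it is predicted to belong to."""
    labels = S_hat.argmax(axis=1)        # hard step assignment per face
    S_A = np.zeros_like(F_delta)
    for step in np.unique(labels):
        members = labels == step
        # pool the member faces' embeddings, then broadcast back to them
        S_A[members] = agg(F_delta[members], axis=0)
    return S_A
```

Swapping `agg` for `np.max` or a normalized sum reproduces the alternatives compared later in the ablation study.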
\subsection{CAD Operation Type Segmentation}
The introduced mapping $\mathbf{\Phi}$ to obtain the \textit{\hbox{op.type}}~segmentation from an input B-Rep shares the same BRepNet backbone $\mathbf{\Delta}$ used by the \textit{\hbox{op.step}}~segmentation module. Moreover, it uses two other mappings, $\mathbf{\gamma}$ and $\mathbf{\rho}$, where $\mathbf{\Phi}~:=~ \mathbf{\rho} \circ \gamma \circ \mathbf{\Delta}$. The mapping $\mathbf{\gamma}~:~\mathbb{R}^{N_f \times d_{emb}}~\times~\mathbb{R}^{N_f \times d_{emb}}~\rightarrow~\mathbb{R}^{N_f \times 2d_{emb}}$ takes as input the face embeddings $\mathbf{F}^{\Delta}$ and outputs their concatenation with the corresponding step embeddings $\mathbf{S}^{\mathcal{A}}$. These concatenated embeddings are fed to an MLP followed by a softmax, represented by $\mathbf{\rho}~:~\mathbb{R}^{N_f \times 2d_{emb}}~\rightarrow~{\{0,1\}}^{N_f \times k_t}$. The final \textit{\hbox{op.types}}~$\mathbf{\widehat{T}}$ are obtained as follows,
\begin{equation}
\mathbf{\widehat{T}} = \mathbf{\rho}(\mathbf{F}^{\Delta}\oplus \mathbf{S}^{\mathcal{A}}) \ ,
\label{eq:concat_CADType}
\end{equation}
\noindent where $\oplus$ is the column-wise concatenation operation.
The loss function for the \textit{\hbox{op.type}}~segmentation is computed using the cross-entropy $\mathcal{H}$ between the predicted per-face \textit{\hbox{op.types}}~$\mathbf{\widehat{t}}$ and the ground truth labels $\mathbf{t}$,
\begin{equation}
\mathcal{L}_{type} = \frac{1}{N_f}\sum_{j=1}^{N_f}{\mathcal{H}(\mathbf{t}_j, \mathbf{\widehat{t}}_j)} \ .
\end{equation}
The total loss function is the sum of the \textit{\hbox{op.step}}~and \textit{\hbox{op.type}}~losses,
\begin{equation}
\mathcal{L}_{total} = \mathcal{L}_{step} + \mathcal{L}_{type} \ .
\end{equation}
\noindent The model jointly learns to predict the per-face \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~labels of a CAD model given its B-Rep, with the \textit{\hbox{op.type}}~being conditioned on the \textit{\hbox{op.step}}.
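Putting the pieces together, the op.type head and the two objectives can be sketched as follows. This is a minimal NumPy illustration with a single-layer head; `W` and `b` are hypothetical learnable parameters, and the actual model trains everything end-to-end through the backbone.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def type_head(F_delta, S_A, W, b):
    """Single-layer MLP + softmax over concatenated face and step
    embeddings: T_hat = rho(F_delta concat S_A)."""
    X = np.concatenate([F_delta, S_A], axis=1)  # column-wise concatenation
    return softmax(X @ W + b)                   # (N_f, k_t) probabilities

def type_loss(T, T_hat):
    """Mean per-face cross-entropy between one-hot T and predictions."""
    return float(-np.mean(np.sum(T * np.log(T_hat + 1e-9), axis=1)))

def total_loss(step_loss_value, type_loss_value):
    """Unweighted sum of the two objectives, as in the text."""
    return step_loss_value + type_loss_value
```

Because the concatenated input contains the step embedding, gradients from the type loss also flow into the step branch, which is the mechanism behind the joint learning.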
\section{CC3D-Ops dataset}
\label{dataset}
We introduce the \emph{\hbox{CC3D-Ops}}~dataset that contains $37k+$ B-Reps with the corresponding per-face \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~annotations. These labels were extracted using the Solidworks API~\cite{solidworks}. The B-Reps and their corresponding annotations constitute an extension of the CC3D~dataset~\cite{cc3d}.
While the Fusion360 dataset~\cite{fusion360} contains a similar number of B-Reps ($35k+$) with the corresponding \textit{\hbox{op.type}}~labels, it does not provide \textit{\hbox{op.step}}~labels and it includes relatively simple CAD models. The proposed \emph{\hbox{CC3D-Ops}}~dataset comes with more complex models that are closer to real-world industrial challenges. In Figure~\ref{fig:dataset_stats}, we illustrate the distribution of the number of \textit{\hbox{op.steps}}~per model as a box plot for both the Fusion360 and \emph{\hbox{CC3D-Ops}}~datasets. It can be clearly observed that the distribution of \emph{\hbox{CC3D-Ops}}~is more skewed towards a higher number of \textit{\hbox{op.steps}}~than that of Fusion360.
Specifically, {\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}$48\%$ of the Fusion360 models are made of only one \textit{\hbox{op.step}}~and {\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}$80\%$ of them are constructed with $3$ or fewer \textit{\hbox{op.steps}}. On the other hand, only {\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}$20\%$ of the \emph{\hbox{CC3D-Ops}}~models are built with a single \textit{\hbox{op.step}}~and {\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}$44\%$ of them with $3$ or fewer \textit{\hbox{op.steps}}.
Moreover, the maximum number of \textit{\hbox{op.steps}}~per model, $k_{s}$, is $59$ for Fusion360 and $262$ for \emph{\hbox{CC3D-Ops}}.
Finally, the \emph{\hbox{CC3D-Ops}}~dataset introduces three new \textit{\hbox{op.types}}~in addition to the eight present in Fusion360, namely `\textit{cut revolve side}', `\textit{cut revolve end}', and `\textit{others}'. More details about the dataset can be found in the supplementary material.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{pictures/step_box.png}
\caption{Boxplot of the number of \textit{\hbox{op.steps}}~per model in Fusion360~\cite{fusion360} and the proposed \emph{\hbox{CC3D-Ops}}~dataset.
}
\label{fig:dataset_stats}
\end{figure}
\section{Experiments}
\label{sec:expt}
\subsection{Experimental Setup}
\label{sect:exp-setup}
\noindent\textbf{Input Features}: The input features of \emph{\hbox{CADOps-Net}}~are face, edge and co-edge features $(\mathbf{F},\mathbf{E},\mathbf{C})$ extracted from the B-Rep, $\mathcal{B}$. Following~\cite{brepnet}, the face type (\eg plane, cylinder, sphere) and area are encoded in a single vector. 3D points are further sampled on each face using the UV-grid of the B-Rep and encoded as described in~\cite{uvnet}. These two features are concatenated and used as face features. The features of the B-Rep faces are then concatenated in a \hbox{row-wise} fashion to form the matrix $\textbf{F}$. For edge features, a similar approach is taken by considering the type, convexity, closeness, length of the edge as in~\cite{brepnet}, and encoded sampled 3D points as done in~\cite{uvnet}. The result is concatenated in an edge feature matrix $\mathbf{E}$. The co-edge features, $\textbf{C}$, are simple flags to represent the direction of the corresponding edges~\cite{brepnet}.
\vspace{0.12cm}
\noindent\textbf{Network Architecture}: The input features are passed through a BRepNet backbone, $\mathbf{\Delta}$, with the same parameters as in~\cite{brepnet} using the wing-edge kernel. The dimension of the face embedding, $\mathbf{f}^{\Delta}$, is $d_{emb} = 64$. These embeddings are fed to an MLP followed by a softmax, $\mathbf{\sigma}$, to predict the \textit{\hbox{op.step}}. The aggregation function used to compute the step embeddings, $\mathbf{S}^{\mathcal{A}}$, is the average. Each \textit{\hbox{op.step}}~embedding $\mathbf{s}^{\mathcal{A}}$ has the same dimension as $\mathbf{f}^{\Delta}$. The final face embeddings, $\mathbf{f}^{\Delta}~\oplus~\mathbf{s}^{\mathcal{A}}$, are $128$-dimensional. Lastly, the \textit{\hbox{op.type}}~is estimated by passing these embeddings through an MLP followed by a softmax, $\mathbf{\rho}$. In our experiments, each employed MLP has a single layer.
\vspace{0.12cm}
\noindent\textbf{Datasets:} \emph{\hbox{CADOps-Net}}~is evaluated on the Fusion360 dataset~\cite{fusion360} and the novel \emph{\hbox{CC3D-Ops}}~dataset described in Section~\ref{dataset}. Note that in Fusion360, the \textit{\hbox{op.step}}~annotations were derived from the \textit{\hbox{op.type}}~annotations as they were implicitly provided. The train, validation, and test sets for the Fusion360 dataset are the same as in~\cite{brepnet}. For the \emph{\hbox{CC3D-Ops}}~dataset, the splitting ratios are approximately $65\%$, $15\%$, and $20\%$ for the train, validation, and test sets.
\vspace{0.12cm}
\noindent\textbf{Training details}: The training was conducted for $200$ epochs with a batch size of $100$ using an NVIDIA RTX $A6000$ GPU. The Adam optimizer is employed with a learning rate of $0.001$ and beta parameters of $0.9$ and $0.99$.
\vspace{0.12cm}
\noindent\textbf{Metrics}: The performance of the network is evaluated on \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~segmentation tasks. To evaluate the \textit{\hbox{op.type}}~segmentation, we use the same metrics as in~\cite{brepnet}, namely, the mean accuracy (mAcc) and the mean Intersection over Union (mIoU).
Note that we do not consider the mIoU for evaluating the \textit{\hbox{op.step}}~as the labels represent membership sets rather than predefined classes.
Furthermore, the consistency between the \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~predictions is considered. For this purpose, we group the sub-\textit{\hbox{op.types}}, such as \textit{`extrude~end'} and \textit{`extrude~side'}, into a single \textit{`extrude'} \textit{\hbox{op.type}}. A similar grouping is done for \textit{`revolve'}, \textit{`cut~extrude'}, and \textit{`cut~revolve'}. We define an \textit{\hbox{op.step}}~prediction as consistent if all its faces have the same \textit{\hbox{op.type}}~prediction. To evaluate this consistency, two metrics are computed: (1) the first one, $R_{C}$, quantifies the overall consistency as the ratio of consistent predicted \textit{\hbox{op.steps}}; (2) the second one quantifies the degree of consistency within a model as $S_{C}=\sum_{i} \frac{ \max(n_{(t_1,s_i)},..., n_{(t_{k_t},s_i)} )}{n_{s_i}}$, where $n_{s_i}$ is the number of faces with \textit{\hbox{op.step}}~label $s_i$ and $n_{(t_j,s_i)}$ is the number of faces with \textit{\hbox{op.type}}~label $t_j$ and \textit{\hbox{op.step}}~label $s_i$. We then compute $mS_C$ as the average over all the models.
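These two consistency scores can be computed per model as sketched below (illustrative NumPy code; here the per-model score is normalized by the number of predicted steps so that it lies in [0, 1], which matches the reported percentages).

```python
import numpy as np

def consistency_scores(type_labels, step_labels):
    """type_labels, step_labels: (N_f,) integer predictions per face, with
    sub-types already grouped (e.g. extrude end/side -> extrude).
    Returns (R_C, S_C) for one model: the ratio of fully consistent
    op.steps, and the mean dominant-type fraction over op.steps."""
    fractions = []
    for s in np.unique(step_labels):
        types_in_step = type_labels[step_labels == s]
        counts = np.bincount(types_in_step)
        fractions.append(counts.max() / len(types_in_step))
    fractions = np.array(fractions)
    R_C = float(np.mean(fractions == 1.0))  # steps with a single op.type
    S_C = float(np.mean(fractions))         # per-model consistency degree
    return R_C, S_C
```

A step whose faces carry mixed op.type predictions lowers both scores, with $S_C$ degrading gracefully while $R_C$ drops to zero for that step.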
\begin{figure}[!ht]
\begin{center}
\includegraphics[trim= 5 5 5 5,clip,width=.9\linewidth]{pictures/predictions.pdf}
\caption{Sample predictions on five models from the \emph{\hbox{CC3D-Ops}}~dataset. (Left): The CAD operation type segmentation. (Right): The CAD operation step segmentation. For both tasks, the ground truth (GT) is shown on the left, the prediction (Pred.) in the middle, and the error (Error) on the right, illustrating the correct/incorrect face predictions.}
\label{fig:prediction_examples}
\end{center}
\end{figure}
\subsection{Results and Discussions}
\noindent\textbf{Qualitative Evaluation:} In Figure~\ref{fig:prediction_examples}, we illustrate the predictions obtained by \emph{\hbox{CADOps-Net}}~on five models from the \emph{\hbox{CC3D-Ops}}~dataset. More predictions are provided in the supplementary material. Despite the complexity of some models, it can be observed that most of the \textit{\hbox{op.type}}~predictions (left panel) were correct except for very few faces. On the other hand, the segmentation into \textit{\hbox{op.steps}}~(right panel) was more challenging for complex models (last two rows), as it requires the model to learn the relationship between the faces of the B-Rep according to the construction history. This aspect is more challenging to capture for complex models than the \textit{\hbox{op.types}}, which could hypothetically be learned from the geometry and topology of the B-Reps. This hypothesis is further discussed in the quantitative evaluation.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\textwidth]{pictures/step_mAcc.png}
\caption{CAD operation step mAcc}
\label{fig:group_acc}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\textwidth]{pictures/type_mAcc.png}
\caption{CAD operation type mAcc}
\label{fig:seg_acc}
\end{subfigure}
\caption{Mean accuracy (mAcc) of CAD operation type and step segmentation \textit{w.r.t} the number of steps per model on the \emph{\hbox{CC3D-Ops}}~dataset.}
\label{fig:mean_acc_cc3d}
\end{figure}
\noindent\textbf{Quantitative Evaluation:} In Table~\ref{table:results}, we report the quantitative results of our approach compared to baselines. \emph{\hbox{CADOps-Net}}~(\textit{Ours w/ JL$^+$}) is compared to the same model without the joint learning of \textit{\hbox{op.steps}}~and \textit{\hbox{op.types}}~(\textit{Ours w/o JL$^-$}). In the latter, the \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~segmentation modules are trained independently. In the following, we first analyze the results for the segmentation into \textit{\hbox{op.steps}}~(column 5 of Table~\ref{table:results}) and for the \textit{\hbox{op.type}}~segmentation (columns 3 and 4), then we discuss the consistency between the two types of predictions (columns 6 and 7).
\begingroup
\setlength{\tabcolsep}{3pt}
\begin{table}[t!]
\centering
\begin{tabular}{cccccccc}
\hline \\[-1em]
& \multirow{2}{*}{Model} & \multicolumn{2}{c}{\textit{\hbox{op.type}}} & \multicolumn{1}{c}{\textit{\hbox{op.step}}}&\multicolumn{2}{c}{\textit{\small Consistency}} \\[-1em] \\ \cline{3-4} \cline{6-7}\\[-1em]
& & mAcc & mIoU & mAcc & $R_{C}$ & $mS_{C}$ \\[-1em] \\ \hline \\[-1em]
\multirow{5}{*}{\rotatebox{90}{Fusion360}} & \textit{CADNet}~\cite{cadnet} & 88.9 & 67.9 & - & - & - \\
& \textit{UV-Net}~\cite{uvnet} & 92.3 & 72.4 & - & - & - \\
& \textit{BRepNet}~\cite{brepnet} & 94.3 & 81.4 & - & - & - \\
\cline{2-7}
& \textit{Ours w/o JL$^-$ } & 95.5 & 83.2 & 80.2 & 87.1 & 97.4 \\
& \textit{Ours w/ JL$^+$} & \textbf{95.9} & \textbf{84.2} & \textbf{82.5} & \textbf{93.3} & \textbf{98.7}
\\[-1em]\\ \hline \\[-1em]
\multirow{4}{*}{\rotatebox{90}{\emph{\hbox{CC3D-Ops}}}} & \textit{CADNet}~\cite{cadnet} & 57.5 & 26.9 & - & - & - \\
& \textit{BRepNet}~\cite{brepnet} & 71.4 & 35.9 & - & - & - \\
\cline{2-7}
& \textit{Ours w/o JL$^-$} & \textbf{76.0} & 43.0 & 48.4 & 40.7 & 82.7 \\
& \textit{Ours w/ JL$^+$} & 75.0 & \textbf{44.3} & \textbf{62.7} & \textbf{82.4} & \textbf{96.7} \\[-1em]
\\ \hline
\end{tabular}
\caption{Results of the segmentation into CAD operation types and steps on the Fusion360 and \emph{\hbox{CC3D-Ops}}~datasets. All results are expressed as percentages. \textit{Ours w/o JL$^-$} denotes our method without joint learning. \textit{Ours w/ JL$^+$} refers to the proposed \emph{\hbox{CADOps-Net}}~with joint learning.}
\label{table:results}
\end{table}
\endgroup
As previously mentioned, predicting \textit{\hbox{op.steps}}~is a much more challenging task than \textit{\hbox{op.types}}~especially for models with a large number of \textit{\hbox{op.steps}}.
While the joint learning leads to small improvements on the \textit{\hbox{op.step}}~mAcc metric on the Fusion360 dataset, significant improvements can be observed on the \emph{\hbox{CC3D-Ops}}~dataset results with an increase of {\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}$14\%$. This difference in results can be explained by the higher complexity of \emph{\hbox{CC3D-Ops}}~models compared to those of Fusion360. Figure~\ref{fig:group_acc} shows the mAcc of \textit{\hbox{op.step}}~segmentation related to the number of \textit{\hbox{op.steps}}~per model on the \emph{\hbox{CC3D-Ops}}~dataset. It can be observed that for models with fewer than $25$ \textit{\hbox{op.steps}}, representing over $96\%$ of the \emph{\hbox{CC3D-Ops}}~dataset, \emph{\hbox{CADOps-Net}}~scores consistently and significantly better than without joint learning. These observations demonstrate the importance of the joint learning for \textit{\hbox{op.step}}~segmentation. However, in both cases there is a major decrease in the \textit{\hbox{op.step}}~segmentation mAcc as the number of steps per model increases. This is expected since the task becomes increasingly challenging as the number of \textit{\hbox{op.steps}}~becomes larger. Note that we did not compare our results to state-of-the-art methods (BRepNet~\cite{brepnet}, UV-Net~\cite{uvnet}, and CADNet~\cite{cadnet}) on the task of \textit{\hbox{op.step}}~segmentation as their methods are not designed to predict arbitrary face labels.
In order to evaluate the \textit{\hbox{op.type}}~segmentation of \emph{\hbox{CADOps-Net}}, the results are compared to the state of the art. On the Fusion360 dataset, we recorded slight improvements over~\cite{brepnet},~\cite{uvnet}, and~\cite{cadnet} in terms of mAcc. More significant improvements \textit{w.r.t}~\cite{uvnet} and~\cite{cadnet} were obtained in terms of mIoU (more than $12\%$ and $16\%$, respectively). On the \emph{\hbox{CC3D-Ops}}~dataset, our results clearly outperformed those of~\cite{brepnet} and~\cite{cadnet} on the two metrics. Furthermore, we compare \emph{\hbox{CADOps-Net}}~to the scenario where the joint learning is omitted. One interesting observation is that there is no significant difference between the two scenarios. The same observation holds in Figure~\ref{fig:seg_acc}, where we show the mAcc of \textit{\hbox{op.type}}~segmentation related to the number of \textit{\hbox{op.steps}}~per model. In contrast to the \textit{\hbox{op.step}}~segmentation, one can notice that the number of \textit{\hbox{op.steps}}~has only a slight impact on the \textit{\hbox{op.type}}~mAcc. In other words, the \textit{\hbox{op.type}}~segmentation does not become more challenging when complex models with a large number of construction steps are involved. Intuitively, it can be hypothesized that the \textit{\hbox{op.type}}~segmentation is related more to the geometry and topology of the B-Rep than to its construction history.
The results on the consistency scores ($R_{C}$ and $mS_{C}$) highlight the relevance of the joint learning approach. Despite relatively similar \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~mAcc scores on the Fusion360 dataset for \textit{Ours w/ JL$^+$} and \textit{Ours w/o JL$^-$}, the joint learning approach produces more consistent results with an increase of {\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}$6\%$ in $R_C$ score. Similarly on the \emph{\hbox{CC3D-Ops}}~dataset, the predictions from \emph{\hbox{CADOps-Net}}~are significantly more consistent with an increase of {\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}$41\%$ in $R_C$ score and $14\%$ in $mS_C$ score. Therefore, the joint learning model is able to extract face features that contain consistent information for both the \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~segmentation labels. The consistency property is essential for the process of reverse engineering.
\subsection{Ablation Study}
\begin{table}[ht]
\centering
\begin{tabular}{ccccc}
\hline
& & \multicolumn{2}{c}{\textit{\hbox{op.type}}} & \multicolumn{1}{c}{\textit{\hbox{op.step}}} \\ \cline{3-4}
& Agg. type & mAcc & mIoU & mAcc \\ \hline
\multirow{5}{*}{\rotatebox{90}{\emph{\hbox{CC3D-Ops}}}} & \textit{No agg.} & 73.0 & 40.2 & 61.5 \\
& \textit{Soft labels} & 73.4 & 40.0 & 59.7 \\
& \textit{Sum} & 70.4 & 34.4 & 62.6 \\
& \textit{Max} & 74.3 & 42.0 & 62.2 \\
& \textit{Avg} & \textbf{75.0} & \textbf{44.3} & \textbf{62.7} \\ \hline
\end{tabular}
\caption{Ablation study on the aggregation function used in the joint learning of \emph{\hbox{CADOps-Net}}. All results are expressed as percentages.}
\label{table:ablation}
\end{table}
In order to provide a deeper insight into the joint learning approach, we conduct an ablation study on the aggregation function $\mathbf{\mathcal{A}}$ of the face embeddings. Experiments are conducted with the following five scenarios: (1) the output face embeddings, $\mathbf{f}^{\Delta}$, from the BRepNet backbone are directly used to predict both the \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~without any aggregation (\textit{No agg.}); (2) the BRepNet face embeddings are concatenated with the predicted soft labels of the \textit{\hbox{op.step}}~(\textit{Soft labels}), again without any aggregation; (3)--(5) the remaining scenarios vary the aggregation function used to obtain the \textit{\hbox{op.step}}~embeddings, $\mathbf{S}^{\mathcal{A}}$, namely the maximum (\textit{Max}), the average (\textit{Avg}), and the sum of the embeddings combined with a softmax normalization (\textit{Sum}). Table~\ref{table:ablation} shows the ablation results for both \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~segmentation tasks on the \emph{\hbox{CC3D-Ops}}~dataset. The results show that aggregating the face embeddings using an \textit{Avg} pooling leads to slightly better overall performance.
\subsection{CAD Sketch Recovery}
\begin{figure}[t]
\centering
\includegraphics[width=0.82\linewidth]{pictures/Sketch_Final.png}
\caption{Sketch recovery from predicted CAD operation types (\textit{\hbox{op.types}}) and steps (\textit{\hbox{op.steps}}). \textit{\hbox{op.step}}~1 and 2 are colored in yellow and blue, respectively. Figure \ref{fig:prediction_examples} defines the color codes used for different \textit{\hbox{op.types}}.
}
\label{fig:SketchFinal}
\end{figure}
Figure~\ref{fig:SketchFinal} illustrates preliminary results on how \emph{\hbox{CADOps-Net}}~predictions can be used to retrieve the CAD sketches.
A sketch $\mathbf{\mathcalboondox{Q}}$ of a B-Rep $\mathcal{B}$ can be defined as a set of simple geometrical entities (\eg straight lines, arcs). We consider a small subset of $20$ models made of extrusions from the Fusion360 dataset. In the following, we describe the process for recovering the sketch corresponding to \textit{\hbox{op.step}}~$2$ using the \emph{\hbox{CADOps-Net}}~predictions shown in Figure~\ref{fig:SketchFinal}\textcolor{red}{a}. We first identify the faces for which the \textit{\hbox{op.type}}~was predicted as `\textit{extrude side}'. Second, we cluster these faces according to their predicted \textit{\hbox{op.step}}. Third, we store the face-normals ($\mathbf{\widehat{n}}_{1}^{2},\hdots, \mathbf{\widehat{n}}_{m}^2$) and sample UV-grid points on the faces. This allows us to derive a common \textit{axis of extrusion} $\widehat{\mathbf{\mathcalboondox{a}}}$ and a \textit{projection center} $\widehat{\mathbf{\mathcalboondox{o}}}$. Finally, the predicted sketch $\mathbf{\widehat{\mathcalboondox{Q}}}^{2}$ is obtained by projecting the sampled points along $\widehat{\mathbf{\mathcalboondox{a}}}$ (more details are in the supplementary material). Figures~\ref{fig:SketchFinal}\textcolor{red}{a} and \ref{fig:SketchFinal}\textcolor{red}{b} show qualitative results of successful and failed sketch recoveries from correctly and incorrectly predicted \textit{\hbox{op.types}}, respectively. These preliminary results on sketch recovery illustrate the relevance of \textit{\hbox{op.step}}~prediction in the context of 3D reverse engineering.
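The final projection step can be sketched as follows. This is an illustrative NumPy routine: the estimated extrusion axis and projection center are assumed to be given, and the chosen in-plane basis fixes the 2D sketch only up to a rotation.

```python
import numpy as np

def project_to_sketch_plane(points, axis, origin):
    """Project sampled 3D points along the extrusion axis onto the plane
    through `origin` with normal `axis`, returning 2D sketch coordinates."""
    a = axis / np.linalg.norm(axis)
    # Remove the component of each point along the extrusion axis.
    flat = points - np.outer((points - origin) @ a, a)
    # Build an in-plane orthonormal basis (u, v) for the 2D coordinates.
    u = np.cross(a, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:       # axis parallel to x: use y instead
        u = np.cross(a, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(a, u)
    rel = flat - origin
    return np.stack([rel @ u, rel @ v], axis=1)  # (N, 2) sketch points
```

Points sampled on the `extrude side' faces of a cylinder, for instance, all collapse onto a circle of the sketch radius regardless of their height along the axis.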
\subsection{Limitations}
In CAD modeling, designers may opt for different design solutions. Consequently, the segmentation into \textit{\hbox{op.type}}~and \textit{\hbox{op.step}}~is not necessarily unique.
An example for which the \textit{\hbox{op.step}}~prediction is valid despite not matching the ground truth can be found in Figure~\ref{fig:group_wrong_pred}\textcolor{red}{a}. The letters were predicted as part of the same \textit{\hbox{op.step}}, which could be a valid design approach. However, these letters were extruded with separate \textit{\hbox{op.steps}}~in the ground truth. In Figure~\ref{fig:group_wrong_pred}\textcolor{red}{b}, an example with valid predictions of \textit{\hbox{op.types}}~not matching the ground truth is depicted. Here, the hole in the center of the shape was predicted as a `\textit{cut}' type operation, while being an `\textit{extrude}' in the ground truth. In general, CAD designers follow good practices so that the final model reflects the design intent~\cite{design_intent}. However, different designers might have their own set of good practices, making it difficult for a learning-based model to capture all the different design intents.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{pictures/limits.jpg}
\caption{Failure cases of \emph{\hbox{CADOps-Net}}~for \textit{\hbox{op.step}}~segmentation in (a) and \textit{\hbox{op.type}}~segmentation in (b).}
\label{fig:group_wrong_pred}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this work, \emph{\hbox{CADOps-Net}}, a neural network that jointly learns the CAD operation type and step segmentation of B-Rep faces, is presented. The joint learning strategy leads to significantly better results on the challenging task of CAD operation step segmentation, while achieving state-of-the-art results on the CAD operation type segmentation task. Moreover, we showed the potential of combining these two segmentations for recovering further information about the construction history, such as sketches. Finally, the \emph{\hbox{CC3D-Ops}}~dataset is introduced with its operation type and step annotations. We believe that this dataset will help advance research on CAD modeling thanks to the complexity of its CAD models. As future work, an investigation of the ordering of the construction steps while maintaining various types of CAD operations would allow for the recovery of a more complete construction history.
\vspace{0.3cm}
\noindent \textbf{Acknowledgement:} The present project is supported by the National Research Fund, Luxembourg under the BRIDGES2021/IS/16849599/FREE-3D and IF/17052459/CASCADES projects, and by Artec 3D.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Brans Dicke (BD) theory \cite{Will} is one of the closest cousins of General Relativity (GR). The salient feature of BD theory is that the curvature of geometry is nonminimally coupled with a scalar field, which makes Newton's constant $G$ a space-time dependent quantity. The significance of BD theory lies in the fact that it provides a simple prototype for more realistic, sophisticated and physically motivated models, including a wide class of scalar-tensor theories with interesting applications in the inflationary scenario \cite{johri,La,AM, Kolb, PJ, PJ2, AR} and in constructing potential dark energy models \cite{Amen}. Furthermore, the nonminimal coupling appears in the context of superstring theory \cite{Witten} as a low energy effective action for the dilaton-gravity sector in supergravity, as well as in Kaluza-Klein theory \cite{TE} and DGP theory \cite{Dvali}, where the extra scalar field of the theory emerges naturally from the compactification of an extra dimension \cite{cho}. It also appears in Galileon theories \cite{Nicolis}, proposed to explain cosmic acceleration while bypassing the Solar System constraints. To add to the list, BD theory can also be thought of as a limit of Horndeski theories \cite{h1,h2}. Further motivation for the work that follows comes from the basic expectation that any quantum formulation of gravity requires ingredients foreign to GR, such as higher order curvature corrections and nonminimal coupling to matter. All of this makes it meaningful to investigate scalar-tensor theories as quantum cosmological models, and because of its simplicity, BD theory is the most natural platform to explore such a quantum scenario and shed light on a wide class of scalar-tensor theories.
It is widely believed that as the coupling $\omega$ becomes stronger, BD theory reduces to GR \cite{KN, AAJ, JD, JP, JD2}. In fact, this belief forms the basis for setting lower limits on the $\omega$ parameter from Solar System experiments \cite{Will}. However, there are counterexamples: several exact solutions do not reduce to GR as $\omega\rightarrow\infty$ \cite{mastuda, romero, romero1,romero2,romero3, FM1, FM2, MA}, and there are counterarguments establishing nonconvergence for scale invariant matter content, i.e., with $T^{\mu}{}_{\mu}=0$ \cite{oldnb, Faroni}. Hence, if we can show that BD theory does reduce to GR in a quantized version, it would be of utmost importance. The first obstacle in this regard is that we do not have a complete picture of quantum gravity. Nonetheless, there has been a recent rejuvenation of the Wheeler deWitt quantization \cite{deWitt, Wheeler} of GR in a series of papers \cite{sp1,sp2,sp3,sp4,sp5,sachin}, where one builds an effective quantum mechanical version of cosmological models. Given this resurgence, it appears pertinent to explore the strong coupling limit of quantized BD theory using the Wheeler deWitt quantization procedure and to aim to answer the question posed above in this formalism. In fact, there has been recent work regarding quantized BD theory \cite{nb, sachin1}.
In this article, we show for the first time that quantized BD theory provides an elegant example of anomalous symmetry breaking leading to a rich phase structure, and thus the appeal of this work lies beyond quantum cosmology. Anomalous symmetry breaking is a widespread phenomenon in quantum systems, ranging from particle physics to critical phenomena in condensed matter physics; for example, relativistic quantum field theories admit the chiral anomaly and the Weyl anomaly. In fact, anomaly cancellation is an important tool for studying quantum field theory in general. It is known in condensed matter physics that a 3-body problem with a large scattering length admits Efimov states \cite{efimov} due to the anomalous breaking of the scale symmetry of the inverse square potential down to a discrete scaling group, and the resulting appearance of a limit cycle in the renormalization group (RG) flow. Generically, in a singular potential like the inverse square, renormalization is required to tame the singularity near the origin. We find a similar singular potential in the quantum cosmological description of BD theory, where the singularity appears owing to the big bang singularity. Thus, the purpose of this communication is twofold: first, to provide yet another physical scenario, adding to the list of examples ranging from superconductivity\cite{cl4,cl5}, discrete Hamiltonian models\cite{wilson1,wilson2}, and quantum field theory models\cite{cl3} to {\it S}-matrix models\cite{cl1,cl2}, where limit cycles and anomalous behavior with such rich physics can be realized. Second, it is expected to elucidate the quantum behavior of scalar-tensor theories in the quantum cosmological setup, specifically to show that BD theory with scale invariant matter does reduce to GR in the large $\omega$ limit. To be specific, we will study the quantized Friedmann-Robertson-Walker (FRW) metric in BD theory with radiation-like matter content, having conformal invariance. 
It deserves mention that the conformal properties of BD theory have been studied classically \cite{valero2} as well as in the loop quantized version \cite{MY}, but the existence of such a phase structure has remained unexplored. Furthermore, to the best of our knowledge, such novel physics has never before been reported or emphasized in the context of quantum cosmology.
In GR, the FRW model with a flat spatial section has a symmetry under scaling of the scale factor: under $a\mapsto\lambda a$, the Einstein equations of motion remain invariant. This symmetry is present in BD theory as well with a homogeneous scalar field. In this work, we show that the symmetry does not survive the quantization process in BD theory. For some range of the coupling, the symmetry is broken anomalously, solely due to quantum effects, and this leads to a binary phase structure of quantized BD theory. We will show that the strong coupling ($\omega\rightarrow\infty$) limit of BD theory is in a symmetry preserving phase, and so is quantized GR. We argue that, quantum mechanically, the presence of a phase wall must be an obstacle to the classical argument showing that BD does not reduce to GR for scale invariant matter. In fact, exploiting the symmetry, we explicitly show that BD theory does reduce to GR in the strong coupling limit for a FRW universe with a flat spatial section and radiation (scale invariant) matter content, which is in sharp contrast with the classical behavior. This contrasting behavior, along with the existence of a rich quantum phenomenon, should initiate more research exploring quantum BD theory along with other scalar-tensor theories, and its strong coupling limit in a generic scenario.
\section{Brans Dicke Theory}
The BD theory in the Jordan frame with a perfect fluid ($P=\alpha\rho$) is described by the following Lagrangian:
\begin{equation}\label{lag}
\mathcal{L}= \phi R - \frac{\omega}{\phi}\partial_{\mu}\phi\partial^{\mu}\phi + \alpha\rho,
\end{equation}
where the scalar field $\phi$ is manifestly nonminimally coupled with the Ricci scalar.
The line element of the FRW universe with a flat spatial slice is given by
\begin{equation}\label{metric}
ds^{2}=-n^{2}dt^{2} + a^{2}(t)\left[dx^{2}+dy^{2}+dz^{2}\right]\ .
\end{equation}
where $n(t)$ is the lapse function and $a(t)$ is the scale factor. \\
We parametrize the scale factor and $\phi$ in the following way: $a(t) = e^{\kappa(t)};\ \phi(t)= e^{\gamma(t)}\ .$ Since we have assumed an isotropic homogeneous universe, it is only natural to assume that $\phi$ is a function of time only. Now, we define a new variable $\beta(t)\equiv \kappa(t)+\frac{\gamma(t)}{2}$ and trade it in for $\kappa$ (as we will see, this redefinition allows us to write the Lagrangian in a form where $\beta$ and $\gamma$ are decoupled; otherwise, we would have cross terms like $\dot{\kappa}\dot{\gamma}$).\\
Using this parametrization, the Lagrangian for the gravity sector can be written as
\begin{equation}
L_{g}=\frac{e^{3\beta-\frac{\gamma}{2}}}{n}\left[-6\dot{\beta}^{2}+\frac{2\omega+3}{2}\dot{\gamma}^{2}\right]\ .
\end{equation}
The corresponding Hamiltonian is given by
\begin{equation}\label{3.91}
H_{g}= ne^{\frac{\gamma}{2}-3\beta}\left(-\frac{p_{\beta}^{2}}{24}+\frac{p_{\gamma}^{2}}{2(2\omega+3)}\right).
\end{equation}
where $p_{\beta}$ and $p_{\gamma}$ are momenta conjugate to $\beta$ and $\gamma$ respectively.\\
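For completeness, the conjugate momenta follow from the Legendre transform of $L_{g}$,
\begin{equation*}
p_{\beta}=\frac{\partial L_{g}}{\partial\dot{\beta}}=-\frac{12}{n}e^{3\beta-\frac{\gamma}{2}}\dot{\beta},\qquad
p_{\gamma}=\frac{\partial L_{g}}{\partial\dot{\gamma}}=\frac{2\omega+3}{n}e^{3\beta-\frac{\gamma}{2}}\dot{\gamma},
\end{equation*}
and one readily checks that $H_{g}=p_{\beta}\dot{\beta}+p_{\gamma}\dot{\gamma}-L_{g}$ reproduces Eq.~\eqref{3.91}.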
For the matter sector, we take a perfect fluid with $\alpha =\frac{1}{3}$, i.e., radiation. Using standard thermodynamical considerations, the Hamiltonian for the matter sector is derived as
\begin{equation}
\label{3.93}
H_{f}= ne^{3(\frac{\gamma}{2}-\beta)\alpha}p_{T}= ne^{(\frac{\gamma}{2}-\beta)}p_{T},
\end{equation}
where $p_{T}$ is the momentum associated with the fluid. A nice and crisp exposition of using the fluid sector to define a time variable $T$ and a conjugate momentum $p_{T}$ is given in \cite{sp1}. The fact that the Hamiltonian of the fluid sector turns out to be linear in $p_{T}$ facilitates writing down a Schrodinger-like equation.\\
Equations~\eqref{3.91} and \eqref{3.93} can be combined to yield the total Hamiltonian,
\begin{equation}
H = ne^{\frac{\gamma}{2}-\beta}\left(-\frac{e^{-2\beta}p_{\beta}^{2}}{24}+\frac{e^{-2\beta}p_{\gamma}^{2}}{2(2\omega+3)}+p_{T}\right).
\end{equation}
The operators are now ordered following the prescription as laid out in \cite{sp1,sp3}, and varying the Hamiltonian with respect to $n$ results in a Hamiltonian constraint, given by
\begin{equation}
\left(-\frac{1}{24}e^{-\beta}p_{\beta}e^{-\beta}p_{\beta}+\frac{e^{-2\beta}p_{\gamma}^{2}}{2(2\omega+3)}+p_{T}\right)=0.
\end{equation}
As we quantize the system, the operators are realized in ``position'' space in the following way: $p_{\beta}\mapsto-\imath\partial_{\beta}$, $p_{\gamma}\mapsto-\imath\partial_{\gamma}$ and $p_{T}\mapsto-\imath\partial_{T}$, leading to the Wheeler deWitt equation:
\begin{equation}\label{hc}
\left(\frac{1}{24}e^{-\beta}\partial_{\beta}e^{-\beta}\partial_{\beta}-\frac{e^{-2\beta}\partial_{\gamma}^{2}}{2(2\omega+3)}\right)\psi=\imath\partial_{T}\psi.
\end{equation}
A change of variable $\chi_{B}=e^{\beta}$ recasts this Hamiltonian constraint \eqref{hc} into
\begin{equation}
\frac{1}{24}\frac{\partial^{2}\psi}{\partial\chi_{B}^{2}} -\frac{1}{2(2\omega+3)}\frac{1}{\chi_{B}^{2}}\frac{\partial^{2}\psi}{\partial\gamma^{2}}=\imath\frac{\partial\psi}{\partial T}.
\end{equation}
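The simplification occurs because $\chi_{B}=e^{\beta}$ implies $\partial_{\beta}=\chi_{B}\,\partial_{\chi_{B}}$, so that
\begin{equation*}
e^{-\beta}\partial_{\beta}=\partial_{\chi_{B}},\qquad
e^{-\beta}\partial_{\beta}\,e^{-\beta}\partial_{\beta}=\partial_{\chi_{B}}^{2},\qquad
e^{-2\beta}=\frac{1}{\chi_{B}^{2}},
\end{equation*}
i.e., the operator ordering chosen above is precisely the one that turns the kinetic term into a flat Laplacian in $\chi_{B}$.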
We use the separation of variable technique $\psi(\gamma,\chi_{B},T)=\xi(\gamma)\varphi(\chi_{B})e^{\imath ET}$ to obtain:
\begin{equation}\label{gamma}
\frac{\partial^{2}\xi}{\partial\gamma^{2}} = -k^{2} \xi;
\end{equation}
with the solution given by $\xi = e^{\imath k\gamma}$, where $k$ is the separation constant; subsequently, $\varphi$ satisfies
\begin{equation} \label{voila}
\frac{1}{24}\frac{\partial^{2}\varphi}{\partial\chi_{B}^{2}}+ \frac{k^{2}}{2(2\omega+3)}\frac{1}{\chi_{B}^{2}}\varphi =- E\varphi.
\end{equation}
We define parameters
\begin{equation}\label{coupling}
g= \frac{12k^{2}}{2\omega+3},\ E^{\prime}=24E,
\end{equation}
to cast Eq.~\eqref{voila} in the following form:
\begin{equation}\label{ge}
-\frac{\partial^{2}\varphi}{\partial\chi_{B}^{2}}-\frac{g}{\chi_{B}^{2}}\varphi = E^{\prime}\varphi.
\end{equation}
So, we have transformed this problem into the well-known inverse square potential problem, with an attractive potential for $g>0$, i.e., $\omega>-\frac{3}{2}$, and a repulsive one for $g<0$, i.e., $\omega<-\frac{3}{2}$. Evidently, Eq.~\eqref{ge} admits a scaling symmetry under $\chi_{B}\mapsto \lambda\chi_{B}$, which is reminiscent of the classical scale symmetry. To be specific, if $\varphi(\chi_{B})$ is an eigenstate with energy $E^{\prime}$, then $\varphi(\lambda\chi_{B})$ is an eigenstate with energy $\lambda^{2}E^{\prime}$. This also implies a continuous spectrum, i.e., if $E^{\prime}$ is an eigenenergy, then there exists a state with energy $\lambda^{2} E^{\prime}$ for any $\lambda\in\mathbb{R}$. For $g<\frac{1}{4}$, one can show that $E^{\prime}>0$, and the spectrum is bounded below. In the strongly coupled regime, $g>\frac{1}{4}$, there exist states with negative $E^{\prime}$, which indicates that if the scaling symmetry is to be preserved, there cannot be any ground state. This follows from the \textcolor{WildStrawberry}{\textit{S-theorem}} elucidated nicely in the appendix of \cite{srben}. Hence, in the strongly coupled regime, we need to perform a self-adjoint extension of the Hamiltonian \cite{gupta} or, equivalently, regularize and renormalize \cite{swingle} the coupling so as to ensure a ground state. This is precisely what leads to the anomalous (quantum) breaking of scale symmetry for $g>\frac{1}{4}$ \cite{camblong}. In summary, owing to quantum effects, we have two distinct phases: in the weakly attractive and repulsive regime ($g<\frac{1}{4}$) the symmetry is preserved, while in the strongly attractive regime ($g>\frac{1}{4}$) the symmetry breaks down. It has been shown \cite{swingle,beane} that the symmetry is not lost completely but rather broken down to a discrete scaling symmetry, and we have limit cycle behavior in theory space. The critical point $g=\frac{1}{4}$ translates to a parabola in $(k,\omega)$ space (see Fig.~1), given by
\begin{equation}
\omega = \frac{48k^{2}-3}{2}.
\end{equation}
\begin{figure}[!ht]
\includegraphics[scale=0.55]{cosmology}
\caption{Phase structure in $(k,\omega)$ plane; The red (dark shaded) region is where symmetry is broken due to quantum effects while in the yellow (lightly shaded) region, the symmetry is preserved. The thick blue line represents the phase wall. The dotted red line is supposed to be at $\omega=\infty$. The dotted green line below which we have the yellow (lightly shaded) region is at $\omega=-1.5$.}
\end{figure}
where $k$ is the eigenvalue of the $p_{\gamma}$ operator, i.e., $k$ can be thought of as the momentum associated with $\gamma$, and $\omega$ is the coupling of the BD theory. This $k$ dependence of the critical point can be interpreted in a way that is familiar in the field theory community: the scalar field (and hence the system as a whole) is composed of different momentum modes $k$, which do not interact with each other and evolve independently, just like in a free field theory. Each of these modes exhibits a phase transition at a critical point that is a function of its momentum.\\
For a given coupling $\omega$ such that $2\omega +3>0$, if we are to preserve the symmetry in the quantized version, then we must restrict the possible momentum modes to the range $|k|<\frac{1}{4}\sqrt{\frac{2\omega+3}{3}}$. Only in the limit $\omega\rightarrow\infty$ are all the momentum modes allowed. It is worth noting that for fixed $\omega$, $g$ is invariant under $k\mapsto -k$. Hence, in the regime where $2\omega+3>0$, i.e., $g$ is positive definite, the universe can be in either phase for $k > 0$ as well as for $k < 0$. But for $2\omega+3<0$, $g$ is negative definite, i.e., $g< 0 <\frac{1}{4}$, and therefore the symmetry is always preserved. The yellow (lightly shaded) region below the $\omega=-\frac{3}{2}$ horizontal line represents this regime in the graph. It also deserves mention that for a given nonzero mode $k$ with $|k|< \frac{1}{4}$, the broken phase is attained only when $\omega$ becomes negative, to be precise, when $-\frac{3}{2}<\omega <\frac{48k^{2}-3}{2}<0$. Furthermore, the $k=0$ mode is very special in the sense that it never undergoes a phase transition for any value of the coupling $\omega$. \\
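The phase classification described above can be summarized in a short numerical sketch (a minimal Python illustration; the function names are ours and not part of any existing code):

```python
def coupling(k, omega):
    """Effective inverse-square coupling g = 12 k^2 / (2*omega + 3)."""
    return 12.0 * k**2 / (2.0 * omega + 3.0)

def phase(k, omega):
    """Scale symmetry breaks anomalously only in the strongly attractive regime g > 1/4."""
    return "broken" if coupling(k, omega) > 0.25 else "symmetric"

def critical_omega(k):
    """Phase wall omega = (48 k^2 - 3) / 2, obtained by setting g = 1/4."""
    return (48.0 * k**2 - 3.0) / 2.0
```

For instance, any mode with $2\omega+3<0$ has $g<0$ and is classified as symmetric, while the $k=0$ mode never crosses the wall, in agreement with the discussion above.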
\section{Breakdown of Faraoni classical symmetry}
The BD theory with scale invariant matter content has a classical symmetry, as pointed out in \cite{Faroni,Faroni2}. Two Brans Dicke space-times $\left(M,g_{\mu\nu}^{(\omega)},\phi^{(\omega)}\right)$ and $\left(M,\tilde{g}_{\mu\nu}^{(\tilde{\omega})},\tilde{\phi}^{(\tilde{\omega})}\right)$ are equivalent if $\tilde{\phi}=\phi^{1-2\theta} \Leftrightarrow \tilde{\gamma}=\gamma\left(1-2\theta\right)$, $\tilde{g}_{\mu\nu}= \phi^{2\theta}g_{\mu\nu} \Leftrightarrow \tilde{\beta}=\beta$ and
$\tilde{\omega}= \frac{\omega +6\theta(1-\theta)}{(2\theta-1)^{2}}$.\\
This symmetry is Abelian in nature and described by the single parameter $\theta$. By choosing $\theta$ suitably, we can classically relate two values of $\omega$ across the phase transition. In fact, the $\omega\rightarrow\infty$ limit can be thought of as moving within this equivalence class. Now, GR does not have this classical symmetry, implying that GR cannot belong to this equivalence class. Thus GR cannot be classically realized as the strong coupling limit of BD theory with scale invariant matter content. Nonetheless, in the quantized version, the $\omega\rightarrow\infty$ limit of BD theory always lies in the symmetry preserving phase. Had this symmetry survived quantum mechanically, we could choose $\theta$ aptly [$\theta = \frac{1}{2} \left(1\pm \sqrt{\frac{\omega_{ns}+\frac{3}{2}}{\omega_{s}+\frac{3}{2}}}\right)$] to approach the limit and conclude that a theory in the broken phase with $\omega_{ns}$ is equivalent to a theory in the symmetry preserving phase with $\omega_{s}>\omega_{ns}\geq-\frac{3}{2}$. But quantum mechanically, the nature of the spectrum changes dramatically across the phase transition. Thus this classical sense of equivalence must break down quantum mechanically, and so must the argument proving that GR is not the strong coupling limit of BD with $T^{\mu}{}_{\mu}=0$. \\
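For completeness, the quoted value of $\theta$ follows from a one-line computation: under the Faraoni map,
\begin{equation*}
\tilde{\omega}+\frac{3}{2}=\frac{\omega +6\theta(1-\theta)+\frac{3}{2}\left(4\theta^{2}-4\theta+1\right)}{(2\theta-1)^{2}}=\frac{\omega+\frac{3}{2}}{(2\theta-1)^{2}},
\end{equation*}
so demanding that a theory with $\omega=\omega_{ns}$ be mapped to $\tilde{\omega}=\omega_{s}$ gives $(2\theta-1)^{2}=\left(\omega_{ns}+\frac{3}{2}\right)/\left(\omega_{s}+\frac{3}{2}\right)$, whose two roots are precisely the values of $\theta$ quoted above.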
One can modify the argument by Faraoni and argue that within the symmetric ($a\mapsto\lambda a$) phase there is no phase wall; hence, the classical Faraoni equivalence might survive in this phase. The $\omega\rightarrow\infty$ limit lies in this symmetry preserving phase, and hence in the Faraoni equivalence class. This modified (restricted) sense of equivalence faces no obstruction from the phase transition wall. Indeed, as we show below, the strong coupling limit of BD does reduce to GR for a FRW metric with a flat spatial slice and radiation-like matter content.
\section{Strong coupling limit and GR}
In this section, we will explicitly probe the strong coupling limit of BD and compare it to GR in the quantized version. The FRW line element is again given by Eq.~\eqref{metric} and we parametrize $a=e^{\sigma(t)}$.\\
The fluid sector can be dealt with in a similar manner as in BD, following the operator ordering prescription to arrive at the Hamiltonian of quantized GR
\begin{equation}
\hat{H}=ne^{3\alpha\sigma}\left(\frac{1}{24}e^{-\frac{3(1-\alpha)}{2}\sigma}\partial_{\sigma}e^{-3\frac{(1-\alpha)}{2}\sigma}\partial_{\sigma}+p_{T}\right),
\end{equation}
and a change of variable for $\alpha=\frac{1}{3}\neq 1$, $\chi_{G}=\exp\left[\frac{3(1-\alpha)}{2}\sigma\right]=\exp[\sigma]$
recasts the Wheeler de Witt equation $\hat{H}\Psi =0$ into $\frac{1}{24}\frac{\partial^{2}\Psi}{\partial\chi_{G}^{2}}=\imath\partial_{T}\Psi$. Plugging in the ansatz $\Psi=\psi(\chi_{G})e^{\imath E T}$, we obtain
\begin{equation}\label{ge2}
-\frac{1}{24}\frac{\partial^{2}\psi}{\partial\chi_{G}^{2}}= E\psi .
\end{equation}
This precisely mimics the $g\rightarrow 0$ limit of BD theory as in this limit the governing equation \eqref{ge} becomes
\begin{equation}\label{ge1}
-\frac{1}{24}\frac{\partial^{2}\varphi}{\partial\chi_{B}^{2}}= \frac{1}{24}E^{\prime}\varphi=E\varphi.
\end{equation}
Thus the governing equations~\eqref{ge1} and~\eqref{ge2}, controlling the behavior of $\chi_{B}$ and $\chi_{G}$, are the same. In fact, both admit a symmetry under scaling of $\chi_B$ and $\chi_{G}$; however, the scale factor behaves differently in the two scenarios. In GR, the scale factor is given by $a=\chi_{G}$, while in BD theory it is given by $a= e^{-\frac{\gamma}{2}}\chi_{B}$.\\
Now, for $g\neq0$, $\varphi(\chi_{B})$ depends on $g$ (the solution being given by the modified Bessel function of order $\sqrt{-g+\frac{1}{4}}$), and hence on the momentum mode $k$ \eqref{coupling} of the scalar field $\gamma$ \eqref{gamma}. As $\omega\rightarrow\infty$, $g$ goes to $0$ and this dependence disappears. Even if we build a time dependent state by superposing the energy eigenfunctions $\varphi$, the behavior of $\gamma$ is unaffected. On the other hand, even if we superpose various momentum modes of $\gamma$, that does not affect the evolution of $\varphi$. Hence, in the $\omega\rightarrow\infty$ limit, the wave function $\xi(\gamma)$ controlling the behavior of $\gamma$ is explicitly time independent, which in turn implies that, at the level of expectation values, the GR FRW universe thus obtained has a scale factor that is a time independent multiple of the scale factor obtained from the strong coupling limit of BD. Thus, for some constant $c$, we can write $\langle a_{GR} \rangle=c \langle a_{BD}\rangle$.\\
We know that the strong coupling limit of both BD and GR preserves the symmetry even after quantization; hence $\langle a_{GR}\rangle $ and $\langle a_{BD}\rangle $ are related by a symmetry transformation. Thus, we have shown that the quantum FRW universe obtained from BD does reduce to the quantum FRW universe obtained from GR. For example, by superposing solutions of \eqref{gamma}, one can take $\xi(\gamma)=\frac{1}{\sqrt[4]{2\pi^{3}}} \int dk\ e^{-k^{2}+ik\gamma}=\frac{1}{\sqrt[4]{2\pi}}e^{-\frac{\gamma^{2}}{4}}$, to obtain $c=\langle e^{-\frac{\gamma}{2}}\rangle =e^{\frac{1}{8}}$. One might wonder about the fluctuations of $\gamma$; note, however, that in the strong coupling limit even the fluctuations are time independent. Hence, even at the operator level, we have $a_{GR}=\mathcal{C}\, a_{BD}$ for a constant operator $\mathcal{C}$. For example, $\sqrt{\langle \mathcal{C}^{2}\rangle -\langle \mathcal{C}\rangle^{2} } = e^{\frac{1}{8}}\sqrt{e^{\frac{1}{4}}-1}$ for the above mentioned $\xi$.\\
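The numbers quoted above are easy to verify numerically: under $|\xi(\gamma)|^{2}=\frac{1}{\sqrt{2\pi}}e^{-\gamma^{2}/2}$, both expectation values reduce to Gaussian integrals (a standalone Python sketch; the quadrature helper is ours):

```python
import math

def gaussian_expectation(f, n=100001, lim=10.0):
    """Trapezoid-rule estimate of E[f(gamma)] for a standard normal gamma."""
    h = 2.0 * lim / (n - 1)
    norm = 1.0 / math.sqrt(2.0 * math.pi)
    total = 0.0
    for i in range(n):
        g = -lim + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * f(g) * norm * math.exp(-0.5 * g * g)
    return total * h

c = gaussian_expectation(lambda g: math.exp(-0.5 * g))   # <e^{-gamma/2}> = e^{1/8}
c2 = gaussian_expectation(lambda g: math.exp(-g))        # <e^{-gamma}>   = e^{1/2}
spread = math.sqrt(c2 - c * c)                           # = e^{1/8} sqrt(e^{1/4} - 1)
```

The computed values of `c` and `spread` match $e^{1/8}$ and $e^{1/8}\sqrt{e^{1/4}-1}$ to quadrature accuracy.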
\section{D\'{e}nouement}
We have shown the existence of a binary phase structure of the FRW model with a flat spatial section in quantized BD theory, identified the phase transition wall, and explained how quantum effects break the classical symmetry $a\mapsto\lambda a$. The obstruction provided by the phase transition wall implies that the argument showing that BD theory with scale invariant matter content does not reduce to GR does not go through in the quantized version. Hence, we explored the strong coupling limit of the quantized BD theory and showed explicitly that, in sharp contrast with the classical behavior, it does reduce quantum mechanically to GR for a scale invariant matter content, i.e., radiation. This result is of utmost importance considering the fact that Solar System experiments and various important aspects of BD theory rely on the assumption that in the large $\omega$ limit, BD reduces to GR.\\
Although we have been working with the FRW model, it is a straightforward but nonetheless exciting exercise to show that the anisotropic homogeneous Bianchi-I model exhibits such a scaling symmetry at the classical level, which breaks down at the quantum level in a region of coupling space. Unlike FRW, Bianchi-I exhibits such a binary phase structure in both GR and BD theories. We wish to report on this in the future. \\
The invariance under $a\mapsto\lambda a$ plays a role in showing the convergence of strongly coupled BD to GR in the quantized version. Hence, it seems that in a generic scenario, the strong coupling limit of the quantized BD theory yields a space-time whose spatial slice (upon ADM decomposition) is conformal to the spatial slice of the space-time obtained from quantized GR. At present, this is merely a conjecture, requiring a rigorous proof to be established. Nonetheless, it seems quite natural, as in the Einstein frame description of BD theory the scalar field always gets decoupled. There may well be a way to establish this decoupling effect in the Jordan frame or, to be more ambitious, to prove an equivalence between the Jordan and Einstein frame descriptions of BD theory in a generic scenario.\\
Last but not least, we list open questions that we believe will be interesting to explore in future:
\begin{enumerate}
\item to investigate whether the symmetry as laid out by Faraoni breaks down quantum mechanically in a generic scenario, or only in FRW with a flat spatial section. One obvious choice would be to explore FRW with a curved spatial slice.
\item to explore the strong coupling limit of the BD theory and issue of convergence to GR in a generic scenario in the quantized version. One can investigate a generic scalar-tensor theory in a similar setup.
\item to explore whether any other model in quantized BD exhibits such rich quantum physics like anomalous symmetry breaking.
\item to show (in)equivalence of Einstein and Jordan frames with matter content.
\item to investigate the cosmological implication of anomalous symmetry breaking in the FRW model.
\item for the loop quantum gravity community to test whether the result obtained is robust enough to be independent of the quantization scheme and to be found in the loop quantum cosmological setup as well even though the work above has been done in a mini-superspace quantization scheme.
\end{enumerate}
\textit{Note Added}: A week after this work was posted on the arXiv, a paper \cite{fabris} regarding self-adjoint extension in Brans-Dicke theory appeared, in which the authors arrive at a similar singular potential and find a constraint on the operator ordering that ensures self-adjointness. It deserves mention that, in the context of a singular potential, self-adjoint extension and renormalization are intricately related. Hence, the results of \cite{fabris} can potentially be translated into the language of renormalization and anomalous breaking of scale symmetry. They obtained an inequality, involving the momentum of the scalar field and a parameter that depends on the operator ordering and the coupling $\omega$, which ensures that the Hamiltonian is essentially self-adjoint. The regime of coupling where the Hamiltonian is essentially self-adjoint is precisely the regime where the symmetry is preserved, whereas in the complementary regime the symmetry breaks anomalously.\\
\begin{acknowledgments}
The author would like to thank Narayan Banerjee for insightful comments on the manuscript. The author also thanks anonymous referee for pointing out subtlety in the graphical representation of phase structure. This work was supported in part by the U.S. Department of Energy under Contract No. DE- SC0009919.
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
Fluid-structure interaction (FSI) problems present the challenge of coupling a deformable structural problem to a fluid problem posed on a domain moving in accordance with the deforming structure. In the last four decades, both interface-tracking and interface-capturing methods have been developed to account for the deforming fluid domain. In interface-tracking methods, the coupling interface is resolved by the mesh, and the arbitrary Lagrangian-Eulerian (ALE) formulation is adopted to describe mechanics problems posed on a moving domain \cite{Hirt1974,Hughes1981,Donea1982}; in interface-capturing methods, including the immersed boundary \cite{Peskin1972,Mittal2005} and fictitious domain methods \cite{Baaijens2001}, the interface is described implicitly on a background mesh. Whereas applications in cardiac mechanics involving valve leaflet motion largely employ the interface capturing method \cite{Borazjani2013,Griffith2009,Kamensky2015,Hart2003,Loon2006}, ventricular and vascular wall deformation are typically modeled with the ALE method \cite{Wu2014,Hsu2014,Liu2018,Liu2020b}, allowing for hemodynamic attributes near the wall to be accurately resolved for clinical implications.
In addition to this classification of FSI formulations, FSI coupling strategies can also be categorized into monolithic and partitioned approaches. In monolithic approaches, the coupling conditions, namely the continuity of velocity and stress at the fluid-solid interface, are exactly satisfied \cite{Fernandez2011}. Despite their superior robustness, the resulting system is highly nonlinear \cite{Nobile2013,Wu2014}, requires novel algorithms for the coupled system, and necessitates additional implementation efforts. On the other hand, partitioned methods are generally favored for their modularity, as existing fluid and structure codes can be independently used and loosely coupled via transmission conditions at the fluid-solid interface. Partitioned methods, however, were initially developed for aeroelastic problems \cite{Piperno2000}, in which the structural density is much larger than the fluid density. Numerical instabilities arise in problems involving fluid and structural densities of comparable magnitudes. This so-called added mass effect \cite{Badia2008,Causin2005,Guidoboni2009} does not vanish with time step refinement and is particularly pronounced in hydroelastic problems such as cardiovascular FSI problems, where the fluid and structural densities are almost identical. Many approaches, such as generalized Robin-to-Robin transmission conditions \cite{Badia2008a}, have been proposed to improve the stability of partitioned algorithms under the added mass effect. Yet, recent results also suggest that this improved stability may actually be at the expense of critical dynamic characteristics of the structural sub-problem \cite{Kadapa2021}, signifying an alarming issue regarding partitioned approaches for hydroelastic problems.
In this work on vascular FSI, we adopt our recently developed unified continuum and variational multiscale (VMS) formulation \cite{Liu2018}, a monolithically coupled ALE method. Derived using the Gibbs free energy rather than the Helmholtz free energy as the thermodynamic potential, the formulation bridges the conventionally diverging approaches for computational fluid and solid mechanics. Its ability to naturally recover important continuum models, including viscous fluids and hyperelastic solids, through appropriate constitutive modeling drastically simplifies monolithic FSI coupling. Furthermore, the formulation is well-behaved in both compressible and incompressible regimes, enabling simulation of structural dynamics with a Poisson's ratio up to $0.5$. Given the nontrivial computational expense associated with an ALE formulation, we apply three common modeling assumptions concerning the strain magnitude, geometry, and constitutive model of the vascular wall--the infinitesimal strain, thin-walled, and linear elastic membrane assumptions, respectively--to arrive at our so-called \textit{reduced unified continuum formulation}. The resulting semi-discrete formulation presents a monolithically coupled FSI system posed in an Eulerian frame of reference, in which the structural velocity degrees of freedom are reduced to the fluid velocity degrees of freedom at the fluid-solid interface.
Despite its ostensible similarity to the semi-discrete formulation of the coupled momentum method (CMM), first introduced by Figueroa et al. \cite{Figueroa2006} and recently extended to a nonlinear rotation-free shell formulation \cite{Nama2020}, the FSI coupling in CMM relies on an assumption of a fictitious body force in the elastodynamics sub-problem, defined in relation to the fluid traction on the wall. While this coupling approach was inspired by Womersley's derivation of an analytical solution for axisymmetric flow in an elastic pipe \cite{Womersley1955,Zamir2000}, we believe this assumption of a fictitious body force is unnecessary. Since its introduction, CMM has been implemented in the open-source blood flow simulation software packages SimVascular \cite{Updegrove2017,Lan2018} and CRIMSON \cite{Arthurs2020} and extensively used in clinical applications ranging from interventions for coronary artery disease \cite{Williams2010, Gundert2011, Taylor2013} and aortic coarctation \cite{Coogan2010} to single-ventricle physiology \cite{Yang2010}, and Alagille syndrome \cite{Yang2016,Yang2017}. It has also been validated against experimental measurements from compliant in vitro phantom models \cite{Kung2011, Kung2011a} and Womersley's analytical solution for axisymmetric flow in a thin, linear elastic pipe subject to an oscillating pressure gradient \cite{Figueroa2006a,Filonova2019}. While the studies found good agreement for pressure, flow, pulse wave propagation, and wall displacement, Filonova et al. \cite{Filonova2019} documented large errors in radial velocity.
In this work, stabilized spatial discretization is performed with the residual-based variational multiscale formulation \cite{Bazilevs2007}, which retains numerical consistency across all scales and exhibits superior performance as a large eddy simulation turbulence model when compared to approaches employing traditional stabilized formulations. We further note that integration-by-parts is not adopted for the divergence operator in the continuity equation for two reasons. First, from an energy perspective, the additional boundary integral term produced from integration-by-parts could pollute the energy dissipation structure in the discrete scheme. In addition, integration-by-parts yields a contradiction in the regularity of the pressure function space in the Galerkin formulation, though this contradiction seems not to produce apparent numerical issues when the stabilized formulation is invoked.
The generalized-$\alpha$ method was initially proposed for structural dynamics as an unconditionally stable and second-order accurate implicit scheme for temporal discretization with user-specified levels of high-frequency dissipation \cite{Chung1993}. When Jansen et al. first applied the generalized-$\alpha$ method to the compressible Navier-Stokes equations \cite{Jansen2000}, the pressure primitive variables were uniformly evaluated at the intermediate time step $t_{n+\alpha_f}$. The predominant approach in the computational fluid dynamics (CFD) and FSI communities today \cite{Figueroa2006, Bazilevs2007, Bazilevs2008, Bazilevs2012, Joshi2018, Kang2012}, however, is to evaluate velocity at the intermediate step but pressure at time step $t_{n+1}$. While the community continues to reference the second-order temporal accuracy of the generalized-$\alpha$ method, we recently demonstrated that this particular dichotomous approach yields only first-order accuracy in pressure. Concurrent evaluation of velocity and pressure at the intermediate step, as in our approach, recovers second-order accuracy for pressure \cite{Liu2020a}.
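The accuracy properties discussed above can be illustrated on a scalar model problem. The following sketch, a minimal illustration of our own construction rather than the FSI solver used in this work, applies the generalized-$\alpha$ method for first-order systems to $\dot y = \lambda y$ using the $\rho_\infty$-parameterization of \cite{Jansen2000}, $\alpha_m = \tfrac12 (3-\rho_\infty)/(1+\rho_\infty)$, $\alpha_f = 1/(1+\rho_\infty)$, $\gamma = \tfrac12 + \alpha_m - \alpha_f$; all variable names are our own.

```python
import numpy as np

def gen_alpha_scalar(lam, y0, T, n, rho_inf=0.5):
    """Generalized-alpha for the first-order ODE dy/dt = lam * y.

    Uses the first-order-system parameterization: alpha_m, alpha_f, and
    gamma chosen for second-order accuracy with user-controlled
    high-frequency dissipation rho_inf.
    """
    am = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)
    af = 1.0 / (1.0 + rho_inf)
    gamma = 0.5 + am - af
    dt = T / n
    y, yd = y0, lam * y0  # consistent initial rate
    for _ in range(n):
        # Balance at intermediate levels, linear in the new rate yd1:
        # am*yd1 + (1-am)*yd = lam*(y + af*dt*((1-gamma)*yd + gamma*yd1))
        num = lam * (y + af * dt * (1.0 - gamma) * yd) - (1.0 - am) * yd
        den = am - lam * af * gamma * dt
        yd1 = num / den
        y = y + dt * ((1.0 - gamma) * yd + gamma * yd1)
        yd = yd1
    return y

# Convergence check against the exact solution y0 * exp(lam * T)
lam, y0, T = -1.0, 1.0, 1.0
e1 = abs(gen_alpha_scalar(lam, y0, T, 20) - y0 * np.exp(lam * T))
e2 = abs(gen_alpha_scalar(lam, y0, T, 40) - y0 * np.exp(lam * T))
ratio = e1 / e2  # should approach 4 for a second-order method
```

Halving the time step reduces the error by roughly a factor of four, consistent with second-order temporal accuracy when velocity-like and pressure-like quantities are evaluated consistently at the intermediate step.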
In contrast to the use of the Newmark-$\beta$ method \cite{Newmark1959} in CMM \cite{Figueroa2006} for temporal integration of membrane displacements, we adopt the fully implicit generalized-$\alpha$ method for uniform temporal discretization of both the fluid and solid sub-problems, enabling second-order accuracy and high-frequency dissipation simultaneously in the full FSI system. Using the segregated predictor multi-corrector algorithm we previously developed for the unified continuum and VMS formulation \cite{Liu2018,Liu2020}, the three-by-three block structure for the matrix problem in the consistent Newton-Raphson procedure can be reduced to a two-by-two block structure identical to that of the incompressible Navier-Stokes equations. Not only does this segregated algorithm preserve the consistency of the Newton-Raphson method, but it also enables the use of existing CFD solvers with only minimal modifications. We exploit this preserved two-by-two block structure for preconditioning of the linear system with our three-level nested block preconditioner \cite{Liu2020}, which attains improved representation of the Schur complement with a ``matrix-free" technique to algorithmically define the action of the Schur complement on a vector. Our nested block preconditioning technique is thus robust for cardiovascular simulations involving several contributing terms in the Schur complement of widely varying orders of magnitude, associated with convection, diffusion, vascular wall stiffness, and reduced models at the outlets representing downstream vasculature. We further note that our study represents the first in which block preconditioning is performed for a monolithically coupled FSI system.
The body of this work is organized as follows. In Section \ref{sec:formulation}, the unified continuum and VMS formulation is simplified to our reduced unified continuum formulation for vascular FSI via three modeling assumptions. The spatiotemporal discretization methods and the associated predictor multi-corrector algorithm are also presented. In Section \ref{sec:iterative_solution_method}, the preconditioning technique for the associated linear system is developed. In Section \ref{sec:verification}, verification of our reduced unified continuum formulation is performed against the rigid and deformable Womersley benchmark cases using both linear and quadratic tetrahedral elements. Verification of CMM with linear elements is also presented for comparison. In Section \ref{sec:practical_modeling_techniques}, we discuss practical modeling techniques for capturing physiological behavior in patient-specific clinical applications. Among these practical modeling techniques is tissue prestressing to account for the nonzero internal stress state of the vascular wall at imaging, which we iteratively update via fixed-point iterations while solving a modified problem over the vascular wall under a fluid traction corresponding to the cardiac phase at imaging. We additionally present a centerline-based approach for variable wall thickness assignment to avoid unphysiological thicknesses produced by previous Laplacian approaches at regions of sharp local changes in geometry. Finally, we conclude with an assessment of our combined FSI technology with two patient-specific cases in Section \ref{sec:clinical_applications}.
\section{Governing equations and their spatiotemporal discretization}
\label{sec:formulation}
In this section, we introduce the strong and weak forms of the elastodynamic and incompressible Newtonian fluid problems following the unified continuum formulation \cite{Liu2018} and outline the assumptions yielding our reduced unified continuum formulation in the Eulerian description. This monolithically coupled FSI system is then integrated in time using the generalized-$\alpha$ method, which is solved by a segregated predictor multi-corrector algorithm.
\subsection{Strong-form problem}
We consider a domain $\Omega \subset \mathbb R^3$ admitting a non-overlapping subdivision $\overline{\Omega} = \overline{\Omega^f \cup \Omega^s}$, $\emptyset = \Omega^f \cap \Omega^s$, in which $\Omega^f$ and $\Omega^s$ represent the sub-domains occupied by the fluid and solid materials, respectively. The fluid-solid interface is a two-dimensional manifold denoted by $\Gamma_I$, and the boundary $\Gamma := \partial \Omega$ can be partitioned into four non-overlapping subdivisions:
\begin{align*}
\Gamma = \overline{\Gamma^f_g \cup \Gamma^f_h \cup \Gamma^s_g \cup \Gamma^s_h}, \mbox{ and } \emptyset = \Gamma_g^f \cap \Gamma_h^f = \Gamma_g^f \cap \Gamma_g^s = \Gamma_g^f \cap \Gamma_h^s = \Gamma_h^f \cap \Gamma_h^s = \Gamma_h^f \cap \Gamma_g^s = \Gamma_g^s \cap \Gamma_h^s.
\end{align*}
In the above, the four subdivisions represent the Dirichlet part of the fluid boundary, the Neumann part of the fluid boundary, the Dirichlet part of the solid boundary, and the Neumann part of the solid boundary, respectively. We note that since the present theory involves multiple unknowns in $\mathbb R^3$, the boundary $\Gamma$ should in fact be generalized to admit a different decomposition for each component of each unknown \cite[p.77]{Hughes1987}. To simplify our presentation, however, we consider the same partition of $\Gamma$ for all unknowns here and note that practical problems would require generalization. We demand $\Gamma$ to be at least Lipschitz such that the outward normal vector $\bm n$ is well-defined almost everywhere. We also assume that the interior fluid-solid interface $\Gamma_I$ is sufficiently smooth such that its outward normal vector is well-defined. In particular, we use $\bm n^f$ and $\bm n^s$ to represent the unit outward normal vector on $\Gamma_{I}$ relative to $\Omega^f$ and $\Omega^s$ respectively, such that $\bm n^f = - \bm n^s$. Let the time interval of interest be denoted by $(0,T) \subset \mathbb R$, with $T>0$. With this geometric configuration in mind, we state the strong-form sub-problems separately for the two sub-domains.
Under the Stokes' hypothesis and isothermal condition, the initial-boundary value problem in the solid sub-domain $\Omega^s$ can be stated as follows in the Lagrangian description \cite{Liu2018}. Given the body force per unit mass $\bm b^s$, Dirichlet data $\bm g^s$, boundary traction $\bm h^s$, and initial displacement and velocity fields $\bm u_0^s$ and $\bm v_0^s$, find the solid displacement $\bm u^s$, pressure $p^s$, and velocity $\bm v^s$, such that
\begin{align}
\label{eq:ela_kinematics}
& \bm 0 = \frac{d \bm u^s}{d t} - \bm v^s, && \mbox{ in } \Omega^s \times (0,T), \displaybreak[2] \\
\label{eq:ela_mass}
& 0 = \beta^s_{\theta}(p^s) \frac{dp^s}{dt} + \nabla \cdot \bm v^s, && \mbox{ in } \Omega^s \times (0,T), \displaybreak[2] \\
\label{eq:ela_mom}
& \bm 0 = \rho^s(p^s) \frac{d \bm v^s}{d t} - \nabla \cdot \bm \sigma^s_{\mathrm{dev}} + \nabla p^s - \rho^s(p^s) \bm b^s, && \mbox{ in } \Omega^s \times (0,T), \displaybreak[2] \\
\label{eq:dirichlet_bc_s}
& \bm u^s = \bm g^s, && \mbox{ on } \Gamma_g^s \times (0,T), \displaybreak[2] \\
\label{eq:neumann_bc_s}
&\bm \sigma^s \bm n = \bm h^s, && \mbox{ on } \Gamma_h^s \times (0,T), \displaybreak[2] \\
\label{eq:initial_condition_s}
&\bm u^s(\cdot, 0) = \bm u_0^s(\cdot), && \mbox{ in } \bar{\Omega}^s, \displaybreak[2] \\
&\bm v^s(\cdot, 0) = \bm v_0^s(\cdot), && \mbox{ in } \bar{\Omega}^s.
\end{align}
Here, $\beta^s_{\theta}$ is the isothermal compressibility coefficient, $\rho^s$ is the solid density, and $\bm \sigma^s_{\mathrm{dev}}$ is the deviatoric component of the Cauchy stress. To characterize the material behavior, constitutive relations for $\beta^s_{\theta}$, $\rho^s$, and $\bm \sigma^s_{\mathrm{dev}}$ must be provided. Interested readers may refer to \cite[Sec.~2.4]{Liu2018} for an overview of various constitutive relations for $\beta_{\theta}^s$ and $\rho^s$ and their relations with different forms of volumetric free energies.
\begin{assumption}
\label{assumption:small_strain}
The solid deformation is small enough such that the infinitesimal strain theory is valid.
\end{assumption}
Under the infinitesimal strain assumption, the reference and current frames coincide, as do the total ($d/dt$) and partial ($\partial / \partial t$) time derivatives in \eqref{eq:ela_kinematics}-\eqref{eq:ela_mom}. The density $\rho^s(p^s)$ takes its value in the reference configuration, denoted $\rho^s$. Furthermore, one may show that $\beta^s_{\theta}(p^s) = 1/\kappa^s$, where $\kappa^s$ is the solid bulk modulus \cite[p.941]{Liu2019}. Integrating \eqref{eq:ela_mass} in time, assuming an initially undeformed, stress-free state, then yields
\begin{align}
\label{eq:small_strain_pressure_constitutive}
p^s = - \kappa^s \nabla \cdot \bm u^s.
\end{align}
The infinitesimal strain tensor is given by
\begin{align*}
\bm \epsilon(\bm u^s) := \frac12 \left( \nabla \bm u^s + \left( \nabla \bm u^s \right) ^T \right) = \bm \epsilon_{\mathrm{dev}}(\bm u^s) + \frac{1}{3} \nabla \cdot \bm u^s \bm I,
\end{align*}
where $\bm \epsilon_{\mathrm{dev}}$ is the strain deviator and $\bm I$ is the second-order identity tensor. Given a strain energy function $W(\bm \epsilon_{\mathrm{dev}})$, the stress deviator is then
\begin{align*}
\bm \sigma^s_{\mathrm{dev}} = \frac{\partial W(\bm \epsilon_{\mathrm{dev}})}{\partial \bm \epsilon} = \frac{\partial W(\bm \epsilon_{\mathrm{dev}})}{\partial \bm \epsilon_{\mathrm{dev}}} : \frac{\partial \bm \epsilon_{\mathrm{dev}}}{\partial \bm \epsilon} = \mathbb P^T \frac{\partial W(\bm \epsilon_{\mathrm{dev}})}{\partial \bm \epsilon_{\mathrm{dev}}}, \qquad \mathbb P := \mathbb I - \frac{1}{3} \bm I \otimes \bm I,
\end{align*}
where $\mathbb P$ is the deviatoric projector, and $\mathbb I$ is the fourth-order symmetric identity tensor. The Cauchy stress for the solid body thus takes the following form,
\begin{align*}
\bm \sigma^s := \bm \sigma^s_{\mathrm{dev}} - p^s \bm I = \mathbb P^T \frac{\partial W(\bm \epsilon_{\mathrm{dev}})}{\partial \bm \epsilon_{\mathrm{dev}}} + \kappa^s \nabla \cdot \bm u^s \bm I.
\end{align*}
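The stress split above can be checked numerically. The following NumPy sketch, an illustration only, with the material values and variable names being our own choices, assembles the deviatoric projector $\mathbb P$ explicitly and verifies that $\bm \sigma^s = 2\mu^s \bm \epsilon_{\mathrm{dev}} + \kappa^s \, \mathrm{tr}(\bm \epsilon) \bm I$ coincides with the standard isotropic linear elastic law $\bm \sigma^s = \lambda^s \, \mathrm{tr}(\bm \epsilon) \bm I + 2\mu^s \bm \epsilon$ when $\kappa^s = \lambda^s + 2\mu^s/3$.

```python
import numpy as np

# Fourth-order symmetric identity: I_ijkl = (d_ik d_jl + d_il d_jk) / 2
d = np.eye(3)
II = 0.5 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d))
# Deviatoric projector: P = I - (1/3) I (x) I
P = II - np.einsum('ij,kl->ijkl', d, d) / 3.0

# A random symmetric (infinitesimal) strain tensor
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
eps = 0.5 * (A + A.T)

# Apply P to extract the strain deviator
eps_dev = np.einsum('ijkl,kl->ij', P, eps)

# Cauchy stress in deviatoric/hydrostatic form vs. the Lame form
mu_s, kappa_s = 1.0, 2.0                  # illustrative moduli
lam_s = kappa_s - 2.0 * mu_s / 3.0        # kappa = lambda + 2 mu / 3
sigma = 2.0 * mu_s * eps_dev + kappa_s * np.trace(eps) * d
sigma_ref = lam_s * np.trace(eps) * d + 2.0 * mu_s * eps
```

The deviator returned by $\mathbb P$ is trace-free, and the two stress expressions agree term by term.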
\begin{remark}
As will be revealed, Assumption \ref{assumption:small_strain} renders the eventual FSI formulation implementationally and computationally appealing. While several promising results have been reported in the literature \cite{Colciago2014,Filonova2019}, its validity must be judiciously assessed under various physiological settings in both health and disease.
\end{remark}
While the fluid sub-problem in an ALE formulation is posed on a moving domain that tracks the solid deformation, Assumption \ref{assumption:small_strain} allows the fluid domain to be treated as fixed, rendering mesh motion unnecessary. The initial-boundary value problem for the incompressible Newtonian fluid in the fluid sub-domain $\Omega^f$ can thus be stated as follows. Given the body force per unit mass $\bm b^f$, Dirichlet data $\bm g^f$, boundary traction $\bm h^f$, and divergence-free initial velocity field $\bm v^f_0$, find the fluid velocity $\bm v^f$ and pressure $p^f$, such that
\begin{align}
\label{eq:ns_mom}
& \bm 0 = \rho^f \frac{\partial \bm v^f}{\partial t} + \rho^f \bm v^f \cdot \nabla \bm v^f - \nabla \cdot \bm \sigma^f_{\mathrm{dev}} + \nabla p^f - \rho^f \bm b^f, && \mbox{ in } \Omega^f \times (0,T), \\
\label{eq:ns_mass}
& 0 = \nabla \cdot \bm v^f, && \mbox{ in } \Omega^f \times (0, T), \\
\label{eq:dirichlet_bc_f}
& \bm v^f = \bm g^f && \mbox{ on } \Gamma_g^f \times (0,T), \\
\label{eq:neumann_bc_f}
&\bm \sigma^f \bm n = \bm h^f && \mbox{ on } \Gamma_h^f \times (0,T), \\
\label{eq:initial_condition_f}
&\bm v^f(\cdot, 0) = \bm v_0^f(\cdot), && \mbox{ in } \bar{\Omega}^f,
\end{align}
wherein
\begin{align}
\label{eq:ns_sigma}
\bm \sigma^f_{\mathrm{dev}} := 2\mu^f \bm \varepsilon_{\mathrm{dev}}(\bm v^f), \qquad \bm \varepsilon_{\mathrm{dev}}(\bm v^f) := \frac12 \left( \nabla \bm v^f + \left( \nabla \bm v^{f} \right)^T \right) - \frac13 \nabla \cdot \bm v^f \bm I.
\end{align}
Here, $\rho^f$ is the fluid density, $\bm \sigma^f_{\mathrm{dev}}$ is the deviatoric component of the Cauchy stress for a Newtonian fluid, $\mu^f$ is the dynamic viscosity, and $\bm \varepsilon_{\mathrm{dev}}$ is the deviatoric component of the rate-of-strain tensor.
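As a concrete illustration of \eqref{eq:ns_sigma}, the following sketch (our own, not part of the formulation) evaluates the deviatoric rate-of-strain tensor and the resulting viscous stress for a simple shear flow, using a representative blood viscosity of $0.004$ in consistent units (e.g., g/(mm$\cdot$s), equivalent to 4 cP).

```python
import numpy as np

def rate_of_strain_dev(grad_v):
    """Deviatoric rate-of-strain tensor for a given velocity gradient."""
    eps = 0.5 * (grad_v + grad_v.T)
    return eps - np.trace(eps) / 3.0 * np.eye(3)

# Simple shear: v = (gamma_dot * y, 0, 0), a divergence-free field
gamma_dot = 2.0          # illustrative shear rate
mu_f = 0.004             # ~4 cP, a common choice for blood
grad_v = np.zeros((3, 3))
grad_v[0, 1] = gamma_dot

eps_dev = rate_of_strain_dev(grad_v)
sigma_dev = 2.0 * mu_f * eps_dev   # deviatoric Cauchy stress
```

For this divergence-free field the deviatoric correction vanishes, and the shear stress reduces to the familiar $\sigma^f_{12} = \mu^f \dot\gamma$.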
The strong-form FSI problem can be completed with the following kinematic condition enforcing the continuity of velocity on $\Gamma_I$,
\begin{align}
\label{eq:coupling-kinematic}
\bm v^f = \bm v^s, \qquad \hspace{1.3cm} \mbox{on } \Gamma_I,
\end{align}
and the following dynamic condition enforcing the continuity of stress,
\begin{align}
\label{eq:coupling-traction}
\bm \sigma^f \bm n^f = -\bm \sigma^s \bm n^s, \qquad \mbox{ on } \Gamma_I.
\end{align}
Together, Equations \eqref{eq:ela_kinematics}-\eqref{eq:coupling-traction} constitute the coupled strong-form FSI problem, in which the solid problem is restricted to small-strain elastodynamics.
\subsection{Semi-discrete formulation}
\label{subsec:semi-discrete-formulation}
In this section, we present the semi-discrete formulations for the two coupled sub-problems separately. By invoking two more assumptions for the vascular wall, we then reduce the elastodynamics formulation to a thin-walled, linear elastic membrane formulation, yielding a convenient FSI formulation that does not explicitly require solid degrees of freedom. We further note that the reduction to a membrane formulation conveniently bypasses the troublesome procedure of modeling the vascular wall, which current medical imaging techniques largely remain unable to accurately resolve \cite{Liu2020b}.
\subsubsection{Semi-discrete formulation for elastodynamics}
Let $\mathcal S_{\bm v}^s$ be the trial solution space for the solid velocity; let $\mathcal S_{\bm u}^s$ and $\mathcal V_{\bm u}^s$ denote the trial solution and test function spaces for the solid displacement. We can then state the semi-discrete elastodynamics formulation in $\Omega^s$ as follows. Find
\begin{align*}
\bm y^s_h(t):= \Big\lbrace \bm v^s_h(t), \bm u^s_h(t) \Big\rbrace^T \in \mathcal S^s_{\bm v} \times \mathcal S^s_{\bm u}
\end{align*}
such that
\begin{align*}
& \mathbf B^{s}_{\mathrm{k}}\Big( \dot{\bm y}^s_h, \bm y^s_h \Big) = \bm 0, && \displaybreak[2] \\
& \mathbf B^{s}_{\mathrm{m}}\Big( \bm w^s_h ; \dot{\bm y}^s_h, \bm y^s_h \Big) = 0, && \forall \bm w^s_h \in \mathcal V^s_{\bm u},
\end{align*}
where
\begin{align}
\label{eq:semi-discrete-kinematic}
& \mathbf B^{s}_{\mathrm{k}} \Big( \dot{\bm y}_h^s, \bm y_h^s \Big) := \frac{d\bm u^s_h}{dt} - \bm v^s_h, \displaybreak[2] \\
\label{eq:semi-discrete-u}
& \mathbf B^{s}_{\mathrm{m}} \Big( \bm w_h^s ; \dot{\bm y}_h^s, \bm y_h^s \Big) := \int_{\Omega^s} \bm w_h^s \cdot \rho^s \left( \frac{d \bm v_h^s}{d t} - \bm b^s \right) d\Omega + \int_{\Omega^s} \bm \epsilon(\bm w_h^s) : \bm \sigma^s(\bm u_h^s) d\Omega - \int_{\Gamma_h^s} \bm w_h^s \cdot \bm h^s d\Gamma,
\end{align}
with $\bm y^s_h(0) = \left\lbrace \bm v^s_0, \bm u^s_0 \right\rbrace^T$. Here, $\bm v^s_0$ and $\bm u^s_0$ are $\mathcal L^2$ projections of the initial velocity and displacement fields onto the discrete spaces $\mathcal S_{\bm v}^s$ and $\mathcal S_{\bm u}^s$, respectively.
\begin{remark}
In contrast to the conventional ``acceleration form" in which only displacement degrees-of-freedom are utilized, acceleration is represented here as the first time derivative of velocity via the kinematic relation \eqref{eq:semi-discrete-kinematic} \cite{Hulbert2017}. While this ``momentum form" ostensibly introduces three additional velocity degrees of freedom on each node in $\Omega^s$, we will later show that \eqref{eq:semi-discrete-kinematic} does not enter the implicit solution procedure for the fully discrete formulation. Furthermore, as will be discussed later, this first-order structural dynamics formulation is favorable for temporal discretization via the generalized-$\alpha$ method.
\end{remark}
\noindent
Restricting our discussion to vascular FSI, we now introduce our second assumption pertaining to the vascular wall geometry.
\begin{assumption}
\label{assumption:thin-wall}
$\Omega^s$ is thin in one direction and can thus be parameterized by the fluid-solid interface $\Gamma_I$ and a through-thickness coordinate in the unit outward normal direction.
\end{assumption}
To simplify our presentation, let $\Gamma_I$ be parameterized by a single chart $\Xi \subset \mathbb R^2$, a bounded open set. Let $\bm \chi(\xi,\eta)$ be a smooth one-to-one mapping of $(\xi,\eta) \in \Xi$ onto the fluid-solid interface $\bm \chi \in \Gamma_I$, where $\bm \chi$ represents the position vector of a generic point on $\Gamma_I$. The unit outward normal vector to $\Omega^f$ can be represented by
\begin{align*}
\bm n^f = \frac{\bm e_{\xi} \times \bm e_{\eta}}{\| \bm e_{\xi} \times \bm e_{\eta} \|}, \quad \mbox{ where } \quad \bm e_{\xi} := \frac{\partial \bm \chi}{\partial \xi} / \left \|\frac{\partial \bm \chi}{\partial \xi} \right\|, \quad \bm e_{\eta} := \frac{\partial \bm \chi}{\partial \eta} / \left \|\frac{\partial \bm \chi}{\partial \eta} \right\|.
\end{align*}
Given this thin-walled assumption, we can introduce the following diffeomorphism from $\bm \xi := \left\lbrace \xi, \eta, \zeta \right\rbrace \in \Xi \times (0,1)$ to $\bm x \in \Omega^s$,
\begin{align}
\label{eq:3Dshell-parameterization}
\bm x(\bm \xi) = \bm x(\xi,\eta,\zeta) := \bm \chi(\xi,\eta) + \zeta h^s(\xi,\eta) \bm n^f,
\end{align}
where $\xi$ and $\eta$ are the in-plane parametric coordinates, $h^s$ is the wall thickness as a function of $\xi$ and $\eta$, and $\zeta \in (0,1)$ is the through-thickness parametric coordinate.
For any fixed $\zeta$, the surface defined by this parameterization of $\Omega^s$ is a lamina, and a corresponding lamina coordinate system $\{\bm e^l_1, \bm e^l_2, \bm e^l_3\}$, denoted with a superscript $l$, may be constructed as follows \cite[Sec.~6.2]{Hughes1987},
\begin{align*}
\bm e_1^l := \frac{\sqrt{2}}{2} \left( \bm e_\alpha - \bm e_\beta \right), \qquad
\bm e_2^l := \frac{\sqrt{2}}{2} \left( \bm e_\alpha + \bm e_\beta \right), \qquad
\bm e_3^l := \bm n^f,
\end{align*}
in which,
\begin{align*}
\bm e_\alpha := \frac12 \left( \bm e_\xi + \bm e_\eta \right) / \left \| \frac12 \left( \bm e_\xi + \bm e_\eta \right) \right \| , \quad
\bm e_\beta := \bm e_3^l \times \bm e_\alpha / \| \bm e_3^l \times \bm e_\alpha \|.
\end{align*}
With these lamina basis vectors $\left\lbrace \bm e_i^l \right\rbrace_{i=1}^{3}$, the coordinate transformation from the global coordinates $\bm x$ to the local lamina coordinates $\bm x^l$ is then given by $\bm x^l = \bm Q \bm x$ with the rotation matrix
\begin{align*}
\bm Q :=
\begin{bmatrix}
\bm e_1^l & \bm e_2^l & \bm e_3^l
\end{bmatrix}^T .
\end{align*}
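The lamina triad construction above is purely algebraic and easy to verify numerically. The following sketch, assuming for illustration a unit cylinder chart $\bm \chi(\xi,\eta) = (\cos\xi, \sin\xi, \eta)$ of our own choosing, builds $\left\lbrace \bm e_i^l \right\rbrace_{i=1}^3$ from the unit tangents and checks that $\bm Q$ is a proper rotation.

```python
import numpy as np

def lamina_basis(e_xi, e_eta):
    """Orthonormal lamina triad {e1, e2, e3} from unit in-plane
    tangents e_xi, e_eta, following the construction in the text."""
    e3 = np.cross(e_xi, e_eta)
    e3 /= np.linalg.norm(e3)
    ea = e_xi + e_eta
    ea /= np.linalg.norm(ea)
    eb = np.cross(e3, ea)
    eb /= np.linalg.norm(eb)
    s = np.sqrt(2.0) / 2.0
    return s * (ea - eb), s * (ea + eb), e3

# Unit tangents of the cylinder chart chi = (cos t, sin t, z) at t = 0.7
t = 0.7
e_xi = np.array([-np.sin(t), np.cos(t), 0.0])
e_eta = np.array([0.0, 0.0, 1.0])
e1, e2, e3 = lamina_basis(e_xi, e_eta)
Q = np.vstack([e1, e2, e3])   # rows are the lamina basis vectors
```

Since $\bm e_\alpha$ and $\bm e_\beta$ are orthonormal, $\bm e_1^l$ and $\bm e_2^l$ are as well, and $\bm e_1^l \times \bm e_2^l = \bm e_\alpha \times \bm e_\beta = \bm e_3^l$, so $\bm Q$ satisfies $\bm Q \bm Q^T = \bm I$ with $\det \bm Q = 1$.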
From the parameterization \eqref{eq:3Dshell-parameterization}, we have
\begin{align*}
j :=& \mathrm{det}\left(\frac{\partial \bm x}{\partial \bm \xi}\right) = h^s \bm n ^f \cdot \left(\frac{\partial \bm x}{\partial \xi} \times \frac{\partial \bm x}{\partial \eta} \right) = h^s \bm n ^f \cdot \left( \left(\frac{\partial \bm \chi}{\partial \xi} + \zeta \frac{\partial h^s}{\partial \xi} \bm n^f \right) \times \left( \frac{\partial \bm \chi}{\partial \eta} + \zeta \frac{\partial h^s}{\partial \eta} \bm n^f \right) \right) \nonumber \\
=& h^s \bm n ^f \cdot \left(\frac{\partial \bm \chi}{\partial \xi} \times \frac{\partial \bm \chi}{\partial \eta} \right),
\end{align*}
where the in-plane derivatives of $\bm n^f$, which scale with the surface curvature, are neglected in accordance with Assumption \ref{assumption:thin-wall}. This indicates the following transformation of the volume element from $\Xi \times (0,1)$ to $\Omega^s$,
\begin{align}
\label{eq:shell-volume-element}
d\Omega^s := d\bm x= j d\bm \xi = j d\xi d\eta d\zeta = h^s \bm n ^f \cdot \left(\frac{\partial \bm \chi}{\partial \xi} \times \frac{\partial \bm \chi}{\partial \eta} \right) d\xi d\eta d\zeta = h^s \bm n^f \cdot \bm n^f d\Gamma_I d\zeta = h^s d\Gamma_I d\zeta,
\end{align}
where we have utilized the transformation of the area element from $\Xi$ to $\Gamma_I$,
\begin{align*}
\bm n^f d\Gamma_I = \left(\frac{\partial \bm \chi}{\partial \xi} \times \frac{\partial \bm \chi}{\partial \eta} \right) d\xi d\eta.
\end{align*}
The volume integral over $\Omega^s$ can thus be simplified in the following manner,
\begin{align}
\label{eq:shell-volume-integral}
\int_{\Omega^s} \left( \cdot \right) d\Omega
= \int_{\Gamma_I} h^s \int_0^1 \left( \cdot \right) d\zeta d\Gamma.
\end{align}
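A numerical spot-check of \eqref{eq:shell-volume-element}-\eqref{eq:shell-volume-integral} on a straight cylindrical segment is given below; the geometry values $R$, $L$, and $h^s$ are arbitrary choices of our own for this sketch. The quadrature recovers $h^s \, \mathrm{Area}(\Gamma_I)$, which differs from the exact annular shell volume by the curvature correction of relative size $\sim h^s/(2R)$ neglected under the thin-walled assumption.

```python
import numpy as np

# Illustrative geometry: vessel radius R, length L, uniform thickness hs
R, L, hs = 1.0, 4.0, 0.1

def chi(xi, eta):
    """Chart of the fluid-solid interface: a cylinder of radius R."""
    return np.array([R * np.cos(xi), R * np.sin(xi), eta])

# Midpoint quadrature of |chi_xi x chi_eta| over the parameter chart,
# with tangents approximated by central finite differences
n_xi, n_eta, h = 200, 50, 1.0e-6
dA = (2.0 * np.pi / n_xi) * (L / n_eta)
area = 0.0
for x in (np.arange(n_xi) + 0.5) * 2.0 * np.pi / n_xi:
    for e in (np.arange(n_eta) + 0.5) * L / n_eta:
        t1 = (chi(x + h, e) - chi(x - h, e)) / (2.0 * h)
        t2 = (chi(x, e + h) - chi(x, e - h)) / (2.0 * h)
        area += np.linalg.norm(np.cross(t1, t2)) * dA

vol_membrane = hs * area                              # hs * Area(Gamma_I)
vol_exact = np.pi * ((R + hs) ** 2 - R ** 2) * L      # exact shell volume
rel_err = abs(vol_membrane - vol_exact) / vol_exact   # ~ hs / (2R + hs)
```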
We finally introduce the following membrane assumption for the vascular wall.
\begin{assumption}
\label{assumption:membrane}
The displacement $\bm u^s$ is a function of the in-plane parametric coordinates $(\xi, \eta)$ only, and the transverse normal stress $\sigma^{s,l}_{33}$, associated with the $\bm e^l_3$ direction of the lamina system, vanishes.
\end{assumption}
Cardiac pulse wavelengths are at least three orders of magnitude larger than arterial diameters \cite{Alastruey2012}, causing vessels to respond to transverse loading primarily with in-plane stresses rather than bending stresses. Out-of-plane rotations and their corresponding bending effects are thus neglected under this membrane assumption, minimizing the number of degrees of freedom and facilitating convenient fluid-solid coupling. In addition, the transverse normal stress is assumed to vanish to avoid thickness locking, also known as Poisson thickness locking in classical shell theories, an assumption well-substantiated in the shell literature \cite{Bazilevs2009a, Bischoff2004}. Furthermore, it is well known that the linear, constant-strain triangle suffers from severe transverse and in-plane shear locking when used to model membrane components experiencing transverse loads in three-dimensional structures, thereby exhibiting overly stiff behavior \cite{Bergan1985, Jun2018}. Transverse shear modes are therefore added to stabilize the linear membrane.
We now define the solid constitutive relation in the lamina coordinate system to enforce the zero transverse normal stress condition. Considering the strain energy for isotropic linear elasticity,
\begin{align*}
W(\bm \epsilon_{\mathrm{dev}}) = \mu^s \bm \epsilon_{\mathrm{dev}} : \bm \epsilon_{\mathrm{dev}},
\end{align*}
the constitutive relation is given by
\begin{align*}
\bm \sigma^{s, l}_{\mathrm{dev}} = 2\mu^s \bm \epsilon_{\mathrm{dev}}(\bm u^{s, l}).
\end{align*}
Recalling from \eqref{eq:small_strain_pressure_constitutive} that the hydrostatic component of the Cauchy stress is already given by $p^s = -\kappa^s \nabla \cdot \bm u^{s, l}$, the Cauchy stress can be written as
\begin{align*}
& \bm \sigma^{s, l} = \bm \sigma^{s, l}_{\mathrm{dev}} - p^s \bm I = \mathbb C^{s, l} \bm \epsilon^{l}(\bm u^{s,l}), \quad \mbox{ with } \quad \mathbb C^{s, l} := 2 \mu^s(\bm x^l) \mathbb I + \lambda^s(\bm x^l) \bm I \otimes \bm I,
\end{align*}
wherein
\begin{align*}
&\bm \sigma^{s, l} = \left\lbrace \sigma^{s, l}_{I} \right\rbrace = \left[ \sigma_{11}^{s,l}, \sigma_{22}^{s,l}, \sigma_{12}^{s,l}, \sigma_{23}^{s,l} , \sigma_{31}^{s,l} \right]^T, \displaybreak[2] \\
&\bm \epsilon^{l}(\bm u^{s, l}) = \left\lbrace \epsilon^{l}_{I} \right\rbrace = \left[ \epsilon_{11}^{l},
\epsilon_{22}^{l}, 2 \epsilon_{12}^{l}, 2 \epsilon_{23}^{l}, 2 \epsilon_{31}^{l} \right]^T =
\left[ u_{1,1}^{s,l} , u_{2,2}^{s,l}, u_{1,2}^{s,l} + u_{2,1}^{s,l}, u_{3,2}^{s,l}, u_{3,1}^{s,l} \right]^T, \displaybreak[2] \\
& \mathbb C^{s, l} = \left[ \mathbb C^{s, l}_{IJ} \right] = \frac{E}{(1 - \nu^2)}
\begin{bmatrix}
1 & \nu & & & \\[1mm]
\nu & 1 & & & \\[1mm]
& & \displaystyle \frac{1 - \nu}{2} & & \\[1mm]
& & & \kappa \displaystyle \frac{(1 - \nu)}{2} & \\[1mm]
& & & & \kappa \displaystyle \frac{ (1 - \nu)}{2} \\[1mm]
\end{bmatrix}
\end{align*}
in Voigt notation. Here, $\mu^s$ and $\lambda^s$ are the Lam\'e parameters related to the bulk modulus $\kappa^s$ through the relation $\kappa^s := 2\mu^s/3 + \lambda^s$, $E$ is the Young's modulus, $\nu$ is the Poisson's ratio, and $\kappa = 5/6$ is the shear correction factor \cite[p.391]{Hughes1987}. Now adopting the full tensor notation rather than Voigt notation, the Cauchy stress in the lamina coordinate system can be rotated to the global coordinate system by
\begin{align*}
\bm \sigma^s = \bm Q^T \bm \sigma^{s,l} \bm Q.
\end{align*}
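The membrane constitutive evaluation and rotation above can be sketched as follows; the material values, the uniaxial strain state, and the rotation angle are illustrative choices of our own. The sketch assembles the $5 \times 5$ matrix $\mathbb C^{s,l}$ in Voigt notation, applies it to a lamina strain with $\epsilon^l_{22} = -\nu \epsilon^l_{11}$, and rotates the reassembled stress tensor to global coordinates.

```python
import numpy as np

E_mod, nu, ks = 1.0e7, 0.5, 5.0 / 6.0   # illustrative modulus; nu = 0.5; shear factor
c = E_mod / (1.0 - nu ** 2)

# 5x5 membrane stiffness in Voigt order [11, 22, 12, 23, 31]
C = c * np.diag([1.0, 1.0, (1.0 - nu) / 2.0,
                 ks * (1.0 - nu) / 2.0, ks * (1.0 - nu) / 2.0])
C[0, 1] = C[1, 0] = c * nu

# Lamina strain for a uniaxial stress state: eps22 = -nu * eps11
eps11 = 1.0e-3
eps_voigt = np.array([eps11, -nu * eps11, 0.0, 0.0, 0.0])
sig_voigt = C @ eps_voigt                # [s11, s22, s12, s23, s31]

# Reassemble the lamina stress tensor (s33 = 0 by the membrane assumption)
s = sig_voigt
sig_l = np.array([[s[0], s[2], s[4]],
                  [s[2], s[1], s[3]],
                  [s[4], s[3], 0.0]])

# Rotate to global coordinates: sigma = Q^T sigma^l Q
th = 0.3
Q = np.array([[np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
sig_global = Q.T @ sig_l @ Q
```

For this strain state the plane-stress relation recovers $\sigma^{s,l}_{11} = E \, \epsilon^l_{11}$ with $\sigma^{s,l}_{22} = 0$, and the rotation preserves the stress invariants.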
Assumption \ref{assumption:membrane} further enables evaluation of $\left( \cdot \right)$ in \eqref{eq:shell-volume-integral} at $\zeta = 0$, thereby reducing the volume integral over $\Omega^s$ to a surface integral over $\Gamma_I$,
\begin{align}
\label{eq:shell-volume-integral-to-surface-integral}
\int_{\Omega^s} \left( \cdot \right) d\Omega \approx \int_{\Gamma_I} h^s \left( \cdot \right)|_{\zeta = 0} d\Gamma.
\end{align}
\begin{remark}
The choice of a linear constitutive model can be well justified by experimental canine aortic and pulmonary arterial data exhibiting linearity within the physiological range of pressures \cite{Debes1995, Zhou1997}. Nonetheless, material nonlinearity and anisotropy could instead be considered using an alternative form for the strain energy function $W(\bm \epsilon_{\mathrm{dev}})$ in the above derivation. We note that for problems characterized by large deformation, such as hypertensive clinical cases, Assumptions \ref{assumption:thin-wall} and \ref{assumption:membrane} could still be invoked, yet an ALE description of the fluid sub-problem would be required, necessitating mesh motion and rendering the overall FSI formulation less computationally appealing.
\end{remark}
\subsubsection{Residual-based VMS formulation for an incompressible Newtonian fluid}
Let $\mathcal S_{\bm v}^f$ and $\mathcal S_{p}^f$ denote the trial solution spaces for the fluid velocity and pressure, and let $\mathcal V_{\bm v}^f$ and $\mathcal V_{p}^f$ be their corresponding test function spaces. We can then construct the semi-discrete fluid formulation in $\Omega^f$ using the residual-based VMS formulation \cite{Bazilevs2007} as follows. Find
\begin{align*}
\bm y_h^f(t):= \left\lbrace \bm v_h^f(t), p_h^f(t) \right\rbrace^T \in \mathcal S_{\bm v}^f \times \mathcal S_{p}^f
\end{align*}
such that
\begin{align}
\label{eq:semi-discrete-v}
& \mathbf B^{f}_{\mathrm{m}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) = 0, && \forall \bm w_h^f \in \mathcal V_{\bm v}^f, \\
\label{eq:semi-discrete-p}
& \mathbf B^{f}_{\mathrm{c}}\left( q_h^f; \dot{\bm y}_h^f, \bm y_h^f \right) = 0, && \forall q_h^f \in \mathcal V_{p}^f,
\end{align}
where
\begin{align}
\label{eq:vms_momentum}
& \mathbf B^{f}_{\mathrm{m}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) := \mathbf B_{\mathrm{m}}^{\textup{vol}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) + \mathbf B_{\mathrm{m}}^{\mathrm{h}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) + \mathbf B_{\mathrm{m}}^{\textup{bf}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right), \displaybreak[2] \\
& \mathbf B_{\mathrm{m}}^{\textup{vol}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) := \int_{\Omega^f} \bm w_h^f \cdot \rho^f \left( \frac{\partial \bm v_h^f}{\partial t} + \bm v_h^f \cdot \nabla \bm v_h^f - \bm b^f \right) d\Omega - \int_{\Omega^f} \nabla \cdot \bm w_h^f p_h^f d\Omega + \int_{\Omega^f} 2\mu^f \bm \varepsilon(\bm w_h^f) : \bm \varepsilon(\bm v_h^f) d\Omega \nonumber \displaybreak[2] \\
& \hspace{28mm} - \int_{\Omega^{f \prime}} \nabla \bm w_h^f : \left( \rho^f \bm v^{\prime} \otimes \bm v_h^f \right) d\Omega + \int_{\Omega^{f \prime}} \nabla \bm v_h^f : \left( \rho^f \bm w_h^f \otimes \bm v^{\prime} \right) d\Omega - \int_{\Omega^{f \prime}} \nabla \bm w_h^f : \left( \rho^f \bm v^{\prime} \otimes \bm v^{\prime} \right) d\Omega \nonumber \displaybreak[2] \\
& \hspace{28mm} - \int_{\Omega^{f \prime}} \nabla \cdot \bm w_h^f p^{\prime} d\Omega,\displaybreak[2] \\
\label{eq:vms_traction}
& \mathbf B_{\mathrm{m}}^{\mathrm{h}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) := - \int_{\Gamma_h^f} \bm w_h^f \cdot \bm h^f d\Gamma, \displaybreak[2] \\
\label{eq:back_flow_stabilization}
& \mathbf B_{\mathrm{m}}^{\textup{bf}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) := - \int_{\Gamma_h^f} \rho^f \beta \left(\bm v_h^f \cdot \bm n^f \right)_{-} \bm w_h^f \cdot \bm v_h^f d\Gamma, \displaybreak[2] \\
\label{eq:vms_continuity}
& \mathbf B^{f}_{\mathrm{c}}\left( q_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) := \int_{\Omega^f} q_h^f \nabla \cdot \bm v_h^f d\Omega - \int_{\Omega^{f \prime}} \nabla q_h^f \cdot \bm v^{\prime} d\Omega, \displaybreak[2] \\
& \bm v^{\prime} := -\bm \tau_{M} \left( \rho^f \frac{\partial \bm v_h^f}{\partial t} + \rho^f \bm v_h^f \cdot \nabla \bm v_h^f + \nabla p_h^f - \mu^f \Delta \bm v_h^f - \rho^f \bm b^f \right), \displaybreak[2] \\
& p^{\prime} := -\tau_C \nabla \cdot \bm v_h^f, \displaybreak[2] \\
\label{eq:vms_def_tau_m}
& \tau_M := \frac{1}{\rho^f}\left( \frac{\mathrm C_{\mathrm T}}{\Delta t^2} + \bm v_h^f \cdot \bm G \bm v_h^f + \mathrm C_{\mathrm I} \left( \frac{\mu^f}{\rho^f} \right)^2 \bm G : \bm G \right)^{-\frac12}, \displaybreak[2] \\
\label{eq:vms_def_tau_c}
& \tau_C := \frac{1}{\tau_M \textup{tr}\bm G}, \displaybreak[2] \\
& G_{ij} := \sum_{k,l=1}^{3} \frac{\partial y_k}{\partial x_i} M_{kl} \frac{\partial y_l}{\partial x_j}, \displaybreak[2] \\
\label{eq:def_K_for_scale_G}
& \bm M = [ M_{kl} ] = \frac{\sqrt[3]{2}}{2}\begin{bmatrix}
2 & 1 & 1 \\
1 & 2 & 1 \\
1 & 1 & 2
\end{bmatrix}, \displaybreak[2] \\
& \bm G : \bm G := \sum_{i,j=1}^{3} G_{ij} G_{ij}, \displaybreak[2] \\
& \textup{tr}\bm G := \sum_{i=1}^{3} G_{ii}, \displaybreak[2] \\
& \left( \bm v_h^f \cdot \bm n^f \right)_{-} := \frac{\bm v_h^f \cdot \bm n^f - |\bm v_h^f \cdot \bm n^f|}{2} =
\begin{cases}
\bm v_h^f \cdot \bm n^f & \quad \mbox{ if } \bm v_h^f \cdot \bm n^f < 0, \\
0 & \quad \mbox{ if } \bm v_h^f \cdot \bm n^f \geq 0.
\end{cases}
\end{align}
Here, $\bm y = \left\lbrace y_i \right\rbrace_{i=1}^{3}$ are natural coordinates in the parent domain; $\mathrm C_{\mathrm I}$ depends on the polynomial order of the finite element basis functions, taking the values of $36$ and $60$ for linear and quadratic interpolations, respectively \cite{Figueroa2006, Franca1992}; and $\mathrm C_{\mathrm T}$ is taken to be $4$ \cite{Liu2018, Liu2020}. $\mathbf B_{\mathrm{m}}^{\textup{bf}}$ is an additional convective traction shown to be robust in overcoming backflow divergence \cite{Bazilevs2009b, Moghadam2011}, a well-known issue in cardiovascular simulations. It can be shown that taking $\beta = 1.0$ guarantees energy stability for the numerical scheme adopted here. In this work, $\beta$ is fixed to be $0.2$ to minimize its impact on the flow field and to improve robustness at larger time steps.
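The element-level evaluation of \eqref{eq:vms_def_tau_m}-\eqref{eq:vms_def_tau_c} can be sketched as below. This is an illustration only: the function name, the density, viscosity, time-step, and velocity values, and the diagonal Jacobians standing in for element mappings are all our own choices; $\mathrm C_{\mathrm I} = 36$ corresponds to linear interpolation as stated above.

```python
import numpy as np

def stab_params(dxi_dx, v, dt, rho=1.0, mu=0.004, CT=4.0, CI=36.0):
    """VMS stabilization parameters (tau_M, tau_C) for one element.

    dxi_dx : (3,3) Jacobian d(y)/d(x) of the map to the parent element,
             constant for a linear tetrahedron.
    """
    # Metric scaling matrix M = (2^(1/3) / 2) * [[2,1,1],[1,2,1],[1,1,2]]
    M = (2.0 ** (1.0 / 3.0) / 2.0) * np.array([[2.0, 1.0, 1.0],
                                               [1.0, 2.0, 1.0],
                                               [1.0, 1.0, 2.0]])
    G = dxi_dx.T @ M @ dxi_dx          # G_ij = y_k,i M_kl y_l,j
    tau_M = 1.0 / (rho * np.sqrt(CT / dt ** 2 + v @ G @ v
                                 + CI * (mu / rho) ** 2 * np.sum(G * G)))
    tau_C = 1.0 / (tau_M * np.trace(G))
    return tau_M, tau_C

v = np.array([10.0, 0.0, 0.0])
# Uniform refinement doubles d(y)/d(x); tau_M should decrease
tau_coarse = stab_params(np.eye(3) / 0.1, v, dt=0.01)
tau_fine = stab_params(np.eye(3) / 0.05, v, dt=0.01)
```

Because $\bm G$ scales inversely with the square of the element size, refining the mesh increases the advective and diffusive contributions under the square root and reduces $\tau_M$, as expected of an element-local stabilization parameter.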
\begin{remark}
In contrast to CMM \cite{Figueroa2006a,Figueroa2006,Taylor1998}, integration-by-parts is not performed for the divergence operator in the continuity equation, which could otherwise lead to a loss of energy stability in the Galerkin formulation. Interested readers may refer to \cite{Gresho1998} for a thorough discussion of the Galerkin formulation for the Navier-Stokes equations. In addition, we adopt the residual-based variational multiscale formulation \cite{Bazilevs2007}, which has been shown to capture the correct energy spectrum and decay of kinetic energy in isotropic and wall-bounded turbulent flows \cite{Bazilevs2007,Bazilevs2007a,Colomes2015}. The conventional streamline upwind Petrov-Galerkin/pressure-stabilizing Petrov-Galerkin (SUPG/PSPG) method \cite{Brooks1982, Franca1992}, on the other hand, cannot correctly describe the energy spectrum and is thus physically inappropriate as a subgrid-scale model \cite{Hughes2000}. Furthermore, the stabilization parameters are defined to be invariant to cyclic permutations of node numbering \cite{Pauli2017,Danwitz2019}.
\end{remark}
\subsubsection{Reduced unified continuum formulation for vascular FSI}
Discretization of the entire domain $\Omega$ by a single mesh with continuous basis functions across the fluid-solid interface $\Gamma_I$ immediately guarantees satisfaction of the kinematic coupling condition \eqref{eq:coupling-kinematic} in the semi-discrete formulation. The implied relation
\begin{align}
\label{eq:fsi-semidiscrete-3D-test-fun}
\bm w^f_h = \bm w^s_h, \qquad \mbox{ on } \Gamma_I
\end{align}
also yields weak satisfaction of the traction coupling condition \eqref{eq:coupling-traction}, that is
\begin{align*}
0 = \int_{\Gamma_I} \bm w^f_h \cdot \left( \bm \sigma^f \bm n^f + \bm \sigma^s \bm n^s \right) d\Gamma = \int_{\Gamma_I} \bm w^f_h \cdot \left( \bm \sigma^f \bm n^f - \bm \sigma^s \bm n^f \right) d\Gamma.
\end{align*}
With this mesh choice, the momentum balances \eqref{eq:semi-discrete-u} and \eqref{eq:vms_momentum} over $\Omega^s$ and $\Omega^f$, respectively, can then be combined into a single momentum balance over the entire continuum body $\Omega$,
\begin{align*}
\mathbf B^{s}_{\mathrm{m}}\Big( \bm w^s_h ; \dot{\bm y}^s_h, \bm y^s_h \Big) + \mathbf B^{f}_{\mathrm{m}} \left( \bm w^f_h ; \dot{\bm y}^f_h, \bm y^f_h \right) = 0, \qquad \forall \bm w^s_h \in \mathcal V^s_{\bm u} \quad \mbox{ and } \quad \forall \bm w_h^f \in \mathcal V_{\bm v}^f.
\end{align*}
Having applied the outlined assumptions to collapse the three-dimensional elastodynamic problem in $\Omega^s$ to a two-dimensional problem posed on $\Gamma_I$, we now present the reduced semi-discrete FSI formulation. Let $\bm u^{w}_h$ be the membrane displacement on $\Gamma_I$. Using the kinematic coupling condition \eqref{eq:coupling-kinematic}, continuity of test functions on $\Gamma_I$ \eqref{eq:fsi-semidiscrete-3D-test-fun}, and the transformation of volume integrals over $\Omega^s$ \eqref{eq:shell-volume-integral-to-surface-integral}, we can rewrite the kinematic equation \eqref{eq:semi-discrete-kinematic} as
\begin{align}
\label{eq:semi-discrete-kinematic-reformulated}
& \mathbf B_{\mathrm{k}} \left( \dot{\bm y}_h, \bm y_h \right) := \frac{d\bm u^w_h}{dt} - \bm v^f_h = \bm 0, \qquad \mbox{ on } \Gamma_I,
\end{align}
and the momentum balance \eqref{eq:semi-discrete-u} over $\Omega^s$ as
\begin{align}
\label{eq:semi-discrete-u-reformulated}
& \mathbf B^w_{\mathrm{m}} \left( \bm w_h^f ; \dot{\bm y}_h, \bm y_h \right) := \int_{\Gamma_I} \bm w_h^f \cdot \rho^s h^s \left( \frac{d \bm v_h^f}{d t} - \bm b^s \right) d\Gamma + \int_{\Gamma_I} h^s \bm \epsilon(\bm w_h^f) : \bm \sigma^s(\bm u_h^w) d\Gamma - \int_{\partial \Gamma_I \cap \Gamma^h_s } h^s \bm w_h^f \cdot \bm h^s d\Gamma,
\end{align}
where $\partial \Gamma_I \cap \Gamma^h_s$ constitutes the Neumann partition of the boundary of $\Gamma_I$. Finally, let $\mathcal S^{w}_{\bm u}$ be the trial solution space for $\bm u^{w}_h$. Our reduced unified continuum formulation posed only in the fluid domain $\Omega^f$ is then stated as follows. Find
\begin{align*}
\bm y_h(t) := \left\lbrace \bm u^{w}_h(t), \bm v_h^f(t), p_h^f(t) \right\rbrace^T \in \mathcal S^w_{\bm u} \times \mathcal S_{\bm v}^f \times \mathcal S_{p}^f
\end{align*}
such that
\begin{align}
\label{eq:semidiscrete-fsi-couple}
& \mathbf B_{\mathrm{k}} \left( \dot{\bm y}_h, \bm y_h \right) = \bm 0, && \\
\label{eq:semidiscrete-fsi-momentum}
& \mathbf B_{\mathrm{m}} \left( \bm w_h^f ; \dot{\bm y}_h, \bm y_h \right) := \mathbf B^w_{\mathrm{m}} \left( \bm w_h^f ; \dot{\bm y}_h, \bm y_h \right) + \mathbf B^f_{\mathrm{m}} \left( \bm w^f_h ; \dot{\bm y}^f_h, \bm y^f_h \right) = 0, && \forall \bm w_h^f \in \mathcal V_{\bm v}^f, \\
\label{eq:semidiscrete-fsi-continuity}
& \mathbf B_{\mathrm{c}}\left( q^f_h; \dot{\bm y}_h, \bm y_h \right) := \mathbf B^f_{\mathrm{c}}\left( q^f_h; \dot{\bm y}^f_h, \bm y^f_h \right) = 0, && \forall q_h^f \in \mathcal V_{p}^f.
\end{align}
It is then clear that compared to the fluid sub-problem, the above FSI formulation \eqref{eq:semidiscrete-fsi-couple}-\eqref{eq:semidiscrete-fsi-continuity} consists of an additional coupling relation \eqref{eq:semidiscrete-fsi-couple} and four additional terms corresponding to the vascular wall's mass, body force, stiffness, and boundary traction, all of which are embedded in \eqref{eq:semidiscrete-fsi-momentum} through the form $\mathbf B^w_{\mathrm{m}}$. Importantly, \eqref{eq:semidiscrete-fsi-momentum} represents the semi-discrete formulation for momentum balance over the entire continuum body consisting of both the fluid and vascular wall. This FSI formulation therefore offers a computationally efficient approach for capturing vascular wall deformation on a stationary fluid mesh.
\begin{remark}
Despite the ostensible similarity between our reduced unified continuum formulation and the semi-discrete formulation of CMM, the fluid-solid coupling in CMM was achieved via a fictitious body force assumed to be uniformly distributed through the vessel thickness \cite{Figueroa2006,Figueroa2006a} (see also \cite[p.10]{Figueroa2017} and \cite[p.119]{Taylor2009}). Our recent development of the unified continuum and VMS formulation renders this assumption unnecessary for achieving the desired coupling. Starting from the unified formulation in ALE coordinates, we have instead arrived at a similar reduced FSI formulation simply by invoking the small-strain, thin-walled, and membrane assumptions for the solid sub-problem. We further note that the wall thickness has not been assumed to be uniform over each element and thus appears within the integrals over $\Gamma_I$ in our formulation.
\end{remark}
\subsection{Fully discrete formulation}
To arrive at the fully discrete FSI formulation, we apply the generalized-$\alpha$ method for temporal discretization of the first-order dynamic system. Let the time interval of interest $(0, T)$ be divided into $N_{\mathrm{ts}}$ subintervals of equal size $\Delta t_n := t_{n+1} - t_n$ and delimited by a discrete time vector $\left\lbrace t_n \right\rbrace_{n=0}^{N_{\mathrm{ts}}}$. The approximations of the solution vector and its time-derivative at time step $t_n$ are denoted as
\begin{align*}
\bm y_n := \left\lbrace \bm u^w_n, \bm v^f_n, p^f_n \right\rbrace, \quad \mbox{ and } \quad \dot{\bm y}_n := \left\lbrace \dot{\bm u}^w_n, \dot{\bm v}^f_n, \dot{p}^f_n \right\rbrace.
\end{align*}
Let $N_A$ represent basis functions for all variational spaces, and let $\{\bm e_i\}$ be the Cartesian basis vectors with $i=1,2,3$. We may then define the residual vectors as follows,
\begin{alignat*}{2}
& \boldsymbol{\mathrm R}_{\mathrm{k}}\left( \dot{\bm y}_{n}, \bm y_{n} \right) && := \Big\lbrace \mathbf B_{\mathrm{k}} \Big( \dot{\bm y}_{n}, \bm y_{n} \Big) \Big\rbrace, \nonumber \displaybreak[2] \\
& \boldsymbol{\mathrm R}_{\mathrm{m}}\left( \dot{\bm y}_{n}, \bm y_{n} \right) && := \Big\lbrace \mathbf B_{\mathrm{m}} \Big( N_A \bm e_i ; \dot{\bm y}_{n}, \bm y_{n} \Big) \Big\rbrace, \nonumber \displaybreak[2] \\
& \boldsymbol{\mathrm R}_{\mathrm{c}}\left( \dot{\bm y}_{n}, \bm y_{n} \right) && := \Big\lbrace \mathbf B_{\mathrm{c}} \Big( N_A ; \dot{\bm y}_{n}, \bm y_{n} \Big) \Big\rbrace.
\end{alignat*}
The fully discrete scheme can be stated as follows. At time step $t_n$, given $\dot{\bm y}_n$, $\bm y_n$, and the time step size $\Delta t_n$, find $\dot{\bm y}_{n+1}$ and $\bm y_{n+1}$ such that
\begin{alignat}{2}
\label{eq:coupling_residual}
& \boldsymbol{\mathrm R}_{\mathrm{k}} \left( \dot{\bm y}_{n+\alpha_m}, \bm y_{n+\alpha_f} \right) &&= \bm 0, \displaybreak[2] \\
& \boldsymbol{\mathrm R}_{\mathrm{m}} \left( \dot{\bm y}_{n+\alpha_m}, \bm y_{n+\alpha_f}\right) &&= \bm 0, \displaybreak[2] \\
& \boldsymbol{\mathrm R}_{\mathrm{c}}\left(\dot{\bm y}_{n+\alpha_m}, \bm y_{n+\alpha_f} \right) &&= \bm 0,
\end{alignat}
\begin{alignat}{2}
\label{eq:gen_alpha_def_y_n_alpha_m}
& \dot{\bm y}_{n+\alpha_m} &&= \dot{\bm y}_n + \alpha_m \left(\dot{\bm y}_{n+1} - \dot{\bm y}_n \right), \displaybreak[2] \\
\label{eq:gen_alpha_def_y_n_alpha_f}
& \bm y_{n+\alpha_f} &&= \bm y_n + \alpha_f \left( \bm y_{n+1} - \bm y_n \right), \displaybreak[2] \\
\label{eq:gen_alpha_def_y_n_plus_1}
& \bm y_{n+1} &&= \bm y_n + \Delta t_n \dot{\bm y}_n + \gamma \Delta t_n \left( \dot{\bm y}_{n+1} - \dot{\bm y}_n \right).
\end{alignat}
In the above system, the three parameters $\alpha_m$, $\alpha_f$, and $\gamma$ determine critical numerical properties of the discrete dynamic system. For linear problems, the following parameterization has been shown to achieve second-order accuracy, unconditional stability, and optimal high frequency dissipation,
\begin{align*}
\alpha_m = \frac12 \left( \frac{3-\varrho_{\infty}}{1+\varrho_{\infty}} \right), \quad \alpha_f = \frac{1}{1+\varrho_{\infty}}, \quad \gamma = \frac12 + \alpha_m - \alpha_f,
\end{align*}
wherein $\varrho_{\infty} \in [0,1]$ is the spectral radius of the amplification matrix at the highest mode \cite{Jansen2000,Chung1993}. In this work, we choose $\varrho_{\infty} = 0.5$.
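The parameterization and the update relations \eqref{eq:gen_alpha_def_y_n_alpha_m}-\eqref{eq:gen_alpha_def_y_n_plus_1} can be sketched directly in code (our illustration; the function names are ours and not tied to any particular solver):

```python
# Sketch of the generalized-alpha parameterization and update relations
# (illustrative; variable and function names are ours).

def gen_alpha_params(rho_inf):
    """Return (alpha_m, alpha_f, gamma) for a given spectral radius rho_inf."""
    alpha_m = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)
    alpha_f = 1.0 / (1.0 + rho_inf)
    gamma = 0.5 + alpha_m - alpha_f
    return alpha_m, alpha_f, gamma

def advance(y_n, ydot_n, ydot_np1, dt, alpha_m, alpha_f, gamma):
    """Evaluate y_{n+1} and the intermediate states from a guess of ydot_{n+1}."""
    y_np1 = y_n + dt * ydot_n + gamma * dt * (ydot_np1 - ydot_n)
    ydot_mid = ydot_n + alpha_m * (ydot_np1 - ydot_n)   # at t_{n+alpha_m}
    y_mid = y_n + alpha_f * (y_np1 - y_n)               # at t_{n+alpha_f}
    return y_np1, ydot_mid, y_mid
```

With $\varrho_{\infty} = 1$ the scheme reduces to the midpoint rule ($\alpha_m = \alpha_f = \gamma = 1/2$); with $\varrho_{\infty} = 0.5$, as used in this work, $\alpha_m = 5/6$, $\alpha_f = 2/3$, and $\gamma = 2/3$.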
\begin{remark}
The Newmark-$\beta$ method \cite{Newmark1959}, used to integrate membrane dynamics in CMM \cite{Figueroa2006}, is a classical scheme in structural dynamics that persists in today's solvers, yet it faces several well-documented issues. First, it cannot simultaneously achieve second-order accuracy and high-frequency algorithmic damping; second, all first-order implementations of the Newmark-$\beta$ method are overly dissipative in the mid-frequency modes \cite[p.501]{Hughes1987}; third, implicit schemes of the Newmark family are ``not designed to conserve energy and also fail to conserve momentum'' for nonlinear structural dynamics \cite{Simo1992}. As a result, despite its pervasiveness, the Newmark-$\beta$ method is not recommended for structural dynamics \cite{Hilber1978,Hulbert2017}.
\end{remark}
\begin{remark}
The generalized-$\alpha$ method was initially proposed as an integration scheme for structural dynamics \cite{Chung1993} and has since been applied to fluid dynamics \cite{Jansen2000} as well as FSI problems \cite{Bazilevs2008}. It exhibits all of the desirable attributes of a competitive integration scheme for structural dynamics, as noted by Hilber and Hughes \cite{Hilber1978}. Moreover, when applied to a first-order structural dynamic system, it was recently found not to suffer from the `overshoot' phenomenon, a long-standing issue in computational structural dynamics, and to further possess smaller dissipation and dispersion errors than when applied to a second-order system \cite{Kadapa2017}. The generalized-$\alpha$ method is thus highly recommended for integrating inertial type problems.
\end{remark}
\begin{remark}
\label{remark:gen-alpha-pressure}
In both CFD and FSI literature, the fluid velocity and pressure are typically treated dichotomously in the generalized-$\alpha$ method for the incompressible Navier-Stokes equations, such that pressure is collocated at time step $t_{n+1}$ rather than the intermediate time step $t_{n+\alpha_f}$ \cite{Figueroa2006,Bazilevs2007a,Taylor1998,Moghadam2013}. Despite the commonly cited second-order accuracy of the generalized-$\alpha$ method, we recently demonstrated that this particular approach yields only first-order temporal accuracy, at least for pressure. Evaluating pressure at $t_{n+\alpha_f}$ instead recovers second-order accuracy for the overall algorithm, simplifies the implementation, and resolves a troubling issue in geometric multiscale modeling. Interested readers are referred to \cite{Liu2020a} for details.
\end{remark}
\subsection{A segregated predictor multi-corrector algorithm}
\label{subsec:segregated-predictor-multicorrector}
The fully discrete scheme can be solved iteratively with a predictor multi-corrector algorithm, in which the Newton-Raphson method is used in the multi-corrector iterations to improve the initial prediction. Let $\bm y_{{n+1}, (l)}$ and $\dot{\bm y}_{{n+1}, (l)}$ denote the solution vector and its time derivative at time step $t_{n+1}$ at the $l$-th Newton-Raphson iteration, where $n=0,1,\ldots,N_{\mathrm{ts}} - 1$ and $l=0, 1, \ldots, l_{\mathrm{max}}$,
\begin{align*}
\bm y_{n+1, (l)} := \lbrace \bm u_{n+1, (l)}^w, \bm v_{n+1, (l)}^f, p_{n+1, (l)}^f \rbrace, \quad \mbox{ and } \quad
\dot{\bm y}_{n+1, (l)} := \lbrace \dot{\bm u}_{n+1, (l)}^w, \dot{\bm v}_{n+1, (l)}^f, \dot p_{n+1, (l)}^f \rbrace.
\end{align*}
We can then denote the residual vectors at iteration number $l$ as
\begin{alignat*}{2}
& \boldsymbol{\mathrm R}_{(l)} && := \lbrace \boldsymbol{\mathrm R}_{\mathrm{k},(l)}, \boldsymbol{\mathrm R}_{\mathrm{m}, (l)}, \boldsymbol{\mathrm R}_{\mathrm{c}, (l)}\rbrace^T, \\
& \boldsymbol{\mathrm R}_{\mathrm{k},(l)} && := \boldsymbol{\mathrm R}_{\mathrm{k}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right), \\
& \boldsymbol{\mathrm R}_{\mathrm{m}, (l)} && := \boldsymbol{\mathrm R}_{\mathrm{m}} \left( \dot{\bm y}_{n+\alpha_m,(l)}, \bm y_{n+\alpha_f,(l)}\right), \\
& \boldsymbol{\mathrm R}_{\mathrm{c}, (l)} && := \boldsymbol{\mathrm R}_{\mathrm{c}} \left(\dot{\bm y}_{n+\alpha_m,(l)}, \bm y_{n+\alpha_f,(l)} \right),
\end{alignat*}
and the consistent tangent matrix as
\begin{align*}
\boldsymbol{\mathrm K}_{(l)} =
\begin{bmatrix}
\boldsymbol{\mathrm K}_{\mathrm{k},(l),\dot{\bm u}^w} & \boldsymbol{\mathrm K}_{\mathrm{k},(l),\dot{\bm v}^f} & \boldsymbol{\mathrm K}_{\mathrm{k},(l),\dot{p}^f} \\
\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} & \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm v}^f} & \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{p}^f} \\
\boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm u}^w} & \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm v}^f} & \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{p}^f}
\end{bmatrix},
\end{align*}
wherein
\begin{align*}
& \boldsymbol{\mathrm K}_{\mathrm{k},(l),\dot{\bm u}^w} := \alpha_m \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{k}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial \dot{\bm u}_{n+\alpha_m}^w} = \alpha_m \bm I, \displaybreak[2] \\
& \boldsymbol{\mathrm K}_{\mathrm{k},(l),\dot{\bm v}^f} := \alpha_f \gamma \Delta t_n \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{k}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial \bm v_{n+\alpha_f}^f} = -\alpha_f \gamma \Delta t_n \bm I, \displaybreak[2] \\
& \boldsymbol{\mathrm K}_{\mathrm{k},(l),\dot{p}^f} := \bm 0, \displaybreak[2] \\
& \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} := \alpha_f \gamma \Delta t_n \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{m}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial \bm u_{n+\alpha_f}^w}, \displaybreak[2] \\
& \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm v}^f} := \alpha_m \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{m}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial \dot{\bm v}_{n+\alpha_m}^f} + \alpha_f \gamma \Delta t_n \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{m}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial \bm v_{n+\alpha_f}^f}, \displaybreak[2] \\
& \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{p}^f} := \alpha_f \gamma \Delta t_n \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{m}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial p_{n+\alpha_f}^f}, \displaybreak[2] \\
& \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm u}^w} := \bm 0, \displaybreak[2] \\
& \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm v}^f} := \alpha_m \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{c}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial \dot{\bm v}_{n+\alpha_m}^f} + \alpha_f \gamma \Delta t_n \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{c}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial \bm v_{n+\alpha_f}^f}, \displaybreak[2] \\
& \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{p}^f} := \alpha_f \gamma \Delta t_n \frac{\partial \boldsymbol{\mathrm R}_{\mathrm{c}} \left( \dot{\bm y}_{n+\alpha_m, (l)}, \bm y_{n+\alpha_f, (l)} \right)}{\partial p_{n+\alpha_f}^f}.
\end{align*}
The special block structure in the first row of $\boldsymbol{\mathrm K}_{(l)}$ can be exploited for the following block decomposition \cite{Scovazzi2016, Liu2019a},
\begin{align*}
\boldsymbol{\mathrm K}_{(l)} =
\begin{bmatrix}
\bm I & \bm 0 & \bm 0 \\[4mm]
\displaystyle\frac{1}{\alpha_m}\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} & \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm v}^f} + \displaystyle\frac{\alpha_f \gamma \Delta t_n}{\alpha_m}\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} & \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{p}^f} \\[4mm]
\bm 0 & \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm v}^f} & \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{p}^f}
\end{bmatrix}
\begin{bmatrix}
\alpha_m \bm I & -\alpha_f \gamma \Delta t_n \bm I & \bm 0 \\[4mm]
\bm 0 & \bm I & \bm 0 \\[4mm]
\bm 0 & \bm 0 & \bm I
\end{bmatrix}.
\end{align*}
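As a quick sanity check, this two-factor decomposition can be verified numerically with generic blocks (random matrices standing in for the sub-tangent blocks; purely illustrative):

```python
import numpy as np

# Numerical check (generic random blocks, for illustration only) that the
# two-factor block decomposition reproduces the consistent tangent K_(l).

rng = np.random.default_rng(0)
m = 4                                                   # size of each block
I, O = np.eye(m), np.zeros((m, m))
am, af, g, dt = 5.0 / 6.0, 2.0 / 3.0, 2.0 / 3.0, 1e-2   # alpha_m, alpha_f, gamma, dt

K_mu, K_mv, K_mp, K_cv, K_cp = (rng.standard_normal((m, m)) for _ in range(5))

K = np.block([[am * I, -af * g * dt * I, O],
              [K_mu,   K_mv,             K_mp],
              [O,      K_cv,             K_cp]])

left = np.block([[I,         O,                                O],
                 [K_mu / am, K_mv + (af * g * dt / am) * K_mu, K_mp],
                 [O,         K_cv,                             K_cp]])
right = np.block([[am * I, -af * g * dt * I, O],
                  [O,      I,                O],
                  [O,      O,                I]])

assert np.allclose(left @ right, K)
```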
With the above decomposition, the original linear system for the Newton-Raphson method,
\begin{align*}
\boldsymbol{\mathrm K}_{(l)} \Delta \dot{\bm y}_{n+1,(l)} = -\boldsymbol{\mathrm R}_{(l)},
\end{align*}
can be solved to obtain the increments $\Delta \dot{\bm y}_{n+1,(l)} := \lbrace \Delta \dot{\bm u}_{n+1, (l)}^{w}, \Delta \dot{\bm v}_{n+1, (l)}^{f}, \Delta \dot{p}_{n+1, (l)}^{f}\rbrace^T$ at iteration number $l$ in the following two-stage segregated algorithm. In the first stage, intermediate increments are solved from
\begin{align}
\label{eq: segregated_stage-one}
\begin{bmatrix}
\bm I & \bm 0 & \bm 0 \\[4mm]
\displaystyle\frac{1}{\alpha_m}\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} & \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm v}^f} + \displaystyle\frac{\alpha_f \gamma \Delta t_n}{\alpha_m}\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} & \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{p}^f} \\[4mm]
\bm 0 & \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm v}^f} & \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{p}^f}
\end{bmatrix}
\begin{bmatrix}
\Delta \dot{\bm u}_{n+1, (l)}^{w*} \\[3mm]
\Delta \dot{\bm v}_{n+1, (l)}^{f*} \\[3mm]
\Delta \dot{p}_{n+1, (l)}^{f*}
\end{bmatrix} = -
\begin{bmatrix}
\boldsymbol{\mathrm R}_{\mathrm{k},(l)} \\[3mm]
\boldsymbol{\mathrm R}_{\mathrm{m}, (l)} \\[3mm]
\boldsymbol{\mathrm R}_{\mathrm{c}, (l)}
\end{bmatrix}.
\end{align}
In the second stage, the increments are obtained from the following system of equations,
\begin{align}
\label{eq: segregated_stage-two}
\begin{bmatrix}
\alpha_m \bm I & -\alpha_f \gamma \Delta t_n \bm I & \bm 0 \\[2mm]
\bm 0 & \bm I & \bm 0 \\[2mm]
\bm 0 & \bm 0 & \bm I
\end{bmatrix}
\begin{bmatrix}
\Delta \dot{\bm u}_{n+1, (l)}^{w} \\[2mm]
\Delta \dot{\bm v}_{n+1, (l)}^{f} \\[2mm]
\Delta \dot{p}_{n+1, (l)}^{f}
\end{bmatrix} =
\begin{bmatrix}
\Delta \dot{\bm u}_{n+1, (l)}^{w*} \\[2mm]
\Delta \dot{\bm v}_{n+1, (l)}^{f*} \\[2mm]
\Delta \dot{p}_{n+1, (l)}^{f*}
\end{bmatrix}.
\end{align}
From \eqref{eq: segregated_stage-one} and \eqref{eq: segregated_stage-two}, we make the following observations,
\begin{align*}
& \alpha_m \Delta \dot{\bm u}_{n+1, (l)}^{w} - \alpha_f \gamma \Delta t_n \Delta \dot{\bm v}_{n+1, (l)}^{f} = \Delta \dot{\bm u}_{n+1, (l)}^{w*} = -\boldsymbol{\mathrm R}_{\mathrm{k},(l)}, \quad
\Delta \dot{\bm v}_{n+1, (l)}^{f} = \Delta \dot{\bm v}_{n+1, (l)}^{f*}, \quad
\Delta \dot{p}_{n+1, (l)}^{f} = \Delta \dot{p}_{n+1, (l)}^{f*},
\end{align*}
with which we may reduce the linear systems in the segregated algorithm to
\begin{align}
\label{eq: segregated_stage-one_reduced}
\begin{bmatrix}
\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm v}^f} + \displaystyle\frac{\alpha_f \gamma \Delta t_n}{\alpha_m}\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} & \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{p}^f} \\[4mm]
\boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm v}^f} & \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{p}^f}
\end{bmatrix}
\begin{bmatrix}
\Delta \dot{\bm v}_{n+1, (l)}^{f} \\[4mm]
\Delta \dot{p}_{n+1, (l)}^{f}
\end{bmatrix} = -
\begin{bmatrix}
\boldsymbol{\mathrm R}_{\mathrm{m}, (l)} - \displaystyle\frac{1}{\alpha_m} \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} \boldsymbol{\mathrm R}_{\mathrm{k},(l)} \\[4mm]
\boldsymbol{\mathrm R}_{\mathrm{c}, (l)}
\end{bmatrix},
\end{align}
\begin{align}
\label{eq: segregated_stage-two_reduced}
\Delta \dot{\bm u}_{n+1, (l)}^{w} = \frac{\alpha_f \gamma \Delta t_n}{\alpha_m} \Delta \dot{\bm v}_{n+1, (l)}^{f} - \frac{1}{\alpha_m}\boldsymbol{\mathrm R}_{\mathrm{k},(l)}.
\end{align}
The segregated algorithm therefore consists of solving \eqref{eq: segregated_stage-one_reduced} for $\lbrace \Delta \dot{\bm v}_{n+1, (l)}^{f}, \Delta \dot{p}_{n+1, (l)}^{f}\rbrace^T$, then subsequently obtaining $\Delta \dot{\bm u}_{n+1, (l)}^{w}$ from the algebraic update \eqref{eq: segregated_stage-two_reduced}. Furthermore, it has been shown in Proposition 5 of \cite{Liu2018} that
\begin{align*}
\boldsymbol{\mathrm R}_{\mathrm{k},(l)} = \bm 0 \quad \mbox{ for } l \geq 2
\end{align*}
holds true for any given update $\Delta \dot{\bm v}_{n+1, (l)}^{f}$ in \eqref{eq: segregated_stage-two_reduced}, prompting us to set $\boldsymbol{\mathrm R}_{\mathrm{k},(l)} = \bm 0$ for all $l \geq 1$ in \eqref{eq: segregated_stage-one_reduced}. While this may lead to inconsistent updates of $\Delta \dot{\bm v}_{n+1, (l)}^{f}$ and $\Delta \dot{p}_{n+1, (l)}^{f}$ for $l=1$, we have observed no deterioration of the overall Newton-Raphson algorithm's convergence rate in our collective experience. Interested readers are referred to Appendix B of \cite{Liu2018} for more details on the numerical analysis. For notational simplicity, we denote the block matrices in \eqref{eq: segregated_stage-one_reduced} as
\begin{align}
\label{eq: predictor_multi_correct_notation_for_block_matrices}
\boldsymbol{\mathrm A}_{(l)} := \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm v}^f} + \displaystyle\frac{\alpha_f \gamma \Delta t_n}{\alpha_m}\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w}, \quad \boldsymbol{\mathrm B}_{(l)} := \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{p}^f}, \quad \boldsymbol{\mathrm C}_{(l)} := \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm v}^f}, \quad \boldsymbol{\mathrm D}_{(l)} := \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{p}^f}.
\end{align}
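The equivalence between the full three-block solve and the two-stage reduced solve \eqref{eq: segregated_stage-one_reduced}-\eqref{eq: segregated_stage-two_reduced} can also be checked numerically (again with generic random blocks, purely for illustration):

```python
import numpy as np

# Consistency check (generic random blocks, illustrative only): solving the
# reduced velocity-pressure system and then applying the algebraic wall
# update reproduces the solution of the full three-block linear system.

rng = np.random.default_rng(1)
m = 4
I, O = np.eye(m), np.zeros((m, m))
am, af, g, dt = 5.0 / 6.0, 2.0 / 3.0, 2.0 / 3.0, 1e-2
c = af * g * dt / am

K_mu, K_mv, K_mp, K_cv, K_cp = (rng.standard_normal((m, m)) for _ in range(5))
R_k, R_m, R_c = (rng.standard_normal(m) for _ in range(3))

# Full consistent tangent and residual
K = np.block([[am * I, -af * g * dt * I, O],
              [K_mu,   K_mv,             K_mp],
              [O,      K_cv,             K_cp]])
dy_full = np.linalg.solve(K, -np.concatenate([R_k, R_m, R_c]))

# Stage one: reduced velocity-pressure system
A_red = np.block([[K_mv + c * K_mu, K_mp],
                  [K_cv,            K_cp]])
b_red = -np.concatenate([R_m - (K_mu @ R_k) / am, R_c])
dv, dp = np.split(np.linalg.solve(A_red, b_red), 2)

# Stage two: algebraic update of the wall displacement increment
du = c * dv - R_k / am

assert np.allclose(np.concatenate([du, dv, dp]), dy_full)
```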
We can now summarize our above discussion in the following segregated predictor multi-corrector algorithm.
\begin{myenv}{Segregated predictor multi-corrector algorithm}
\noindent \textbf{Predictor stage:} Set
\begin{align*}
\bm y_{n+1, (0)} = \bm y_n, \quad \dot{\bm y}_{n+1, (0)} = \frac{\gamma - 1}{\gamma} \dot{\bm y}_n.
\end{align*}
\noindent \textbf{Multi-corrector stage:} Repeat the following steps for $l=1, 2, \ldots, l_{\mathrm{max}}$:
\begin{enumerate}
\item Evaluate the solution vector and its time derivative at intermediate time steps,
\begin{align*}
& \bm y_{n+\alpha_f, (l)} = \bm y_n + \alpha_f \left( \bm y_{n+1, (l-1)} - \bm y_n \right), \quad \dot{\bm y}_{n+\alpha_m, (l)} = \dot{\bm y}_n + \alpha_m \left(\dot{\bm y}_{n+1, (l-1)} - \dot{\bm y}_n \right).
\end{align*}
\item Assemble the residual vector $\boldsymbol{\mathrm R}_{(l)}$ using $\dot{\bm y}_{n+\alpha_m, (l)}$ and $\bm y_{n+\alpha_f, (l)}$.
\item Let $\| \boldsymbol{\mathrm R}_{(l)} \|_{\mathfrak l_2}$ denote the $\mathfrak l_2$-norm of the residual vector, and let $\mathrm{tol}_{\mathrm{R}}$ and $\mathrm{tol}_{\mathrm{A}}$ denote the prescribed relative and absolute tolerances, respectively. If either of the following stopping criteria
\begin{align*}
\frac{\| \boldsymbol{\mathrm R}_{(l)} \|_{\mathfrak l_2}}{\| \boldsymbol{\mathrm R}_{(0)} \|_{\mathfrak l_2}} \leq \mathrm{tol}_{\mathrm{R}}, \qquad
\| \boldsymbol{\mathrm R}_{(l)} \|_{\mathfrak l_2} \leq \mathrm{tol}_{\mathrm{A}},
\end{align*}
is satisfied, then set
\begin{align*}
\bm y_{n+1} = \bm y_{n+1, (l-1)}, \quad \dot{\bm y}_{n+1} = \dot{\bm y}_{n+1, (l-1)},
\end{align*}
and exit the multi-corrector stage. Otherwise, continue to step 4.
\item Assemble the following sub-tangent matrices,
\begin{align*}
& \boldsymbol{\mathrm A}_{(l)} := \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm v}^f} + \displaystyle\frac{\alpha_f \gamma \Delta t_n}{\alpha_m}\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w}, &&
\boldsymbol{\mathrm B}_{(l)} := \boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{p}^f}, \\
& \boldsymbol{\mathrm C}_{(l)} := \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{\bm v}^f}, &&
\boldsymbol{\mathrm D}_{(l)} := \boldsymbol{\mathrm K}_{\mathrm{c},(l),\dot{p}^f}.
\end{align*}
\item Solve the following linear system for $\Delta \dot{\bm v}_{n+1, (l)}^{f}$ and $\Delta \dot{p}_{n+1, (l)}^{f}$,
\begin{align}
\label{eq:pred_multi_correct_linear_system}
\begin{bmatrix}
\boldsymbol{\mathrm A}_{(l)} & \boldsymbol{\mathrm B}_{(l)} \\[1mm]
\boldsymbol{\mathrm C}_{(l)} & \boldsymbol{\mathrm D}_{(l)}
\end{bmatrix}
\begin{bmatrix}
\Delta \dot{\bm v}_{n+1, (l)}^{f} \\[1mm]
\Delta \dot{p}_{n+1, (l)}^{f}
\end{bmatrix} = -
\begin{bmatrix}
\boldsymbol{\mathrm R}_{\mathrm{m}, (l)} \\[1mm]
\boldsymbol{\mathrm R}_{\mathrm{c}, (l)}
\end{bmatrix}.
\end{align}
\item Obtain $\Delta \dot{\bm u}_{n+1, (l)}^{w}$ from $\Delta \dot{\bm v}_{n+1, (l)}^{f}$ via the relation \eqref{eq: segregated_stage-two_reduced}, that is,
\begin{align*}
\Delta \dot{\bm u}_{n+1, (l)}^{w} = \frac{\alpha_f \gamma \Delta t_n}{\alpha_m} \Delta \dot{\bm v}_{n+1, (l)}^{f} - \frac{1}{\alpha_m}\boldsymbol{\mathrm R}_{\mathrm{k},(l)}.
\end{align*}
\item Update the solution vector and its time derivative as
\begin{align*}
& \bm y_{n+1, (l)} = \bm y_{n+1, (l-1)} + \gamma \Delta t_n \Delta \dot{\bm y}_{n+1, (l)}, \quad \dot{\bm y}_{n+1, (l)} = \dot{\bm y}_{n+1, (l-1)} + \Delta\dot{\bm y}_{n+1, (l)}.
\end{align*}
\end{enumerate}
\end{myenv}
\noindent In this work, unless otherwise specified, we set the tolerances to $\mathrm{tol}_{\mathrm R} = \mathrm{tol}_{\mathrm A} = 10^{-6}$ and the maximum number of nonlinear iterations to $l_{\mathrm{max}} = 20$.
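The structure of the predictor multi-corrector loop can be illustrated on the scalar model problem $\dot y = -\lambda y$, whose residual is $R = \dot y_{n+\alpha_m} + \lambda y_{n+\alpha_f}$ (a toy sketch with our own variable names; for this linear problem Newton-Raphson converges in a single correction):

```python
# Toy sketch of the predictor multi-corrector loop for dy/dt = -lam * y,
# using the same predictor, intermediate-state, and update formulas as the
# algorithm above (illustrative; names are ours).

def step(y_n, ydot_n, lam, dt, rho_inf=0.5, tol=1e-10, l_max=20):
    am = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)    # alpha_m
    af = 1.0 / (1.0 + rho_inf)                      # alpha_f
    g = 0.5 + am - af                               # gamma
    # Predictor
    y, ydot = y_n, (g - 1.0) / g * ydot_n
    for _ in range(l_max):
        # Intermediate states and residual
        y_mid = y_n + af * (y - y_n)
        ydot_mid = ydot_n + am * (ydot - ydot_n)
        R = ydot_mid + lam * y_mid
        if abs(R) <= tol:
            break
        # Newton update with the consistent tangent am + af*g*dt*lam
        dydot = -R / (am + af * g * dt * lam)
        y += g * dt * dydot
        ydot += dydot
    return y, ydot
```

Starting from the consistent state $y_0 = 1$, $\dot y_0 = -1$ with $\lambda = 1$, one step produces a decaying solution with a vanishing converged residual.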
\begin{remark}
\label{remark:zero_kinematic_residual}
We have chosen $\boldsymbol{\mathrm R}_{\mathrm{k},(l)} = \bm 0$ for all $l \geq 1$ in \eqref{eq: segregated_stage-one_reduced} to simplify the formation of the right-hand side of the linear system. We note that the wall displacement update \eqref{eq: segregated_stage-two_reduced} still requires a consistent definition of $\boldsymbol{\mathrm R}_{\mathrm{k},(l)}$, as stagnation or divergence may otherwise be observed. Numerical evidence will be documented for patient-specific clinical cases in Section \ref{subsec:zero_kinematic_residual_numerical_evidence}.
\end{remark}
\begin{remark}
In comparison to the consistent tangent matrix for the incompressible Navier-Stokes equations, only block matrix $\boldsymbol{\mathrm A}_{(l)}$ has been modified to include the wall stiffness term $\alpha_f \gamma \Delta t_n\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w} / \alpha_m$. As was shown in our prior analysis \cite{Liu2020}, $\boldsymbol{\mathrm A}_{(l)}$ additionally consists of contributions from the transient, convection, viscous, and subgrid scale modeling terms as well as multiple rank-one modifications from coupling with reduced models. Particular attention will thus be paid to approximate $\boldsymbol{\mathrm A}_{(l)}$ in our design of the iterative solution method, as discussed in the next section.
\end{remark}
\begin{remark}
We further note that for time steps of practical interest, the use of the `frozen-coefficient' tangent matrix was previously deemed necessary for achieving stability in the first few time steps \cite{Jansen2000,Johan1991a}. Nonetheless, we implement the consistent tangent matrix here as in our previous studies \cite{Liu2018, Liu2020, Liu2020b} without stability issues, thereby achieving rapid quadratic convergence.
\end{remark}
\section{Iterative solution method}
\label{sec:iterative_solution_method}
In this section, we consider the linear system \eqref{eq:pred_multi_correct_linear_system} arising in the aforementioned segregated predictor multi-corrector algorithm, which often comprises the most time-consuming part of the overall algorithm. We focus on a linear system
\begin{align}
\label{eq:general-linear-system}
\boldsymbol{\mathcal A} \bm x = \bm r
\end{align}
exhibiting the following $2\times 2$ block structure,
\begin{align*}
\boldsymbol{\mathcal A} :=
\begin{bmatrix}
\boldsymbol{\mathrm A} & \boldsymbol{\mathrm B} \\[0.3mm]
\boldsymbol{\mathrm C} & \boldsymbol{\mathrm D}
\end{bmatrix}, \quad
\bm x :=
\begin{bmatrix}
\bm x_{\bm v} \\[0.3em]
\bm x_{p}
\end{bmatrix},
\quad
\bm r :=
\begin{bmatrix}
\bm r_{\bm v} \\[0.3em]
\bm r_{p}
\end{bmatrix}.
\end{align*}
As is clear from our derivation of the consistent tangent matrix $\boldsymbol{\mathcal A}$ in the previous section, the segregated algorithm allows the implicit solver to retain the same block structure as that of the incompressible Navier-Stokes equations \cite{Liu2020}. From \eqref{eq: predictor_multi_correct_notation_for_block_matrices}, we observe that block matrices $\boldsymbol{\mathrm B}$, $\boldsymbol{\mathrm C}$, and $\boldsymbol{\mathrm D}$ are in fact identical to their counterparts in the incompressible Navier-Stokes equations, and the block matrix $\boldsymbol{\mathrm A}$ is only modified by an additional term representing the wall contribution, scaled by the time step size and parameters in the generalized-$\alpha$ method. The consistent tangent matrix $\boldsymbol{\mathcal A}$ can be factorized as $\boldsymbol{\mathcal A} = \boldsymbol{\mathcal L} \boldsymbol{\mathcal D} \boldsymbol{\mathcal U}$, with
\begin{align}
\label{eq:ISM_A_LDU_block_factorization}
\boldsymbol{\mathcal L} =
\begin{bmatrix}
\boldsymbol{\mathrm I} & \boldsymbol{\mathrm O} \\[0.3em]
\boldsymbol{\mathrm C} \boldsymbol{\mathrm A}^{-1} & \boldsymbol{\mathrm I}
\end{bmatrix},
\qquad
\boldsymbol{\mathcal D} =
\begin{bmatrix}
\boldsymbol{\mathrm A} & \boldsymbol{\mathrm O} \\[0.3em]
\boldsymbol{\mathrm O} & \boldsymbol{\mathrm S}
\end{bmatrix},
\qquad
\boldsymbol{\mathcal U} =
\begin{bmatrix}
\boldsymbol{\mathrm I} & \boldsymbol{\mathrm A}^{-1} \boldsymbol{\mathrm B} \\[0.3em]
\boldsymbol{\mathrm O} & \boldsymbol{\mathrm I}
\end{bmatrix},
\end{align}
where $\boldsymbol{\mathrm S} := \boldsymbol{\mathrm D} - \boldsymbol{\mathrm C} \boldsymbol{\mathrm A}^{-1} \boldsymbol{\mathrm B}$ is the Schur complement of $\boldsymbol{\mathrm A}$. The above block factorization immediately implies a solution procedure for the linear system $\boldsymbol{\mathcal A} \bm x = \bm r$. Applying $\boldsymbol{\mathcal L}^{-1}$ to both sides of \eqref{eq:general-linear-system} transforms the linear system to $\boldsymbol{\mathcal D} \boldsymbol{\mathcal U} \bm x = \boldsymbol{\mathcal L}^{-1} \bm r$, which can be written explicitly as
\begin{align}
\label{eq: ISM_DU_equations}
\begin{bmatrix}
\boldsymbol{\mathrm A} & \boldsymbol{\mathrm B} \\[0.3em]
\boldsymbol{\mathrm O} & \boldsymbol{\mathrm S}
\end{bmatrix}
\begin{bmatrix}
\bm x_{\bm v} \\[0.3em]
\bm x_{p}
\end{bmatrix}
=
\begin{bmatrix}
\bm r_{\bm v} \\[0.3em]
\bm r_{p} - \boldsymbol{\mathrm C} \boldsymbol{\mathrm A}^{-1} \bm r_{\bm v}
\end{bmatrix}.
\end{align}
The so-called Schur complement reduction (SCR) procedure \cite{Benzi2005,May2008} solves \eqref{eq: ISM_DU_equations} via back substitution and therefore involves solving smaller systems associated with $\boldsymbol{\mathrm A}$ and $\boldsymbol{\mathrm S}$. Given the dense structure of $\boldsymbol{\mathrm S}$ stemming from the presence of $\boldsymbol{\mathrm A}^{-1}$, solving a linear system associated with $\boldsymbol{\mathrm S}$ to high precision is, however, prohibitively expensive. The SCR procedure can therefore be applied as a preconditioning technique for an iterative solution method, such that the smaller systems need not be solved to high precision. The action of the preconditioner $\boldsymbol{\mathcal P}$ for the linear system $\boldsymbol{\mathcal A} \bm x = \bm r$ is defined in Algorithm \ref{algorithm:exact_block_factorization}, where the smaller systems associated with $\boldsymbol{\mathrm A}$ and $\boldsymbol{\mathrm S}$ are solved by the GMRES algorithm preconditioned by $\boldsymbol{\mathrm P}_{\mathrm A}$ and $\boldsymbol{\mathrm P}_{\mathrm S}$, respectively. The stopping criteria for the two iterative solvers include relative tolerances $\delta_{\mathrm A}$ and $\delta_{\mathrm S}$ and maximum iteration numbers $\mathrm n^{\textup{max}}_{\mathrm A}$ and $\mathrm n^{\textup{max}}_{\mathrm S}$, respectively.
\begin{algorithm}[H]
\caption{The action of $\boldsymbol{\mathcal P}^{-1}$ on a vector $\bm s := [ \bm s_{\bm v}; \bm s_p]^T$ with the output being $\bm y := [ \bm y_{\bm v}; \bm y_p]^T$.}
\label{algorithm:exact_block_factorization}
\begin{algorithmic}[1]
\State \texttt{Solve for an intermediate velocity $\hat{\bm y}_{\bm v}$ from the equation}
\begin{align}
\label{eq:seg_sol_int_disp}
\boldsymbol{\mathrm A} \hat{\bm y}_{\bm v} = \bm s_{\bm v}
\end{align}
\texttt{by GMRES preconditioned by $\boldsymbol{\mathrm P}_{\mathrm A}$ with $\delta_{\mathrm A}$ and $\mathrm n^{\textup{max}}_{\mathrm A}$ prescribed.}
\State \texttt{Update the continuity residual by $\bm s_{p} \gets \bm s_{p} - \boldsymbol{\mathrm C} \hat{\bm y}_{\bm v}$.}
\State \texttt{Solve for $\bm y_p$ from the equation}
\begin{align}
\label{eq:seg_sol_pres}
\boldsymbol{\mathrm S} \bm y_p = \bm s_{p}
\end{align}
\texttt{by GMRES preconditioned by $\boldsymbol{\mathrm P}_{\mathrm S}$ with $\delta_{\mathrm S}$ and $\mathrm n^{\textup{max}}_{\mathrm S}$ prescribed.}
\algorithmiccomment{The action of $\boldsymbol{\mathrm S}$ on a vector in the GMRES iteration will be defined in Algorithm \ref{algorithm:matrix_free_mat_vec_for_S}.}
\State \texttt{Update the momentum residual by $\bm s_{\bm v} \gets \bm s_{\bm v} - \boldsymbol{\mathrm B} \bm y_{p}$.}
\State \texttt{Solve for $\bm y_{\bm v}$ from the equation}
\begin{align}
\label{eq:seg_sol_disp}
\boldsymbol{\mathrm A} \bm y_{\bm v} = \bm s_{\bm v}
\end{align}
\texttt{by GMRES preconditioned by $\boldsymbol{\mathrm P}_{\mathrm A}$ with $\delta_{\mathrm A}$ and $\mathrm n^{\textup{max}}_{\mathrm A}$ prescribed.}
\end{algorithmic}
\end{algorithm}
\noindent Since the preconditioner $\boldsymbol{\mathcal P}$ is defined through an algorithm, its algebraic definition may vary over iterations. The flexible GMRES (FGMRES) algorithm \cite{Saad1993} is thus applied as the outer iterative method for the tangent matrix $\boldsymbol{\mathcal A}$ with a corresponding relative tolerance $\delta$ and maximum iteration number $\mathrm n^{\textup{max}}$. In this work, we set the relative tolerances $\delta_{\mathrm A} = 10^{-5}$, $\delta_{\mathrm S} = 10^{-2}$ and maximum iteration numbers $\mathrm n^{\textup{max}}_{\mathrm A} = \mathrm n^{\textup{max}}_{\mathrm S} = 100$, $\mathrm n^{\textup{max}} = 200$ unless otherwise specified. While the FGMRES algorithm minimizes the residual over the generated subspace, its convergence is not guaranteed in general, as the approximation subspace is not a standard Krylov subspace. Nonetheless, given this flexibility in preconditioner variation over iterations, the robustness and efficiency of the overall iterative method can be well balanced with a proper design of $\boldsymbol{\mathcal P}$. We therefore turn our attention to the design of $\boldsymbol{\mathrm P}_{\mathrm A}$ and $\boldsymbol{\mathrm P}_{\mathrm S}$ for the associated GMRES algorithms.
The block matrix $\boldsymbol{\mathrm A}$ consists of a discrete convection-diffusion-reaction operator, subgrid scale modeling terms, rank-one modifications from reduced models coupled to the outflow boundaries to represent the downstream vasculature \cite{Liu2020} (see also Section \ref{subsec:coupling-with-reduced-models}), and the stiffness matrix of the vascular wall (see \eqref{eq: predictor_multi_correct_notation_for_block_matrices}). Multigrid methods, which have been proposed as scalable and robust preconditioners for elliptic problems \cite{Ieary2000}, present an attractive choice for $\boldsymbol{\mathrm P}_{\mathrm A}$, particularly when the mesh size demands extremely large-scale parallel computations. Compared to the geometric multigrid method \cite{Wesseling2001}, the algebraic multigrid method (AMG) can more conveniently be integrated with numerical codes developed for unstructured grids.
The Schur complement $\boldsymbol{\mathrm S}$ is implicitly defined through the four block matrices. While constructing and storing $\boldsymbol{\mathrm S}$ is rather cost-prohibitive, the GMRES algorithm only necessitates the construction of a Krylov subspace via repeated matrix-vector multiplication operations. The action of $\boldsymbol{\mathrm S}$ on a given vector can therefore be computed with the four block matrices on the fly in a so-called ``matrix-free'' fashion as outlined in Algorithm \ref{algorithm:matrix_free_mat_vec_for_S}, which is used to construct the Krylov subspace in Step $3$ of Algorithm \ref{algorithm:exact_block_factorization}. Inspired by the SIMPLE algorithm, the preconditioner $\boldsymbol{\mathrm P}_{\mathrm S}$ is formed by BoomerAMG \cite{Falgout2002} based on a sparse approximation of $\boldsymbol{\mathrm S}$ given by $\hat{\boldsymbol{\mathrm S}} := \boldsymbol{\mathrm D} - \boldsymbol{\mathrm C} \left(\textup{diag}\left(\boldsymbol{\mathrm A}\right)\right)^{-1} \boldsymbol{\mathrm B}$.
\begin{algorithm}[H]
\caption{The matrix-free algorithm for multiplying $\boldsymbol{\mathrm S}$ with a vector $\bm x_p$.}
\label{algorithm:matrix_free_mat_vec_for_S}
\begin{algorithmic}[1]
\State \texttt{Compute the matrix-vector multiplication $\hat{\bm x}_p \gets \boldsymbol{\mathrm D} \bm x_p$.}
\State \texttt{Compute the matrix-vector multiplication $\bar{\bm x}_p \gets \boldsymbol{\mathrm B} \bm x_p$.}
\State \texttt{Solve for $\tilde{\bm x}_p$ from the linear system}
\begin{align}
\label{eq:S_inner_A_eqn}
\boldsymbol{\mathrm A} \tilde{\bm x}_p = \bar{\bm x}_p
\end{align}
\texttt{by GMRES preconditioned by $\boldsymbol{\mathrm P}_{\mathrm A}$ with $\delta_{\mathrm A}$ and $\mathrm n^{\textup{max}}_{\mathrm A}$ prescribed.}
\algorithmiccomment{The action of $\boldsymbol{\mathrm A}^{-1}$ on $\bar{\bm x}_p$ is approximated by solving \eqref{eq:S_inner_A_eqn} with the given stopping criteria.}
\State \texttt{Compute the matrix-vector multiplication $\bar{\bm x}_p \gets \boldsymbol{\mathrm C} \tilde{\bm x}_p$.}
\State \Return $\hat{\bm x}_p - \bar{\bm x}_p$.
\end{algorithmic}
\end{algorithm}
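The matrix-free product above and the sparse approximation $\hat{\boldsymbol{\mathrm S}}$ can be sketched together in a few lines of Python. This is a minimal demonstration on small dense blocks, with a direct factorization standing in for BoomerAMG and an exact inner $\boldsymbol{\mathrm A}$-solve standing in for the preconditioned GMRES of the text.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n, m = 12, 5
A = np.diag(rng.uniform(2.0, 4.0, n)) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = 3.0 * np.eye(m)

def S_matvec(x_p):
    """Matrix-free action of S = D - C A^{-1} B. The inner A-solve is done
    directly for clarity; the text applies preconditioned GMRES instead."""
    x_hat = D @ x_p                       # Step 1
    x_bar = B @ x_p                       # Step 2
    x_tilde = np.linalg.solve(A, x_bar)   # Step 3: action of A^{-1}
    x_bar = C @ x_tilde                   # Step 4
    return x_hat - x_bar                  # Step 5

S = LinearOperator((m, m), matvec=S_matvec)

# Sparse approximation S_hat = D - C diag(A)^{-1} B; the text hands this to
# BoomerAMG, while a direct solve stands in for it here.
S_hat = D - C @ np.diag(1.0 / np.diag(A)) @ B
P_S = LinearOperator((m, m), matvec=lambda y: np.linalg.solve(S_hat, y))

b = rng.standard_normal(m)
y_p, info = gmres(S, b, M=P_S)
```

Because $\boldsymbol{\mathrm A}$ is nearly diagonal here, $\hat{\boldsymbol{\mathrm S}}$ is a good approximation of $\boldsymbol{\mathrm S}$ and the preconditioned iteration converges rapidly.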
We note that replacing equation \eqref{eq:seg_sol_pres} in Algorithm \ref{algorithm:exact_block_factorization} by $\hat{\boldsymbol{\mathrm S}} \bm y_p = \bm s_{p}$ renders the FGMRES algorithm similar to the SIMPLE algorithm, which does not require solving equation \eqref{eq:S_inner_A_eqn}. The preconditioner stated in Algorithm \ref{algorithm:exact_block_factorization} may therefore be regarded as a generalization of the SIMPLE algorithm, in which the matrix-free algorithm stated in Algorithm \ref{algorithm:matrix_free_mat_vec_for_S} is invoked to attain an improved approximation of the Schur complement.
\begin{remark}
The matrix-free technique is often invoked in Krylov subspace methods when assembling or storing a matrix is inconvenient or expensive. Often, a sparse approximation to the unassembled matrix, like $\hat{\boldsymbol{\mathrm S}}$, is also designed to provide a preconditioner to accelerate the Krylov iteration. This approach can also be found in higher-order methods \cite{Brown2010,Davydov2020} and Jacobian-free nonlinear solvers \cite{Knoll2004}.
\end{remark}
\section{Verification by the Womersley solution}
\label{sec:verification}
In this section, we present two verification studies using the Womersley solutions describing pulsatile flow in an axisymmetric cylindrical pipe, first with rigid walls and subsequently with thin, linear elastic walls. Furthermore, in the case of the rigid pipe, we use analytical solutions for pressure, velocity, and wall shear stress (WSS) to perform spatial convergence studies. All parameters are reported in centimeter-gram-second units in this work.
\subsection{Womersley flow in a rigid pipe}
\label{section:womersley_rigid}
The Womersley solution for pulsatile flow in a rigid pipe describes axisymmetric, fully developed flow subject to a pressure gradient with both steady and oscillatory contributions. The pressure can be expressed with the following Fourier series,
\begin{align}
\label{eq:womersley_rigid_pressure}
p = p_{\textup{ref}} + \left( k_0 + \sum_{n=1}^{N} k_n e^{\iota n\omega t} \right) z,
\end{align}
where $z$ is the longitudinal coordinate along the length of the pipe, $p_{\textup{ref}}$ is the reference pressure at the $z=0$ surface, $k_0$ is the steady zeroth mode of the pressure gradient, $k_n$ is the $n$-th Fourier coefficient in the oscillatory component of the pressure gradient, $\iota$ is the imaginary unit (i.e., $\iota^2=-1$), $T_p$ is the period of oscillation, and $\omega:=2\pi / T_p$ is the fundamental frequency. Whereas $k_0$ produces steady forward flow, the oscillatory component of the pressure gradient drives a phase-shifted oscillatory flow with zero net flow over $T_p$. Per the assumptions of axisymmetric and fully developed flow, the velocity is identically zero in the radial and circumferential directions and takes the following analytical form in the axial direction,
\begin{align}
\label{eq:womersley_rigid_velo}
v_z = \frac{k_0}{4\mu^f}\left(r^2 - R^2\right) + \sum\limits_{n=1}^{N} \frac{\iota k_n}{\rho^f n\omega} \left( 1 - \frac{J_0(\iota^{\frac{3}{2}} \alpha_n \frac{r}{R})}{J_0(\iota^{\frac{3}{2}}\alpha_n)} \right) e^{\iota n \omega t},
\end{align}
wherein $r:= \sqrt{x^2+y^2}$, $R$ is the pipe radius, $J_0$ is the zeroth-order Bessel function of the first kind, and $\alpha_n := R \sqrt{\rho^f n\omega / \mu^f}$ is the Womersley number for the $n$-th Fourier mode. The only nonzero component of WSS takes the corresponding form,
\begin{align}
\label{eq:womersley_rigid_wss}
\tau_{zr} = \sigma^f_{\mathrm{dev},zr}|_{r=R} = \frac{k_0 R}{2} - \sum\limits_{n=1}^{N} \frac{k_n R}{\iota^{\frac{3}{2}}\alpha_n} \frac{J_1(\iota^{\frac{3}{2}}\alpha_n)}{J_0(\iota^{\frac{3}{2}}\alpha_n)} e^{\iota n \omega t}.
\end{align}
The complex forms of $p$, $v_z$, and $\tau_{zr}$ in \eqref{eq:womersley_rigid_pressure}-\eqref{eq:womersley_rigid_wss} indicate the existence of two sets of real independent solutions. Here, we take the set of real components as the benchmark solution and represent a single oscillatory mode (i.e., $N=1$).
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[angle=0, trim=140 95 30 100, clip=true, scale=0.6]{./figures/rigid_womersley_p1p2_convergence.pdf}
\end{tabular}
\end{center}
\caption{Relative errors of (A) $\bm v_h$ in $L_2$ norm, (B) $\bm \tau_h$ in $L_2$ norm, (C) $p_h$ in $L_2$ norm, and (D) $p_h$ in $H_1$ norm for linear (P1) and quadratic (P2) tetrahedral elements with different mesh sizes $h$ normalized by the pipe radius $R$. Convergence rates computed from successive errors and mesh sizes are annotated.}
\label{fig:rigid_womersley_convergence}
\end{figure}
To reflect typical physiological flows, we set the pipe radius $R$ to $0.3$; fluid density $\rho^f$ and viscosity $\mu^f$ to $1.0$ and $0.04$, respectively; period $T_p$ to $1.1$; reference pressure $p_{\textup{ref}}$ to $0$; and Fourier coefficients $k_0$ and $k_1$ to $-21.0469$ and $-33.0102+42.9332\iota$, respectively. Correspondingly, the fundamental frequency $\omega$ and Womersley number $\alpha_1$ were approximately $5.71$ and $3.59$, respectively. Furthermore, given the fully developed flow, we set a short pipe length of $0.3$. The no-slip boundary condition was prescribed on the wall, and traction boundary conditions were prescribed on both the inlet and outlet. Simulations were performed with uniform time steps using both linear (P1) and quadratic (P2) tetrahedral meshes of comparable numbers of nodes generated by MeshSim (Simmetrix, Inc., Clifton Park, NY, USA), and relative errors of velocity, pressure, and WSS were computed. To circumvent confounding errors from temporal discretization, temporal refinement was performed for each simulation until the first three significant digits of all computed errors were preserved across two temporal refinement levels.
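The parameter values above and the analytical solution for $v_z$ can be evaluated directly; the following is a minimal Python sketch using SciPy's Bessel functions, with all parameter values taken from the text and only the real part retained, matching the benchmark convention.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

# Parameters from the text (cgs units)
R, rho, mu, Tp = 0.3, 1.0, 0.04, 1.1
k0, k1 = -21.0469, -33.0102 + 42.9332j

omega = 2.0 * np.pi / Tp                 # fundamental frequency, ~5.71
alpha1 = R * np.sqrt(rho * omega / mu)   # Womersley number, ~3.59
i32 = np.exp(0.75j * np.pi)              # iota^{3/2}

def v_z(r, t):
    """Real part of the single-mode (N = 1) axial Womersley velocity."""
    steady = k0 / (4.0 * mu) * (r**2 - R**2)
    osc = (1j * k1 / (rho * omega)) \
        * (1.0 - jv(0, i32 * alpha1 * r / R) / jv(0, i32 * alpha1)) \
        * np.exp(1j * omega * t)
    return steady + osc.real
```

The no-slip condition is recovered exactly at $r = R$, and the steady centerline velocity reduces to the Poiseuille value $-k_0 R^2 / (4\mu^f)$.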
Figure \ref{fig:rigid_womersley_convergence} plots the relative errors of velocity $\bm v_h$ and WSS $\bm \tau_h$ in the $L_2$ norm, and of pressure $p_h$ in the $L_2$ and $H_1$ norms. For three of the four computed errors (Figure \ref{fig:rigid_womersley_convergence} A, B, and D), we consistently observe theoretical rates, with P2 elements exhibiting spatial accuracy one order higher than that of P1 elements. For P2 elements, however, the relative error of $p_h$ in the $L_2$ norm (Figure \ref{fig:rigid_womersley_convergence} C) converges faster than the theoretical rate of $2.5$, likely because the analytical pressure is linear in space and thus lies within a subspace smaller than the approximation space.
\subsection{Womersley flow in a thin-walled elastic pipe}
\label{section:womersley_def}
As in the rigid case, the Womersley solution for pulsatile flow in an elastic pipe describes axisymmetric flow subject to a pressure gradient with both steady and oscillatory contributions. Given the motion of the elastic pipe, however, the radial velocity of the fluid is no longer identically zero, and the pressure propagates down the pipe with a gradient dependent on both time $t$ and the longitudinal coordinate $z$. This wave propagation is in sharp contrast to the rigid case, in which the fluid oscillates in bulk.
Let $c_n$ be the $n$-th complex-valued wave speed. Then under the long-wave approximation, namely that the wavelength $\lambda_n := c_{n} T_p = 2\pi c_{n} / (n \omega)$ is much larger than the pipe radius $R$, and the assumption that the wave speed $c_{n}$ is much larger than the fluid velocity, all nonlinear convective terms can be considered negligible, thereby reducing the Navier-Stokes equations to a set of linear equations. As in the rigid case in Section \ref{section:womersley_rigid}, the solution can then be represented as the summation of $N$ superimposed Fourier series. In this case, pressure can be expressed with Fourier coefficients $B_n$ as follows,
\begin{align}
\label{eq:womersley_def_pressure}
p = p_{\textup{ref}} + B_0 z + \sum_{n=1}^{N} B_n e^{\iota n\omega (t - z/c_n)}.
\end{align}
In addition to a thin-walled assumption for the elastic pipe (i.e., $h^s \ll R$), the radial wall displacement can be assumed small such that the continuity of velocity at the fluid-solid interface can be imposed at the neutral position of the wall, $r=R$. The fluid velocity components in the longitudinal and radial directions, $v_z$ and $v_r$, can then be expressed with the same Fourier coefficients $B_n$,
\begin{align}
\label{eq:womersley_def_velo}
& v_z = \frac{B_0}{4\mu^f}\left(r^2 - R^2\right) + \sum\limits_{n=1}^{N} \frac{B_n}{\rho^f c_n} \left( 1 - G_n \frac{J_0(\iota^{\frac{3}{2}} \alpha_n \frac{r}{R})}{J_0(\iota^{\frac{3}{2}}\alpha_n)} \right) e^{\iota n \omega (t - z/c_n)}, \\
& v_r = \sum\limits_{n=1}^{N} \frac{\iota n \omega B_n R}{2 \rho^f c_n^2} \left( \frac{r}{R} - G_n \frac{2 J_1(\iota^{\frac{3}{2}} \alpha_n \frac{r}{R})}{\iota^{\frac{3}{2}}\alpha_n J_0(\iota^{\frac{3}{2}}\alpha_n)} \right) e^{\iota n \omega (t - z/c_n)},
\end{align}
and the wall displacement components in the longitudinal and radial directions, $u_z$ and $u_r$, are
\begin{align}
\label{eq:womersley_def_disp}
u_z = \sum\limits_{n=1}^{N} \frac{\iota B_n}{\rho^f c_n n \omega}(G_n - 1) e^{\iota n \omega (t - z/c_n)}, \qquad u_r = \sum\limits_{n=1}^{N} \frac{B_n R}{2 \rho^f c_n^2}(1 - G_n g_n) e^{\iota n \omega (t - z/c_n)}.
\end{align}
The volumetric flow rate can be found by integrating $v_z$ over a cross-section of the pipe,
\begin{align}
\label{eq:womersley_def_flow}
Q = \int_0^R{2\pi r v_z dr} = \frac{-\pi B_0 R^4}{8 \mu^f} + \sum\limits_{n=1}^{N} \frac{B_n \pi R^2}{\rho^f c_n} (1 - G_n g_n) e^{\iota n \omega (t - z/c_n)}.
\end{align}
In the above analytical forms, $G_n$ is the elasticity factor defined as
\begin{align*}
G_n := \frac{2 + \gamma_n (2\nu - 1)}{\gamma_n (2\nu - g_n)}, \qquad \gamma_n := \frac{E h^s}{\rho^f R (1 - \nu^2) c_n^2},
\end{align*}
and the wave speed $c_n$ can be determined from the following equation,
\begin{align}
\label{eq:womersley_def_frequency_eqn}
(g_n - 1)(\nu^2 - 1) \gamma_n^2 + \left( \frac{\rho^s h^s}{\rho^f R}(g_n - 1) + (2\nu - 0.5) g_n - 2 \right) \gamma_n + \frac{2 \rho^s h^s}{\rho^f R} + g_n = 0,
\end{align}
wherein
\begin{align*}
g_n := \frac{2 J_1(\iota^{\frac{3}{2}} \alpha_n)}{\iota^{\frac{3}{2}}\alpha_n J_0(\iota^{\frac{3}{2}}\alpha_n)}.
\end{align*}
Equation \eqref{eq:womersley_def_frequency_eqn}, commonly known as the frequency equation, is constructed by demanding a nontrivial solution to the coupled system of the fluid and elastic pipe \cite[Sec.~5.7]{Zamir2000}. Upon solving for $\gamma_n$ from \eqref{eq:womersley_def_frequency_eqn}, $c_n$ can be represented as
\begin{align*}
c_n = \sqrt{\frac{2}{(1-\nu^2)\gamma_n}} c_{\mathrm{inv}},
\end{align*}
where $c_{\mathrm{inv}}$ is the wave speed in inviscid flows, as given by the Moens-Korteweg formula,
\begin{align}
\label{eq:moens-korteweg-formula}
c_{\mathrm{inv}} = \sqrt{\frac{Eh^s}{2\rho^fR}}.
\end{align}
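Since $g_n$ depends only on the Womersley number, the frequency equation is simply a quadratic in $\gamma_n$ once $g_n$ is evaluated, and can be solved numerically. The Python sketch below assumes the fluid and wall parameter values reported later in this subsection; the root-selection heuristic (take the root whose wave speed is closest to the inviscid speed) is ours, introduced purely for this demonstration.

```python
import numpy as np
from scipy.special import jv

# Fluid and wall parameters from the elastic-pipe study below (cgs units)
R, rho_f, mu, Tp = 0.3, 1.0, 0.04, 1.1
rho_s, nu, h_s, E = 1.0, 0.5, 0.06, 9.5678e6

n = 1
omega = 2.0 * np.pi / Tp
alpha_n = R * np.sqrt(rho_f * n * omega / mu)
a = np.exp(0.75j * np.pi) * alpha_n            # iota^{3/2} alpha_n
g_n = 2.0 * jv(1, a) / (a * jv(0, a))

# Frequency equation written as a quadratic in gamma_n
phi = rho_s * h_s / (rho_f * R)
coeffs = [
    (g_n - 1.0) * (nu**2 - 1.0),
    phi * (g_n - 1.0) + (2.0 * nu - 0.5) * g_n - 2.0,
    2.0 * phi + g_n,
]
gammas = np.roots(coeffs)

c_inv = np.sqrt(E * h_s / (2.0 * rho_f * R))   # Moens-Korteweg speed, ~978
speeds = np.sqrt(2.0 / ((1.0 - nu**2) * gammas)) * c_inv
# Heuristic root selection for this demo: the physical wave speed is the
# one closest to the inviscid speed.
c_n = min(speeds, key=lambda c: abs(c - c_inv))
```

For these parameter values, the text reports $c_1 = 886.31 + 29.786\iota$, which serves as a cross-check for the computation.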
The consequence of this complex-valued wave speed $c_n$ can be understood from the following decomposition,
\begin{align*}
\frac{1}{c_n} = \frac{1}{c_{n}^{\mathrm{R}}} + \iota \frac{1}{c_{n}^{\mathrm{I}}}, \quad c_{n}^{\mathrm{R}} := \left( \mathrm{Re}\left[c_{n}^{-1} \right] \right)^{-1} , \quad c_{n}^{\mathrm{I}} := \left( \mathrm{Im} \left[ c_{n}^{-1} \right] \right)^{-1},
\end{align*}
wherein $c_{n}^{\mathrm{R}}$ and $c_{n}^{\mathrm{I}}$ are commonly referred to as the dispersion and attenuation coefficients, respectively representing differences in the wave frequency and amplitude from the inviscid case. As is clear from above, $c_n$ depends not only on properties of the fluid and the pipe, but also on the frequency of oscillations.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[angle=0, trim=195 190 250 120, clip=true, scale=1.0]{./figures/womersley_def-cap-flows_pres.pdf}
\end{tabular}
\end{center}
\caption{Analytical (solid) and numerical solutions from CMM (dashed) and our reduced unified continuum formulation using either P1 (dotted) or P2 (dash-dotted) elements for the inlet and outlet (A) volumetric flow rates and (B) pressures over a period. Detailed views are shown in the right column.}
\label{fig:def_womersley_cap_pres_flow}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[angle=0, trim=195 255 205 165, clip=true, scale=0.83]{./figures/womersley_def-pres_v2.pdf}
\end{tabular}
\end{center}
\caption{Analytical (solid) and numerical solutions from CMM (dashed) and our reduced unified continuum formulation using either P1 (dotted) or P2 (dash-dotted) elements for the pressures along the longitudinal axis at different time instances. The solutions at $t=0$ and $t=T$ are overlaid as a result of temporal periodicity. A detailed view is shown on the right.}
\label{fig:def_womersley_pres}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[angle=0, trim=270 90 260 88, clip=true, scale=1.2]{./figures/womersley_def-velo_profiles_axial_v3.pdf} \\
\includegraphics[angle=0, trim=270 90 260 88, clip=true, scale=1.2]{./figures/womersley_def-velo_profiles_radial_v3.pdf}
\end{tabular}
\end{center}
\caption{Analytical and numerical solutions from CMM (dashed) and our reduced unified continuum formulation using either P1 or P2 elements for the (A) longitudinal and (B) radial velocity profiles along the $y$-axis on the $z=L/2$ surface at different time instances. Detailed views at $t=T/5$ and $t=3T/5$ are shown in the bottom row.}
\label{fig:def_womersley_axial_velo_profiles}
\label{fig:def_womersley_radial_velo_profiles}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[angle=0, trim=210 90 210 85, clip=true, scale=1.2]{./figures/womersley_def-wall_disp_velo_v5.pdf}
\end{tabular}
\end{center}
\caption{Analytical (solid) and numerical solutions from CMM (dashed) and our reduced unified continuum formulation using either P1 (dotted) or P2 (dash-dotted) elements for the (A) longitudinal and (B) radial fluid velocities at the wall, and the (C) longitudinal and (D) radial wall displacements along the longitudinal axis at different time instances. The solutions at $t=0$ and $t=T$ are overlaid as a result of temporal periodicity. Detailed views are shown in the right column.}
\label{fig:def_womersley_wall_disp_velo}
\end{figure}
We again considered only the real components as the benchmark solution and represented a single oscillatory mode ($N=1$). As in Section \ref{section:womersley_rigid}, we set the pipe radius $R$ to 0.3, fluid density $\rho^f$ to $1.0$, viscosity $\mu^f$ to $0.04$, period $T_p$ to $1.1$, and reference pressure $p_{\textup{ref}}$ to $0$. We further set the pipe length $L$ to 15, and considered uniform wall properties, including a wall density $\rho^s$ of 1.0, Poisson's ratio $\nu$ of 0.5, thickness $h^s$ of 0.06, and Young's modulus $E$ of $9.5678 \times 10^6$, which yielded a wave speed $c_1$ of $886.31 + 29.786\iota$. In order to achieve the same volumetric flow rate as in Section \ref{section:womersley_rigid}, the Fourier coefficients $B_0$ and $B_1$ were set to $-21.0469$ and $-4926.29-4092.54\iota$, respectively. Given these parameters, we may examine the validity of the invoked assumptions. At the fundamental frequency, the real component of the wave speed $c_{1}^{\mathrm{R}} = 887.31$ is much larger than the maximum longitudinal velocity $\mathrm{max}\lbrace v_z \rbrace = 21.0701$. Correspondingly, the real component of the leading wavelength $\lambda_1^{\mathrm{R}} := c_{1}^{\mathrm{R}} T_p = 2\pi c_{1}^{\mathrm{R}} / \omega$ is $976.05$, three orders of magnitude larger than $R=0.3$, thereby satisfying the long wave approximation assumption. We further verify for the elastic pipe that both the thickness $h^s = 0.06$ and the maximum radial wall displacement $\mathrm{max}\lbrace u_r\rbrace = 0.0010$ are much smaller than $R$.
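The assumption checks above can be reproduced numerically from the reported wave speed; the following is a minimal sketch of the reciprocal decomposition into dispersion and attenuation coefficients.

```python
import numpy as np

c1 = 886.31 + 29.786j    # wave speed reported in the text
Tp = 1.1

# Reciprocal decomposition: 1/c = 1/c_R + iota/c_I
c1_R = 1.0 / np.real(1.0 / c1)   # dispersion coefficient
c1_I = 1.0 / np.imag(1.0 / c1)   # attenuation coefficient

lambda1_R = c1_R * Tp            # real component of the leading wavelength
```

The negative attenuation coefficient is consistent with wave amplitudes decaying as the wave propagates down the pipe under the $e^{\iota n \omega (t - z/c_n)}$ convention.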
To account for the truncation of the semi-infinite domain used in Womersley's derivation to a finite domain, Cartesian velocity components were prescribed on the boundary nodes of the wall at $z=0$ and $z=L$ in the following form,
\begin{align*}
\bm v|_{r=R} := \{v_x, v_y, v_z\}|_{r=R} = \{ v_r \mathrm{cos}\theta, v_r \mathrm{sin}\theta, v_z \}|_{r=R},
\end{align*}
wherein $\theta$ is the four-quadrant inverse tangent of the point $(x,y)$.\footnote{ This function is commonly denoted as $\theta := \mathrm{atan2}\left( y, x \right)$ in programming languages.} Traction boundary conditions were prescribed on both the inlet and outlet surfaces, where the traction $\bm h^f$ was constructed from the pressure in \eqref{eq:womersley_def_pressure} and the following Cartesian velocity gradients,
\begin{alignat*}{3}
\frac{\partial v_x}{\partial x} &= \mathrm{cos}^2\theta \frac{\partial v_r}{\partial r} + \frac{\mathrm{sin}^2\theta}{r} v_r, \qquad
&\frac{\partial v_x}{\partial y} &= \mathrm{sin}\theta \mathrm{cos}\theta \left( \frac{\partial v_r}{\partial r} - \frac{v_r}{r} \right), \qquad
&\frac{\partial v_x}{\partial z} &= \mathrm{cos}\theta \frac{\partial v_r}{\partial z}, \\
\frac{\partial v_y}{\partial x} &= \mathrm{sin}\theta \mathrm{cos}\theta \left( \frac{\partial v_r}{\partial r} - \frac{v_r}{r} \right), \qquad
&\frac{\partial v_y}{\partial y} &= \mathrm{sin}^2\theta \frac{\partial v_r}{\partial r} + \frac{\mathrm{cos}^2\theta}{r} v_r, \qquad
&\frac{\partial v_y}{\partial z} &= \mathrm{sin}\theta \frac{\partial v_r}{\partial z}, \\
\frac{\partial v_z}{\partial x} &= \mathrm{cos}\theta \frac{\partial v_z}{\partial r}, \qquad
&\frac{\partial v_z}{\partial y} &= \mathrm{sin}\theta \frac{\partial v_z}{\partial r}, \qquad
& \frac{\partial v_z}{\partial z} &= \frac{\partial v_z}{\partial z},
\end{alignat*}
wherein
\begin{align*}
& \frac{\partial v_z}{\partial r} = \frac{B_0 r}{2\mu^f} + \sum_{n=1}^N \frac{\iota^{\frac{3}{2}} \alpha_n B_n G_n J_1(\iota^{\frac{3}{2}} \alpha_n \frac{r}{R})}{\rho^f c_n R J_0(\iota^{\frac{3}{2}} \alpha_n)} e^{\iota n \omega (t - z/c_n)}, \displaybreak[2] \\
& \frac{\partial v_z}{\partial z} = \sum_{n=1}^N \frac{-\iota n \omega B_n}{\rho^f c_n^2} \left( 1 - G_n \frac{J_0(\iota^{\frac{3}{2}} \alpha_n \frac{r}{R})}{J_0(\iota^{\frac{3}{2}} \alpha_n)}\right) e^{\iota n \omega (t - \frac{z}{c_n})}, \displaybreak[2] \\
& \frac{\partial v_r}{\partial r} = \sum_{n=1}^N\frac{\iota n \omega B_n}{2 \rho^f c_n^2}\left( 1 - G_n \frac{J_0(\iota^{\frac{3}{2}} \alpha_n \frac{r}{R}) - J_2(\iota^{\frac{3}{2}} \alpha_n \frac{r}{R})}{J_0(\iota^{\frac{3}{2}} \alpha_n)} \right) e^{\iota n \omega (t - z/c_n)}, \displaybreak[2] \\
& \frac{\partial v_r}{\partial z} = \sum_{n=1}^N\frac{n^2 \omega^2 B_n R}{2 \rho^f c_n^3} \left( \frac{r}{R} - G_n \frac{2 J_1(\iota^{\frac{3}{2}} \alpha_n \frac{r}{R})}{\iota^{\frac{3}{2}} \alpha_n J_0(\iota^{\frac{3}{2}} \alpha_n)} \right) e^{\iota n \omega (t - z/c_n)}.
\end{align*}
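As a quick sanity check of the Cartesian gradient transformation above, the analytical expression for $\partial v_x / \partial x$ can be compared against a finite-difference approximation of a generic axisymmetric radial field; the field $v_r(r) = \sin r$ below is hypothetical, chosen purely for illustration.

```python
import numpy as np

# Hypothetical axisymmetric radial field and its derivative
v_r = lambda r: np.sin(r)
dv_r_dr = lambda r: np.cos(r)

def v_x(x, y):
    """Cartesian x-component: v_x = v_r(r) cos(theta)."""
    r = np.hypot(x, y)
    return v_r(r) * x / r

x0, y0 = 0.2, 0.15
r0 = np.hypot(x0, y0)
cth, sth = x0 / r0, y0 / r0

# dv_x/dx from the chain-rule transformation in the text
analytic = cth**2 * dv_r_dr(r0) + sth**2 / r0 * v_r(r0)

# Central finite-difference approximation
eps = 1e-6
numeric = (v_x(x0 + eps, y0) - v_x(x0 - eps, y0)) / (2.0 * eps)
```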
We note that our choice of boundary conditions differs from the approach adopted in verification studies for CMM, in which only the normal components of the tractions are prescribed on the inlet and outlet surfaces via impedance boundary conditions \cite{Figueroa2006a,Filonova2019}.
Simulations were performed over three periods with uniform time steps using linear and quadratic tetrahedral meshes, both of 284,400 elements and respectively 53,879 and 404,473 nodes. The linear tetrahedral mesh was additionally used to make comparisons against svSolver \cite{svsolver}, the CMM implementation in SimVascular. For each simulation, only the final period was analyzed. Given the assumptions and scaling analyses invoked in the derivations, the analytical solutions \eqref{eq:womersley_def_velo}-\eqref{eq:womersley_def_disp} are only approximate solutions to the FSI problem presented in Section \ref{sec:formulation} and thereby preclude any spatial convergence analyses. We show comparisons of analytical and numerical solutions for the volumetric flow rates and pressures (Figures \ref{fig:def_womersley_cap_pres_flow}, \ref{fig:def_womersley_pres}), longitudinal and radial fluid velocity profiles (Figure \ref{fig:def_womersley_radial_velo_profiles}), and longitudinal and radial wall velocity and displacement (Figure \ref{fig:def_womersley_wall_disp_velo}). We note that all numerical results are nearly indistinguishable from the analytical solutions. Differences, however, can be observed in the detailed views, where the P2 results are in closer agreement with the analytical solutions than the P1 results. In Figures \ref{fig:def_womersley_cap_pres_flow}B and \ref{fig:def_womersley_pres}, we observe that CMM yields larger discrepancies in pressure than our proposed method, likely due to the different treatment of pressure in the temporal discretization (see Remark \ref{remark:gen-alpha-pressure}) \cite{Liu2020a}. Across all numerical cases, the fluid velocity is in good agreement with the analytical solutions and presents axisymmetric profiles along the radial direction.
This is in sharp contrast to existing CMM verification results in the literature exhibiting a notable lack of axisymmetry in the radial velocity profiles \cite{Filonova2019} that could be attributed to the outlet impedance boundary condition \cite{Figueroa2006a,Filonova2019}, which neglects viscous traction components. Contrary to the fluid quantities, larger discrepancies can be observed in the wall displacement and velocity. These discrepancies, which were not mitigated upon mesh refinement, can be attributed to the assumptions inherent in the theory.
\section{Physiological modeling techniques}
\label{sec:practical_modeling_techniques}
In this section, we briefly present a suite of practical techniques for appropriate modeling of physiological phenomena in clinical applications. Specifically, these techniques pertain to vascular wall thickness heterogeneity, in vivo tissue prestressing, and boundary conditions reflecting distal vasculature.
\subsection{Spatially varying vascular wall thickness}
The most commonly employed imaging modalities, such as computed tomography or magnetic resonance angiography, do not adequately resolve vascular wall thicknesses for most applications of clinical interest. While intravascular ultrasound is a notable exception, it is only performed in a small subset of clinical cases, primarily in the coronary vasculature. As a result, spatially varying distributions of vascular wall thicknesses must frequently be prescribed with limited knowledge, commonly with an assumed local thickness-to-radius ratio \cite{Humphrey2002}. In a previously proposed Laplacian approach \cite{Bazilevs2009a}, a Laplacian problem is solved with prescribed Dirichlet boundary conditions at the wall boundary nodes on all inlets and outlets. A similar approach has also since been adopted for prescribing cardiac fiber orientations in heart models \cite{Wong2014}. While the Laplacian approach effectively generates smooth distributions of wall thicknesses, it fails to capture sharp local changes in geometry as often occur in disease, yielding physiological thicknesses near the inlets and outlets but significant deviations from the desired thickness-to-radius ratio elsewhere. For example, non-physiological wall thicknesses up to 134\% of the local radii are prescribed in the coronary arteries near the ostia on the aortic root (Figure \ref{fig:var_wall_thickness}B).
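The behavior described above can be illustrated with a one-dimensional stand-in for the Laplacian approach: thickness values are fixed at the two ends and harmonically interpolated in between, which yields a smooth distribution that is blind to a local radius change. This toy example is ours, introduced purely for illustration.

```python
import numpy as np

# 1D stand-in: thickness fixed at the two ends (inlet/outlet rings) and
# harmonically interpolated on the interior nodes.
N = 11
radius = np.full(N, 0.3)
radius[4:7] = 0.15          # a local "stenosis": the radius halves mid-vessel

ratio = 0.2                 # desired thickness-to-radius ratio
h = np.zeros(N)
h[0], h[-1] = ratio * radius[0], ratio * radius[-1]

# Discrete Laplace equation on interior nodes (second-difference stencil)
# with Dirichlet values at the ends.
L_mat = -2.0 * np.eye(N - 2) + np.eye(N - 2, k=1) + np.eye(N - 2, k=-1)
rhs = np.zeros(N - 2)
rhs[0] = -h[0]
rhs[-1] = -h[-1]
h[1:-1] = np.linalg.solve(L_mat, rhs)

# The harmonic solution is smooth (here constant, since both ends agree)
# but ignores the local radius change, so the thickness-to-radius ratio
# deviates from 20% at the stenosis.
local_ratio = h / radius
```

At the stenosis the local thickness-to-radius ratio reaches 40\%, twice the prescribed value, mirroring the deviations highlighted in Figure \ref{fig:var_wall_thickness}B.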
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[angle=0, trim=135 95 150 95, clip=true, scale=0.63]{./figures/var_wall_thickness.pdf}
\end{tabular}
\end{center}
\caption{(A) Centerlines extracted for healthy models of the aortoiliac (top) and coronary arteries (bottom). (B) Spatially varying wall thickness distributions obtained from the centerline-based (left) and Laplacian (right) approaches. The wall thickness is precisely 20\% of the local radius \textit{everywhere} in the centerline-based approach, but \textit{only at the inlets and outlets} in the Laplacian approach. Local thickness-to-radius ratios are annotated at multiple sites to highlight the Laplacian approach's deviation from the prescribed thickness-to-radius ratio and its resulting non-physiological wall thickness distribution.}
\label{fig:var_wall_thickness}
\end{figure}
For more refined control over the local thickness, we instead adopt a centerline-based approach similar to the one used in \cite{Xiao2013}, in which centerlines for all inlet-outlet pairs are extracted using the Vascular Modeling Toolkit \cite{vmtk-website,Antiga2008}. Upon specifying a global distribution over the entire wall, we can overwrite thicknesses with distinct local distributions for arbitrary sub-domains of the wall. We summarize our approach in Algorithm \ref{algorithm:var_wall_prop}, in which the conglomeration of all vessel centerlines is referred to as the \textit{global centerline}. We note that for geometries with sharp changes in radius, simply computing the local radius as the shortest distance to the global centerline could yield values based not on the vessel centerline of interest, but rather on an adjoining centerline of a neighboring vessel. It is therefore sometimes helpful to extract individual vessel-specific centerlines from the global centerline prior to overwriting thicknesses in vessel sub-domains. In this work, we set a thickness-to-radius ratio of 20\% \cite{Humphrey2002}, which is precisely satisfied everywhere (Figure \ref{fig:var_wall_thickness}B).
\begin{algorithm}[H]
\caption{Centerline-based assignment of spatially varying vascular wall thickness.}
\label{algorithm:var_wall_prop}
\begin{algorithmic}[1]
\State \texttt{Extract the global centerline from the wall surface \cite{Antiga2002}}
\For{ \texttt{each node on the wall} }
\State \texttt{Compute the radius $r$ as the shortest distance to the global centerline}
\State \texttt{Compute the thickness} $h^s \gets x\% \ r$
\EndFor
\For{ \texttt{each sub-domain with a distinct local distribution} }
\State \texttt{Extract the local centerline from the global centerline}
\For{ \texttt{each node on the sub-domain} }
\State \texttt{Compute the radius $r$ as the shortest distance to the local centerline}
\State \texttt{Compute the thickness $h^s$ as desired}
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
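The first loop of Algorithm \ref{algorithm:var_wall_prop} can be sketched in a few lines. This is a toy illustration only: the actual pipeline operates on VMTK-extracted centerlines and finite element mesh data structures, and the geometry below is hypothetical.

```python
import math

def centerline_thickness(wall_nodes, centerline, ratio=0.20):
    """Toy centerline-based assignment: for each wall node, the radius r
    is the shortest distance to the discretized centerline, and the
    thickness is h = ratio * r (here the prescribed x% is 20%)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return [ratio * min(dist(node, c) for c in centerline) for node in wall_nodes]

# Hypothetical data: a straight vessel of unit radius along the z-axis.
centerline = [(0.0, 0.0, 0.25 * k) for k in range(17)]  # z in [0, 4]
wall_nodes = [(1.0, 0.0, 1.0), (0.0, -1.0, 2.5),
              (math.sqrt(0.5), math.sqrt(0.5), 3.0)]
thicknesses = centerline_thickness(wall_nodes, centerline)
```

Every wall node here sits at unit distance from the centerline, so each assigned thickness is 20\% of the local radius, mirroring the uniform thickness-to-radius ratio shown in Figure \ref{fig:var_wall_thickness}B.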
\subsection{Tissue prestressing}
The semi-discrete FSI formulation \eqref{eq:semidiscrete-fsi-couple}-\eqref{eq:semidiscrete-fsi-continuity} assumes the in vivo vascular wall configuration at imaging to be stress-free, yet vascular walls withstand physiological loading. An internal stress state, termed the prestress, must exist to balance the in vivo blood pressure and viscous traction. In contrast to approaches that seek to determine a stress-free configuration \cite{Tezduyar2008,Nama2020}, here we generate the prestress $\bm \sigma_0$ via a fixed-point algorithm similar to the one proposed for an ALE formulation \cite{Hsu2011, Baeumler2020}. Given a prestress field $\bm \sigma_0$, the wall momentum balance \eqref{eq:semi-discrete-u-reformulated} in the FSI formulation can correspondingly be modified as
\begin{align}
\label{eq:semi-discrete-u-with-fluid-traction}
\mathbf B^w_{\mathrm{m}} \left( \bm w_h^f ; \dot{\bm y}_h, \bm y_h\right) := & \int_{\Gamma_I} \bm w_h^f \cdot \rho^s h^s \left( \frac{d \bm v_h^f}{d t} - \bm b^s \right) d\Gamma + \int_{\Gamma_I} h^s \bm \epsilon(\bm w_h^f) : \Big( \bm \sigma^s(\bm u_h^w) + \bm \sigma_0 \Big) d\Gamma \nonumber \\
&- \int_{\partial \Gamma_I \cap \Gamma^h_s } h^s \bm w_h^f \cdot \bm h^s d\Gamma.
\end{align}
To determine $\bm \sigma_0$, we consider the following variational problem for the vascular wall. Given the body force per unit mass $\bm b^s$, boundary traction $\bm h^s$, and fluid boundary traction $\bm h^f$, find $\bm u_h^w \in \mathcal S^w_{\bm u}$ and $\bm v^w_h \in \mathcal S^w_{\bm v}$, such that for all $\bm w^f_h \in \mathcal{V}^f_{\bm v}$,
\begin{align}
\label{eq:semi-discrete-u-prestress-kinematics}
\bm 0 =& \frac{d \bm u^w_h}{dt} - \bm v^w_h, \quad \mbox{ and } \quad
0 = \mathbf B^w_{\mathrm{m}} \left( \bm w_h^f ; \dot{\bm y}_h, \bm y_h\right) + \int_{\Gamma_I} \bm w_h^f \cdot \bm h^f d\Gamma,
\end{align}
where $\mathcal S^w_{\bm u}$ and $\mathcal{V}^f_{\bm v}$ are as previously defined in Section \ref{subsec:semi-discrete-formulation}, and $\mathcal S^w_{\bm v}$ is a suitable trial solution space for the wall velocity. Using the prestress generation algorithm summarized below, $\bm \sigma_0$ is then determined such that equations \eqref{eq:semi-discrete-u-prestress-kinematics} are satisfied under the imaged wall configuration. We denote the prestress at the $m$-th iteration as $\bm \sigma_{0,(m)}$ and the maximum number of iterations as $m_{\mathrm{max}}$.
\begin{myenv}{Prestress generation algorithm}
\noindent \textbf{Initialization:} Set $\bm \sigma_{0, (0)} = \bm 0$ and $\bm u^w_{0} = \bm 0$.
\noindent \textbf{Fixed-point iteration:} Repeat the following steps for $m=0, 1, ..., m_{\mathrm{max}}$.
\begin{enumerate}
\item Set $\bm \sigma_0 = \bm \sigma_{0, (m)}$ and $\bm u^w_m = \bm 0$.
\item From $t_m$ to $t_{m+1}$, solve the variational problem \eqref{eq:semi-discrete-u-prestress-kinematics} for $\bm u^w_{m+1}$ and $\bm v^w_{m+1}$ using the backward Euler method for temporal discretization.
\item Update the prestress tensor as $\bm \sigma_{0, (m+1)} = \bm \sigma^s(\bm u^w_{m+1}) + \bm \sigma_{0, (m)}$.
\item Let $\mathrm{tol}_{\mathrm{P}}$ denote a prescribed tolerance. If the stopping criterion $\| \bm u^w_{m+1} \|_{\mathfrak l_2} \leq \mathrm{tol}_{\mathrm{P}}$ is satisfied, then set $\bm \sigma_0 = \bm \sigma_{0,(m+1)}$ and exit the fixed-point iteration.
\end{enumerate}
\end{myenv}
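The fixed-point iteration above can be illustrated with a scalar analogue. This is a sketch under the simplifying assumption of a linear wall response $\sigma^s(u) = k\,u$; the load and stiffness values are illustrative, not taken from the solver.

```python
def prestress_fixed_point(p_load, k, tol=1e-10, m_max=50):
    """Scalar analogue of the prestress algorithm: find sigma0 such that
    the wall balances the load p_load with (near-)zero displacement.
    Equilibrium at each iteration: k*u + sigma0 = p_load."""
    sigma0 = 0.0
    for _ in range(m_max):
        u = (p_load - sigma0) / k   # step 2: solve for the displacement
        sigma0 = k * u + sigma0     # step 3: sigma_{0,(m+1)} = sigma^s(u) + sigma_{0,(m)}
        if abs(u) <= tol:           # step 4: stopping criterion on ||u||
            break
    return sigma0, u

# Illustrative diastolic-scale load (CGS units) and wall stiffness.
sigma0, u_final = prestress_fixed_point(p_load=1.08e5, k=1.3e6)
```

For this linear toy problem the iteration converges essentially in one step, with the prestress absorbing the entire load; the nonlinear vascular wall requires several iterations of the algorithm above.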
\begin{remark}
\label{remark:prestress_diastolic_traction}
To minimize cardiac motion artifacts, cardiac images are commonly acquired at diastole via electrocardiogram gating. The fluid boundary traction $\bm h^f$ at diastole can then be obtained from a separate rigid-wall CFD simulation prescribed with a steady diastolic inflow rate and outlet resistances tuned to achieve the corresponding diastolic pressures and flow splits.
\end{remark}
\subsection{Coupling with reduced models}
\label{subsec:coupling-with-reduced-models}
As alluded to in Section \ref{sec:iterative_solution_method}, zero-dimensional models representing the downstream vasculature are frequently coupled to outlets of the three-dimensional domain \cite{Moghadam2013, VignonClementel2006, VignonClementel2010}. While we restrict our attention to Neumann coupling with zero-dimensional models in which the boundary traction is a function of the flow rate at the corresponding outlet surface only, we note that more generally, any arbitrary combination of Neumann (Dirichlet-to-Neumann) and Dirichlet (Neumann-to-Dirichlet) inlets and outlets and their corresponding system of (nonlinear) ordinary differential equations can be considered \cite{Kung2020}. We consider the Neumann boundary $\Gamma^f_h$ to consist of $n_{\mathrm{out}}$ non-overlapping planar outlet surfaces,
\begin{align*}
\Gamma^f_h = \bigcup_{i=1}^{\mathrm n_{\mathrm{out}}} \Gamma^f_{\mathrm{out},i}, \qquad \overline{\Gamma^f_{\mathrm{out},i}} \cap \overline{\Gamma^f_{\mathrm{out},j}} = \emptyset, \mbox{ for } 1 \leq i,j \leq \mathrm n_{\mathrm{out}} \mbox{ and } i \neq j.
\end{align*}
Let $\mathcal F^k$ be a functional operator and $Q^k(t)$ be the volumetric flow rate through the outlet surface $\Gamma^f_{\mathrm{out},k}$,
\begin{align*}
Q^k(t) := \int_{\Gamma^f_{\mathrm{out},k}} \bm v^f(t) \cdot \bm n d\Gamma.
\end{align*}
The boundary traction on $\Gamma^f_{\mathrm{out},k}$ is then given by $\bm h^f = -P^k(t)\bm n$, where $P^k(t) = \mathcal F^k(Q^k(t))$, and the term \eqref{eq:vms_traction} can be written explicitly as
\begin{align*}
\mathbf B_{\mathrm{m}}^{\mathrm{h}} \left( \bm w_h^f ; \dot{\bm y}_h^f, \bm y_h^f \right) := - \int_{\Gamma_h^f} \bm w_h^f \cdot \bm h^f d\Gamma = \sum_{k=1}^{n_{\mathrm{out}}} \mathcal F^k(Q^k(t)) \int_{\Gamma^f_{\mathrm{out},k}} \bm w_h^f \cdot \bm n d\Gamma.
\end{align*}
The corresponding contribution to the block matrix $\boldsymbol{\mathrm A}_{(l)}$ in \eqref{eq: predictor_multi_correct_notation_for_block_matrices} is then a weighted sum of rank-one matrices. Readers are referred to \cite[Section~2.4]{Liu2020} for more details of the consistent tangent matrix. The three-element Windkessel model and coronary model, two commonly used zero-dimensional models for Neumann coupling in cardiovascular simulations, are reviewed in \ref{sec:appendix}.
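For concreteness, the Neumann functional $P^k = \mathcal F^k(Q^k)$ of a three-element Windkessel outlet can be sketched with a forward-Euler discretization of the standard RCR equations, $C\,dP_d/dt = Q - P_d/R_d$ with $P = R_p Q + P_d$. Parameter values below are illustrative and not tuned to any model in this work.

```python
def windkessel_rcr(flow, dt, Rp, C, Rd, Pd0=0.0):
    """Three-element Windkessel (RCR) outlet: proximal resistance Rp,
    capacitance C, distal resistance Rd. Integrates
    C dPd/dt = Q - Pd/Rd and returns P = Rp*Q + Pd at each step."""
    Pd, pressures = Pd0, []
    for q in flow:
        pressures.append(Rp * q + Pd)
        Pd += dt * (q - Pd / Rd) / C
    return pressures

# A steady flow rate relaxes to P = (Rp + Rd)*Q after a few time constants Rd*C.
P = windkessel_rcr(flow=[10.0] * 20000, dt=1e-3, Rp=100.0, C=1e-4, Rd=1000.0)
```

The steady-state relation $P = (R_p + R_d)Q$ is what makes resistance tuning against target pressures and flow splits straightforward, while $C$ sets the pulse-pressure damping over a cardiac cycle.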
\section{Clinical applications}
\label{sec:clinical_applications}
In this section, we apply our combined FSI technology and practical modeling techniques to two patient-specific models, one of the pulmonary arteries of a healthy 9-year-old male and the other of the coronary arteries of a healthy 24-year-old male. Linear tetrahedral meshes were generated with three boundary layers each, at a thickness gradation ratio of $0.5$. Patient-specific inflow waveforms were prescribed with parabolic velocity profiles. Outlet boundary conditions were tuned to achieve target inlet systolic and diastolic pressures as well as assumed flow splits (Table \ref{table:model_characteristics}). Specifically, in the pulmonary model, all outlets were coupled to RCR models, and flow was assumed to be evenly distributed to the left and right lungs; in the coronary model, the aortic outlet was coupled to an RCR model while all remaining outlets were coupled to coronary models. Furthermore, $4$\% of the flow was distributed to the coronary arteries, with a $60$\%-$40$\% split for the left and right coronary arteries. Consistent with Section \ref{section:womersley_def}, we adopt the centimeter-gram-second units, and we set the fluid density $\rho^f$ to $1.0$, fluid viscosity $\mu^f$ to $0.04$, and wall density $\rho^s$ to $1.0$. Unless otherwise specified, we set the wall Poisson's ratio $\nu$ to $0.5$, and the Young's modulus $E$ to $1.3 \times 10^6$ uniformly for the pulmonary arteries \cite{Yang2019}, $7.0 \times 10^6$ for the aortic root, and $1.15 \times 10^7$ for the coronary arteries \cite{Ramachandra2016}. The time step size was chosen to be $T_p / 2000$. As discussed in Remark \ref{remark:prestress_diastolic_traction}, we generated initial conditions for each FSI simulation by first running a rigid-wall CFD simulation to generate solution fields at the diastolic pressure. 
The prestress generation algorithm was subsequently used to obtain the prestress $\bm \sigma_0$ balancing the diastolic fluid boundary traction under zero wall displacement relative to the imaged configuration.
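Since the simulations use centimeter-gram-second units while target pressures are reported clinically in mm Hg, pressures must be converted to dyn/cm$^2$ when tuning outlet boundary conditions. A one-line helper suffices (the conversion factor is the standard 1 mm Hg = 1333.22 dyn/cm$^2$):

```python
MMHG_TO_DYN_PER_CM2 = 1333.22  # 1 mm Hg in the CGS pressure unit dyn/cm^2

def mmhg_to_cgs(p_mmhg):
    """Convert a clinically reported pressure (mm Hg) to CGS units."""
    return p_mmhg * MMHG_TO_DYN_PER_CM2

# Coronary-model targets: 123/81 mm Hg systolic/diastolic inlet pressure.
p_sys, p_dia = mmhg_to_cgs(123.0), mmhg_to_cgs(81.0)
```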
\begin{table*}[htbp]
\footnotesize
\begin{center}
\tabcolsep=0.25cm
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c | c c c c c c c c}
\hline
\hline
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Sex}} & \vtop{\hbox{\strut \textbf{Age}}\hbox{\strut \textbf{(year)}}} & \vtop{\hbox{\strut \bm{$R_{\mathrm{in}}$}}\hbox{\strut \textbf{(cm)}}} & \vtop{\hbox{\strut \bm{$P_{\mathrm{in}}$}}\hbox{\strut \textbf{(mm Hg)}}} & \vtop{\hbox{\strut \textbf{CO}}\hbox{\strut \textbf{(L/min)}}} & \multirow{2}{*}{\textbf{Flow Split}} & \vtop{\hbox{\strut \bm{$T_p$}}\hbox{\strut \textbf{(s)}}} & \multirow{2}{*}{\textbf{Outlets}} \\
\hline
\vtop{\hbox{\strut Pulmonary}\hbox{\strut Arteries}} & \multirow{2}{*}{M} & \multirow{2}{*}{$9$} & \multirow{2}{*}{$1.24$} & \multirow{2}{*}{$24 \ / \ 8$} & \multirow{2}{*}{$4.59$} & \vtop{\hbox{\strut $50$\% LPA}\hbox{\strut $50$\% RPA}} & \multirow{2}{*}{$0.811$} & \multirow{2}{*}{$46$ RCR} \\
\hline
\multirow{3}{*}{\vtop{\hbox{\strut Coronary}\hbox{\strut Arteries}}} & \multirow{3}{*}{M} & \multirow{3}{*}{$24$} & \multirow{3}{*}{$1.40$} & \multirow{3}{*}{$123 \ / \ 81$} & \multirow{3}{*}{$3.78$} & \vtop{\hbox{\strut $96$\% AO}\hbox{\strut $2.4$\% LCOR}\hbox{\strut $1.6$\% RCOR}} & \multirow{3}{*}{$1.43$} & \multirow{3}{*}{\vtop{\hbox{\strut $1$ RCR}\hbox{\strut $25$ coronary}}} \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Model characteristics. $R_{\mathrm{in}}$: inlet radius; $P_{\mathrm{in}}$: target inlet pressure; CO: prescribed cardiac output; $T_p$: cardiac period; LPA: left pulmonary artery; RPA: right pulmonary artery; AO: aorta; LCOR: left coronary artery; RCOR: right coronary artery.}
\label{table:model_characteristics}
\end{table*}
All results reported in this section were obtained using the TaiYi supercomputer, a Lenovo system equipped with Intel Xeon Gold 6148 processors interconnected by a 100 GB/s Intel Omni-Path network. Each processor consists of 40 CPUs and 192 GB RAM and operates at a clock rate of 2.4 GHz \cite{Taiyi-machine-details}.
\subsection{Linear solver robustness}
\label{subsection:solver_robustness}
We examined the linear solver performance under varying wall properties. For this test, we set the relative tolerance $\delta = 10^{-8}$ for the stopping criterion, and the maximum number of iterations for the outer, intermediate, and inner solvers $\mathrm n^{\textup{max}} = \mathrm n^{\textup{max}}_{\mathrm A} = \mathrm n^{\textup{max}}_{\mathrm S} = 200$. Relative tolerances $\delta_{\mathrm A}$ and $\delta_{\mathrm S}$ were jointly varied from $10^{-6}$ to $10^{-2}$, and $\delta_{\mathrm I} = \sqrt{\delta_{\mathrm A}}$. As described in Section \ref{sec:iterative_solution_method}, the preconditioners $\boldsymbol{\mathrm P}_{\mathrm A}$ and $\boldsymbol{\mathrm P}_{\mathrm S}$ were formed by BoomerAMG based on $\boldsymbol{\mathrm A}$ and $\hat{\boldsymbol{\mathrm S}}$, respectively.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[angle=0, trim=90 90 130 90, clip=true, scale = 0.3]{./figures/conv_history_pul_E1e5_nu0d5.pdf} &
\includegraphics[angle=0, trim=90 90 130 90, clip=true, scale = 0.3]{./figures/conv_history_pul_E1e6_nu0d5.pdf} \\
(A) $E=1.3\times 10^5$ & (B) $E=1.3\times 10^6$ \\
\includegraphics[angle=0, trim=90 90 130 90, clip=true, scale = 0.3]{./figures/conv_history_pul_E1e7_nu0d5.pdf} &
\includegraphics[angle=0, trim=90 90 130 90, clip=true, scale = 0.3]{./figures/conv_history_pul_rigid.pdf} \\
(C) $E=1.3\times 10^7$ & (D) Rigid Wall \\
\multicolumn{2}{c}{ \includegraphics[angle=0, trim=420 120 840 170, clip=true, scale = 0.45]{./figures/conv_history_legend.pdf}} \\
\multicolumn{2}{c}{ \includegraphics[angle=0, trim=1140 120 320 170, clip=true, scale = 0.45]{./figures/conv_history_legend.pdf}}
\end{tabular}
\caption{Convergence history for the pulmonary arterial model with varying values of the prescribed Young's modulus. A rigid-wall CFD simulation is also included for comparison. The latter three items in the legend correspond to the three alternative linear solver options investigated. The horizontal dashed black line demarcates the prescribed stopping criterion for the relative error $\delta = 10^{-8}$. CPU times (s) for the linear solver averaged over ten time steps are annotated. NC: no convergence within the prescribed maximum number of iterations.}
\label{fig:conv_history_pul}
\end{center}
\end{figure}
We additionally compared three other linear solver options. In the first alternative, we applied the block preconditioner without invoking the inner solver, that is, replacing \eqref{eq:seg_sol_pres} in Algorithm \ref{algorithm:exact_block_factorization} with $\hat{\boldsymbol{\mathrm S}} \bm y_p = \bm s_{p}$. In the second alternative, we applied the additive Schwarz method to $\boldsymbol{\mathcal A}$, using the incomplete LU factorization for the subdomain solver. In the third alternative, we applied the Jacobi preconditioner to $\boldsymbol{\mathcal A}$. For the latter two alternatives, we increased $\mathrm n^{\textup{max}} = \mathrm n^{\textup{max}}_{\mathrm A} = \mathrm n^{\textup{max}}_{\mathrm S}$ to $1 \times 10^4$, as significantly more iterations are generally required for convergence.
The pulmonary arterial mesh consists of $2.11 \times 10^6$ linear tetrahedral elements and $3.97 \times 10^5$ nodes, corresponding to $1.59 \times 10^6$ degrees of freedom in the associated linear system. Solver performance was investigated with varying values of the prescribed Young's modulus over three orders of magnitude, namely $E=1.3\times 10^5$, $1.3 \times 10^6$, and $1.3 \times 10^7$. The rigid-wall CFD simulation was also included as the extreme case of an infinitely large Young's modulus. Simulations were performed on a single node with $16$ CPUs. Figure \ref{fig:conv_history_pul} depicts all convergence histories and further annotates the CPU time for the linear solver averaged over ten time steps. We observe that with increasing Young's moduli, all preconditioners require increasingly more iterations and time to converge. This can be understood from the wall contribution $\boldsymbol{\mathrm K}_{\mathrm{m},(l),\dot{\bm u}^w}$ in \eqref{eq: predictor_multi_correct_notation_for_block_matrices}$_1$, in which an increased wall stiffness engenders stronger heterogeneity for the block matrix $\boldsymbol{\mathrm A}$. In contrast, no wall contribution is present in the rigid-wall case, and all block preconditioners require fewer iterations and less time to converge. In addition, the additive Schwarz and Jacobi preconditioners closely resemble each other in convergence behavior and are evidently less robust than the block preconditioners, signifying the importance of leveraging the block structure of $\boldsymbol{\mathcal A}$.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[angle=0, trim=90 90 130 90, clip=true, scale = 0.33]{./figures/conv_history_cor_varwall_nu0d3.pdf} &
\includegraphics[angle=0, trim=90 90 130 90, clip=true, scale = 0.33]{./figures/conv_history_cor_varwall_nu0d5.pdf} \\
(A) $\nu = 0.3$ & (B) $\nu = 0.5$ \\
\multicolumn{2}{c}{ \includegraphics[angle=0, trim=420 120 840 170, clip=true, scale = 0.45]{./figures/conv_history_legend.pdf}} \\
\multicolumn{2}{c}{ \includegraphics[angle=0, trim=1140 120 320 170, clip=true, scale = 0.45]{./figures/conv_history_legend.pdf}}\end{tabular}
\caption{Convergence history for the coronary arterial model with two different values of the Poisson's ratio. The latter three items in the legend correspond to the three alternative linear solver options investigated. The horizontal dashed black line demarcates the prescribed stopping criterion for the relative error $\delta = 10^{-8}$. CPU times (s) for the linear solver averaged over ten time steps are annotated. NC: no convergence within the prescribed maximum number of iterations.}
\label{fig:conv_history_cor}
\end{center}
\end{figure}
The coronary arterial mesh consists of $1.66 \times 10^6$ elements and $3.15 \times 10^5$ nodes, corresponding to $1.26 \times 10^6$ degrees of freedom in the associated linear system. Solver performance was investigated for two values of the Poisson's ratio $\nu = 0.5$ and $0.3$. Simulations were performed on a single processor with $48$ CPUs. Figure \ref{fig:conv_history_cor} depicts all convergence histories and again annotates the CPU time for the linear solver averaged over ten time steps. Only minor differences are observed between the two cases, suggesting a smaller impact of $\nu$ on the linear system as compared to $E$.
\subsection{Fixed-size scalability}
We examined the parallel performance of our proposed solution strategy, setting the relative tolerances $\delta = 10^{-8}$, $\delta_{\mathrm A}=\delta_{\mathrm S}=10^{-4}$, and $\delta_{\mathrm I}=10^{-2}$. While the same pulmonary arterial mesh was used as in Section \ref{subsection:solver_robustness}, we used a finer coronary mesh of $6.44 \times 10^6$ linear elements and $1.25 \times 10^6$ nodes, corresponding to $5.01 \times 10^6$ degrees of freedom in the associated linear system. Speed-up ratios were calculated based on a serial simulation for the pulmonary mesh and a parallel simulation with $20$ CPUs for the coronary mesh. Each job was run for $20$ time steps. For the pulmonary arterial mesh, super-optimal parallel efficiency was observed for $2$ and $4$ CPUs (Figure \ref{fig:strong_scaling}), likely a consequence of more efficient utilization of the cache in these scenarios.
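The reported quantities can be computed as follows. The runtimes below are hypothetical; the speed-up is taken relative to the baseline run, and the efficiency is the speed-up divided by the ideal speed-up $n/\mathrm{base}$.

```python
def strong_scaling(runtimes, base):
    """Fixed-size (strong) scalability metrics. runtimes maps CPU counts
    to total runtimes (s); returns {n: (speed_up, efficiency)} relative
    to the run on `base` CPUs."""
    t_base = runtimes[base]
    return {n: (t_base / t, (t_base / t) * base / n) for n, t in runtimes.items()}

# Hypothetical timings: efficiency > 1 at 2 CPUs mirrors the
# super-optimal behavior observed for the pulmonary mesh.
metrics = strong_scaling({1: 1000.0, 2: 470.0, 4: 240.0, 16: 68.0}, base=1)
```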
\begin{figure}
\begin{center}
\includegraphics[angle=0, trim=120 80 130 100, clip=true, scale = 0.45]{./figures/FSI_strong_scaling.pdf}
\caption{Fixed-size scalability of our solution strategy. Annotated efficiency rates are computed from the total runtime.}
\label{fig:strong_scaling}
\end{center}
\end{figure}
\subsection{Performance of the segregated predictor multi-corrector algorithm}
\label{subsec:zero_kinematic_residual_numerical_evidence}
As discussed above in Remark \ref{remark:zero_kinematic_residual}, we have conveniently chosen $\boldsymbol{\mathrm R}_{\mathrm{k},(l)} = \bm 0$ for all $l \geq 1$ to allow for the simplified right-hand side in the linear system \eqref{eq:pred_multi_correct_linear_system} in the segregated predictor multi-corrector algorithm. In Table \ref{table:model_residuals}, we document the nonlinear residual $\boldsymbol{\mathrm R}_{(l)}$ and kinematic residual $\boldsymbol{\mathrm R}_{\mathrm{k},(l)}$ at all Newton-Raphson iterations within two time steps of the cardiac cycle for each model, one at peak systole and the other at mid-diastole. We note that $\boldsymbol{\mathrm R}_{\mathrm{k},(l)} < 10^{-12}$ beginning with $l=1$ and is driven close to machine precision for $l \geq 2$, closely agreeing with our prior analysis of the segregated algorithm \cite{Liu2019a}. We also note the expected quadratic convergence of the relative nonlinear residual in the first two iterations of the Newton-Raphson procedure. The convergence rate from the second to the third iteration is slightly reduced, likely a consequence of the linear solver accuracy.
\begin{table}[htbp]
\footnotesize
\begin{center}
\tabcolsep=0.3cm
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c | c c c c}
\hline
\hline
Model & $n$ & $l$ & $\boldsymbol{\mathrm R}_{(l)}/\boldsymbol{\mathrm R}_{(0)}$ & $\boldsymbol{\mathrm R}_{\mathrm{k},(l)}$ \\
\hline
\multirow{6}{*}{Pulmonary} & \multirow{3}{*}{$491$} & $1$ & $3.40 \times 10^{-1}$ & $4.73 \times 10^{-15}$ \\
& & $2$ & $1.81 \times 10^{-5}$ & $4.51 \times 10^{-16}$ \\
& & $3$ & $1.09 \times 10^{-7}$ & $6.78 \times 10^{-21}$\\
\cline{2-5}
& \multirow{3}{*}{$1246$} & $1$ & $2.23 \times 10^{-1}$ & $3.08 \times 10^{-15}$ \\
& & $2$ & $3.71 \times 10^{-5}$ & $2.35\times 10^{-16}$ \\
& & $3$ & $1.08 \times 10^{-7}$ & $3.50\times 10^{-18}$ \\
\hline
\multirow{6}{*}{Coronary} & \multirow{3}{*}{$446$} & $1$ & $2.64 \times 10^{0}$ & $2.49 \times 10^{-13}$ \\
& & $2$ & $4.22 \times 10^{-4}$ & $4.93 \times 10^{-14}$ \\
& & $3$ & $2.78 \times 10^{-7}$ & $4.44 \times 10^{-16}$ \\
\cline{2-5}
& \multirow{3}{*}{$1223$} & $1$ & $2.28 \times 10^{0}$ & $6.45 \times 10^{-14}$ \\
& & $2$ & $8.51 \times 10^{-5}$ & $1.18\times 10^{-14}$ \\
& & $3$ & $4.12 \times 10^{-8}$ & $1.94\times 10^{-18}$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{The nonlinear residual $\boldsymbol{\mathrm R}_{(l)}$ and kinematic residual $\boldsymbol{\mathrm R}_{\mathrm{k},(l)}$ at all nonlinear iterations within time steps corresponding to peak systole and mid-diastole.}
\label{table:model_residuals}
\end{table}
\subsection{Simulation with higher-order elements}
Given limitations of the meshing software MeshSim, we were unable to generate a suite of spatially homogeneous quadratic tetrahedral meshes with boundary layers that would be of tractable computational cost. We therefore investigated the spatial convergence of peak systolic WSS with isotropic pulmonary arterial meshes of linear and quadratic tetrahedral elements at three refinement levels of comparable numbers of nodes ($4.0 \times 10^5$, $8.0 \times 10^5$, and $1.6 \times 10^6$ nodes). For each model, the coarsest isotropic mesh was chosen to match the number of nodes in the linear tetrahedral mesh with three boundary layers used in Section \ref{subsection:solver_robustness}. WSS results from the boundary layer mesh were verified to be mesh independent and taken as reference values. Consistent with observations from our spatial convergence study in Section \ref{section:womersley_rigid}, quadratic elements resolve WSS more accurately (Figure \ref{fig:systolic-wss-spatial-convergence}). We do, however, note the presence of undesirable oscillations yielding sharp local gradients and local over- and underestimations of WSS.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[angle=0, trim=0 90 380 95, clip=true, scale=1.0]{./figures/pulm_systolicWSS-convergence.pdf}
\end{tabular}
\end{center}
\caption{Spatial convergence of peak systolic WSS using isotropic linear (left) and quadratic (right) tetrahedral meshes of the pulmonary arterial model at three refinement levels: (A) $4.0 \times 10^5$ nodes, (B) $8.0 \times 10^5$ nodes, (C) $1.6 \times 10^6$ nodes.}
\label{fig:systolic-wss-spatial-convergence}
\end{figure}
\subsection{Patient-specific simulations}
In order to assess our proposed techniques for variable wall thickness assignment and tissue prestressing, three simulations were performed for each model: (i) an unprestressed simulation with centerline-based thickness, (ii) a prestressed simulation with centerline-based thickness, (iii) a prestressed simulation with Laplacian-based thickness. In the centerline-based approach, the local thickness was prescribed to be $20$\% of the local radius everywhere; in the Laplacian approach, the thickness was prescribed to be $20$\% of the corresponding cap radius at all wall boundary nodes (Figure \ref{fig:patient-specific_systolic_comparisons}A). Simulations were performed over three cardiac cycles with uniform time steps and verified for convergence to a limit cycle. For each simulation, only the final cardiac cycle was analyzed.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[angle=0, trim=190 90 150 90, clip=true, scale=1.1]{./figures/pulm_cor-0183-trunc_systole_v3.pdf}
\end{tabular}
\end{center}
\caption{Effects of different wall thickness distributions and tissue prestressing on patient-specific pulmonary (left) and coronary (right) arterial models. (A) Centerline-based (left) and Laplacian wall thickness distributions. (B) Peak systolic wall displacement (left) and wall shear stress (WSS; right) magnitudes for unprestressed simulations with centerline-based thickness (top), prestressed simulations with centerline-based thickness (middle), and prestressed simulations with Laplacian-based thickness (bottom). Red arrows and detailed views are included to highlight WSS differences across the three cases.}
\label{fig:patient-specific_systolic_comparisons}
\end{figure}
The wall displacement and WSS distributions at peak systole were compared across the three cases (Figure \ref{fig:patient-specific_systolic_comparisons}B). Relative to Simulation (ii), failure to consider tissue prestressing in Simulation (i) \textit{overestimates} the maximum wall displacement magnitude by $37.9\%$ ($0.189$ vs. $0.137$ cm) in the pulmonary arterial model and by $162\%$ ($0.119$ vs. $0.0453$ cm) in the coronary arterial model. It also overestimates the mean displacement magnitude by $29.4$\% ($0.0616$ vs. $0.0476$ cm) over the main pulmonary arterial (MPA) bifurcation, by $159$\% ($0.01829$ vs. $0.00707$ cm) over the aortic root, and by $183$\% ($0.0267$ vs. $0.00942$ cm) over the left coronary artery (LCOR). While prestressing yields significantly different displacements in both models, our results suggest that prestressing is particularly critical in the systemic circulation where the diastolic pressure is an order of magnitude larger than that in the pulmonary circulation. In contrast, the Laplacian-based thickness in Simulation (iii) \textit{underestimates} the mean displacement magnitude by $29.8$\% ($0.0334$ vs. $0.0453$ cm) over the MPA bifurcation, by $45.3$\% ($0.00387$ vs. $0.00707$ cm) over the aortic root, and by $45.7$\% ($0.00511$ vs. $0.00942$ cm) over the LCOR. We note that while we prescribed Young's moduli that were previously determined to yield cross-sectional relative area changes observed from PC-MRI, tissue prestressing was not considered in these prior studies \cite{Yang2019,Ramachandra2016}. More compliant wall properties would thus need to be considered to achieve the same relative area changes under prestressing.
While local discrepancies in peak systolic WSS are also observed across the three cases, as highlighted by the red arrows in Figure \ref{fig:patient-specific_systolic_comparisons}B, these drastic discrepancies in displacement do not produce discrepancies in volumetric flow rates or spatially averaged WSS quantities. In fact, the mean WSS magnitude only differs by up to $2.03$\% over the MPA bifurcation, $0.174\%$ over the aorta, and $0.149$\% over the LCOR. Despite the rationale behind our proposed modeling techniques, the merits of Simulation (ii) remain to be assessed with in vivo and/or in vitro validation data.
\section{Conclusions}
\label{sec:conclusion}
In this work, we derived a reduced unified continuum formulation for vascular FSI and presented strong verification of our numerical methodology against Womersley's deformable wall theory using both linear and quadratic tetrahedral elements. Compared to the unified continuum ALE formulation, our reduced theory invokes three assumptions for the vascular wall to achieve monolithic FSI coupling in the Eulerian frame for small-strain problems. The residual-based VMS formulation is adopted for spatial discretization, and the generalized-$\alpha$ method is adopted for temporal discretization such that velocity and pressure are uniformly second-order accurate in time, a significant improvement over the predominant dichotomous approach. Block preconditioning of a monolithically coupled FSI system is also performed for the first time. Using two patient-specific models, we demonstrated the fixed-size scalability and enhanced robustness of our nested block preconditioner as compared to alternative preconditioners for vascular FSI applications. To appropriately model physiological phenomena, we further outlined a centerline-based approach for wall thickness assignment and a fixed-point algorithm for prestressing the vascular wall at the imaged configuration. Validation of our combined FSI methodology against in vivo and/or in vitro data will be examined in future studies.
\section*{Acknowledgements}
This work was supported by the National Institutes of Health [grant numbers 1R01HL121754, 1R01HL123689, R01EB01830204], Southern University of Science and Technology [startup grant number Y01326127], the National Natural Science Foundation of China [grant number 12172160], and the Guangdong-Hong Kong-Macao Joint Laboratory for Data-Driven Fluid Mechanics and Engineering Applications [grant number 2020B1212030001]. Ingrid S. Lan was supported by the National Science Foundation (NSF) Graduate Research Fellowship and the Stanford Graduate Fellowship in Science and Engineering. Computational resources were provided by the Stanford Research Computing Center, the Extreme Science and Engineering Discovery Environment supported by NSF [grant number ACI-1053575], and the Center for Computational Science and Engineering at Southern University of Science and Technology.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Electromagnetic resonators offer the possibility to enhance the interaction of waves with matter. In the RF or optical domain, they are the basis of many commercial devices, ranging from the microwave oven to lasers. At the most fundamental level, they permit exploration of the boundary between the classical and the quantum world, allowing experimental tests of quantum electrodynamics (QED) \cite{haroche2006exploring}.
The practical interest in active resonators, i.e. resonators filled with emitters, has been reactivated in the context of quantum storage, for which a complete mapping of the field carrying the information into a long-lived atomic system is necessary. Between the two extreme situations, a single emitter in an ultra-high-finesse cavity (cavity QED) on one side and a strongly absorbing medium in free space on the other, an intermediate regime exists. A weakly absorbing sample can be placed in a medium-finesse cavity to obtain a significant interaction.
I will consider the specific situation of an impedance-matched ring cavity. The matching condition means that the input mirror transmission equals the losses. I assume that the loss mechanism is dominated by absorption by the emitters of the active medium inside the resonator. Conversion into atomic excitation represents either a loss or a mapping of the incoming field, depending on the point of view. This approach has been particularly fruitful in non-linear optics for second-harmonic generation or, more generally, for frequency mixing \cite{kozlovsky1988efficient}. The incoming power is then fully converted into the target frequency. Following the same idea, the use of a matched cavity has been proposed for quantum light storage \cite{AFCcavity, PhysRevA.82.022311} and successfully implemented in a luminescent crystal \cite{SabooniCavity}. The incoming signal is then fully mapped into the atomic coherences, even if the single-pass absorption is moderate, provided it is compensated by the cavity quality factor. This situation is not restricted to optics but is also considered very actively in the RF domain with spins \cite{Schoelkopf.105.140501, PhysRevLett.105.140502, PhysRevLett.107.060502, PhysRevB.84.060501, PhysRevLett.107.220501}. Similar proposals have been made to store and retrieve microwave photons \cite{WilsonNJP, JulsgaardPhysRevLett.110.250503}. A resonator, in the optical or in the RF domain, is indeed advantageous compared to free-space propagation, which is why this technique is actively investigated in this topical context. It allows the use of low-absorption samples, reducing the constraint on the physical size of the medium or the concentration of emitters. Highly doped samples can indeed exhibit an important coupling between individual dipoles, thus reducing the coherence time.
This is the case, for example, for NV centers in diamond \cite{van1997dependences} and potentially for different kinds of impurities in insulators, such as rare-earth or transition ions. The statement can be generalized to atomic vapors: extreme optical depths can be obtained in Bose-Einstein condensates at the price of interactions or collisions between dipoles. Low-doping samples in a resonator are clearly an alternative. Using optically thin samples also facilitates the preparation stage, as described by Afzelius {\it et al.} \cite{AFCcavity}, when optical pumping can be operated from the side of the cavity. In the optical or RF domain, involving optical dipoles or spins, matched resonators offer interesting prospects.
The different protocols \cite{AFCcavity, PhysRevA.82.022311, WilsonNJP, JulsgaardPhysRevLett.110.250503, gao, PhysRevA.88.012304} to store and retrieve quantum signals are based on the coherent manipulation of spins or optical dipoles. A $\pi$-pulse is a tool of choice in that case because it permits the delayed emission of the stored signal, in the lineage of spin- and photon-echo experiments. The propagation of such strong pulses in free-space absorbing samples is a complex problem that has been known for decades \cite{allen1987ora, AreaTheorem}. More recently, the use of the area theorem has been proposed in the context of quantum memories \cite{Moissev_bull}. This general approach allows the derivation of analytic solutions for weak- and strong-area pulses. Later on, the distortion through the sample of the rephasing $\pi$-pulse was identified experimentally as critical to explain the observed storage efficiency \cite{Ruggiero2PE}. My main motivation is to transpose this propagation analysis to a resonator filled with absorbers.
For the rest of the paper, I study the ring cavity design (see fig. \ref{GTI}). The ring configuration has the advantage of theoretical simplicity. When the round-trip absorption is small, the traveling wave inside interacts with only one atomic mode. As a consequence, inside the resonator, the field and the atomic variables (population and polarization) are fully described by spatially independent parameters (the amplitudes of the field and atomic modes). This model has been extensively and accurately discussed in the context of optical bistability \cite{drummond1981optical, zakharov1995interaction, Zakharov}, based on previous work on the semi-classical description of lasers \cite{stenholm1969semiclassical}.
Even if my paper is restricted to the ring resonator design, it should be noted that the standing-wave configuration has the advantage of experimental simplicity. It involves only two mirrors, thus reducing the unwanted passive losses and allowing a more compact and stable setup. Semi-monolithic linear resonators are indeed widely used in optics for pulse compression \cite{GTI} or non-linear frequency conversion \cite{kozlovsky1988efficient}. The theoretical treatment is more complex because the standing wave induces a population grating in the non-perturbative regime. It couples the forward- and backward-propagating modes, fundamentally differing from the traveling-wave description \cite{drummond1981optical, zakharov1995interaction}. This is beyond the scope of the present paper, but it deserves a specific study because of its experimental simplicity and its current extensive usage for electronic spins coupled to planar waveguides \cite{PhysRevLett.105.140502, PhysRevLett.107.060502, PhysRevB.84.060501, PhysRevLett.107.220501}.
I finally assume the atomic transition to be inhomogeneously broadened (local shift of the transition). This allows the use of photon-echo techniques to control the sample and possibly offers a larger intrinsic interaction bandwidth. I assume the coupling constant to be identical for all atoms. This assumption is well suited to optics because the orientation of the transition dipoles is well defined with respect to the incoming polarization. It is generally not appropriate for spins in planar waveguides, because the magnetic dipoles can have different orientations and the magnetic field orientation around the waveguide has a radial dependence \cite{pozar2005microwave}. Even if I employ the optics terminology and use the corresponding assumptions, my approach can hopefully open some perspectives for RF or microwave resonators, which offer a whole bestiary of integrated designs with waveguides on different substrates \cite{pozar2005microwave}.
The aim of this paper is manifold. I primarily study the input/output relations of strong pulses in the matched cavity and point out the possible distortions. I also draw an analogy with free-space propagation through a thick sample, giving a solid basis for the intuition. I use the intracavity version of the McCall \& Hahn area theorem, which has already been derived by Zakharov in the context of optical bistability and superradiance \cite{zakharov1995interaction, Zakharov, Mossberg2001, Mossberg2003}. The area theorem is extremely helpful to predict the qualitative behavior of strong pulses, $\pi/2$- or $\pi$-pulses for example, which are fundamental tools for the manipulation of quantum systems. It is nevertheless still widely unknown in the quantum information community. My goal is to place the work of Zakharov in this context and show how helpful it is to estimate the possible distortions when the pulse travels through the medium. The analytic part is then essentially a recontextualization of previous results. My numerical calculations of the pulse shapes are strongly supported by the area and energy conservation rules, whose simplicity reinforces this kind of analysis. I propose a qualitative interpretation in terms of slow light. This explanation has the advantage of reintroducing the dispersion properties of the medium, which are particularly useful in the weak-signal limit. My approach is clearly inspired by the problem of optically active emitters in a resonant cavity. Input/output relations are well known in optics and are formally equivalent to a propagation problem in free space. They are less natural in the microwave domain (mostly in EPR spectroscopy, for which relatively large coupling strengths can be obtained), because the wavelength is usually larger than the sample. Considering the complete mapping of a microwave or optical photon into spins or optical dipoles, respectively, brings the two fields of study closer.
Bridging this gap between optics and RF is also an objective of this paper, which is then intended for a larger audience.
\section{Ring cavity model}
I consider a ring cavity uniformly filled with emitters (fig. \ref{GTI}). In this geometry, the outgoing field $\Omega_\mathrm{out}$ and the incoming one $\Omega_\mathrm{in}$ can be separated easily. The fields interact with only one mode of the cavity whose amplitude $\Omega$ uniquely characterizes the traveling-wave inside the resonator. All the fields are then completely defined by their time-varying Rabi frequencies.
\begin{figure}[htbp]
\centering\includegraphics[width=9cm]{GTI_ring.eps}
\caption{Active ring cavity uniformly filled with emitters. The entrance mirror is partially reflecting. The different fields are defined by their time-varying Rabi frequencies: $ \Omega_\mathrm{in}$ and $\Omega_\mathrm{out}$ represent the incoming and outgoing amplitudes on the entrance mirror, and $\Omega$ the intracavity mode amplitude. The spatial dependence can be neglected when the round-trip absorption is small. The local interference due to a partial beam overlap is supposed to be negligible within the cavity volume.}
\label{GTI}
\end{figure}
\subsection{Atoms dynamics and cavity master equations}
The coupled system between the cavity field and the emitters is described on one side by the input/output relations of the resonator, including the atomic polarization (source term), and on the other side by the Bloch equations characterizing the dipole dynamics. This model is well established in quantum optics \cite{CollettPhysRevA.30.1386, GardinerPhysRevA.31.3761, walls, WilsonNJP, JulsgaardPhysRevLett.110.250503}, including sophisticated descriptions of multilevel atomic schemes \cite{GorshkovI} and a complete quantum description of cavity QED with inhomogeneous broadening \cite{PhysRevA.82.022311}. Since I specifically study the strong-pulse distortions, I use a semi-classical formalism for the cavity master equation and the atomic variable dynamics. It can be naturally extended to describe the interaction at the quantum level \cite{PhysRevA.82.022311, WilsonNJP, JulsgaardPhysRevLett.110.250503, GorshkovI, PhysRevA.85.013844}.
Concerning the dynamics of the emitters, the transition is assumed to be only inhomogeneously broadened. The dipole dephasing and the population lifetime are neglected; in other words, the atomic response is examined in the coherent transient regime, as in echo-type experiments
\cite{allen1987ora}. In the Bloch vector formalism, the transverse and longitudinal decays are then neglected. In the rotating frame, the equations read:
\begin{equation}\label{Bloch}
\begin{array}{rl}
\partial_t \ensuremath{U_\Delta} &= \displaystyle - \Delta \; \ensuremath{V_\Delta} \vspace{2mm} \\
\partial_t \ensuremath{V_\Delta} &= \displaystyle\Delta \; \ensuremath{U_\Delta}+ \Omega \; \ensuremath{W_\Delta} \vspace{2mm} \\
\partial_t \ensuremath{W_\Delta} &= \displaystyle - \Omega \; \ensuremath{V_\Delta} \vspace{2mm}
\end{array}
\end{equation}
$\Delta$ is the detuning from the excitation frequency and is used as an index accounting for the inhomogeneous broadening. $\ensuremath{U_\Delta}$ and $\ensuremath{V_\Delta}$ are the in-phase and out-of-phase components of the Bloch vector. The real Rabi frequency $\Omega$ is the slowly time-varying envelope of the monochromatic field. The last equation describes the dynamics of the population $\ensuremath{W_\Delta}$.
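As a sanity check of eqs. (\ref{Bloch}), and not as part of the derivation, the system can be integrated numerically for a single detuning class. The sketch below (Python, arbitrary units) drives the atom with a Gaussian pulse of area $\pi$; the pulse duration, step size and detuning values are illustrative choices of mine. The resonant class ends up inverted, while a class detuned well outside the pulse bandwidth barely moves.

```python
import numpy as np

# Fourth-order Runge-Kutta integration of the Bloch equations for a single
# detuning class. The Gaussian drive of area pi and the detuning values are
# illustrative choices (arbitrary units), not parameters of the paper.
tau = 1.0

def rabi(t):
    # Gaussian pulse normalized to an area of pi
    return np.pi / (tau * np.sqrt(2 * np.pi)) * np.exp(-t ** 2 / (2 * tau ** 2))

def deriv(t, y, delta):
    u, v, w = y
    return np.array([-delta * v, delta * u + rabi(t) * w, -rabi(t) * v])

def evolve(delta, t0=-6.0, t1=6.0, dt=2e-3):
    y = np.array([0.0, 0.0, -1.0])   # atom initially in the ground state
    t = t0
    while t < t1 - 1e-12:
        k1 = deriv(t, y, delta)
        k2 = deriv(t + dt / 2, y + dt / 2 * k1, delta)
        k3 = deriv(t + dt / 2, y + dt / 2 * k2, delta)
        k4 = deriv(t + dt, y + dt * k3, delta)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y

w_res = evolve(0.0)[2]        # resonant class: inverted, W -> +1
w_det = evolve(5.0 / tau)[2]  # far-detuned class: W stays near -1
```

On resonance, the result reproduces the Rabi flopping solution $W_0 = -\cos\left(\int \Omega\, \ensuremath{\mathrm{d}} t\right)$ used below.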
Using the complex amplitude $\displaystyle \ensuremath{R_\Delta}= \ensuremath{U_\Delta}+ i\ensuremath{V_\Delta}$, the dipole moment's equation may be written in a more compact manner:
\begin{equation}\label{P}
\partial_t \ensuremath{R_\Delta} = \displaystyle i \Delta \; \ensuremath{R_\Delta} + i \Omega \; \ensuremath{W_\Delta}
\end{equation}
The cavity master equations, also known as input/output relations, are slightly modified as compared to the empty ring cavity case \cite{walls}. They include a source term corresponding to the atomic emission:
\begin{equation}\label{BMeqGTI}
\begin{array}{rl}
\displaystyle \frac{1}{\mathcal{D}} \frac{\ensuremath{\partial} \Omega} {\ensuremath{\partial} t}& = \displaystyle -\frac{\kappa}{2} \Omega + \sqrt{\kappa} \Omega_\mathrm{in} - i\alpha L \int_{\Delta} \; g(\Delta) \; \ensuremath{R_\Delta} \; \ensuremath{\mathrm{d}} \Delta \vspace{3mm} \\
\displaystyle \Omega_\mathrm{out}&=\sqrt{\kappa} \Omega - \Omega_\mathrm{in} \\
\end{array}
\end{equation}
The source term is the sum of the polarizations $\displaystyle \ensuremath{R_\Delta}$ over the inhomogeneous broadening $g(\Delta)$. I take $\displaystyle g(\Delta) =1$ over the spectrum of interest as the normalization, assuming the inhomogeneous broadening is much larger than the interaction bandwidth. This is an important assumption: even if numerical simulations can be implemented without this condition, it is required by the area theorem, as we will see in \ref{intracavity_area}. Since my paper precisely compares the intuition from the area conservation law with numerical simulations of the temporal shape, I make this assumption from now on. To summarize, the dynamics is assumed to verify: homogeneous linewidth $\ll$ interaction bandwidth $\ll$ inhomogeneous broadening \cite[p.3]{Eberly:98}.
It is clearly a restriction, because to optimize the resources one would naturally match the bandwidth and the inhomogeneous linewidth, as in \cite{PhysRevLett.105.140502, PhysRevLett.107.220501} for example. In that case the analytic area theorem is not valid anymore, and one can only rely on numerical simulations.
Coming back to the model description (eq. \ref{BMeqGTI}), the atom-light coupling constant is directly included in $\alpha$, the measured absorption coefficient. The convergence of the integral is ensured by $\displaystyle \ensuremath{R_\Delta}$, whose support is the excitation bandwidth.
The term $\alpha L$ equals the round-trip absorption of the uniformly filled resonator. It can be rescaled by the filling ratio if the sample is smaller than the cavity.
The cavity is described by its free spectral range, defined as $\mathcal{D}=\displaystyle \frac{c}{L}$ where $L$ is the round-trip length, and by the intensity transmission coefficient $\kappa$ of the entrance mirror.
I now focus on the specific condition where weak incoming pulses are completely mapped into the ensemble.
\subsection{Matched cavity condition}
The matching condition is defined in the perturbative limit: the populations are not modified (weak pulse) and are supposed to keep their initial value $\ensuremath{W_\Delta} = -1$. I define the reflection coefficient in the spectral domain by $\displaystyle \widetilde{\Omega}_\mathrm{out}= r\left(\omega\right)\widetilde{\Omega}_\mathrm{in}$, where $\displaystyle \widetilde{\Omega}_\mathrm{out}$ and $\displaystyle \widetilde{\Omega}_\mathrm{in}$ are the Fourier transforms of the outgoing and incoming pulses respectively. $r\left(\omega\right)$ completely characterizes the response of the cavity in the weak-signal limit. It reads
\begin{equation}\label{r}
r\left(\omega\right)=\displaystyle \frac{\kappa-2\pi \alpha L - 2i \omega/\mathcal{D}}{\kappa+2\pi \alpha L + 2i \omega/\mathcal{D}}
\end{equation}
The cavity is matched when the reflection is zero on resonance ($ \omega=0$):
\begin{equation}\label{match}
\alpha L =\frac{\kappa}{2 \pi}
\end{equation}
i.e. when the round-trip absorption equals the inverse of the finesse $\displaystyle \frac{2 \pi}{\kappa}$.
For a matched cavity, the intensity reflection $|r\left(\omega\right)|^2$ exhibits a dip whose FWHM defines the cavity linewidth $\Delta \omega_\mathrm{cav}$. It sets the scale of the achievable interaction bandwidth, as previously demonstrated by Moiseev {\it et al.} \cite{PhysRevA.82.022311, PhysRevA.88.012304}. It is then an important parameter: $\displaystyle \Delta \omega_\mathrm{cav}=2 \kappa \mathcal{D}$.
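The matching condition and the cavity linewidth can be checked directly from eq. (\ref{r}). The short sketch below uses the illustrative numbers adopted later for the simulations (finesse 500, $\mathcal{D}=3\,$GHz); the frequency grid is an arbitrary choice of mine.

```python
import numpy as np

# Numerical check of the weak-signal reflection coefficient r(omega) for a
# matched cavity. The parameter values (finesse 500, D = 3 GHz) are the
# illustrative ones used later in the paper; the frequency grid is arbitrary.
D = 3e9                       # free spectral range c/L (Hz)
kappa = 2 * np.pi / 500       # input-mirror intensity transmission (finesse 500)
alphaL = kappa / (2 * np.pi)  # matching condition: alpha*L = kappa/(2*pi)

def r(w):
    """Reflection coefficient of eq. (r); w is the angular frequency (rad/s)."""
    return (kappa - 2 * np.pi * alphaL - 2j * w / D) / \
           (kappa + 2 * np.pi * alphaL + 2j * w / D)

w = np.linspace(-3e8, 3e8, 600001)
dip = 1 - np.abs(r(w)) ** 2       # absorption dip, equal to 1 on resonance
inside = w[dip >= 0.5]
fwhm = inside[-1] - inside[0]     # cavity linewidth: should equal 2*kappa*D
```

One verifies that $r(0)=0$ and that the FWHM of the absorption dip equals $2\kappa\mathcal{D}=2\pi\times 12\,$MHz for these numbers.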
Because I neglect the atomic decay, the field is completely mapped into the dipoles without damping. This allows the implementation of the field-matter interface. The system is clearly conservative, so it is particularly helpful to use the energy conservation rule to evaluate the conversion from field to atoms.
\section{Energy and area conservation rules}
\subsection{Energy conservation}\label{energyconservation}
In the coherent transient regime, there is a conservation of quanta between the field (photons) and the excitation of the ensemble (population). Writing the equations with Rabi frequencies for the field envelope and with the absorption coefficient (eqs. \ref{Bloch} and \ref{BMeqGTI}) hides the microscopic parameters (energy quanta) but has the advantage of exhibiting experimentally measurable macroscopic parameters. The conservation of quanta can be straightforwardly retrieved by integrating the equations of motion \cite{allen1987ora}.
On the one side, the energy contained in the incoming (electromagnetic) field is given by the integration of the intensity, i.e. the incoming pulse energy $\displaystyle U_{\Omega} = \displaystyle \int_{t} |\Omega_\mathrm{in}|^2 \ensuremath{\mathrm{d}} t$.
On the other side, the sum of the population gives the energy stored in the atomic excitation. In the same units, it reads $\displaystyle U_{w}= \displaystyle \frac{\alpha L}{2 \pi} \displaystyle \int_{\Delta} \frac{ \ensuremath{W_\Delta} +1}{2} \ensuremath{\mathrm{d}} \Delta$.
Comparing the two forms of energy is a simple way to predict the qualitative shape of the output pulse, as we will see later. This analysis tells us how much energy is potentially left behind by the incoming pulse. The area conservation rule offers a second powerful tool.
\subsection{Intracavity area theorem}\label{intracavity_area}
The McCall-Hahn area theorem is extremely helpful in free space to guide the intuition when propagation in absorbing media is considered \cite{AreaTheorem}. It gives a simple analytic law for the pulse area defined as $\displaystyle \Theta=\int_{t}\Omega\ensuremath{\mathrm{d}} t$.
Its intracavity version has been derived by Zakharov \cite{zakharov1995interaction, Zakharov} and has been studied experimentally with doped solids \cite{Mossberg2001,Mossberg2003}. I briefly rederive it here to make the present article more self-contained. I recommend reading \cite{Eberly:98} for a pedagogical introduction and an accurate derivation of the area theorem in free space (see also \cite[p.19]{allen1987ora} and \cite[p.816]{mandel1995}). The intracavity version follows the same formal demonstration \cite{zakharov1995interaction}.
It is given by a time integration of the equation of motion (eq. \ref{BMeqGTI}):
\begin{equation}
\begin{array}{rl}
\frac{\kappa}{2} \Theta - \sqrt{\kappa} \Theta_\mathrm{in} &=\displaystyle - i\alpha L \int_{t} \int_{\Delta} \; g(\Delta) \; \ensuremath{R_\Delta}(t) \; \ensuremath{\mathrm{d}} \Delta \ensuremath{\mathrm{d}} t \vspace{2mm} \\
\Theta_\mathrm{out}&= \sqrt{\kappa} \Theta -\Theta_\mathrm{in}
\end{array}
\end{equation}
The source term is resolved by rewriting the Bloch equation (\ref{P}) in its integral form \cite{Eberly:98}. \begin{equation}
\ensuremath{R_\Delta}(t) = \displaystyle i\exp(i\Delta t)\int_{-\infty}^t \ensuremath{W_\Delta}(t^\prime) {\Omega(t^\prime)} \exp(-i\Delta t^\prime)\ensuremath{\mathrm{d}} t^\prime \end{equation}
Following \cite{Eberly:98}, and because the calculation of the source-term contribution is the same as in free space, the distribution $g(\Delta)$ is assumed to be much broader than the interaction bandwidth, so that the term $\exp(-i\Delta (t^\prime-t))$ is rapidly oscillating. This simplification is possible because of the three previously mentioned well-separated temporal dynamics: homogeneous linewidth $\ll$ interaction bandwidth $\ll$ inhomogeneous broadening. The integration of the field $\displaystyle \Theta=\int_{t}\Omega\ensuremath{\mathrm{d}} t$ introduces a time when the field has disappeared and the macroscopic polarization has vanished because of the inhomogeneous dephasing, while the coherences still oscillate freely. The oscillating term is a formal representation of the Dirac distribution centered on $\Delta=0$ \cite{allen1987ora, mandel1995}. After integration over the inhomogeneous profile, only $\ensuremath{W_0}(t)$, the on-resonance population, remains \cite[eq. (7)]{Eberly:98}. It is the solution of the Rabi flopping problem: \begin{equation}
\ensuremath{W_0}(t)=-\displaystyle \cos\left(\int_{-\infty}^{t} \Omega \ensuremath{\mathrm{d}} t^\prime \right)\end{equation}
I finally obtain the area theorem
\begin{equation}\label{area}
\begin{array}{rl}
\frac{\kappa}{2} \Theta - \sqrt{\kappa} \Theta_\mathrm{in} &= -\pi \alpha L \sin \left( \Theta \right) \vspace{2mm} \\
\Theta_\mathrm{out}&= \sqrt{\kappa} \Theta -\Theta_\mathrm{in}
\end{array}
\end{equation}
It gives input/output relations for the pulse area, whatever the exact incoming temporal shape.
The $\sin \left( \Theta \right)$ term introduces singular propagation effects for specific areas, integer multiples of $\pi$, as in the free-space situation \cite{Ruggiero:10}. Assuming a matched cavity (eq. \ref{match}) further simplifies the expression:
\begin{equation}\label{areamatched} \begin{array}{rl} \Theta - \frac{2}{\sqrt{\kappa}} \Theta_\mathrm{in} &= - \sin \left( \Theta \right) \vspace{2mm} \\ \Theta_\mathrm{out}&= \sqrt{\kappa} \Theta -\Theta_\mathrm{in} \end{array} \end{equation}
In the small-area limit (perturbative regime), one verifies that $ \Theta_\mathrm{out} =0$: no light escapes the cavity (matching condition). For incoming areas ranging from $0$ to $ \sqrt{\kappa}\pi$, I represent the outgoing and intracavity areas (fig. \ref{PlotSerie_matchedVSaire_article}). The latter ranges from $0$ to $ 2 \pi$ accordingly.
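In practice, eqs. (\ref{areamatched}) are transcendental. Since $\Theta\mapsto\Theta+\sin\Theta$ is monotone non-decreasing on $[0,2\pi]$ (its derivative $1+\cos\Theta$ is non-negative), the intracavity area can be obtained by a simple bisection. A minimal sketch, with the finesse value used later in the paper:

```python
import numpy as np

# Bisection solver for the matched-cavity area theorem. The left-hand side
# Theta + sin(Theta) is monotone non-decreasing on [0, 2*pi], so the root is
# unique there. The finesse value (500) is the illustrative one used later.
kappa = 2 * np.pi / 500

def intracavity_area(theta_in, tol=1e-12):
    target = 2 / np.sqrt(kappa) * theta_in
    lo, hi = 0.0, 2 * np.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + np.sin(mid) <= target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def outgoing_area(theta_in):
    return np.sqrt(kappa) * intracavity_area(theta_in) - theta_in
```

One recovers the singular points of fig. \ref{PlotSerie_matchedVSaire_article}: a weak pulse gives $\Theta_\mathrm{out}\simeq 0$, while $\Theta_\mathrm{in}=\frac{ \sqrt{\kappa}}{2}\pi$ gives $\Theta=\pi$ with a conserved area.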
\begin{figure}[!h] \begin{center}
\includegraphics[width=10cm]{PlotSerie_matchedVSaire_article.eps}
\caption{Outgoing area $\Theta_\mathrm{out}$ (top) and intracavity area $\Theta$ (bottom) as a function of the incoming area $\Theta_\mathrm{in}$, calculated from the matched-cavity area theorem (eqs. \ref{areamatched}). Squares correspond to the numerical simulation of the pulse temporal shapes that will be detailed later on (see \ref{simul}); they serve as a validation, the numerically calculated areas being compared to the analytic result of the area theorem.}
\label{PlotSerie_matchedVSaire_article}\end{center}\end{figure}
I now analyze the two singular situations for the area theorem: $\Theta = \pi$ and $\Theta = 2 \pi$. For both, and more generally for any integer multiple of $\pi$, the area is conserved. These pulses are essential for the manipulation of coherences in the context of quantum information storage.
\section{Pulse distortion}\label{distortion}
I show here that energy and area conservation rules can predict the general behavior of strong pulses through the cavity. The intuition is well supported by numerical results.
\subsection{$\pi$-pulses}
\subsubsection{Area and energy of $\pi$-pulses}
I define as $\pi$-pulses the pulses whose intracavity area is $\Theta = \pi$. They correspond to the incoming and outgoing values $\displaystyle \Theta_\mathrm{in}=\Theta_\mathrm{out}= \frac{ \sqrt{\kappa}}{2} \pi$. In other words, the area is conserved. This may be surprising at first sight, because the intracavity $\pi$-pulse inverts the medium and therefore leaves some energy behind. This apparent contradiction actually tells us that the pulse stretches so as to conserve its area while reducing its energy. My point is precisely to look at the energy conservation to evaluate these distortions qualitatively. If the pulse loses only a negligible fraction of its energy, it keeps its shape; this is not the case here. Let us compare the incoming energy and what is left in the atomic excitation by a $\pi$-pulse of bandwidth $\Delta \omega$ (see \ref{energyconservation}). Its amplitude $\Omega_\mathrm{in}$ scales as $\Delta \omega \frac{ \sqrt{\kappa}}{2} \pi$ by definition of a $\pi$-pulse, so I obtain $\displaystyle U_{\Omega} \sim \displaystyle \frac{\pi^2}{4} \kappa \Delta \omega $ for the incoming energy. On the other side, if the medium is inverted ($\ensuremath{W_\Delta} = 1$) over a bandwidth $\Delta \omega$, the population contains
$\displaystyle U_{w} \sim \displaystyle \frac{1}{2 \pi} \alpha L \Delta \omega $.
For a matched cavity (eq. \ref{match}), $U_{\Omega}$ and $U_{w}$ have precisely the same scaling with $\displaystyle \kappa \Delta \omega $. The ratio doesn't depend on the atomic or cavity parameters.
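Taking the two order-of-magnitude estimates above at face value, the bookkeeping can be written in a few lines; the numerical values of $\kappa$ and $\Delta\omega$ passed to the function are arbitrary, since only the scaling matters.

```python
import numpy as np

# Order-of-magnitude bookkeeping for an intracavity pi-pulse, taking the two
# estimates above at face value. Once the matching condition is inserted,
# both energies are proportional to kappa*d_omega, so their ratio is a pure
# number. The values of kappa and d_omega passed below are arbitrary.
def pi_pulse_energies(kappa, d_omega):
    alphaL = kappa / (2 * np.pi)                 # matched cavity
    U_field = np.pi ** 2 / 4 * kappa * d_omega   # incoming pi-pulse energy
    U_atoms = alphaL / (2 * np.pi) * d_omega     # inverted band of width d_omega
    return U_field, U_atoms

u1 = pi_pulse_energies(2 * np.pi / 500, 1e6)
u2 = pi_pulse_energies(2 * np.pi / 50, 1e8)
```

The ratio $U_\Omega/U_w$ is indeed a pure number, independent of the atomic and cavity parameters.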
It should be noted that the two forms of energy have the same order of magnitude precisely because the matching condition is assumed. On the contrary, the area theorem in general (eq. \ref{area}) does not depend on the absorption for $\Theta = \pi$. In other words, a $\pi$-pulse conserves its area whether the cavity is matched or not. This offers a certain degree of freedom if the pulse distortions have to be minimized, for example if the transmission of the input mirror can be changed. Even if the matching is required for a complete storage of the incoming signal, a dynamical control of the resonator parameters, as is possible for RF waveguides, would certainly open some perspectives.
\subsubsection{Discussion}
An equivalent energy comparison can be done for an arbitrary area. It tells us that the pulses have to leave a constant fraction of their energy behind, independently of their duration. Let us take weak pulses as a reference: they reduce both area and energy by simply reducing the pulse amplitude while keeping the duration constant. This option is not possible for a $\pi$-pulse, because the area must be conserved. The only possibility here is to combine amplitude reduction and pulse elongation. A $\pi$-pulse reduces its energy but roughly keeps its area. Major distortions of the incoming pulse are then expected. The analogy with strongly absorbing media in the free-space situation is remarkable \cite{Ruggiero:10}.
Before considering the numerical simulations, which are necessary to obtain the exact shape of the outgoing pulse, I treat the case of self-induced transparency (SIT).
\subsection{Self-induced transparency of $2\pi$-pulses}
The situation is clearly different for $\displaystyle \Theta_\mathrm{in}= \sqrt{\kappa}\pi$. The area should still be conserved, but the energy constraint is relaxed. The on-resonance atoms are returned to the ground state. Even if off-resonant atoms can still be excited, because they do not undergo a $2\pi$ area, less energy should be left behind as compared to a $\pi$-pulse. Because the energy constraint is less drastic, fewer distortions and a minimized absorption are likely.
Soliton-like behavior is expected, in analogy with free space, defining the SIT phenomenon \cite{allen1987ora, AreaTheorem}. Since the present paper is primarily focused on quantum storage, $2\pi$ rotations of the atomic state are not really interesting, because they essentially leave the system unchanged. $\pi$-pulses are more relevant in that sense: they can indeed rephase the inhomogeneous broadening (photon or spin echo) and be used in series for dynamical decoupling sequences \cite{PhysRevA.58.2733, slichter1990principles}.
\subsection{Numerical simulation of the pulse temporal shapes}\label{simul}
\subsubsection{Model and parameters}
The numerical simulation first requires a discretization of the detuning $\Delta$. I write the Bloch system of eq. (\ref{Bloch}) for $n$ evenly spaced detunings $\Delta_n$ (spacing $\ensuremath{\mathrm{d}} \Delta$), leading to $3n$ equations for $(U_{\Delta_n},V_{\Delta_n},W_{\Delta_n})$.
The cavity master equation (eq. \ref{BMeqGTI}) becomes
\begin{equation}
\displaystyle \frac{1}{\mathcal{D}} \frac{\ensuremath{\partial} \Omega} {\ensuremath{\partial} t} = \displaystyle -\frac{\kappa}{2} \Omega + \sqrt{\kappa} \Omega_\mathrm{in} - i\alpha L \sum_{n} \; ( U_{\Delta_n}+i V_{\Delta_n}) \; \ensuremath{\mathrm{d}} \Delta
\end{equation}
It gives a set of $(3n+1)$ coupled first-order differential equations for the variables $(U_{\Delta_1},V_{\Delta_1},W_{\Delta_1}, \cdots, U_{\Delta_n},V_{\Delta_n},W_{\Delta_n}, \cdots,\Omega)$.
The boundary conditions are given on the one hand by assuming the atoms to be in the ground state, $(U_{\Delta_n},V_{\Delta_n},W_{\Delta_n})=(0,0,-1)$, and on the other hand by giving an incoming temporal pulse shape $\Omega_\mathrm{in}(t)$. I solve this system of $(3n+1)$ equations numerically ($n=256$) by applying a fourth-order Runge-Kutta method.
For the numerical application, I choose a finesse of $\displaystyle \frac{2 \pi}{\kappa}=500$ and a free spectral range $\mathcal{D}= 3 \mathrm{GHz}$. These common parameters cover different physical realities. In the optical domain, they correspond to a 10-cm-long cavity, for which finesses of 1000 have been observed \cite{PhysRevA.74.033818, Goto:10} using monolithic samples. In the RF domain, a GHz range can be associated with the hyperfine splitting of NV centers in diamond \cite{PhysRevLett.105.140502, PhysRevLett.107.060502, PhysRevLett.107.220501} or with the electron spin of paramagnetic impurities in luminescent crystals \cite{Schoelkopf.105.140501, PhysRevB.84.060501}. The finesse in that case is not limited by the ultimate performance of the superconducting resonator but by the atomic linewidth (typically in the MHz range), which limits the quality factor (a few hundred). A finesse of $\displaystyle \frac{2 \pi}{\kappa}=500$ is then realistic and covers a wide range of physical systems.
The cavity is supposed to be matched (eq. \ref{match}). The different parameters are now fixed.
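The structure of the integration loop can be sketched as follows. This is not the production code behind the figures: to keep the run short, I rescale to units where $\mathcal{D}=1$, lower the finesse to 50 (keeping the matching condition, eq. \ref{match}) and use a coarse grid of $n=64$ detunings; all these numbers are illustrative.

```python
import numpy as np

# Sketch of the (3n+1)-variable Runge-Kutta integration, in rescaled units
# (D = 1, finesse 50, n = 64 detunings). The input is a weak Gaussian pulse;
# for a matched cavity it should be mostly absorbed (little reflection).
D = 1.0
kappa = 2 * np.pi / 50
alphaL = kappa / (2 * np.pi)            # matched cavity

n = 64
deltas = np.linspace(-0.05, 0.05, n)    # detuning grid (inhomogeneous line)
dDelta = deltas[1] - deltas[0]

tau, t0, amp = 200.0, 600.0, 1e-4       # weak Gaussian input pulse
def omega_in(t):
    return amp * np.exp(-(t - t0) ** 2 / (2 * tau ** 2))

def deriv(t, y):
    U, V, W, Om = y[:n], y[n:2 * n], y[2 * n:3 * n], y[3 * n]
    dU = -deltas * V
    dV = deltas * U + Om * W
    dW = -Om * V
    # for a real field, the source term -i*alphaL*sum(U + iV) reduces to
    # alphaL*sum(V): the U integral vanishes by symmetry of the line
    dOm = D * (-kappa / 2 * Om + np.sqrt(kappa) * omega_in(t)
               + alphaL * np.sum(V) * dDelta)
    return np.concatenate([dU, dV, dW, [dOm]])

y = np.concatenate([np.zeros(2 * n), -np.ones(n), [0.0]])  # ground state
dt, steps = 1.0, 2000
U_in = U_out = 0.0
for k in range(steps):
    t = k * dt
    om_out = np.sqrt(kappa) * y[3 * n] - omega_in(t)   # output relation
    U_in += omega_in(t) ** 2 * dt
    U_out += om_out ** 2 * dt
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, y + dt / 2 * k1)
    k3 = deriv(t + dt / 2, y + dt / 2 * k2)
    k4 = deriv(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For this weak matched pulse, the reflected energy fraction $U_\mathrm{out}/U_\mathrm{in}$ comes out small and some excitation is left in the populations, as expected from the matching condition.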
\subsubsection{Outgoing pulse shapes}
I choose Gaussian-shaped incoming pulses of $2\,\mu s$ duration for the simulation. The corresponding bandwidth, $2 \pi \times 80\,\mathrm{kHz}$, is much smaller than the cavity linewidth $\Delta \omega_\mathrm{cav}=2 \pi \times 12\, \mathrm{MHz}$. I keep the duration constant and vary the pulse amplitude, with the incoming area ranging from zero to $\Theta_\mathrm{in}= \sqrt{\kappa} \pi$ ($2\pi$-pulse). I then obtain the temporal shapes of the intracavity and outgoing pulses in fig. \ref{PlotSerie_matchedTemporal_article}.
I also perform a rudimentary test of my simulation by computing the intracavity area $\Theta$ and the outgoing area $\Theta_\mathrm{out}$. They can easily be compared (squares in fig. \ref{PlotSerie_matchedVSaire_article}) with the analytic result of the area theorem (eq. \ref{areamatched}).
\begin{figure}[!h] \begin{center}
\includegraphics[width=11cm]{PlotSerie_matchedTemporal_article.eps}
\caption{Top: incoming Gaussian pulses $ \Omega_\mathrm{in}$ of varying area, $\Theta_\mathrm{in}= 0.4 \frac{ \sqrt{\kappa}}{2} \pi$ in black, $\Theta_\mathrm{in}= \frac{ \sqrt{\kappa}}{2} \pi$ in red ($\pi$-pulse) and $\Theta_\mathrm{in}= \sqrt{\kappa} \pi$ in blue ($2\pi$-pulse). Middle and bottom: the corresponding intracavity pulses $\Omega$ and outgoing pulses $\Omega_\mathrm{out}$.}
\label{PlotSerie_matchedTemporal_article}\end{center}\end{figure}
The pulse for $\Theta_\mathrm{in}= 0.4 \frac{ \sqrt{\kappa}}{2} \pi$ behaves as a weak-area pulse, meaning it is mostly absorbed without distortion. No reflection at all is expected for areas much smaller than $\frac{ \sqrt{\kappa}}{2} \pi$.
The $2\pi$-pulse with $\Theta_\mathrm{in}= \sqrt{\kappa} \pi$ shows remarkable similarities with SIT in strongly absorbing media. It conserves its amplitude and shape and is essentially delayed. The delay scales as the incoming pulse duration, $2\,\mu s$ in that case \cite{allen1987ora}. This situation is peculiar and has been observed experimentally \cite{Mossberg2001, Mossberg2003}. It is less interesting in the context of quantum manipulation of qubits, but it would certainly deserve further investigation. I now consider more specifically the $\pi$-pulse situation.
The outgoing shape $\Omega_\mathrm{out}$ for $\Theta_\mathrm{in}= \frac{ \sqrt{\kappa}}{2} \pi$ is characterized by a long tail, much longer than the pulse duration. This strong distortion is precisely what is expected for a $\pi$-pulse, which has to leave most of its energy behind while conserving its area (see section \ref{distortion}). To make this comparison clearer, I plot the normalized outgoing pulse (fig. \ref{PlotRMSwidthVSaire_article}, left).
\begin{figure}[!h] \begin{center}
\includegraphics[width=12cm]{PlotRMSwidthVSaire_article.eps}
\caption{Left: normalized outgoing pulse $\Omega_\mathrm{out}$ for $\Theta_\mathrm{in}= \displaystyle 0.4 \frac{ \sqrt{\kappa}}{2} \pi, \frac{ \sqrt{\kappa}}{2} \pi$ and $ \displaystyle \sqrt{\kappa} \pi$ in black, red and blue respectively (as in fig. \ref{PlotSerie_matchedTemporal_article}). The normalized incoming pulse is plotted as a reference (dashed red). The long tail of the outgoing $\pi$-pulse (red) is prominent. Right: root-mean-square (rms) temporal widths of the calculated pulses (squares; a dashed line is used to guide the eye).}
\label{PlotRMSwidthVSaire_article}\end{center}\end{figure}
As previously mentioned, the $2\pi$ outgoing pulse (SIT) is remarkably similar to the incoming pulse (fig. \ref{PlotRMSwidthVSaire_article}, left, blue curve): it is essentially delayed. The almost-weak-area pulse $\Theta_\mathrm{in}= 0.4 \frac{ \sqrt{\kappa}}{2} \pi$ (fig. \ref{PlotRMSwidthVSaire_article}, left, black curve) is not distorted either. For a matched cavity it is strongly absorbed, so the observed shape and delay should be taken with caution, because they strongly depend on the numerical parameters. Temporal discretization may typically explain the pedestal on the rising edge of the outgoing pulse (fig. \ref{PlotRMSwidthVSaire_article}, black curve). In any case, it serves as a reference for the $\pi$-pulse, whose behavior is completely different: a long tail clearly appears.
\subsubsection{Outgoing pulse width}
To quantify the distortion, I plot the rms width $\sigma$ of the pulse (fig. \ref{PlotRMSwidthVSaire_article}, right). This standard deviation is computed from the normalized distribution $p_\mathrm{out}(t)=\displaystyle \frac{\Omega_\mathrm{out}(t)}{\Theta_\mathrm{out}}$ and defined as $\sigma = \sqrt{ \displaystyle \int_t \left( t - \mu \right)^2 p_\mathrm{out}\, \ensuremath{\mathrm{d}} t}$ where $\mu$ is the mean value $\mu= \displaystyle \int_t t p_\mathrm{out}\, \ensuremath{\mathrm{d}} t$. It equals $2\mu s$ for the incoming pulse, taken as a reference. For incoming areas close to $\Theta_\mathrm{in}= \frac{ \sqrt{\kappa}}{2} \pi$, elongations by more than an order of magnitude are observed. The distortion is extremely sensitive to the pulse area close to $\frac{ \sqrt{\kappa}}{2} \pi$. The numerical simulation allows me to put numbers on my qualitative analysis based on area and energy conservation.
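The rms width defined above can be evaluated directly on a sampled pulse. The sketch below is a minimal numerical illustration; the Gaussian test pulse and the grid are illustrative, not the simulated $\Omega_\mathrm{out}$.

```python
import numpy as np

def rms_width(t, omega):
    """Rms duration sigma of a sampled pulse Omega(t) on a uniform grid,
    computed from the normalized distribution p(t) = Omega(t)/Theta."""
    dt = t[1] - t[0]
    theta = np.sum(omega) * dt            # pulse area Theta (normalization)
    p = omega / theta                     # normalized distribution p(t)
    mu = np.sum(t * p) * dt               # mean arrival time
    return np.sqrt(np.sum((t - mu) ** 2 * p) * dt)

# Illustrative Gaussian pulse with standard deviation 2 (in microseconds):
t = np.linspace(-20.0, 20.0, 4001)
omega = np.exp(-t ** 2 / (2 * 2.0 ** 2))
sigma = rms_width(t, omega)  # ~2.0 for this test pulse
```

Applied to the simulated outgoing pulses, this is exactly the quantity plotted in fig. \ref{PlotRMSwidthVSaire_article} (right).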
\subsubsection{Slow-light interpretation}\label{SL}
The pulse distortion is usually associated with a steep dispersion profile. The slow-light phenomenon recently reappeared in the context of impedance-matched cavities \cite{sabooni2013three}, with particularly strong and observable effects. It is tempting to consider the reflection coefficient (eq. \ref{r}) as an effective susceptibility whose imaginary part plays the role of a refractive index. This would only be justified in the weak-signal perturbative limit and cannot be used in general to explain the outgoing shape of strong-area pulses. I nevertheless propose to consider the effective group delay for a qualitative analysis, essentially demonstrating that extreme dispersion is expected for an inverted medium. One can push the perturbative approach by looking at the reflection coefficient (eq. \ref{r}). I assume, for example, that at a given time the atomic population is excited, $\ensuremath{W_\Delta}(t) \geq -1$, and from that point calculate an effective reflection coefficient. For the sake of simplicity, since my goal is to derive an order of magnitude, I now assume the population $W$ constant over the bandwidth of interest. The population simply rescales the absorption coefficient, which takes the form $-W \times \alpha$, leading to gain or absorption depending on whether the medium is inverted or not. Starting from the generalized expression of the reflection coefficient
\begin{equation}\label{r_gen}
r_W\left(\omega\right)=\displaystyle \frac{\kappa+4\pi W \alpha L - 4i\pi \omega/\mathcal{D}}{\kappa-4\pi W \alpha L + 4i\pi \omega/\mathcal{D}}
\end{equation}
One can derive the group delay $T_g$ for a matched cavity (eq. \ref{match}) by a first-order expansion
$ \displaystyle
r_W\left(\omega\right)=r_W\left(0\right)+i\omega T_g+...
$
One simply finds
\begin{equation}\label{T_g}
r_W\left(0\right)=\frac{1+W}{1-W}\mbox{\hspace{1cm} and \hspace{1cm}} T_g=\frac{1}{\Delta \omega_\mathrm{cav}}\frac{4}{\left(1-W\right)^2}
\end{equation}
I now compare the initial situation in the ground state (the population is weakly modified by small-area pulses) and an inverted medium, as expected for $\pi$-pulses. For atoms in the ground state, the reflection tends toward zero and the group delay toward $\displaystyle \frac{1}{\Delta \omega_\mathrm{cav}}$, to be compared with the empty ring cavity case with a $\displaystyle \frac{4}{\Delta \omega_\mathrm{cav}}$ delay. When the medium is inverted, both the reflection $r_W\left(0\right)$ and the group delay diverge. This is extremely surprising at first sight, because an infinite reflection seems to break energy conservation. It does not, because the divergence is precisely associated with an infinite group delay: the weak pulse stays for an infinitely long time in the cavity. Because the ensemble is supposed inverted (and not depleted), it constantly gives its energy to the pulse, leading to an endless amplification, $r_W\left(0\right) \rightarrow \infty$. This analysis should not be confused with the distortion of a $\pi$-pulse. A strong pulse enters a non-inverted medium but creates the inversion through which it propagates. It cannot be accurately described by my perturbative expansion and precisely requires a numerical simulation of the coupled equations. We nevertheless understand qualitatively that the more the pulse inverts the medium, the slower it goes through the sample. It is not so surprising, in the end, that a long tail appears following the outgoing $\pi$-pulse.
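The scaling of the group delay (eq. \ref{T_g}) can be checked numerically against a finite-difference derivative of the generalized reflection coefficient (eq. \ref{r_gen}) at $\omega=0$. The sketch below assumes the matched condition $\kappa=4\pi\alpha L$ and the identification $\Delta\omega_\mathrm{cav}=\kappa\mathcal{D}/(2\pi)$, which makes the two expressions agree in magnitude; the parameter values are arbitrary illustrative units, not the experimental ones.

```python
import numpy as np

# Illustrative parameters (arbitrary units); matched cavity: kappa = 4*pi*alpha*L.
kappa, D = 1.0, 1.0
alphaL = kappa / (4 * np.pi)

def r(w, W):
    """Generalized reflection coefficient r_W(omega) for population W."""
    num = kappa + 4 * np.pi * W * alphaL - 4j * np.pi * w / D
    den = kappa - 4 * np.pi * W * alphaL + 4j * np.pi * w / D
    return num / den

def group_delay_numeric(W, h=1e-7):
    """|dr/domega| at omega = 0, i.e. the delay in the first-order
    expansion r(omega) = r(0) + i*omega*T_g + ..."""
    return abs((r(h, W) - r(-h, W)) / (2 * h))

def group_delay_analytic(W):
    dw_cav = kappa * D / (2 * np.pi)   # assumed cavity-bandwidth convention
    return 4.0 / (dw_cav * (1 - W) ** 2)
```

For $W=-1$ (ground state) both expressions give $1/\Delta\omega_\mathrm{cav}$, and both diverge as $W\rightarrow 1$, reproducing the qualitative discussion above.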
\section{Conclusion}
The intracavity and output shapes of a strong exciting pulse are extremely sensitive to its incoming area close to $\Theta_\mathrm{in}= \frac{ \sqrt{\kappa}}{2} \pi$. This corresponds to a singularity of the intracavity area theorem. The effect can be interpreted as a competition between the area and energy conservation rules: the pulse stretches because it reduces its energy while keeping its area. The parallel with free-space propagation is remarkable. This effect appears to be critical to explain the observed efficiency of the two-pulse echo in an optically-thick sample
\cite{Ruggiero2PE}. A similar result is expected for echo type experiments in a matched cavity.
A long pulse tail can be problematic when a weak signal has to be isolated from a strong control pulse. To get around this potential limitation, monochromatic $\pi$-pulses can advantageously be replaced by adiabatic inversion, as proposed in free space \cite{Damon}. Frequency-swept pulses are not only more robust, in terms of power fluctuations for example, but are also known to undergo less distortion when propagating through absorbing samples \cite{Warren}. A similar behavior is expected in a matched cavity. Frequency-swept pulses would deserve a specific study. The Bloch-Maxwell model can be extended to include the time-dependent phase of the field in a straightforward manner \cite{allen1987ora}. It should be noted that the area theorem is then no longer valid, in a certain sense relaxing the constraints on the pulse dynamics \cite{Eberly:98}.
\section*{Acknowledgments}
I would like to acknowledge useful discussions with P. Bertet, J.-L. Le Gou{\"e}t and A. Ourjoumtsev as well as financial support from the French national grant RAMACO no. ANR-12-BS08-0015-02. The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement no. 287252.
\end{document}
\section{Prologue: black hole baldness}
\textit{``In my entire scientific life,
extending over forty-five years, the most shattering
experience has been the realization that an exact solution
of Einstein's equations of general relativity, discovered
by the New Zealand mathematician, Roy Kerr, provides the
absolutely exact representation of untold numbers of
massive black holes that populate the universe.''}
\bigskip
This quote, by S. Chandrasekhar~\cite{Chandra}, highlights a central result from black hole (BH) theory: the uniqueness theorems of vacuum Einstein's gravity~\cite{Robinson:2004zz}. Such results endow the BH concept with such an extraordinary elegance and simplicity that Chandrasekhar could not help feeling a sense of awe. The same idea became carved in stone by John Wheeler's statement {\it ``BHs have no hair''}~\cite{Wheeler}. In a nutshell: whatever matter originates the BH, all information about it -- to which `hair' provides an image -- disappears, except for a small set of asymptotically measurable quantities.
In this essay, we will revisit the `BH no-hair' idea and show that, albeit compelling, there is new evidence to reconsider it.
\newpage
\section{Hairy black holes and horizonless solitons}
At present, ``hair'' is used in BH physics as a colloquial term to describe any measure --
beyond those subjected to a Gauss law, such as mass, angular momentum and electric/magnetic charges --
needed to fully describe the BH.
As such, there are by now a number of counterexamples to the no-hair idea,
for various nonlinear matter sources coupled to general relativity
in a four dimensional, asymptotically flat spacetime
(for reviews, see
\cite{Bizon:1994dh,Bekenstein:1996pn,Volkov:1998cc}).
These counterexamples are typically unstable and/or occur
in rather exotic theories, but they illustrate the \textit{mathematical} limitations of the no-hair idea.
An analysis of the known hairy black holes (HBHs) shows that these solutions, in a given model, typically occur together with horizonless soliton-like configurations, obtained from the HBH in the limit of vanishing horizon size. Conversely, a rule of thumb is that if some type of matter allows for solitons when coupled to gravity, then it also allows for HBHs.
This state of affairs has led to a description of some HBHs
as bound states of
ordinary BHs (without hair) and solitons,
within the isolated horizon formalism
\cite{Ashtekar:2000nx}.
From the above, it appears rather mysterious that
there are no known asymptotically flat BHs with scalar hair,
since a massive complex scalar field can condense to
form smooth horizonless bound states: {\it boson stars} (BSs).
BSs exist due to a balance between their self-generated gravity and the dispersive effect
due to the wave character of the scalar field \cite{Kaup:1968zz}.
Such solitons are arguably the physically most interesting gravitating solitons,
considered as possible BH mimickers and dark matter candidates
(see $e.g.$ the recent review \cite{Liebling:2012fv}).
The original no-hair theorems do not cover scalar fields with a harmonic time dependence,
but the results in \cite{Pena:1997cy} prove the absence of
BH generalizations of {\it spherically symmetric} BSs.
This led to a widespread belief that it is not possible to add a horizon in the interior of
\textit{any} BS,
without trivializing the scalar field.
This belief, however, turns out to be incorrect.
\section{\textit{Spinning} boson stars may wear black...}
The crucial new ingredient to obtain HBHs with scalar hair is \textit{spin}. HBHs must be studied using numerical methods, since no exact analytic solution is known for BSs, even with spherical symmetry.
We found HBH solutions by solving numerically the field equations with a metric ansatz
with two Killing vectors:
$\xi=\partial_t$
and
$\eta=\partial_\varphi$. These are not, however, Killing vectors of the full solution;
the scalar field $\Psi$ depends on both $\varphi$ and $t$ through a phase. Thus HBHs have a single Killing vector field, cf.~\cite{Dias:2011at}. The full ansatz reads:
\begin{eqnarray}
\label{metric-ansatz}
&ds^2=e^{2F_1\left(r,\theta\right)}\left(\frac{dr^2}{1-\frac{r_H}{r} }
+r^2 d\theta^2\right)+e^{2F_2(r,\theta)}r^2 \sin^2\theta (d\varphi-W(r,\theta) dt)^2-e^{2F_0(r,\theta)} \left(1-\frac{r_H}{r}\right) dt^2 ,
\nonumber
\\
\label{scalar-ansatz}
&\Psi=\phi(r,\theta)e^{i(m\varphi-w t)}.
\end{eqnarray}
$w>0$ is the frequency and $m=\pm 1,\pm 2$\dots
is the azimuthal winding number.
As for BSs, the existence of HBHs requires
the scalar field to be massive. $\Psi$ may also possess a self-interacting potential; but this is not mandatory.
$r_H\geq 0$ fixes the position of
the event horizon; thus the solitonic limit of HBHs is obtained by taking $r_H\rightarrow 0$ and yields the spinning
BSs in \cite{s1,Yoshida:1997qf}. The near horizon expansion of the solutions imposes that the horizon angular velocity obeys
\begin{eqnarray}
\label{cond}
\Omega_H=\frac{w}{m}.
\end{eqnarray}
This implies that there is no flux of scalar field
into the HBH, $\chi^{\mu}\partial_\mu \Psi=0$, where $\chi =\xi+\Omega_H \eta$ is the null horizon generator.
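The cancellation $\chi^{\mu}\partial_\mu \Psi=0$ follows directly from the ansatz: acting with $\chi=\xi+\Omega_H \eta$ on the phase annihilates $\Psi$ exactly when $\Omega_H=w/m$. A symbolic sketch, where $\phi$ is a placeholder for $\phi(r,\theta)$ (on which $\chi$ does not act):

```python
import sympy as sp

# Coordinates and parameters of the ansatz Psi = phi(r,theta) e^{i(m*varphi - w*t)}.
t, ph = sp.symbols('t varphi', real=True)
w, m = sp.symbols('w m', real=True, positive=True)
phi = sp.Symbol('phi', real=True)  # placeholder for phi(r, theta)

Psi = phi * sp.exp(sp.I * (m * ph - w * t))

# Horizon generator chi = d/dt + Omega_H d/dvarphi acting on Psi, with Omega_H = w/m:
Omega_H = w / m
chi_Psi = sp.diff(Psi, t) + Omega_H * sp.diff(Psi, ph)
# chi_Psi simplifies to 0: no scalar-field flux through the horizon.
```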
The HBH solutions were obtained by slowly increasing $r_H$ from zero, for given $m,w$~\cite{HR}.
They possess a nonvanishing
scalar field on and outside a (regular) horizon,
providing perhaps the simplest violation of the no-hair idea.
The scalar field surfaces of constant
energy density have a toroidal topology near the horizon and a spherical topology asymptotically -- Fig. 1.
\vspace{1.cm}
\setlength{\unitlength}{1cm}
\begin{picture}(8,6)
\put(-0.5,0.5){\epsfig{file=3DBS1.jpeg,width=7cm}}
\put(8,0.0){\epsfig{file=3DBH1.jpeg,width=8cm}}
\end{picture}
\\
{\small {\bf Figure 1.}
{\small
Isosurfaces of constant energy density,
$E_1=0.17$ (outside shell) and $E_2=3$, for a BS (left)
and a HBH (right), with parameters $m=1$, $w/\mu=0.81$ and
$(M\mu,J\mu^2,Q\mu^2)$=
$( 0.65,0.42,0.42)$ and $(0.72,0.49,0.46)$, respectively.
$\mu$ is the scalar field mass and Newton's constant is set to $G=1$.
The event horizon is the sphere at the center of the right plot.
}}
\section{... \textit{i.e.} \textit{spinning} black holes may wear scalar hair}
Since BHs can be added at the center
of spinning BSs, two questions arise: 1) \textit{how can we fully describe these solutions?} and 2) \textit{do these solutions connect continuously with (hairless) Kerr BHs?}
As for the first question, the only global charges of a HBH, computed using a Gauss law, are the mass $M$ and angular momentum $J$. In contrast to the Kerr case, they fail to fully specify the HBH solution. In fact, HBHs can even coexist with Kerr BHs in some region of the $(M,J)$ plane, providing a new example of non-uniqueness. Thus, to specify the solution, we add an extra quantity: the global Noether charge
\begin{eqnarray}
\label{Q}
Q=\int_{\Sigma}dr d\theta d\varphi ~j^t \sqrt{-g}, \qquad {\rm where} \ \ j^a=-i (\Psi^* \partial^a \Psi-\Psi \partial^a \Psi^*),
\end{eqnarray}
($j^a$ is a conserved current), which provides a quantitative measure of the scalar field outside the horizon.
Our results indicate that $(M,J,Q)$ specify a unique solution, after fixing $m$, $w$ and the number of scalar field nodes.
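For orientation, the lower-index components of $j_a$ can be computed symbolically for the ansatz. The sketch below uses plain partial derivatives and a placeholder $\phi$ for $\phi(r,\theta)$, so it only illustrates that the charge density is proportional to $w\phi^2$ and the angular-momentum density to $m\phi^2$; in eq. \eqref{Q} the index of $j^t$ is raised with the full metric and weighted by $\sqrt{-g}$.

```python
import sympy as sp

t, ph = sp.symbols('t varphi', real=True)
w, m = sp.symbols('w m', real=True, positive=True)
phi = sp.Symbol('phi', real=True, positive=True)  # placeholder for phi(r, theta)

Psi = phi * sp.exp(sp.I * (m * ph - w * t))

def j_lower(x):
    """Lower-index component j_x = -i (Psi* d_x Psi - Psi d_x Psi*)."""
    return sp.simplify(-sp.I * (sp.conjugate(Psi) * sp.diff(Psi, x)
                                - Psi * sp.diff(sp.conjugate(Psi), x)))

jt, jph = j_lower(t), j_lower(ph)   # -2*w*phi**2 and 2*m*phi**2
```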
The answer to the second question reveals another feature
which may have far-reaching implications.
For any given $m$,
the HBHs are indeed connected to a sub-family of Kerr BHs with a particular relation $M=M(J)$~\cite{HR}. It is straightforward to specify a physical requirement for this sub-family. Approaching the Kerr limit,
the scalar field becomes arbitrarily small and the geometry arbitrarily close to Kerr. The hair can therefore be seen as scalar \textit{bound states} -- since they have a time independent energy density and an exponential spatial decay -- of the Klein-Gordon equation in the Kerr background. Thus, the Kerr sub-family connected to HBHs must support scalar bound states with the given $m$.
Schwarzschild BHs do not support scalar bound-state solutions of the Klein-Gordon equation; they only allow modes that slowly decay into the BH. Kerr BHs, on the other hand, support both decaying and -- because of the \textit{superradiant instability}~\cite{Press:1972zz} -- growing modes. Bound states exist at the threshold of the instability. They obey~\eqref{cond} and define, for each $m$, precisely the aforementioned Kerr sub-family.
Thus, the scalar hair is the non-linear realization of the bound states obtained in linear theory (see~\cite{Hod:2012px} for a discussion of bound states for extremal Kerr). And HBHs branch off from the Kerr family at the threshold of the superradiant instability.
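The threshold condition can be made concrete with the standard Kerr horizon angular velocity, $\Omega_H = a/(r_+^2+a^2)$ with $a=J/M$ and $r_+=M+\sqrt{M^2-a^2}$ in Boyer-Lindquist coordinates ($G=c=1$): for each $m$, the Kerr sub-family supporting bound states is selected by $w=m\,\Omega_H(M,J)$. A minimal sketch (the function names are ours, for illustration):

```python
import math

def kerr_omega_h(M, J):
    """Horizon angular velocity of a sub-extremal Kerr BH (G = c = 1)."""
    a = J / M
    assert abs(a) <= M, "sub-extremal spin required"
    r_plus = M + math.sqrt(M * M - a * a)
    return a / (r_plus ** 2 + a ** 2)

def threshold_frequency(M, J, m):
    """Bound-state (superradiance-threshold) frequency w = m * Omega_H."""
    return m * kerr_omega_h(M, J)
```

For $J=0$ (Schwarzschild) the threshold frequency vanishes, consistent with the absence of bound states; for extremal Kerr, $\Omega_H = 1/(2M)$.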
%
%
\section{A recipe to grow hair}
The occurrence of a new branch of solutions at the threshold of a classical instability is a well-known
feature of BH physics. The Gregory-Laflamme instability~\cite{Gregory:1993vy} is a classic example.
The previous section established the same pattern for the superradiant instability.
Thus, more than an example of HBH solutions, the discussion presented in this essay provides a \textit{mechanism} for growing hair on BHs:
\bigskip
\textit{A (hairless) BH which is afflicted by the superradiant instability of a given field must allow hairy generalizations with that field.}
\bigskip
\section{Epilogue: how simple is the Universe?}
Is Chandrasekhar's realization quoted at the beginning of this essay at risk?
In other words, in view of these new families of HBHs is it possible
that the BHs populating the Universe will be different from the Kerr solution? The answer is intimately related to two other questions. Firstly: are there any fields in nature that can excite a superradiant instability of Kerr BHs? And secondly: do the HBHs presented here -- or their generalizations associated to other fields -- play a role, as transient or final states in the \textit{dynamical} development of that superradiant instability?
Understanding these questions will clarify if Chandrasekhar's amazing realization is justified or if Nature is not that simple.
\newpage
\section{(\boldmath{$c,e$})-Disks and Pruning Fronts}
\label{cedisks}
We will now give a preliminary definition of what we call $(c,e)$-disks.
Later we will add a dynamical hypothesis which is not necessary at present.
\bigskip
\begin{defn}
A closed disk $D$ is called a $(c,e)$-{\em disk}\/
if there are closed arcs $C, E \subset \partial D$ specified such that $\partial D=C
\cup
E$ and $C$ and $E$ only intersect at endpoints. In other words, for now,
a $(c,e)$-disk is just a {\em bigon}\/ with sides $C$ and $E$. We call
the common endpoints of $C$ and $E$ the {\em vertices}\/ of $D$.
\end{defn}
\begin{defn}
\label{longer}
Let $D_{1}, D_{2}$ be $(c,e)$-disks such that
$I_{1} \cap I_{2} \neq \emptyset$. We say $D_1$ is {\em e-longer}\/ or simply
{\em longer}\/ than $D_2$, denoted $D_{1} \succ D_2$, if (i), (ii) and (iii)
hold (see figure 6):
\begin{description}
\item [(i)] $C_{1} \cap I_{2} = \emptyset$ and $E_{2} \cap I_{1} = \emptyset$;
\item [(ii)] if $C_{1} \cap C_{2} \neq \emptyset$ then $C_{1} \cup C_{2}$ is an
arc and if $E_{1} \cap E _{2} \neq \emptyset$ then $E_{1} \cup E_{2}$ is an arc;
\item [(iii)] if ${\stackrel{\circ}{C_{1}} } \cap \overline{ I_{1} \cap I_{2} } \neq
\emptyset$ then $C_{1} \subset C_2$ and if ${\stackrel{\circ} {E_{2}} }\cap \overline{ I_{1}
\cap I_{2} } \neq \emptyset$ then $E_{2} \subset E_1$.
\end{description}
\end{defn}
\begin{figure}
\begin{center}~
\psfig{file=Fig6,height=2.5in}
\end{center}
\caption{The relation $\succ$.} \label{f6}
\end{figure}
\noindent{\sc Notation}: Let $D$ be a $(c,e)$-disk and $\alpha$ a cross-cut
joining the vertices of $D$. We have seen that $\alpha$ separates the
interior $I$ of $D$ into two Jordan domains whose boundaries are $C \cup
\alpha$ and $E \cup \alpha$. We denote them by $I^{c}(\alpha)$ and $I^{e}(\alpha)$,
respectively, and their closures by $D^{c}(\alpha), \ D^{e}(\alpha)$ (see figure 7.)
Moreover, when the disks are indexed and so are the cross-cuts we will
only use the index inside the parentheses so that $D^{c}(\alpha_{i})$
will denote the disk bounded by $C_{i} \cup \alpha_{i}$.
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig7,height=1.75in}
\end{center}
\caption{Cut $(c,e)$-disks.} \label{f7}
\end{figure}
\noindent{\sc Conventions}: If $D_{1}, \ldots , D_{L}$ is a collection of
$(c,e)$-disks, we say they are {\em related}\/ by $\succ$ if for any $i,
\ j
\in \underline{L}$ either $I_{i} \cap I_{j} = \emptyset$ or $D_{i} \succ D_{j}$ or
$D_{j} \succ
$D_{i}$. When nothing is mentioned about a collection of $(c,e)$-disks it
is assumed they are related by $\succ$.
Cross-cuts in $(c,e)$-disks, when nothing is mentioned to the contrary,
are assumed to be open and to join the vertices of the disk wherein they lie.
\bigskip
The following propositions are easy consequences of what we have
developed so far and we omit the proofs.
\bigskip
\begin{prop} \label{17}
If $D$
is a $(c,e)$-disk and $\alpha, \beta, \gamma \subset D$ are open cross-cuts
joining vertices such that $\beta \subset I^{c} (\alpha)$ and $\gamma \subset
I^{e}(\alpha)$ then $I^{c}(\beta) \subset I^{c}(\alpha) \subset I^{c}(\gamma)$ and
$I^{e}(\beta) \supset I^{e}(\alpha) \supset I^{e}(\gamma)$. $\Box$
\end{prop}
\begin{prop} \label{18}
Let $D_{1}$ and
$D_2$ be $(c,e)$-disks and $D_{1} \succ D_{2}$. Then
\begin{description}
\item[(i)] if ${\stackrel{\circ}{C_{1}} }\cap \overline{ I_{1} \cap I_{2} } \neq \emptyset$,
then $D_{1}, D_{2}|_{C_{1}}$ and
\item[(ii)] if ${\stackrel{\circ}{E_{2}} } \cap \overline{ I_{1} \cap I_{2} } \neq \emptyset$,
then $D_{1}, D_{2}|_{E_{2} }$. $\Box$
\end{description}
\end{prop}
\begin{defn}
A collection of pairs $\{(D_{i}, \beta_{i})\}^{L}_{i=1}$,
where $\{D_{i} \}^{L} _{i=1}$ is a collection of
$(c,e)$-disks related by $\succ$ and $\{ \beta_{i} \subset D_{i}
\}^{L}_{i=1}$
is a collection of open cross-cuts joining vertices, will be called a {\em
cut collection}.
\end{defn}
\begin{prop} \label{19}
Let $D_{1}$ and $D_{2}$ be $(c,e)$-disks and $D_{1} \succ D_{2}$. If
$C_{1}=C_{2}$ or $E_{1}=E_{2}$ then $D_{1}=D_2$.
\end{prop}
\noindent{\sc Proof}: Assume $C_{1}=C_{2}$. Then the endpoints of $E_{1}$ and
$E_{2}$ coincide (since they are the same as those of $C_{1}$ and $C_2$)
and, by (ii) in the definition of $\succ, \ E_{1} \cup E_2$ is an arc. But
this can only happen if $E_{1}=E_2$. $\Box$
\bigskip
\begin{prop} \label{20}
If $D_{1},
D_{2}$ are $(c,e)$-disks and $D_{1} \succ D_2$ and $D_{2} \succ D_1$ then
$D_{1}=D_2$.
\end{prop}
\noindent{\sc Proof}: The proof is easy and is left to the reader. $\Box$
\bigskip
\begin{prop} \label{21}
Let
$\{(D_{i}, \beta_{i} ) \}^{L}_{i=0}$ be a cut collection and $\varepsilon$ a
positive number.
If $D_{0} \not\prec D_i$ (i.e., either $I_{0} \cap
I_{i} = \emptyset$ or $D_{0} \succ D_{i}$ and $D_{0} \neq D_{i}$) for every $i
\in \underline{L}$ then there exists an open cross-cut $\alpha_{0} \subset
I^{c}(\beta_{0}) \cap V_{\varepsilon} (C_{0})$ joining vertices such that for
each $i \in \underline{L}$ either (i) or (ii) holds:
\begin{description}
\item[(i)] if ${\stackrel {\circ}{C_{0}} } \cap \overline{ I_{0} \cap I_{i} } \neq \emptyset$
then $[ I^{c}( \alpha_{0}) \cup \alpha_{0} ] \subset I^{c}(\beta_{i})$;
\item[(ii)] otherwise $[I^{c}(\alpha_{0}) \cup \alpha_{0} ] \cap D_{i} = \emptyset$.
\end{description}
If, on the other hand, $D_{0} \not\succ D_i$ for every $i \in \underline{L}$,
then there exists an open cross-cut $\alpha_{0} \subset I^{e}(\beta_{0}) \cap
V_{\varepsilon} (E_{0})$ such that for each $i \in \underline{L}$ either (iii) or (iv)
holds:
\begin{description}
\item[(iii)] if ${\stackrel {\circ}{E_{0}} } \cap \overline{ I_{0} \cap I_{i} }\neq
\emptyset$ then $[ I^{e} (\alpha_{0}) \cup \alpha_{0} ] \subset I^{e}(\beta_{i})$;
\item[(iv)] otherwise $[I^{e}(\alpha_{0}) \cup \alpha_{0}] \cap D_{i}= \emptyset$.
\end{description}
\end{prop}
\noindent{\sc Proof}:
We will prove (i) and (ii), the proof of (iii) and (iv) being analogous.
Divide the disks $D_i$ into two groups: (i) those for which $C_{0}
\cap \overline{I_{0} \cap I_{i} } \neq \emptyset$ and (ii) those for which $C_{0} \cap
\overline{I_{0} \cap I_{i} } = \emptyset$. If $D_i$ is in group (i), $I_{0} \cap
I_{i} \neq \emptyset$, so that by our assumption $D_{0} \succ D_i$ and, by
Proposition~\ref{18}, $D_{0}, D_{i}|_{C_{0}}$. Clearly
$D^{c}(\beta_{0}), D_{0}|_{C_{0} }$ and $D^{c}(\beta_{i}),
D_{i}|_{C_{i}}$ and, since $C_{0} \subset C_i$, we see that
$D^{c}(\beta_{0}), D_{0} |_{C_{0}}$, $D_{0}, D_{i}|_{C_{0}}$ and
$D_{i},D^{c}(\beta_{i})|_{C_{i}}$, by Proposition~
\ref{13}, imply that $D^{c}(\beta_{0}),
D^{c}(\beta_{i})|_{C_{0}}$ for every $D_i$ in group (i). It now follows
from Proposition \ref{10} and
Corollary~\ref{11}
that there exists an open
cross-cut $\alpha \subset I^{c}(\beta_{0}) \cap V_{\varepsilon} (C_{0})$ such that
$[I^{c} (\alpha ) \cup \alpha ] \subset I^{c} (\beta_{i})$.
On the other hand, for the disks $D_j$ in group (ii), $C_{0} \cap \overline{ I_{0}
\cap I_{j} } = \emptyset$ and since $I^{c} (\alpha ) \subset I_0$, it is also the
case that $C_{0} \cap \overline{I^{c} (\alpha ) \cap I_{j} } = \emptyset$. Thus, by
Proposition~\ref{14} there exists an open cross-cut $\alpha_0$
in $I^{c}(\alpha)$ such that for every $D_j$ in group (ii), $[I^{c} (\alpha _{0}
) \cup \alpha_{0} ] \cap D_{j} = \emptyset$. It is clear that such $\alpha_0$ also
satisfies $[I^{c} (\alpha_{0}) \cup \alpha_{0} ] \subset I^{c} (\beta_{i})$ for
every $D_i$ in group (i) (see figure 8.)
In the event that all the disks belong to one or the other of the groups,
the modifications necessary in the above proof are minor and are left to
the reader. $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig8,height=2.75in}
\end{center}
\caption{A cut collection where $D_{0} \succ D_{i}$ for every $i \in
\mbox{\underline{\it L}}$.}\label{f8}
\end{figure}
\begin{defn}
Let $\{(D_{i}, \beta_{i}) \}^{L}_{i=1}$ be a
cut collection, $S \subset \underline{L}$ and $\varepsilon > 0$. The collection $\{
\alpha_{i} \}_{i \in S}$ of disjoint open cross-cuts is said to be a
$(\varepsilon,c)$-{\em collection compatible with}\/ $\{ ( D_{i}, \beta_{i} ) \}
^{L}_{i=1}$ (see figure 9) if $\alpha_{i} \subset I^{c}(\beta_{i} ) \cap
V_{\varepsilon} (C_{i})$ and
for every $i \in S$ and $j \in \underline{L}$ such that $D_{i} \not\prec D_j$
either (i) or (ii) holds:
\begin{description}
\item[(i)] if ${\stackrel {\circ}{C_{i}} } \cap \overline{ I_{i} \cap I_{j} } \neq \emptyset$
then $[I^{c} (\alpha_{i} ) \cup \alpha _{i} ] \subset I^{c} (\beta _{j} )$;
\item[(ii)] otherwise $[I^{c} (\alpha_{i} ) \cup \alpha_{i}] \cap D_{j} = \emptyset$.
\end{description}
The collection $\{\alpha_{i}\}_{i \in S}$ is called a $(\varepsilon,e)$-{\em
collection compatible with}\/ $\{(D_{i},$ $\beta _{i}) \}^{L}_{i=1}$ if
$\alpha_{i} \subset I^{e} (\beta_{i} ) \cap V_{\varepsilon} (E_{i})$ and for every $i
\in S$ and $j \in \underline{L}$ such that $D_{i} \not\succ D_{j}$ either (iii)
or (iv) holds:
\begin{description}
\item[(iii)] if ${\stackrel{\circ}{E_{i}} } \cap \overline{ I_{i} \cap I_{j} } \neq \emptyset$
then $[ I^{e} (\alpha_{i}) \cup \alpha_{i} ] \subset I^{e} (\beta _{j} )$;
\item[(iv)] otherwise $[I^{e} (\alpha_{i}) \cup \alpha_{i} ] \cap D_{j} = \emptyset$.
\end{description}
\end{defn}
\begin{figure}
\begin{center}~
\psfig{file=Fig9,height=3in}
\end{center}
\caption{$\{\alpha_{1}, \alpha_{2} \}$ is a ($\varepsilon, e$)-collection and
$\{\alpha_{3}, \alpha_{4} \}$ is a ($\varepsilon, c$)-collection, both compatible with
$\{ (D_{i}, \beta_{i} ) \}^{4}_{i=1}$.} \label{f9}
\end{figure}
\noindent {\sc Remarks}: Notice that if $\{ \alpha_{i}\}_{i \in S}$ is a
$(\varepsilon,c)$-collection compatible with $\{ ( D_{i}, \beta_{i} ) \}$ and $\{
\gamma_{i} \}_{i \in S}$ is a collection of open cross-cuts joining the
vertices of $D_i$ such that $\gamma_{i} \subset I^{c} (\alpha_{i})$, then $\{
\gamma_{i} \}_{i \in S}$ is also a $( \varepsilon,c)$-collection compatible with
$\{(
D_{i}, \beta_{i} ) \}$. If moreover $\gamma_{i} \subset V_{\varepsilon '}(C_{i})$
then $\{ \gamma _{i} \}_{i \in S}$ is a $(\varepsilon ', c)$-collection. The
analogous statement holds true for $(\varepsilon,e)$-collections.
\bigskip
\noindent{\sc Warning}: As the reader may have already noticed, statements
about $c$-``things'' and $e$-``things'' are ``dual'' to one another and
most proofs are totally analogous in both cases. We will henceforward,
whenever there is nothing essentially different between the two, present
only the ``$c$-proof'' without further comments.
\bigskip
\begin{prop} \label{22}
Let $\{(D_{i}, \beta_{i}) \}^{L}_{i=1}$ be a
cut collection, $\{ \alpha_{i} \}_{i \in S}$ a $(\varepsilon,c)$-collec\-tion and $\{
\alpha_{i}' \}_{i \in S}$ a $(\varepsilon, e)$-collection both compatible with
$\{(D_{i}, \beta_{i} ) \}^{L}_{i=1}$. If $i, j \in S$ are such that $
D_{i} \succ D_{j}$ and $ D_{i} \neq D_j$ then:
\begin{description}
\item[(i)] ${\stackrel{\circ}{C_{i}} } \cap \overline{ I_{i} \cap I_{j} } \neq \emptyset$
implies $[I^{c} (\alpha_{i}) \cup \alpha_{i} ] \subset I^{c}(\alpha_{j} )$ and
\item[(ii)] ${\stackrel{\circ} {E_{j}} } \cap \overline{ I_{i} \cap I_{j} } \neq \emptyset$
implies $[I^{e}(\alpha_{j}') \cup \alpha_{j}' ] \subset I^{e} (\alpha_{i}')$.
\end{description}
\end{prop}
\noindent {\sc Proof}: From the definition of $\succ$ and
Proposition~\ref{18}
it follows, under the hypotheses above, that $C_{i} \subset
C_{j}$ and $D_{i}, D_{j} |_{C_{i}}$ and from the definition of
$(\varepsilon,c)$-collection, that $[I^{c}(\alpha_{i} ) \cup \alpha_{i} ] \subset I^{c}
(\beta _{i} )$. Therefore both $\alpha_i$ and $\alpha_j$ are open cross-cuts
in $I^{c}(\beta_{j})$. Since they are assumed to be disjoint (by
definition), $\alpha_i$ joins the endpoints of $C_i, \ \alpha_j$ those of
$C_j$ and $C_{i} \subset C_j$, it must be the case that $[I^{c} (\alpha_{i})
\cup \alpha_{i} ] \subset I^{c} (\alpha _{j})$, as we wanted. $\Box$
\bigskip
\begin{prop} \label{23}
Let $\{\alpha_{i} \}_{i \in S}$ be a ($\varepsilon ,c$)-collection
compatible with the cut collection $\{ ( D_{i}, \beta_{i} ) \}
^{L}_{i=1}$ and $\{\beta_{i}' \subset D_{i} \}^{L} _{i=1}$ a collection of
cross-cuts such that $\beta_{i}' \subset D^{e}(\beta_{i})$ for each $i \in
\underline{L}$. Then $\{\alpha_{i} \}_{i \in S}$ is also compatible with
$\{(D_{i}, \beta_{i}') \}^{L}_{i=1}$. If above we change ($\varepsilon,c$)-
to ($\varepsilon,e$)- and $D^{e}(\beta_{i})$ to $D^{c}(\beta_{i})$ the resulting
statement is true.
\end{prop}
\noindent {\sc Proof}: Since the collection of $(c,e)$-disks remains
unchanged all there is to check is that if $i \in S$ and $j \in \underline{L}$
are such that $D_{i} \succ D_{j}, \ {\stackrel{\circ}{C_{i}} } \cap \overline{ I_{i}
\cap I_{j} } \neq \emptyset$ implies $[I^{c} (\alpha_{i}) \cup \alpha_{i} ] \subset
I^{c} (\beta _{j} ' )$. But by the ``closed'' version of
Proposition~\ref{17},
$\beta_{j}' \subset D^{e} (\beta_{j} )$ implies
that $I^{c} (\beta_{j}' ) \supset I^{c} (\beta _{j})$. The result now
follows. $\Box$
\bigskip
\begin{cor} \label{24}
Let $\{\alpha_{i} \}_{i \in S}$ and $\{
\alpha _{i}'\}_{i \in S'}$ be a $(\varepsilon,c)$- and a $(\varepsilon ',e)$-collection
respectively, both compatible with the cut collection $\{ (D_{i},
\beta_{i} ) \}^{L}_{i=1}.$ Then $\{\alpha_{i}\}_{i \in S}$ is a
$(\varepsilon,c)$-collection compatible with the cut collection $$\{(D_{i},
\beta_{i}), \ i \in \underline{L} \setminus S' \} \cup \{ ( D_{i}, \alpha_{i}'); \ i \in
S'\}$$ and $\{ \alpha_{i}'\}_{i \in S'}$ is a $(\varepsilon,e)$-collection
compatible with $$\{(D_{i}, \beta_{i}); \ i \in \underline{L} \setminus S \} \cup \{ (
D_{i}, \alpha_{i} ); \ i \in S \}. \ \ \Box$$
\end{cor}
\begin{prop} \label{25}
Under the hypotheses of Corollary~
\ref{24},
if $i \in S$ and $j \in S'$ are such that $D_{i} \not\prec D_{j}$, then
$\alpha_{i} \cap \alpha_{j} = \emptyset$.
\end{prop}
\noindent{\sc Proof}: The proof is easy and is left to the reader. $\Box$
\bigskip
We still have to show that $(\varepsilon , c)$- and $(\varepsilon ,e )$-collections
exist. In the proof we will use the definition and the proposition below.
\bigskip
\begin{defn}
Let $\{D_{i}\}$ be a collection of $(c,e)$-disks
related by $\succ$. We say $D_i$ and $D_j$ are $c$-equivalent, and write
$D_{i} \sim_{c} D_j$, if there exists $D_k$ in the collection such that
$C_{i}, C_{j} \subset C_k$ and $D_{i}, D_{k} |_{C_{i}}$ and $D_{j}, D_{k}
|_{C_{j}}$. We define $e$-equivalence analogously by changing $c$-sides
to $e$-sides above, and denote it by $\sim_{e}$ (see figure 10.)
\end{defn}
\begin{figure}
\begin{center}~
\psfig{file=Fig10,height=2in}
\end{center}
\caption{An equivalence class for $\sim_c$. $D_1$ is the distinguished
representative.} \label{f10}
\end{figure}
\noindent{\sc Remark}: Notice that by this definition, $D_{i} \sim_{c} D_j$
if $C_{i} \subset C_{j}$ and $D_{i}, D_{j} |_{C_{j}}$ or vice versa and
analogously for $\sim_e$.
\bigskip
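Although the objects here are planar disks, grouping a finite collection into its $\sim_c$-classes is a purely combinatorial step once the pairwise relation is known. A toy sketch, treating disks as opaque labels with a user-supplied symmetric relation; union-find computes the equivalence closure, which coincides with $\sim_c$ itself once transitivity is established (as the proposition below shows):

```python
def equivalence_classes(items, related):
    """Group items into classes of the equivalence closure of a symmetric,
    reflexive pairwise relation, via union-find."""
    parent = {x: x for x in items}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, x in enumerate(items):
        for y in items[i + 1:]:
            if related(x, y):
                rx, ry = find(x), find(y)
                if rx != ry:
                    parent[rx] = ry

    classes = {}
    for x in items:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())
```

In each resulting class, the distinguished representative would be a disk whose $c$-side contains the $c$-sides of all others.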
\begin{prop} \label{26}
The
relations $\sim_c$ and $\sim_e$ defined above are equivalence relations.
If the collection $\{D_{i} \}^{L}_{i=1}$ is finite, each equivalence
class for $\sim_{c} \ (\sim_{e})$ has a distinguished representative
whose c-(e-)side contains the c-(e-)sides of all other disks in
its c-(e-)equivalence class. Moreover, in each c-(e-)equivalence
class the c-(e-)sides are unlinked in the c-side of
its distinguished representative.
\end{prop}
\noindent {\sc Proof}:
That $\sim_c$ is reflexive and symmetric
is clear. In order to prove transitivity, assume $D_{i}
\sim_{c} D_{j}$ and $D_{j} \sim_{c} D_{k}$. This means there exist
$D_{l}, \ D_{m}$ in the collection such that $C_{i}, C_{j} \subset C_{l}$
and $D_{i}, D_{l}|_{C_{i}}, \ D_{j}, D_{l}|_{C_{j}}$ and $C_{j}, C_{k}
\subset
C_{m}$ and $D_{j}, D_{m}|_{C_{j}}, \ D_{k}, D_{m}|_{C_{k}}$. It follows
from Proposition~\ref{13} that $D_{l},
D_{m}|_{C_{j}}$
and thus either $D_{l} \succ D_m$ or $D_{m} \succ D_{l}$. We may assume
$D_{l} \succ D_m$, the other case being analogous. Then, since $C_{j}
\subset C_l$ and $D_{l}, D_{m}|_{C_{j}}$, we have ${\stackrel {\circ}{C_{l}} } \cap \overline{
I_{l} \cap I_{m} } \neq \emptyset$, and from the definition of $\succ$ and
Proposition~\ref{18} we can conclude that $C_{l}
\subset C_{m}$
and $D_{l}, D_{m}|_{C_{l}}$. From this we see that $C_{i} \subset C_{m}$
and $D_{i}, D_{m}|_{C_{i}}$, which shows that $D_{i} \sim_{c} D_{k}$.
\bigskip
Consider now one $c$-equivalence class and let $D_i$ be an element in it
whose $c$-side is not strictly contained in the $c$-side of any other
disk in the same class. If $D_{j}\sim_{c}D_{i}$ then it must be the case
that $C_{j} \subset C_i$ for otherwise there would exist $D_k$ in the
collection for which $C_{i}, C_{j} \subset C_{k}$ and $D_{i},
D_{k}|_{C_{i}}$ and $D_{j}, D_{k}|_{C_{j}}$. But $D_{k} \sim_{c} D_{i}$
(see the remark just after the definition of $c$-equivalence) and if
$C_{j} \not\subset C_{i}, \ C_{k}$ contains $C_{i}$ strictly which is
contrary to our assumption. This shows that for every $D_{j}$ such that
$D_{j} \sim_{c} D_{i}$ we have $C_{j} \subset C_{i}$ and $D_{i}, D_{j}
|_{C_{j}}$. In order to see that the $c$-sides of disks in the
$c$-equivalence class of $D_{i}$ are unlinked in $C_{i}$ assume $D_{j}
\sim_{c} D_{k} \sim_{c} D_{i}$ and that $C_{j} \cap C_{k} \supset C$,
where $C$ is a closed arc. Since $D_{i},
D_{j}|_{C_{j}}$ and $D_{j}, D_{k}|_{C_{k}}$ by Proposition~\ref{13}, it
follows that $D_{j}, D_{k}|_{C}$. Then $I_{j} \cap I_{k} \neq \emptyset$ and we
must have $D_{j} \succ D_{k}$ or $D_{k} \succ D_{j}$ and by (iii) in the
definition of $\succ$, $C_{j} \subset C_{k}$ or $C_{k} \subset C_{j}$. $\Box$
\bigskip
\noindent{\sc Standing Convention}: If the lower index in an indexed union or
collection is larger than the upper one we will take the union or
collection to be empty, so that $\displaystyle{\bigcup ^{n-1}_{-n+1} }f^{k}(P) =
\emptyset$ when $n=0$. Also, recall that a bar under a positive integer
denotes the set of all positive integers smaller than or equal to it:
$\underline{L} = \{1,2, \ldots, L \}$. If $L=0$ we take $\underline{L}$ to be the empty
set as well.
\bigskip
We now go on to prove the existence of $(\varepsilon, c)$- and $(\varepsilon,
e)$-collections (see figure 11.)
\begin{prop} \label{27}
Let $\{(D_{i}(k), \beta_{i}(k) ); \ k= -1,0,1 \ {\mbox {\rm and}} \ i \in
\underline{L(k)} \}$ (where $L(k)$ is a nonnegative integer for each
$k=-1,0,1$) be a cut collection such that if $k < l$ then $D_{i}(k)
\not\succ D_{j}(l)$ for $i \in \underline{L(k)}$ and $j \in \underline{L(l)}$. Then given
$\varepsilon ,
\delta > 0$ there exist a $(\delta,e)$-collection $\{ \alpha_{i}(-1)
\subset D_{i}(-1) \}^{L(-1)}_{i=1}$ and a $(\varepsilon,c)$-collection $\{
\alpha_{j}(1) \subset D_{j}(1) \}^{L(1)}_{j=1}$ both compatible with $\{
(D_{i}(k), \beta_{i}(k) );\ k= -1,0,1 \ {\mbox{\rm and}} \ i \in
\underline{L(k)} \}$.
\end{prop}
\noindent {\sc Proof}: (See remark before the statement.) We may assume,
without loss of generality, that the distinguished representatives in the
$c$-equivalence classes among $\{ D_{i}(1); \ i \in \underline{L(1)} \}$ are the
first $n$ disks $D_{1}(1), \ldots, D_{n}(1)$. For each $i \in \underline{n}$
consider the cut collection $$\{ (D_{j}(k), \beta_{j}(k)); \ k=-1,0,1, \ j
\in \underline{L(k)}, D_{j}(k) \not\succ D_{i}(1) \} \cup \{ ( D_{i}(1),
\beta_{i}(1) ) \}.$$ By Proposition~\ref{21} there exists an open
cross-cut $\alpha_{i}(1) \subset I^{c} (\beta_{i}(1) ) \cap V_{\varepsilon} (C_{i}(1))$
satisfying (i) and (ii) of that proposition (with $\alpha_{i}(1)$ in place
of $\alpha_{0}$.) We do the same for every $i \in \underline{n}$ obtaining $\{
\alpha_{i}(1) \}^{n} _{i=1}$. These cross-cuts clearly satisfy (i) and
(ii) in the definition of $(\varepsilon,c)$-collections and $\alpha_{i}(1) \subset
I^{c} (\beta_{i}(1) ) \cap V_{\varepsilon}(C_{i}(1))$ by construction. In order
to see they are disjoint, let $i,j \in \underline{n}$. If $I_{i}(1) \cap
I_{j}(1) = \emptyset, \ \alpha_{i} (1) \cap \alpha_{j}(1)= \emptyset$ since $\alpha_{i}(1)
\subset I_{i}(1)$ and $\alpha_{j}(1) \subset I_{j}(1)$. If $I_{i}(1) \cap
I_{j}(1) \neq \emptyset$, then either $D_{i}(1) \succ D_{j}(1)$ or $D_{j}(1)
\succ D_{i}(1)$, say, $D_{i}(1) \succ D_{j}(1)$. It follows that
$\stackrel{\circ}{C_{i}} (1) \cap \overline{ I_{i}(1) \cap I_{j}(1) }= \emptyset$ for
otherwise $C_{i}(1) \subset C_{j} (1)$ and $D_{i}(1), D_{j}(1)
|_{C_{i}(1)}$, which goes against our assumption that $C_{i}(1)$ was the
distinguished representative in its $c$-equivalence class. From this we
can conclude that $[ I^{c} ( \alpha_{i} (1) ) \cup \alpha_{i} (1) ] \cap D_{j}(1)
= \emptyset$ and thus that $\alpha_{i}(1) \cap \alpha_{j} (1) = \emptyset$. Indeed we have
shown more, namely that $$[I^{c}(\alpha_{i}(1) ) \cup \alpha_{i}(1) ] \cap
[I^{c} ( \alpha_{j} (1) ) \cup \alpha_{j} (1)] = \emptyset$$ for any $i,j \in \underline{n}$.
\bigskip
We now look at the disks in one $c$-equivalence class. By
Proposition~\ref{26} the $c$-sides of the elements in the class are
unlinked in the $c$-side of its distinguished representative, $D_{i}(1)$
say. By Proposition~\ref{16} it is possible to find disjoint open
cross-cuts $\alpha_{j}(1) \subset I^{c}( \alpha _{i} (1) )$ joining the
endpoints of $C_{j}(1)$ such that $\alpha_{j}(1) \subset I^{c} ( \beta_{j}(1) )
\cap V_{\varepsilon} (
C_{j}(1) )$ for every $j$ such that $D_{j}(1) \sim_{c} D_{i}(1)$. Doing this
for each $c$-equivalence class we find a collection of disjoint open
cross-cuts $\{ \alpha_{i}(1) \}^{L(1)}_{i=1}$ satisfying the conditions in
the definition of a $(\varepsilon,c)$-collection compatible with $\{ ( D_{i}(k),
\beta_{i}(k) ) \} \ \ \Box$.
\begin{figure}
\begin{center}~
\psfig{file=Fig11,height=2.5in}
\end{center}
\caption{$\{ \alpha (1) \}$ is the ($\varepsilon,c$)-collection and $\{ \alpha (-1)
\}$ is the ($\varepsilon,e$)-collection, both compatible with $\{ (D(k),
\beta(k)); k = -1,0,1 \}$} \label{f11}
\end{figure}
We will now introduce dynamics in our discussion and add to the definition
of $(c,e)$-disks a new requirement, as we promised earlier. Let $f: \pi
\rightarrow \pi$ be a plane homeomorphism which will remain fixed for
the remainder of this work.
\bigskip
\noindent (C,E) {\sc Dynamical Assumption}: All $(c,e)$-disks henceforth will
be assumed to satisfy (i) and (ii):
\begin{description}
\item[(i)] $\displaystyle{\lim_{n \rightarrow \infty} }$ diam $f^{n} (C) = 0$;
\item[(ii)] $\displaystyle{\lim_{m \rightarrow - \infty} }$ diam $f^{m}(E) = 0$.
\end{description}
\noindent The main purpose of the present work is to isotop away dynamics of
$f$ in a controlled manner. We will now define sets within which it is
possible to do this, namely, to destroy all dynamics within them by an
isotopy which is identically equal to $f$ outside them. We call them
{\em pruning fronts}\/ after the work of Predrag Cvitanovi\'c~\cite{C}.
\begin{defn}
Let $\{D_{i} \}^{L} _{i=1}$ be a collection of $(c,e)$-disks (satisfying
the dynamical assumption above) such that (i), (ii) and (iii) hold:
\begin{description}
\item[(i)] $\succ$ can be extended by transitivity to a partial order
on $\{ D_{i} \}^{L} _{i=1}$
or, equivalently, there are no ``loops'' $D_{i_{1}} \succ D_{i_{2}} \succ
\ldots \succ D_{i_{n}} \succ D_{i_{1}}$;
\item[(ii)] for every $n> 0$ and $i,j \in \underline{L}, \ f^{n} (D_{i})
\not\prec D_{j}$;
\item[(iii)] for every $m <0$ and $i,j \in \underline{L}, \ f^{m}(D_{i} )
\not\succ D_{j}$.
\end{description}
Such a collection will be called a {\em pruning collection}. Its locus
$\overline{P} = \displaystyle{\bigcup ^{L} _{i=1} } D_{i}$ (see \cite{C} and the
comments before the definition) will be called a {\em pruning front}.
\end{defn}
\noindent {\sc Notation}: We will use $\geq$ to denote the extension of
$\succ$ to a partial order and keep $\succ$ to denote the binary relation
as we defined previously.
\bigskip
Before we proceed, let us say a word about finite partially ordered
sets. If $(X, \geq )$ is one such we define the set of {\em initial
elements}\/ of $X$ to be
$$ I(X) = \{x \in X; \ \forall y \in X, \ y \leq x \Longrightarrow y = x
\} $$
\noindent It is easy to see that if $X$ is finite and nonempty, $I(X)$ is
nonempty and that no two distinct elements in $I(X)$ are related by
$\geq$. Now let $X_{1} = I(X)$ and inductively set $X_{n} = I ( X \setminus
\displaystyle{\bigcup ^{n-1} _{i=1} } X_{i} )$. From what we have said, $X_n$ is
nonempty if $X \setminus \displaystyle { \bigcup^{n-1}_{i=1} }X_{i}$ is nonempty. Since
$X$ is finite, there exists $n \geq 1$ such that $X_{1}, X_{2}, \ldots,
X_{n}$ are all nonempty and for $m>n, \ X_{m} = \emptyset$. Clearly $X_{1},
\ldots, X_{n}$ is a partition of $X$ and if $X_i$ has $s_i$ elements we
can list the elements of $X = \{ x_{1}, x_{2}, x_{3}, \ldots, x_{L} \}$
so that the first $s_1$ elements are those in $X_1$, the next $s_2$
elements are those in $X_2$ and so on. In this way the subscripts
reflect the partial order in the sense that if $i < j$ then $x_{i}
\not\geq x_{j}$. Having said this we adopt the following
\bigskip
\noindent{\sc Convention}:
Henceforth it will be assumed that the subscripts in a pruning collection
reflect the partial order $\geq$ in the sense that if $i < j$ then $D_{i}
\not\geq D_{j}$. Notice that, in particular, if $i < j$ then $D_{i}
\not\succ D_{j}$.
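\bigskip
\noindent To fix ideas, here is a small example of the listing procedure above (the example is ours and is included only for illustration). Let $X = \{ x_{1}, x_{2}, x_{3}, x_{4} \}$ be partially ordered by $x_{4} \geq x_{3}, \ x_{3} \geq x_{1}$ and $x_{3} \geq x_{2}$ (together with the relations forced by transitivity.) Then
$$X_{1} = I(X) = \{ x_{1}, x_{2} \}, \ \ X_{2} = I( \{ x_{3}, x_{4} \} ) = \{ x_{3} \}, \ \ X_{3} = \{ x_{4} \},$$
so that $s_{1} = 2, \ s_{2} = s_{3} = 1$, and the listing $x_{1}, x_{2}, x_{3}, x_{4}$ already reflects the partial order: if $i < j$ then $x_{i} \not\geq x_{j}$.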
\bigskip
We can now state a proposition containing one of the main ingredients in
the proof of the main theorem (see figure 12.)
\begin{prop}\label{28}
Let $\{D_{i} \}^{L}_{i=1}$ be a pruning collection and $\{\varepsilon_{n}
\}^{\infty}_{n=0}$ a sequence of positive numbers converging to zero.
Then there exists a collection $\{ \alpha_{i} (n) \subset f^{n}(D_{i}); \ i
\in \underline{L}, \ n \in {\Bbb{Z}} \}$ of disjoint open cross-cuts joining the
vertices of $f^{n}(D_{i})$ such that (i) and (ii) below hold:
\begin{description}
\item[(i)] For each $n \geq 1, \ \{\alpha _{i} (n); \ i \in \underline{L} \}$ is a
$(\varepsilon_{n},c)$-collection compatible with
\begin{eqnarray*}
&&\{(f^{k}(D_{j}), \ \alpha_{j}(k));
\ j \in \underline{L}, \ -n +1 \leq k \leq n-1 \} \\
&&\cup \{ ( f^{n} (D_{j}), \
f(\alpha_{j} (n-1) )); \ j \in \underline{L} \}
\end{eqnarray*}
\item[(ii)] For each $m \leq 0, \ \{ \alpha_{i} (m); \ i \in \underline{L} \}$ is
a $(\varepsilon_{|m|},e)$-collection compatible with
\begin{eqnarray*}
&&\{ (f^{k}(D_{j}),
\alpha_{j}(k)); \ j \in \underline{L}, \ m+1 \leq k \leq -m+1 \} \\
&& \cup \{ ( f^{m}(D_{j}
), f^{-1} ( \alpha_{j} (m+1))); \ j \in \underline{L} \}.
\end{eqnarray*}
\end{description}
\end{prop}
\noindent{\sc Proof}: We will let $m=-n+1$ and use induction on $n$. In order
to prove the proposition for $n=1$, choose any collection $\{ \beta_{i}
\subset D_{i} \}^{L} _{i=1}$ of open cross-cuts joining vertices and
apply Proposition~\ref{27} with $L(0)=0$ (so that $\underline{L(0)} = \emptyset$ and
$\{ (D_{i}(0), \beta_{i} (0) ) \} = \emptyset$) to the cut collection
$${\mathcal{D}}
= \{ ( D_{i}, \beta_{i});\ i \in \underline{L} \} \ \cup \{ ( f(D_{i}), \
f(\beta_{i})); \ i \in \underline{L} \}$$
where $\{(D_{i}, \beta_{i} ) \}$ and
$\{ (f (D_{i}), f(\beta_{i} )) \}$ play the roles of $\{(D_{i}(-1),
\beta_{i}(-1)) \}$ and $\{(D_{i}(1), \ \alpha_{i}(1) ) \}$ respectively in
the statement of that proposition, whereas $\varepsilon=\varepsilon_{1}$ and $\delta =
\varepsilon_{0}$. By the definition of pruning collection, $f(D_{i}) \not\prec D_{j}$
for any $i, j \in \underline{L}$ so that $\mathcal{D}$ satisfies the hypotheses and
we can conclude there exist $\{\alpha_{i}(1) \}^{L}_{i=1}$ and $\{\alpha_{i}
\}^{L}_{i=1}$ a $(\varepsilon_{1},c)$- and a $(\varepsilon_{0},e)$-collection
respectively,
both compatible with $\mathcal{D}$. Since $\alpha_{i} \subset I^{e}(\beta_{i} )$
and therefore $f(\alpha_{i}) \subset I^{e} (f (\beta_{i} ))$, by
Proposition~\ref{23}, and Corollary~\ref{24}, $\{\alpha_{i}(1) \}^{L} _{i=1}$
is a
$(\varepsilon_{1},c)$-collection compatible with $$\{ ( D_{i}, \alpha_{i} ); \ i \in
\underline{L} \} \cup \{ ( f (D_{i} ), f( \alpha_{i} ) ); \ i \in \underline{L} \}.$$ By
the same token $\{ \alpha_{i} \} ^{L}_{i=1}$ is a $(\varepsilon_{0},e)$-collection
compatible with $$\{(D_{i}, f^{-1} (\alpha_{i} (1) )); \ i \in \underline{L} \} \cup \{
( f (D_{i}), \alpha_{i}(1) ); \ i \in \underline{L} \}.$$ That $\alpha_{i}(1)\cap \alpha_{j} =
\emptyset$ for $i, j \in \underline{L}$ is a consequence of Proposition~\ref{25}.
This proves the proposition for $n=1, \ m=0$.
\medskip
Assume we have constructed a collection $$\{\alpha_{i} (k); \ i \in \underline{L},
\ -n +2 \leq k \leq n-1 \}$$ of disjoint open cross-cuts satisfying the
conclusions of the proposition. Consider the cut collection
\begin{eqnarray*}
{\mathcal{D}} &= &\{ ( f^{n} (D_{i}), \ f(\alpha_{i} (n-1) )); \ i \in \underline{L} \} \\
&& \cup \ \{ (f^{k} (D_{i}), \alpha_{i}(k) ); \ i \in \underline{L}, \
-n +2 \leq k \leq
n-1 \} \\
&& \cup \ \{ (f^{-n+1} (D_{i}), f^{-1} (\alpha_{i}(-n+2))); \ i
\in \underline{L} \}
\end{eqnarray*}
\noindent and apply Proposition~\ref{27} with $\{(D_{i} (1), \beta_{i}(1) )
\}, \
\{ ( D_{i} (0), \alpha_{i} (0) ) \}$ and $\{ ( D_{i}(-1), \ \alpha_{i}(-1)) \}$
equal to the first, second and third collections respectively, in the
above union, letting $\varepsilon = \varepsilon_{n}$ and $\delta= \varepsilon_{|-n+1|}$. From
the definition of pruning collection, $f^{n}(D_{i}) \not\prec f^{k}(D_{j})$ for
any $k < n$ and any $i, j \in \underline{L}$ and $f^{-n+1}(D_{i}) \not\succ f^{k}
(D_{j})$ for any $k > -n+1$ and any $i,j \in \underline{L}$, so that the hypotheses
of the proposition are satisfied. We may then conclude there exist $\{
\alpha_{i}(n) \subset f^{n} (D_{i} ) \}^{L}_{i=1}$ and $\{\alpha_{i}(-n+1)
\subset f^{-n+1}(D_{i})\}^{L}_{i=1}$ a $(\varepsilon_{n}, c)$- and a
$(\varepsilon_{|-n+1|},e)$-collection respectively, both compatible with
$\mathcal{D}$. From Corollary \ref{24}, $\{\alpha_{i}(n)
\}^{L}_{i=1}$ is compatible with
\begin{eqnarray*}
&&\{( f^{k}(D_{i}), \alpha_{i} (k)); \ i \in
\underline{L}, \ -n+1 \leq k \leq n-1 \} \\
&& \cup \{ (f^{n}(D_{i} ) , f(\alpha_{i}
(n-1))); \ i \in \underline{L} \}
\end{eqnarray*}
and $\{ \alpha_{i} (-n+1) \}^{L}_{i=1}$ is
compatible with
\begin{eqnarray*}
&&\{ ( f^{k} (D_{i}), \alpha_{i}(k) ); \ i \in \underline{L}, \ -n+2
\leq k \leq n \} \\
&& \cup \{ ( f^{-n+1} (D_{i}), f^{-1} ( \alpha_{i} (-n+2))); \
i \in \underline{L} \}.
\end{eqnarray*}
That $\alpha_{i}(n) \cap \alpha_{j}(k) = \emptyset$ for $-n+1 \leq
k \leq n-1$ and $\alpha_{i} (-n+1) \cap \alpha_{j} (k) = \emptyset$ for $-n+2 \leq k
\leq n$ is a consequence of Proposition~\ref{25}. This finishes the
induction step and proves the proposition. $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig12,height=3in}
\end{center}
\caption{The first few $\alpha(n)$'s for a pruning collection containing
only one ($c,e$)-disk $D$.} \label{f12}
\end{figure}
\begin{cor}\label{29}
With the notation of Proposition~\ref{28}, for every $n \in \Bbb{Z}$,
$\alpha_{i} (n) \subset I^{c} (f (\alpha_{i} (n-1)))$ and $\alpha_{i}(n) \subset
I^{e} (f^{-1} (\alpha_{i} (n+1)))$.
\end{cor}
\noindent{\sc Proof}: For $n \geq 1$, (i) of Proposition~\ref{28} implies
that $\alpha_{i}(n) \subset I^{c} (f(\alpha_{i} (n-1)))$ whereas (ii)
implies that for $m \leq 0, \ \alpha_{i}(m) \subset I^{e} ( f^{-1} (\alpha_{i}
implies that for $m \leq 0, \ \alpha_{i}(m) \subset$ $I^{e} ( f^{-1} (\alpha_{i}
(m+1)))$. By Proposition \ref{17}, $f^{-1} (\alpha_{i} (m+1)) \subset I^{c}
(\alpha_{i} (m) )$ and applying $f$ to both sides we get $\alpha_{i} (m+1)
\subset f ( I^{c} ( \alpha_{i}(m))) = I ^{c}(f (\alpha_{i}(m)))$. Letting $n=m+1$
we see that for $n \leq 1, \ \alpha_{i}(n) \subset I^{c} (f ( \alpha_{i}
(n-1)))$, which completes the proof of the first statement. The second
is obtained from it using Proposition~\ref{17} (see figure 13.) $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig13,height=2.5in}
\end{center}
\caption{The $\alpha(n)$'s are chosen so that $\alpha_{i} (n) \subset I^{c} (f (\alpha_{i} (n-1)))$ and $\alpha_{i}(n) \subset
I^{e} (f^{-1} (\alpha_{i} (n+1)))$.} \label{f13}
\end{figure}
The next proposition is nothing but a ``fattened'' version of
Proposition~\ref{28} (see figure 14.) We could have proven it together
with Proposition~\ref{28} had we stated the ``fattened'' versions of the
propositions we proved before. Although feasible, this would have been
rather cumbersome. It is also possible to give a direct proof using
the techniques we have used so far. We leave it to the interested
reader.
\bigskip
\begin{prop}\label{30}
Let $\{\alpha_{i}(n); \ i \in \underline{L}, \ n \in {\Bbb{Z}} \}$ be as in
Proposition~\ref{28}. Then there exist collections of disjoint open
cross-cuts $\{\beta_{i}(n) \subset f^{n}(D_{i}); \ i \in \underline{L}, \ n \in
{\Bbb{Z}} \}$ and $\{\gamma_{i}(n) \subset f^{n}(D_{i}); \ i \in \underline{L}, \ n
\in {\Bbb{Z}} \}$ joining vertices such that:
\begin{description}
\item[(i)] $\beta_{i}(n) \subset I^{c} (\alpha_{i}(n))$ and $\gamma_{i}(n)
\subset I^{e} (\alpha_{i}(n))$;
\item[(ii)] for $n \geq 1, \ \{ \gamma_{i} (n); \ i \in \underline{L} \}$ is a
$(\varepsilon_{n}, c)$-collection compatible with
\begin{eqnarray*}
&&\{(f^{k}(D_{i}),
\ \beta_{i}(k)); \ i \in \underline{L}, \ -n+1 \leq k \leq n-1 \} \\
&& \cup \{( f^{n}
(D_{i}), \ f(\beta_{i} (n-1))); \ i \in \underline{L} \};
\end{eqnarray*}
\item[(iii)] for $m \leq 0, \ \{ \beta_{i} (m); \ i \in \underline{L} \}$ is a
$(\varepsilon_{|m|},e)$-collection compatible with
\begin{eqnarray*}
&&\{(f^{k}(D_{i}), \gamma_{i}
(k)); \ i \in \underline{L}, \ m+1 \leq k \leq -m+1\} \\
&&\cup \{ ( f^{m} (D_{i} ),
f^{-1} (\gamma_{i} (m+1))); \ i \in \underline{L} \}.\ \Box
\end{eqnarray*}
\end{description}
\end{prop}
\begin{figure}
\begin{center}~
\psfig{file=Fig14,height=3in}
\end{center}
\caption{The first few $\gamma(n)$'s and $\beta(n)$'s for a
pruning collection containing only one ($c,e$)-disk $D$.} \label{f14}
\end{figure}
The corollary below is proved in the same way as Corollary~\ref{29}
(see figure~15.)
\bigskip
\begin{cor}\label{31}
With the notation of Proposition~\ref{30}, for every $n \in {\Bbb{Z}}$,
$\gamma_{i}(n) \subset I^{c} (f(\beta_{i} (n-1)))$ and
$\beta_{i}(n)
\subset I^{e} (f^{-1}( \gamma_{i} (n+1))). \ \ \Box$
\end{cor}
\begin{figure}
\begin{center}~
\psfig{file=Fig15,height=2.5in}
\end{center}
\caption{The $\beta_{i}(n)$'s and $\gamma_{i}(n)$'s are chosen so that $\gamma_{i}(n) \subset I^{c} (f(\beta_{i} (n-1)))$ and $\beta_{i}(n)
\subset I^{e} (f^{-1}( \gamma_{i} (n+1)))$.} \label{f15}
\end{figure}
The next proposition creates the sets in whose union will lie the
support of the isotopy we will construct to prove the main theorem.
\begin{prop}\label{32}
Let $\{\alpha_{i}(n) \}, \ \{\beta_{i}(n) \}$ and $\{ \gamma_{i} (n)\}$ be
as in Propositions \ref{28} and \ref{30}.
Then for every $n \in \Bbb{Z}$ and $i \in \underline{L}, \ \overline{ f^{-1} (\beta _{i}
(n+1) ) \cup \gamma_{i} (n) }$ is a Jordan curve bounding a Jordan domain
${\mathcal{V}}_{i}(n)$ such that $${\mathcal{V}}_{i}(n) \supset f^{-1}(\alpha_{i} (n+1))
\cup \alpha_{i}
(n).$$ Moreover, $${\mathcal{V}}_{i}(n) = I^{c} (\gamma_{i}(n)) \cap I^{e} ( f^{-1}
(\beta_{i} (n+1))).$$
\end{prop}
\noindent{\sc Proof}: The proof is an easy exercise using (i) of
Proposition~\ref{30}, Corollary~\ref{31} and Proposition~\ref{17}
(see figure 16.) $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig16,height=2.75in}
\end{center}
\caption{$\overline{f^{-1} (\beta_{i}(n+1)) \cup
\gamma_{i} (n) }$ is a Jordan curve bounding the domain
${\mathcal{V}}_{i}(n)$.} \label{f16}
\end{figure}
\begin{prop} \label{33}
Let $D_{1}, D_{2}$ be $(c,e)$-disks, $D_{1} \not\prec D_{2}$ and
$\alpha_{1} \subset D_{1}$ and $\alpha_{2} \subset D_{2}$ be disjoint open
cross-cuts joining vertices. Then $\alpha_{1} \cap I_{2} \subset I^{c}
(\alpha_{2})$ and $\alpha_{2} \cap I _{1} \subset I^{e} (\alpha_{1})$.
\end{prop}
\noindent {\sc Proof}: Since $D_{1} \not\prec D_{2}$ either $I_{1} \cap I_{2}
= \emptyset$, in which case both statements are clearly true, or $D_{1} \succ
D_{2}$ and $D_{1} \neq D_{2}$. If $D_{1} \succ D_{2}, \ C_{1} \cap I_{2}
= \emptyset$ and since $\alpha_{1} \cap \alpha_{2} = \emptyset, \ (\alpha_{1} \cup C_{1}) \cap
\alpha_{2}= \emptyset$. It follows, since $\alpha_{2}$ is connected, that either
$\alpha_{2} \subset I^{c}(\alpha_{1} ) $ or $\alpha_{2} \cap I^{c}(\alpha_{1})=
\emptyset$. We want to show that the latter is true, so we will assume
$\alpha_{2} \subset I^{c}(\alpha_{1})$ and reach a contradiction. The
endpoints of $\alpha_{2}$ are the same as those of $E_2$ and since $E_{2}
\cap I_{1} = \emptyset \ (D_{1} \succ D_{2} )$ and $\alpha_{1} \subset I_{1}$, if
$\alpha_{2} \subset I^{c} (\alpha_{1})$, it must be the case that the endpoints
of $\alpha_{2}$ lie on $C_1$. But the endpoints of $\alpha_2$ coincide
with those of $C_2$ and, by (ii) in the definition of $\succ, \ C_{2}
\subset C_1$. We claim that $C_{1} = C_{2}$, for if $C_2$ is strictly
contained in $C_1$, one of the endpoints of $\alpha_2$ lies in
$\stackrel{\circ}{C_{1}}$ and since $\alpha_{2} \subset I_{1} \cap I_2$,
(iii) in the definition of $\succ$ implies that $C_{1} \subset C_{2}$
which is a contradiction. By Proposition~\ref{19} we see that $D_{1} =
D_{2}$ which is contrary to our hypothesis that $D_{1} \not\prec D_2$.
This contradiction shows that $\alpha_{2} \cap I^{c} (\alpha_{1}) = \emptyset$ and
since $\alpha_{2} \cap \alpha_{1} = \emptyset$ by hypothesis, we have shown that
$\alpha_{2} \cap I_{1} \subset I^{e}(\alpha_{1} )$. The other statement is
proven analogously. $\Box$
\bigskip
\begin{cor} \label{34}
Under the hypotheses of Proposition~\ref{33} $$I^{c} (\alpha_{1} ) \cap
I^{e} (\alpha_{2} ) = \emptyset. \ \ \Box$$
\end{cor}
\begin{prop}\label{35}
Let $i,j \in \underline{L}$ and $n, k \in \Bbb{Z}$:
\begin{description}
\item[(i)] if $f^{k} (D_{j}) \not\prec f^{n}(D_{i} )$ then $f^{-1}
(\alpha_{j} (k+1)) \cap \overline{ {\mathcal{V}}_{i} (n) } = \emptyset$, and
\item[(ii)] if $f^{k}(D_{j}) \not\succ f^{n} (D_{i})$ then $\alpha_{j} (k)
\cap \overline{ {\mathcal{V}}_{i} (n) } = \emptyset$.
\end{description}
\end{prop}
\noindent{\sc Proof}: From Proposition~\ref{30} (i) and
Proposition~\ref{17} it follows that $\alpha_{i}(n) \subset I^{e}
(\beta_{i}(n)) \cap I^{c} (\gamma_{i}(n))$. From Proposition~\ref{30}
it also follows that $\beta_{j}(k) \cap \gamma _{i} (n)= \emptyset$ for any
$i,j \in \underline{L}, \ k,n \in \Bbb{Z}$. Assume $f^{k}(D_{j}) \not\succ
f^{n}(D_{i})$. By Corollary~\ref{34}, we see that $I^{e} (\beta_{j} (k))
\cap I^{c} (\gamma_{i} (n)) = \emptyset$.
Since ${\mathcal{V}}_{i}(n) \subset I^{c}(\gamma_{i} (n)), \ \alpha_{j}(k) \subset
I^{e}
(\beta_{j}(k) )$ and $I^{e}(\beta_{j}(k) )$ is open we can conclude that
$\overline{ {\mathcal{V}}_{i} (n)} \cap
\alpha_{j}(k)= \emptyset$. This proves (ii). In order to prove (i) assume $f^{k}
(D_{j}) \not\prec f^{n}(D_{i})$. Then $f^{k+1}(D_{j}) \not\prec
f^{n+1}(D_{i})$ and, as above, we can conclude that $I^{e} (\beta_{i}
(n+1)) \cap I^{c}(\gamma_{j} (k+1))=\emptyset$. It follows that
$$I^{e}(f^{-1}
(\beta_{i} (n+1))) \cap I^{c}(f^{-1} ( \gamma_{j} (k+1))) = \emptyset$$ and
since
$${\mathcal{V}}_{i}(n) \subset I^{e} (f^{-1} (\beta_{i} (n+1))),$$
then
$$ f^{-1}
(\alpha_{j} (k+1)) \subset I^{c} ( f^{-1} ( \gamma_{j} (k+1))). $$
This latter being an open set, we see that
$$f^{-1} (\alpha_{j}(k+1)) \cap \overline{
{\mathcal{V}}_{i} (n) } = \emptyset.$$ This completes the proof. $\Box$
\bigskip
\begin{prop} \label{70}
With the notation above:
\begin{description}
\item[(i)] for $n \geq 1$ and $ -n +1 \leq k \leq n, \ f^{k} (C_{i} )
\cap f^{n} (I_{j}) \subset I^{e} ( \gamma_{j} (n) )$;
\item[(ii)] for $m \leq 0$ and $m \leq k \leq -m+1, \ f^{k} (E_{i} ) \cap
f^{m}(I_{j} ) \subset I^{c} ( \beta_{j} (m))$.
\end{description}
\end{prop}
\noindent{\sc Proof}: From Proposition \ref{30} we know that $\{ \gamma_{j} (n)
\}^{L}_{j=1} $ is compatible with $$\{ ( f^{k} (D_{i} ), \beta_{i} (k) ) ;
i \in \underline{L},-n +1 \leq k \leq n-1 \} \cup \{ (f^{n} (D_{i} ),
f(\beta_{i} (n-1))); i \in \underline{L} \}.$$ If $f^{k} (C_{i}) \cap f^{n}
(I_{j} ) = \emptyset$ there is nothing to prove. Otherwise, $f^{n}(D_{j})
\succ f^{k} (D_{i} ) $ and therefore either $[ I^{c} ( \gamma_{j} (n))
\cup \gamma_{j} (n) ] \subset I^{c} (\beta_{i} (k) )$ or $[ I^{c} (
\gamma_{j} (n)) \cup \gamma_{j} (n) ] \cap f^{k} (D_{i} ) = \emptyset$. Since
$f^{k} (C_{i} ) \subset f^{k} (D_{i} ) \cap {\mathcal{C}} I^{c} ( \beta _{i}
(k))$, the conclusion of (i) follows. $\Box$
\bigskip
\begin{cor} \label{67}
For $k \geq 1$, $f^{k}(C_{i})$ and $f^{-k}(E_{i})$ are disjoint from
${\mathcal{V}}_{j} (n)$ for every $i, j \in \underline{L}$ and every $n \in \Bbb{Z}$.
\end{cor}
\noindent{\sc Proof}: If $k > n$ by the definition of pruning collection
$f^{k}(D_{i}) \not\prec f^{n}(D_{j} )$ which implies that $f^{k} (C_{i})
\cap f^{n} (I_{j}) = \emptyset$. Since ${\mathcal{V}}_{j}(n) \subset f^{n}(I_{j})$
this proves the result for $k > n$. If $1 \leq k \leq n$ by Proposition
\ref{70}, $f^{k}(C_{i}) \cap {\mathcal{V}}_{j}(n) \subset I^{e} ( \gamma_{j}(n)
)$ whereas by Proposition~\ref{32}, ${\mathcal{V}}_{j}(n) \subset I^{c}
(\gamma_{j} (n) )$, which completes the proof of $f^{k} (C_{i}) \cap
{\mathcal{V}}_{j}(n) = \emptyset$ if $k \geq 1, \ j \in \underline{L}$ and $n \in
\Bbb{Z}$.
If $n >k$ we again have by the definition of pruning
collection that $f^{n} (D_{j}) \not\prec f^{k}( D_{i})$, which implies
that $f^{k} (E_{i} ) \cap f^{n} (I_{j} ) = \emptyset$ and thus that $f^{k}
(E_{i}) \cap {\mathcal{V}}_{j}(n) = \emptyset$. If $n \leq k \leq 0$, by
Proposition \ref{70}, $f^{k} (E_{i} ) \cap f^{n} (I_{j} ) \subset I^{c}
(\beta_{j}(n) )$
which implies that, if $n \leq k \leq -1$, $$f^{k} (E_{i} ) \cap f^{n}
(I_{j} ) \subset f^{-1} (I^{c} (\beta_{j} (n+1))) = I^{c} (f^{-1}
(\beta_{j} (n+1))).$$ By Proposition \ref{32}, ${\mathcal{V}}_{j}(n) \subset
I^{e} (f^{-1} (\beta_{j} (n+1)))$ and thus $f^{k}(E_{i} ) \cap
{\mathcal{V}}_{j} (n) = \emptyset$ if $ k \leq -1, \ j \in \underline{L}, \ n \in
\Bbb{Z}$. This completes the proof. $\Box$
\bigskip
\section{Examples}
In this section we present examples of pruning collections for
Smale's horseshoe map $f:{\Bbb R}^2\to{\Bbb R}^2$. We begin by choosing
a rigid model for $f$ and describing some well known results,
offered without proof. We also present some elementary concepts of
kneading theory which we will need (the reader is referred to the
books of Wiggins \cite{Wi}, Devaney \cite{De}, and de Melo and Van Strien
\cite{MS} for further details on the horseshoe and on 1-dimensional
dynamics.) We then get to the examples. In describing the dynamics
of the ``pruned" maps $f_P$ in each example, we will make several
assertions and only sketch the proofs. The reason for proceeding
thus is twofold. First, this is a section to give examples of
pruning collections and this aspect is presented fully. Second, the
details we omit are part of a more general theory deserving of separate
treatment, which we intend to give in forthcoming papers.
We now fix a rigid model of Smale's horseshoe map
$f:{\Bbb R}^2\to{\Bbb R}^2$. Foliate the square
\[
S=\{ (x,y): |x|\leq \textstyle\frac 12, \
|y|\leq \textstyle\frac 12\}
\]
with horizontal {\it unstable} leaves and vertical
{\it stable} leaves, and begin by choosing the action of $f$ on
$S$ as depicted in figure~\ref{hsmodel}. We require that $f$ should
stretch the unstable leaves uniformly, contract the stable leaves
uniformly, and map segments of unstable (respectively, stable)
leaf in $S\cap f^{-1}(S)$ onto segments of unstable (respectively, stable)
leaf in $S$. Moreover, we choose $f$ to map the corner of $S$ marked
with a circle on figure~\ref{hsmodel} onto the corner of $f(S)$ marked with a
circle. Extend $f$ to the half-disks $A_1$ and $A_2$ as depicted
in the diagram: let $f$ be a strict contraction of $A_1\cup A_2$,
so that there is a fixed point $x$ of $f$ lying in $A_1$ with the
property that $f^i(y)\to x$ \ as \ $i\to\infty$ for all
$y\in A_1\cup A_2$. Finally, extend $f$ over the rest of ${\Bbb R}^2$
without introducing any new nonwandering points.
\begin{figure}
\centerline{\psfig{file=hsmodel,height=1.5in}}
\caption{A rigid model for the horseshoe.}\label{hsmodel}
\end{figure}
The nonwandering set $\Omega(f)$ of $f$ consists of the fixed point
$\{x\}$ and an invariant Cantor set $\Lambda\subset S$. Moreover,
there exists a homeomorphism $h:\Sigma\to\Lambda$, where
$\Sigma=\{0,1\}^{\Bbb Z}$ is the two-sided shift on two symbols,
which conjugates the shift map $\sigma:\Sigma\to\Sigma$ and
$f|_\Lambda:\Lambda\to\Lambda$.
$\Lambda$ is a hyperbolic invariant set and each point
$p\in\Lambda$ has one-dimensional stable and unstable manifolds
which intersect transversally. Notice that if two points $p_0$ and
$p_1$ lie on the stable (unstable) manifold of some point $q\in\Lambda$,
$p_0$ and $p_1$ are the endpoints of exactly one arc contained in
the stable (unstable) manifold of $q$.
We now describe the {\it unimodal order} on the one-sided shift space
$\Sigma_+=\{ 0,1\}^{\Bbb N}$ and, using it, define
{\it kneading sequences}.
\bigskip
\begin{defn} Let $s=s_0s_1\dots$ and
$t=t_0t_1\dots$ lie in $\Sigma_+$ and suppose
$s_i=t_i$ for $i < k$ {\it and} $s_k\neq t_k$. We set
$s\lhd t$ if $\sum^k_{i=0} s_i$ is even. We set
$s\unlhd t$ if either $s=t$ or $s\lhd t$.
\end{defn}
\begin{defn}
Let $\sigma: \Sigma_+\to\Sigma_+$
be the shift map and $\kappa\in\Sigma_+$. We say
$\kappa$ is a {\em kneading sequence} if, for every
$n\in\Bbb N$, \ $\sigma^n(\kappa)\unlhd\kappa$.
\end{defn}
\bigskip
The unimodal order just defined is used in the study of
1-dimensional {\it unimodal} maps, i.e., piecewise monotone
endomorphisms of the interval with exactly one critical
(turning) point. In this context, kneading sequences are defined as
the itinerary of the critical value. It is possible to check that
kneading sequences associated to unimodal maps satisfy the
definition above.
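The two definitions above can also be checked by machine on eventually periodic sequences, working with sufficiently long finite prefixes. The sketch below is purely illustrative and is not part of the formal development; the function names and the prefix-based encoding are ours.

```python
# Illustrative check of the unimodal order and of the kneading condition,
# working with long finite prefixes of eventually periodic sequences.

def unimodal_less(s, t):
    """s <| t: at the first index k where s and t differ,
    s <| t holds iff s_0 + s_1 + ... + s_k is even."""
    for k in range(min(len(s), len(t))):
        if s[k] != t[k]:
            return sum(s[:k + 1]) % 2 == 0
    raise ValueError("sequences agree on the whole compared prefix")

def is_kneading(kappa, shifts=50):
    """kappa is a kneading sequence if sigma^n(kappa) <=| kappa for
    every n >= 1 (checked here for n = 1, ..., shifts)."""
    for n in range(1, shifts + 1):
        shifted = kappa[n:]
        if shifted == kappa[:len(shifted)]:
            continue                     # sigma^n(kappa) agrees with kappa
        if not unimodal_less(shifted, kappa):
            return False
    return True

# 0110... <| 0101...: first difference at k = 2, and s_0+s_1+s_2 = 2 is even.
print(unimodal_less([0, 1, 1, 0] * 50, [0, 1, 0, 1] * 50))   # True

# kappa = (10)^infinity is a kneading sequence: every shift is <=| kappa.
print(is_kneading([1, 0] * 200))                             # True

# kappa = 0111... is not: sigma(kappa) = 111... exceeds kappa.
print(is_kneading([0] + [1] * 100))                          # False
```

Note that the check only examines finitely many shifts of a finite prefix; for the eventually periodic sequences used here this suffices.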
The unimodal order describes the horizontal and vertical ordering
of points in $\Lambda$ as follows: if $(x_1,y_1),(x_2,y_2)\in\Lambda$,
with $h^{-1}(x_1,y_1)=\dots s_{-2}s_{-1}\cdot s_0s_1\dots$ and
$h^{-1}(x_2,y_2)=\dots t_{-2}t_{-1}\cdot t_0t_1\dots$, then
\begin{eqnarray*}
&& x_1 < x_2 \Longleftrightarrow s_0 s_1 s_2\dots \lhd
t_0 t_1 t_2 \dots \ \ {\mbox{and }} \\
&& y_1 < y_2 \Longleftrightarrow s_{-1} s_{-2}\dots \lhd
t_{-1} t_{-2} \dots
\end{eqnarray*}
We shall often use the elements of $\Sigma$ to describe
points of $\Lambda$ without explicitly invoking the map $h$.
Thus, for example, we may talk about ``the fixed point $\bar 1$,"
``the periodic orbit $\overline{10011}$," or ``the point
$\overline 0 .0\overline{101}$."
Here, a bar over a group of symbols stands for infinite
repetition of the group. If the group is to the right (left) of the decimal
point, it should be repeated infinitely to the right (left) and if there is
no decimal point, the group should be repeated infinitely to both sides
(so $\overline 0 .0\overline{101}=\dots 000.0101101101\dots$ and
$\overline{10}=\dots 1010.1010\dots$ .) If the symbolic sequence is an element
of $\Sigma_+$, a bar over a group of symbols means infinite repetition of the
group to the right. Let $p\in\Lambda$ with $h^{-1}(p)=
\dots s_{-2}s_{-1}\cdot s_0s_1\dots$; we will sometimes refer
to $\dots s_{-2}s_{-1}\cdot s_0s_1\dots$
as the {\it symbolic representation} of $p$ and to
$\dots s_{-2}s_{-1}.$ and
$.s_0s_1\dots$ as the {\it symbolic vertical and horizontal coordinates
of} $p$, respectively.
If two points $p_0,p_1\in\Lambda$ have symbolic representation
$\dots t_{-2}t_{-1}\cdot t_0t_1\dots$ and
$\dots s_{-2}s_{-1}\cdot s_0s_1\dots$, respectively, $p_0$
and $p_1$ lie on the same stable (unstable) manifold if there
exists $N\in\Bbb Z$ such that $s_i=t_i$ for every $i\geq N$
($i\leq N$). Consequently, $p_0$ and $p_1$ lie on the same stable {\it and}
unstable manifolds if their symbolic representations differ in at most
finitely many entries. If the symbolic representations of $p_0$ and
$p_1$ differ at exactly one entry, the stable and unstable arcs of
which they are the endpoints form a simple closed curve bounding a
closed disk which we denote by $D(p_0,p_1)$ (see figure~\ref{cedisk-ex}.)
Because the boundary
of $D(p_0,p_1)$ is the union of a stable and an unstable arc,
$D(p_0,p_1)$ is a $(c,e)$-disk for $f$ as defined in Section~\ref{cedisks}
whose vertices are $p_0$ and $p_1$.
\begin{figure}
\centerline{\psfig{file=cedisk,height=3in}}
\caption{\label{cedisk-ex}$(c,e)$-disk determined by $p_0=
\overline 010.011\overline 0$ and $p_1=\overline 010.111\overline 0$}
\end{figure}
\bigskip
\noindent
{\sc Notation:} If $p_0,p_1\in\Lambda$ lie on the same stable
(unstable) manifold, we denote the closed arc of stable (unstable)
manifold whose endpoints are $p_0$ and $p_1$ by
$[p_0,p_1]_s$ ($[p_0,p_1]_u$).
Let $s=s_0s_1\dots\in\Sigma_+$. We define the {\it vertical segment
with horizontal coordinate} $s$ to be $[ \,\overline 0.s,
\overline 01.s]_s$ and denote it by ver$(s)$. A vertical
segment is thus an arc of stable manifold extending from the lowest to the
highest possible symbolic vertical coordinates (notice that 000$\dots$
and 100$\dots$ are, respectively, the smallest and largest elements of
$\Sigma_+$ in the unimodal order), having symbolic horizontal coordinate
$s$. Notice also that, if we use $\sigma$ to denote the shift map on
$\Sigma_+$, $f({\mbox{ver}}(s))\subset{\mbox{ver}}(\sigma(s))$.
We are now ready to present examples of pruning collections for $f$.
\bigskip
\noindent
{\sc Remark:} So that the figures below are not hopelessly complicated and
unintelligible, we will represent the Cantor set $\Lambda$ as a solid
square. Formally, what we are depicting is the quotient of $\Lambda$ under
an equivalence relation which collapses the ``gaps" of the vertical and
horizontal Cantor sets, the product of which is $\Lambda$.
The ambiguity thus created is easily understood and the clarity gained
amply compensates for it.
\bigskip
{\bf Example 1}. Let $\kappa$ be a kneading sequence and
\[D=D(\overline 0.0\kappa,\overline 0.1\kappa)
\]
be the $(c,e)$-disk determined by $\overline 0.0\kappa$
and $\overline 0.1\kappa$ (that is, the disk bounded by
the union of $C=[ \,\overline 0.0\kappa,
\overline 0.1\kappa]_s$ and $E=[ \,\overline 0.0\kappa,
\overline 0.1\kappa]_u$). We claim that the collection
$\{D\}$, containing $D$ alone, is a pruning collection (see figure~\ref{odl}.)
In order to see this we have to show that, if
$f^k(I)\cap I\neq\emptyset$ (where $I$ is the interior of $D$),
then $f^k(D)\succ D$, if $k > 0$, and $f^k(D)\prec D$, if $k < 0$.
Notice that conditions (ii) and (iii) in Definition~\ref{longer}
are automatically satisfied since $C$ and $E$ are arcs of stable
and unstable manifolds which intersect transversally. All that is
left to check is that $f^n(C)\cap I=\emptyset$ for $n > 0$
and $f^m(E)\cap I=\emptyset$ for $m < 0$.
Notice that $E\subset[ \,\overline 0,\overline 0.1\overline 0 \,]_u$,
that
\[
f^{-1}([ \,\overline 0, \overline 0.1\overline 0 \,]_u) =
[ \,\overline 0, \overline 0.01\overline 0 \,]_u \subset
[ \,\overline 0, \overline 0.1\overline 0 \,]_u
\]
and that
\[
[ \,\overline 0,\overline 0.1\overline 0 \,]_u\cap I=\emptyset \ .
\]
Thus, if $m\leq -1$, \ $f^m(E)\cap I=\emptyset$.
On the other hand, observe that $f(C)=$ ver$(\kappa)$ and, therefore,
$f^n(C)\subset$ ver$(\sigma^{n-1}(\kappa))$ for $n\geq 1$.
If $f^n(C)\cap I\neq\emptyset$, then $f^n(C)\subset I$ and,
in fact, ver$(\sigma^{n-1}(\kappa))\subset I$. This implies that
\[
0\kappa\lhd \sigma^{n-1}(\kappa)\lhd 1\kappa
\]
and, applying $\sigma$ to this inequality, we get
$\kappa\lhd\sigma^n(\kappa)$, which contradicts the assumption
that $\kappa$ is a kneading sequence.
Let $f_\kappa$ denote the map obtained using Theorem~\ref{65}
for $\overline P=D$. The family ${\cal F}=\{f_\kappa; \kappa$ is a kneading
sequence$\}$ mimics in dimension 2 a {\it full family} of
unimodal maps of the interval. In particular, ${\cal F}$ is an
uncountable family of 2-dimensional homeomorphisms passing from
trivial dynamics to a full horseshoe as $\kappa$ varies from
$\overline 0$ to $1\overline 0$.
\begin{figure}
\centerline{\psfig{file=odl,height=3in}}
\caption{A ``one-dimensional-like'' pruning front
for the horseshoe.}\label{odl}
\end{figure}
{\bf Example 2}. Consider the two $(c,e)$-disks
\[
D_1=D(\overline 1.010\overline 1, \ \overline 1.110\overline 1) \qquad
{\mbox{ and }} \qquad
D_2=D(\overline 101.0\overline 1, \ \overline 101.\overline 1) \ .
\]
Because of the periodicity in the coordinates of the vertices of
$D_1$ and $D_2$, it is easy to check that $\{D_1,D_2\}$ is a
pruning collection (see figure~\ref{pf-ex2}.) Let
$\alpha_i(k)\subset f^k(D_i)$, for
$i=1,2$, $k\in\Bbb Z$, be closed cross-cuts as given by
Proposition~\ref{28}. Then
\[
\gamma_0=\bigcup\{ \alpha_i(2k); \ i=1,2, \ k\in\Bbb Z\}
\]
and
\[
\gamma_1=\bigcup\{ \alpha_i(2k-1); \ i=1,2, \ k\in\Bbb Z\}
\]
are Jordan curves (see figure~\ref{renorm}) such that
$\gamma_0\cap\gamma_1=\{\,\overline 1\,\}$. If $f_P$ is the map given by
Theorem~\ref{65} for the pruning front $\overline P=D_1\cup D_2$
and $U_0$ and $U_1$ are the closed disks bounded by $\gamma_0$ and
$\gamma_1$, $f_P$ interchanges $U_0$ and $U_1$, that is,
$f_P(U_0)=U_1$ and $f_P(U_1)=U_0$. Moreover, if $\Lambda_0=\Omega(f_P)\cap
U_0$ is the intersection of the nonwandering set of $f_P$ with
U_0$, \ $f^2_P|_{\Lambda_0}:\Lambda_0\to\Lambda_0$ is topologically conjugate
to the full horseshoe $f|_\Lambda: \Lambda\to\Lambda$.
This is an example of a ``renormalizable" or ``reducible" map.
\begin{figure}
\centerline{\psfig{file=pf2,height=3in}}
\caption{The $(c,e)$-disks $D_1$ and $D_2$ of Example 2. $\diamond$ is the
fixed point $\bar 1$.}\label{pf-ex2}
\end{figure}
\begin{figure}
\centerline{\psfig{file=renorm,height=3.8in}}
\caption{The Jordan curves $\gamma_0$ and $\gamma_1$}\label{renorm}
\end{figure}
\bigskip
{\bf Example 3}. Consider the $(c,e)$-disks
\[
D_1=D(\overline 0.0\overline{1000100}, \
\overline 0.1\overline{1000100}) \qquad
{\mbox{ and }} \qquad
D_2=D(\overline 01100010.0\overline 1, \
\overline 01100010.\overline 1) \ .
\]
Notice that $D_1$ is of the kind $D(\overline 0.0\kappa,\overline
0.1\kappa)$, since $\kappa = \overline{1000100}\in \Sigma_+$
is a kneading sequence, as may easily be verified.
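This verification can be carried out by brute force on the seven shifts of $\kappa$. A sketch follows (the helpers \verb|unimodal_le| and \verb|is_kneading_word| are ours; we again assume the standard parity rule for the unimodal order):

```python
def unimodal_le(s, t):
    """Decide whether s precedes t (or equals it) in the unimodal order,
    for finite 0-1 strings of equal length: at the first disagreement the
    digit order is reversed iff the common prefix has an odd number of 1s."""
    parity = 0  # number of 1s in the common prefix, mod 2
    for a, b in zip(s, t):
        if a != b:
            return (a < b) if parity == 0 else (a > b)
        parity ^= int(a)
    return True  # the strings are equal

def is_kneading_word(w, reps=4):
    """Check sigma^n(kappa) <= kappa for kappa = w repeated and 0 <= n < len(w);
    for a periodic sequence these shifts exhaust the whole orbit of kappa."""
    kappa = w * reps
    return all(unimodal_le((w[n:] + w[:n]) * reps, kappa) for n in range(len(w)))

assert is_kneading_word("1000100")   # the kneading sequence of this example
assert not is_kneading_word("1100")  # the first shift of 1100... lies above it
```

Since $\kappa$ has period 7, comparing each of its seven cyclic shifts against $\kappa$ on a few periods suffices.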
As in the previous example, the
periodicity in the coordinates of the vertices of $D_1$ and $D_2$
makes it an easy computation to check that $\{D_1,D_2\}$ is a
pruning collection (see figure~\ref{pf-ex3}.)
\begin{figure}
\centerline{\psfig{file=pf3,height=3in}}
\caption{The $(c,e)$-disks $D_1$ and $D_2$ of Example 3. The points marked by
$\circ$ are the periodic orbit $s_{7}^{6}(0)$ and $\diamond$ is the fixed
point $\bar 1$.}\label{pf-ex3}
\end{figure}
Let $s^6_7(0)$ denote the periodic orbit containing the point
$\overline{1000100}$ (see \cite{HWh} for an explanation of this name).
Since none of its seven points lies in $P=I_1\cup I_2$, where $I_i$ denotes the interior of $D_i$, none of them
lies in $\displaystyle{\bigcup_{n\in\Bbb Z}} f^n(P)$.
In figure~\ref{im-pf}, $f^k(P)$, for $-1\leq k\leq 3$, are shown.
Notice that the nonwandering points of $f_P$ lie outside the shaded region.
(In fact, it is not hard to see that the points on the open $e$-side
$\stackrel{\circ}{E}_1$ of $D_1$ are also wandering under $f_P$.)
\begin{figure}
\centerline{\psfig{file=im_pf3,height=3in}}
\caption{A few images of $P$ under $f$.}\label{im-pf}
\end{figure}
We claim that the map $f_P$ obtained using Theorem~\ref{65}
realizes the minimum topological entropy among all maps in the
isotopy class of $f$ relative to $s^6_7(0)$. In order to see this, we
construct a Markov Partition for $f_P$ as in figure~\ref{mp}.
The horizontal and vertical sides of the rectangles $R_i$ are
contained in $\displaystyle{\bigcup_{n\in\Bbb Z}} f^n_P(E_1\cup E_2)$
and $\displaystyle{\bigcup_{n\in\Bbb Z}} f^n_P(C_1\cup C_2)$, respectively.
(In fact, it is enough to take the unions ranging from $n=-7$ to $n=7$,
say.) It is easy to see that, if we define $\Lambda_P=\Omega(f_P)\backslash
\{\overline 0,\overline 1\}$, where $\Omega(f_P)$ is the nonwandering set
of $f_P$ and $\overline 0$ and $\overline 1$ are the fixed points of $f$ inside
$S$, which are also fixed points under $f_P$, then $\Lambda_P\subset
\displaystyle{\bigcup^8_{i=1}} R_i$. The vertices of each $R_i$ lie outside of
$\displaystyle{\bigcup_{n\in\Bbb Z}} f^n(P)$ and it therefore makes sense
to refer to them using their symbolic representation in $\Sigma$. In
table~\ref{corners-mp} we give the symbolic horizontal and vertical
coordinates of the vertices of the rectangle $R_i$. The columns under $x_L$
and $x_R$ contain the left and right horizontal coordinates, respectively,
whereas those under $y_L$ and $y_U$ contain the lower and upper vertical
coordinates, respectively.
\begin{figure}
\centerline{\psfig{file=mp,height=4.5in}}
\caption{The Markov Partition for $f_P$.}\label{mp}
\end{figure}
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\[ \begin{array}{|c|l|l|r|r|} \hline
&x_L &x_R &y_L &y_U \\ \hline
R_1 &.\overline{0001001} &.00101\overline{1000100} &\overline{011}. &\overline{0110001}.\\ \hline
R_2 &.\overline{0010010}&.\overline{0010001}&\overline{0110}.&\overline{0110001}. \\ \hline
R_3 &.01\overline{1000100}&.0\bar 1&\overline{0110}.&\overline{0110001}. \\ \hline
R_4 &.010\bar 1&.0101\overline{1000100}&\overline{0110}.&\overline{01100010}. \\ \hline
R_5 &.\overline{0100100}&.\overline{0100010}&\overline{01100}.&\overline{01100010}. \\ \hline
R_6 &.1\overline{1000100}&.\bar 1 &\overline{01100}.&\overline{01100010}. \\ \hline
R_7 &.10\bar 1&.101\overline{1000100}&\overline{01100}.&\overline{011001}.\\ \hline
R_8 &.\overline{1001000}&.\overline{1000100}&\overline{011000}.&\overline{011001}.\\ \hline
\end{array} \]
\caption{The coordinates of the vertices of the rectangles $R_i$.}
\label{corners-mp}
\end{center}
\end{table}
\begin{figure}
\centering
\mbox{\subfigure[The rectangles $R_i$ and \ldots]
{\psfig{file=mapmpa,height=3in}}\qquad
\subfigure[\ldots their images under $f_P$.]
{\psfig{figure=mapmpb,height=3in}}}
\caption{The Markov Partition and its image.}
\label{mapmp}
\end{figure}
In figure~\ref{mapmp} we show how the rectangles $R_i$ are mapped under $f_P$.
The transition matrix $M=(m_{ij})$
associated with this partition is the $8\times 8$ matrix defined by
\[
m_{ij} =\begin{cases} 1 \ , & {\mbox{if }} \ I(f_P(R_j))\cap R_i\neq \emptyset \\
0 \ , & {\mbox{otherwise}}
\end{cases}
\]
where $I(R_i)$ stands for the interior of the rectangle $R_i$.
Using the notation $R_j\to R_i$ for
$I(f_P(R_j))\cap R_i\neq\emptyset$, we have
$R_1\to R_2R_3R_4$, \ $R_2\to R_5$, \ $R_3\to R_6$, \ $R_4\to R_7$, \
$R_5\to R_8$, \ $R_6\to R_8R_7$, \ $R_7\to R_3$, \ $R_8\to R_2R_1$, \
so that
\[
M=\left[
\begin{array}{llllllll} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0
\end{array} \right]
\]
Let
\[
\Sigma_M =\{ s=\dots s_{-2}s_{-1}\cdot s_0 s_1\dots\in
\{ 1,2,\dots, 8\}^{\Bbb Z}; m_{s_is_{i+1}}=1 \ \forall i\in\Bbb Z\}
\]
be the subshift of finite type associated to $M$. It is possible to show
that, if $s=\dots s_{-2}s_{-1}\cdot s_0 s_1\dots\in\Sigma_M$, then
$\displaystyle{\bigcap_{n\in\Bbb Z}}f^{-n}_P(R_{s_n})$ consists of a single
point in $\Lambda_P$ and that the map $k:\Sigma_M\to\Lambda_P$, given by
$k(s)=\displaystyle{\bigcap_{n\in\Bbb Z}}f^{-n}_P(R_{s_n})$, is a
topological conjugacy between the shift map $\sigma:\Sigma_M\to\Sigma_M$
and $f_P|_{\Lambda_P}:\Lambda_P\to\Lambda_P$. Under these circumstances,
$h(f_P)=\log\lambda$, where $\lambda$ is the spectral radius of $M$.
Using your favorite matrix computation program, you may check that
$\lambda\approx 1.46557$ and that $\log\lambda\approx 0.382244$, which agrees with table
1 of \cite{H2}. Although this is not a proof,
one may be obtained using the algorithm in \cite{BH} to find the
pseudo-Anosov homeomorphism in the isotopy class of $f$ rel.
$s^6_7(0)$. It is known that this map realizes the minimum topological
entropy in its isotopy class and it is possible to find a Markov Partition
for it with the same transition matrix $M$.
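The numerical check above needs no special software. The following sketch (names ours) uses power iteration, which converges here because the transition graph of $M$ is strongly connected and contains cycles of coprime lengths 3 and 4 (so $M$ is primitive):

```python
import math

# Transition matrix M from Example 3: M[i][j] = 1 iff R_{j+1} -> R_{i+1}.
M = [
    [0, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0],
]

def spectral_radius(A, iterations=500):
    """Power iteration with max-norm; converges for primitive 0-1 matrices."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iterations):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)  # max-norm of A v estimates the spectral radius
        v = [x / lam for x in w]
    return lam

lam = spectral_radius(M)
entropy = math.log(lam)
# lam is close to 1.46557 and entropy to 0.382244, as stated in the text.
```

The limit is the real root of $x^3=x^2+1$, in agreement with the value quoted above.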
As was mentioned in the introduction, we intend to show in a forthcoming
paper that, as in Example 3, given a horseshoe periodic orbit collection
$\cal O$, there exists a pruning front $P=P(\cal O)$ such that
$f_P$ restricted to $\Omega(f_P)$ is semiconjugate to the Thurston minimal
representative $\phi_{\cal O}$ in the isotopy class of $f$ rel. $\cal O$
and that $h(f_P)=h(\phi_{\cal O})$.
\section{Introduction}
\setlength{\baselineskip}{20pt}
\indent
One of the main concerns in the study of dynamical systems is to
understand how a family of maps passes from simple to complicated dynamic
behaviour as we vary parameters. When the dynamical systems under
consideration are 1-dimensional, the kneading theory of Milnor and
Thurston provides a full topological understanding of the transition from
simple to chaotic behaviour. In dimension 2, no such theory exists. In
fact, it is not clear what restrictions should be imposed on the
families under consideration so that understanding them is not too
hopeless a task.
Families like the H\'enon and the Lozi ones are interesting examples but
they lack a defining topological characteristic analogous, for example,
to saying that a 1-dimensional map is {\em unimodal} (i.e., is piecewise
monotone with exactly one turning point.)
In this work, we present a method of isotoping away dynamics from a
homeomorphism of the plane in a controlled fashion. More precisely, if
$f: \pi \rightarrow \pi$ is a homeomorphism of the plane $\pi$, we define open
sets $P$ for which there exists an isotopy $H: \pi \times [0,1] \rightarrow \pi$
with (open) support contained in $\displaystyle\bigcup_{n \in {\Bbb{Z}} }f^{n}
(P)$, such that $H( \cdot, 0 ) = f$ and $H ( \cdot, 1) = f_{P}$, where
$f_{P}$ is a homeomorphism under which every point of $P$ is wandering.
Using this construction, with $f$ being Smale's horseshoe, for example,
it is possible to produce an uncountable family of homeomorphisms of the
plane, depending on infinitely many parameters, going from trivial
dynamics (say, only two nonwandering points, one attracting and one
repelling fixed points) to a full horseshoe.
We call the sets $P$ mentioned above {\em pruning fronts}, after the work
of P. Cvitanovi\'c \cite{C}. In \cite{C}, subsets of the symbol
space of Smale's horseshoe are proposed which get ``pruned away'' as one varies
parameters in a family like the H\'enon one. Here we give a precise
definition of pruning fronts and construct the isotopies which ``prune
away'' the dynamics in $P$.
In forthcoming papers we intend to do two things. First, for each map
$f_P$, where $P$ is a pruning front as defined herein, there exists a
collapsing procedure which produces a ``tight'' map $\varphi_P$ isotopic
to $f_P$ and with essentially the same dynamics. More precisely, there
exists an $f_P$-invariant upper semi-continuous decomposition $G_P$ of the
sphere $S^2$ (we can extend $f$ to $S^2$ by setting $f(\infty )= \infty$),
such that, for every element $g$ of $G_P$, $g$ contains at least one
element of the nonwandering set of $f_P$ and $h(f_{P};g) = 0$, where
$h(f_{P};g)$ is the topological entropy of $f_P$ in $g$ as defined by
Bowen. $f_P$ then projects to a homeomorphism $\varphi_{P}: K_{P} \rightarrow
K_{P}$ of the {\em cactoid} $K_{P} = S^{2}/G_{P}$, such that no point of
$K_P$ is wandering under $\varphi_{P}$ and $h(f_{P}) =h( \varphi_{P} )$.
Second, we intend to show that the family $\varphi_{P}$ contains the
Thurston minimal representatives in the isotopy classes of $f$ relative to
periodic orbit collections of $f$. In other words, we would like to
show that, given a collection $\mathcal{O}$ of periodic orbits
of $f$, there exists a pruning front $P=P({\mathcal{O}})$, such that
$\varphi_P$ is
the Thurston minimal representative in the isotopy class of $f$ rel
$\mathcal O$. This last statement should have an algorithmic proof, providing
another algorithmic proof of Thurston's classification theorem for
homeomorphisms of surfaces.
The techniques used in the present work are those of point set topology
of the plane. In Section 1 we state without proof the
main background
results we will need, the most important of which are the Jordan Curve
Theorem (Theorem \ref{1}) and Whyburn's Separation Theorem (Theorem
\ref{2}). In Section 2 we develop the plane topology
tools we will use in
the remainder of the paper. In Section 3 we introduce
the concept of $(c,e)$-{\em disks}, define pruning fronts and prove
some propositions which will be used in Section 5. In Section 4 we state
and
prove some results
about isotopies of homeomorphisms of the plane, which will also be needed
in Section 5. Although these results are
folkloric, we decided to present them for completeness; the proofs given
are rather elementary. Section 5 contains the
proof of the main theorem, as its title suggests. Within the first few
pages we get to define an isotopy which is almost all we need (Proposition
\ref{43}) and the remainder of the section is devoted to showing how this
isotopy works and how we fix it in order to get the final isotopy $H$
(which depends, of course, on $P$.) It is only in Section 6 that we get to the
second part of the title --- the formation of horseshoes. We present three
examples of pruning fronts for Smale's horseshoe map. The first is, in
fact, a family of such examples and produces, via the main theorem, a
family of homeomorphisms of the plane whose dynamics mimics that of a
full unimodal family of endomorphisms of the interval. The second example
gives rise to a `renormalizable' map, that is, a homeomorphism which
interchanges two closed disks. The second iterate restricted to each one
of these disks is again a full horseshoe. Finally, in the third example
we present a pruning front which gives rise to a `lax pseudo-Anosov'
homeomorphism. Together these examples should suggest different ways
in which a horseshoe can be formed.
A word about the figures is in order. One of the hardest things for me
during the preparation of this work was to translate into precise
mathematical statements the pictures I had in my mind. I decided,
therefore, to add to the text all those pictures I had to draw over and
over for myself before I understood what were the right mathematical
statements that described them. I hope they will be helpful to the
reader, for as the saying goes, ``a picture is worth a thousand
words.''
\noindent
{\sc Acknowledgements:} The research presented herein comprises my Ph.D.
dissertation done at the Graduate Center of The City University of New
York (CUNY) under the supervision of Professor Dennis Sullivan. I would like
to thank Professor Sullivan for his guidance during the preparation of this
work and the Graduate Center of CUNY for providing a friendly and helpful
research atmosphere. I had several discussions with Alberto Baider,
Predrag Cvitanovi\'c, Fred Gardiner, Toby Hall, Michael Handel, Ronnie
Mainieri, Charles Tresser and Nick Tufillaro and I would like to thank them
for their help.
\section{Isotopies}
\begin{defn}
Let $X,Y$ be topological spaces. By an {\em isotopy} we mean a
continuous map $H:X \times [0,1] \rightarrow Y$ such that the ``slice''
map $H_{t}: X \rightarrow Y, \ H_{t}(x)= H(x,t)$ is a homeomorphism for
each $t \in [0,1]$. If $f,g: X \rightarrow Y$ are homeomorphisms, we
say $f$ and $g$ are {\em isotopic} if there exists an isotopy $H: X\times
[0,1]
\rightarrow Y$ such that $H(x,0)=f(x)$ and $H(x,1)=g(x)$ for every $x \in
X$.
The {\em support} of an isotopy $H$ is by definition (see the remark
below) the set
$$ \mbox{\rm supp } H = {\mathcal{C}} \{x \in X; \ H(x,t)=H(x,0) \ \forall t
\in [0,1] \} $$
\noindent where, as usual, $\mathcal{C}$ stands for complement.
If $f:X \rightarrow X$ is a homeomorphism we define the support of $f$ as
$$ \mbox{\rm supp }f = {\mathcal{C}} \{ x \in X; \ f(x)= x \} $$
\end{defn}
\noindent{\sc Remark}: Notice that our definition of support is not the usual
one in that we are not taking closures. Supports of isotopies and
homeomorphisms are therefore {\em open} sets.
\bigskip
The following proposition is a straightforward exercise in point set
topology and we omit the proof.
\begin{prop} \label{36}
Let $H:X\times [0,1] \rightarrow X$ be an isotopy of the identity, i.e.,
$H(x,0)=x$ for every $x \in X$. If $x \in$ supp $H$, then $H(x,t)$ and
$x$ belong to the same path component of supp $H. \ \ \Box$
\end{prop}
\noindent{\sc Remark:} If $X$ is locally path-connected, the path components
of supp $H$ coincide with its connected components, since supp $H$ is open.
\bigskip
\begin{defn}
Let $G$ be a collection of subsets of a metric space. We call $G$ a {\em
null collection} if for every $\varepsilon > 0$ only finitely many elements of
$G$ have diameter greater than $\varepsilon$.
\end{defn}
The lemma below is true in greater generality than we state and is part
of the folklore of hyperbolic geometry, geodesic laminations, etc. The
proof we give is somewhat sketchy but is rather elementary.
\bigskip
\begin{lem}\label{37}
Let $I\!\!D$ denote the unit disk $\{ x \in I\!\!R^{2}; \ ||x|| \leq 1 \}$,
and $\{ \alpha_{n} \}^{\infty}_{n=1}$ a null collection of closed
cross-cuts, disjoint except possibly at endpoints, no two $\alpha_{n}$'s
sharing both endpoints. For each $n \geq 1$, let $\gamma_n$ be the closed
arc of circle perpendicular to $S^{1}= \{x \in I\!\!R^{2}; \ ||x||=1 \}$
with the same endpoints as $\alpha_n$. Then there exists a homeomorphism
$\zeta: I\!\!D \rightarrow I \!\! D$ such that $\zeta|_{S^{1} }$ is the
identity and $\zeta(\alpha_{n} ) = \gamma_{n}$.
\end{lem}
\noindent{\sc Proof:} From the hypotheses that the $\alpha_{n}$'s are interior
disjoint and no two share both endpoints it follows that the cross-cuts
$\gamma_n$ are interior disjoint and the correspondence $\alpha_{n} \rightarrow
\gamma_n$ is one-to-one in the sense that if $\alpha_{n} \neq \alpha_{m}$ then
$\gamma_{n} \neq \gamma_{m}$. Moreover, $\{ \gamma_{n} \}^{\infty}_{n=1}
$ is a null collection, since given $\varepsilon > 0$ only finitely many pairs
of endpoints of the $\alpha_n$'s can be more than $\varepsilon$ apart, which
implies that only finitely many $\gamma_{n}$'s have diameter greater than
$\varepsilon$.
Let $\psi_{n}: \gamma_{n} \rightarrow \alpha_{n}$ be a homeomorphism
extending the identity homeomorphism between the endpoints of $\gamma_n$
and $\alpha_n$, for each $n \geq 1$, and define the map $\psi$ as
$$ \psi = {\mbox{\rm id}} \cup \bigcup^{\infty}_{n=1} \psi_{n}: S^{1} \cup
\bigcup ^{\infty}_{n=1} \gamma_{n} \longrightarrow S^{1} \cup \bigcup
^{\infty}_{n=1} \alpha_{n}$$
where id: $S^{1} \longrightarrow S^{1}$ is the
identity homeomorphism. $\psi$ is well defined since the interiors
$\stackrel{\circ}{\gamma_{n} }$ are disjoint and $\psi_n$ is the identity
at the endpoints of $\gamma_n$. We claim $\psi$ is a homeomorphism.
From what we have said above, $\psi$ is clearly one-to-one and onto. All
that remains to show is that $\psi$ is continuous. Let $\{x_{k} \}$ be
a sequence in $S^{1} \cup \displaystyle{\bigcup ^{\infty} _{n=1} } \gamma_{n}$ and
assume $x_{k} \rightarrow x$. We want to show that $\psi (x_{k})
\rightarrow \psi (x)$. If there exists $n$ such that all but finitely
many points $x_{k}$ lie in $\gamma_n$, then for $k_0$ sufficiently large
$x_{k} \in \gamma_{n}$ for every $k \geq k_0$ and since $\gamma_n$ is
closed $x \in \gamma_n$. It follows that for $k \geq k_0, \ \psi(x_{k})
= \psi_{n} (x_{k} ) \rightarrow \psi_{n}(x) = \psi(x)$ since $\psi_n$ is
continuous. If there is no $\gamma_n$ containing all but finitely many
$x_{k}$'s, we can choose a subsequence $x_{k_{j}} \in \gamma_j$ so that
different points lie in different $\gamma_{j}$'s. Since $\{ \gamma_{n}
\}$ is a null sequence, diam $\gamma_{j} \rightarrow 0$ as $j
\rightarrow \infty$ and, since $x_{k_{j}} \in \gamma_j$ and $x_{k_{j}}
\rightarrow x$, for any sequence $y_{j} \in \gamma_{j}, \ y_{j}
\rightarrow x$. In particular, if $p_{j}, q_{j}$ are the endpoints of
$\gamma_{j}, \ p_{j}, q_{j} \rightarrow x$. This shows that $x \in
S^1$. Also, the cross-cuts $\alpha_j$, whose endpoints are $p_{j}, q_{j}$,
are all distinct, since the $\gamma_{j}$'s are, and since $\{ \alpha_{n} \}$
is a null family and $p_{j}, q_{j} \rightarrow x$, for any sequence
$z_{j} \in \alpha_{j}, \ z_{j} \rightarrow x$. We then have $\psi
(x_{k_{j}}) = \psi_{j} (x_{k_{j}} ) = z_{j} \in \alpha_{j}$ and $z_{j}
\rightarrow x = \psi (x)$ since $x \in S^1$. This shows that $\psi$ is a
homeomorphism. Assume for a moment we have shown that every component of
the complement of $S^{1} \cup \displaystyle{\bigcup ^{\infty}_{n=1} } \gamma_{n}$
in $I \!\!D$ is a Jordan domain. Let $U$ be one such and $\partial U =J$. $J$
is a Jordan curve in $S^{1} \cup \displaystyle{\bigcup^{\infty}_{n=1} }
\gamma_{n}$ and thus $\psi(J)$ is a Jordan curve in $S^{1} \cup
\displaystyle{\bigcup^{\infty}_{n=1} }\alpha_n$. We claim that the Jordan domain $V$
bounded by $\psi(J)$ is a component of the complement of $S^{1} \cup
\displaystyle{\bigcup^{\infty}_{n=1} } \alpha_n$ in $I \!\! D$. It is clear that $V
\subset \{x;\ ||x|| < 1 \}$ so that $V \cap S^{1} = \emptyset$. If $V \cap
\alpha_{j} \neq \emptyset$ for some $\alpha_j$, then $\stackrel{\circ}{\alpha_{j}}$,
which is connected and disjoint from $S^{1} \cup \displaystyle{\bigcup_{n \neq j}}
\alpha_{n} \supset \partial V$, is contained in $V$ and its endpoints in $\partial V$.
But this implies that the endpoints of $\gamma_j$ lie on $J$ which in turn
implies $\gamma_{j} \subset \overline{U}$. Since we assumed $U$ to be in the
complement of $S^{1} \cup \displaystyle{\bigcup^{\infty}_{n=1} } \gamma_{n}, \
\gamma_{j} \subset J = \partial U$. This would then contradict the hypothesis
that no two $\alpha_n$'s shared both endpoints. This shows that if $U$ is a
component of the complement of $S^{1} \cup \displaystyle{\bigcup ^{\infty}_{n=1} }
\gamma_n$ in $I \!\! D$ whose boundary is a Jordan curve $J, \ \psi(J)$
is a Jordan curve in $S^{1} \cup \displaystyle{\bigcup^{\infty}_{n=1} } \alpha_n$
bounding a component $V$ of the complement of $S^{1} \cup \displaystyle{
\bigcup^{\infty}_{n=1} } \alpha_{n} $ in $I \!\! D$. So if every
component $U$ of
the complement of $S^{1} \cup \displaystyle{ \bigcup^{\infty}_{n=1}} \gamma_{n}$
in $I \!\! D$
is a Jordan domain we can use Theorem~\ref{4} to extend $\psi$ to a
homeomorphism $\tilde{\psi} : I \!\!D \rightarrow I \!\! D$ and $\zeta =
\tilde{\psi}^{-1}$ will satisfy the conclusions of the lemma.
In order to see that the components $U$ of $I \!\! D \setminus \left[S^{1} \cup
\displaystyle{ \bigcup ^{\infty}_{n=1} } \gamma_{n}\right]$ are Jordan domains let
$\gamma_n$ be a cross-cut such that $\gamma_{n} \subset \partial U$. Such a
$\gamma_n$ must exist unless $\{ \gamma_{n} \} = \emptyset$ in which case the
statement is trivial. By a conformal mapping, map $I \!\! D$ onto the
upper half plane $I \!\! H$ so that $\gamma_n$ maps onto $S^{1} \cap I
\!\! H$ and $U$ maps onto $U' \subset \{x; ||x|| < 1 \} \cap I \!\! H$.
It is now not hard to see that $\partial U' \setminus S^{1}$ is the graph of a
continuous function $g: (-1,1) \rightarrow [0,1)$ such that $|g(x)| <
\sqrt{1-x^{2} }$ for every $x \in (-1,1)$. This proves that $\partial U'$
is a Jordan curve and therefore so is $\partial U$ and completes the proof of
the lemma. $\Box$
\bigskip
\begin{cor}\label{38}
Let $J$ be a Jordan curve and $\{\alpha_{n} \}^{\infty}_{n=1} $ and $\{
\beta _{n} \}^{\infty}_{n=1}$ two null collections of interior disjoint
cross-cuts in $D$, the closed disk bounded by $J$. Assume that no two
elements of each collection share both endpoints and that the endpoints
of $\alpha_i$ and $\beta_i$ coincide. Then there exists a homeomorphism
$\zeta: D \rightarrow D$ such that $\zeta|_{J}$ is the identity,
$\zeta(\alpha_{n}) = \beta_{n}$ and $\zeta$ is isotopic to the identity
through an isotopy with support in $I$, the interior of $D$.
\end{cor}
\noindent{\sc Proof:} Let $f: D \rightarrow I \!\! D$ be a homeomorphism and
$\zeta_{\alpha}: I \!\!D \rightarrow I \!\! D$ and $\zeta_{\beta}: I \!\! D
\rightarrow I \!\! D$ homeomorphisms ``straightening'' $\{f( \alpha_{n} )
\}$ and $\{ f(\beta_{n} ) \}$, which exist by Lemma~\ref{37}. Set
$\zeta = \zeta^{-1}_{\beta} \circ \zeta_{\alpha}$. It is not hard to check
that $\zeta|_{J} =$ id and $\zeta (\alpha_{n}) = \beta_{n}$. That $\zeta$
is isotopic to the identity is a consequence of Theorem \ref{5}. $\Box$
\bigskip
\begin{cor} \label{39}
Let $J$ be a Jordan curve and $\alpha, \beta$ cross-cuts in $D$ having the
same endpoints. Then there exists an isotopy of the identity taking $\alpha$
to $\beta$ with support in $I$.
\end{cor}
\noindent{\sc Proof:} The collection $\{\alpha \}$ with a single element is a null
collection so it is possible to apply Lemma~\ref{37} and
Corollary~\ref{38}. $\Box$
\bigskip
\noindent{\sc Notation:}
Let $D_{1},D_{2}$ be closed disks with $D_{1} \subset D_{2}$, and let
$L \subset \partial D_{1} \cap \partial D_{2}$ be an arc. If
$D_{1} \setminus L \subset I_{2}$, the interior of $D_{2}$, we will write
$D_{1} \subset D_{2} |_{L}$.
\bigskip
\begin{lem} \label{40}
Let $\psi: D \rightarrow D$ be a homeomorphism onto its image so that
$\psi(D) \subset D|_{\psi(L)}$, where $L$ is a closed arc, $\psi(L)
\subset L$ and $p \in L$ is a fixed point such that $\psi^{n}(x)
\rightarrow p$ for every $x \in L$. Then there exists an isotopy $h: D
\times [0,1] \rightarrow D$ of the identity such that $h|_{\partial D}$= id and if
$\zeta( \cdot ) = h
( \cdot, 1 ) $, then $(\psi \circ \zeta ) ^{n} (x) \rightarrow p$ for
every $x \in D$.
\end{lem}
\noindent{\sc Proof:} We will construct a null collection $\{ \alpha_{n} \}
^{\infty}_{n=1}$ of disjoint open cross-cuts in $I$ with the following
properties:
\begin{description}
\item[(i)] $\alpha_n$ has the same endpoints as $\psi^{n}(L)$;
\item[(ii)] if $I_{n-1}$ is the Jordan domain bounded by $\alpha_{n-1} \cup
\psi^{n-1}(L), \ \alpha_n$ is a cross-cut in $I_{n-1} \cap \psi (I_{n-1} )$,
for $n \geq 2$;
\item[(iii)] $\alpha_{n} \subset V_{\frac{1}{n} }(\psi^{n} (L) )$.
\end{description}
Set $\alpha_{1} = \psi (\partial D \setminus L )$ and $D_{1} = \psi (D)$. Notice that
$D_{1} \subset D|_{\psi (L)}$ implies $\psi (D_{1}) \subset D_{1}
|_{\psi^{2}(L) }$. By Proposition~\ref{16}, it is possible to find
$\alpha_{2} \subset \psi (I_{1}) \cap I_{1} = \psi (I_{1})$, an open
cross-cut joining the endpoints of $\psi^{2} (L)$ such that $\alpha_{2}
\subset V_{\frac{1}{2}}(\psi^{2}(L))$.
Assume we have constructed $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}$ satisfying
(i), (ii) and (iii) above. Since $\alpha_{n} \subset \psi(I_{n-1}) \cap
I_{n-1}$, and $\alpha_n$ has the same endpoints as $\psi^{n}(L), \ D_{n}
\subset \psi (D_{n-1} ) |_{\psi^{n}(L) }$ and $D_{n} \subset D_{n-1}|
_{\psi^{n}(L)}$. This latter implies that $\psi(D_{n}) \subset \psi
(D_{n-1}) |_{\psi^{n+1}(L) }$
and since $\psi^{n+1} (L)
\subset \psi^{n}(L)$, by Proposition~\ref{13}, it follows that
$\psi (D_{n}) \subset D_{n} |_{\psi^{n+1}(L)} $. By Proposition~\ref{16},
there exists $\alpha_{n+1} \subset I_{n} \cap \psi (I_n)$ an open cross-cut with
the same endpoints as $\psi^{n+1}(L)$ such that $\alpha_{n+1} \subset
V_{\frac{1}{n+1}} (\psi^{n+1} (L) )$. By induction, we construct the
collection $\{ \alpha_{n} \}^{\infty} _{n=1} $. That $\{ \alpha_{n}
\}^{\infty}_{n=1}$ is a null collection follows from the fact that
$\alpha_{n} \subset V_{\frac {1}{n}} (\psi^{n} (L) )$ and diam $\psi^{n} (L)
\rightarrow 0$. That the $\alpha_n$'s are disjoint is clear since $\alpha_{n}
\subset I_{n-1}$ for every $n \geq 1$. Notice also that no two $\alpha_n$'s
share both endpoints. This is so because the endpoints of $\alpha_n$ are
the same as those of $\psi^{n}(L)$ and if $\psi^{n} (L)$ and $\psi^{m}
(L)$ shared both endpoints, $L$ would contain more than one fixed point.
Let $\beta_{n} = \psi^{-1}(\alpha_{n+1} )$. The collection $\{ \beta _{n}
\} ^{\infty} _{n=1}$ is clearly a null collection of disjoint open
cross-cuts no two of which share both endpoints. Also, for each $n \geq
1, \ \alpha_n$ and $\beta_n$ have the same endpoints. By
Corollary~\ref{38} there exists an isotopy of the identity $h : D
\times [0,1] \rightarrow D$ such that if $\zeta (\cdot) = h ( \cdot, 1), \
\zeta (\alpha_{n} ) = \beta_n$. Then $\psi \circ \zeta (\alpha_{n}) = \psi
(\beta_{n}) = \alpha_{n+1}$ and since $\psi ( \psi^{n} (L)) = \psi^{n+1}
(L)$ we see that $\psi \circ \zeta (D_{n} ) = D_{n+1} $. But diam $D_{n}
\rightarrow 0$ as $n \rightarrow \infty$ and therefore it follows that
$(\psi \circ \zeta )^{n} (x) \rightarrow p, \ \forall x \in D$ as $n
\rightarrow \infty$ as we wanted. $\Box$
\bigskip
\begin{cor} \label{41}
For $i=1, \ldots, n$ let $D_i$ be closed disks with disjoint interiors and
$L_{i} \subset \partial
D_i$ a closed arc. Let $\psi: \pi \rightarrow \pi$ be a homeomorphism of
the plane such that $\psi(L_{i} ) \subset L_{i+1}$ and $\psi(D_{i})
\subset D_{i+1} |_{\psi(L_{i}) }$, where we let the indices ``wrap
around'', i.e., we set $n+1$ to be $1$. Assume $\psi^{n}|_{D_{1}}:
(D_{1}, L_{1} ) \rightarrow (D_{1}, L_{1} )$ satisfies the hypotheses of
Lemma~\ref{40}. Then there exists an isotopy $h: \pi \times [0,1]
\rightarrow \pi$ of the identity such that supp $h \subset D_1$ and if
$\zeta ( \cdot) = h (\cdot, 1), \ ( \psi \circ \zeta)^{kn} (x)
\rightarrow p$ as $k \rightarrow \infty$ for every $x \in D_1$ where $p
\in L_1$ is the fixed point of $\psi^{n}|_{L_{1}}$.
\end{cor}
\noindent{\sc Proof:} The proof is straightforward using Lemma~\ref{40} and
we omit the details. $\Box$
\bigskip
\section{Plane Topology}
In this section we will develop some plane topology preliminaries we will
need later on.
\bigskip
\noindent{\sc Notation}: Unless stated explicitly otherwise, we will use
the following notations: $J$ will stand for a Jordan curve, $I$ and
$O$ for its
inner and outer domains respectively, and $D$ for the closed disk $I \cup
J$. If $D$ is a closed
disk we will sometimes use $I(D)$ to denote its inner domain. Subscripts
will match in the obvious way, so that the inner domain determined by the
Jordan curve $J_1$ is $I_1$ and $D_{1}= I_{1} \cup J_1$, etc.
If $k$ is a positive integer $\underline{k}$ will stand for the set $\{1,2,
\ldots , k \}$.
\bigskip
\begin{defn}
Let $J_{1}, \dots, J_{n}$ be Jordan
curves and $L \subset J_{1} \cap \ldots \cap J_n$ an arc.
We say the closed disks $D_{1}, \ldots , D_n$ {\em lie on the same side
of}\/ $L$, denoted $D_{1}, \ldots , D_{n} |_{L}$, if $L \subset
\overline{I_{1} \cap \ldots \cap I_{n} }$ (see figure~\ref{f1}.)
\end{defn}
\begin{figure}
\begin{center}~
\psfig{file=Fig1,height=2.5in}
\end{center}
\caption{Two disks on the same side of the arc $L$.} \label{f1}
\end{figure}
\begin{prop} \label{6}
In the plane
$\pi$, let $A$ be a closed arc and $B$ a closed set such
that $A \cap B \subset \{ {\mbox{\rm endpoints of } A} \}$ and there exists
$\varepsilon > 0$ such that every component of $B \setminus (A \cap B)$
contains a
point at distance greater than $\varepsilon$ from $A$. Then there exists
a Jordan curve $J$ such that $A \setminus (A \cap B) \subset I$ and $B
\setminus (A \cap B) \subset O$
where $I$ and $O$ are the bounded and unbounded components
of ${\mathcal{C}} J$ (the complement of $J$ in $\pi$)
respectively, and $J \cap (A \cup B ) \subset A \cap B \subset \{
\mbox{\rm endpoints of }A \}$.
\end{prop}
\noindent{\sc Proof}: Let $a \in A \setminus (A \cap B)$ and $b \in B
\setminus (A \cap B)$ such
that $d(b,A) > \varepsilon$. By Theorem~\ref{2}, there exits a Jordan
curve $J$
separating $a$ from $b$, such that $J \subset V_{\varepsilon}(A)$ (the
$\varepsilon$-neighborhood about $A$) and $J \cap (A \cup B) \subset A
\cap B$.
First notice that $I \subset V_{\varepsilon}(A)$. This is so because $D
= J \cup I$ is compact and since $A$ is also compact, there exist $x \in
A$ and $y \in D$ which realize $\sup \{d(x,y); \ x \in A, \ y \in D
\}$. We claim $y \in J$ for if $y \in I$ there would exist $\delta >0$
such that $V_{\delta}(y) \subset I$ and in $V_{\delta}(y)$ there must be
a point whose distance to $x$ is greater than $d(x,y)$. This shows that
if $J$ is contained in $V_{\varepsilon}(A)$ then so is $D=J \cup I$.
Since $b \notin V_{\varepsilon}(A), \ b \in O$ and since $J$ separates
$a$ from $b, \ a \in I$. But $A \setminus (A \cap B)$ is a connected point
set disjoint from $J$ and $a \in A \setminus (A \cap B)$ so that $A
\setminus (A \cap B)
\subset I$. Also, we assumed that each connected component of $B
\setminus (A \cap
B)$ had a point outside of $V_{\varepsilon}(A)$, and therefore in $O$.
Since $B \setminus (A \cap B)$ is disjoint from $J, \ B \setminus (A
\cap B) \subset O$, as we wanted. $\Box$
\bigskip
In the proofs of the statements that follow, indexed unions and
intersections will be assumed to range from $i=1$ to $i=n$.
\bigskip
\begin{cor} \label{8}
Let $J_{1}, \ldots , J_{n}$ be Jordan curves and $L \subset
\displaystyle\bigcap^{n}_{i=1}
J_i$ a closed arc. Then there exists a Jordan curve $J$ such that
${\stackrel{\circ}{L}} \subset I$ and such that $\left( \displaystyle\bigcup^{n}_{i=1}
J_{i} \right) \setminus L \subset O$.
\end{cor}
\noindent {\sc Proof}: Let $\varepsilon_{i} = \sup \{ d(x,L); \ x \in
J_{i} \backslash L \}$. Since $L$ is a closed arc, $J_{i} \setminus \overline{L}
\neq \emptyset$ and thus $\varepsilon_{i} > 0$. Let $A = L, \ B
=\overline{ \left( \bigcup J_{i} \right) \backslash L } \
= \ \bigcup \overline{ J_{i} \backslash L}$ and
$\varepsilon = \frac{1}{2} \min \varepsilon_i$. Then $A \cap B = \{
\mbox{\rm endpoints of } A \}$ and $B \backslash (A \cap B) =
\bigcup (J_{i} \backslash L)$, every component of
which has a point at distance greater than $\varepsilon$ from $A$. We
can then apply Proposition~\ref{6} in
order to find the desired Jordan curve $J$ (see figure~\ref{f2}.) $\Box$
\begin{figure}
\begin{center}~
\psfig{file=Fig2,height=2.5in}
\end{center}
\caption{A Jordan neighborhood of a common arc $L$.} \label{f2}
\end{figure}
\begin{cor} \label{9}
With the notation of Corollary~\ref{8}, $J \cap L=$
\{endpoints of $L \}$ and thus $L$ is a cross-cut in $I$.
\end{cor}
\noindent{\sc Proof}: Since $\stackrel{\circ}{L} \subset I, \ L \subset
\overline {I} = I \cup J$ so that \{endpoints of $L\} \subset I \cup J$.
On the other hand both endpoints of $L$ are accumulation points of each
$J_{i} \backslash L$ so that \{endpoints of $L$\} $\subset \left( \overline{
\bigcup J_{i} } \right) \backslash L \subset
\overline{O} =
O \cup J$. Therefore \{endpoints of $L$\} $\subset J$. $\Box$
\bigskip
\begin{cor} \label{7}
Let $J$ be a Jordan curve, and $L \subset J$ a closed arc. Then for any
$\varepsilon >
0$ there exists an open cross-cut $\alpha \subset I \cap V_{\varepsilon}(L)$ with the
same endpoints as $L$. $\Box$
\end{cor}
\begin{prop} \label{10} The
closed disks $D_{1}, \ldots, D_n$ lie on the same side of a closed arc
$L$ if and only if there exists an open arc $\alpha \subset
\displaystyle{\bigcap^{n}_{i=1}}I_i$ with the same endpoints as $L$. As a
consequence, if $U$ is the Jordan domain bounded by $\alpha \cup L, \ U
\subset \displaystyle{\bigcap^{n}_{i=1}}I_i$.
\end{prop}
\noindent{\sc Proof}: If there exists such an arc, and $U$ is the Jordan
domain bounded by $\alpha \cup L$, by Theorem~\ref{1}, $\alpha \cup L
\subset \overline{U} \subset \left( \overline{ \bigcap
I_{i} } \right)$. Therefore $D_{1}, \ldots , D_{n} |_{L}$.
If $D_{1}, \ldots , D_{n}|_{L}$,
then $L \subset \bigcap J_{i}$ and we can use Proposition~\ref{8}
to find a Jordan curve $J$ satisfying the conclusions of that
proposition. By Corollary~\ref{9}
and Corollary~
\ref{3}, $L$ separates $I$ into two Jordan
domains $U$ and $V$. Notice that since $U \cup V = I
\setminus L, \ U \cup V$ does not intersect $L$ or
$\left(\bigcup J_{i} \right) \setminus L$, that is, $U \cup
V \subset {\mathcal{C}} \left(\bigcup J_{i} \right)$.
Since $\stackrel{\circ}{L} \subset \bigcap I_{i}$ and $\stackrel{\circ}{L}
\subset I$, while $\bigcap I_{i}$ is open and $L$ has empty interior, $(\bigcap I_{i})
\cap (U \cup V) \neq \emptyset$. Assume $U \cap ( \bigcap I_{i})
\neq \emptyset$. Since $U \subset {\mathcal{C}} ( \bigcup J_{i}
)$ and $U$ is connected, $U \subset \bigcap I _{i}$.
Now, $U$ is bounded by $\alpha \cup L$, where $\alpha$ is one of the open
arcs into which the endpoints of $L$ separate $J$ (see figure~\ref{f3}.)
Since $J \cap (\bigcup J_{i})=$ \{endpoints of $L$\}, $\alpha \cap
(\bigcup J_{i}) = \emptyset$ and since $\alpha
\subset \overline{U} \subset \overline{\bigcap
I_{i} }\, , \ \alpha \subset \bigcap I_{i}$.
Therefore $\alpha$ is the arc we were after. $\Box$
\begin{figure}
\begin{center}~
\psfig{file=Fig3,height=2.5in}
\end{center}
\caption{$D_{1}$ and $D_{2}$ are on the same side of $L$ and $\alpha$ is a
cross-cut in both $D_1$ and $D_2$.}
\label{f3}
\end{figure}
\begin{cor} [of the proof] \label{11} In Proposition~\ref{10},
$\alpha$ may be taken to lie in a $\varepsilon$-neighborhood of $L$, for any
$\varepsilon$ chosen in advance. $\Box$
\end{cor}
\noindent{\sc Remark}: The arc $\alpha$ of Proposition~\ref{10}
and Corollary~\ref{11} is
clearly a cross-cut in each of the domains $I_{i}$ for each $i \in
\underline{n} $.
\bigskip
\begin{prop} \label{12}
If $D_{1}, \ldots, D_{n}|_{L'}$ and $L$ is the connected
component of $\displaystyle\bigcap^{n}_{i=1} J_{i}$ containing $L'$, then $D_{1},
\ldots, D_{n}|_{L}$.
\end{prop}
\noindent {\sc Proof}: Let $J$ be a Jordan curve as in
Corollary~\ref{8}
and $U$ and $V$ the components
of $I \setminus L$. By Corollary~\ref{9},
$U \cup V \subset {\mathcal{C}}( \bigcup J_{i})$.
Since $\stackrel{\circ}{L'} \subset \stackrel{\circ}{L} \subset I$ and $D_{1},
\ldots ,
D_{n}|_{L'}$, by the same reasoning as in the proof of
Proposition~\ref{10},
$( \bigcap I_{i}) \cap (U \cup
V ) \neq \emptyset$, say, $( \bigcap I_{i} ) \cap U
\neq \emptyset$. Since $U \subset {\mathcal{C}} ( \bigcup
J_{i}), \ U \subset \bigcap I_{i}$. Thus, if $\partial U
= L \cup \alpha$, $\alpha$ satisfies the conditions of
Proposition~\ref{10},
which shows that $D_{1}, \dots, D_{n}|_{L}$ as
we wanted. $\Box$
\bigskip
\begin{prop} \label{13} If $D_{1},
D_{2}|_{L}, \, D_{2}, \, D_{3}|_{L'}$ and $L'' \subset L \cap L'$ then
$D_{1}, D_{2}, D_{3}|_{L''}$.
\end{prop}
\noindent {\sc Proof}: The proof is similar to the previous ones and is
left to the reader. $\Box$
\bigskip
\begin{prop} \label{14}
Let $J_{0},
J_{1}, \ldots, J_{n}$ be Jordan curves, $L \subset J_0$ an open arc and
for $i \in \underline{n}, \ L \cap \overline{I_{0} \cap I_{i} }= \emptyset$.
Then given $\varepsilon > 0$ there exists an open cross-cut $\alpha$ in
$I_0$ joining the endpoints of $L$ such that $\alpha \subset
V_{\varepsilon}(L)$ and if $U$ is the Jordan domain bounded by $\alpha
\cup L$, then $(U \cup \alpha) \cap D_{i} = \emptyset$ for each
$i \in \underline{n}$.
\end{prop}
\noindent {\sc Proof}: Consider the set $B=(J_{0} \backslash L) \cup
\left[ \overline{ I_{0} \cap ( \bigcup D_{i} ) } \right] $.
$B$ is clearly closed, since $L$ is an open arc, and we claim
that $B \cap L = \emptyset$. Since $I_0$ is open, it is an exercise to
show that $\overline{ I_{0} \cap I_{i} }= \overline{ I_{0} \cap D_{i}
}$. Thus our assumption that $L \cap \overline{I_{0} \cap I_{i} } =
\emptyset$ is equivalent to $L \cap \overline{ I_{0} \cap D_{i} } =
\emptyset $ for each $i \in \underline{n}$. Since $\overline{ I_{0} \cap
\bigcup D_{i} } =
\bigcup \overline{ I_{0} \cap D_{i} }, \ L \cap
( \overline{ I_{0} \cap D_{i} } ) = \emptyset$ and clearly $L \cap (J_{0}
\backslash L ) = \emptyset$, so that $B \cap L = \emptyset$.
Now let $C$ be a component of $\overline{I_{0} \cap D_{i} }$ for some $i
\in \underline{n}$
and assume $C \cap (J_{0} \backslash L)= \emptyset$. Since $C \cap L =
\emptyset, \ C \cap J_{0} = \emptyset$ and it follows that $C \subset I_{0}$. But
$D_i$ is
connected so that $C = D_i$. This shows that if a component of $B$ is
not that which contains $J_{0} \setminus L$, it must consist of the union of
one or more of the closed disks $D_i$. From this it is not hard to see
that there exists $\varepsilon > 0$ such that every component of $B \setminus \{
\mbox{\rm endpoints of } L \}$ contains a point at distance greater
than $\varepsilon$ from $L$. Let $A = \overline{L}$ and apply
Proposition~\ref{6}
to $A, B$ and $\varepsilon$ as above to find a Jordan curve
$J$ such that $A \setminus (A \cap B) = \overline{L} \setminus \{ \mbox{\rm endpoints of
}L\}=L \subset I, \ B \setminus (A \cap B ) = B \setminus \{\mbox{\rm endpoints of }L\}
\subset O$ and $J \cap (A \cup B ) \subset A \cap B = \{\mbox{\rm
endpoints of
}L\}$. Since $L \subset I$ and $J_{0} \setminus L \subset \overline{O}$, $L$ is a
cross-cut in
$I$ and $I \setminus L = U \cup V$, where $U$ and $V$ are disjoint
Jordan domains. Since $I \cap (J_{0} \setminus L)= \emptyset, \ I \setminus L = I \setminus [ L
\cup
(J_{0} \setminus L )]= I \setminus J_{0}$ and it follows that $I \setminus L = (I \cap I
_{0}) \cup (I \cap O_{0})$ so that either $U = I \cap I_{0}$ and
$V=I \cap O_{0}$ or vice versa. Assume $U=I \cap I_{0}$ (see
figure~\ref{f4}.) Then $U
\cap D_{i}= \emptyset$ for every $i \in \underline{n}$ since $U = I \cap I_{0}$ and $I
\cap \overline{I_{0} \cap \bigcup D_{i} }= \emptyset$. Also, if $\alpha$ is the arc of $J
\setminus (A \cup B )$ for which $\partial U = \alpha \cup L$, it is clear that $\alpha
\subset I
_{0}$ and since $\alpha \cap \overline{I _{0} \cap \bigcup D_{i} } = \emptyset, \ \alpha \cap
\bigcup D_{i} = \emptyset$. Therefore, $\alpha$ is the arc we were after. $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig4,height=2.75in}
\end{center}
\caption{The curve $J_ 1$ only touches the arc $L$ from the outer domain
determined by $J_0$.} \label{f4}
\end{figure}
\begin{defn}
Let $A$ be a Jordan curve or an arc and $L, L'
\subset A$ closed arcs. We say that $L$ and $L'$ are {\em unlinked}\/ if
either $L \subset L'$ or $L' \subset L$ or $L$ and $L'$ intersect at most at
endpoints.
\end{defn}
\noindent {\sc Remark}: Notice that saying that $L$ and $L'$ are unlinked in
a Jordan curve is more than the usual definition of their endpoints being
unlinked.
\bigskip
\begin{prop} \label{15}
Let $J$
be a Jordan curve and $L_{1}, \dots , L_{n} \subset J$ be pairwise unlinked
closed arcs. Then for every $\varepsilon > 0$ there exist disjoint open
cross-cuts $\alpha_{i} \subset I \cap V _{\varepsilon} (L_{i})$ joining the
endpoints of $L_i$, for each $i \in \underline{n}$.
\end{prop}
\noindent {\sc Proof}: We will use induction on the number $n$ of arcs. For
$n=1$, the statement is true by Corollary~\ref{7}.
Assume we have proven the statement for collections of arcs
with up to $n-1$ elements and $L_{1}, \ldots ,L_{n}$ are unlinked. Use
Corollary~\ref{7} to find an open cross-cut
$\alpha_{1} \subset I \cap V_{\varepsilon} (L_{1})$ joining the endpoints of $L_1$.
Then for $i > 1$, since $L_{i}, L_{1}$ are unlinked, either $L_{i} \subset
L_{1}$ or $L_{i} \subset \overline{J \setminus L_{1} }$. Let $L_{i_{1}}, \ldots ,
L_{i_{k}}
\subset L_{1}$ and $L_{j_{1}}, \ldots , L_{j_{m}} \subset \overline{ J \setminus L_{1}
}$.
These are collections of unlinked arcs with fewer than $n$ elements and
since $L_{i_{1}}, \ldots , L_{i_{k}} \subset L_{1} \cup \alpha_{1}$ and
$L_{j_{1}},
\ldots , L_{j_m} \subset \overline{J \setminus L_{1} } \cup \alpha_{1}$, by the inductive
hypothesis it is possible to find collections of cross-cuts $\alpha_{i_{1}},
\ldots , \alpha_{i_{k}}$ and $\alpha_{j_{1}}, \ldots , \alpha_{j_{m}}$
satisfying the conclusion of the proposition. Clearly $\alpha_{1},
\alpha_{i_{1}}, \ldots , \alpha_{i_{k}}, \alpha_{j_{1}}, \ldots , \alpha_{j_{m}}$
is the desired collection for $L_{1}, \ldots, L_{n}$. $\Box$
\bigskip
\begin{prop} \label{16}
Let
$J_{0}, \ldots , J_n$ be Jordan curves, and $L_{i} \subset J_{i} \cap
J_{0}, \ i \in \underline{n}$,
closed arcs, pairwise unlinked in $J_0$, no two of which
are identical. Assume that $D_{0}, D_{i} |_{L_{i}}$ for $i \in
\underline{n}$. Then for each $\varepsilon > 0$ there exist disjoint open cross-cuts
$\alpha_{i} \subset I_{0}$ joining the endpoints of $L_i$ such that $\alpha_{i}
\subset V_{\varepsilon} (L_{i}) \cap I_{i}$ for $i \in \underline{n}$.
\end{prop}
\noindent {\sc Proof}: The proof is by induction on the number $n$ of
curves. If $n=1$, the statement is true by Corollary~\ref{7}.
Assume we have proven the statement for
collections with fewer than $n$ curves and $J_{i}, L_{i}, \ i \in
\underline{n}$ satisfy the hypotheses above. Among $L_{1}, \ldots , L_{n}$
choose all the ones which are not contained in any other (see
figure~\ref{f5}.) We may assume without loss of generality that they are the first $k$
arcs
$L_{1}, \ldots , L_k$. Since $L_{1}, \ldots , L_k$ are pairwise unlinked
and are not contained in one another, they are pairwise disjoint except
possibly at endpoints. By Proposition~\ref{15}
there exist disjoint open cross-cuts $\gamma_{i} \subset I_{0} \cap
V_{\varepsilon}(L_{i})$
joining the endpoints of $L_i$ for $i \in \underline{k}$. Notice that since the
arcs $L_{1}, \ldots , L_k$ are disjoint except possibly at endpoints, the
interior $U_i$ of the disks bounded by $\gamma_{i} \cup L_{i}, \ i \in
\underline{k}$ are pairwise disjoint. Moreover by
Proposition~\ref{13}
the closed disk bounded by $\gamma_{i} \cup L_{i}$ is on the same side
of $L_i$ as $D_0$ (the disk bounded by $J_0$) for each $i \in \underline{k}$.
From Proposition~\ref{10} it follows
that there exist
arcs $\alpha_{i} \subset U _{i} \cap I_{i}$ joining the endpoints of $L_i$.
Since $U_{i} \subset I_{0} \cap V _{\varepsilon} (L_{i})$ it is clear that $\alpha_i$
is a
cross-cut in $I_0$ and $\alpha_{i} \subset V_{\varepsilon}(L_{i})\cap I_{i}$. Now,
the
remaining arcs are contained in $L_{1}, \ldots , L_k$, since we chose
all the arcs which were not contained in any other. For each $L_{i}, \ i
\in \underline{k}$, the arcs inside it form an unlinked collection with fewer
than $n$ elements satisfying the hypotheses of the proposition.
Therefore by the inductive assumption we are done. $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig5,height=2.5in}
\end{center}
\caption{Disjoint cross-cuts joining the endpoints of unlinked arcs.}
\label{f5}
\end{figure}
\section{Preliminaries}
We will denote the 2-dimensional plane
by $\pi$ or $\Bbb{R}^2$. A {\em Jordan curve}\/ $J$ is the
homeomorphic
image of the circle $S^1= \{ (x,y) \in \Bbb{R}^{2}; \ x^{2} + y^{2}=1 \}$ and
a {\em closed arc} $L$ is the homeomorphic image of the closed interval
$[0,1]$, the images of $\{0\}$ and $\{1\}$ being its endpoints.
By an {\em open arc}\/ we will mean the set obtained by taking the
endpoints away from a closed arc. If $L$ is a closed arc $\stackrel{\circ}{L}$
will denote the corresponding open arc.
The theorems that follow can be found in the books of Newman \cite{Ne},
Moise \cite{M} and Whyburn \cite{Wh}. Moore's book \cite{Mo} is also a
good reference although a little less palatable. \bigskip
\begin{theorem}[Jordan Curve Theorem] \label{1}
Every Jordan curve
separates the plane into two regions $I$ and $O$ and is the boundary of
each.
\end{theorem}
\begin{defn}
Let $J$ be a Jordan curve and $I$ the bounded region of $\pi \setminus J$.
We call $I$ a {\em Jordan domain} and sometimes refer to it as the {\em
inner domain} determined by $J$.
\end{defn}
\begin{theorem}[Separation Theorem (Whyburn)] \label{2}
Let $A$ be compact and $B$ closed
subsets of the plane such that $A \cap B$ is totally disconnected, $a \in
A \setminus (A \cap B), \ b \in B \setminus (A \cap B)$ and
$\varepsilon$ a positive
number. Then there exists a Jordan curve $J$ which separates $a$ and
$b$ and is such that $J \cap (A \cup B ) \subset A \cap B$ and every
point of $J$ is at distance less than $\varepsilon$ from some point of $A$.
\end{theorem}
\begin{defn}
Let $U$ be a domain in the plane and $\alpha$
an open (closed) arc whose endpoints lie on $\partial U$ and all of whose
other points lie in $U$. Such an $\alpha$ is called an {\em open (closed) cross-cut}.
\end{defn}
\begin{theorem}
If both endpoints of a cross-cut $\alpha$ in a domain $U
\subset \pi$ are on the same component of ${\mathcal{C}} U$,
the complement of $U$, then $U \backslash \alpha$ has two components and
$\alpha$ is contained in the frontier of both.
\end{theorem}
\begin{cor}\label{3} Let $J$ be a Jordan
curve, $I$ its inner domain and $\alpha \subset I$ a cross-cut. Then
$\alpha$ separates $I$ into two Jordan domains $I_{1}$ and $I_2$ whose
boundaries are $L_{1} \cup \alpha$ and $L_{2} \cup \alpha$, where $L_1$
and $L_2$ are the arcs into which the endpoints of $\alpha$ separate $J$.
\end{cor}
\begin{theorem}
\label{4} Let $f: J_{1} \rightarrow J_2$ be a
homeomorphism between the Jordan curves $J_1$ and $J_2$. Then it is
possible to extend $f$ to a homeomorphism $\tilde{f} : D_{1} \rightarrow
D_{2}$ between the closed disks $D_{1}=J_{1} \cup I_1, \ D_{2}=J_{2} \cup
I_2$ bounded by $J_1$ and $J_2$.
\end{theorem}
\begin{theorem} [Alexander] \label{5}
In $\Bbb{R}^n$, let $B^{n}= \{
x; ||x|| \leq 1 \}$ and $S^{n-1}=\partial B^{n} = \{x; ||x||=1 \}$ and
$f: B^{n} \rightarrow B^n$ a homeomorphism such that $f|_{S^{n-1}}
\equiv$ identity. Then $f$ is isotopic to the identity
through an isotopy that fixes the boundary pointwise.
\end{theorem}
\bigskip
\section{The Proof of the Main Theorem}
In what follows $f: \pi \rightarrow \pi$ will be a uniformly continuous
homeomorphism of the plane and $\{ D_{i} \}^{L} _{i=1}$ a pruning
collection for $f$. As we pointed out before, we may and will assume
that the subscripts reflect the partial order $\geq$ in $\{ D_{i}
\}^{L}_{i=1}$, in the sense that, if $i > j$ then $D_{i} \not\leq D_j$.
In particular, if $i > j $ then $D_{i} \not\prec D_j$.
\bigskip
\begin{defn}
For each $i \in \underline{L}$ we define four numbers $n(i),\ N(i), \ m(i),$ $ M(i)
\in {\Bbb{Z}} \cup \{ \pm \infty \}$ as follows:
\begin{description}
\item[(i)] $n(i)$ is the smallest integer $\geq 1$ such that $f^{n(i)}
(D_{i} ), f(D_{j} ) | _{f^{n(i) }(C_{i})}$ for some $j \in \underline{L}$ or
$n(i) = \infty$ if $f^{k} (D_{i} ), f(D_{j} ) \not|_{f^{k}(C_{i} ) }$ for
every $k \geq 1$ and $j \in \underline{L}$;
\item[(ii)] $N(i) = \left\lceil \frac{n(i)}{2} \right\rceil$, i.e., the
smallest integer greater than or equal to $\frac{n(i)}{2}$, if $n(i) <
\infty$ or $N(i)= \infty$ if $n(i)= \infty$;
\item[(iii)] $m(i)$ is the largest integer $\leq 0$ such that $f^{m(i)}
(D_{i} ), D_{j} | _{f^{m(i)} (E_{i} )}$ for some $j \in \underline{L}$ or $m(i)=
- \infty$ if $f^{k} (D_{i} ), D_{j} \not| _{f^{k}(E_{i})}$ for
every $k
\leq 0$ and $j \in \underline{L}$;
\item[(iv)] $M(i) = \left\lceil \frac{m(i)}{2} \right\rceil$ if $m(i) >
- \infty$ or $M(i) = - \infty$ if $m(i) = - \infty$.
\end{description}
\end{defn}
The following proposition is a straightforward consequence of the
definitions and we omit the proof.
\bigskip
\begin{prop}\label{66}
If $n(i), \ N(i), \ m(i)$ and $M(i)$ are finite the following holds for
each $i \in \underline{L}$:
\begin{description}
\item[(i)] $n(i) = 2 N(i) - \delta$ and $m(i) = 2 M(i) - \delta '$ where
$\delta, \delta'= 0$ or $1$;
\item[(ii)] $f^{N(i)} (D_{i} ),\ f^{-N(i) + \delta +1 } (D_{j}) |
_{f^{N(i) } (C_{i} )}$
for some $j \in \underline {L}$ but for $-N(i) + \delta
+2 \leq k \leq N(i) -1$,
\[\ f^{N(i)} (D_{i}), f^{k} (D_{j}) \not|_{f^{N(i) } (C_{i})}\]
for any $j \in \underline{L}$;
\item[(iii)] for $1 \leq n < N(i), \ -n+1 \leq k \leq n-1, \
f^{n}(D_{i}), f^{k} (D_{j}) \not| _{f^{n}(C_{i}) }$ for any $j \in \underline{L}$;
\item[(iv)] $f^{M(i)}(D_{i}), f^{-M(i) + \delta'} (D_{j}) |_{f^{M(i)}
(E_{i})}$ for some $j \in \underline{L}$ but for $M(i) +1 \leq k \leq -M(i)+\delta'
+1$,
\[ f^{M(i)} (D_{i}), f^{k} (D_{j}) \not| _{f^{M(i)}(E_{i})}\]
for any $j \in \underline{L}$;
\item[(v)] for $M(i) < m \leq 0$ and $m +1 \leq k \leq -m+1, \ f^{m}
(D_{i} ), f^{k}(D_{j} ) \not| _{f^{m}(E_{i} ) }$ for any $j \in \underline{L}$.
$\Box$
\end{description}
\end{prop}
Recall that we defined $c$- and $e$-equivalence relations in a collection
$\{D_{i} \}^{L}_{i=1}$ of $(c,e)$-disks and in Proposition~\ref{26}
proved that the equivalence classes have distinguished representatives.
The following proposition is again an easy consequence of the definitions.
\bigskip
\begin{prop}\label{42}
For each $i \in \underline{L}, \ n(i) > 1$ if and only if $D_i$ is the
distinguished representative in its $c$-equivalence class in $\{D_{i}
\}^{L} _{i=1}$. Likewise, $m(i) < 0$ if and only if $D_i$ is the
distinguished representative in its $e$-equivalence class in $\{D_{i}
\}^{L}_{i=1}$. $\Box$
\end{prop}
We now start the construction of the isotopy for the proof of the main
theorem. If the pruning collection contains only one $(c,e)$-disk $D_1$
and $N(1)=\infty$ and $M(1)=-\infty$, most of what is presented
from here to the end of this section simplifies considerably.
We suggest that the reader concentrate on this case upon a first reading.
\bigskip
Recall that ${\mathcal{V}}_{i}(n) \subset f^{n} (I_{i} )$ is a Jordan domain
containing $\alpha_{i}(n)$ and $f^{-1} (\alpha_{i} (n+1) )$ as cross-cuts with
the same endpoints. Using Corollary~\ref{31} construct, for each $i
\in \underline{L}$ and $M(i) \leq n < N(i)$ an isotopy $k_{i,n} : \pi \times [0,1]
\rightarrow \pi$ of the identity such that supp $k_{i,n} \subset
{\mathcal{V}}_{i}(n)$ and $k_{i,n} (\alpha_{i} (n),1 ) = f^{-1} (\alpha_{i} (n+1)
)$. If
$n < M(i)$ or $n \geq N(i)$ we let $k_{i,n} \equiv$ identity. Set
$\zeta_{i,n} (\cdot) = k_{i,n} (\cdot , 1)$. For $n \in {\Bbb{Z}}$ define
$$k_{n}(x,t) =\left\{
\begin{array}{ll}
k_{1,n} (x,Lt), & t \in \left[ 0, \displaystyle\frac{1}{L} \right] \\
\\
\zeta_{1,n} (k_{2,n} (x, Lt -1)), & t \in \left[ \displaystyle\frac{1}{L},
\displaystyle\frac{2}{L} \right] \\
\\
\zeta_{1,n} \circ \zeta_{2,n} (k_{3,n} (x, Lt -2)), & t \in \left[
\displaystyle\frac{2}{L}, \displaystyle\frac{3}{L} \right] \\
\\
\vdots & \vdots \\
\\
\zeta_{1,n} \circ \zeta_{2,n} \circ \ldots \circ \zeta _{L-1,n}
(k_{L,n}(x, Lt -L+1)), & t \in \left[ \displaystyle\frac{L-1}{L}, 1 \right]
\end{array}
\right. $$
\noindent and let $\zeta_{n} (\cdot) = k_{n} (\cdot, 1)$. Now let $r_{0} =
k_0$ and for $n \geq 1$
$$
r_{n} (x,t) = \left\{
\begin{array}{lr}
k_{-n} (x, 2t), & t \in \left[ 0, \displaystyle\frac{1}{2} \right] \\
\\
\zeta_{-n} (k_{n} (x, 2t -1) ), & t \in \left[ \displaystyle\frac{1}{2}, 1
\right] \\
\end{array}
\right.
$$
\noindent and set $\rho_{n} (\cdot) = r _{n} (\cdot, 1 )$ for $n \geq 0$.
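\noindent{\sc Remark}: Evaluating these concatenations at $t=1$ gives the
time-one maps explicitly, namely
$$\zeta_{n} = \zeta_{1,n} \circ \zeta_{2,n} \circ \ldots \circ \zeta_{L,n}
\qquad \mbox{\rm and} \qquad
\rho_{n} = \zeta_{-n} \circ \zeta_{n} \ \mbox{\rm for } n \geq 1,$$
while $\rho_{0} = \zeta_{0}$.
\bigskip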
Recall that the locus $\overline{P} = \displaystyle\bigcup ^{L} _{i=1} D_i$ of a pruning
collection $\{ D_{i} \} ^{L} _{i=1}$ was called a pruning front. We will
denote the union of the interiors $\displaystyle\bigcup^{L}_{i=1} I_i$ by
$P$.
\bigskip
\begin{prop}\label{43}
The isotopies $r_n$ just defined have the following properties:
\begin{description}
\item[(i)] supp $r_{n} \subset [ f^{n} (P) \cup f^{-n} (P) ] \setminus
\displaystyle\bigcup ^{n-1}_{-n+1} f^{k} (\overline{P} )$ for every $n \geq 0$ so that
if $n \neq m, \ \mbox{\rm supp } r_{n} \cap \mbox{\rm supp }r_{m} = \emptyset$;
\item[(ii)] since $f$ is uniformly continuous, the diameters of the
connected components of supp $r_{n}$ converge to $0$ as $n \rightarrow
\infty$;
\item[(iii)] for each $i \in \underline{L}$, if $n < N(i), \ \rho_{n} (\alpha_{i}
(n) ) = f^{-1} (\alpha_{i} (n+1))$ and if $-n \geq M(i)$, $\rho_{n} (\alpha_{i}
(-n) ) = f^{-1} (\alpha_{i} (-n +1))$.
\end{description}
\end{prop}
\noindent{\sc Proof:} From the definition of $k_n$ it is clear that for $n \in
{\Bbb{Z}}$, $$\mbox{\rm supp }k_{n} \subset \bigcup \{ {\mathcal{V}}_{i} (n); \ M(i)
\leq n < N(i) \}$$
so that for $n \geq 0$
$$ \mbox{\rm supp }r_{n} \subset \bigcup \{ {\mathcal{V}}_{i} (n); \ 1 \leq n <
N(i) \} \cup \bigcup \{ {\mathcal{V}}_{i} (-n); \ M(i) \leq -n \leq 0 \} $$
\noindent and since ${\mathcal{V}}_{i}(n) \subset f^{n} (I_{i})$, it is clear
that
$$\mbox{\rm supp } r_{n} \subset f^{n} (P) \cup f^{-n}(P)= \displaystyle \bigcup^{L}_{i=1} f^{n}
(I_{i} ) \cup \displaystyle \bigcup^{L}_{i=1} f^{-n}(I_{i} ).$$ There is nothing
more to prove for $n=0$ (recall that $\displaystyle \bigcup^{-1}_{1} f^{k}(P) =
\emptyset$, by our convention) and we may assume that $n \geq 1$ (see figure~17.)
If $1 \leq n < N(i)$, it follows from Proposition \ref{66} that $$f^{n}
(\!D_{i}\!), f^{k} (\!D_{j}\!) \not|_{f^{n}(\!C_{i}\!) }$$ for any $j \in
\underline{L}$ and
$-n +1 \leq k \leq n-1$. Since $\{ \gamma_{i}(n) \}^{L}_{i=1}$ is an
$(\varepsilon_{n},c)$-collection compatible with $$\{ ( f^{k} (D_{j}), \beta_{j}
(k) ) : \ j \in \underline{L}, \ -n +1 \leq k \leq n-1 \},$$ by
Proposition~\ref{30}, we must have $\left[ I ^{c} (\gamma_{i} (n) ) \cup
\gamma_{i} (n) \right] \cap f^{k} (D_{j} ) = \emptyset$ for every $j \in
\underline{L}$ and $-n+1 \leq k \leq n-1$. But ${\mathcal{V}}_{i} (n) \subset I^{c}
(\gamma_{i} (n) )$ by Proposition~\ref{32} and taking the union
over $j \in \underline{L}$ and $-n+1 \leq k \leq n-1$ we see that
$$ {\mathcal{V}}_{i}(n) \cap \displaystyle\bigcup^{n-1}_{-n+1} f^{k} (\overline{P} ) = \emptyset $$
\noindent from which it follows that
$$ \bigcup \{ {\mathcal{V}}_{i} (n);\ 1 \leq n < N(i) \} \cap
\displaystyle\bigcup^{n-1}_{-n+1} f^{k} (\overline{P}) = \emptyset \ . $$
If $M(i) < m \leq 0$, it follows from Proposition~\ref{66} that
$$f^{m}(\!D_{i}\!), f^{k}(\!D_{j} \!) \not|_{f^{m}(\!E_{i}\!)}$$ for any $j
\in \underline{L}$
and $m+1 \leq k \leq -m+1$. Again by Proposition~\ref{30}, $\{\beta_{i}
(m) \}^{L} _{i=1}$ is an $(\varepsilon_{|m|},e)$-collection compatible with $$\{ (
f^{k} (D_{j} ), \beta _{j} (k) ); \ j \in \underline{L}, \ m+1 \leq k \leq -m+1
\},$$ which implies that $[ I^{e} (\beta _{i} (m) ) \cup \beta _{i} (m) ]
\cap f^{k} (D_{j} ) = \emptyset$ for every $j \in \underline{L}$ and $m+1 \leq k \leq
-m+1$, and thus that $[I^{e} (f^{-1} (\beta _{i} (m) )) \cup f^{-1} (\beta
_{i} (m) ) ] \cap f^{k} (D_{j}) = \emptyset$ for every $j \in \underline{L}$ and $m
\leq k \leq -m$. Letting $m = -n +1$ and noticing that ${\mathcal{V}}_{i}(-n)
\subset I^{e} (f^{-1} (\beta _{i} (-n+1)))$, by Proposition~\ref{32},
what we have just seen implies that for $M(i) \leq -n < 0$
$$ {\mathcal{V}}_{i}(-n) \cap \displaystyle\bigcup ^{n-1}_{-n+1} f^{k} (\overline{P}) = \emptyset $$
\noindent from which it follows that
$$ \bigcup \{ {\mathcal{V}}_{i} (-n); \ M(i) \leq -n < 0 \} \cap \displaystyle
\bigcup^{n-1}_{-n+1} f^{k} ( \overline{P} ) = \emptyset \ . $$
This finishes the proof of (i).
\bigskip
In order to prove (ii) notice that supp $r_{n} \subset \displaystyle{ \bigcup
^{L}_{i=1} } \{ {\mathcal{V}}_{i} (n) \cup {\mathcal{V}}_{i} (-n) \}$ and from
Propositions~\ref{30} and \ref{32}, for $n \geq 1, {\mathcal{V}}_{i} (n) \subset
I^{c} ( \gamma_{i} (n) ) \subset V_{\epsilon_{n}}(f^{n}(C_{i}) ) $ and
$${\mathcal{V}}_{i}
(-n) \subset I^{e} (f^{-1} (\beta _{i} (-n +1 ))) = f^{-1}(I^{e}(\beta_{i}
(-n+1))) \subset f^{-1} (V_{\epsilon_{n-1}} (f^{-n+1}(E_{i} ))).$$
From the $(c, e)$ dynamic assumption, diam $f^{n}(C_{i}) \rightarrow 0$ as
$n \rightarrow \infty$ and diam $f^{m} (E_{i}) \rightarrow 0$ as $m \rightarrow - \infty$. Since
$\epsilon_{n} \rightarrow 0$, it is clear that diam ${\mathcal{V}}_{i} (n) \rightarrow 0$, as $n \rightarrow
\infty$ and from the uniform continuity of $f$ we can also conclude that
diam ${\mathcal{V}}_{i} (-n) \rightarrow 0$ as $n \rightarrow \infty$. It is now easy to see
that the
connected components of $\displaystyle{ \bigcup^{L} _{i=1} } \{ {\mathcal{V}}_{i} (n) \cup
{\mathcal{V}}_{i} (-n) \}$ have diameters converging to zero as $n \rightarrow \infty$.
This proves (ii).
\bigskip
Let us now look at (iii). From the way we indexed the pruning
collection, if $i > j, \ D_{i} \not\prec D_j$ which implies $f^{n}(D_{i})
\not\prec f^{n} (D_{j})$ for any $n \in \Bbb{Z}$. From
Proposition~\ref{35} it follows that $f^{-1} (\alpha_{i} (n+1)) \cap
{\mathcal{V}}_{j}
(n) = \emptyset$. Similarly, if $l > i$ the same proposition implies that
$\alpha_{i} (n) \cap {\mathcal{V}}_{l}(n)=\emptyset$.
Since each $k_{i,n}$ is an isotopy of the identity with support contained
in ${\mathcal{V}}_{i}(n)$, and $\zeta_{i,n}(\cdot) =k_{i,n} (\cdot, 1)$, we have
supp
$\zeta_{i,n} \subset {\mathcal{V}}_{i} (n)$ and, from what we said above, we see
that if $j < i$, $\ \zeta_{j,n} (f^{-1} (\alpha_{i} (n+1))) = f^{-1} (\alpha_{i}
(n+1))$ and that if $l > i$, $ \zeta_{l,n} (\alpha_{i}(n) )= \alpha_{i}(n)$.
Thus, for any $M(i) \leq n < N(i)$
\begin{eqnarray*}
\zeta_{n} (\alpha_{i} (n) ) & = &\zeta_{1,n} \circ \ldots \circ \zeta_{i,n}
\circ \ldots \circ \zeta_{L,n} (\alpha_{i}(n) ) \\
&= &\zeta_{1,n} \circ \ldots \circ \zeta_{i,n} (\alpha_{i}(n)) \\
& = &\zeta_{1,n} \circ \ldots \circ \zeta_{i-1,n} (f^{-1} (\alpha_{i}
(n+1))) \\
& = & f^{-1} (\alpha_{i} (n+1)).
\end{eqnarray*}
From the definition of pruning collections, $f^{-n} (D_{i}) \not\succ f^{n}
(D_{j})$ for any $n \geq 1$ and any $i,j \in \underline{L}$ and, by
Proposition~\ref{35}, it follows that $f^{-1} (\alpha_{i}(n+1) )$ $\cap
{\mathcal{V}}_{j} (-n) = \emptyset$ and ${\mathcal{V}}_{i} (n) \cap \alpha_{j} (-n) = \emptyset$.
Thus we can
conclude that for any $i \in \underline{L}, f^{-1} (\alpha_{i} (n+1)) \cap {\mbox
{\rm supp } }\zeta_{-n} = \emptyset$ and that $\alpha_{i}(-n) \cap$ supp
$\zeta_{n} = \emptyset$, for $n \geq 1$. Therefore if $1 \leq n < N(i)$,
\begin{eqnarray*}
\rho_{n} (\alpha_{i} (n)) & = &\zeta_{-n} \circ \zeta_{n} (\alpha_{i} (n)) \\
&= &\zeta_{-n} (f^{-1} (\alpha_{i} (n+1))) \\
&= &f^{-1}( \alpha_{i} (n+1))
\end{eqnarray*}
\noindent and if $M(i) \leq -n \leq -1$,
\begin{eqnarray*}
\rho_{n} (\alpha_{i} (-n) ) &= &\zeta_{-n} \circ \zeta_{n} (\alpha_{i} (-n) ) \\
& = &\zeta_{-n}( \alpha_{i} (-n)) \\
& = & f^{-1} (\alpha_{i} (-n+1)).
\end{eqnarray*}
This completes the proof since, for $n=0, \ \rho_{0} = \zeta_{0}$ and this
case had already been taken care of. $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig17,height=3in}
\end{center}
\caption{The first few ${\mathcal{V}}(n)$'s for a pruning collection
with only one ($c,e$)-disk $D$.}
\label{f17}
\end{figure}
\begin{cor} \label{44}
The sequence $R_{n} = \displaystyle\bigcup^{n}_{i=0} r_{i}$ is a Cauchy
sequence in the uniform topology and converges to an isotopy $R: \pi
\times [0,1] \rightarrow \pi$. If we set $\rho(\cdot) = R (\cdot, 1)$, for each $i
\in \underline{L}$ and $M(i) \leq n < N(i)$, $\rho(\alpha_{i} (n) ) = f^{-1} (\alpha_{i}
(n+1))$. Moreover supp $R \subset \bigcup \{ {\mathcal{V}}_{i} (n); \ i \in
\underline{L}, \ M(i) \leq n < N(i) \}$.
\end{cor}
\noindent{\sc Proof}:
Given $\varepsilon > 0$, by Proposition~\ref{43}, there exists $K$ large
enough so that all the connected components of supp $r_{m}$ have diameter
smaller than $\varepsilon$ if $m \geq K$. Let $n > m \geq K$. We
then have
\begin{eqnarray*}
d (R_{m}, R_{n} ) & = & \sup_{(x,t)} d (R_{m}(x,t), R_{n} (x,t)) \\
& = & \sup_{(x,t)} d (R_{m} (x,t), [ R_{m} \cup \displaystyle\bigcup ^{n}
_{m+1} r_{i} ] (x,t) ) \\
& = & \sup_{(x,t)} d (x, \displaystyle \bigcup^{n}_{m+1} r_{i} (x,t)) \\
& < & \varepsilon
\end{eqnarray*}
\noindent where the last inequality is a consequence of Proposition~\ref{36}.
This shows that $R_n$ is a Cauchy sequence. The remaining statements
are readily proven and we leave them to the reader. $\Box$
\bigskip
\begin{prop} \label{45}
Let $R$ and $\rho$ be as in Corollary~\ref{44}. Then for each $i \in
\underline{L}$ we have:
\begin{description}
\item[(i)] $\rho (D^{c} (\alpha_{i}(n))) = D^{c} (f^{-1}(\alpha_{i}(n+1))) \quad
{\mbox{\rm for }} 1 \leq n < N(i)$ and
\item[(ii)] $\rho (D^{e} ( \alpha_{i}(m))) = D^{e} (f^{-1} (\alpha_{i} (m+1)))
\quad \mbox{\rm for } M(i) \leq m < 0$.
\end{description}
\end{prop}
\noindent {\sc Proof}:
Notice that supp $R \subset \bigcup \{ {\mathcal{V}}_{i}(n); \ i \in \underline{L}, \
M(i) \leq
n < N(i) \}$. By Corollary~\ref{67}, for $n \geq 1, f^{n} (C_{i}) \cap \
\mbox{\rm supp } R = \emptyset$ and by Corollary~\ref{44}, if $1 \leq n <
N(i), \rho(\alpha_{i}(n)) = f^{-1} (\alpha_{i}(n+1))$. Therefore
\begin{eqnarray*}
\rho(f^{n} (C_{i} ) \cup \alpha_{i} (n) ) & = & \rho (f^{n} (C_{i}) ) \cup \rho
(\alpha_{i} (n)) \\
& = & f^{n} (C_{i}) \cup f^{-1} (\alpha_{i} (n+1))
\end{eqnarray*}
But
$$f^{n} (C_{i}) \cup \alpha_{i} (n) = \partial D^{c} (\alpha_{i} (n))$$
\noindent and
$$f^{n}
(C_{i} ) \cup f^{-1} ( \alpha_{i} (n+1)) = \partial D^{c} ( f^{-1} ( \alpha_{i}
(n+1)))$$
This completes the proof of (i). (ii) is proven analogously.
$\Box$
\bigskip
\begin{defn}
For each $n \geq 0$, let $\psi_{n} = f \circ \rho_{n}$, $\Psi_{n} = \displaystyle
\bigcup^{n} _{i=0} \psi_{i}$ and $\Psi = f \circ \rho$, i.e., $\psi_{n} (
\cdot) = f \circ r_{n} (\cdot, 1)$, $\Psi_{n} (\cdot ) = f \circ R_{n} (
\cdot, 1)$ and $\Psi (\cdot ) = f \circ R (\cdot, 1).$
\end{defn}
Recall that if $\xi: X \rightarrow X$ is a homeomorphism we defined
$$ \mbox{\rm supp } \xi = {\mathcal{C}} \{ x \in X; \quad \xi(x) = x \}. $$
\begin{lem} \label{68}
Let $\xi, \eta: X \rightarrow X$ be homeomorphisms so that supp $\xi \subset A$
and supp $\eta \subset B$. Then $A \cup B = A \cup \xi \circ \eta (B)$.
\end{lem}
\noindent{\sc Proof}:
First notice that if supp $\xi \subset A$ then $\xi(A) = A$ since $\xi
({\mathcal{C}}A) = {\mathcal{C}}A$ and $\xi$ is a homeomorphism. Therefore, since
supp $\xi \circ \eta \subset A \cup B$ we have
$$ A \cup B = \xi \circ \eta ( A \cup B) = \xi (A) \cup \xi (B) = A \cup \xi
(B) = A \cup \xi \circ \eta (B) \quad \Box $$
\begin{prop} \label{46}
For $n \geq 0$,
\begin{description}
\item[(i)] $f^{n} (P) \cup f^{-n} (P) = \rho _{n} (f^{n} (P) ) \cup
f^{-n} (P)$;
\item[(ii)] $f^{n} (P) \cup f^{-n} (P) = f^{n} (P) \cup \rho^{-1}_{n}
(f^{-n}(P) )$.
\end{description}
\end{prop}
\noindent{\sc Proof}:
For $n=0$, supp $\rho_{0} \subset P$ and the result follows. For $n \geq
1, \rho_{n} = \zeta_{-n} \circ \zeta_{n}$ and supp $\zeta_{-n} \subset
f^{-n} (P)$ and supp $\zeta_{n} \subset f^{n}(P)$. The results now follow
as easy applications of Lemma~\ref{68} (see figure 18.) $\Box$
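For instance, (i) follows from Lemma~\ref{68} by taking $\xi = \zeta_{-n}$,
$\eta = \zeta_{n}$, $A = f^{-n}(P)$ and $B = f^{n}(P)$, which gives
$$ f^{-n}(P) \cup f^{n}(P) = f^{-n}(P) \cup \zeta_{-n} \circ \zeta_{n}
(f^{n}(P)) = f^{-n}(P) \cup \rho_{n} (f^{n}(P)) \ . $$
Statement (ii) is obtained in the same way, with $\xi = \zeta^{-1}_{n}$,
$\eta = \zeta^{-1}_{-n}$, $A = f^{n}(P)$ and $B = f^{-n}(P)$, since
$\rho^{-1}_{n} = \zeta^{-1}_{n} \circ \zeta^{-1}_{-n}$.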
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig18,height=2.5in}
\end{center}
\caption{The homeomorphism $\rho_n$.}
\label{f18}
\end{figure}
The following corollary is immediate from the definition of $\psi_{n}$.
\begin{cor} \label{47}
For $n \geq 1$,
\begin{description}
\item[(i)] $f^{n+1}(P) \cup f^{-n+1} (P) = \psi_{n} (f^{n} (P) ) \cup
f^{-n+1} (P) $;
\item[(ii)] $f^{n} (P) \cup f^{-n} (P) = f^{n} (P) \cup \psi^{-1}_{n}
(f^{-n+1} (P) ). \quad \Box$
\end{description}
\end{cor}
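Indeed, since $\psi_{n} = f \circ \rho_{n}$, applying $f$ to both sides of
Proposition~\ref{46} (i) gives
$$ f^{n+1}(P) \cup f^{-n+1}(P) = f( \rho_{n} (f^{n}(P)) ) \cup f^{-n+1}(P)
= \psi_{n} (f^{n}(P)) \cup f^{-n+1}(P) $$
which is (i), and (ii) follows from Proposition~\ref{46} (ii) upon noticing
that $\psi^{-1}_{n} (f^{-n+1}(P)) = \rho^{-1}_{n} (f^{-n}(P))$.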
We now state and prove an important technical proposition to be used
later. We will use the following
\bigskip
\begin{defn}
Let $P(0) =P$ and define inductively $P(n) = \Psi_{n-1} (P(n-1))$
and $P(-n) = \Psi^{-1}_{n} (P (-n+1))$, for every $n \geq 1$.
\end{defn}
\begin{prop} \label{48}
With the notation above $P(1) = f(P)$ and
\begin{description}
\item[(i)] for $n \geq 1, \quad \displaystyle\bigcup^{n-1}_{-n+1} f^{k}(P) = \displaystyle
\bigcup^{n-1}_{-n+1} P(k)$;
\item[(ii)] for $n \geq 2, \quad \displaystyle \bigcup^{n}_{-n+2} f^{k}(P) = \displaystyle
\bigcup^{n}_{-n+2} P (k)$.
\end{description}
\end{prop}
\noindent{\sc Proof}:
We will use induction on $n$. For $n=1$, (i) states that $P=P(0)$, which
is just the definition, whereas $P(1) = \Psi_{0}(P) = \psi_{0}(P) = f
\rho_{0}(P)$ and, since supp $\rho_{0} \subset P, \ \rho_{0} (P) =P$, which
shows that $P(1) = f(P)$ (see figure 19.)
We now show that $\displaystyle \bigcup^{2}_{0} f^{k}(P) = \displaystyle \bigcup^{2}_{0}
P(k)$, but before we start, let us point out that, from the definitions of
$\psi_n$ and $\Psi_n$, the following is clear, for each $n \geq 0$:
\begin{description}
\item[a)] $\Psi_{n} = f$ in the complement of supp $R_{n} \subset \displaystyle
\bigcup^{n}_{-n} f^{k} (P)$;
\item[b)] $\psi_{n} =f$ in the complement of supp $\rho_{n} \subset [ f^{n}
(P) \cup f^{-n} (P) ] \setminus \displaystyle \bigcup^{n-1}_{-n+1} f^{k} ( \overline{P} )$;
\item[c)] $\Psi_{n} = \left\{ \begin{array}{ll}
\Psi_{n-1} &\mbox{\rm within } \displaystyle \bigcup^{n-1}_{-n+1}
f^{k} (P) \\
\psi_{n} &\mbox{\rm without } \displaystyle\bigcup^{n-1}_{-n+1}
f^{k} (P) \end{array} \right.; $
\item[d)] $\Psi^{-1}_{n} = \left\{
\begin{array}{ll}
\Psi^{-1}_{n-1} &\mbox{\rm within } \displaystyle\bigcup^{n}_{-n+2} f^{k}(P)
= \Psi_{n-1} \left( \displaystyle\bigcup^{n-1}_{-n+1} f^{k} (P) \right) \\
\psi^{-1}_{n} & \mbox{\rm without } \displaystyle\bigcup^{n}_{-n+2} f^{k} (P)
= \Psi_{n-1} \left( \displaystyle\bigcup^{n-1}_{-n+1} f^{k} (P) \right)
\end{array} \right. . $
\end{description}
Having said this, let us go back to the proof of $\displaystyle\bigcup^{2}_{0}
f^{k}(P) = \displaystyle\bigcup^{2}_{0}P(k)$. Notice that from c) above we have
$$ P(2) = \Psi_{1} (P(1)) = \left\{ \begin{array}{ll}
\Psi_{0}(P(1) ) \quad \mbox{\rm within } & P \\
\psi_{1} (P(1)) \quad \mbox{\rm without } & P
\end{array}
\right. $$
\noindent and, since we have seen that $P(0)=P$ and $P(1) = f(P)$,
\begin{eqnarray*}
\Psi_{1}(P(1)) &= &\Psi_{1} (P(1) \cap P(0) ) \cup \Psi_{1} (P(1) \setminus
P(0) ) \\
& = &[\Psi_{1} (P(1)) \cap \Psi_{0}(P(0)) ] \cup [ \Psi_{1} (f(P) \setminus
P ) ] \\
& = &[ P(2) \cap P(1) ] \cup [ \psi_{1} (f(P) \setminus P) ] \\
& = &[ P (2) \cap f(P) ] \cup [ \psi_{1} (f(P)) \setminus f (P) ]
\end{eqnarray*}
\noindent where the last equality is a consequence of b) above. Thus
\begin{eqnarray*}
\displaystyle\bigcup^{2}_{0} P(k) &= &\Psi_{1} (P(1)) \cup \displaystyle\bigcup^{1}_{0}
P(k) \\
&= &\Psi_{1} (P(1)) \cup \displaystyle\bigcup^{1}_{0} f^{k}(P) \\
& = &\psi_{1} (f(P)) \cup \displaystyle\bigcup^{1}_{0} f^{k}(P) \\
& = &\displaystyle\bigcup^{2}_{0} f^{k} (P) \\
\end{eqnarray*}
\noindent where the last equality is a consequence of Corollary~\ref{47}
(i), with $n=1$.
\bigskip
We now show that $\displaystyle\bigcup^{1}_{-1}f^{k}(P) = \displaystyle\bigcup^{1}_{-1}
P(k)$. From d) above we have
$$ P(-1) = \Psi^{-1}_{1}(P(0))= \left\{ \begin{array}{l}
\Psi^{-1}_{0} (P(0)) \quad \mbox{\rm within } f(P)= P(1) \\
\psi^{-1}_{1}(P(0)) \quad \mbox{\rm without } f (P) = P(1)
\end{array}
\right. $$
\noindent so that
\begin{eqnarray*}
\Psi^{-1}_{1} (P(0)) &= &\Psi^{-1}_{1} (P(0) \cap P(1)) \cup
\Psi^{-1}_{1} (P(0) \setminus P(1)) \\
& = & [ \Psi^{-1}_{1} (P(0)) \cap \Psi^{-1}_{0} (P(1)) ] \cup
\Psi^{-1}_{1} ( P \setminus f(P)) \\
& = & [ P(-1) \cap P(0) ] \cup [ \psi^{-1}_{1} (P \setminus f(P)) ] \\
& = & [P(-1) \cap P ] \cup [ \psi^{-1}_{1}(P) \setminus P ]
\end{eqnarray*}
\noindent where the last equality is a consequence of b). From this we see that
\begin{eqnarray*}
\displaystyle\bigcup^{1}_{-1} P(k) &= &\displaystyle\bigcup^{1}_{0} P(k) \cup
\Psi^{-1}_{1} (P(0)) \\
\\
&= &\displaystyle\bigcup^{1}_{0} f^{k}(P) \cup \Psi^{-1}_{1} (P(0))\\
\\
& = &\displaystyle\bigcup^{1}_{0} f^{k} (P) \cup \psi^{-1}_{1} (P)\\
\\
&= &\displaystyle \bigcup^{1}_{-1} f^{k}(P) \\
\end{eqnarray*}
\noindent where the last equality is again a consequence of
Corollary~\ref{47} (ii), with $n=1$. This completes the proof of (i)
and (ii) for $n=2$. Suppose we have proven that (i) and (ii) hold for $2
\leq n \leq N$. From this assumption the assertions below follow:
\begin{description}
\item[1)] $\displaystyle \bigcup^{n}_{-n+1} f^{k}(P) = \displaystyle\bigcup^{n}_{-n+1}
P(k)$, for $2 \leq n \leq N$, by just taking the union of (i) and (ii).
\item[2)] $f^{n}(P) = P(n)$ and $f^{-n+1}(P) = P (-n+1)$ in the
complement of $\displaystyle\bigcup^{n-1}_{-n+2} f^{k}(P) =
\displaystyle\bigcup^{n-1}_{-n+2} P(k)$, for $0 \leq n \leq N$. This can be seen
as follows: by (i), $\displaystyle\bigcup^{n}_{-n+2} f^{k}(P) = \displaystyle \bigcup^{n}
_{-n+2} P(k)$ and by 1), $\displaystyle \bigcup^{n-1} _{-n+2} f^{k}(P) =
\displaystyle\bigcup^{n-1}_{-n+2} P(k)$. Then
$$ f^{n}(P) \cup \displaystyle \bigcup^{n-1}_{-n+2} f^{k} (P) = P (n) \cup \displaystyle
\bigcup^{n-1}_{-n+2} P(k)\ . $$
\noindent It then follows that $f^{n}(P)=P(n)$ in the complement of $\displaystyle
\bigcup^{n-1}_{-n+2} f^{k}(P) = \displaystyle \bigcup ^{n-1}_{-n+2} P(k)$. The
other part is proven similarly.
\item[3)] $\Psi_{n} (P(j))= P(j+1)$ for any $-n \leq j \leq n, \ 0 \leq
n \leq N$. For notice that $\Psi _{n} = \Psi_{|j|}$ in $\displaystyle
\bigcup^{|j|}_{-|j|}f^{k}(P) = \displaystyle\bigcup^{|j|}_{-|j|} P(k) \supset
P(j)$. Thus $\Psi_{n}(P(j)) = \Psi_{|j|} (P(j)) = P(j+1)$ from the
definition of $P(j)$. This reasoning is valid for $-n \leq j \leq n, \ 0
\leq n < N$. For $n = N$ what remains to be shown is that $\Psi_{N}
(P(N)) = P(N+1)$ and $\Psi_{N} (P(-N)) = P (-N+1)$ or equivalently $P(-N)
= \Psi_{N}^{-1} (P(-N+1))$. But these are just the definitions again.
\end{description}
We now proceed to prove (i) and (ii) for $N+1$. We start with (ii) $\displaystyle
\bigcup ^{N+1}_{-N+1}P(k) = \displaystyle \bigcup^{N+1}_{-N+1} f^{k}(P)$.
From c) in the beginning of the proof
$$ P(N+1) = \Psi_{N} (P(N)) = \left\{ \begin{array}{l}
\Psi_{N-1} (P(N)) \quad \mbox{\rm within } \displaystyle
\bigcup^{N-1}_{-N+1} f^{k}(P) \\
\qquad = \displaystyle \bigcup^{N-1}_{-N+1}P(k) \\
\psi_{N} (P(N)) \quad \mbox{\rm without } \displaystyle \bigcup^{N-1}_{-N+1}
f^{k}(P) \\
\qquad = \displaystyle \bigcup^{N-1}_{-N+1} P(k) \\
\end{array}
\right. $$
Thus,
\begin{eqnarray*}
\Psi_{N} (P(N)) &= &\Psi _{N} \left( P(N) \cap \displaystyle\bigcup^{N-1}_{-N+1} P
(k) \right) \cup \Psi_{N} \left(P (N)
\setminus \displaystyle\bigcup^{N-1}_{-N+1} P(k)\right) \\
& = &\left[ \Psi_{N} (P(N))
\cap \Psi_{N-1} \left( \displaystyle\bigcup^{N-1}_{-N+1}
P(k) \right) \right] \cup \\
& & \Psi_{N} \left(f^{N}(P) \setminus
\displaystyle\bigcup^{N-1}_{-N+1} f^{k}(P) \right) \\
&= &\left[ P (N+1) \cap \displaystyle\bigcup^{N}_{-N+2} P(k) \right] \cup \psi
_{N} \left(f^{N}(P) \setminus \displaystyle\bigcup^{N-1}_{-N+1} f^{k}(P) \right) \\
&= &\left[ P (N+1) \cap \displaystyle\bigcup^{N}_{-N+2} f^{k}(P) \right] \cup
\left[ \psi_{N} (f^{N} (P) ) \setminus \displaystyle\bigcup^{N}_{-N+2} f^{k} (P) \right]
\end{eqnarray*}
\noindent where we used 2) in the second equality, 3) in the third and b) from the
beginning in the fourth, not to mention the induction hypothesis here and
there. From this it follows that
\begin{eqnarray*}
\displaystyle\bigcup^{N+1}_{-N+1} P(k) &= &\Psi_{N} (P(N)) \cup \displaystyle\bigcup^{N}
_{-N+1} P(k) \\
&= & \Psi_{N} (P(N)) \cup \displaystyle\bigcup^{N}_{-N+1} f^{k}(P) \\
&= & \psi_{N} (f^{N} (P)) \cup \displaystyle\bigcup^{N}_{-N+1} f^{k}(P) \\
&= & \displaystyle\bigcup^{N+1}_{-N+1} f^{k} (P)
\end{eqnarray*}
\noindent where the last equality comes from Corollary~\ref{47} (i) with $n=N$.
We now prove (i) $\displaystyle\bigcup^{N}_{-N} P(k) = \displaystyle\bigcup^{N}_{-N}
f^{k}(P)$. From d) we have
$$ P(-N) = \Psi^{-1}_{N} (P(-N+1)) = \left\{ \begin{array}{l}
\Psi^{-1}_{N-1} (P(-N+1)) \ \mbox{\rm within } \\
\quad \displaystyle\bigcup^{N}_{-N+2}
f^{k}(P) = \displaystyle\bigcup^{N}_{-N+2} P(k) \\
\psi^{-1}_{N} (P(-N+1)) \ \mbox{\rm without } \\
\quad \displaystyle\bigcup^{N}_{-N+2}
f^{k} (P) = \displaystyle\bigcup^{N}_{-N+2} P(k)
\end{array} \right. $$
Thus
\begin{eqnarray*}
\Psi^{-1}_{N} (P(-N+1)) &= &\Psi^{-1}_{N} \left( P(-N+1) \cap \displaystyle\bigcup
^{N}_{-N+2} P(k) \right) \cup \\
& & \Psi^{-1}_{N}
\left(P (-N+1) \setminus \displaystyle\bigcup
^{N}_{-N+2} P(k) \right) \\
&= &\left[ \Psi_{N}^{-1}(P(-N+1)) \cap \Psi^{-1}_{N-1} \left(\displaystyle\bigcup
^{N}_{-N+2} P(k) \right) \right] \cup \\
& & \Psi^{-1}_{N}
\left( f^{-N+1} (P)
\setminus \displaystyle\bigcup ^{N}_{-N+2} f^{k}(P) \right) \\
&= &\left[ P (-N) \cap \displaystyle\bigcup^{N-1}_{-N+1} P(k) \right] \cup \\
& & \psi^{-1}_{N}
\left( f^{-N+1}(P) \setminus \displaystyle\bigcup^{N}_{-N+2} f^{k} (P)
\right)\\
&= &\left[ P(-N) \cap \displaystyle\bigcup^{N-1}_{-N+1} f^{k}(P) \right] \cup \\
& & \left[ \psi^{-1}_{N} ( f^{-N+1} (P) ) \setminus \displaystyle\bigcup
^{N-1}_{-N+1} f^{k} (P) \right]
\end{eqnarray*}
\noindent where we have used 2) in the second equality, 3) in the third, b) in the
fourth and the induction hypothesis.
Therefore
\begin{eqnarray*}
\displaystyle\bigcup^{N}_{-N} P(k) &= &\displaystyle\bigcup^{N}_{-N+1} P(k) \cup \Psi^{-1}
_{N} (P(-N+1)) \\
&= &\displaystyle\bigcup^{N} _{-N+1} f^{k}(P) \cup \Psi^{-1}_{N} (P(-N+1)) \\
&= &\displaystyle\bigcup^{N}_{-N+1} f^{k} (P) \cup \psi^{-1}_{N} (P(-N+1)) \\
&= &\displaystyle\bigcup^{N}_{-N} f^{k}(P)
\end{eqnarray*}
\noindent where the last equality comes from Corollary~\ref{47} (ii) with $n=N$.
This completes the proof. $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig19,height=3.25in}
\end{center}
\caption{$\overline{P}(k), k=- 1, 0,1,2$, for a pruning collection
containing only one ($c,e$)-disk $D$.}
\label{f19}
\end{figure}
\begin{cor}\label{49}
For $n \geq 1$
\begin{description}
\item[(i)] $\displaystyle\bigcup^{n}_{-n+1} f^{k}(P) = \displaystyle\bigcup^{n}_{-n+1}P(k)$;
\item[(ii)] $f^{n}(P) = P(n)$ and $f^{-n+1}(P) = P(-n+1)$ in the complement
of $$\displaystyle\bigcup ^{n-1}_{-n+2} f^{k}(P) = \displaystyle\bigcup^{n-1}_{-n+2} P(k).$$
\end{description}
\end{cor}
\noindent{\sc Proof:} The proof is the same as that given for 1) and 2) in the
proof of Proposition~\ref{48}. $\Box$
\bigskip
\begin{cor}\label{50}
If $\Psi$ is as we defined above, $P(k) = \Psi(P(k-1))$ for every $k \in
{\Bbb{Z}}$, that is $\{P(k); \ k \in {\Bbb{Z}} \}$ is an orbit under $\Psi$.
\end{cor}
\noindent{\sc Proof:} Just notice that $\Psi = \Psi_{n}$ in
$\displaystyle\bigcup^{n}_{-n} f^{k}(P)$ and argue like in the proof of 3) in
Proposition~\ref{48}. $\Box$
\bigskip
We are now going to define new closed disks $A_{i}, \ i \in \underline{L}$ whose
union is still the closed pruning front $\overline{P}$. We will see that the
cross-cut $\alpha_{i}(0) \subset D_{i}$ is also a cross-cut in $A_i$ and
divides it into two disks $A^{c}_{i}$ and $A^{e}_{i}$ (see figure
20.) These will have
some disjoint/nested properties we will make precise later and will be
useful in the proof of the theorem.
\bigskip
\begin{defn}
Let $A_{L} = A_{L}(0)= D_{L}$ and, for $1 \leq i \leq L$, set $A_{i} =
A_{i}(0) = \zeta^{-1} _{L,0} \circ \ldots \circ
\zeta^{-1}_{i+1,0}(D_{i})$. Then define inductively for $n \geq 1$, $ \
A_{i} (n) = \Psi (A_{i} (n-1))$ and $A_{i}(-n) = \Psi^{-1} (A_{i}(-n+1))$.
\end{defn}
\begin{figure}
\begin{center}~
\psfig{file=Fig20,height=2.75in}
\end{center}
\caption{The $D_{i}$'s and the $A_i$'s}
\label{f20}
\end{figure}
\begin{prop}\label{51}
For $l \in \underline{L}, \ \displaystyle \bigcup^{L}_{i=l} A_{i} = \displaystyle\bigcup^{L}
_{i=l} D_{i}$. In particular $\displaystyle\bigcup^{L} _{i=1} A_{i} = \overline{P}$.
\end{prop}
\noindent{\sc Proof:} By definition $A_{L} = D_{L}$. Assume we have shown
that $\displaystyle \bigcup^{L} _{i=l+1} A_{i} = \displaystyle \bigcup ^{L} _{i=l+1}
D_{i}$. Then
\begin{eqnarray*}
\displaystyle\bigcup^{L} _{i=l} A_{i} & = & \displaystyle \bigcup^{L} _{i=l+1} A_{i} \cup
A_{l} \\
& = & \displaystyle\bigcup^{L}_{i=l+1} D_{i} \cup \zeta^{-1} _{L,0} \circ \ldots
\circ \zeta^{-1} _{l+1,0} (D_{l}) \\
& = & \displaystyle\bigcup^{L}_{i=l} D_{i}
\end{eqnarray*}
\noindent where the last equality holds because supp $\zeta^{-1}_{L,0} \circ
\ldots \circ \zeta^{-1}_{l+1,0} \subset \displaystyle\bigcup^{L}_{i=l+1}D_{i}. \ \
\Box$
\bigskip
\begin{cor} \label{53}
For every $n \in {\Bbb{Z}}, \ \overline{P}(n) = \displaystyle \bigcup^{L} _{i=1} A_{i}(n). \
\ \Box$
\end{cor}
\begin{prop} \label{52}
For $n \geq 1 $ and $i \in \underline{L}$,
\begin{description}
\item[(i)] $\zeta_{n} \left( \displaystyle\bigcup _{j \leq i} f^{n} (D_{j})
\right) = \displaystyle\bigcup_{j \leq i} f^{n} (D_{j})$;
\item[(ii)] $ \zeta_{-n} \left( \displaystyle \bigcup_{j \geq i} f^{-n}(D_{j})
\right) = \displaystyle \bigcup _{j \geq i} f^{-n} (D_{j})$.
\end{description}
\end{prop}
\noindent{\sc Proof:} If $k > i \geq j$ then $D_{k} \not\prec D_j$ and we
have seen that for $n \geq 1$, $I^{c} ( \gamma_{k} (n) )$ is either
contained in $f^{n}(I_{j})$ or it is disjoint from $f^{n}(D_{j})$. If
$I^{c} (\gamma_{k} (n))$ is contained in $f^{n}(I_{j})$ it is because
$f^{n}(D_{k}), \ f^{n}(D_{j} ) | _{f^{n}(C_{k}) }$ and therefore
$f(D_{k}), \ f(D_{j}) |_{f(C_{k})}$. By Proposition~\ref{42}, $N(k)
=1$ and it follows that $k_{k,n} \equiv$ identity. If $I^{c} (\gamma_{k}
(n))$ is disjoint from $f^{n}(D_{j})$ so is ${\mathcal{V}}_{k}(n)$, since
${\mathcal{V}}_{k}(n) \subset I^{c}( \gamma_{k} (n) )$. Either way we see that
(supp $\zeta_{k,n} ) \cap f^{n}(D_{j} ) = \emptyset$. Thus
\begin{eqnarray*}
\zeta_{n} \left( \displaystyle \bigcup _{j \leq i} f^{n} (D_{j} ) \right) & = &
\zeta _{1,n} \circ \ldots \circ \zeta_{L,n} \left( \displaystyle \bigcup _{j \leq
i} f^{n} (D_{j} ) \right) \\
& = & \zeta_{1,n} \circ \ldots \circ \zeta_{i,n} \left( \displaystyle\bigcup _{j
\leq i} f^{n} (D_{j}) \right) \\
& = & \displaystyle \bigcup _{j \leq i} f^{n} (D_{j})
\end{eqnarray*}
\noindent where the last equality holds because supp $\zeta_{1,n} \circ \ldots
\circ \zeta_{i,n} \subset \displaystyle \bigcup _{j \leq i} f^{n} (D_{j})$. This
proves (i). (ii) is proven analogously. $\Box$
\bigskip
The next proposition and corollary are analogous to
Proposition~\ref{46} and Corollary~\ref{47}. The proofs use
Proposition~\ref{52} but are otherwise completely similar. We omit them.
\begin{prop} \label{54}
For $n \geq 1$ and $i \in \underline{L}$,
\begin{description}
\item[(i)] $\rho_{n} \left( \displaystyle \bigcup _{j \leq i} f^{n} (D_{j}) \right)
\cup f^{-n} (P) = \displaystyle \bigcup _{j \leq i} f^{n} (D_{j}) \cup f^{-n} (P) $;
\item[(ii)] $ f^{n}(P) \cup \rho^{-1}_{n} \left( \displaystyle \bigcup _{j\geq i}
f^{-n} (D_{j} ) \right) = f^{n} (P) \cup \displaystyle \bigcup _{j \geq i} f^{-n}
(D_{j})$. $\Box$
\end{description}
\end{prop}
\begin{cor} \label{55}
For $n \geq 1$ and $i \in \underline{L}$,
\begin{description}
\item[(i)] $\psi _{n} \left( \displaystyle \bigcup_{j \leq i} f^{n} (D_{j} )
\right) \cup f^{-n+1} (P) = \displaystyle \bigcup _{j \leq i} f^{n+1} (D_{j} )
\cup f^{-n+1} (P)$;
\item[(ii)] $f^{n}(P) \cup \psi^{-1}_{n} \left( \displaystyle \bigcup _{j \geq i}
f^{-n+1} (D_{j} ) \right) = f^{n} (P) \cup \displaystyle \bigcup _{j \geq i}
f^{-n} (D_{j} )$. $\Box$
\end{description}
\end{cor}
We can now state and prove a proposition which sharpens
Proposition~\ref{48} somewhat. Although the proof goes along the same
lines as that of Proposition~\ref{48} we present it for completeness.
\bigskip
\begin{prop} \label{56}
For $n \geq 1$ and $i \in \underline{L}$ we have
\begin{description}
\item[(i)] $\displaystyle \bigcup_{j \leq i} f^{n} (D_{j}) \cup \displaystyle
\bigcup^{n-1} _{-n+2} f^{k} (P) = \displaystyle \bigcup _{j \leq i} A_{j} (n) \cup
\displaystyle \bigcup ^{n-1} _{ -n+2} P(k)$;
\item[(ii)] $ \displaystyle \bigcup_{j \geq i} f^{-n+1} (D_{j}) \cup \displaystyle \bigcup
^{n-1}_{-n+2} f^{k} (P) = \displaystyle \bigcup _{j \geq i} A_{j} (-n+1) \cup \displaystyle
\bigcup^{n-1}_{-n+2} P (k).$
\end{description}
\end{prop}
\noindent {\sc Proof:}
The proof is by induction on $n$. Notice that for $n=1$, (ii) above is
just Corollary~\ref{53}. In order to prove (i) with $n=1$, observe
that, since $A_{i} = A_{i}(0) \subset \overline{P}, \ A_{i}(1) =
\Psi(A_{i}(0)) = \psi_{0} (A_{i}(0)) = f \circ \rho_{0} (A _{i}(0))$. Thus
\begin{eqnarray*}
A_{i}(1) & = & f \circ \rho_{0} ( \zeta^{-1} _{L,0} \circ \ldots \circ
\zeta^{-1}_{i+1,0} (D_{i}) ) \\
& = & f \circ ( \zeta_{1,0} \circ \ldots \circ \zeta_{L,0} ) \circ (
\zeta^{-1}_{L,0} \circ \ldots \circ \zeta ^{-1} _{i+1,0} (D_{i} )) \\
& = & f \circ ( \zeta_{1,0} \circ \ldots \circ \zeta_{i,0}) (D_{i})
\end{eqnarray*}
Reasoning as in the proof of Proposition~\ref{51}, it is
easy to prove that $\displaystyle \bigcup_{j \leq i} A_{j} (1) = \displaystyle \bigcup_{j
\leq i} f(D_{j} )$ which is (i) for $n=1$.
Assume we have shown that
$$\displaystyle \bigcup_{j \leq i} A_{j} (n) \cup \displaystyle
\bigcup ^{n-1}_{-n+2} P(k) = \displaystyle \bigcup _{j \leq i} f^{n} (D_{j}) \cup
\displaystyle \bigcup^{n-1}_{-n+2} f^{k} (P) . $$
\noindent Then, since we know that $\displaystyle
\bigcup^{n-1}_{-n+1} P(k) = \displaystyle \bigcup^{n-1}_{-n+1} f^{k} (P) $ and
$\displaystyle \bigcup^{n}_{-n+1} P(k) = \displaystyle \bigcup^{n}_{-n+1}$ $f^{k}(P)$ by
Proposition~\ref{48}, just like in the proof of that proposition we
can see that $$\displaystyle \bigcup_{j \leq i} A_{j}(n) \setminus \displaystyle
\bigcup^{n-1}_{-n+1} P(k) = \displaystyle \bigcup_{j \leq i} f^{n}(D_{j}) \setminus \displaystyle
\bigcup ^{n-1}_{-n+1}f^{k}(P).$$ Using all this information we have
\begin{eqnarray*}
\displaystyle\bigcup_{j \leq i} A_{j} (n+1) \cup \displaystyle \bigcup ^{n} _{-n+1} P(k) &
= & \left[ \displaystyle \bigcup_{j \leq i} A_{j} (n+1) \setminus \displaystyle \bigcup^{n}
_{-n+2} P(k) \right] \! \cup \!\displaystyle \bigcup^{n} _{-n+1}\! P(k) \\
& = & \left[ \Psi \left( \displaystyle \bigcup _{j \leq i} A_{j} (n) \setminus \displaystyle
\bigcup^{n-1}_{-n+1} P(k) \right) \right] \cup \displaystyle \bigcup ^{n} _{-n+1}
P(k) \\
& = & \left[ \Psi \left( \displaystyle \bigcup_{j \leq i} f^{n} (D_{j} ) \setminus \displaystyle
\bigcup^{n-1} _{-n+1} f^{k} (P) \right) \right] \cup \displaystyle\bigcup
^{n}_{-n+1} f^{k} (P) \\
& = & \left[ \psi_{n} \left( \displaystyle \bigcup _{j \leq i} f^{n} (D_{j})
\setminus \displaystyle \bigcup^{n-1} _{-n+1} f^{k} (P) \right) \right] \cup \displaystyle\bigcup
^{n} _{-n+1} f^{k} (P) \\
& = & \left[ \psi_{n} \left( \displaystyle \bigcup _{j \leq i} f^{n} (D_{j})
\right) \setminus \displaystyle \bigcup^{n} _{-n+2} f^{k} (P) \right] \cup \displaystyle\bigcup
^{n} _{-n+1} f^{k} (P) \\
& = & \psi _{n} \left( \displaystyle \bigcup_{j\leq i} f^{n} (D_{j}) \right)\! \cup \!
\displaystyle \bigcup ^{n} _{-n+1} f^{k} (P) \\
& = & \displaystyle \bigcup_{j \leq i} f^{n+1} (D_{j} ) \cup \displaystyle \bigcup ^{n}
_{-n+1} f^{k}(P)
\end{eqnarray*}
\noindent where the last equality holds by Corollary~\ref{55}, (i).
Statement (ii) is proven analogously and we leave it to the interested
reader. $\Box$
\bigskip
\begin{cor} \label{57}
For $n \geq 1$ and $i \in \underline{L}$ we have:
\begin{description}
\item[(i)] $A_{i} (n) = f^{n} (D_{i})$ in the complement of
$$\displaystyle
\bigcup _{j < i} f^{n}(D_{j}) \cup \displaystyle \bigcup^{n-1}_{-n+2} f^{k}(P) =
\displaystyle \bigcup _{j < i} A_{j} (n) \cup \displaystyle \bigcup^{n-1}_{-n+2} P(k) ; $$
\item[(ii)] $A_{i} (-n+1) = f^{-n +1} (D_{i})$ in the complement of
$$\displaystyle
\bigcup _{j > i} f^{-n+1} (D_{j}) \cup \displaystyle \bigcup ^{n-1} _{-n+2} f^{k}
(P) = \displaystyle \bigcup_{j >i} A_{j} (-n+1) \cup \displaystyle \bigcup^{n-1}_{-n+2}
P (k) . \ \Box$$
\end{description}
\end{cor}
\begin{prop} \label{58}
For each $i \in \underline{L}, \ \alpha_{i}(0)$ is a cross-cut in $A_i$ and divides
$A_i$ into two closed disks $A^{c}_{i}$ and $A^{e}_{i}$ bounded by
$\rho^{-1}_{0} ( C_{i}) \cup \alpha_{i} (0)$ and $\alpha_{i} (0) \cup E_{i}$,
respectively.
\end{prop}
\noindent {\sc Proof:} In the proof of Proposition~\ref{43} (iii),
we have shown that for each $i \in \underline{L}$, $\rho_{0} (\alpha_{i}(0)) =
\zeta_{0} (\alpha_{i}(0) ) = \zeta_{1,0} \circ \ldots \circ \zeta_{i, 0}
(\alpha_{i}(0))$. Thus we see that
\begin{eqnarray*}
\alpha_{i}(0) & = &\rho^{-1}_{0} ( \zeta_{1,0} \circ \dots \circ \zeta_{i,0}
(\alpha_{i}(0))) \\
&= & \zeta^{-1}_{L,0} \circ \ldots \circ \zeta^{-1}_{1,0} \circ \zeta_{1,0}
\circ \dots \circ \zeta_{i,0}(\alpha_{i}(0)) \\
&= & \zeta^{-1}_{L,0} \circ \dots \circ \zeta^{-1}_{i+1,0} (\alpha_{i}(0))
\end{eqnarray*}
\noindent so that $\alpha_{i}(0)$ is left fixed by $\zeta^{-1}_{L,0} \circ \dots
\circ \zeta^{-1}_{i+1,0}$. Since $\alpha_{i}(0)$ is a cross-cut in $D_{i}$ and
$A_{i} = \zeta^{-1}_{L,0} \circ \ldots \circ \zeta^{-1}_{i+1,0} (D_{i}), \
\alpha_{i}(0)$ is also a cross-cut in $A_i$.
We now show that $A_i$ is bounded by $\rho^{-1}_{0} (C_{i}) \cup E_{i}$ which
will complete the proof of the proposition. Notice that if $j \leq i$,
$C_{i} \cap
I_{j} = \emptyset$ so that $\rho^{-1}_{0} (C_{i}) = \zeta^{-1}_{L,0} \circ \ldots
\circ \zeta^{-1}_{i+1,0} (C_{i} )$. On the other hand, for $j \geq i$,
$I_{j}
\cap E_{i} = \emptyset$ so that $\zeta ^{-1}_{L,0} \circ \ldots \circ \zeta^{-1}
_{i+1,0}(E_{i}) = E_{i}$. This shows that $\zeta^{-1}_{L,0} \circ \ldots
\circ
\zeta^{-1}_{i+1,0} (C_{i} \cup E_{i} ) = \rho^{-1}_{0} (C_{i}) \cup E_i$, as
we wanted. $\Box$
\bigskip
\begin{prop} \label{59}
For each $i \in \underline{L}$, (i) and (ii) hold:
\begin{description}
\item[(i)] for $n \geq 1, \ A_{i}(n)$ is bounded by the Jordan curve
$$f^{n}
(C_{i}) \cup \Psi^{n} (E_{i} );$$
\item[(ii)] for $m \leq 0, \ A_{i} (m)$ is bounded by the Jordan curve
$$\Psi
^{m} (\rho^{-1}_{0} (C_{i})) \cup f^{m}(E_{i}) = \Psi^{m-1} (f(C_{i}))
\cup f^ {m}(E_{i}).$$
\end{description}
\end{prop}
\noindent{\sc Proof:}
In the proof of Proposition~\ref{58} we saw that $A_{i}(0)$ is bounded by
$\rho_{0}^{-1}(C_{i}) \cup E_{i} = \psi^{-1}_{0} (f(C_{i} )) \cup E_{i} =
\Psi ^{-1} (f (C_{i}) ) \cup E_{i}$.
Therefore $A_{i}(1) = \Psi (A_{i}(0))$ is bounded by $\Psi ( \Psi^{-1} (f(C_{i}
)) \cup E_{i} ) = f(C_{i}) \cup \Psi (E_{i})$, which proves (i) and (ii) for
$n=1$ and $m=0$ respectively. The general result is now proved by induction
using Corollary~\ref{67} to guarantee that $\Psi^{n}(f(C_{i}) ) = f^{n+1}
(C_{i})$ for $n \geq 0$ and that $\Psi^{m}(E_{i}) = f^{m} (E_{i})$ for $m
\leq 0. \ \ \Box$
\bigskip
\begin{defn}
Let $A^{c}_{i}(0) = A ^{c}_{i}$ and $A^{e}_{i}(0) = A^{e}_{i} $ as in
Proposition~\ref{58} and define inductively for $n \geq 1$, $A^{c(e)}_{i} (n)
= \Psi (A^{c(e)}_{i} (n-1))$ and $A^{c(e)}_{i} (-n) = \Psi^{-1} (A^{c(e)}_{i}
(-n+1))$ (see figure 21.)
\end{defn}
\begin{figure}
\begin{center}~
\psfig{file=Fig21,height=2.25in}
\end{center}
\caption{$A^{c}_{i}(0)$ and $A^{e}_{i}(0)$ for $i= 1,2$.}
\label{f21}
\end{figure}
\begin{prop} \label{60}
With the notation just introduced we have:
\begin{description}
\item[(i)] $A^{c}_{i}(n) = D^{c} (\alpha_{i}(n))$ for $1 \leq n \leq N(i)$;
\item[(ii)] $ A^{e}_{i}(m) = D^{e}(\alpha_{i}(m))$ for $M(i) \leq m \leq 0$.
\end{description}
\end{prop}
\noindent{\sc Proof:}
That $A^{e}_{i} = D^{e}( \alpha_{i}( 0))$ is a direct consequence of Proposition
~\ref{58}, since $A^{e}_{i}$ is bounded by $\alpha_{i}(0) \cup E_{i}$ which is the
same curve that bounds $D^{e} (\alpha_{i}(0))$. On the other hand, $A^{c}_{i}(0)$
is bounded by $\rho^{-1}_{0}(C_{i}) \cup \alpha_{i} (0)$ and it follows that
$A^{c}_{i} (1) = \Psi (A^{c}_{i} (0))$ is bounded by $$\Psi (\rho^{-1}_{0}
(C_{i}) \cup \alpha_{i}(0)) = \psi_{0} ( \rho^{-1}_{0} (C_{i}) \cup \alpha_{i}(0))
= f(C_{i}) \cup \alpha_{i} (1).$$ This shows that $A^{c}_{i}(1) = D^{c}(\alpha_{i}
(1))$.
Assume we have shown that $A^{c}_{i}(n) = D^{c}(\alpha_{i}(n))$ for $n < N(i)$.
Then, using Proposition~\ref{45} (i), we see that
\begin{eqnarray*}
A^{c}_{i} (n+1) & = & \Psi (A^{c}_{i}(n)) \\
&= & f \circ \rho (D^{c} (\alpha_{i} (n))) \\
&= &f (D^{c} (f^{-1} (\alpha_{i} (n+1)))) \\
&= &D^{c} (\alpha_{i} (n+1))
\end{eqnarray*}
\noindent This proves (i). (ii) is proven similarly. $\Box$
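For the record, the inductive step in (ii) runs downwards: assuming
$A^{e}_{i}(m) = D^{e}(\alpha_{i}(m))$ with $M(i) < m \leq 0$,
Proposition~\ref{45} (ii) yields
$$ \Psi (D^{e} (\alpha_{i}(m-1))) = f ( \rho (D^{e} (\alpha_{i}(m-1))))
= f (D^{e} (f^{-1} (\alpha_{i}(m)))) = D^{e} (\alpha_{i}(m)) $$
so that $A^{e}_{i}(m-1) = \Psi^{-1} (A^{e}_{i}(m)) = D^{e}(\alpha_{i}(m-1))$.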
\bigskip
Let $n(i), N(i), m(i)$ and $M(i)$ be as we defined them in the beginning of
this section. By Proposition~\ref{66}
if $n(i), m(i)$ are finite then $n(i) = 2N(i) - \delta, m(i) = 2M(i) - \delta '
$, where $\delta, \delta ' = 0 $ or $1$. Moreover, $$f^{N(i)} (D_{i}),
\ f^{-N(i) + \delta +1} (D_{j}) | _{f^{N(i)} (C_{i}) }$$ and
$$f^{M(i)}(D_{i}), \ f^{-M(i) +
\delta '} (D_{l} ) |_{f^{M(i)} (E_{i}) }$$
for some $j, l \in \underline{L}$.
Recall
also that if $D_{1}, D_{2}|_{L}$ and $D_{1} \backslash L \subset I_{2}$ we write
$D_{1} \subset D_{2} |_{L}$. We can now state
\begin{prop} \label{61}
With the above notation for each $i \in \underline{L}$, (i) and (ii) hold:
\begin{description}
\item[(i)] If $n(i) < \infty$ and $j \in \underline{L}$ is largest such that
$$f^{N(i)}
(D_{i}), f^{-N(i)+ \delta+1 } (D_{j} ) |_{f^{N(i)} (C_{i}) }$$
then
$$A^{c}_{i}
(N(i)) \subset A_{j} (-N(i) + \delta +1) | _{f^{N(i)} (C_{i})};$$
\item[(ii)] if $m(i) > - \infty$ and $j \in \underline{L}$ is smallest such that
$$f^{M(i)
} (D_{i}), f^{-M(i)+ \delta '} (D_{j}) |_{f^{M(i)} (E_{i}) }$$
then
$$A^{e}_{i}
( M(i)) \subset A_{j} (-M(i) + \delta ' ) | _{f ^{M(i)} (E_{i}) }. $$
\end{description}
\end{prop}
\noindent {\sc Proof:}
By Proposition~\ref{60} we know that $A^{c}_{i}(N(i)) = D^{c}( \alpha_{i}
(N(i)))$. We have to show that $f^{N(i)} (C_{i}) \subset \partial A _{j} ( -N(i) +
\delta +1)$ and that
\begin{eqnarray*}
A^{c}_{i} (N(i) ) \setminus f^{N(i)} (C_{i}) &= &D^{c}
(\alpha_{i} (N(i))) \setminus f^{N(i)} (C_{i}) \\
& = &I^{c} (\alpha_{i} (N(i) ))
\cup \alpha_{i} (N(i)) \\
& \subset &I (A_{j} (-N(i) + \delta +1))
\end{eqnarray*}
where the last set is the
interior of $A_{j} (-N(i) + \delta + 1 )$ and the second equality is
just the definition and only the last inclusion needs proof (see figure
22.)
Let us first show that
$$f^{N(i)} (C_{i}) \subset \partial A_{j} ( -N(i) +
\delta +1 ). $$
By assumption
$$f^{N(i)} (D_{i}), f^{-N(i) + \delta +1 } (D_{j}) |
_{f^{N(i)} (C_{i})}$$
which implies that
$$f^{N(i)} (C_{i}) \subset f^{-N(i) + \delta
+ 1} (C_{j}).$$
If $n(i) =1$, then $N(i)=1$ and $\delta=1$, so that by
Proposition~
\ref{59} and the above we have $f(C_{i}) \subset f (C_{j}) \subset \partial A_{j}
(1)$. If $n(i) > 1$, applying $f^{N(i) - \delta -1}$ to the inclusion
$f^{N(i)} (C_{i}) \subset f^{-N(i) + \delta +1}(C_{j})$ we get
$f^{n(i)-1} (C_{i}) \subset C_{j}$. From
Corollary~\ref{67} we know that $f^{n} (C_{i}) \cap$ supp $R = \emptyset$ for
$n \geq 1$. In particular, $f^{n(i) -1}(C_{i}) \cap$ supp $\rho_{0} =
\emptyset$, and
it follows that $f^{n(i)-1} (C_{i} ) \subset \rho_{0}^{-1} (C_{j}) \subset \partial A
_{j} (0)$, this last inclusion coming from Proposition~\ref{59}. Also, if
$k < n(i)$ then $\Psi^{-k} (f^{n(i)} (C_{i})) = f^{-k} (f^{n(i)}(C_{i}) ) =
f^{n(i)-k} (C_{i})$, so that
\begin{eqnarray*}
\Psi^{-N(i) + \delta + 1} (f^{n(i) -1} (C_{i})) & = &f^{N(i)} (C_{i}) \\
& \subset
&\Psi^{-N(i) +\delta + 1} ( \rho_{0}^{-1} ( C_{j}) )\\
&\subset & \partial A _{j} ( -N(i) + \delta +1 )
\end{eqnarray*}
\noindent as we wanted.
In order to see that $I^{c} (\alpha_{i} ( N(i))) \cup \alpha_{i}(N(i)) \subset I
(A_{j}
( -N(i) + \delta +1))$ first notice that, since $ \{ \alpha_{l} (N(i) )
\}^{L}_{l=1}$ is
a ($\varepsilon_{N(i)},c$)-collection compatible with $$\{ (f^{k} (D_{j} ),
\alpha_{j} (k)
); \ j \in \underline{L}, \ -N(i) + 1 \leq k \leq N(i) -1 \}$$
and that
$$ f^{N(i)} (D_{i} ),
f^{-N(i) + \delta + 1} (D_{j}) |_{f^{N(i)} (C_{i})}$$
then $$ I^{c} (\alpha_{i}
(N(i))) \cup \alpha_{i} (N(i)) \subset f^{-N(i) + \delta +1} (I_{j}).$$
We will now show that
$$[I^{c} (\alpha_{i}(N(i))) \cup \alpha_{i} (N(i)) ] \cap \left[ \bigcup_{l > j}
f^{-N
(i) + \delta +1} (D_{l} ) \cup \bigcup^{N(i) - \delta -1} _{-N(i) + \delta +2}
f^{k} (\overline{P} ) \right] = \emptyset .$$
\noindent This is so because by assumption $f^{N(i)} (D_{i}), f^{-N(i) + \delta + 1}
(D_{l}) \not|_{f^{N(i)} (C_{i})}$ for $l >j$ and from Proposition~\ref{66}
(ii), $f^{N(i)} (D_{i}), f^{k} (D_{j} ) \not|_{f^{N(i)} (C_{i}) }$ for any
$j \in\underline{L}$ and $-N(i) + \delta +2 \leq k \leq N(i) -1$.
This together with the aforementioned compatibility of $\{ \alpha_{l} (N(i)) \} ^{L}
_{l=1}$ is exactly what we need in order to verify the equation above. By
Corollary~\ref{57} (ii),
$$A_{j} (-N(i) +\delta +1) = f^{-N(i) + \delta +1} (D_{j})$$
in the complement of
$$ \bigcup_{l > j} f^{-N(i) + \delta +1} (D_{l}) \cup \bigcup^{N(i) -
\delta -1} _{-N(i) + \delta +2 } f^{k} (P) $$
\noindent which shows that $$I^{c} (\alpha_{i} (N(i))) \cup \alpha_{i}(N(i)) \subset
A_{j} (-N (i) + \delta + 1).$$
We leave it for the reader to show that
it is possible
to put $I(A _{j} ( -N(i) + \delta + 1))$ in place of $A_{j}( -N(i) + \delta +1 )
$ in the inclusion above. $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig22,height=2.5in}
\end{center}
\caption{A possible configuration for $f^{N(i)} (D_{i}), \
f^{-N(i) + \delta + 1} (D_{j})$ and $A_{i} (N(i))$, \ $A_{j}(-N(i) +
\delta + 1 )$ and $A^{c}_{i} (N(i))$.}
\label{f22}
\end{figure}
\begin{prop} \label{62}
Under the hypotheses of Proposition~\ref{61} (i) and (ii) respectively, (i$'$)
and (ii$'$) below hold:
\begin{description}
\item[(i$'$) ] $A^{c}_{i}(N(i)) \subset A^{c}_{j} ( -N(i) + \delta + 1) | _{f^
{N(i)} (C_{i} )}$;
\item[(ii$'$) ] $A^{e}_{i} (M(i)) \subset A_{j} (-M(i) + \delta ') |
_{f^{M(i)} (E_{i}) }$.
\end{description}
\end{prop}
\noindent{\sc Proof:}
By Proposition~\ref{61}, $A^{c}_{i} (N(i) ) \subset A_{j} ( -N(i) +
\delta +1)
|_{f^{N(i)} (C_{i} ) }$. Therefore all we need to prove is that
$$[I^{c} ( \alpha
_{i} (N(i) )) \cup \alpha_{i} (N(i) ) ] \cap A^{e} _{j} ( -N(i) + \delta +1 )
= \emptyset.$$
There are two cases to be considered: $M(j) \leq -N(i) +\delta +1$ and $M(j)
> -N(i) + \delta +1$. If $M(j) \leq -N(i) + \delta + 1$, by Proposition~\ref
{60},
$$A^{e}_{j} (-N(i) + \delta +1 ) = D^{e} ( \alpha_{j} ( -N(i) + \delta
+1 ))$$
and, since $\{ \alpha_{l} (N(i)) \}^{L}_{l=1}$ is a ($\varepsilon_{N(i)},c$)-collection
compatible with
$$\{ (f^{k} (D_{j}), \alpha_{j} (k) ); j \in \underline{L}, \ -N(i)+1
\leq k \leq N(i) -1 \},$$
then
$$[I^{c} (\alpha_{i}(N(i))) \cup \alpha_{i} (N(i)) ] \subset
I^{c} ( \alpha_{j} (-N(i) + \delta + 1 ))$$
so that
$$[ I^{c} (\alpha_{i} (N(i) )) \cup
\alpha_{i} (N(i)) ] \cap D^{e} ( \alpha_{j} (-N(i) + \delta + 1 )) = \emptyset,$$
as we wanted.
If $M(j) > -N(i) + \delta +1$, there exists $l \in \underline{L}$ such that
$$f^{M(j)} (D_{j}), f^{-M(j) + \delta '}(D_{l}) | _ {f^{M(j)} (E_{j}) }$$
where $m(j) = 2 M(j)
+ \delta '$, and, assuming $l$ is the smallest such, by Proposition~\ref
{61} (ii), we can conclude that $A^{e}_{j}(M(j)) \subset A_{l} (-M(j) + \delta
')$. Therefore
\begin{eqnarray*}
A^{e}_{j} (-N(i) + \delta +1) & = & \Psi ^{-M(j) -N(i) + \delta +1 }
(A^{e}_{j} (M(j))) \\
& \subset & \Psi^{-M(j) -N(i) + \delta +1}
(A_{l} ( -M(j) + \delta ' ) ) \\
& = & A_{l} (-m(j) -N(i) + \delta +1 ).
\end{eqnarray*}
From $M(j) \geq -N(i) + \delta +2$ we have
\begin{eqnarray*}
-m(j) -N(i) + \delta + 1 & = & -2 M(j) + 2 \delta ' -N(i) + \delta + 1 \\
& \leq & 2N(i) - 2 \delta -4 +2 \delta ' -N(i) + \delta + 1 \\
& = & N(i) - \delta -3 -2 \delta ' \\
& \leq & N(i) - \delta -1
\end{eqnarray*}
\noindent If $m(j) < 0$, then $-m(j) -N(i) + \delta +1 \geq -N(i) + \delta +2$,
so that
$$A_{l} (-m(j) -N(i) + \delta +1 ) \subset \displaystyle\bigcup^{N(i) -
\delta -1}
_{-N(i) + \delta +2} \overline{P} (k).$$
If $m(j) = 0$, then $D_{j},
D_{l}|_{E_{j}}$,
which implies that $D_{l} \succ D_{j}$ and therefore that $l > j$. With this
we have shown that
\begin{eqnarray*}
A_{l} (-m(j) -N(i) + \delta + 1) & \subset & \bigcup _{l > j} A_{l}
(-N(i) + \delta + 1 ) \cup \bigcup^{N(i) - \delta -1} _{-N(i) + \delta +2}
\overline{P} (k) \\
& = & \bigcup_{l> j} f^{-N(i) + \delta +1} (D_{l} ) \cup \bigcup ^{N(i)-
\delta -1} _{-N(i) + \delta + 2} f^{k} (\overline{P})
\end{eqnarray*}
\noindent where this last equality is a consequence of Proposition~\ref{56} (ii).
But, from the proof of Proposition~\ref{61}, we have seen that $I^{c} (\alpha_{i}
(N(i))) \cup \alpha_{i} (N(i))$ does not intersect the set after the equal
sign just above. This finishes the proof. $\Box$
\bigskip
\begin{cor} \label{63}
With the same notation as above, for every $i \in \underline{L}$ the following
holds:
\begin{description}
\item[(i) ] if $n(i) < \infty$, then for every $j \in \underline{L}$ such that
$$f^
{N(i)} (D_{i}), f^{-N(i) + \delta +1} (D_{j})| _{f^{N(i)} (C_{i}) }$$
we have
$$A^{c}_{i} (N(i)) \subset A^{c}_{j}(-N(i) + \delta + 1 ) | _ {f^{N(i) }
(C_{i}) };$$
\item [(ii) ] if $m (i) >- \infty$, then for every $j \in \underline{L}$ such that
$$f^
{M(i)} (D_{i} ), \ f^{-M(i) + \delta '} ( D_{j}) | _
{f^{M(i)} (E_{i}) }$$
we have
$$ A^{e}_{i} (M (i) ) \subset A^{e}_{j} ( -M (i) + \delta ') | _{f^{M(i)}
(E_{i})}.$$
\end{description}
\end{cor}
\noindent {\sc Proof:}
We have shown that if $j \in \underline{L}$ is largest such that
$$f^{n(i)}(D_{i} ) ,
f(D_{j}) | _ {f^{n(i)} (C_{i} ) }$$
(which is equivalent to the condition in (i)
above) then the desired inclusion holds. Let $l \in \underline{L}$ be such that
$f^{n(i)} (D_{i} ), f(D_{l} ) | _{f^{n(i) } (C_{i}) }$, $l \neq j$ (and thus
$l < j$.) By Proposition~\ref{13}, $f(D_{j}), f(D_{l}) | _ {f^{n(i) }
(C_{i} ) }
$ and since $l < j$, we must have $f(D_{l}) \prec f(D_{j})$. Since $\{
\alpha_{i}
(1) \}_{i=1}^{L}$ is a ($\varepsilon,c$)-collection, it follows that $[ I^{c}
( \alpha
_{j}(1) ) \cup \alpha_{j} (1) ] \subset I^{c} ( \alpha _{l} (1))$ and by
Proposition
~\ref{60} this is equivalent to $A^{c}_{j}(1) \subset A^{c}_{l}(1) | _
{f(C_{j}
)} $ (see figure 23.) Taking the $\Psi^{-N(i) + \delta}$-image of this
latter inclusion, we get
$$ A ^{c} _{j} (-N(i) + \delta + 1) \subset A^{c}_{l} (-N(i) + \delta +1 )
| _{ \Psi ^{-N(i) + \delta } (f (C_{j}) ) }.$$
Notice that $\Psi^{-N(i) + \delta} (f (C_{j}) ) = \Psi^{-N(i) + \delta +1} (
\rho^{-1}_{0} ( C_{j} ))$ and that in the proof of Proposition~\ref{61} we
showed that $f^{N(i)}(C_{i}) \subset \Psi ^{-N(i) + \delta + 1 } ( \rho^{-1}
_{0}(C_{j}))$. Therefore
$$A^{c}_{i} (N(i)) \subset A^{c}_{j} (-N(i) + \delta +1) | _
{f^{N(i)}(C_{i})} $$
\noindent and
$$A^{c}_{j} (-N(i) + \delta + 1) \subset A^{c} _{l}( -N(i) + \delta +1 ) |
_{\Psi ^{-N(i) + \delta + 1} ( \rho^{-1}_{0}( C_{j}) ) } $$
\noindent imply that
$$ A^{c}_{i} (N(i)) \subset A^{c}_{l} ( -N(i) + \delta + 1) | _ {f^{N(i)}
(C_{i}) } $$
\noindent as we wanted. $\Box$
\bigskip
\begin{figure}
\begin{center}~
\psfig{file=Fig23,height=2.25in}
\end{center}
\caption{An example of $A_{i} (N(i)), \ A_{j} (-N(i) + \delta +
1 ) $ and $A_{l} (-N (i) + \delta + 1)$.}
\label{f23}
\end{figure}
\begin{prop} \label{64}
$\Psi$ has the following properties:
\begin{description}
\item[(i)] if $n(i) = \infty$, then $\Psi^{n} (A_{i}^{c})$ has interior
disjoint from $\overline{P}$ for every $n > 0$;
\item[(ii)] if $n(i) < \infty$, then for every $j \in \underline{L}$ such that $f^
{n(i)} (D_{i}), f(D_{j}) | _{f^{n(i)} (C_{i}) }$
$ \Psi ^{n(i)} (A^{c}_{i} ) \subset A^{c}_{j}(1) | _ {f^{n(i)} (C_{i})}$ and
$f^{n(i)} (C_{i}) \subset f(C_{j})$
\item[(iii)] if $m(i) = - \infty$, then $\Psi^{m} (A^{e}_{i} ) $ has interior
disjoint from $\overline{P}$ for every $m < 0$;
\item[(iv)] if $m(i) > - \infty$, then for every $j \in \underline{L}$ such that
$f^{m(i)} (D_{i}), D_{j} | _{f^{m(i)} (E_{i}) }$ $\Psi^{m(i)} (A^{e}_{i} )
\subset A^{e}_{j} | _{f^{m(i)} (E_{i}) }$ and $f^{m(i)} (E_{i}) \subset E_j$.
\end{description}
\end{prop}
\noindent {\sc Proof:}
If $n(i) = \infty$, then $N(i) = \infty$ and by Proposition~\ref{60} we see
that $\Psi^{n} (A^{c}_{i} ) = A^{c}_{i} (n) = D^{c} ( \alpha_{i}(n) )$ for every
$n \geq 1$. Since $f^{n} (D_{i}), f^{k} (D_{j}) \not| _{f^{n}(C_{i}) } $ for
$-n+1 \leq k \leq n-1$,
$$[I^{c}( \alpha_{i}(n) ) \cup \alpha_{i} (n) ] \cap \displaystyle
\bigcup^{n-1}_{-n+1} f^{k} ( \overline{P}) = \emptyset.$$
This being true for every
$n \geq
1$, we see that $[ I^{c} ( \alpha_{i}( n)) \cup \alpha_{i}(n) ] \cap \overline{P} = \emptyset$ for
every $n \geq 1$, which proves (i). (ii) is immediate from
Corollary~\ref{63}.
(iii) and (iv) are analogous and we omit the proofs. $\Box$
\bigskip
We can now state and prove the main theorem.
\begin{theorem} [Main Theorem] \label{65}
Let $f: \pi \rightarrow \pi$ be a homeomorphism of the plane, $\{D_{i}
\}^{L}_{i=1}$ a
pruning collection and $P = \displaystyle \bigcup^{L}_{i=1} I_{i}$, where $I_i$ is the
interior of the disk $D_i$. Then there exists an isotopy $H: \pi \times [0,1]
\rightarrow \pi$ of the identity such that supp $H \subset \displaystyle \bigcup_{k \in{\Bbb{Z}}
}f^{k} (P)$, and if we set $f_{P}(\cdot) = f \circ H ( \cdot, 1)$, every
point of $P$ is wandering under $f_P$.
\end{theorem}
\noindent{\sc Proof:}
Construct a directed graph $G_c$ as follows: its vertices are the integers
$\{ i \in \underline{L}; \ n(i) > 1 \}$ and there is a directed edge from $i$ to
$j$ if $n(i) < \infty$ and $f^{n(i)} (D_{i}), f(D_{j}) | _{f^{n(i)}(C_{i})}$.
Since we have taken only $i \in \underline{L}$ for which $n(i) > 1$, it is easy
to see
that from each vertex there is at most one outgoing edge (or none, if $n(i) =
\infty$). A {\em loop} in the directed graph consists of an ordered set of
distinct vertices $\{i_{1} < i_{2} < \ldots < i_{l} \}$ such that there is a
directed edge from $i_{r}$ to $i_{r+1}$, for $1 \leq r \leq l$ where we let the
indices ``wrap around'', i.e., $l + 1$ ``=''$1$. Since there is at most one
edge emanating from each vertex, and the vertices in a loop are ordered and
distinct, it follows that two loops are either equal or disjoint. Let ${\mathcal{L}
} = \{ i_{1}, \ldots , i_{l} \}$ be a loop in $G_c$, which for now we will
represent by just its subscripts $\{1, \ldots, l \}$ so that the notation is
not too awkward. By definition, we have
$$ f^{n(r)} (D_{r}), \ f(D_{r+1}) | _{f^{n(r)}(C_{r})} \ \mbox{\rm for } 1
\leq r \leq l$$
\noindent and by Proposition~\ref{64}
$$ \Psi^{n(r)} (A^{c}_{r}) \subset A^{c}_{r+1} (1) |_ {f^{n(r)}(C_{r}) }
\ \mbox{\rm for } 1 \leq r \leq l$$
\noindent from which it follows that
$$ \Psi^{ \sum^{l}_{r=1} n(r) - (l-2) } (A^{c}_{1}(1 )) \subset
A^{c}_{1}(1) |_{f^{\sum^{l}_{r=1} n(r) - (l-1) } (C_{1} ) } $$
For a loop ${\mathcal{L}} = \{i_{1}, \ldots , i_{l} \}$ let $n ({\mathcal{L}} ) = \displaystyle\sum
^{l}_{r=1} n(i_{r}) - (l -2)$. By Lemma~\ref{40} and Corollary~\ref{41},
there exists an isotopy $h_{{\mathcal{L}} }$ of the identity with supp $h_{ {\mathcal{L}
} }$ $\subset I (A^{c} _{i_{1}} (1) )$ such that, if $\zeta_{ {\mathcal{L} }
}(\cdot)
= h _{ {\mathcal{L}}} (\cdot, 1 )$, then $(\Psi \circ \zeta_{ {\mathcal{L}} })^{k n( {\mathcal
{L}} ) }(x) \rightarrow p$ for every $ x \in A^{c}_{i_{1}} (1)$, where
$p$ is the fixed point of $\Psi^{n( {\mathcal{L}}) }$ in $f^{n( {\mathcal{L} } ) +1}
(C_{i_{1}})$. We then construct isotopies $h_{{\mathcal{L}} }$ for each loop ${\mathcal{L}}$
in $G_c$. Since the vertices of $G_c$ were the integers $i \in \underline{L}$ for which
$n(i) > 1$, the supports of isotopies associated to different loops are
disjoint. Let $h_c$ be the union of all these isotopies. By
construction supp $h_{c} \subset \displaystyle \bigcup^{L}_{i=1} I (A^{c}_{i} (1)) =
\displaystyle \bigcup^{L}_{i=1} I^{c} ( \alpha_{i} (1) )$.
In an analogous manner, we construct a directed graph $G_e$ whose
vertices are
$\{ i \in \underline{L}; m(i) < 0 \}$ and for each loop $\mathcal{L}$ in $G_e$, we
construct an isotopy $h_{\mathcal{L}}$ of the identity, with support in
$A^{e}_{i_{1}}(1)$, playing the analogous role for $\Psi^{-1}$ as the above
ones played for $\Psi$. Let $k_e$ denote the union of these isotopies (it
is again easy to check that they have disjoint supports), and define $h_{e} =
\Psi^{-1} \circ k_{e}^{-1} \circ \Psi$, i.e., for each fixed $t$, $h_{e}(x,t)
= \Psi^{-1} ( k_{e}^{-1} ( \Psi (x),t) )$. $h_e$ is also an isotopy
of the
identity and since supp $k_{e} \subset \displaystyle \bigcup^{L}_{i=1} I ( A^{e}_{i}
(1))$,
$$\mbox{\rm supp } h_{e} \subset \displaystyle \bigcup^{L}_{i=1} I (A^{e}_{i} (0)) =
\displaystyle
\bigcup ^{L}_{i=1} I^{e} ( \alpha_{i} (0)).$$
From this it follows that supp
$h_{e} \cap$ supp $h_{c} = \emptyset$ and we let $h = h_{c} \cup h_e$ and $\zeta
(\cdot ) = h ( \cdot, 1 )$. Finally set
$$ H(x,t) = \left\{ \begin{array}{ll}
h(x, 2t) & t \in \left[ 0, \displaystyle \frac{1}{2} \right] \\
\\
R(\zeta(x), 2t-1) & t \in \left[ \displaystyle \frac{1}{2}, 1 \right]
\end{array}
\right. .
$$
It is now not hard to check that $H$ has the desired properties. $\Box$
\bigskip
\section{Introduction}
Arc diagrams are simple, combinatorial objects associated to surfaces with boundary. They consist of homotopy classes of disjoint curves, and can be thought of as embedded graphs on suitably marked surfaces. Arc diagrams and the simplicial complexes, known as arc complexes, which can be built from them, have been studied by topologists for many years; see, for example, \cite{hatcher1991triangulations}, \cite{korkmaz_papadopoulos_2010}, \cite{fomin2008cluster}. This paper examines the behavior of weighted arc diagrams (that is, diagrams with nonnegative real numbers assigned to each arc) under topological branched covering maps. A given branched covering map of marked surfaces induces a map between arcs of the base space and arcs of the total space by path lifting.
We are interested in the membership problem for the set of weighted arc diagrams which can be realized by such a process of lifting from a disk with two marked boundary points, which is referred to as a bigon. That is, given a weighted arc diagram, decide whether it can be realized by lifting the arcs of a diagram on a bigon under a suitable branched cover. In section \ref{realizability}, we construct a combinatorial way of representing a topological branched cover which we refer to as a lifting picture. Because there are finitely many lifting pictures on a given surface, this problem admits a finite, if computationally expensive, solution, which we present as algorithm \ref{brute-force}.
In the sections which follow, we present a much more efficient solution to this problem in the case of an arc diagram which is a triangulation of the surface on which it lives. We tackle this by first presenting solutions to two special cases in sections \ref{homonymous recursion section} and \ref{heteronymous section}, then merging these two special case solutions to handle a general triangulation in section \ref{general maximal section}.
\section{Definitions and Notation}
We first need to define a particular kind of marked surface with boundary, which we will call a \emph{substrate}. The objects and definitions here broadly track with those in \cite{fomin2008cluster}.
\begin{define}
Let $\Sigma$ be a surface with boundary. Let $V$ be a finite set of distinct points on the boundary of $\Sigma$ which we call \emph{vertices}. A \emph{2-coloring} is a map $V \to \{\alpha,\beta\}$ so that no two neighboring vertices are assigned the same value.
\end{define}
\begin{define}
A \emph{substrate} is a surface $\Sigma$ with boundary equipped with a set $M_\Sigma$ of marked points in the interior of $\Sigma$, a set of vertices $V_\Sigma$, and a 2-coloring $C_\Sigma:V_\Sigma \to \{\alpha,\beta\}$. Each boundary component of $\Sigma$ must contain at least 2 vertices. Call the components of $\bdry \Sigma \backslash V_\Sigma$ \emph{boundary edges}.
\end{define}
The set $M_\Sigma$ will typically be empty except for bigons, which will be defined shortly. Substrates are the homes of arc diagrams.
\begin{define}
Let $\Sigma$ be a substrate and $p:[0,1] \to \Sigma$ be a simple path with $p(0),p(1) \in V_\Sigma$, and $p(t) \not\in M_\Sigma$ for all $t$. An \emph{arc} is the homotopy class relative to $\{0,1\}$ of such a path through paths disjoint from $M_\Sigma$.
\end{define}
We will simply say \emph{homotopic} to mean \emph{homotopic rel endpoints through paths disjoint from} $M_\Sigma$ in the context of arcs. An arc is called \emph{trivial} if it is homotopic to a path in the boundary of $\Sigma$ which intersects $V_\Sigma$ only at its endpoints. Otherwise, it is called \emph{nontrivial}. In particular, an arc which is homotopic to a vertex is trivial. An arc is called \emph{homonymous} if $C_\Sigma$ takes the same value at both of its endpoints; that is, if its endpoints are the same color. Otherwise, an arc is called \emph{heteronymous}.
\begin{define}
Two arcs $a$ and $b$ are \emph{noncrossing} if there are paths representing $a$ and $b$ that are disjoint except at their endpoints. Otherwise, $a$ and $b$ \emph{cross}.
\end{define}
\begin{define}
An \emph{arc diagram} on a substrate $\Sigma$ is a set of mutually noncrossing arcs on $\Sigma$.
\end{define}
We will call an arc diagram \emph{clean} if it contains no trivial arcs. An arc diagram $D$ is \emph{maximal} if any nontrivial arc not already in $D$ crosses at least one arc in $D$. An arc diagram is \emph{fully homonymous} if it contains only homonymous arcs. The arcs of a diagram may also be assigned weights.
\begin{define}
A \emph{weighted arc diagram} is an arc diagram $D$ equipped with a map $w_D:D \to{} [0,\infty)$.
\end{define}
\begin{define}
A weighted arc diagram $E$ \emph{extends} a weighted arc diagram $D$ if $D \subset E$ and $w_E = w_D$ on the arcs of $D$. $E$ is also called an \emph{extension} of $D$.
\end{define}
We will also need to define a branched cover of substrates. Compare to the definitions of topological branched cover in \cite{mohar1988branched} and \cite{pikekosz1996basic}.
\begin{define}
Let $X$ and $Y$ be surfaces. A continuous map $f: X \to Y$ is called a \emph{branched covering map} if there is a finite set $\Delta \subset Y$ so that $f$ restricted to $f^{-1}(Y\backslash\Delta)$ is a covering map, and it meets the following regularity condition. For each $p \in X$ there are open neighborhoods $U_p$ of $p$ and $V_{f(p)}$ of $f(p)$ with charts $\psi:V_{f(p)} \to Z$ and $\phi:U_p \to Z$, where $Z$ is an open neighborhood of 0 in $\C$, so that the map $\psi \circ f \circ \phi^{-1}:\C \to \C$ is the complex function $z \to z^n$. Note that we are not requiring that $X$ or $Y$ have a complex structure, only that they are topological 2-manifolds.
\end{define}
The smallest such $\Delta$ is called the \emph{singular set}, and its elements are \emph{singular values}. $Y\backslash\Delta$ is called the \emph{regular set} and its elements likewise called \emph{regular values}. For each $p \in X$, the number $n$ so that $f$ is $z \to z^n$ in local coordinates around $p$ and $f(p)$ is called the \emph{local degree} of $f$ at $p$, $\deg f(p)$. A point $p$ where $\deg f(p) > 1$ is called a \emph{branch point}.
A branched covering map is called \emph{simple} if the local degree of $f$ is 2 at each branch point and the fiber over any singular value contains a unique branch point.
\begin{define}
Let $\Sigma$ and $\Sigma^{\prime}$ be substrates. A \emph{branched cover} of $\Sigma^\prime$ by $\Sigma$ is a branched covering map $f:\Sigma \to \Sigma^\prime$ which takes $V_\Sigma$ to $V_{\Sigma^\prime}$, whose set of singular values is $M_{\Sigma^\prime}$ and which commutes with the 2-coloring. That is, for all vertices $v$ of $\Sigma$, $C_\Sigma(v)=C_{\Sigma^\prime}(f(v))$. We say that $\Sigma$ is a branched cover of $\Sigma^\prime$ if there exists such a branched covering map.
\end{define}
We are particularly interested in branched covers of bigons.
\begin{define}
A topological disk equipped with the structure of a substrate is a \emph{polygon}. A polygon with 2 vertices is a \emph{bigon}.
\end{define}
Arcs which share an endpoint are naturally ordered, a property we will use shortly. Suppose $D$ is an arc diagram on substrate $\Sigma$. Orient each boundary component of $\Sigma$ so that the interior of $\Sigma$ is on the left. Choose a set of smooth, disjoint, simple paths realizing $D$. The arcs incident on a given vertex may now be ordered left to right by the angle of their inward-pointing tangent vectors at $v$. We call this the \emph{canonical order} at each vertex.
\begin{prop}\label{branching arcs exist}
Suppose $f: \Sigma \to \B$ is a simple branched cover. Let $s \in M_\B$ be a singular value, and $\wt{s}$ the branch point in the fiber over $s$. If $p$ is a simple path connecting $s$ to $v$, the union of the two lifts of $p$ which pass through $\wt{s}$ are a path representing a nontrivial homonymous arc. No other union of lifts of $p$ represents an arc.
\end{prop}
\begin{proof}
The preimage of $p$ is a collection of paths in $\Sigma$, each of which connects a point in the fiber over $s$ to a vertex which matches the color of $v$. Each path has an endpoint on a distinct vertex of $\Sigma$. Two of these paths, $a$ and $b$, will contain $\wt{s}$ as an endpoint; the rest will have endpoints on distinct points of the fiber over $s$. Therefore, only $a$ and $b$ join together into a path between vertices of $\Sigma$; none of the rest of the paths fit together into a representative of an arc.
Now it remains to show that $a \cup b$ represents a nontrivial arc. Since $a$ and $b$ will have endpoints on distinct vertices of $\Sigma$, $a \cup b$ is not a loop. The only trivial, homonymous arcs are loops, so $a \cup b$ represents a nontrivial arc.
\end{proof}
Suppose $f: \Sigma \to \B$ is a branched cover of a bigon by a substrate. Choose simple, disjoint paths connecting each point of $M_\B$ to a vertex of $\B$. By Proposition \ref{branching arcs exist}, each path $p$ may be associated with a unique, nontrivial, homonymous arc $\wt{p}$ on $\Sigma$, which we call a \emph{branching arc}. We call the set of these arcs a \emph{branching diagram} for $f$. If we change these paths by homotopy rel endpoints and through paths disjoint from $M_\B$, the branching diagram remains constant.
\begin{define}
Let $D$ be a branching diagram on $\Sigma$ for the branched cover $f$. A component of $\Sigma \backslash D$ is called a \emph{sheet}.
\end{define}
\begin{prop}\label{sheets are simply connected}
Every sheet $s$ of a simple branched cover $f:\Sigma \to \B$ is simply connected, and $f$ is a homeomorphism on the interior of $s$.
\end{prop}
\begin{proof}
The restriction of $f$ to the interior of $s$ is a covering map which sends $\inside(s)$ to the open set $\inside(\B)$ with the images of the branching arcs in $\bdry s$ deleted. This is a simply connected open set. Since $f$ is injective on the fundamental group of $\inside(s)$, $s$ must be simply connected.
Since $s$ is simply connected, $\chi(s)=1$. This implies that, since $f|_{\inside(s)}$ is a covering map, $1=\chi(s)=d \chi(f(s))=d$, where $d$ is the degree of $f|_{\inside(s)}$. Hence, $d=1$ and $f|_{\inside(s)}$ is a homeomorphism.
\end{proof}
An important consequence of Proposition \ref{sheets are simply connected} is that each sheet of a simple branched cover contains exactly two boundary edges of $\Sigma$.
Conversely, one can use a branching diagram together with an additional choice of order to specify a simple branched cover of the bigon. Call an arc diagram $D$ \emph{ordered} if there is a total order on the arcs of $D$ which agrees with the canonical order at each vertex.
\begin{prop}\label{branch data}
Let $\Sigma$ be a substrate, and $D$ be an ordered arc diagram on $\Sigma$ containing $n=|V_\Sigma|/2-\chi(\Sigma)$ homonymous arcs so that each component of $\Sigma \backslash D$ is simply connected and contains two of $\Sigma$'s boundary edges. Then there exists a simple branched cover $f:\Sigma \to \B$ which has $D$ as its branching diagram.
\end{prop}
\begin{proof}
Let $d_1,\ldots,d_n$ be the arcs of $D$, and number the points of $M_\B$ $m_1,\ldots,m_n$. For $1 \leq i \leq n$, connect $m_i$ to the vertex which matches the color of the endpoints of $d_i$ by a simple path $p_i$ so that the canonical order at each vertex agrees with the order of the $p_i$'s. Let $\phi:\bdry \Sigma \cup D \to \B$ be a map which sends $\bdry \Sigma \to \bdry \B$, sends vertices to vertices of the same color, and which sends $d_i$ to $p_i$.
For each sheet $s$, choose a homeomorphism $\phi_s:\inside(s) \to \B \backslash \phi(\bdry s)$ so that the map $f_s:s \to \B$ defined by $f_s=\phi_s$ on $\inside(s)$ and $\phi$ on $\bdry s$ is continuous. Define $f$ to be $f_s$ on each sheet $s$.
\end{proof}
An arc on the bigon will have the same lifts under any branched cover with the same ordered branching diagram, so we will consider branched covers with the same ordered branching diagram equivalent.
\begin{remark}
Here is an equivalent definition of substrates and arcs, which is sometimes convenient. Instead of 2-coloring the vertices of the substrate $\Sigma$, we can 2-color its boundary edges. An arc is now the homotopy class of a simple path disjoint from $M_\Sigma$ with each endpoint on a boundary edge, through paths disjoint from $V_\Sigma$ and $M_\Sigma$ which have their endpoints on $\bdry\Sigma$. That is, we allow the ends of the arc to slide along a boundary edge, but not into a vertex. All other definitions and results about arc diagrams may be used with straightforward modifications.
\end{remark}
\section{Lifting and Realizability}\label{realizability}
Suppose $\Sigma$ and $\B$ are substrates, and $B$ is a weighted arc diagram on $\B$. Let $f:\Sigma \to \B$ be a branched cover of substrates. Since $f$ is an ordinary covering map in the complement of $M_\B$, for any arc $a \in B$ we may choose a path $\hat{a}$ representing $a$ and lift $\hat{a}$ to a set of $\deg(f)$-many paths in $\Sigma$. The homotopy classes of these paths form a set of arcs called the \emph{lifts} of $a$. Some of these may be trivial arcs. We denote the set of nontrivial lifts of $a$ by $\lift(a)$.
The branched cover $f:\Sigma \to \B$ together with the weighted arc diagram $B$ determines a clean, weighted arc diagram $\wt{B}$ on $\Sigma$ in the following way. The set of arcs in $\wt{B}$ is the union of nontrivial lifts
\[
\bigcup_{b \in B} \lift(b)
\]
of arcs in $B$. For an arc $\wt{b} \in \wt{B}$, define the weight by
\begin{equation}
w_{\wt{B}}(\wt{b}) = \sum_{\{b \in B| \wt{b} \in \lift(b)\}} w_B(b)
\end{equation}
That is, the weight of an arc in $\wt{B}$ is the sum of the weights of all arcs which lift to it. It is convenient to treat arcs with weight zero as essentially trivial. So we impose the following equivalence relation.
\begin{define}
Arc diagrams $D$ and $D^\prime$ are \emph{equivalent} if they differ only on weight zero arcs. That is, if $a \in D \backslash D^\prime$, then $w_D(a)=0$, and likewise if $a \in D^\prime \backslash D$, then $w_{D^\prime}(a)=0$.
\end{define}
\begin{define}
$\wt{B}$ is the \emph{lift of} $B$ \emph{under} $f$. We also say $B$ \emph{lifts to} $\wt{B}$.
\end{define}
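As an illustrative aside (not part of the formal development), the weight rule above is easy to compute once the lift map $b \mapsto \lift(b)$ is known. The dictionary-based encoding below, including all arc names, is our own hypothetical choice:

```python
from collections import defaultdict

def lifted_weights(lifts, w_B):
    """Weights of the lifted diagram: the weight of an arc upstairs is
    the sum of the weights of all base arcs that lift to it."""
    w = defaultdict(float)
    for b, arcs in lifts.items():
        for a in arcs:          # each nontrivial lift of the base arc b
            w[a] += w_B[b]
    return dict(w)

# Example: base arc b1 lifts to arcs x and y; b2 lifts only to y.
print(lifted_weights({"b1": ["x", "y"], "b2": ["y"]},
                     {"b1": 2.0, "b2": 3.0}))   # {'x': 2.0, 'y': 5.0}
```

Here the common lift $y$ receives the sum of the weights of both base arcs, exactly as in the displayed formula.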
\begin{define}
A \emph{lifting picture} is a quintuple $(\Sigma,C,\prec,\B,B)$ consisting of a substrate $\Sigma$, a branching diagram $C$, an order $\prec$ on $C$, a bigon $\B$, and an arc diagram $B$ on $\B$.
\end{define}
By Proposition \ref{branch data}, $C,\prec$ determines a branched cover $\Sigma \to \B$.
\begin{define}
Let $L=(\Sigma,C,\prec,\B,B)$ be a lifting picture, and $D$ an arc diagram on $\Sigma$. $L$ \emph{realizes} $D$ if $B$ lifts under $f$ to $D$. $D$ is \emph{realizable} if there exists a lifting picture which realizes $D$.
\end{define}
\begin{define}
The \emph{realizability problem} asks, given a weighted arc diagram $D$ on a substrate $\Sigma$, to decide whether $D$ is realizable, and to produce a lifting picture $(\Sigma,C,\prec,\B,B)$ which realizes $D$ if one exists.
\end{define}
Because there are only finitely many lifting pictures on a given substrate, there is a brute-force solution to the realizability problem.
\begin{algo}\label{brute-force}
INPUT: A weighted arc diagram $D$ with weights $w_D$ on a substrate $\Sigma$. \\
OUTPUT: A lifting picture $(\Sigma,C,\prec,\B,B)$ realizing $D$ if one exists, or the message that no such lifting picture exists.\\
PROCEDURE: Choose a branching diagram $C$ compatible with $D$ and a valid order $\prec$ on $C$. Fix a bigon $\B$ with the appropriate number of interior marked points. Choose an unweighted arc diagram on $\B$. By propositions \ref{branching arcs exist} and \ref{branch data}, choosing $C$ and $\prec$ specifies a branched covering map $f:\Sigma \to \B$.
If the arcs of $B$ lift to those of $D$, then we get a linear map $L$ from vectors of weights $w_B$ on $B$ to vectors of weights on $D$. Compute the Moore-Penrose inverse $L^+$ of $L$. The equation $L w_B = w_D$ has a solution if and only if $L L^+ w_D = w_D$ \cite{james_1978}. In that case, $L^+ w_D$ is a solution to $L w_B = w_D$, so assign weights $L^+ w_D$ to the arcs of $w_B$, and return the resulting lifting picture $(\Sigma,C,\prec,\B,B)$. Otherwise, try a new combination of $C$, $\prec$, and unweighted diagram $B$ on $\B$. If all combinations are exhausted without yielding a solvable equation $L w_B = w_D$, then return the message that $D$ is not realizable. $$\Diamond$$
\end{algo}
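For concreteness, the weight-solving step of the algorithm above can be sketched in Python with NumPy. This is an illustration only: it assumes the lifting matrix $L$ (whose $(i,j)$ entry counts how many times arc $j$ of $B$ lifts to arc $i$ of $D$) has already been computed for one candidate choice of $C$, $\prec$, and unweighted diagram $B$; the function name and encoding are ours, not part of the algorithm.

```python
import numpy as np

def solve_weights(L, w_D, tol=1e-9):
    """Return weights w_B with L @ w_B == w_D, or None if no solution exists.

    L[i, j] = number of times arc j of B lifts to arc i of D.
    The system is solvable iff L @ pinv(L) @ w_D == w_D.
    """
    L = np.asarray(L, dtype=float)
    w_D = np.asarray(w_D, dtype=float)
    L_plus = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse L^+
    if not np.allclose(L @ L_plus @ w_D, w_D, atol=tol):
        return None                     # L w_B = w_D has no solution
    return L_plus @ w_D                 # L^+ w_D is one particular solution
```

For instance, with two arcs of $B$ lifting once and twice respectively, the diagonal system is solvable, while a single arc of $B$ lifting once to two arcs of $D$ with unequal weights is not.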
\begin{prop}
A clean, maximal arc diagram $D$ on a substrate $\Sigma$ with $M_\Sigma=\emptyset$ defines a triangulation of $\Sigma$, and contains $N-3\chi(\Sigma)$ arcs, where $N$ is the cardinality of $V_\Sigma$ and $\chi(\Sigma)$ denotes the Euler characteristic.
\end{prop}
\begin{proof}
$D$ defines a cell decomposition $\mathcal{C}$ of $\Sigma$ as follows: the set of vertices $V$ of $\mathcal{C}$ is $V_\Sigma$, the set of edges $E$ contains the arcs of $D$ and the boundary edges of $\Sigma$, and the set of 2-cells $F$ consists of the closures of the components of $\Sigma \backslash E$. To see that this is a triangulation, suppose there is a 2-cell $C$ that is not a triangle. If the boundary of $C$ contains at least 4 vertices, then any non-adjacent pair of vertices in $\bdry C$ can be connected by an arc which is disjoint from $D$ (as it lies in the interior of $C$) and not already in $D$. This contradicts maximality of $D$. If the boundary of $C$ contains only 1 or 2 vertices, then the interior of $C$ must have genus greater than zero; otherwise, the boundary of $C$ would collapse under homotopy. In that case, once again one can add an arc to the diagram, contradicting maximality. This is illustrated in Figure \ref{nobigons}.
By Euler's formula, $\chi(\Sigma)=|V|-|E|+|F|$. $N=|V|$. Since $\mathcal{C}$ is a triangulation, $2|D|+\#\{\bdry\ \mathrm{edges}\}=3|F|$, where $|D|$ is the number of arcs in $D$. Since there are the same number of vertices as boundary edges, and $|E|=|D|+\#\{\bdry\ \mathrm{edges}\}$, the equation reduces to $\chi(\Sigma)= N-|D|-N+(2/3)|D|+(1/3)N$, and therefore $|D|=N-3\chi(\Sigma)$.
\end{proof}
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\textwidth]{nobigonscorrected.pdf}
\caption{A subdiagram consisting of 2 distinct arcs sharing endpoints must have genus greater than 0. Bold lines represent the boundary of the surface, fine lines represent arcs. The dashed arc can be added to the diagram, meaning it is non-maximal.}
\label{nobigons}
\end{figure}
We now prove some useful facts about lifting arcs from a bigon under a simple branched cover.
\begin{prop}\label{how lifting works}
Let $(\Sigma,C,\prec,\B,B)$ be a lifting picture, and $b$ an arc in $B$. Let $S$ be a sheet of the cover. Then $b$ has a unique lift $\wt{b}$ in $S$. If $b$ is homonymous, then $\wt{b}$ is nontrivial if and only if $b$ encloses a singular value $v$ such that the branch point $\wt{v}$ in the fiber over $v$ is contained in the boundary of $S$. If $b$ is heteronymous, then $\wt{b}$ is nontrivial if and only if $b$ separates two singular values $v$ and $w$ whose branch points lie in $S$.
\end{prop}
\begin{proof}
Since $f$ restricted to the interior of $S$ is a homeomorphism onto an open set in $\B$ which contains $b$, $b$ has a unique lift in $S$.
Suppose $b$ is homonymous. First assume $\wt{b}$ is nontrivial. Since $S$ is simply connected, either $\wt{b}$ is a branching arc in $\bdry S$ or $\wt{b}$ divides $S$ into two nonempty components. If $\wt{b}$ is a branching arc containing branch point $u$, then $b$ is an arc which encloses $f(u)$. If $\wt{b}$ divides $S$ into two components, then one of those must contain a branch point $u$, and the other contains the boundary edges of $\Sigma$ present in $\bdry S$. Therefore, $b$ must separate $f(u)$ from $\bdry \B$ and hence $b$ encloses a singular value. Now, assume $b$ encloses a singular value $v$, whose associated branch point $\wt{v}$ is in $\bdry S$. Then since $b$ separates $v$ from $\bdry \B$, $\wt{b}$ must either separate $\wt{v}$ from $\bdry \Sigma \cap \bdry S$, or be the branching arc containing $\wt{v}$. In either case, $\wt{b}$ is nontrivial.
Now, suppose $b$ is heteronymous. First, assume $\wt{b}$ is nontrivial. Then $\wt{b}$ separates $S$ into two components. If one of them contains no branch points, and hence no branching arcs, then it must contain only $\bdry \Sigma \cap \bdry S$ and points of $\inside(S)$. But since $\bdry S$ only contains two boundary edges of $\Sigma$, then $\wt{b}$ must be homotopic to one of them, which contradicts nontriviality. So $\wt{b}$ must separate two branch points, and hence $b$ separates two singular values whose branch points are in $S$. Finally, assume $b$ separates two singular values whose branch points are in $S$. Then $\wt{b}$ must separate those corresponding branch points in $S$, and hence $\wt{b}$ is nontrivial.
\end{proof}
\begin{cor}
Let $(\Sigma,C,\prec,\B,B)$ be a lifting picture, and $b$ a homonymous arc in $B$ which encloses exactly one singular value $v$. Then the only nontrivial lift of $b$ is the branching arc $\wt{b}$ containing the branch point in the fiber over $v$, and $b$ lifts to $\wt{b}$ twice.
\end{cor}
\begin{proof}
There are exactly two sheets whose boundary contains $\wt{b}$, and $b$ lifts to $\wt{b}$ on each. By the proposition above, $b$ will lift trivially on every other sheet.
\end{proof}
\section{Homonymous Recursion}\label{homonymous recursion section}
In this chapter, we will show how to decide whether a fully homonymous weighted arc diagram is realizable, and how to count the ways of realizing it if so. First, we will define some operations on arc diagrams and substrates which will be required in the decision algorithm.
\begin{define}
A homonymous arc is \emph{outer} if it forms a triangle, called an \emph{outer triangle} $\delta(c)$, with two adjacent boundary edges; $c$ is said to \emph{enclose} $\delta(c)$. If it is also of minimal weight among outer arcs, it is called \emph{least outer}. If there are also arcs $a$ and $b$ in $D$ which bound a triangle with $c$ which contains no other arcs of $D$, then that triangle is unique and called the \emph{inner triangle} $\Delta(c)$. In particular, every outer arc in a maximal diagram has an inner triangle. An inner triangle is called \emph{homonymous} if it is bounded only by homonymous arcs.
\end{define}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{outerarc.pdf}
\caption{An outer arc $c$. $\Delta(c)$ is bounded by $a$, $b$, and $c$. The dashed arc $c^\prime$ is the dual arc of $c$.}
\label{neartriangle}
\end{figure}
An outer arc $c$ with an inner triangle has a \emph{dual arc} $c^\prime$, which is the unique nontrivial arc crossing $c$ and connecting a vertex of $\delta(c)$ to a vertex of $\Delta(c)$. The dual arc is used to define an operation on a substrate with an arc diagram we call one seam reduction.
\begin{define}\emph{One Seam Reduction: }
Let $\Sigma$ be a substrate with an arc diagram $D$, and suppose $c$ is an outer arc with homonymous inner triangle $\Delta(c)$. Cut $\Sigma$ along the homonymous dual arc $c^\prime$; that is, choose a simple representative $p^\prime$ of $c^\prime$, and let $\Sigma^\prime$ be the closure of $\Sigma \backslash p^\prime$. The \emph{one seam reduction} by $c$ is the new diagram and substrate pair $(\Sigma^\prime,D \backslash c)$.
\end{define}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{oneseamreduction.pdf}
\caption{Before and after a one seam reduction by $c$.}
\label{oneseamreduction}
\end{figure}
By performing a one seam reduction, one obtains a new (possibly disconnected) substrate with a new, maximal arc diagram containing one fewer arc. Two components of the substrate which were separated by a one seam reduction performed on a triangle $T$ are said to \emph{meet} at $T$. Since the two legs of $T$ are both incident on a vertex, one is to the left of the other in the canonical order. If components $\Sigma^\prime$ and $\Sigma^{\prime\prime}$ meet at $T$ and $\Sigma^\prime$ contains the left leg of $T$, then $\Sigma^\prime$ meets $\Sigma^{\prime\prime}$ on the left at $T$; otherwise, it meets $\Sigma^{\prime\prime}$ on the right at $T$.
Call the reverse operation a \emph{one seam join}.
\begin{algo}\label{homonymous recursion}
\emph{Homonymous Recursion: }\\
INPUT: A substrate $\Sigma$ with a fully homonymous arc diagram $D$.\\
OUTPUT: A lifting picture $L$ realizing $D$ if one exists.\\
PROCEDURE: The base case is when $\Sigma$ is a square. $D$ has only one arc $d$, with weight $w$ and color $c$. Let $\B$ be a bigon with $M_\B$ containing a single point $p$. Let $B$ be an arc diagram consisting of a single homonymous arc $b$ with color $c$ enclosing $p$, and with weight $w/2$. The branching diagram is just $D$, with the trivial order $\prec$. Return the lifting picture $(\Sigma,D,\prec,\B,B)$.
Suppose $\Sigma$ is not a square. Let $\out(D)$ be the set of least outer arcs of $D$, which all have weight $w_\out$. Perform a one-seam reduction along each element of $\out(D)$. If the resulting substrate is connected, return that $D$ is not realizable. Order the components of the resulting substrate $\Sigma_1,\ldots,\Sigma_k$ so that if $\Sigma_i$ meets $\Sigma_j$ on the left anywhere, then $i<j$. If no such order exists, return that $D$ is not realizable. One may find this order using, for example, a depth first search. This is a special case of a topological sort, and can be done in linear time; see section 22.4 of \cite{cormen}. Perform this procedure recursively on each $\Sigma_i$.
If none of these return that the diagram is not realizable, then they return lifting pictures $L_1,\ldots,L_k$. Let $B$ be the diagram containing diagrams $B_1,\ldots,B_k$ as subdiagrams arranged in order, with one extra arc of weight $w_\out$ enclosing all of them. The branching diagram $C$ is the union $C_1 \cup \ldots \cup C_k$ from $L_1,\ldots,L_k$. The order $\prec$ on $C$ is defined by requiring it to extend each $\prec_i$ on $C_i$; and if $c \in C_i,c^\prime \in C_j$, $i<j$, then $c \prec c^\prime$. Return the lifting picture $(\Sigma,C,\prec,\B,B)$. $\Diamond$
\end{algo}
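The component-ordering step above is a plain topological sort, and can be sketched in Python. In this illustrative fragment the relation ``meets on the left'' is assumed to have been extracted as directed edges $(i,j)$, meaning component $i$ meets component $j$ on the left; we use the queue-based (Kahn) variant of topological sort rather than the depth first search cited above, which is equally linear-time.

```python
from collections import defaultdict, deque

def order_components(n, left_edges):
    """Order components 0..n-1 so that every edge (i, j) has i before j.

    Returns the ordering as a list, or None if the relation is cyclic,
    in which case no valid order exists and the diagram is not realizable.
    """
    succ = defaultdict(list)
    indeg = [0] * n
    for i, j in left_edges:
        succ[i].append(j)
        indeg[j] += 1
    queue = deque(k for k in range(n) if indeg[k] == 0)
    order = []
    while queue:
        k = queue.popleft()
        order.append(k)
        for j in succ[k]:
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return order if len(order) == n else None
```

A cyclic ``meets on the left'' relation is exactly the failure case in which the algorithm reports that $D$ is not realizable.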
\begin{lemma}
Let $(\Sigma,C,\prec,\B,B)$ be a lifting picture, and suppose $b \in B$ is a homonymous arc of color $c$ which encloses every singular value. Then the set of nontrivial lifts of $b$ is the set of outer arcs of color $c$ in $\Sigma$. If $\Sigma$ is not a square, then $b$ lifts to each one exactly once.
\end{lemma}
\begin{proof}
Each sheet $S$ contains a unique outer arc $\omega(S)$ whose outer triangle is contained in $S$. Equivalently, $\omega(S)$ separates the branching arcs in $\bdry S$ from $\bdry \Sigma \cap \bdry S$. By Proposition \ref{how lifting works}, $b$ has a unique nontrivial lift $\wt{b}$ on every sheet $S$. Since $b$ separates all singular values from $\bdry \B$, $\wt{b}$ separates the branching arcs of $\bdry S$ from $\bdry \Sigma \cap \bdry S$, so $\wt{b}=\omega(S)$.
If $\Sigma$ is not a square, then no two sheets $S,S^\prime$ can have $\omega(S)=\omega(S^\prime)$.
\end{proof}
\begin{theorem}\label{homonymous recursion works}
Let $D$ be a maximal, homonymous diagram on substrate $\Sigma$. Homonymous recursion returns a lifting picture realizing $D$ if and only if $D$ is realizable.
\end{theorem}
\begin{proof}
Suppose the lifting picture $L=(\Sigma,C,\prec,\B,B)$ is the result of performing homonymous recursion on $D$. We must show that $L$ realizes $D$. We proceed by induction on the number of arcs $|D|$ in $D$. Suppose $|D|=1$. Then $\Sigma$ must be a square. It follows from Proposition \ref{how lifting works} and its corollary that $L$ as returned by the base case of homonymous recursion realizes $D$. Now, assume that for any maximal, homonymous arc diagram $D$ with $|D|<N$ for some $N$, that if homonymous recursion returns a lifting picture, then that lifting picture realizes $D$. Suppose $D$ is a maximal, homonymous diagram with $|D|=N$.
Let $\out$ be the set of least weighted outer arcs of $D$, and let $w_\out$ be the weight of any one of these arcs. Let $D^\prime$ be $D \backslash \out$ with the weights of all outer arcs reduced by $w_\out$, and define $\Sigma_1,\ldots,\Sigma_k$ to be the components resulting from performing a one seam reduction along each arc in $\out$. We then have maximal diagrams $D_1,\ldots,D_k$ which are the restriction of $D^\prime$ to each $\Sigma_i$. Since homonymous recursion finishes when applied to $D$, that means that $k \geq 2$ and we have lifting pictures $L_i=(\Sigma_i,C_i,\prec_i,\B_i,B_i)$, $1 \leq i \leq k$. By the inductive assumption, $L_i$ realizes $D_i$, since $|D_i|<N$.
We now must show $C=C_1 \cup \ldots \cup C_k$ is a valid branching diagram. By construction, and using the assumption that homonymous recursion successfully terminated, the order of $\Sigma_1,\ldots,\Sigma_k$ is compatible with the canonical order at each vertex. Therefore, by induction, the order on $C$ extends the canonical order at each vertex, as it must. It remains to show that each component of $\Sigma \backslash C$ is simply connected and contains two edges of $\bdry \Sigma$. Let $S$ be a component of $\Sigma \backslash C$. If $S$ is contained in some $\Sigma_i$, then it is a valid sheet by the inductive hypothesis. Otherwise, $S$ is the result of performing one seam joins on sheets in various $\Sigma_i$'s. But since a sheet of any $\Sigma_i$ contains a unique outer arc along which to perform a one seam join, $S$ must be the result of performing a one seam join on two sheets, meaning that $S$ is simply connected and has the correct boundary. So $C$ is a valid branching diagram.
Now we must show that $L$ realizes $D$. If $b_i$ is an arc in $B_i \subset B$, then by Proposition \ref{how lifting works} all nontrivial lifts will be in $\Sigma_i$, since the branch points mapped to the singular values enclosed by $b_i$ are all contained in $\Sigma_i$. Therefore, $B_i$ lifts to $D_i$ in the combined lifting picture $L$. Then the lift of $B_1 \cup \ldots \cup B_k = B \backslash \omega$, where $\omega$ is the unique arc in $B$ which encloses all of $M_\B$, is $D_1 \cup \ldots \cup D_k = D^\prime$. Since $\omega$ lifts to every outer arc once by the lemma, $L$ realizes $D$.
Finally, suppose $D$ is a realizable, maximal, homonymous weighted arc diagram on $\Sigma$. We need to show that homonymous recursion returns a lifting picture when applied to $D$. Suppose $R=(\Sigma,C_R,\prec,\B,B_R)$ is a lifting picture which realizes $D$. Let $\omega$ be the arc in $B$ which encloses $M_\B$, and consider the set $A$ of nontrivial arcs in $\B$ which cross $\omega$ and which are disjoint from the rest of $B$. The set of nontrivial lifts of arcs in $A$ is exactly the set of dual arcs of the least outer arcs of $D$. Since the arcs of $A$ divide $\B$ into multiple components, and these arcs are disjoint from the singular values, the set of lifts of these arcs must divide $\Sigma$ into at least 2 components, and these components will have a consistent left-right orientation as required in the algorithm. Therefore, one seam reduction will take place, and the algorithm will recurse onto the components. Finally, we need only observe that each component $\Sigma_i$ will have fewer boundary marked points than $\Sigma$, so therefore this process must terminate, and homonymous recursion thus returns a lifting picture.
\end{proof}
\section{Heteronymous Arcs}\label{heteronymous section}
To address the question of realizability for more general arc diagrams, we will first treat another special case.
\begin{define}
A maximal arc diagram on a substrate $\Sigma$ is \emph{maximally heteronymous} if it contains exactly $|V_\Sigma|/2-\chi(\Sigma)$ homonymous arcs.
\end{define}
A maximally heteronymous diagram has only one possible choice of branching diagram. The only freedom is in choosing a total order. As in the homonymous case, each sheet imposes an ordering on the branching arcs which form its boundary. A sheet of a maximally heteronymous diagram will be of the form shown in Figure \ref{typical het sheet}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\textwidth]{hetsheet.pdf}
\caption{A typical sheet of a maximally heteronymous diagram.}
\label{typical het sheet}
\end{figure}
For each color $c$, the canonical order at vertices orders the branching arcs of color $c$ in a sheet $S$. If $D$ is also maximal, then the heteronymous arcs contained in $S$ impose a canonical order on all branching arcs in $\bdry S$. Globally fix a color $c$. Orient each heteronymous arc so it points out of its $c$-colored endpoint. Each heteronymous arc separates $S$ into two components, and with the orientation one of these will be on the left, and the other on the right. Define an order $<_c$ on the homonymous arcs bounding $S$ by requiring $x <_c y$ if $x$ comes before $y$ in the canonical order at a common endpoint; or if there exists a heteronymous arc $h$ so that $x$ is on the left of $h$ and $y$ is on its right. Since $D$ is maximal, every pair of homonymous arcs in $S$ is separated by at least one heteronymous arc, so this is a total order on the branching arcs which bound $S$. Note that if we choose the opposite color $c^\prime$, then $x <_c y$ if and only if $y <_{c^\prime} x$.
\begin{prop}
Let $D$ be a maximal, maximally heteronymous arc diagram on a substrate $\Sigma$. If the lifting picture $(\Sigma,C,\prec,\B,B)$ realizes $D$, then the order $\prec$ on $C$ extends $<_c$.
\end{prop}
\begin{proof}
Assume to the contrary that $\prec$ does not extend $<_c$. Then on some sheet $S$ there is a pair of branching arcs, $x$ and $y$, and a heteronymous arc $h$, such that $x$ is left of $h$ and $y$ is on the right, but $y \prec x$. If $x$ and $y$ have the same colored endpoints, then this reverses the canonical order at a vertex, so $(\Sigma,C,\prec,\B,B)$ doesn't define a branched cover. Otherwise, since $y \prec x$, there is no arc in $B$ which separates the singular values $v_x$ from $v_y$ associated with $x$ and $y$, respectively, and puts $v_x$ to the left of $v_y$. Therefore, no arcs in $B$ can lift to $h$, and therefore $(\Sigma,C,\prec,\B,B)$ does not realize $D$.
\end{proof}
If a maximal, maximally heteronymous diagram $D$ is realizable, then it must be realized by a maximal, maximally heteronymous diagram, which on the bigon has the following form.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{het-bigon.pdf}
\caption{A maximal, maximally heteronymous diagram on the bigon. A heteronymous arc separates each pair of homonymous arcs.}
\label{het bigon}
\end{figure}
Suppose that $a$ is a heteronymous arc in $D$, and let $f: \Sigma \to \B$ be a branched cover. $a$ is inside a sheet $S$ bounded by branching arcs, which have canonical order $<_c$. $a$ divides the branching arcs in $\bdry S$ into two sets $X$ and $Y$, with $X <_c Y$. Let $x$ be the greatest element of $X$ and $y$ be the least element of $Y$. All the heteronymous arcs between $f(x)$ and $f(y)$ will lift to $a$.
Therefore, to realize a maximal, maximally heteronymous diagram $D$ is to find a total order on the homonymous arcs of $D$ which agrees with $<_c$ on each sheet, and to apportion weight to the heteronymous arcs of a diagram on the bigon of the form shown in Figure \ref{het bigon}, so that the total weight in the bigon between any arc and its successor in a given sheet is exactly the weight of the heteronymous arc upstairs which separates them on that sheet. We will build this procedure from several more elementary algorithms.
We can represent a sheet $S$ by a list of the form
\begin{equation}\label{list}
b_0,w_1,b_1,w_2,b_2,\ldots,w_k,b_k
\end{equation}
where the $b_i$ are the branching arcs in canonical order, and $w_i$ is the weight of the heteronymous arc between $b_{i-1}$ and $b_i$.
\begin{algo}\label{free merge}
\emph{Free Merge.}\\
INPUT: Lists $X=w_1^X,x_1,\ldots,w_m^X,x_m$ and $Y=w_1^Y,y_1,\ldots,w_n^Y,y_n$ of the form in eq. \ref{list} with $\{x_1,\ldots,x_m\} \cap \{y_1,\ldots,y_n\} = \emptyset$. \\
OUTPUT: A list $w_1^Z,z_1,\ldots,w_{m+n}^Z,z_{m+n}$.\\
PROCEDURE: The algorithm is defined recursively. The base case is when $X$ or $Y$ is empty. In that case, return the nonempty list.
If $X$ and $Y$ are both nonempty, let $w_1^Z=\min\{w_1^X,w_1^Y\}$. If $w_1^X<w_1^Y$, then let $X^\prime=w_2^X,x_2,\ldots,x_m$, $Y^\prime=w_1^Y-w_1^X,y_1,\ldots,y_n$, and $z_1=x_1$. Otherwise, let $X^\prime=w_1^X-w_1^Y,x_1,\ldots,x_m$, $Y^\prime=w_2^Y,y_2,\ldots,y_n$, and $z_1=y_1$. Return $Z=w_1^Z,z_1||FreeMerge(X^\prime,Y^\prime)$ where $||$ denotes concatenation. $\Diamond$
\end{algo}
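A minimal Python sketch of this procedure, assuming each input list $w_1,x_1,\ldots,w_m,x_m$ is encoded as a Python list of (weight, arc) pairs with arbitrary hashable arc labels; the recursion in the text is unrolled into a loop, but the branch structure is the same.

```python
def free_merge(X, Y):
    """Merge two weighted lists, splitting weights so that partial sums
    up to each original arc are preserved."""
    X, Y = list(X), list(Y)
    Z = []
    while X and Y:
        (wx, x), (wy, y) = X[0], Y[0]
        if wx < wy:
            # emit x_1 with its full weight; charge it against Y's head
            Z.append((wx, x))
            X.pop(0)
            Y[0] = (wy - wx, y)
        else:
            # emit y_1 with its full weight; charge it against X's head
            Z.append((wy, y))
            Y.pop(0)
            X[0] = (wx - wy, x)
    # base case: one list is empty, return the remainder of the other
    return Z + X + Y
```

For example, merging $1,x_1,2,x_2$ with $2,y_1$ yields $1,x_1,1,y_1,1,x_2$: the prefix sums up to $x_1$, $y_1$, and $x_2$ are $1$, $2$, and $3$, matching the input lists.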
\begin{prop}\label{free merge works}
Algorithm \ref{free merge}, when applied to lists $X=w_1^X,x_1,\ldots,w_m^X,x_m$ and $Y=w_1^Y,y_1,\ldots,w_n^Y,y_n$ returns a list $Z=w_1^Z,z_1,\ldots,w_{m+n}^Z,z_{m+n}$ satisfying the following properties:
\begin{enumerate}
\item $\{z_1,\ldots,z_{m+n}\}=\{x_1,\ldots,x_m\} \cup \{y_1,\ldots,y_n\}$
\item If $z_i=x_a$ and $z_j=x_b$, then $a<b$ implies $i<j$; likewise if $z_i=y_a$ and $z_j=y_b$, then $a<b$ implies $i<j$.
\item $w_1^X=\sum_{i=1}^k w_i^Z$, where $z_k=x_1$. Similarly, $w_1^Y=\sum_{i=1}^k w_i^Z$, where $z_k=y_1$.
\item For each $1 \leq i < m$, let $z_a=x_{i}$ and $z_b=x_{i+1}$. Then $w_{i+1}^X=\sum_{j=a+1}^b w_j^Z$, and likewise for $z_a=y_{i}$ and $z_b=y_{i+1}$ with $1 \leq i < n$.
\end{enumerate}
\end{prop}
\begin{proof}
First, observe that this algorithm always terminates after no more than $m+n$ steps of the recursion. This is because we remove the first 2 elements of either $X$ or $Y$ to make $X^\prime$ and $Y^\prime$, so at most we can only do this $m$ times to $X$ and $n$ times to $Y$.
$Z$ trivially satisfies properties 1-4 when $X$ or $Y$ is empty. To show this in general, we will induct on the combined length of the lists, $n+m$. The base case, $n+m=1$, is a special case of $X$ or $Y$ being empty. Assume $Z$ has properties 1-4 when $n+m \leq N$ for some $N$, and suppose $n+m=N+1$. Let $X^\prime$ and $Y^\prime$ be as described in algorithm \ref{free merge}.
We need to check that the list $Z=w_1^Z,z_1||Z^\prime$, where $Z^\prime=FreeMerge(X^\prime,Y^\prime)$, has the properties above. Properties 1 and 2 follow from the definitions of $z_1$, $X^\prime$, and $Y^\prime$, and the assumption that these properties hold for $FreeMerge(X^\prime,Y^\prime)$. To show $Z$ has property 3, WLOG assume that $w_1^Z=w_1^X$ and $z_1=x_1$; otherwise, just switch the roles of $X$ and $Y$. Let $z_k=y_1$. Then using the inductive hypothesis it suffices to show that
\[
w_1^Y=\sum_{j=1}^k w_j^Z
\]
Now, $\sum_{j=1}^k w_j^Z=w_1^X+\sum_{j=1}^{k-1} w_j^{Z^\prime}$. By definition $w_1^{Y^\prime}=w_1^Y-w_1^X$, and by the inductive hypothesis, $w_1^{Y^\prime}=\sum_{j=1}^{k-1} w_j^{Z^\prime}$, since $z^\prime_{k-1}=y_1$. So $\sum_{j=1}^k w_j^Z = w_1^X+w_1^{Y^\prime}=w_1^Y$. Property 4 now also follows.
\end{proof}
Now we will treat merging lists of the form in eq. \ref{list} with $x_0=y_0$ and $x_m=y_n$ so that the result also satisfies a result like Proposition \ref{free merge works}.
\begin{algo}\label{constrained merge}
\emph{Constrained Merge.}\\
INPUT: Lists $X=x_0,w_1^X,x_1,\ldots,w_m^X,x_m$ and $Y=y_0,w_1^Y,y_1,\ldots,w_n^Y,y_n$ of the form in eq. \ref{list} with $x_0=y_0$ and $x_m=y_n$, and such that
\[
\sum_{i=1}^m w_i^X = \sum_{j=1}^n w_j^Y
\]
OUTPUT: A list $Z=z_0,w_1^Z,z_1,\ldots,w_{m+n}^Z,z_{m+n}$.\\
PROCEDURE: Let $z_0=x_0$. Compute
\[
Z^\prime=FreeMerge(w_1^X,x_1,\ldots,w_m^X,x_m,w_1^Y,y_1,\ldots,w_n^Y,y_n)
\]
The last 2 terms of $Z^\prime$ will be $0,x_m$ or $0,y_n$. Let $Z^{\prime\prime}$ be $Z^\prime$ with the last 2 terms deleted. Return $Z=z_0||Z^{\prime\prime}$. $\Diamond$
\end{algo}
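A Python sketch of this variant, again assuming lists are encoded as a leading arc followed by (weight, arc) pairs, i.e.\ $x_0,(w_1,x_1),\ldots,(w_m,x_m)$, with matching first and last arcs and equal total weight. The free merge loop is repeated inline so the sketch is self-contained; the assertions play the role of the stated input assumptions.

```python
def free_merge(X, Y):
    # as in the free merge algorithm: split weights while interleaving arcs
    X, Y, Z = list(X), list(Y), []
    while X and Y:
        (wx, x), (wy, y) = X[0], Y[0]
        if wx < wy:
            Z.append((wx, x)); X.pop(0); Y[0] = (wy - wx, y)
        else:
            Z.append((wy, y)); Y.pop(0); X[0] = (wx - wy, x)
    return Z + X + Y

def constrained_merge(X, Y):
    """Merge lists sharing their first and last arcs and their total weight."""
    z0 = X[0]
    assert z0 == Y[0] and X[-1][1] == Y[-1][1]
    assert sum(w for w, _ in X[1:]) == sum(w for w, _ in Y[1:])
    Zp = free_merge(X[1:], Y[1:])
    assert Zp[-1][0] == 0       # last pair is (0, x_m) or (0, y_n)
    return [z0] + Zp[:-1]       # drop the spurious duplicated endpoint
```

For instance, merging $p,2,a$ with $p,1,b,1,a$ returns $p,1,b,1,a$: the shared endpoints appear once each, and the weight between them is apportioned consistently with both inputs.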
\begin{prop}\label{constrained merge works}
Algorithm \ref{constrained merge}, when applied to lists $X=x_0,w_1^X,x_1,\ldots,w_m^X,x_m$ and $Y=y_0,w_1^Y,y_1,\ldots,w_n^Y,y_n$ satisfying the assumptions in algorithm \ref{constrained merge} returns a list $Z=z_0,w_1^Z,z_1,\ldots,w_{m+n}^Z,z_{m+n}$ satisfying the following properties:
\begin{enumerate}
\item $\{z_1,\ldots,z_{m+n}\}=\{x_1,\ldots,x_m\} \cup \{y_1,\ldots,y_n\}$
\item If $z_i=x_a$ and $z_j=x_b$, then $a<b$ implies $i<j$; likewise if $z_i=y_a$ and $z_j=y_b$, then $a<b$ implies $i<j$.
\item For each $1 \leq i \leq m$, let $z_a=x_{i-1}$ and $z_b=x_{i}$. Then $w_i^X=\sum_{j=a+1}^b w_j^Z$, and likewise for $z_a=y_{i-1}$ and $z_b=y_{i}$ with $1 \leq i \leq n$.
\end{enumerate}
\end{prop}
\begin{proof}
Property 2 is automatic given Proposition \ref{free merge works}. Property 1 follows from the fact that the homonymous arcs $z_1^\prime,\ldots,z_k^\prime$ in
\[
Z^\prime=FreeMerge(w_1^X,x_1,\ldots,w_m^X,x_m,w_1^Y,y_1,\ldots,w_n^Y,y_n)
\]
will be the disjoint union of the arcs in the arguments of $FreeMerge$. Since the total weight in both lists is equal, the last 2 terms of $Z^\prime$ will be either $0,x_m$ or $0,y_n$. Therefore, after discarding the spurious last 2 terms of $Z^\prime$, property 1 holds, and property 3 also follows from that fact and Proposition \ref{free merge works}.
\end{proof}
We now generalize to all lists of the form in eq. \ref{list}. Now, we allow any number of arcs in the two lists to coincide.
\begin{algo}\label{general merge}
\emph{General Merge.}\\
INPUT: Lists $X=x_0,w_1^X,x_1,\ldots,x_m$ and $Y=y_0,w_1^Y,y_1,\ldots,y_n$ of the form in eq. \ref{list}.\\
OUTPUT: A list $Z=z_0,w_1^Z,z_1,\ldots,z_{k}$ of the form in eq. \ref{list}.\\
PROCEDURE: Compute the set of arcs $A$ which appear in both $X$ and $Y$. Define index functions $\sigma,\eta$ by, for each arc $a \in A$, $x_{\sigma(a)}=a$ and $y_{\eta(a)}=a$. For each pair $a,b$ of arcs in $A$, with $\sigma(a)<\sigma(b)$, first check that $\eta(a)<\eta(b)$. Otherwise, raise an exception. Next, check that
\[
\sum_{i=\sigma(a)+1}^{\sigma(b)} w_i^X = \sum_{j=\eta(a)+1}^{\eta(b)} w_j^Y
\]
If that equation is not satisfied, raise an exception.
Now, let $A=\{a_1,\ldots,a_k\}$. Partition $X$ into sublists
\[
X_0=x_0,w_1^X,\ldots,x_{\sigma(a_1)-1},w_{\sigma(a_1)}^X,
\]
\[
X_1=x_{\sigma(a_1)},\ldots,x_{\sigma(a_2)}
\]
\[\ldots\]
\[
X_k=w_{\sigma(a_k)+1}^X,x_{\sigma(a_k)+1},\ldots,x_m
\]
Partition $Y$ into sublists $Y_0,\ldots,Y_k$ in the same way. Define $Z^\prime$ to be
\[
(FreeMerge(X_0^*,Y_0^*))^*||ConstrainedMerge(X_1,Y_1)||\ldots||FreeMerge(X_k,Y_k)
\]
where ${}^*$ denotes reindexing in opposite order. Concatenating $ConstrainedMerge(X_i,Y_i)$ with $ConstrainedMerge(X_{i+1},Y_{i+1})$ results in a duplicate, adjacent copy of the shared arc $a_{i+1}$. Remove all these duplicates from $Z^\prime$ and call the resulting list $Z$. Return $Z$. $\Diamond$
\end{algo}
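The two exception checks at the start of this procedure can be sketched in Python as follows, again with lists encoded as a leading arc followed by (weight, arc) pairs. This fragment tests only whether the merge is admissible (shared arcs appear in the same relative order, and the weight between consecutive shared arcs agrees); it does not perform the merge itself, and the function name is ours.

```python
def check_mergeable(X, Y):
    """Return True iff X and Y pass the order and weight-sum checks."""
    def arcs(L):
        return [L[0]] + [a for _, a in L[1:]]

    def prefix_weight(L, arc):
        # total weight from the start of L up to the named arc
        if L[0] == arc:
            return 0
        total = 0
        for w, a in L[1:]:
            total += w
            if a == arc:
                return total
        raise ValueError(arc)

    ax, ay = arcs(X), arcs(Y)
    shared = [a for a in ax if a in set(ay)]
    # order check: shared arcs must appear in the same order in both lists
    if [a for a in ay if a in set(ax)] != shared:
        return False
    # weight-sum check between each pair of consecutive shared arcs
    for a, b in zip(shared, shared[1:]):
        if (prefix_weight(X, b) - prefix_weight(X, a)
                != prefix_weight(Y, b) - prefix_weight(Y, a)):
            return False
    return True
```

When either check fails, the algorithm raises an exception, which in the realization procedure below signals that the diagram is not realizable with the attempted merge.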
Now, as an immediate corollary of propositions \ref{free merge works} and \ref{constrained merge works}, we have
\begin{prop}\label{general merge works}
If algorithm \ref{general merge} does not raise an exception when applied to lists $X=x_0,w_1^X,x_1,\ldots,w_m^X,x_m$ and $Y=y_0,w_1^Y,y_1,\ldots,w_n^Y,y_n$ satisfying the assumptions in algorithm \ref{general merge}, then it returns a list $Z=z_0,w_1^Z,z_1,\ldots,w_k^Z,z_k$ satisfying the following properties:
\begin{enumerate}
\item $\{z_0,\ldots,z_{k}\}=\{x_0,\ldots,x_m\} \cup \{y_0,\ldots,y_n\}$
\item If $z_i=x_a$ and $z_j=x_b$, then $a<b$ implies $i<j$; likewise if $z_i=y_a$ and $z_j=y_b$, then $a<b$ implies $i<j$.
\item For each $1 \leq i \leq m$, let $z_a=x_{i-1}$ and $z_b=x_{i}$. Then $w_i^X=\sum_{j=a+1}^b w_j^Z$, and likewise for $z_a=y_{i-1}$ and $z_b=y_{i}$ with $1 \leq i \leq n$.
\end{enumerate}
\end{prop}
We can now define the algorithm for realizing a maximal, maximally heteronymous arc diagram.
\begin{algo}\label{heteronymous realization}
\emph{Heteronymous Realization.}\\
INPUT: A maximal, maximally heteronymous weighted arc diagram $D$ on a substrate $\Sigma$. \\
OUTPUT: A lifting picture $L=(\Sigma,C,\prec,\B,B)$. \\
PROCEDURE: Let $C$ be the set of homonymous arcs in $D$. Check that each component of the complement of $C$ is simply connected; if not, raise an exception. Check that the boundary of each such component contains 2 edges of $\bdry \Sigma$; otherwise, raise an exception.
For each component $S$ of the complement of $C$, compute the list $X(S) = x_0,w_1,x_1,\ldots,w_n,x_n$, where $x_0,\ldots,x_n$ are the arcs of $C$ bounding $S$, and $w_i$ is the weight of the heteronymous arc between $x_{i-1}$ and $x_i$. Choose a component $T_0$ of $\Sigma \backslash C$. Let $Z_0=X(T_0)$. Choose a component $U$ adjacent to $T_0$. Let $T_1 = T_0 \cup U$. Compute $Z_1 = GeneralMerge(Z_0,X(U))$. If $T_1 = \Sigma$, then we are done. Otherwise, choose a component $U$ adjacent to $T_1$, let $T_2 = T_1 \cup U$, and compute $Z_2 = GeneralMerge(Z_1,X(U))$. Continue in this fashion until, for some $k$, $T_k = \Sigma$.
The order in which each arc of $C$ appears in $Z_k$ defines the order $\prec$ on $C$. Construct $B$ as follows. Write $Z_k = z_0,w_1,z_1,\ldots,w_n,z_n$. For $0 \leq i \leq n$, enclose $m_i \in M_\B$ by a homonymous arc with endpoints the same color as those of $z_i$, and assign it half the weight of $z_i$. For $0 \leq i < n$, place a heteronymous arc between the arcs enclosing $m_i$ and $m_{i+1}$, and assign it weight $w_{i+1}$. Return $(\Sigma,C,\prec,\B,B)$. $\Diamond$
\end{algo}
\begin{theorem}\label{heteronymous realization works}
Let $D$ be a maximal, maximally heteronymous weighted arc diagram on a substrate $\Sigma$. Then algorithm \ref{heteronymous realization} returns a lifting picture if and only if $D$ is realizable, and the lifting picture it returns realizes $D$.
\end{theorem}
\begin{proof}
Suppose that Algorithm \ref{heteronymous realization} returns $L = (\Sigma,C,\prec,\B,B)$ when applied to $D$. It follows from Proposition \ref{how lifting works} that $L$ realizes $D$ as an unweighted diagram; it remains to show that each arc of $D$ gets the correct weight. It follows from Proposition \ref{how lifting works} that the homonymous arcs all get the correct weight. Let $h \in D$ be a heteronymous arc. It is contained in some sheet $S$. Let $c_k,c_\ell$ be the pair of arcs which flank $h$ in $S$, where the indices indicate their index in $C$ under $\prec$. Again by Proposition \ref{how lifting works}, the heteronymous arcs in $B$ between critical values $m_k$ and $m_\ell$ will lift to $h$. The weights of those arcs are $w_{k+1},\ldots,w_\ell$. By Proposition \ref{general merge works}, $\sum_{i=k+1}^\ell w_i$ is equal to the weight of $h$, so $L$ realizes $D$.
Now, suppose $D$ is realizable. We want to show that Algorithm \ref{heteronymous realization} finishes without throwing an exception. Since $D$ is realizable, there exists a branched cover $f:\Sigma \to \B$ and a weighted arc diagram $B$ on $\B$, so that $B$ lifts to $D$ under $f$. There is a set of branch cuts, unique up to homotopy, disjoint from and parallel to the homonymous arcs of $B$, which connect the critical values $M_\B$ to the vertices of $\B$. Therefore, we can assume that the branch cuts lift to the homonymous arcs $C$ of $D$. That implies that there are no cycles in $C$; that each component of $\Sigma \backslash C$ is simply connected, as these are sheets of a branched cover of a disk; and that the canonical order of homonymous arcs bounding each sheet extends to a total order on $C$. Finally, we need to show that when we merge a sheet with a connected union of sheets, the condition on sums of weights in Algorithm \ref{general merge} is satisfied. This follows from Proposition \ref{how lifting works} and Lemma \ref{merge order doesn't matter}.
\end{proof}
\begin{lemma}\label{merge order doesn't matter}
Let $A,B,C$ be lists of the form required by Algorithm \ref{general merge}. Then $\M(\M(A,B),C) = \M(\M(A,C),B)$.
\end{lemma}
\begin{proof}
Let $X = \M(\M(A,B),C)$ and $Y = \M(\M(A,C),B)$, where $\M$ denotes Algorithm \ref{general merge}. Delete the arcs of $C$ not appearing in $A$ or $B$ from $Y$ and add any weights which are now adjacent; call the result $Z$. On one hand, $Z = \M(A,B)$. On the other hand, clearly $\M(Z,C)=Y$, so $X=Y$.
\end{proof}
\section{General Maximal Diagrams}\label{general maximal section}
We now put the results of the previous sections together to decide whether any maximal arc diagram is realizable. A generic maximal diagram is composed of connected, maximal, fully homonymous subdiagrams separated by heteronymous arcs. On a bigon, a generic diagram has the form shown in Figure \ref{general bigon}.
\begin{define}
A \emph{homonymous clump} is a maximal, connected, homonymous subdiagram.
\end{define}
Let $H_1,\ldots,H_n$ be the set of homonymous clumps in $D$. Each $H_i$ has a boundary consisting of homonymous arcs which are sides of a triangle with a heteronymous arc. Call these arcs \emph{outer to} $H_i$. We may cut $\Sigma$ along each of these arcs and paste in triangles as shown in Figure \ref{cutting parallel to an arc}. Let $\Sigma_i$ be the component of the resulting surface which contains $H_i$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{parallelcut.pdf}
\caption{Cutting along the homonymous arc $a$.}
\label{cutting parallel to an arc}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{generalbigon.pdf}
\caption{Each $B_i$ is a maximal, homonymous subdiagram.}
\label{general bigon}
\end{figure}
Each component of the complement of $\Sigma_1 \cup \ldots \cup \Sigma_n$ is either an outer triangle bounded by 2 edges of $\bdry \Sigma$ and a homonymous arc, or a region of the form shown in Figure \ref{typical het sheet} which we will call a heteronymous ladder.
\begin{define}
Let $D$ be an arc diagram. A \emph{heteronymous ladder} is a region of $\Sigma$ bounded by homonymous arcs of $D$ which contains only heteronymous arcs in its interior.
\end{define}
This all suggests that to realize a diagram $D$, one can perform homonymous recursion to realize each homonymous clump individually, and then use a version of Algorithm \ref{heteronymous realization} to order these $H_i$ and assign weights to the heteronymous arcs separating the diagrams which realize each $H_i$ downstairs.
\begin{define}
Let $(\Sigma,C,\prec,\B,B)$ be a lifting picture realizing a diagram $D$ on $\Sigma$, and $E$ be a maximal, connected homonymous subdiagram of $B$. We say that $E$ has \emph{connected branching} if the set of branching arcs whose critical points are enclosed by arcs of $E$ all are contained in a single connected, homonymous subdiagram of $D$.
\end{define}
A maximal diagram consists of homonymous clumps and heteronymous ladders. Each heteronymous ladder can be given a canonical total order, just as in the case of the sheets of a maximally heteronymous diagram. We now show that, in essence, clumps lift to clumps. For this we'll need a procedure we call \emph{weight rebalancing}.
\begin{define}
Let $D$ be an arc diagram, $(\Sigma,C,\prec,\B,B)$ be a lifting picture realizing $D$, and $h \in B$ a homonymous arc. The \emph{branching} of $h$ is the set of arcs in $C$ whose critical points are mapped into the region enclosed by $h$. We say $h$ has \emph{connected branching} if the branching of $h$ is contained in one homonymous clump of $D$. Otherwise, $h$ has \emph{disconnected} branching.
\end{define}
\begin{define}
Let $D$ be an arc diagram, $(\Sigma,C,\prec,\B,B)$ be a lifting picture realizing $D$, and $h \in B$ a homonymous arc. A \emph{new lift} of $h$ is an arc in $D$ which is not a lift of any arc in $B$ enclosed by $h$.
\end{define}
If a homonymous arc has no new lifts, then we may perform the following operation on $B$ without changing its lift.
\begin{define}\label{pushing down weight}
\emph{Pushing down weight.}
Let $(\Sigma,C,\prec,\B,B)$ be a lifting picture, $w_B$ be the weight function on $B$, and $h \in B$ a homonymous arc. Let $h_1,\ldots,h_k$ be the outermost arcs enclosed by $h$; that is, each $h_i$ is enclosed by $h$ and not by any other arc enclosed by $h$. Define $P_h(B)$ to be $B$ with $h$ deleted and with $w_{P_h}(h_i) = w_B(h_i)+w_B(h)$. We say that we have \emph{pushed down} the weight of $h$ to make $(\Sigma,C,\prec,\B,P_h(B))$.
\end{define}
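As an illustration (our own sketch, not part of the dissertation), the pushing-down operation of Definition \ref{pushing down weight} can be modelled on a forest of nested arcs; the class and function names below are hypothetical.

```python
# Hypothetical sketch of "pushing down weight": nested homonymous
# arcs form a forest; each node stores its weight and the outermost
# arcs it encloses.

class Arc:
    def __init__(self, name, weight, children=None):
        self.name = name
        self.weight = weight
        self.children = children if children is not None else []

def push_down_weight(parent, h):
    """Form P_h(B) locally: delete h and add w(h) to each outermost
    arc h_1, ..., h_k enclosed by h."""
    parent.children.remove(h)
    for hi in h.children:
        hi.weight += h.weight
        parent.children.append(hi)

# Example: h (weight 2) encloses h1 and h2 (weights 1 and 3).
h1, h2 = Arc("h1", 1), Arc("h2", 3)
h = Arc("h", 2, [h1, h2])
root = Arc("root", 0, [h])
push_down_weight(root, h)
# Now w(h1) = 3, w(h2) = 5, and h has been deleted.
```

The point mirrored by Lemma \ref{pushing down weights works} is that the total weight over any lift is unchanged, since the weight of $h$ is redistributed to exactly the arcs whose lifts $h$'s lifts ran parallel to.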
\begin{lemma}\label{pushing down weights works}
Let $D$ be a weighted arc diagram, and suppose that $(\Sigma,C,\prec,\B,B)$ realizes $D$. If $h$ is a homonymous arc in $B$ with no new lifts, then the diagram $(\Sigma,C,\prec,\B,P_h(B))$ also realizes $D$.
\end{lemma}
\begin{proof}
Let $h_1,\ldots,h_k$ be the outermost arcs enclosed by $h$. By assumption every lift of $h$ is also a lift of some $h_i$. Therefore, $(\Sigma,C,\prec,\B,P_h(B))$ still realizes $D$ as an unweighted diagram. Furthermore, since $w_{P_h}(h_i) = w_B(h_i)+w_B(h)$, the weight previously contributed by $h$ to the lifts of $h_i$ is not contributed directly by $h_i$, so $(\Sigma,C,\prec,\B,P_h(B))$ also realizes $D$ as a weighted diagram.
\end{proof}
\begin{prop}\label{lifting pictures admit connected branching}
If $D$ is a maximal, realizable weighted arc diagram, then $D$ is realized by a lifting picture where every homonymous arc has connected branching.
\end{prop}
\begin{proof}
Suppose $(\Sigma,C,\prec,\B,B)$ realizes $D$. If there is an arc $h$ with no new lifts, push down the weight of $h$; repeat this until every arc has a new lift. Call the resulting lifting picture $(\Sigma,C,\prec,\B,B^\prime)$. By Lemma \ref{pushing down weights works}, $(\Sigma,C,\prec,\B,B^\prime)$ realizes $D$.
Define $\epsilon(a)$ to be the number of critical values enclosed by a homonymous arc $a$. Let $h \in B^\prime$ be an arc with the smallest $\epsilon(h)$ such that $h$ has disconnected branching. We know that $h$ must have a new lift. Let $h_1,\ldots,h_k$ be the homonymous clumps enclosed by $h$. Each $h_i$ has connected branching. Since $h$ has a new lift, there must be clumps $h_a,h_b$ so that $h_a \cup h_b$ has connected branching. On the other hand, if $h_i,h_j$ are such that $h_i \cup h_j$ has disconnected branching, then we may switch their order without affecting the lift.
Using this, we may choose a new order $h_1^\prime,\ldots,h_k^\prime$ so that if $h_i^\prime \cup h_j^\prime$ has connected branching, then there is no $h_\ell$ with $i<\ell<j$ such that $h_i \cup h_\ell \cup h_j$ has disconnected branching. That is, we can push together all the clumps under $h$ which lift to the same homonymous clump upstairs. Add a weight zero homonymous arc enclosing each maximal collection of $h_i^\prime$s whose union has connected branching. Call these arcs $\alpha_1,\ldots,\alpha_n$. Now, $h$ has no new lifts: it always lifts parallel to the lifts of $\alpha_1,\ldots,\alpha_n$. Call this diagram $B^{\prime\prime}$. Pass to $P_h(B^{\prime\prime})$. The lifting picture $(\Sigma,C,\prec,\B,P_h(B^{\prime\prime}))$ still realizes $D$. Since each of the arcs $\alpha_i$ introduced this way has connected branching, $B^{\prime\prime}$ has fewer arcs with disconnected branching than $B^\prime$. If $B^{\prime\prime}$ still has an arc with disconnected branching, choose the arc $h$ with the smallest $\epsilon(h)$ among arcs with disconnected branching, and eliminate it in the same way. Repeat this process until the resulting lifting picture $(\Sigma,C,\prec,\B,B^\dagger)$ has the property that all homonymous arcs of $B^\dagger$ have connected branching. Since $B^{\prime\prime}$ has only finitely many arcs, and each step of this process reduces the number of arcs with disconnected branching, this process terminates after finitely many steps. By Lemma \ref{pushing down weights works}, $B^\dagger$ still realizes $D$.
Finally, add weight zero heteronymous arcs separating each homonymous clump of $B^\dagger$. Call the resulting diagram $B^{max}$. It follows from Lemma \ref{pushing down weights works} and Proposition \ref{how lifting works} that these heteronymous arcs have no nontrivial lifts which are not also lifts of heteronymous arcs already in $B^\dagger$, so indeed $B^{max}$ still realizes $D$.
\end{proof}
Lifting pictures where all homonymous arcs have connected branching are useful, because in such lifting pictures, each homonymous clump lifts nontrivially to a unique homonymous clump.
\begin{lemma}\label{homonymous clump lemma}
Let $D$ be a maximal arc diagram on a substrate $\Sigma$. Suppose $(\Sigma,C,\prec,\B,B)$ realizes $D$, and all homonymous arcs in $B$ have connected branching. Then each homonymous clump in $B$ lifts nontrivially to a unique homonymous clump in $D$.
\end{lemma}
\begin{proof}
Let $H$ be a homonymous clump in $B$, and $h \in H$. Since $h$ has connected branching, all its nontrivial lifts are on sheets bounded by one or more branching arc in a single homonymous clump $\wt{H}$ in $D$. If $S$ is a sheet completely bounded by arcs in $\wt{H}$ then clearly any nontrivial lifts of $h$ on that sheet are arcs of $\wt{H}$. On the other hand, if $S$ is bounded by some arcs of $\wt{H}$ and some arcs from another homonymous clump, then the lift of at least one heteronymous arc in $B$ separates the lift of $h$ from the lifts of any arcs in any other homonymous clump in $B$. So $H$ lifts nontrivially only to $\wt{H}$.
\end{proof}
\begin{lemma}\label{heteronymous ladder lemma}
Let $D$ be a realizable, maximal arc diagram, and $S$ a heteronymous ladder in $D$.
\begin{enumerate}
\item $S$ is simply connected.
\item Each homonymous arc bounding $S$ is contained in a different homonymous clump.
\end{enumerate}
\end{lemma}
\begin{proof}
Every branching arc lies in some homonymous clump. So $S$ is contained in a single sheet of the cover, and is therefore simply connected. To prove statement 2, observe that $S$ is contained in a sheet $S^\prime$. If multiple arcs of the same homonymous clump $K$ are in $\bdry S$, then at least 2 branching arcs $a,b$ in $K$ are in $\bdry S^\prime$. Since the diagram is maximal and $S^\prime$ is simply connected, there is a heteronymous arc in $S$ which separates $a$ from $b$ in $S^\prime$. But this cannot be the lift of any arc on the bigon, as by Lemma \ref{homonymous clump lemma} no heteronymous arc on the bigon can separate two critical points from the same homonymous clump.
\end{proof}
This allows us to represent a heteronymous ladder $S$ as a list of the form needed to input into Algorithm \ref{general merge}. The homonymous arcs of $S$ can be canonically ordered just as in the maximally heteronymous case. Represent $S$ as a list $s_0,w_1,s_1,\ldots,w_n,s_n$ where $s_i$ is the homonymous clump which contains the $i^{th}$ homonymous arc bounding $S$, and $w_i$ is the weight of the heteronymous arc separating $s_{i-1}$ and $s_i$ in $S$.
\begin{algo}\label{realization}
\emph{Maximal Diagram Realization.}\\
INPUT: A maximal, weighted arc diagram $D$ on a substrate $\Sigma$.\\
OUTPUT: A lifting picture $(\Sigma,C,\prec,\B,B)$ realizing $D$.\\
PROCEDURE: For each homonymous clump $K$, perform homonymous recursion (that is, Algorithm \ref{homonymous recursion}) on $K$, to get a lifting picture $L_K = (\Sigma,C_K,\prec,\B_K,B_K)$.
For each heteronymous ladder $S$, represent $S$ as a list $L(S)$. Choose a heteronymous ladder $T_0$, and let $Z_0=L(T_0)$. If this is the only heteronymous ladder, we're done. Otherwise, choose another heteronymous ladder $U$. Let $T_1 = T_0 \cup U$. Compute $Z_1 = GeneralMerge(Z_0,L(U))$. Continue in this fashion until all heteronymous ladders are merged. Let $Z=z_0,w_1^Z,z_1,\ldots,w_n^Z,z_n$ be the resulting list.
Now construct the lifting picture. Let $\mathcal{K}$ be the set of homonymous clumps in $D$. The set $C$ of branching arcs is $\cup_{K \in \mathcal{K}} C_K$. The order $\prec$ is given by the order on each $C_K$ and the ordering on $\mathcal{K}$ provided by $Z$. The bigon diagram $B$ consists of the homonymous subdiagrams $B_{z_0},\ldots,B_{z_n}$ computed in the first part of the procedure, where $B_{z_{i-1}}$ and $B_{z_i}$ are separated by a heteronymous arc with weight $w_i^Z$. Return $(\Sigma,C,\prec,\B,B)$. $\Diamond$
\end{algo}
\begin{theorem}
Let $D$ be a maximal weighted arc diagram on a substrate $\Sigma$. Then $D$ is realizable if and only if Algorithm \ref{realization} finishes without raising an exception, and if it returns $L = (\Sigma,C,\prec,\B,B)$, then $L$ realizes $D$.
\end{theorem}
\begin{proof}
Suppose Algorithm \ref{realization} returns $L = (\Sigma,C,\prec,\B,B)$. Then by Lemma \ref{homonymous clump lemma}, each maximal homonymous subdiagram $B^\prime$ of $B$ lifts to a unique homonymous clump $D^\prime$ in $D$. Since the algorithm finished without error, it follows from Theorem \ref{homonymous recursion works} that $B^\prime$ realizes $D^\prime$. It now follows from the proof of Theorem \ref{heteronymous realization works} and Lemma \ref{heteronymous ladder lemma} that the heteronymous arcs of $B$ lift to exactly the heteronymous arcs of $D$, and that they get the correct weight.
Now, suppose that $D$ is realizable. By Proposition \ref{lifting pictures admit connected branching}, we may assume $D$ is realized by a lifting picture with connected branching. Then by Lemma \ref{homonymous clump lemma}, each homonymous clump of $D$ is realizable as a homonymous diagram in its own right, so homonymous recursion will succeed on each clump. Similarly, it follows from the proof of Theorem \ref{heteronymous realization works} that $GeneralMerge$ must succeed in merging all the heteronymous ladders of $D$. So Algorithm \ref{realization} finishes without error when applied to $D$.
\end{proof}
\section{Further Directions}
The motivation for the work in this dissertation came from Heegaard Floer homology. Heegaard Floer homology is a topological invariant of closed, orientable 3-manifolds, introduced by Ozsvath and Szabo in \cite{heegaard-floer-original}. It is built on top of what is known as a Heegaard diagram. Detailed information on Heegaard diagrams can be found, for example, in \cite{rolfsen}. In brief, a Heegaard splitting of a 3-manifold $M$ is a decomposition of $M$ into two handlebodies $H_1$ and $H_2$. One may reconstruct $M$ by gluing together $\bdry H_1$ and $\bdry H_2$ by a homeomorphism. It turns out that enough data to specify this gluing up to isotopy is encoded in what is called a Heegaard diagram. Let $\Sigma$ be a surface of genus $g$ homeomorphic to $\bdry H_1$ and $\bdry H_2$. A Heegaard diagram on $\Sigma$ is a choice, up to homotopy, of $2g$ closed curves on $\Sigma$, $g$ of which are labeled as coming from $H_1$ and the rest of which are labeled as coming from $H_2$. Each set of $g$ curves with the same labels must be linearly independent as 1 dimensional homology classes.
Heegaard Floer homology uses unordered $g$-tuples of intersection points of $H_1$ curves with $H_2$ curves as the generators of the Heegaard Floer chain complex. Computing the boundary map requires, among other things, counting structures known as Whitney disks. A Whitney disk, as defined in \cite{heegaard-floer-original}, can be thought of as a pair of homotopy classes of maps on a surface with boundary $\Sigma^\prime$. One map is an immersion $\phi: \Sigma^\prime \to \Sigma$ which sends $\bdry \Sigma^\prime$ to the labeled curves on $\Sigma$. The other is a branched cover $f$ taking $\Sigma^\prime$ to the unit disk with two marked points $p_1$ and $p_2$ on its boundary. For $i=1,2$, we must have that $\phi(f^{-1}(p_i))$ is an intersection point of an $H_1$ and an $H_2$ curve for each point in $f^{-1}(p_i)$.
To compute the Heegaard Floer boundary map, one must count the number of holomorphic representatives of these Whitney disks. Holomorphic, in this case, means that for a choice of complex structure on $\Sigma$, the complex structure on $\Sigma^\prime$ one gets by pulling back the complex structure on $\Sigma$ under $\phi$ is the same as the complex structure on $\Sigma^\prime$ one gets by pulling back the standard complex structure on the unit disk under $f$.
We ultimately wish to simplify this computation by using a more combinatorial version of a Whitney disk, which is based on a so-called degenerate complex structure. A degenerate complex structure is something like a measured lamination, which can be pulled back under immersions and branched covers, and which can be encoded, in the right circumstances, using an arc diagram. We hope that the algorithm presented here will be useful in a new method for computing Heegaard Floer homology, and that lifting pictures will be of independent interest.
\setcounter{equation}{0}
In this article we consider a model for haptotaxis with growth.
Haptotaxis is the directed motion of cells by migration up a gradient
of cellular adhesion sites located in the extracellular matrix (ECM).
This process appears in tumor invasion and is involved in the first stage
of proliferation. It also plays an important role in wound healing. \\
\noindent
The basic mechanism involves 3 main cellular components: the tumor cells,
the Extracellular Matrix (ECM), and some Matrix Degrading Enzymes (MDE).
Tumor cells migrate in response to gradients of some ECM proteins. Those ECM proteins
are degraded by MDE, those enzymes being produced by tumor cells themselves.
Moreover, both tumor cells and MDE diffuse in the cellular medium but
ECM proteins do not diffuse.
\noindent
This mechanism is reminiscent of chemotaxis, which accounts
for the directed migration of biological individuals
(e.g. bacteria) towards higher gradients of some chemical substance.
Chemotaxis often works as an aggregating mechanism, which
is reflected in the blow-up of solutions of the Keller-Segel model, a phenomenon that
has been widely studied in the recent years.
However there is a major difference between chemotaxis and haptotaxis:
since ECM proteins do not diffuse, instead of the elliptic or parabolic coupling appearing in chemotaxis,
the haptotaxis model involves an ODE coupling between the
concentration of ECM proteins and the MDE concentration. This is also the case in angiogenesis models
(cf \cite{cpz}), but models for haptotaxis involve (at least) 3 equations, whereas angiogenesis is a coupled system of 2 equations.\\
\noindent
We now give a brief review of the mathematical literature related to haptotaxis modelling.
The relevant variables are the tumor cells concentration, the Extracellular
Matrix concentration (ECM) the Matrix Degrading Enzymes concentration (MDE)
as well as the oxygen concentration. A hybrid model using PDEs and
cellular automata has been proposed by Anderson
\cite{anderson05}, involving
4 components: tumor cells, ECM, MDE + Oxygen.
Global existence for Anderson's model has been established
in \cite{ww} (Walker, Webb (07)).
Our model is a simpler version of this model involving 3 components
where we introduce a
bistable nonlinearity to model the role of
changes in oxygen concentration.
A similar model of haptotaxis with a logistic nonlinearity is studied in \cite{pmc09}
(and the references therein)
and global existence is proved.
Finally Chaplain, Lolas (see \cite{cl06} and the references therein) proposed a combined
chemotaxis-haptotaxis model with logistic source.
Recent results by Y. Tao, M. Wang \cite{tw08} and Y. Tao \cite{tw09} show
global existence for this model in dimension $N\leq 2$.
Complex patterns in haptotaxis models
are obtained numerically in \cite{ww}, and also in \cite{yekat} and
\cite{chertock}.
\noindent
Our starting point is the haptotaxis model proposed in \cite{ww}. In
this paper, the authors prove global well-posedness for a large class of initial
data, a result which strongly emphasizes the difference with the Keller-Segel chemotaxis
model. Here we consider a different version of this model, where we do not explicitly
consider the oxygen concentration as a variable. Instead we replace it by a bistable
nonlinearity in the equation for the cell concentration.\\
Next we show that in the limit $\varepsilon \rightarrow 0$, the solutions converge to the
solutions of a free boundary problem where the interface motion is driven by mean
curvature plus a haptotaxis term.
\noindent
More precisely, we study the initial value Problem $(P^{\varepsilon})$
$$
(P^{\varepsilon})
\left\{
\begin{array}{ll}
u_{t} = \Delta u - \nabla \cdot (u \nabla \chi (v)) +
\dfrac{1}{\varepsilon^{2}} f(u) & \quad \quad \mbox{ in } \ \O \times (0,T] \\
v_{t}= -\lambda mv & \quad \quad \mbox{ in }
\ \O \times (0,T] \\
m_{t} = \alpha \Delta m +u-m & \quad \quad \mbox{in} \ \O \times (0,T] \\
u(x,0)=u_{0}(x) & \quad \quad x \in \Omega \\
v(x,0)=v_{0}(x) & \quad \quad x \in \Omega \\
m(x,0)=m_{0}(x) & \quad \quad x \in \Omega \\
\dfrac{\p u}{\p \nu} = \dfrac{\p m}{\p \nu} =0 & \quad \quad \mbox{on} \ \p \O \times (0,T],
\end{array}
\right.
$$
where $\O$ is a smooth bounded domain in $\mathbb{R}^{N}$ $(N \geq 2)$,
$\O_T=\Omega \times [0,T]$ with $T >0$,
$\nu$ is the exterior normal vector on $\p \O$ and $\lambda >0$, $\alpha >0$
are strictly positive constants.
\noindent
The haptotaxis sensitivity function $\chi$ is smooth and satisfies
$$\forall v>0,\,\,\,\chi(v) >0, \,\,\,\chi'(v)>0 .$$
The growth term $f$ is bistable and is given by
$$\forall u \in \R,\,\,\,f(u) = u(1-u)(u-\frac{1}{2}) $$
so that $\int_{0}^{1}f(u)du=0$.
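The balance condition can be verified by direct integration of the cubic; the following exact-arithmetic check (illustrative only, not part of the paper) confirms it.

```python
# Check with exact rationals that f(u) = u(1-u)(u - 1/2) is a
# balanced bistable nonlinearity: it vanishes at 0, 1/2, 1 and
# its integral over [0,1] is zero.

from fractions import Fraction

def f(u):
    return u * (1 - u) * (u - Fraction(1, 2))

# Expanding, f(u) = -u^3 + (3/2)u^2 - (1/2)u, so
# int_0^1 f(u) du = -1/4 + 1/2 - 1/4 = 0.
integral = Fraction(-1, 4) + Fraction(1, 2) - Fraction(1, 4)

assert integral == 0
assert f(Fraction(0)) == f(Fraction(1)) == f(Fraction(1, 2)) == 0
```

The zeros $0$ and $1$ are the stable states and $1/2$ is the threshold; $f$ is odd about $u=1/2$, which is what forces the integral to vanish.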
\noindent
We make the following assumptions about the initial data.
\begin{enumerate}
\item $u_{0}$, $v_{0}$ and $m_{0}$ are nonnegative $C^{2}$ functions in
$\overline \O$ and we fix a constant $C_{0} >1$ such that
\begin{equation}\label{diyi}
||u_{0}||_{C^{2}(\overline \O)} +|| v_{0}||_{C^{2}(\overline \O)}
+||m_{0}||_{C^{2}(\overline \O)} \leq C_{0}.
\end{equation}
\item
$v_{0}$ satisfies the homogeneous Neumann boundary condition
\begin{equation}\label{neumann0}
\dfrac{\p v_{0}}{\p \nu} =0 \quad \mbox{on} \quad \p \O.
\end{equation}
\item
The open set $\O_{0}$ defined by
$$\O_{0} := \{ x \in \O, u_{0}(x) > 1/2 \}$$
is connected and $\O_{0} \subset \subset \O$.
\item
$\Gamma_{0} := \p \O_{0}$ is a smooth hypersurface without boundary.
\end{enumerate}
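For concreteness, here is one admissible family of initial data (our own illustrative choice, not taken from the paper) satisfying assumption 3, with $\Omega$ the unit ball and $\Gamma_0$ the sphere of radius $r_0$; the radius and transition width are arbitrary.

```python
# Illustrative initial datum u0: smooth, radially decreasing, and
# crossing the level 1/2 exactly on the sphere of radius r0, so that
# Omega_0 = {u0 > 1/2} is the open ball of radius r0, connected and
# compactly contained in the unit ball Omega.

import math

r0 = 0.4  # radius of the initial interface Gamma_0 (hypothetical)

def u0(x):
    r = math.sqrt(sum(xi * xi for xi in x))
    return 1.0 / (1.0 + math.exp((r - r0) / 0.05))

assert u0((0.0, 0.0)) > 0.5               # the center lies in Omega_0
assert abs(u0((r0, 0.0)) - 0.5) < 1e-9    # {u0 = 1/2} is the sphere r = r0
assert u0((0.8, 0.0)) < 0.5               # a point of Omega outside Omega_0
```

Being a composition of smooth functions, this $u_0$ is $C^2$ (indeed $C^\infty$) and bounded, as required by assumption 1.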
With these assumptions $\O_{0}$ is a domain enclosed by the initial interface $\Gamma_{0}$ and
$$u_{0} > 1/2 \mbox{ in } \O_{0},\,\,\, 0 \leq u_{0} < 1/2
\mbox{ in } \O \setminus \overline \O_{0}.$$
The existence of a unique nonnegative solution $(u^{\varepsilon}, v^{\varepsilon}, m^{\varepsilon})$
to Problem $(P^{\varepsilon})$ is established in Section 2. Note that it follows
from (\ref{neumann0}) that
\begin{equation}\label{neumann}
\frac{\p v^{\varepsilon}}{\p \nu} =0 \mbox{ on } \p \O \times [0,T].
\end{equation}
We are interested
in the asymptotic behavior of $(u^{\varepsilon}, v^{\varepsilon}, m^{\varepsilon})$
as $\varepsilon \rightarrow 0$.
The asymptotic limit of Problem $(P^{\varepsilon})$
as $\varepsilon \rightarrow 0$ is given by the following free boundary
Problem $(P^{0})$
\begin{align*}
(P^{0})
\left\{
\begin{array}{ll}
u^{0}(x,t)= \chi_{\O_{t}}(x) = \displaystyle{
\left\{
\begin{array}{ll}
1 \mbox{ in } \ \O_{t}, t \in [0,T] \\
0 \mbox{ in } \ \O \setminus \overline \O_{t}, t \in [0,T]
\end{array}
\right.} \\
v_{t}^{0} =-\lambda m^{0}v^{0} & \mbox{in} \ \O \times (0,T] \\
m_{t}^{0}= \alpha \Delta m^{0} + u^{0} - m^{0} & \mbox{in} \ \O \times (0,T] \\
V_{n} = -(N-1) \kappa + \dfrac{\p \chi (v^{0})}{\p n} & \mbox{on} \ \Gamma_{t}= \p \O_{t}, t \in (0, T] \\
\Gamma_{t} |_{t=0} = \Gamma_{0} \\
v^{0}(x,0)=v_{0}(x) & x \in \Omega \\
m^{0}(x,0)=m_{0}(x) & x \in \Omega \\
\dfrac{\p m^{0}}{\p \nu} =0 & \mbox{on} \ \p \O \times (0,T],
\end{array}
\right.
\end{align*}
\noindent
where $\O_{t} \subset \subset \O$ is a moving domain,
$\Gamma_{t}= \p \O_{t}$ is the limit interface,
$n$ is the exterior normal vector on $\Gamma_{t}$, $V_{n}$ is the normal velocity of $\Gamma_{t}$
in the exterior direction and $\kappa$ is the mean curvature at each point
of $\Gamma_{t}$.
We first establish the well-posedness of Problem $(P^{0})$ locally in time in Section 3.
Our main result is to prove rigorously the convergence of
$(u^{\varepsilon}, v^{\varepsilon}, m^{\varepsilon})$
to
$(u^0, v^0, m^0)$
for initial data satisfying the above assumptions.
In a first step, we establish the following generation of interface property.
\begin{thm}
\label{gmi}
Assume that $(u_{0},v_0,m_0)$ satisfy the hypotheses $1$-$2$-$3$-$4$.
Let $0 <\eta < 1/4$ be an arbitrary constant and define $\mu = f^{'}(1/2) = 1/4$. Then there exist $\varepsilon_{0} >0$ and $M_{0}>0$ such that, for all $\varepsilon \in (0, \varepsilon_{0}]$ and all $t \in [t^{\ast},T]$ where $t^{\ast}=\mu^{-1} \varepsilon^{2} |\ln \varepsilon|$,\\
(a) for all $x \in \O$, we have
$$0 \leq u^{\varepsilon} (x, t) \leq 1+\eta ;$$
(b) for all $x \in \O$ such that $|u_{0}(x) - \frac{1}{2}| \geq M_{0} \varepsilon$, we have
$$\mbox{ if } u_{0}(x) \geq \frac{1}{2} + M_{0} \varepsilon , \mbox{ then }
u^{\varepsilon}(x, t) \geq 1-\eta ,$$
$$\mbox{ if } u_{0}(x) \leq \frac{1}{2} - M_{0} \varepsilon , \mbox{ then }
0\leq u^{\varepsilon}(x, t) \leq \eta .$$
\end{thm}
\noindent
The main result reads as follows.
\begin{thm}
\label{thm1}
Assume that $(u_{0},v_0,m_0)$ satisfy the hypotheses $1$-$2$-$3$-$4$.
Let $(u^{\varepsilon}, v^{\varepsilon}, m^{\varepsilon})$ be the solution
of Problem $(P^{\varepsilon})$ and let $(v^{0}, m^{0}, \Gamma)$
with $\Gamma = (\Gamma_{t} \times \{ t \}) _{t \in [0,T]}$
be the smooth solution of the free boundary Problem $(P^{0})$ on $[0,T]$.
Then, as $\varepsilon \rightarrow 0$, the solution
$(u^{\varepsilon}, v^{\varepsilon}, m^{\varepsilon})$ converges to
$(u^{0}, v^{0}, m^{0})$ almost everywhere
in $\bigcup_{0 < t \leq T} ((\O \setminus \Gamma_{t}) \times {t})$.
More precisely,
$$\lim_{\varepsilon \rightarrow 0} u^{\varepsilon}(x,t) = u^{0}(x,t)
\mbox{ a.e. } \mbox{ in }
\bigcup_{0 < t \leq T} ((\O \setminus \Gamma_{t}) \times {t}), $$
and for all $\alpha \in (0,1)$,
$$\lim_{\varepsilon \rightarrow 0}
||v^{\varepsilon} - v^{0}||_{C^{1+\alpha, (1+\alpha)/2}(\overline \O_{T})} =0,$$
$$\lim_{\varepsilon \rightarrow 0}
||m^{\varepsilon}-m^{0}||_{C^{1+\alpha, (1+\alpha)/2}(\overline \O_{T})} =0.$$
\end{thm}
We actually prove a stronger convergence result concerning $\ue$.
\begin{cor}
\label{coru}
Assume that $(u_{0},v_0,m_0)$ satisfy the hypotheses $1$-$2$-$3$-$4$.
Then for any $ t \in (0, T]$,
\begin{eqnarray}
&& \lim_{\varepsilon \rightarrow 0} \ue(x,t)= \chi_{\O_t}(x) =\displaystyle{
\left\{
\begin{array}{ll}
1 & \mbox{ for } x \in \Omega_{t} \\
0 & \mbox{ for } x \in \O \setminus {\overline{\Omega_{t}}}
\end{array}
\right.}
\end{eqnarray}
\end{cor}
Moreover, like in \cite{a}, we also obtain the following estimate
of the distance between the interface $\Gamma_{t}$ solution of Problem $(P^0)$
and the set
$$\Gamma_{t}^{\varepsilon}:=\{ x \in \O, u^{\varepsilon}(x,t)=1/2 \}.$$
\begin{thm}
\label{thm2}
There exists $C>0$ such that
$$\Gamma_{t}^{\varepsilon} \subset \aleph_{C\varepsilon}(\Gamma_{t}) \mbox{ for } 0 \leq t \leq T, $$
where $\aleph_{r}(\Gamma_{t}):= \{ x \in \O, dist(x, \Gamma_{t})<r \}$ is the
tubular neighborhood of $\Gamma_{t}$ of radius $r>0$.
\end{thm}
\begin{cor}
\label{haus}
$\Gamma_{t}^{\varepsilon} \rightarrow \Gamma_{t}$ as $\varepsilon \rightarrow 0$,
uniformly in $t \in [0,T]$ in the sense of the Hausdorff distance.
\end{cor}
The organization of the paper is as follows.
In section 2 we prove some a priori estimates,
establish a comparison principle for Problem $(P^{\varepsilon})$ and prove the
existence of a unique global solution.
In section 3 we prove the well-posedness of the free boundary problem $(P^{0})$
and obtain the existence of a smooth unique solution up to some time $T>0$.
In section 4 we establish the property of generation of interface. Finally in section 5
we prove the convergence of the solution
of Problem $(P^{\varepsilon})$ to the solution of Problem $(P^{0})$.
\section{A priori estimates and comparison principle}
\setcounter{equation}{0}
\subsection{A priori estimates}
For a given $T>0$ and a given nonnegative function $u_{0} \in C^{2}(\overline \O)$,
we define
$$X_{T} = \{ u \in C^{0}( \overline \O_{T}),\quad 0 \leq u \leq C_{0} \mbox{ in } \O_{T} \mbox{ and } u(x,0)=u_{0}(x) \},$$
where $C_{0} >1$ is the constant defined in \eqref{diyi}.
\noindent
It is convenient to rewrite Problem $(P^{\varepsilon})$ as an evolution equation
for $u$ with a nonlocal coefficient $H(u)=v$, namely
\begin{equation}
\left\{
\begin{array}{ll}
u_{t} = \Delta u - \nabla \cdot (u \nabla \chi(H(u)) ) +\dfrac{1}{\varepsilon^{2}} f(u) & \quad \quad \mbox{in} \ \O \times (0,T] \\
u(x,0)=u_{0}(x), & \quad \quad x \in \Omega \\
\dfrac{\p u}{\p \nu} =0 & \quad \quad \mbox{on} \ \p \O \times (0,T],
\end{array}
\right.
\end{equation}
where for a given function $u=u(x,t) \in X_{T}$, we define $H(u) = v$
as the first component of the
unique solution $(v,m)$ of the auxiliary problem
\begin{equation}
\label{yeshi}
\left\{
\begin{array}{ll}
v_{t}= -\lambda m v & \quad \quad \mbox{in } \Omega \times (0,T]\\
m_{t}= \alpha \Delta m + u-m & \quad \quad \mbox{in } \Omega \times (0,T]\\
v(x,0)=v_{0}(x), & \quad \quad x \in \Omega \\
m(x,0)=m_{0}(x), & \quad \quad x \in \Omega \\
\dfrac{\p m}{\p \nu} =0 & \quad \quad \mbox{on } \p \O \times (0,T].
\end{array}
\right.
\end{equation}
The functions $v_{0}$ and $m_{0}$ are given and satisfy $1$-$2$.
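The structure of the auxiliary problem \eqref{yeshi} (an ODE for $v$ driven by $m$, and a linear parabolic equation for $m$ driven by $u$) can be illustrated by a crude one-dimensional explicit finite-difference sketch; this is our own illustration with arbitrary parameters, grid, and frozen source $u$, not a scheme from the paper.

```python
# 1-D explicit scheme for the auxiliary system on [0,1]:
#   m_t = alpha m_xx + u - m   (homogeneous Neumann conditions),
#   v_t = -lambda m v,
# with a frozen step-function source u. All choices are illustrative.

import math

alpha, lam = 1.0, 0.5
N, steps = 50, 200
dx, dt = 1.0 / N, 1e-4          # dt * alpha / dx^2 = 0.25 <= 1/2: stable

u = [1.0 if i < N // 2 else 0.0 for i in range(N)]  # frozen cell density
m = [0.0] * N
v = [1.0] * N

for _ in range(steps):
    lap = [0.0] * N
    for i in range(N):
        il, ir = max(i - 1, 0), min(i + 1, N - 1)   # Neumann: mirror ends
        lap[i] = (m[il] - 2 * m[i] + m[ir]) / dx**2
    m = [m[i] + dt * (alpha * lap[i] + u[i] - m[i]) for i in range(N)]
    v = [v[i] * math.exp(-lam * m[i] * dt) for i in range(N)]

# The maximum principle for m and the sign of v are preserved discretely:
assert all(0.0 <= mi <= 1.0 for mi in m)
assert all(0.0 <= vi <= 1.0 for vi in v)
```

Note that $v$ decays fastest where $u$, and hence $m$, is largest, which is the discrete analogue of the monotonicity of $H$ established in Lemma \ref{xinjia}(a).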
\noindent
We give below some a priori estimates on the solution to Problem $(P^\e)$ and state the
related properties of $H$.
\begin{lem}
\label{xinjia}
For $u \in X_{T}$, let $(v, m)$ be the solution of Problem \eqref{yeshi} and let
$H : X_{T} \rightarrow C^{2}( \overline \O_{T})$ be the operator defined by
$H(u)=v$.
Then there exists $C >0$ only depending on $T$ and $\O$ such that
\begin{enumerate}
\item[](a) for all $(u_{1}, u_{2}) \in X_{T}^{2}$ with
$0 \leq u_{1} \leq u_{2}$ in $\O_{T}$, the solution $(v_i,m_i)$ of
Problem \eqref{yeshi} for $i=1,2$ satisfies
$$0 \leq m_{1} \leq m_{2} \mbox{ and } 0 \leq v_{2} \leq v_{1} \mbox{ in } \O_{T}$$
so that the operator $H$ is nonincreasing on $X_T$.
\item[](b) for all $u \in X_{T}$,
$$ ||m||_{C^{1+\alpha,(1+\alpha)/2}(\overline \O_{T})} \leq C C_{0}
\mbox{ and }
\sup_{(x,t) \in \overline \O_{T}} \big{|}\int^{t}_{0} \Delta m(x,s)ds \big{|}
\leq C C_{0}. $$
\item[](c) for all $u \in X_{T}$, the function $v = H(u)$ satisfies
$$||v||_{C^{0}(\overline \O_{T})} \leq C_{0} \quad
\mbox{ and } \quad ||\nabla v||_{C^{0}(\overline \O_{T})}
+ ||\Delta v||_{C^{0}(\overline \O_{T})} \leq CC_{0}^{3}. $$
\end{enumerate}
\end{lem}
\proof{}
To prove property $(a)$, let $ (u_{1}, u_{2}) \in X_{T}^{2}$ with
$0 \leq u_{1} \leq u_{2}$ in $\O_{T}$. Since for $i = 1, 2$
$$(m_{i})_{t} - \alpha \Delta m_{i} + m_{i} = u_{i}\geq 0 \mbox{ in } \O_{T}, $$
with
$$m_{i}|_{t=0} = m_{0}\geq 0 \mbox{ and } \dfrac{\p m_i}{\p \nu} =0 \mbox{ on }
\p \O \times (0,T],$$
we deduce from the standard maximum principle that
$0 \leq m_{1} \leq m_{2}$ in $\O_T$.
\noindent
Next solving the equation $v_{t}= -\lambda m v$ we get that
\begin{align}
\begin{split}
\label{guanyuv}
v_{i}(x,t) = v_{0}(x) e^{-\lambda \int_{0}^{t} m_{i}(x,s) ds}
\end{split}
\end{align}
for all $(x,t) \in \O_{T}$ and $i = 1, 2$,
so that $v_{1} \geq v_{2} \geq 0 $ in $\O_{T}$, which proves that $H$
is nonincreasing on $X_T$.
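The explicit formula \eqref{guanyuv} can be checked pointwise: freezing $x$, the function $v(t)=v_0\,e^{-\lambda\int_0^t m(s)\,ds}$ must satisfy $v_t=-\lambda m v$. The following numerical sketch (with an arbitrarily chosen $m$, $\lambda$ and $v_0$, purely illustrative) confirms this with a centered difference.

```python
# Verify numerically that v(t) = v0 * exp(-lam * int_0^t m(s) ds)
# solves v' = -lam * m * v, for the arbitrary choice m(t) = 1 + t^2.

import math

lam, v0 = 0.7, 2.0
m = lambda t: 1.0 + t * t
M = lambda t: t + t**3 / 3.0            # int_0^t m(s) ds in closed form
v = lambda t: v0 * math.exp(-lam * M(t))

# Compare a centered difference of v with -lam * m * v at t = 0.5.
t, h = 0.5, 1e-5
dv = (v(t + h) - v(t - h)) / (2 * h)
assert abs(dv + lam * m(t) * v(t)) < 1e-6
```

Since $m \geq 0$, the exponent is nonincreasing, which is the elementary reason behind $v_t \leq 0$ and the bound $0 \leq v \leq v_0$ used below.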
\noindent
In order to prove $(b)$ and $(c)$, note that $m$ satisfies
the linear parabolic equation
\begin{equation}
\left\{
\begin{array}{ll}
m_{t} = \alpha \Delta m +u-m & \mbox{ in } \ \O \times (0,T] \\
m(x,0)=m_{0}(x) & x \in \Omega \\
\dfrac{\p m}{\p \nu} =0 & \mbox{ on } \ \p \O \times (0,T]
\end{array}
\right.
\end{equation}
with $0\leq u \leq C_0$ in $\O_T$ and $m_{0} \geq 0$ in $\Omega$. Thus
it follows from the maximum principle and from standard parabolic estimates that
there exists a constant
$C>0$ only depending on $T$ and $\O$ such that
\begin{equation}\label{estm}
0\leq m \leq C_0 \mbox{ in } \O_{T}, \,\,\,\,\,
||m||_{C^{1+\alpha,(1+\alpha)/2}(\overline \O_{T})} \leq C C_{0}.
\end{equation}
In view of \eqref{guanyuv}, $v \geq 0$ and $v_{t} \leq 0$ in $\O_{T}$
so that
\begin{align}
\begin{split}
0 \leq v(x,t) \leq v_{0}(x) \leq C_{0} \mbox{ for all } (x, t) \in \O_{T}.
\end{split}
\end{align}
Since for all $(x,t) \in \O_T$
\begin{align}
\begin{split}
\label{erdba}
\nabla v(x,t)=
\nabla v_{0}(x) e^{-\lambda \int_{0}^{t} m(x,s) ds} -
\lambda v(x,t) \big(\int_{0}^{t} \nabla m(x,s)ds \big),
\end{split}
\end{align}
it follows that there exists $C >0$ such that
\begin{align}
\begin{split}
|\nabla v(x,t)|
\leq & |\nabla v_{0}(x)|+ \lambda v(x,t) |\int_{0}^{t} \nabla m(x,s) ds| \\
\leq & C C_{0}^{2}.
\end{split}
\end{align}
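Here we used that, in view of \eqref{estm}, for all $(x,t) \in \O_{T}$,
$$\Big| \int_{0}^{t} \nabla m(x,s)\, ds \Big|
\leq T\, ||m||_{C^{1+\alpha,(1+\alpha)/2}(\overline \O_{T})} \leq C T C_{0},$$
together with the bound $0 \leq v \leq C_{0}$ and the bound on $\nabla v_{0}$ provided by the hypotheses on the initial data, and renamed the constant $C$.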
Since for all $(x,t) \in \O_T$
\begin{align}
\begin{split}
\Delta v(x,t) = & \Delta v_{0}(x) e^{-\lambda \int_{0}^{t} m(x,s)ds}
- 2 \lambda \nabla v_{0}(x) . \big(\int_{0}^{t} \nabla m(x,s)ds \big)
e^{-\lambda \int_{0}^{t} m(x,s)ds} \\
+ & \lambda^{2} v(x,t)
\big| \int_{0}^{t} \nabla m(x,s)ds \big| ^{2}
- \lambda v(x,t) \big( \int_{0}^{t} \Delta m(x,s)ds \big)
\end{split}
\end{align}
it follows that
\begin{align}
\begin{split}
\label{smyi}
\forall (x,t) \in \O_T,\,\,\,\, |\Delta v(x,t)| \leq & C C_{0}^{3} +
\lambda C_{0} |\int_{0}^{t} \Delta m(x,s)ds|
\end{split}
\end{align}
\noindent
with $C>0$ a suitable constant. \\
\noindent
For any fixed $x \in \O$, we integrate the equation $m_{t} - \alpha \Delta m + m = u$ on $[0,t]$
and obtain that
$$\int_{0}^{t} \Delta m(x,s)ds =
\frac{1}{\alpha} [m(x,t) - m_{0}(x) + \int_{0}^{t} (m(x,s)- u(x,s)) ds]$$
so that in view of \eqref{estm} there exists a constant $C>0$ such that
\begin{equation}
\label{smer}
\forall (x,t) \in \O_T,\,\,\,\, |\int_{0}^{t} \Delta m(x,s)ds|
\leq C C_{0}
\end{equation}
which completes the proof of $(b)$. Moreover in view of \eqref{smyi} and \eqref{smer},
we conclude that there exists $C>0$ such that
$$\forall (x,t) \in \O_T,\,\,\,\, |\Delta v(x,t)| \leq C C_{0}^{3} $$
and obtain the property $(c)$, which completes the proof of Lemma \ref{xinjia}.
\subsection{Existence of a global solution to Problem $(P^{\varepsilon})$}
We prove the existence of a unique solution
$(u^{\varepsilon}, v^{\varepsilon}, m^{\varepsilon})$
to Problem $(P^{\varepsilon})$ on $\O_{T}$ for $\e>0$ small enough.
\begin{lem}
\label{new}
Assume that
$(u_{0}, v_{0}, m_{0})$ satisfy the hypotheses $1$-$2$-$3$-$4$. Then there exists
$\varepsilon_{0} >0$ such that for all $0<\varepsilon<\varepsilon_{0}$,
Problem $(P^{\varepsilon})$ has a unique solution
$(u^{\varepsilon}, v^{\varepsilon}, m^{\varepsilon})$
on $\O \times [0,T]$ for any $T>0$. This solution satisfies $0\leq \ue\leq C_0 $ in $\O_T$.
\end{lem}
The above lemma is similar to Lemma 4.2 in \cite{bhlm} and we just sketch the proof.
It relies on Schauder's fixed point theorem
and on the a priori estimates on Problem $(P^{\varepsilon})$ obtained in
Lemma \ref{xinjia}.
\noindent
First let $T>0$ be arbitrarily fixed and for all $u \in X_{T}$,
let $v= H(u)$ be defined as above. By the estimates of $v$ in Lemma \ref{xinjia},
there exists $ C >0$
such that
\begin{equation}\label{vvv}
0 \leq v \leq C_0,\,\,\,
|\nabla v| +
|\Delta v| \leq C C_0^3 \mbox{ in } \O_T.
\end{equation}
Next let $\tilde{u}$ be the unique solution of
\begin{equation}\label{uhat}
\left\{
\begin{array}{ll}
\tilde{u}_{t} = \Delta \tilde{u} - \nabla \cdot (\tilde{u} \nabla \chi (v)) + \dfrac{1}{\varepsilon^{2}} f(\tilde{u}) & \quad \quad \mbox{in} \ \O \times (0,T] \\
\tilde{u}(x,0)=u_{0}(x) & \quad \quad x \in \Omega \\
\dfrac{\p \tilde{u}}{\p \nu} =0 & \quad \quad \mbox{on} \ \p \O \times (0,T].
\end{array}
\right.
\end{equation}
The key point of the proof is to show that for $0<\e<\e_0$ small enough, we have
$$0 \leq \tilde{u} \leq C_0 \mbox{ in } \O_T.$$
This follows from the fact that $C_0$ is a supersolution for equation (\ref{uhat})
for $\e>0$ small enough. Precisely, using that $f(C_0) <0$
since $C_0>1$ and (\ref{vvv}),
we have that
\begin{eqnarray*}
&&C_0 \Delta (\chi(v)) -
\frac{1}{\e^2} f(C_0) \\
&=&C_0(\chi'(v)\Delta (v) + \chi''(v)|\nabla (v)|^2)-
\frac{1}{\e^2} f(C_0)\\
&\geq& - 2 C_0^4 -\frac{1}{\e^2} f(C_0) \geq 0
\end{eqnarray*}
for $\e>0$ small enough; more precisely, it suffices that
$\varepsilon^{2} \leq -f(C_{0})/(2C_{0}^{4})$, a positive threshold since $f(C_{0})<0$.
Moreover
$\tilde{u} \in C^{\alpha,\alpha/2}(\overline{\O_{T}})$ for some
$\alpha \in (0,1)$. Hence $u \mapsto \tilde{u}$ maps $X_{T}$ into itself
and defines a compact operator. A fixed point of this operator,
obtained by Schauder's theorem,
is then a solution
to Problem $(P^{\varepsilon})$. The uniqueness of the solution follows from the a priori
estimates on Problem $(P^{\varepsilon})$. For the details of the proof, we refer to
\cite{bhlm} and \cite{dibe}.
\subsection{A comparison principle for Problem $(P^{\varepsilon})$}
We first recall the definition of a pair of sub- and super-solutions similar to the one
proposed in \cite{bhlm}.
\begin{df}
Let $(u_{\varepsilon}^{-}, u_{\varepsilon}^{+})$
be two smooth functions with
$ 0 \leq u_{\varepsilon}^{-} \leq u_{\varepsilon}^{+}$ in $\Omega_{T}$
and
$\dfrac{\p u_{\varepsilon}^{-}}{\p \nu} \leq \dfrac{\p u_{\varepsilon}^{+}}{\p \nu} $
on $\p \O \times (0,T)$.
By definition,
$(u_{\varepsilon}^{-}, u_{\varepsilon}^{+})$ is a pair of sub- and super-solutions in
$\Omega_T$ if for any $v=H(u)$, with
$u_{\varepsilon}^{-} \leq u \leq u_{\varepsilon}^{+}$ in $\O_{T}$, we have
$$L_{v}[u_{\varepsilon}^{-}] \leq 0 \leq L_{v}[u_{\varepsilon}^{+}] \quad \mbox{ in }
\O_{T},$$
where the operator $L_{v}$ is defined by
$$L_{v}[\phi] = \phi_{t} - \Delta \phi + \nabla \cdot (\phi \nabla \chi (v) )
- \frac {1}{\varepsilon^{2}} f(\phi).$$
\end{df}
\noindent
Note that, by the proof of Lemma \ref{new}, $(0,C_{0})$ is a pair of sub- and super-solutions
of Problem $(P^{\varepsilon})$.
It is then proved in \cite{bhlm} that the following comparison principle holds.
\begin{pro}
Let a pair of sub- and super-solutions $(u_{\varepsilon}^{-}, u_{\varepsilon}^{+})$
in $\O_{T}$ be given. Assume that
$$\forall x \in \O,\,\,\,
u_{\varepsilon}^{-}(x,0) \leq u_{0}(x) \leq u_{\varepsilon}^{+}(x,0),$$
with $(u_{0}, v_{0}, m_{0})$ satisfying the hypotheses $1$-$2$.
Then there exists a unique solution $(u^{\varepsilon}, \ve, \me)$ of
Problem $(P^{\varepsilon})$
with
$$\forall (x,t) \in \O_{T},\,\,\,
u_{\varepsilon}^{-}(x,t) \leq u^{\varepsilon}(x,t) \leq u_{\varepsilon}^{+}(x,t).$$
\end{pro}
\section{Well-posedness of Problem $(P^0)$}
\setcounter{equation}{0}
We establish here the existence and uniqueness of a smooth solution
to the free boundary Problem $(P^{0})$ locally in time.
\begin{thm}
\label{well}
Let $\Gamma_{0} = \p \O_{0}$, where $\O_{0} \subset \subset \O$ is a $C^{2+\alpha}$
domain with $\alpha \in (0,1)$.
Then there exists a time $T>0$ such that Problem $(P^{0})$
has a unique solution $(v^{0},m^{0}, \Gamma)$ on $[0,T]$ with
$$\Gamma = (\Gamma_{t} \times \{ t \}) _{t \in [0,T]} \in C^{2+\alpha, (2+\alpha)/2}
\mbox{ and } v^{0}|_{\Gamma} \in C^{1+\alpha, (1+\alpha)/2}.$$
\end{thm}
This theorem is similar to Theorem 2.1 in \cite{bhlm}; its proof relies on
a contraction fixed-point argument in suitable H\"older spaces
(see Section 2 in \cite{bhlm}). We show here how it can actually be deduced from the
result established in
Theorem 2.1 in \cite{bhlm} together with
some additional properties that we state
and prove below. \\
First we introduce some notations as in \cite{bhlm}.
We assume that $\Gamma_0$ is parametrized by some smooth $(N-1)$-dimensional
compact manifold ${\cal M}$ without boundary which divides $\R^N$ into two pieces.
We denote by $ \vec{N}(s)$ the outward normal vector to ${\cal M}$
at $s\in {\cal M}$
and define
$$\begin{array}{rll}
X:{\cal M}\times(-L,+L)& \rightarrow & \R^N \\
(s,s_N)&\mapsto& X(s,s_N)
\end{array}
$$
where
$$ X(s,s_N)= s + s_N \vec{N}(s). $$
If $L>0$ is chosen small enough, $X$ is a $C^{\infty}$-diffeomorphism
from ${\cal M}\times(-L,+L)$ onto a tubular neighborhood of ${\cal M}$ that
we denote by ${\cal M}^L$.
We assume that $\Gamma_0 \subset {\cal M}^{L \over 2}$
and is given by
$$\Gamma_0=\{ X(s,s_N),\, s_N=\Lambda_0(s),\, s\in {\cal M} \}$$
and that $\Omega_0$ is the connected component of $\Omega \setminus \Gamma_0$
which contains
$$\{ x =X(s,s_N),\, s_N < \Lambda_0(s), \,
s \in {\cal M} \}.$$
According to the regularity hypothesis on $\Gamma_0$ in
Theorem \ref{well}, $\Lambda_0$ is a $C^{2 + \alpha}$ function with
$$||\Lambda_0||_{C^0({\cal M})} < \frac{L}{2}.$$
Let $T>0$ be a fixed constant that will be chosen later.
We parametrize the interface $\Gamma= (\Gamma_t)_{t \in [0,T]}$ as follows
\begin{equation}
\label{gammat}
\Gamma_t=\{ X(s,s_N),\, s_N=\Lambda(s,t),\, s\in {\cal M} \},
\end{equation}
where $\Lambda: {\cal M} \times [0,T] \rightarrow (-L,+L)$ is a function.
By definition, we will say that $\Gamma$ is $C^{m+\alpha,{m+\alpha \over 2}}$
if the function $\Lambda$ satisfies
$$ \Lambda \in C^{m+\alpha,{m+\alpha \over 2}}({\cal M}\times [0,T]). $$
For any function $v(x,t)$ defined in $\overline{\O_T}$,
we consider the restriction of $v$ and of $\nabla v$ on the interface $\Gamma$ and
we associate to $v$ the functions $w(s,t)$ and $\vec h(s,t)$ defined on
${\cal M}\times [0,T]$ by
\begin{eqnarray}
w(s,t)= v(X(s, \Lambda(s,t)), t), \label{wv}\\
{\vec h}(s,t)= {\nabla v}(X(s, \Lambda(s,t)), t) . \label{h}
\end{eqnarray}
Next we split Problem $(P^{0})$ into two subproblems $(p_{a})$ and $(p_{b})$,
where Problem $(p_{a})$ is given by
\begin{equation}
(p_{a})
\left\{
\begin{array}{ll}
V_{n} = -(N-1) \kappa + \chi^{'}(w){\vec h} \cdot {\vec n}
\mbox{ on } \ \Gamma_{t}= \p \O_{t},\,\, t \in (0, T] \\
\Gamma_{t} |_{t=0} = \Gamma_{0}
\end{array}
\right.
\end{equation}
and Problem $(p_{b})$ is given by
\begin{equation}
(p_{b})
\left\{
\begin{array}{ll}
v_{t}^{0} =-\lambda m^{0}v^{0} & \mbox{in} \ \O \times (0,T] \\
m_{t}^{0}-\alpha \Delta m^{0} + m^{0}= u^{0} & \mbox{in} \ \O \times (0,T] \\
\dfrac{\p m^{0}}{\p \nu} =0 & \mbox{on} \ \p \O \times (0,T]\\
u^{0}(x,t)= \chi_{\O_{t}}(x) = \displaystyle{
\left\{
\begin{array}{ll}
1 \mbox{ in } \ \O_{t}, t \in [0,T] \\
0 \mbox{ in } \ \O \setminus \overline \O_{t}, t \in [0,T]
\end{array}
\right.}
\end{array}
\right.
\end{equation}
Note that the difference between the free boundary problem in \cite{bhlm} and here
concerns Problem $(p_{b})$.
Let us consider
\begin{equation}
\label{M}
\forall (x,t) \in \Omega_T,\,\,\, M(x,t)= \int^{t}_{0} m^{0}(x,s) ds.
\end{equation}
The restrictions
of $M$ and $\nabla M$ on $\Gamma$ are denoted
$a(s,t)$ and ${\vec b}(s,t)$ and defined on ${\cal M} \times [0,T]$ by
\begin{eqnarray}
a(s,t)= M(X(s, \Lambda(s,t)), t), \label{w}\\
{\vec b}(s,t)= {\nabla M}(X(s, \Lambda(s,t)), t) . \label{hb}
\end{eqnarray}
Note that using \eqref{guanyuv} and \eqref{erdba} we have that
$$w(s,t)=v_{0}(X(s, \Lambda(s,t))) e^{-\lambda a(s,t)}$$
and $${\vec h}(s,t)= \nabla v_{0}(X(s, \Lambda(s,t))) e^{-\lambda a(s,t)} -\lambda w(s,t) {\vec b}(s,t), $$
so that $w$ has the same regularity as $a$ and ${\vec h}$ has the same
regularity as ${\vec b}$.
We deduce from Problem $(p_{b})$ that $M$ satisfies
\begin{equation}
\label{youyige}
\left\{
\begin{array}{ll}
-\alpha \Delta M+M=g(x,t) & \quad \quad \mbox{in} \ \O \times (0,T] \\
\dfrac{\p M}{\p \nu} =0 & \quad \quad \mbox{on} \ \p \O \times (0,T],
\end{array}
\right.
\end{equation}
where $$g(x,t)=\int^{t}_{0} u^{0}(x,s)ds + m_{0}(x)- m^{0}(x,t). $$
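Indeed, for fixed $x$, integrating the equation for $m^{0}$ over $[0,t]$ gives
$$m^{0}(x,t) - m_{0}(x) - \alpha \Delta M(x,t) + M(x,t) = \int_{0}^{t} u^{0}(x,s)\, ds,$$
which is precisely the first equation of \eqref{youyige} with the above right-hand side $g$, the Neumann boundary condition being inherited from that of $m^{0}$.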
Problem \eqref{youyige} has also been considered in \cite{bhlm},
but with the right-hand side $g= u^0$.
Here the function $g(x,t)$ is continuous in time, its regularity being that of a time integral of
$u^0$. Thus we can apply Theorem 2.2 in \cite{bhlm}
and obtain (at least) the same regularity for $(a, {\vec b})$
in the case considered here.
\begin{lem}\label{t1}
Let $\Gamma=(\Gamma_t\times \{t\})_{t \in [0,T]}$ be given by \eqref{gammat} with
$$\Lambda \in C^{m + \alpha,{m + \alpha \over 2}}({\cal M}\times [0,T]) $$
for some $m \in \N$, $m \geq 2$ and $\alpha \in (0,1)$.
Let $M$ satisfy \eqref{youyige} and let $a$ and ${\vec b}$
be associated to $M$ by (\ref{w}) and (\ref{hb}) respectively.
Then
$$a \in C^{m + \alpha,{m + \alpha \over 2}}({\cal M}\times [0,T]) $$
and
$${\vec b} \in [C^{m + \alpha',{m + \alpha'\over 2}}({\cal M}\times [0,T])]^N
\mbox{ for all } 0<\alpha'< \alpha. $$
\end{lem}
By the argument in \cite{bhlm}, Problem $(p_{a})$ then defines a mapping
$(w,{\vec h}) \mapsto \Lambda$ and Problem $(p_{b})$
defines a mapping $\Lambda \mapsto (w,{\vec h})$ with the proper
regularity in H\"older
spaces. Therefore the composition of these two
mappings defines a contraction in some closed ball for $T>0$ small enough.
The unique fixed point of this contraction is the solution
to Problem $(P^{0})$ on $[0, T]$. This completes the proof of
Theorem \ref{well}.
\section{Generation of interface}
\setcounter{equation}{0}
In this section we establish the rapid formation of transition layers in a neighborhood
of $\Gamma_{0}$ within a very short time interval of
order $\varepsilon^{2} |\ln \varepsilon|$. The width of the transition layer
around $\Gamma_{0}$ is
of order $\varepsilon$.
After a short time the solution $u^{\varepsilon}$ becomes
close to 1 or 0
except in a small
neighborhood of $\Gamma_{0}$. It reads precisely as follows.
\begin{thm}
\label{zao}
Let $u_{0}$ satisfy the assumptions $1$-$2$-$3$-$4$.
Let $0 <\eta < 1/4$ and define $\mu = f^{'}(1/2) = 1/4$.
Then there exist $\varepsilon_{0} >0$ and $M_{0}>0$ such that,
for all $\varepsilon \in (0, \varepsilon_{0}]$ and $t^{\ast}=\mu^{-1}
\varepsilon^{2} |\ln \varepsilon|$, \\
(a) for all $x \in \O$, we have
$$-\eta \leq u^{\varepsilon} (x, t^{\ast}) \leq 1+\eta ;$$
(b) for all $x \in \O$ such that $|u_{0}(x) - \frac{1}{2}| \geq M_{0} \varepsilon$,
we have
$$\mbox{ if } \quad u_{0}(x) \geq \frac{1}{2} + M_{0} \varepsilon , \quad
\mbox{ then } \quad u^{\varepsilon}(x, t^{\ast}) \geq 1-\eta ,$$
$$\mbox{ if } \quad u_{0}(x) \leq \frac{1}{2} - M_{0} \varepsilon , \quad \mbox{ then } \quad u^{\varepsilon}(x, t^{\ast}) \leq \eta .$$
\end{thm}
\noindent
The above theorem relies on the construction of a suitable pair of
sub- and super-solutions involving the solution of the bistable ODE.
We refer to the proof of Theorem 3.1 in \cite{a}
in the simple case $\delta =0$.
\section{Convergence}
\setcounter{equation}{0}
We split the present section into two parts.
In a first step we establish the convergence
of $\ue$ to $u^0$ and prove Corollary \ref{coru}.
In a second step we prove Theorem \ref{thm1} as well as Theorem \ref{gmi}, Theorem \ref{thm2} and
Corollary \ref{haus}.
\noindent
In what follows, we construct a pair of sub- and super-solution
$u_{\varepsilon}^{\pm}$ for Problem $(P^{\varepsilon})$ in order to control
the function $u^{\varepsilon}$ on $[t^{\ast}, T]$.
By the comparison principle it then follows that,
if $u_{\varepsilon}^{-}(x,0) \leq u^{\varepsilon}(x,t^{\ast})
\leq u_{\varepsilon}^{+}(x,0)$,
then
$u_{\varepsilon}^{-}(x,t) \leq u^{\varepsilon}(x,t+t^{\ast}) \leq u_{\varepsilon}^{+}(x,t)$ for all $(x,t) \in \O_{T}$. As a result, if both $u_{\varepsilon}^{+}$ and $u_{\varepsilon}^{-}$ converge to $u^{0}$, then the solution $u^{\varepsilon}$ also converges to $u^{0}$ at every $(x,t) \in \O_{T} \setminus \Gamma$.
\subsection{Construction of sub- and super-solutions}
Before the construction, we present the definition of the modified
signed distance function which is essential for our construction of
sub- and super-solutions. Let us first define the signed distance function.
\begin{df}
Let $\Gamma = \bigcup _{0 \leq t \leq T} (\Gamma_{t} \times \{ t \})$ be the solution of the limit geometric motion Problem $(P^{0})$. The signed distance function $\tilde{d}(x,t)$ is defined by
\begin{eqnarray}
&& \tilde{d}(x,t)= \displaystyle{
\left\{
\begin{array}{ll}
dist(x, \Gamma_{t}) & \mbox{for} ~x \in \O \setminus \Omega_{t} \\
-dist(x, \Gamma_{t}) & \mbox{for}~ x \in \Omega_{t},
\end{array}
\right.}
\end{eqnarray}
\noindent
where $dist(x,\Gamma_{t})$ is the distance from $x$ to the hypersurface $\Gamma_{t}$ in $\O$. \\
\noindent
Note that $\tilde{d}(x,t) =0$ on $\Gamma$ and that $|\nabla\tilde{d}(x,t)|=1$ in a neighborhood of $\Gamma$.
\end{df}
\noindent
In fact, rather than working with the above signed distance function $\tilde{d}(x,t)$, we need a modified signed distance function $d$ defined as follows.
\begin{df}
Let $d_{0} >0$ be small enough such that $\tilde{d}(x,t)$ is smooth in
$$\{(x,t) \in \overline \O \times [0,T] , |\tilde{d}(x,t)| < 3 d_{0} \}$$
and such that for all $ t \in [0,T]$,
$$dist(\Gamma_{t}, \p \O) > 4 d_{0}.$$
\noindent
We define the modified signed distance function $d(x,t)$ by
$$d(x,t)= \zeta (\tilde{d}(x,t)),$$
where $\zeta (s)$ is a smooth non-decreasing function on $\mathbb{R}$ defined by
\begin{eqnarray}
&& \zeta(s)= \displaystyle{
\left\{
\begin{array}{ll}
s & \mbox{if} ~~|s| \leq 2d_{0} \\
-3d_{0} & \mbox{if} ~~ s \leq -3d_{0} \\
3d_{0} & \mbox{if} ~~ s \geq 3d_{0}.
\end{array}
\right.}
\end{eqnarray}
\end{df}
\noindent
Note that $|\nabla d|=1$ in the region
$\{(x,t) \in \overline \O \times [0,T] \Bigm| |d(x,t)| < 2d_{0} \}$.
It follows that at $x \in \Gamma_t$,
the exterior normal vector is $n(x,t) = \nabla d(x,t)$,
the normal velocity is $V_n(x,t)= -d_t (x,t)$ and the mean curvature
is $\kappa = \frac{1}{N-1}\Delta d(x,t)$. Therefore the motion law on $\Gamma_t$ given
by Problem $(P^0)$ reads
\begin{equation}\label{eqd}
d_t - \Delta d + \nabla d . \nabla \chi(v^0)=0
\mbox{ on }
\Gamma_t=\{ x \in \Omega \Bigm|d(x,t)=0 \}.
\end{equation}
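Let us briefly indicate how \eqref{eqd} follows from the motion law of Problem $(P^{0})$: on $\Gamma_{t}$ we have $V_{n} = -d_{t}$, $(N-1) \kappa = \Delta d$ and ${\vec n} = \nabla d$, so that the law $V_{n} = -(N-1) \kappa + \chi^{'}(v^{0}) \nabla v^{0} \cdot {\vec n}$ becomes
$$-d_{t} = -\Delta d + \chi^{'}(v^{0}) \nabla v^{0} \cdot \nabla d
= -\Delta d + \nabla d \cdot \nabla \chi(v^{0}),$$
which is \eqref{eqd}.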
By Theorem \ref{well}, the interface $\Gamma_{t}$ is of class
$C^{2+\alpha, \frac{2+\alpha}{2}}$ and $v^{0}$ is of class
$C^{1+\alpha^{'}, \frac{1+\alpha^{'}}{2}}$ for some $\alpha, \alpha^{'} \in (0,1)$,
so that all the functions $d_{t}$, $\Delta d$, $\nabla d$ are Lipschitz continuous
near $\Gamma_{t}$ and $\nabla \chi(v^{0})$ is continuous near $\Gamma_{t}$. Therefore
from the mean value theorem applied separately on both sides of $\Gamma_{t}$, it follows
that there exists $N_0>0$ such that
\begin{equation}\label{eqdmv}
\forall (x,t) \in \Omega_T,\,\,\,
|d_t - \Delta d + \nabla d . \nabla \chi(v^0)| \leq N_0 |d(x,t)|.
\end{equation}
Note also that by construction, $\nabla d(x,t) =0$ in a neighborhood of $\p \O$.
\noindent
As in \cite{a}, the sub- and super-solutions $u^{\pm} _{\varepsilon}$ are defined by
\begin{align}
\label{defyi}
\begin{split}
u_{\varepsilon}^{\pm}= U_{0}(\frac{d(x,t) \mp \varepsilon p(t)}{\varepsilon}) \pm q(t),
\end{split}
\end{align}
\noindent
where $U_{0}(z)$ is the unique solution of the stationary problem
\begin{equation}
\label{equyi}
\left\{
\begin{array}{ll}
U_{0}^{''} + f(U_{0}) =0 & \\
U_{0}(- \infty) =1 , U_{0}(0) = \frac{1}{2} , U_{0}( + \infty) = 0
\end{array}
\right.
\end{equation}
\noindent
and
$$p(t) = -e^{-\beta t / \varepsilon^{2}} + e^{Lt} +K $$
$$q(t) = \sigma (\beta e^{-\beta t / \varepsilon^{2}} + \varepsilon^{2} L e^{Lt}) $$
with $L>0$ and $K>1$ to be chosen later. \\
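Observe that $p$ and $q$ are related through
$$\varepsilon^{2} \sigma p'(t) = \varepsilon^{2} \sigma \Big( \frac{\beta}{\varepsilon^{2}}\, e^{-\beta t / \varepsilon^{2}} + L e^{Lt} \Big)
= \sigma \big( \beta e^{-\beta t / \varepsilon^{2}} + \varepsilon^{2} L e^{Lt} \big) = q(t),$$
an identity which is used repeatedly below.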
\noindent
First note that $q(t) = \varepsilon^{2} \sigma p'(t)$; then remark that the unique solution $U_{0}$ of Problem \eqref{equyi} has the following properties.
\begin{lem}
\label{lemyi}
There exist positive constants $C$ and $\lambda$ such that the following estimates hold:
$$0 < U_{0}(z) \leq C e^{-\lambda |z|} ~~\mbox{ for } z \geq 0,$$
$$0 <1- U_{0}(z) \leq C e^{-\lambda |z|} ~~\mbox{ for } z \leq 0.$$
In addition, $U_{0} $ is strictly decreasing and $|U_{0}^{'}(z)| + |U_{0}^{''}(z)| \leq C e^{-\lambda |z|}$ for all $z \in \mathbb{R}$.
\end{lem}
The proof of Lemma \ref{lemyi} is given in \cite{bhlm}. We also note that
$$ u_{\varepsilon}^{-}(x,t) \leq U_{0}(\frac{d(x,t)}{\varepsilon}) \leq u_{\varepsilon}^{+}(x,t) $$
and that $p(t)$ is bounded for all $0 < \varepsilon < \varepsilon_{0}$
and $t \in [0,T]$, while $\lim_{\varepsilon \rightarrow 0} q(t) =0$ for all $t>0$.
Therefore it follows from the definition of $u_\varepsilon ^\pm (x,t)$ that
for all $t \in (0,T]$,
\begin{eqnarray}
\label{uyl}
&& \lim_{\varepsilon \rightarrow 0} u_{\varepsilon}^{\pm}(x,t)=
\chi_{\O_{t}}(x) =\displaystyle{
\left\{
\begin{array}{ll}
1 & \mbox{for all} ~x \in \Omega_{t} \\
0 & \mbox{for all} ~x \in \O \setminus \overline \Omega_{t}
\end{array}
\right.}
\end{eqnarray}
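\noindent
This convergence can be checked directly on the definition \eqref{defyi}: if $x \in \Omega_{t}$, then $d(x,t)<0$ and, since $p$ is bounded,
$$\frac{d(x,t) \mp \varepsilon p(t)}{\varepsilon} \longrightarrow -\infty
\quad \mbox{ as } \varepsilon \rightarrow 0,$$
so that $U_{0}\big(\frac{d \mp \varepsilon p}{\varepsilon}\big) \rightarrow U_{0}(-\infty) = 1$ by \eqref{equyi}, while $q(t) \rightarrow 0$; the case $x \in \O \setminus \overline \Omega_{t}$ is analogous, with limit $U_{0}(+\infty)=0$.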
\noindent
The key result of this section is the following lemma.
\begin{lem}
\label{lem1}
There exist $\beta>0,\sigma>0$ such that for all $K > 1$,
we can find $\varepsilon_0>0$ and $L>0$ such that for any
$\varepsilon \in (0,\varepsilon_0)$, ($u_{\varepsilon}^{-}$, $u_{\varepsilon}^{+}$)
is a pair of sub- and super-solutions for Problem $(P^{\varepsilon})$
in $\overline \O \times [0, T]$.
\end{lem}
\subsection{Proof of Lemma \ref{lem1}}
First note that for all $(x,t) \in \overline \O_{T}$,
$$u_{\varepsilon}^{-}(x,t) \leq U_{0}(\frac{d(x,t)}{\varepsilon})-q(t)
\leq U_{0}(\frac{d(x,t)}{\varepsilon})+q(t) \leq u_{\varepsilon}^{+}(x,t).$$
Next since $\nabla d=0$ in a neighborhood of $\p \O$, we have that
$\dfrac{\p u_{\varepsilon}^{\pm}}{\p \nu} = 0$ on $\p \O \times [0,T]$.
\noindent
Let $v$ be such that $v=H(u)$ with $u_{\varepsilon}^{-} \leq u
\leq u_{\varepsilon}^{+}$ in $\O_{T}$; we show below that
$$L_{v}[u_{\varepsilon}^{-}] \leq 0 \leq L_{v}[u_{\varepsilon}^{+}],$$
where the operator $L_{v}$ is defined by
$$L_{v}[\phi] = \phi_{t} - \Delta \phi + \nabla \cdot (\phi \nabla \chi(v))
- \frac {1}{\varepsilon^{2}} f(\phi).$$
Here we just consider the inequality $L_{v}[u_{\varepsilon}^{+}] \geq 0$,
because the proof of the other inequality $L_{v}[u_{\varepsilon}^{-}] \leq 0$
is obtained by similar arguments.
A direct computation gives us the following terms
$$(u_{\varepsilon}^{+})_{t} = U_{0}^{'} (\frac{d_{t}}{\varepsilon} -p_{t}) +q_{t} ,$$
$$\nabla u_{\varepsilon} ^{+} =U_{0}^{'} \frac{\nabla d}{\varepsilon} ,$$
$$\Delta u_{\varepsilon} ^{+} = U_{0}^{''} \frac{|\nabla d|^{2}}{\varepsilon^{2}}
+ U_{0}^{'} \frac{\Delta d}{\varepsilon} ,$$
where the values of the function $U_{0}$ and of its derivatives are taken at the point
$\dfrac{d(x,t) - \varepsilon p(t)} {\varepsilon}$.
Moreover the bistable function $f$ has the expansion
$$f(u_{\varepsilon}^{+}) = f(U_{0}) +q f^{'}(U_{0}) +
\frac{1}{2} q^{2} f^{''}(\theta) , $$
where $\theta (x,t) $ is a function satisfying
$U_{0} < \theta < u_{\varepsilon} ^{+}$. Hence, combining all the above,
we obtain that
$$ L_{v}[u_{\varepsilon}^{+}] =
(u_{\varepsilon}^{+})_{t} - \Delta u_{\varepsilon}^{+}
+ \nabla u_{\varepsilon}^{+} \cdot \nabla \chi(v)
+ u_{\varepsilon}^{+} \Delta \chi(v) -
\frac{1}{\varepsilon^{2}} f(u_{\varepsilon}^{+})
= E_{1}+E_{2}+ E_{3}+E_{4}$$
where
$$E_{1} = -\frac{1}{\varepsilon^{2}} q[f^{'}(U_{0})
+ \frac{1}{2} q f^{''}(\theta)] - U_{0}^{'} p_{t} + q_{t},$$
$$E_{2} = \frac {U_{0}^{''}}{\varepsilon^{2}} (1- |\nabla d|^{2}),$$
$$E_{3} = \frac{U_{0}^{'}}{\varepsilon} (d_{t} - \Delta d
+ \nabla d \cdot \nabla \chi(v^{0})),$$
$$E_{4} = \frac{U_{0}^{'}}{\varepsilon}
\nabla d \cdot \nabla(\chi(v) - \chi(v^{0})) + u_{\varepsilon}^{+} \Delta \chi(v).$$
\noindent
We first need to present some useful inequalities before estimating
the four terms above; this step is exactly the same as in \cite{a}. \\
Since $f^{'}(0) = f^{'}(1) = -\dfrac{1}{2}$ ,
we can find $0<b<1/2$ and $m>0$ such that
$$ \mbox{ if } U_{0}(z) \in [0,b] \cup [1-b,1]
\mbox{ then } f^{'}(U_{0}(z)) \leq -m. $$
Furthermore, since the region $\{z \in \mathbb{R}, U_{0}(z) \in [b, 1-b] \}$
is compact and $U_{0}^{'} < 0$ on $\mathbb{R}$, there exists a constant $a_{1} >0$
such that
$$\mbox{ if } U_{0} (z) \in [b,1-b] \mbox{ then } U_{0}^{'}(z) \leq -a_{1}. $$
\noindent
Now we define $$F = \sup_{-1 \leq z \leq 2} (|f(z)| + |f^{'}(z)| + |f^{''}(z)|),$$
\begin{align}
\label{ok1}
\beta = \frac {m}{4},
\end{align}
and choose $\sigma$ which satisfies
\begin{align}
\label{ok2}
0 < \sigma < \min (\sigma_{0}, \sigma_{1}, \sigma_{2}) ,
\end{align}
where $\sigma_{0} = \dfrac{a_{1}}{m+F}$, $\sigma_{1}= \dfrac{1}{\beta +1}$, $\sigma_{2}
= \dfrac{4 \beta}{F(\beta+1)}$. Hence we obtain that
$$\forall z \in \mathbb{R}, -U_{0}^{'}(z) - \sigma
f^{'}(U_{0}(z)) \geq 4 \sigma \beta.$$
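\noindent
This last inequality is checked by distinguishing two cases. If $U_{0}(z) \in [0,b] \cup [1-b,1]$, then $f^{'}(U_{0}(z)) \leq -m$, so that $-U_{0}^{'}(z) - \sigma f^{'}(U_{0}(z)) \geq \sigma m = 4 \sigma \beta$. If instead $U_{0}(z) \in [b,1-b]$, then, since $0<\sigma<\sigma_{0}$, $f^{'} \leq F$ and $4\beta = m$,
$$-U_{0}^{'}(z) \geq a_{1} = \sigma_{0}(m+F) \geq \sigma \big( f^{'}(U_{0}(z)) + m \big)
= \sigma f^{'}(U_{0}(z)) + 4 \sigma \beta.$$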
\noindent
Having chosen the appropriate $\beta$ and $\sigma$, let $K>1$
be arbitrary; we next prove that $L_{v}[u_{\varepsilon}^{+}]
\geq 0$ provided that the constants $\varepsilon_{0}>0$ and $L>0$
are appropriately chosen. From now on, we suppose that the following inequality
is satisfied
\begin{align}
\label{inyi}
\varepsilon_{0}^{2} L e^{LT} \leq 1.
\end{align}
\noindent
Then given any $\varepsilon \in (0, \varepsilon_{0})$, since $0<\sigma < \sigma_{1}$,
we have $0 < q(t) < 1$ for all $t \geq 0$. Since $0 < U_{0} < 1$,
it follows that for all $(x,t) \in \overline \O_{T}$
\begin{align}
\label{iner}
-1 < u_{\varepsilon}^{\pm}(x,t) < 2.
\end{align}
\noindent
We now estimate the four terms $E_{1}$, $E_{2}$, $E_{3}$ and $E_{4}$.
The estimates of the terms $E_{1}$, $E_{2}$ and $E_{3}$ are similar
to those in \cite{a} and we obtain that
$$E_{1} \geq \frac{\sigma \beta^{2}}{\varepsilon^{2}} e^{-\beta t / \varepsilon^{2}} + 2 \sigma \beta L e^{Lt} =
\frac{C_{1}}{\varepsilon^{2}} e^{-\beta t / \varepsilon^{2}} + C_{1}^{'} L e^{Lt} ,$$
where $C_{1} = \sigma \beta^{2}$, $C_{1}^{'}= 2 \sigma \beta$ are positive constants.
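\noindent
For completeness, let us sketch the estimate of $E_{1}$, which is carried out in detail in \cite{a}. Since $q = \varepsilon^{2} \sigma p'$, the first-order terms combine into
$$-U_{0}^{'}\, p' - \frac{q}{\varepsilon^{2}}\, f^{'}(U_{0})
= p' \big( -U_{0}^{'} - \sigma f^{'}(U_{0}) \big) \geq 4 \sigma \beta\, p',$$
while, by \eqref{inyi} and the choice $\sigma < \sigma_{2}$,
$$\frac{q^{2}}{2 \varepsilon^{2}}\, |f^{''}(\theta)| \leq \frac{F q}{2} \cdot \frac{q}{\varepsilon^{2}}
\leq \frac{F \sigma (\beta+1)}{2}\, \sigma p' \leq 2 \sigma \beta\, p',$$
so that $E_{1} \geq 2 \sigma \beta p' + q_{t} \geq \dfrac{\sigma \beta^{2}}{\varepsilon^{2}}\, e^{-\beta t / \varepsilon^{2}} + 2 \sigma \beta L e^{Lt}$.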
$$|E_{2}| \leq
\frac{16 C}{ (e \lambda d_{0})^{2}} (1+ ||\nabla d||_{\infty}^{2}) = C_{2} ,$$
where $C$ and $\lambda$ are the constants chosen in Lemma \ref{lemyi},
so that $C_{2}$ is also a positive constant.
\noindent
We remark that the estimate for $E_{2}$ in \cite{a} requires the following assumption:
\begin{align}
\label{leq}
\begin{split}
e^{LT} + K \leq \frac{d_{0}}{2 \varepsilon_{0}} .
\end{split}
\end{align}
\noindent
For $E_{3}$, we use (\ref{eqdmv}) and obtain that
$$|E_{3}| \leq C_{3} (e^{Lt} +K) + C_{3}^{'} ,$$
where $C_{3} = N_{0} C$ and $C_{3}^{'} = \dfrac{N_{0} C}{\lambda}$
with $C$ and $\lambda$ the constants given by Lemma \ref{lemyi}.
\noindent
Then we consider the term $E_{4}$, for which we need estimates
of $\nabla (\chi(v)- \chi(v^{0}))$ and $\Delta \chi(v)$.
These are provided by the following lemma.
\begin{lem}
\label{lemer}
Let $u$ be any function satisfying
$$ u_{\varepsilon}^{-} \leq u \leq u_{\varepsilon}^{+} \mbox{ in } \O_{T}$$
and let $(v, m)$ be the corresponding solution of Problem \eqref{yeshi} with
$v=H(u)$. Then there exists $C >0$ depending on $T$ and $\O$ such that
for all $(x,t) \in \O_{T}$,
\begin{eqnarray}
&&|v(x,t)| + |\nabla v(x,t)| + |\Delta v(x,t)| \leq C \label{v2}\\
&&|\int_{0}^{t}(m-m^{0})(x,s)ds|+|\nabla d(x,t) \cdot \int_{0}^{t} \nabla (m-m^{0})(x,s)ds|
\leq C \varepsilon p(t) \label{mdiff}\\
&&|(v-v^{0})(x,t)| + |\nabla d(x,t) \cdot \nabla(v-v^{0})(x,t)| \leq C
\varepsilon p(t) \label{vdiff}
\end{eqnarray}
where $(v^{0},m^{0})$ is given by the solution of Problem $(P^{0})$.
\end{lem}
\noindent
We postpone the proof of this lemma and first complete
the proof of Lemma \ref{lem1}. We write
\begin{equation}
\label{wusisan}
\nabla d \cdot \nabla(\chi(v)-\chi(v^{0})) = \chi^{'}(v) \nabla d \cdot \nabla(v-v^{0}) + (\chi^{'}(v)-\chi^{'}(v^{0})) \nabla d \cdot \nabla v^{0}.
\end{equation}
Since $v^{0}$ is bounded in $C^{1+\alpha^{'}, \frac{1+\alpha^{'}}{2}}$
for any $\alpha^{'} \in (0,1)$,
there exists $C>0$, such that
$$ ||v^{0}||_{L^{\infty}(\O_{T})} +||\nabla v^{0}||_{L^{\infty}(\O_{T})} \leq C, $$
which combined with \eqref{wusisan}, yields that
\begin{equation}
\label{wuwuling}
|\nabla d \cdot \nabla(\chi(v)-\chi(v^{0}))| \leq
||\chi^{'}||_{\infty} |\nabla d \cdot \nabla(v-v^{0})| + C ||\nabla d||_{\infty}
||\chi^{''}||_{\infty}|v-v^{0}|,
\end{equation}
where the $L^{\infty}$-norms of $\chi^{'}$ and $\chi^{''}$
are considered
on the interval $(-C, C)$. Therefore, since $\chi$ is smooth
and $||\nabla d||_{\infty} $ is bounded,
it follows from \eqref{wuwuling} and \eqref{vdiff} that there exists $C>0$
such that for all $(x,t) \in \O_{T}$,
\begin{equation}
\label{wuwuwu}
|\nabla d \cdot \nabla(\chi(v)-\chi(v^{0}))| \leq C \varepsilon p(t) .
\end{equation}
Moreover, using the smoothness of $\chi$ and the inequality \eqref{v2} of Lemma \ref{lemer},
we obtain that there exists $C'>0$ such that
\begin{equation}
\label{wuliu}
|\Delta \chi(v)| \leq C'.
\end{equation}
Hence, by the above inequalities \eqref{wuwuwu}, \eqref{wuliu}
and the fact that $| u_{\varepsilon}^{+}(x,t)| \leq 2$, we obtain that for all $(x,t) \in \O_{T}$,
$$|E_{4}| \leq \frac{C}{\varepsilon} C \varepsilon p(t) + 2 C'. $$
\noindent
Finally, substituting the expressions for $p$ and $q$,
we obtain that there exist positive constants $C_{4}$, $C_{4}^{'}$
and $C_{4}^{''}$ such that
$$|E_{4}| \leq C_{4} + C_{4}^{'} e^{-\beta t / \varepsilon^{2}} + C_{4}^{''} e^{Lt}.$$
We collect the above four estimates of $E_{1}$, $E_{2}$, $E_{3}$ and $E_{4}$, which yield
\begin{align}
\begin{split}
L_{v} [u_{\varepsilon}^{+}]
\geq & \frac{C_{1}}{\varepsilon^{2}} e^{-\beta t / \varepsilon^{2}}
+ C_{1}^{'} L e^{Lt} - C_{2} \\
& - C_{3} (e^{Lt} +K) - C_{3}^{'} - C_{4}
- C_{4}^{'} e^{-\beta t / \varepsilon^{2}}
- C_{4}^{''} e^{Lt} \\
= & \frac{C_{1} - \varepsilon^{2} C_{4}^{'}}{\varepsilon^{2}}
e^{-\beta t / \varepsilon^{2}} + (LC_{1}^{'} - C_{3} -
C_{4}^{''}) e^{Lt} - C_{6} ,
\end{split}
\end{align}
where $C_{6} = C_{2} + C_{3}K + C_{3}^{'} + C_{4}$ is a positive constant.
\noindent
Now we set
$$L := \frac{1}{T} \ln \frac{d_{0}}{4 \varepsilon_{0}},$$
where $\varepsilon_{0}$ is small enough and satisfies the assumptions \eqref{inyi} and \eqref{leq}, so that $L$ is large enough. For $\varepsilon_{0}$ small enough we also have $\dfrac{C_{1} - \varepsilon^{2} C_{4}^{'}}{\varepsilon^{2}} >0$ and
$$LC_{1}^{'} - C_{3} - C_{4}^{''} \geq \frac{1}{2} L C_{1}^{'},$$
therefore
$$L_{v} [u_{\varepsilon}^{+}] \geq \frac{1}{2} L C_{1}^{'} - C_{6} \geq 0.$$
The proof of Lemma \ref{lem1} is now complete, with the constants $\beta$, $\sigma$ given in \eqref{ok1}, \eqref{ok2}.
\subsection{Proof of Lemma \ref{lemer}}
Lemma \ref{lemer} gives the key estimate and is the analogue of Lemma 4.9 in \cite{bhlm}
and of Lemma 2.1 in \cite{a}.
However the proof is markedly different since the coupling between $u$ and $v$ is given
by a system with an ODE and a parabolic equation versus an elliptic equation in
the two above references.
\noindent
First note that (\ref{v2}) is established exactly as in Lemma \ref{xinjia} (c).
\noindent
Concerning the second inequality (\ref{mdiff}), let us recall the following properties of $U_{0}$
given in \cite{a}.
\begin{lem}
\label{lemold}
For all given $a \in \mathbb{R}$ and $z \in \mathbb{R}$, we have the inequality:
$$|U_{0}(z+a) - \chi_{]-\infty,0]}(z)| \leq C e^{-\lambda |z+a|} + \chi_{]-|a|,|a|]}(z). $$
\end{lem}
Define $w(x,t) = m(x,t) - m^{0}(x,t)$,
then $w$ satisfies
\begin{equation}
\label{side}
\left\{
\begin{array}{ll}
w_{t}-\alpha \Delta w + w = h &\mbox{ in } \O_{T}\\
\dfrac{\p w}{\p \nu}=0 & \mbox{ on } \p\O \times (0,T) \\
w(x,0)=0, & x \in \O
\end{array}
\right.
\end{equation}
with $h = u - u^{0}$ satisfying
$$u_{\varepsilon}^{-} -u^{0} \leq \,h
\leq \, u_{\varepsilon}^{+} -u^{0} \mbox{ in } \Omega_T.$$
From the definition of $u_{\varepsilon}^{\pm}$ in \eqref{defyi} and from
Lemma \ref{lemold}
for $z= \dfrac{d(x,t)}{\varepsilon}$ and $a= \pm p(t)$, we deduce
that for all $(x,t) \in \O_{T}$,
\begin{equation}
\label{ih}
|h(x,t)| \leq
C(e^{- \lambda|d(x,t)/ \varepsilon + p(t)|} + e^{- \lambda|d(x,t)/ \varepsilon -p(t)|})
+ \chi_{\{ |d(x,t)| \leq \varepsilon p(t) \}} + q(t)
\end{equation}
Let us define for all $(x,t) \in \O_{T}$,
$$h_{1}(x,t)= q(t),$$
$$h_{2}(x,t)= C(e^{- \lambda|d(x,t)/ \varepsilon + p(t)|}
+ e^{- \lambda|d(x,t)/ \varepsilon -p(t)|})
\chi_{\{ |d(x,t)| > d_{0} \}}$$
and
$$h_{3}(x,t)=C(e^{- \lambda|d(x,t)/ \varepsilon
+ p(t)|} + e^{- \lambda|d(x,t)/ \varepsilon -p(t)|}) \chi_{\{ |d(x,t)| \leq d_{0} \}}
+ \chi_{\{ |d(x,t)| \leq \varepsilon p(t) \}} $$
and denote by $(w_{i})_{i=1,2,3}$ the solutions
of the three following auxiliary problems
$$
(A_{i})
\left\{
\begin{array}{ll}
(w_i)_t-\alpha \Delta w_i + w_i = h_i &\mbox{ in } \O_{T}\\
\dfrac{\p w_i}{\p \nu}=0 & \mbox{ on } \p\O \times (0,T) \\
w_i(x,0)=0, & x \in \O
\end{array}
\right.
$$
Note that in view of the definition of $p(t)$ and the inequality \eqref{leq}, we have that for all $t \in [0,T]$
\begin{align}
\begin{split}
\label{ip}
0 < K-1 \leq p(t) \leq \frac{d_{0}}{2 \varepsilon_{0}}
\end{split}
\end{align}
so that the function $p$ is bounded away from $0$
for all $t \in [0,T]$. It follows in particular that choosing $\e>0$ small enough,
$$ \varepsilon p(t)\leq d_{0}/ 2 \mbox{ for all }t \in [0,T]$$ so
that $|h| \leq h_{1}+h_{2}+h_{3}$. Thus we deduce from the maximum principle
that for all $x \in \O$ and $t \in [0,T]$,
$$|w(x,t)| \leq w_{1}(x,t) + w_{2}(x,t)+w_{3}(x,t).$$
We now establish estimates for $w_i$, with $i=1,2,3$.
\noindent
\textbf{Problem $(A_{1})$}
\noindent
Set $W_{1}(x,t)=\int_{0}^{t}w_1(x,s)ds$, then $W_{1}$ satisfies
\begin{equation}
\left\{
\begin{array}{ll}
(W_1)_t-\alpha \Delta W_{1} + W_{1} = H_{1} &\mbox{ in } \O_{T}\\
\dfrac{\p W_{1}}{\p \nu}=0 & \mbox{ on } \p\O \times (0,T) \\
W_{1}(x,0)=0, & x \in \O
\end{array}
\right.
\end{equation}
with, since $q(t) = \varepsilon^{2} \sigma p'(t)$,
$$H_{1}(x,t)=
\int_{0}^{t} q(s)ds=
\varepsilon^{2} \sigma (p(t)-p(0))$$
so that by the upper bound in (\ref{ip}) there exists
$C>0$ such that
$$\sup_{(y,s) \in \O \times [0,T]} |H_{1}(y,s)| \leq C \varepsilon
\leq \frac{C}{K-1}\, \varepsilon p(t) \mbox{ for all } t \in [0,T],$$
where the last inequality uses the lower bound $p \geq K-1$ in (\ref{ip}).
Hence by standard parabolic estimates, there exists $C>0$
such that for all $(x,t) \in \O_{T}$,
the solution $W_{1}$ of Problem $(A_{1})$ satisfies
\begin{equation}
\label{W1}
|W_{1}(x,t)| + |\nabla W_{1}(x,t)| \leq C \varepsilon p(t).
\end{equation}
\textbf{Problem $(A_{2})$}
\noindent
By definition of $h_{2}$, using again \eqref{ip},
we obtain that there exists $C'>0$ such that for all $(s,t) \in [0,T]^{2}$
\begin{align}
\begin{split}
\label{liuliuyi}
h_{2}(y,s) \leq & 2 C e^{-\lambda (d_{0}/ \varepsilon -p(s))} \\
\leq & 2C e^{-\lambda d_{0} / 2 \varepsilon} \\
\leq & \frac{4C}{\lambda d_{0} e} \varepsilon \\
\leq & \frac {4C}{ \lambda d_{0} e (K-1)} \varepsilon p(s)\\
\leq & C_{1} \varepsilon p(s) \leq C'\varepsilon p(t) .
\end{split}
\end{align}
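For the reader's convenience, the second and third inequalities in the chain above follow from an elementary calculus bound together with the lower bound in \eqref{ip}; a sketch:

```latex
% Since x e^{-x} \le 1/e for all x > 0, taking x = \lambda d_{0}/(2\varepsilon) gives
e^{-\lambda d_{0}/2\varepsilon} \;\le\; \frac{2}{\lambda d_{0}\, e}\,\varepsilon,
% and, since p(s) \ge K-1 > 0 by \eqref{ip},
\varepsilon \;\le\; \frac{\varepsilon\, p(s)}{K-1}.
```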
Thus by standard parabolic estimates, we obtain that for all $(x,t) \in \O_{T}$
$$|w_{2}(x,t)| + |\nabla w_{2}(x,t)| \leq C' \varepsilon p(t),$$
Defining $W_{2}(x,t)=\int_{0}^{t}w_2(x,s) ds$, this implies that there exists $C>0$ such that for all $(x,t) \in \O_{T}$
\begin{equation}
\label{W2}
|W_{2}(x,t)| + |\nabla W_{2}(x,t)| \leq C \varepsilon p(t).
\end{equation}
\noindent
\textbf{Problem $(A_{3})$}
\noindent
Note that $h_{3}(y,s)$ is supported in $\{ |d(y,s)| \leq d_{0} \}$.
Moreover by linearity
we may suppose that the function $h_{3}$ satisfies
one of the three following assumptions:
$$ (H_{1}) ~~~~ |h_{3}(y,s)| \leq \chi_{\{ |d(y,s)| \leq \varepsilon p(s) \}}$$
$$(H_{2}^{\pm}) ~~~~ |h_{3}(y,s)| \leq e^{-\lambda |d(y,s) / \varepsilon \pm p(s)|} $$
Then, under assumption $(H_{1})$ or $(H_{2}^{\pm})$ respectively,
we define a function $\tilde{h}$ on $\R \times [0,T]$ by
\begin{eqnarray}
&& \tilde{h}(r,s)= \displaystyle{
\left\{
\begin{array}{ll}
\chi_{\{ |r| \leq \varepsilon p(s) \}} \\
e^{-\lambda |r/ \varepsilon \pm p(s)|}
\end{array}
\right.}
\end{eqnarray}
Note that $|h_{3}(y,s)| \leq \tilde{h} (d(y,s) ,s)$,
and under either of the assumptions $(H_{1})$ or $(H_{2}^{\pm})$,
there exists a constant $C >0$ such that for all $(s,t) \in [0,T]^{2}$
\begin{align}
\label{tildh}
\begin{split}
0 \leq \int_{-d_{0}}^{d_{0}} \tilde{h}(r,s)dr \leq C \varepsilon p(t).
\end{split}
\end{align}
Let $\varphi (x,t) = e^{t} w_{3}(x,t)$; then, in view of Problem $(A_{3})$, the function
$\varphi$ satisfies
\begin{equation}
\label{cphi}
\left\{
\begin{array}{ll}
\varphi_{t}-\alpha \Delta \varphi= f &\mbox{ in } \O_{T}\\
\dfrac{\p \varphi}{\p \nu}=0 & \mbox{ on } \p\O \times (0,T)
\end{array}
\right.
\end{equation}
where $f(x,t) = e^{t} h_{3}(x,t)$ and
$\varphi(x,0) = w_{3}(x,0) = 0$ for all $x \in \O$.
We now establish that
there exists a constant $C>0$ such that
\begin{equation}
\forall (x,t) \in \Omega_T,\,\,\, 0\leq \varphi(x,t) \leq C \varepsilon p(t).
\label{fi}
\end{equation}
As in \cite{ahm}, the solution $\varphi(x,t)$ of Problem \eqref{cphi}
can be expressed as
$$\varphi(x,t) = \int_{0}^{t} \int_{|d(y,s)| \leq d_{0}} G(x, y, t-s) f(y,s) dy ds, $$
with $G(x, y,t)$
being the Green function associated to the Neumann boundary value problem
in $\O$ for the parabolic operator $\varphi_{t} -\alpha \Delta \varphi$.
Thus for all $(x,t) \in \Omega_T$,
\begin{equation}
0 \leq \varphi(x,t)
\leq
\int_{0}^{t} \int_{|d(y,s)| \leq d_{0}}
G(x, y, t-s) e^s \tilde{h} (d(y,s) ,s) dy ds
\label{fir}
\end{equation}
Next we recall the following important property of $G$ which is established in
\cite{ahm}.
\noindent
{\bf Lemma 7.6, \cite{ahm}}: \textit{Let $\Gamma$ be a closed hypersurface
in $\Omega$ and denote by $d(x)$
the signed distance function associated with $\Gamma$. Then there exist constants
$C, d_0>0$ such that for any function $\eta(r) \geq 0$ on $\R$, it holds that
$$\int_{|d| \leq d_{0}} G(x, y, t) \eta(d(y)) dy \leq
\frac{C}{\sqrt{t}} \int_{-d_{0}}^{d_{0}} \eta(r)dr \mbox{ for } 0 < t \leq T $$}
\noindent
Moreover as pointed out in \cite{ahm}, the above inequality is uniform with respect to
smooth variations of $\Gamma$ and for $t \in [0,T]$.
Applying this inequality to our case, we deduce that there exists $C>0$ such that
for all $(x,y) \in \Omega^2$
and for all $0\leq s < t \leq T$,
\begin{equation}
\label{intg}
\int_{|d(y,s)| \leq d_{0}} G(x, y, t-s) \tilde{h}(d(y,s),s) dy \leq
\frac{C}{\sqrt{t-s}} \int_{-d_{0}}^{d_{0}} \tilde{h}(r,s) dr.
\end{equation}
In view of (\ref{fir}) and of (\ref{tildh}), it follows that for all $x \in \Omega$
and for all $t \in [0,T]$,
\begin{eqnarray*}
0\leq \varphi(x,t)
&\leq &
C\int_{0}^{t} \int_{|d(y,s)| \leq d_{0}}
G(x, y, t-s) \tilde{h} (d(y,s) ,s) dy ds \\
&\leq &
C'\int_{0}^{t}\frac{1}{\sqrt{t-s}} \int_{-d_{0}}^{d_{0}} \tilde{h}(r,s) dr ds\\
&\leq &
C'\int_{0}^{t}\frac{1}{\sqrt{t-s}} \e p(t) ds \leq 2C' \e p(t) \sqrt{T}
\end{eqnarray*}
which yields inequality (\ref{fi}).
\noindent
Coming back to $w_3$, we deduce that for all $(x,t) \in \Omega_T$,
\begin{equation}
\label{w3}
|w_{3}(x,t)| = |e^{-t} \varphi(x,t)| \leq C \varepsilon p(t).
\end{equation}
Define $W_{3}(x,t) = \int_{0}^{t} w_{3}(x,s)ds$, then it follows that
\begin{equation}
\label{W3}
|W_{3}(x,t)| \leq C \varepsilon p(t)
\end{equation}
for some $C>0$
and for all $(x,t) \in \Omega_T$.
We show now that there exists $C>0$ such that for all $(x,t) \in \O_{T}$,
\begin{equation}
\label{nablaW3}
|\nabla d(x,t).\nabla W_{3}(x,t)| \leq C \varepsilon p(t).
\end{equation}
Time integration of the equation in Problem $(A_3)$ on $[0,t]$ gives
$$w_{3}(x,t) - w_{3}(x,0) - \alpha \Delta W_{3}(x,t) +W_{3}(x,t)
= \int_{0}^{t} h_{3}(x,s)ds.$$
Since $w_3(x,0)=0$,
we obtain the following elliptic problem for any $t \in [0,T]$,
\begin{equation}
\left\{
\begin{array}{ll}
-\alpha \Delta W_3(.,t) +W_3(.,t)= \hat{H}_{3}(.,t) &\mbox{ in } \O\\
\dfrac{\p W_{3}}{\p \nu}(.,t)=0 & \mbox{ on } \p\O
\end{array}
\right.
\end{equation}
where for all $(x,t) \in \O_{T}$,
$$\hat{H}_{3}(x,t) = \int_{0}^{t} h_{3}(x,s)ds - w_{3}(x,t).$$
Let us define for any $t \in [0,T]$ the functions $a(.,t)$ as the solution of
\begin{equation}
\label{a}
\left\{
\begin{array}{ll}
-\alpha \Delta a(.,t) +a(.,t)= h_3(.,t) &\mbox{ in } \O\\
\dfrac{\p a}{\p \nu}(.,t)=0 & \mbox{ on } \p\O
\end{array}
\right.
\end{equation}
and define
$A(x,t) = \int_{0}^{t} a(x,s)ds$. Define similarly $B(.,t)$ as the solution of
\begin{equation}
\label{B}
\left\{
\begin{array}{ll}
-\alpha \Delta B(.,t) +B(.,t)= -w_3(.,t) &\mbox{ in } \O\\
\dfrac{\p B}{\p \nu}(.,t)=0 & \mbox{ on } \p\O
\end{array}
\right.
\end{equation}
so that by linearity
$$\forall (x,t) \in \Omega_T,\,\,\,W_3(x,t) =A(x,t) + B(x,t).$$
It follows from standard elliptic estimates in view of (\ref{w3}) that
$$|B(x,t)|+|\nabla B(x,t)| \leq C \varepsilon p(t).$$
Concerning $a$, note that the elliptic problem
appearing here is the same as for the chemotaxis-growth system studied
in \cite{bhlm} and in \cite{a}, with the right-hand-side satisfying
(\ref{tildh}).
Therefore the results stated in Lemma 4.2 in \cite{a} and in Lemma 4.10 in \cite{bhlm}
apply and prove that there exists a constant $C>0$
such that for all $(x,t) \in \O_{T}$,
$$|a(x,t)|+|\nabla d(x,t).\nabla a(x,t)| \leq C \varepsilon p(t)$$
and consequently
$$|A(x,t)|+|\nabla d(x,t). \nabla A(x,t)| \leq C \varepsilon p(t).$$
This completes
the proof of (\ref{nablaW3}).
In view of (\ref{W1}), (\ref{W2}), (\ref{W3}) and (\ref{nablaW3}),
inequality (\ref{mdiff}) is now established.
\noindent
In order to prove inequality (\ref{vdiff}), note that using (\ref{mdiff})
we obtain that for all $(x,t) \in \O_{T}$,
\begin{eqnarray}
|(v-v^{0})(x,t)| &= & |v_{0}(x) e^{-\lambda \int_{0}^{t} m(x,s)ds}
- v_{0}(x) e^{-\lambda \int_{0}^{t} m^{0}(x,s)ds}| \nonumber \\
&\leq & C|v_{0}(x)| |\int_{0}^{t} (m-m^{0})(x,s)ds|
\leq C' \varepsilon p(t),
\label{vv}
\end{eqnarray}
where $C'>0$ is a suitable constant. \\
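The first inequality in the display above can be justified by the mean value theorem, assuming (as in this model) that $m$ and $m^{0}$ are nonnegative; the factor $\lambda$ is absorbed into the constant $C$:

```latex
% For a, b \ge 0 there is \xi \ge 0 between a and b such that
\left| e^{-a} - e^{-b} \right| \;=\; e^{-\xi}\,|a-b| \;\le\; |a-b|,
\qquad
a = \lambda \int_{0}^{t} m(x,s)\,ds, \quad
b = \lambda \int_{0}^{t} m^{0}(x,s)\,ds,
```

after which \eqref{mdiff} applies to $|a-b|$.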
Next we have similarly that for all $(x,t) \in \O_{T}$,
\begin{eqnarray*}
&& |\nabla d(x,t) \cdot \nabla (v-v^{0})(x,t)|
\leq C|e^{-\lambda \int_{0}^{t} m(x,s)ds}-
e^{-\lambda \int_{0}^{t} m^{0}(x,s)ds}| \\
& &+ C |v(x,t) \nabla d(x,t)\cdot \int_{0}^{t} \nabla m(x,s)ds
-v^{0}(x,t) \nabla d(x,t)\cdot \int_{0}^{t} \nabla m^{0}(x,s)ds| \\
&&\leq C' \varepsilon p(t) +
C |v(x,t)||\nabla d(x,t) \cdot \int_{0}^{t} \nabla (m-m^{0})(x,s)ds| \\
& &+ C|v(x,t)-v^{0}(x,t)||\nabla d(x,t)
\cdot \int_{0}^{t}
\nabla m^{0}(x,s)ds |,
\end{eqnarray*}
where $C,C'>0$ are suitable constants.
Using (\ref{vv}), (\ref{mdiff}) and upper bounds on $|v|$ and $|\nabla m^0|$,
we deduce that (\ref{vdiff}) is satisfied. This completes the proof of Lemma 5.6.
\subsection{Proof of Corollary \ref{coru} and Theorem \ref{thm1}}
The pointwise convergence of $u^{\varepsilon}$ to $u^{0}$
in $\bigcup_{0 < t \leq T} ((\O \setminus \Gamma_{t}) \times \{t\})$
when $\varepsilon \rightarrow 0$ follows from Lemma \ref{lem1} and from (\ref{uyl}).
Next note that $w^{\e}= \me - m^0$ is a solution of Problem \eqref{side}
with the right-hand-side
$\he$ satisfying
$$ |\he(x,t)| \leq h_1(x,t) + h_2(x,t) + h_3(x,t)$$
with $h_i$, $i=1,2,3$ defined as in the proof of Lemma \ref{lemer}.
This shows that there exists $C>0$
such that
$$||\he||_{L^{1}(\O_T)} \leq C \varepsilon.$$
It follows then from standard parabolic estimates and Sobolev inequalities
that for any $\alpha \in (0,1)$ there exist $p \in (1, +\infty)$ and $C>0$
such that
\begin{align}
\begin{split}
||w^\e||_{C^{1+\alpha, (1+\alpha)/2}(\overline \O_{T})}
\leq & C ||u^{\varepsilon}-u^{0}||_{L^{p}(\O_{T})}\\
\leq& C ||\he||_{L^{p}(\O_{T})}
\leq C \varepsilon^{\frac{1}{p}}.
\end{split}
\end{align}
Thus for any $\alpha \in (0,1)$,
$$\lim_{\varepsilon \rightarrow 0}
||m^{\varepsilon}-m^{0}||_{C^{1+\alpha, (1+\alpha)/2}(\overline \O_{T})} =0.$$
The expression of $v^{\varepsilon}$ and $\nabla \ve$ in (\ref{guanyuv}) and (\ref{erdba})
then shows that
$$\lim_{\varepsilon \rightarrow 0}
||v^{\varepsilon}-v^{0}||_{C^{1+\alpha, (1+\alpha)/2}(\overline \O_{T})} =0$$
which completes the proof of Theorem \ref{thm1}.
\subsection{Proof of Theorem \ref{gmi}, Theorem \ref{thm2} and Corollary \ref{haus}}
The proofs are exactly the same as the proofs
of Theorem 1.3, Theorem 1.5 and Corollary 1.6 in \cite{a}, respectively;
we omit the details here.
\section{Introduction}
Relating the diameter of a polyhedron to its dimension and its number of facets is a classical topic in optimization. The {\em combinatorial diameter} of a polyhedron is the maximum length of a shortest path between any two vertices in the graph (or $1$-skeleton) of the polyhedron. The famous Hirsch conjecture \cite{d-63} claimed a bound of $f-d$ on the combinatorial diameter of any $d$-dimensional polyhedron with $f$ facets. It was first disproved for unbounded polyhedra \cite{kw-67} and in a stronger setting requiring monotone walks \cite{ToddExample} and only disproven much later for polytopes \cite{msw-15,s-11}, i.e., bounded polyhedra.
Today, the arguably most important open question in the field is the {\em polynomial Hirsch conjecture}, which asks whether there is a polynomial bound on the diameter in terms of $f$ and $d$. Note that the existence of a strongly polynomial pivot rule for the Simplex method would require this conjecture to be true. The same holds for a polynomial version of the {\em monotone Hirsch conjecture}, which concerns edge walks that are strictly decreasing and lead to a minimal vertex for some linear objective function.
To approach these long-standing questions, a number of abstractions and generalizations of edge walks on the $1$-skeleton have been introduced (see, e.g., \cite{cs-22,ContinuousHirsch,ehrr-10,KimAbstraction,ks-10,s-13} and references therein). We focus on the so-called {\em circuit diameter} of polyhedra introduced in \cite{bfh-14}. {\em Circuits} are a classical topic in oriented matroid theory \cite{oxley-06}, and correspond to the elementary vectors as introduced in \cite{r-69}. In this work, we study whether the original counterexamples to the Hirsch conjecture and the monotone Hirsch conjecture can be transferred to the circuit setting. We begin with some necessary background in Section \ref{sec:conjecture} and explain our contributions in Section \ref{sec:contributions}.
\subsection{Circuit Diameters and the Circuit Diameter Conjecture}\label{sec:conjecture}
We follow \cite{bv-17,bv-22,dknv-22,dhl-15} for formal definitions and the important concepts.
\begin{definition}[Circuits]
For a rational polyhedron $P = \{ \mathbf{x} \in \mathbb{R}^n \colon A \mathbf{x} = \mathbf{b}, B \mathbf{x} \leq \mathbf{d} \}$, the set of circuits of $P$, denoted $\mathcal{C}(A,B)$, consists of all vectors $\mathbf{g} \in \ker(A) \setminus \{ \mathbf{0} \}$, normalized to have coprime integer components, for which $B \mathbf{g}$ is support-minimal in the set $\{ B \mathbf{x} \colon \mathbf{x} \in \ker(A) \setminus \{ \mathbf{0} \}\}$.
\end{definition}
The set of circuits consists of all potential edge directions of the polyhedron as the right hand sides $\mathbf{b}$ and $\mathbf{d}$ vary \cite{g-75}. In particular, it contains the set of all actual edge directions. Thus, a {\em circuit walk}, referring to a sequence of maximal steps along circuits, is a direct generalization of an edge walk.
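To make the definition concrete, here is a small computational sketch (our own illustration in Python; the function name is hypothetical) that recovers the circuits of a full-dimensional polytope in $\mathbb{R}^2$ with no equality constraints: in this case every circuit is orthogonal to some row of $B$, and support-minimality of $B\mathbf{g}$ can be checked directly.

```python
from math import gcd

def circuits_2d(B):
    """Circuits of P = {x in R^2 : B x <= d} (no equality constraints):
    nonzero integer vectors g with coprime entries such that B g is
    support-minimal. In R^2 every such g is orthogonal to a row of B."""
    cands = set()
    for (b1, b2) in B:
        if (b1, b2) == (0, 0):
            continue
        x, y = -b2, b1                       # direction orthogonal to the row
        k = gcd(abs(x), abs(y))
        cands.add((x // k, y // k))          # normalize to coprime entries
        cands.add((-x // k, -y // k))        # circuits come in +/- pairs
    def support(g):
        return frozenset(i for i, (b1, b2) in enumerate(B)
                         if b1 * g[0] + b2 * g[1] != 0)
    # keep only the directions whose image under B has minimal support
    return {g for g in cands
            if not any(support(h) < support(g) for h in cands if h != g)}

# Unit square: the circuits are exactly the four axis directions,
# which here coincide with the edge directions.
B = [(1, 0), (-1, 0), (0, 1), (0, -1)]
assert circuits_2d(B) == {(1, 0), (-1, 0), (0, 1), (0, -1)}
```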
\begin{definition}[Circuit Walk]\label{def:circuitwalk}
Let $P = \{ \mathbf{x} \in \mathbb{R}^n \colon A \mathbf{x} = \mathbf{b}, B \mathbf{x} \leq \mathbf{d} \}$ be a polyhedron. For two vertices $\mathbf{v}_1$ and $\mathbf{v}_2$ of $P$, we call a sequence $\mathbf{v}_1 = \mathbf{y}_0,\dots,\mathbf{y}_k = \mathbf{v}_2$ a circuit walk of length $k$ from $\mathbf{v}_1$ to $\mathbf{v}_2$ in $P$ if, for all $i = 1,\dots,k$,
\begin{myenumerate}
\item $\mathbf{y}_i \in P$,
\item $\mathbf{y}_{i} = \mathbf{y}_{i-1} +\alpha_i \mathbf{g}_i$ for some $\mathbf{g}_i \in \mathcal{C}(A,B)$ and $\alpha_i > 0$, and
\item $\mathbf{y}_{i-1} + \alpha \mathbf{g}_i \notin P$ for all $\alpha > \alpha_i$.
\label{def:circuitwalk-3}
\end{myenumerate}
\end{definition}
We define the {\em circuit diameter} of a polyhedron $P$ as the maximum length of a shortest circuit walk between any pair of vertices of $P$. Clearly, the circuit diameter is a lower bound on the combinatorial diameter of $P$. Note that, unlike edge walks, circuit walks are not necessarily \emph{reversible}; the number of steps required to walk from $\mathbf{v}_1$ to $\mathbf{v}_2$ may not be the same as from $\mathbf{v}_2$ to $\mathbf{v}_1$. We use $\Delta(f,d)$ to denote the maximum circuit diameter of any $d$-dimensional polyhedron with $f$ facets. The circuit analogue of the Hirsch conjecture, the {\em circuit diameter conjecture} \cite{bfh-14}, asks whether $\Delta(f,d)\leq f-d$ and is open. So is its monotone variant:
we call a circuit walk $\mathbf{y}_0,\dots,\mathbf{y}_k$ {\em monotone} with respect to a linear objective function $\mathbf{c}$ if the sequence $(\mathbf{c}^\top \mathbf{y}_i)_{i=0,\dots,k}$ is strictly decreasing.
The {\em monotone circuit diameter} of a polyhedron $P$ is defined as the maximum length of a shortest monotone circuit walk from any vertex of $P$ to a vertex minimizer of $\mathbf{c}$ across all possible choices of objective function $\mathbf{c}$.
Circuit diameters providing lower bounds on combinatorial diameters is just one of several reasons for interest in the study of the circuit analogues of the Hirsch conjecture and its variants. A resolution of the circuit diameter conjecture would reveal some information as to {\em why} the Hirsch bound of $f-d$ is violated in the combinatorial setting \cite{bdf-16,bsy-18}. More specifically, an affirmative answer to the circuit diameter conjecture implies that it is the restriction from circuit to edge steps that causes the violation. On the other hand, if not even the circuit diameter satisfies the Hirsch bound, the reason for this would be the maximality of steps in \cref{def:circuitwalk}\cref{def:circuitwalk-3}: if the step lengths are not required to be maximal, the so-called {\em conformal sum property} \cite{bk-84,r-69,z-95} guarantees the existence of a walk of at most $f-d$ circuit steps between any pair of vertices (see also \cref{sec:prelim}).
Finally, circuit diameters are intimately related to the efficiency of {\em circuit augmentation schemes} to solve linear programs \cite{bv-19b,bv-22,dhl-15,env-21,gdl-14,gdl-15}.
Despite results on circuit diameters for some polyhedra in combinatorial optimization (see, e.g., \cite{bfh-16,kps-17}), and general upper bounds involving the so-called circuit imbalance \cite{dknv-22,env-21} or the input bit-size \cite{dks-22}, not much is known about the potential validity of the circuit diameter conjecture. It was shown in \cite{bsy-18} that $\Delta (8, 4) = 4$. In particular, this holds for the Klee-Walkup polyhedron in the original counterexample to the unbounded Hirsch conjecture \cite{kw-67}: an unbounded $4$-dimensional polyhedron with $8$ facets and combinatorial diameter $5 >8-4$, but circuit diameter at most $4$ \cite{sy-15}. The question that motivates our work is whether any of the other known counterexamples to variants of the Hirsch conjecture may be counterexamples to the respective circuit analogues.
\subsection{Contributions}\label{sec:contributions}
We study four polytopes -- more precisely, so-called {\em spindles} -- that appear in the construction of (monotone) Hirsch counterexamples: the $5$-dimensional spindles $S^{48}_5$ from \cite{s-11} and $S^{25}_5$, $S^{28}_5$ from \cite{msw-15} (named for their number of facets $f=48$ or $f=25,28$, respectively) that are the basis for a construction of counterexamples to the bounded Hirsch conjecture, as well as the $4$-dimensional spindle $M_4$ from \cite{ToddExample} underlying the original counterexample for the monotone Hirsch conjecture. As our main contribution, we show that neither of these polytopes can lead to counterexamples for the (monotone) circuit diameter conjecture. We do so through a combination of theoretical arguments and computations.
Recall that a spindle is a polytope with two distinguished vertices $\mathbf{u}$ and $\mathbf{v}$ such that each facet is incident to either $\mathbf{u}$ or $\mathbf{v}$. We call $\mathbf{u}$ and $\mathbf{v}$ the \emph{apices} of the spindle. The {\em length} of a spindle denotes the combinatorial distance of $\mathbf{u}$ and $\mathbf{v}$. A spindle can be defined as the intersection of two pointed cones emanating from $\mathbf{u}$ and $\mathbf{v}$ such that each apex is in the strict interior of the opposite cone.
We are interested in bounding the {\em circuit length}, referring to the maximum length of a shortest circuit walk from one apex of the spindle to the other one. In \cref{sec:prelim}, we define sign-compatible circuit walks, bound their length, and prove sufficient conditions for their existence on spindles in \cref{thm:spindle-containment}. We then apply these results to the spindles $S^{25}_5,S^{28}_5,S^{48}_5$ and $M_4$ in \cref{sec:counterexamples}.
The bounded Hirsch counterexamples in \cite{msw-15,s-11} are constructed from highly degenerate $5$-dimensional spindles $S^{25}_5,S^{28}_5,S^{48}_5$ of length $6$. At the core of Santos' construction is the following observation: from a $d$-dimensional spindle with $f$ facets and length greater than $d$, one can obtain a $(d{+}1)$-dimensional spindle with $f+1$ facets which has length greater than $d+1$. By doing this $f-2d$ times, one obtains an $(f{-}d)$-dimensional spindle with $2f-2d$ facets whose length exceeds $f-d$. Santos applied this construction to $S^{48}_5$, leading to a $43$-dimensional spindle with $86$ facets and length at least $44$. The fact that $S^{28}_5$ and $S^{25}_5$ have fewer facets leads to counterexamples in lower dimensions $23$ and $20$, respectively.
In \cref{sec:Santos}, we prove that $S^{25}_5,S^{28}_5$, and $S^{48}_5$ have circuit length at most $5$. In doing so, we show that the above construction does not lead to a counterexample for polytopes in the circuit setting (\cref{cor:santos-width,thm:weibel-length}).
In \cref{sec:Todd}, we show that Todd's counterexample to the monotone Hirsch conjecture \cite{ToddExample} is not a counterexample to the monotone circuit diameter conjecture. The Todd polytope $M_4$ is a $4$-dimensional spindle with $8$ facets. Todd found a linear objective function such that any monotone path from one apex to the other is of length at least $5$. We show in \cref{thm:todd} that $M_4$ has monotone circuit diameter $4$. To do this, we study all orientations of the graph of $M_{4}$ induced by linear objective functions, finding that only five contradict the monotone Hirsch bound for the combinatorial diameter.
One of the main challenges in studying circuit diameters is the fact that the circuit diameter of a polyhedron, unlike the combinatorial diameter, depends on its geometry. In particular, there can be two realizations of the same polyhedron with different circuit diameters (see, for example, \cite{sy-15b}). We stress that our results in \cref{sec:counterexamples} only apply to the specific realizations of the spindles in the literature (and some careful, mild perturbations). It will remain open whether {\em all} realizations of these polytopes satisfy the circuit diameter bound.
\section{Preliminaries}\label{sec:prelim}
Our main tool for analyzing the spindles in \cref{sec:counterexamples} is the concept of \emph{sign-compatible} circuit walks.
Two vectors $\mathbf{x},\mathbf{y} \in \mathbb{R}^d$ are \emph{sign-compatible} if $\mathbf{x}_i \mathbf{y}_i \ge 0$ for all $i=1,\dots,d$. Let $P = \{ \mathbf{x} \in \mathbb{R}^d \colon B \mathbf{x} \le \mathbf{d} \}$ be a polyhedron for a matrix $B \in \mathbb{R}^{m \times d}$ with rows $\mathbf{b}_i \in \mathbb{R}^d$ for $i=1,\dots,m$.
We call a circuit walk on $P$ with steps $\mathbf{g}_j$ and step lengths $\alpha_j$ for $j=1,\dots,k$ a \emph{sign-compatible circuit walk} if all $B \mathbf{g}_j$ for $j=1,\dots,k$ are pairwise sign-compatible.
Such walks are special cases of conformal sums of circuits, which correspond to sign-compatible circuit walks without the maximal step requirement in \cref{def:circuitwalk}\cref{def:circuitwalk-3}. In this weaker setting, the well-known \emph{conformal sum property} \cite{bk-84,r-69,z-95} guarantees that for any given pair of vertices $\mathbf{u},\mathbf{v}$ of a $d$-dimensional polyhedron, their difference $\mathbf{v}-\mathbf{u}$ can be written as a conformal sum of at most $d$ circuits. This contrasts with the situation for sign-compatible circuit walks (with maximal steps), which may not exist at all \cite{bdf-16}. When they do, however, their length is at most $d$: note that once a sign-compatible walk enters a new facet, it may not leave it again. Since each step of the walk is maximal and must therefore enter a new facet, the number of steps is at most $d$, and thus satisfies the bound $f-d\geq d$ for a spindle.
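As a mechanical illustration of the definition (a Python sketch of our own, not part of the referenced works), sign-compatibility of a walk can be verified directly from the images $B\mathbf{g}_j$ of its steps:

```python
def sign_compatible(x, y):
    """Two vectors are sign-compatible if no coordinate carries
    strictly opposite signs."""
    return all(a * b >= 0 for a, b in zip(x, y))

def walk_is_sign_compatible(B, steps):
    """Check that the images B g_j of the circuit steps g_1, ..., g_k
    are pairwise sign-compatible, as required in the text."""
    imgs = [[sum(b * g for b, g in zip(row, step)) for row in B]
            for step in steps]
    return all(sign_compatible(imgs[i], imgs[j])
               for i in range(len(imgs)) for j in range(i + 1, len(imgs)))

# Unit square, walking from (0,0) to (1,1): the two axis steps are
# sign-compatible, whereas a step and its reverse are not.
B = [(1, 0), (-1, 0), (0, 1), (0, -1)]
assert walk_is_sign_compatible(B, [(1, 0), (0, 1)])
assert not walk_is_sign_compatible(B, [(1, 0), (-1, 0)])
```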
Moreover, sign-compatible circuit walks are monotone for any linear objective function that is uniquely minimized at the ending vertex of the walk. To see this, let $P$ be a polyhedron given by $P = \{ \mathbf{x} \in \mathbb{R}^d \colon B \mathbf{x} \le \mathbf{d} \}$. For the sake of simplicity, we only consider full-dimensional polyhedra here. We define the hyperplane arrangement $\mathcal{H}(B) = \bigcup_{i=1}^{m} \{\mathbf{x} \in \mathbb{R}^d \colon \mathbf{b}_i^\top \mathbf{x} = 0\}$. Following \cite{bv-17}, we call this arrangement the {\em elementary arrangement} of $P$.
Since the set of circuits of $P$ consists precisely of the (normalized) directions of the extreme rays of $\mathcal{H}(B)$ (see \cite{bv-17}), we obtain the following equivalent characterization of sign-compatibility:
\begin{lemma} \label{prop:sign-comp-char}
Let $P = \{ \mathbf{x} \in \mathbb{R}^d \colon B \mathbf{x} \le \mathbf{d} \}$ be a polyhedron in $\mathbb{R}^d$ and let $\mathbf{u}$ and $\mathbf{v}$ be two vertices of $P$. Denote by $C$ the minimal face of the elementary arrangement $\mathcal{H}(B)$ containing $\mathbf{v}-\mathbf{u}$.
For a circuit walk $\mathbf{u} = \mathbf{y}_0,\dots,\mathbf{y}_k = \mathbf{v}$ from $\mathbf{u}$ to $\mathbf{v}$ in $P$ with steps $\mathbf{g}_j \in \mathcal{C}(B)$ for $j=1,\dots,k$, the following statements are equivalent:
\begin{myenumerate}
\item The walk $\mathbf{y}_0,\dots,\mathbf{y}_k$ is sign-compatible. \label{prop:sign-comp-char-1}
\item $\mathbf{g}_j \in C$ for all $j=1,\dots,k$. \label{prop:sign-comp-char-2}
\item The walk $\mathbf{y}_0,\dots,\mathbf{y}_k$ is monotone for any linear objective function $\mathbf{c} \in \mathbb{R}^d$ uniquely minimized over $-C$ at the origin $\mathbf{0}$. \label{prop:sign-comp-char-3}
\end{myenumerate}
\end{lemma}
\begin{proof}
To see that \cref{prop:sign-comp-char-1} and \cref{prop:sign-comp-char-2} are equivalent, note that the set of all vectors in $\mathbb{R}^d$ whose products with $B$ are pairwise sign-compatible and sign-compatible with $B(\mathbf{v}-\mathbf{u})$ is a polyhedral cone (see \cite{bdf-16}) and coincides with the minimal face of $\mathcal{H}(B)$ containing $\mathbf{v}-\mathbf{u}$.
Let us now prove the equivalence of \cref{prop:sign-comp-char-2} and \cref{prop:sign-comp-char-3}.
From standard polyhedral theory, $\mathbf{c}$ is uniquely minimized over $-C$ at $\mathbf{0}$ if and only if $-\mathbf{c}$ is in the relative interior of the polar cone of $-C$, meaning that $\mathbf{c}^\top\mathbf{x} < 0$ for all $\mathbf{x} \in C \setminus \{\mathbf{0}\}$. Hence, if $\mathbf{g}_{j} \in C$ for all $j = 1, \dots, k$, then $\mathbf{c}^{\top} \mathbf{g}_{j} < 0$ for all such $j$, meaning that the path is monotone for all choices of $\mathbf{c}$. Thus, \cref{prop:sign-comp-char-2} implies \cref{prop:sign-comp-char-3}. For the other direction, recall that polar duality is an involution, so $\mathbf{x} \in C$ if and only if $\mathbf{c}^\top \mathbf{x} < 0$ for all $\mathbf{c}$ uniquely minimized over $-C$ at $\mathbf{0}$. Thus, if the walk is monotone for all linear objective functions uniquely minimized over $-C$ at $\mathbf{0}$, each step $\mathbf{g}_{j}$ must be contained in $C$, meaning that \cref{prop:sign-comp-char-3} implies \cref{prop:sign-comp-char-2}.
\qed
\end{proof}
A key ingredient to our proofs is that we are able to guarantee the existence of sign-compatible circuit walks on spindles satisfying some restrictions. Recall that a spindle is the intersection of two pointed cones, one emanating from each apex.
We make the following key observation: one can always find sign-compatible walks between the two apices of a spindle formed by repeating the same cone twice.
\begin{lemma} \label{lem:spindle-same-cone}
Let $P \subset \mathbb{R}^d$ be a spindle with apices $\mathbf{u}$ and $\mathbf{v}$, given by $(C + \mathbf{u}) \cap (-C + \mathbf{v})$ for a pointed cone $C$ with a unique vertex at the origin $\mathbf{0}$. Then there is a sign-compatible circuit walk of length at most $d$ from $\mathbf{u}$ to $\mathbf{v}$ in $P$.
\end{lemma}
\begin{proof}
We proceed by induction on $d$. If $d=1$, then $P$ is an edge with endpoints $\mathbf{u}$ and $\mathbf{v}$ and the statement is clear.
Now suppose that $d \ge 2$.
Since $C$ is a region of the elementary arrangement of $P$ and $\mathbf{v}-\mathbf{u}$ is in the interior of $C$, any circuit walk from $\mathbf{u}$ to $\mathbf{v}$ that only walks along directions of extreme rays of $C$ will be sign-compatible by \cref{prop:sign-comp-char}. We construct such a walk as follows.
Starting at $\mathbf{u}$, we walk along any edge of $P$ incident with $\mathbf{u}$ to some adjacent vertex $\mathbf{u}'$. Then $\mathbf{u}'$ and $\mathbf{v}$ are contained in a common facet $F$ of $P$, which has to be of the form $F = P \cap (-C' + \mathbf{v})$ for some facet $C'$ of $C$. Now consider the spindle $P' = (C' + \mathbf{u}') \cap (-C' + \mathbf{v})$. Since $C' + \mathbf{u}' \subset C + \mathbf{u}$ and $F = (C + \mathbf{u}) \cap (-C' + \mathbf{v})$, we have that $P' \subset F$. By the induction hypothesis and \cref{prop:sign-comp-char}, there is a circuit walk on $P'$ of length at most $d-1$ from $\mathbf{u}'$ to $\mathbf{v}$ with each step being in the direction of one of the extreme rays of $C'$ (and, thus, of $C$). Since each direction followed is in $C'$ and each step is maximal in $P'$, each step length must therefore be bounded by a facet-defining inequality for the cone $-C' + \mathbf{v}$, which is a facet of $-C + \mathbf{v}$ by hypothesis. So each step must also be maximal in $P$. Thus, appending this walk to the first edge step from $\mathbf{u}$ to $\mathbf{u'}$ yields a sign-compatible circuit walk from $\mathbf{u}$ to $\mathbf{v}$. \qed
\end{proof}
We are now ready to prove the main result of this section.
\begin{theorem} \label{thm:spindle-containment}
Let $P \subset \mathbb{R}^d$ be a spindle with apices $\mathbf{u}$ and $\mathbf{v}$, given by $(C + \mathbf{u}) \cap (-D + \mathbf{v})$ for two pointed cones $C$ and $D$ with a unique vertex at the origin $\mathbf{0}$. Suppose that $D \subseteq C$. Then there is a sign-compatible circuit walk of length at most $d$ from $\mathbf{u}$ to $\mathbf{v}$ in $P$.
For any linear objective function $\mathbf{c} \in \mathbb{R}^d$ uniquely minimized over $P$ at $\mathbf{v}$, this walk is monotone.
\end{theorem}
\begin{proof}
Define $P' = (D + \mathbf{u}) \cap (-D + \mathbf{v})$. Since $P$ is a spindle, $\mathbf{v}-\mathbf{u}$ is in the interior of $C \cap D = D$ by hypothesis. It follows then that $P'$ is a spindle with apices $\mathbf{u}$ and $\mathbf{v}$. Since $D \subseteq C$, we further have that $P' \subseteq P$. By \cref{lem:spindle-same-cone}, there is a sign-compatible circuit walk of length at most $d$ from $\mathbf{u}$ to $\mathbf{v}$ in $P'$. Each step walks along the direction of some extreme ray of $D$ by \cref{prop:sign-comp-char}. Therefore, the only facet-defining inequalities of $P'$ that can become tight at each step are facet-defining inequalities for the cone $-D + \mathbf{v}$, which are facet-defining for $P$ as well. So each step is maximal in $P$ and the walk is therefore a circuit walk in $P$.
Sign-compatibility for $P$ follows from \cref{prop:sign-comp-char} by noting that, since $D \subseteq C$, $D$ is a region of the elementary arrangement which contains $\mathbf{v}-\mathbf{u}$ in its interior. For monotonicity, note that any linear objective function $\mathbf{c}$ for which $\mathbf{v}$ is the unique minimizer over $P$ is uniquely minimized at $\mathbf{0}$ over $P - \mathbf{v} \subset -D$ and, hence, also over $-D$. By \cref{prop:sign-comp-char}, the walk must therefore be monotone for any such $\mathbf{c}$. \qed
\end{proof}
\section{Circuit Walks on Hirsch Counterexamples}\label{sec:counterexamples}
We are now ready to discuss the circuit lengths of spindles $S^{48}_5$, $S^{28}_5$, and $S^{25}_5$ and the monotone circuit diameter of $M_4$. In Section \ref{sec:Santos}, we turn to the spindles used in the disproof of the bounded Hirsch conjecture. In Section \ref{sec:Todd}, we turn to the spindle $M_4$ used in the disproof of the monotone Hirsch conjecture.
\subsection{Bounded Hirsch Counterexamples}\label{sec:Santos}
Santos' original counterexample in \cite{s-11} is constructed from a $5$-dimensional spindle with $48$ facets and length $6$. We denote this spindle, as defined in \cite[Theorem 3.1]{s-11}, by $S_5^{48}$. It is given by the description $S_5^{48} = \{ \mathbf{x} \in \mathbb{R}^5 \colon A^+ \mathbf{x} \le \mathbf{1}, A^- \mathbf{x} \le \mathbf{1} \}$ where $\mathbf{1}$ denotes the all-one vector and $A^+$ and $A^-$ are the matrices
\[
A^+ = \begin{pmatrix}
1 & \pm 18 & 0 & 0 & 0 \\
1 & 0 & \pm 18 & 0 & 0 \\
1 & 0 & 0 & \pm 45 & 0 \\
1 & 0 & 0 & 0 & \pm 45 \\
1 & \pm 15 & \pm 15 & 0 & 0 \\
1 & 0 & 0 & \pm 30 & \pm 30 \\
1 & 0 & \pm 10 & \pm 40 & 0 \\
1 & \pm 10 & 0 & 0 & \pm 40
\end{pmatrix}
%
\text{ and }
%
A^- = \begin{pmatrix}
-1 & 0 & 0 & 0 & \pm 18 \\
-1 & 0 & 0 & \pm 18 & 0 \\
-1 & \pm 45 & 0 & 0 & 0 \\
-1 & 0 & \pm 45 & 0 & 0 \\
-1 & 0 & 0 & \pm 15 & \pm 15 \\
-1 & \pm 30 & \pm 30 & 0 & 0 \\
-1 & \pm 40 & 0 & \pm 10 & 0 \\
-1 & 0 & \pm 40 & 0 & \pm 10
\end{pmatrix}
\]
with $24$ rows each. The two apices of $S_5^{48}$ are $\mathbf{v}^+ = (1,0,0,0,0)$ and $\mathbf{v}^- = (-1,0,0,0,0)$.
The spindle $S_5^{48}$ can be equivalently written as $S_5^{48} = (C^+ + \mathbf{v}^+) \cap (-C^- + \mathbf{v}^-)$ for the two cones $C^+ = \{ \mathbf{x} \in \mathbb{R}^5 \colon A^+ \mathbf{x} \le \mathbf{0} \}$ and $C^- = \{ \mathbf{x} \in \mathbb{R}^5 \colon A^- \mathbf{x} \ge \mathbf{0} \}$.
\begin{theorem} \label{cor:santos-width}
The circuit length of $S^{48}_5$ is $2$.
\end{theorem}
\begin{proof}
First observe that, by the symmetry in the constraint matrices $A^{+}$ and $A^{-}$, for all $\mathbf{g} \in C^+$, the vector $\mathbf{g}'$ obtained from $\mathbf{g}$ by flipping the signs of all but the first component is contained in $C^+$, too (and similarly for $C^-$).
Let $\mathbf{g} = (-360,8,4,4,7)$ and $\mathbf{g}' = (-360,-8,-4,-4,-7)$. By a direct computation, $\mathbf{g}$ generates an extreme ray of the cone $C^+ \cap C^-$. Since $C^+ \cap C^-$ is a region of the elementary arrangement of $S_5^{48}$, the vector $\mathbf{g}$ is a circuit. It is further contained in a facet of $C^+$ and a facet of $C^-$. By symmetry, all of the above also holds for $\mathbf{g}'$.
Now let $K$ be the cone spanned by $\mathbf{g}$ and $\mathbf{g}'$ and define $P = (K + \mathbf{v}^+) \cap (-K + \mathbf{v}^-)$. Since $\mathbf{g} + \mathbf{g}' \in K$ is a multiple of $\mathbf{v}^+ - \mathbf{v}^-$, $P$ is a nonempty spindle with apices $\mathbf{v}^+$ and $\mathbf{v}^-$. It further follows from $K \subset C^+ \cap C^-$ that $P \subset S_5^{48}$.
Note that $P$ is a parallelogram with two edge walks of length $2$ between $\mathbf{v}^+$ and $\mathbf{v}^-$. Since the edges of $P$ are generated by circuits of $S_5^{48}$ and each edge is contained in a facet of $S_5^{48}$, both edge walks are circuit walks in $S_5^{48}$.
Further, the vector $(\mathbf{v}^+ - \mathbf{v}^-)/2 = (1,0,0,0,0)$ is not contained in any facet of $C^+$ or $C^-$ and thus cannot be a circuit of $S_5^{48}$. Hence, the circuit length of $S_{5}^{48}$ is $2$. \qed
\end{proof}
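The membership and facet-incidence claims for $\mathbf{g}$ and $\mathbf{g}'$ in the proof above can be checked directly. The following Python sketch (our own sanity check, not part of the original argument) expands the $\pm$ sign patterns of $A^+$ and $A^-$ into their $24$ explicit rows and confirms that both vectors lie in $C^+ \cap C^-$ and on facets of both cones; it does not verify extremality of the rays.

```python
from itertools import product

# Template rows of A+ and A- for S_5^48; each nonzero magnitude after the
# first entry carries a +/- sign, so the 8 templates expand to 24 rows each.
APLUS = [
    (1, 18, 0, 0, 0), (1, 0, 18, 0, 0), (1, 0, 0, 45, 0), (1, 0, 0, 0, 45),
    (1, 15, 15, 0, 0), (1, 0, 0, 30, 30), (1, 0, 10, 40, 0), (1, 10, 0, 0, 40),
]
AMINUS = [
    (-1, 0, 0, 0, 18), (-1, 0, 0, 18, 0), (-1, 45, 0, 0, 0), (-1, 0, 45, 0, 0),
    (-1, 0, 0, 15, 15), (-1, 30, 30, 0, 0), (-1, 40, 0, 10, 0), (-1, 0, 40, 0, 10),
]

def expand(templates):
    """Expand every +/- sign pattern into explicit matrix rows."""
    rows = []
    for t in templates:
        pm = [i for i in range(1, 5) if t[i] != 0]
        for signs in product((1, -1), repeat=len(pm)):
            row = list(t)
            for i, s in zip(pm, signs):
                row[i] *= s
            rows.append(tuple(row))
    return rows

def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

Ap, Am = expand(APLUS), expand(AMINUS)
g, gp = (-360, 8, 4, 4, 7), (-360, -8, -4, -4, -7)
ok = all(
    all(dot(a, v) <= 0 for a in Ap)              # v in C+ = {A+ x <= 0}
    and all(dot(a, v) >= 0 for a in Am)          # v in C- = {A- x >= 0}
    and any(dot(a, v) == 0 for a in Ap)          # v on a facet of C+
    and any(dot(a, v) == 0 for a in Am)          # v on a facet of C-
    for v in (g, gp))
print(len(Ap), len(Am), ok)  # -> 24 24 True
```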
We stress that the above bound heavily relies on the peculiar symmetry in the description of $S_5^{48}$. However, if $S_5^{48}$ is perturbed carefully, we may apply \cref{lem:spindle-same-cone} to still obtain a bound of $5$ on the circuit length of the perturbed spindle. More precisely, suppose that the two steps $\mathbf{g}$ and $\mathbf{g}'$ from the proof of \cref{cor:santos-width} are still circuits after a perturbation, and suppose that they can be used to walk from one of the apices to some point $\mathbf{y}$ on the boundary of the perturbed spindle. Provided that the perturbation is sufficiently small, $\mathbf{y}$ will be in a face of dimension at most $3$, close to the other apex of the perturbed spindle. We can then embed into this face a spindle that satisfies the requirements of \cref{lem:spindle-same-cone} and has $\mathbf{y}$ as one apex. By \cref{lem:spindle-same-cone}, we can reach the other apex from $\mathbf{y}$ within three more steps.
Santos' original example constructed from $S_5^{48}$ is not the lowest-dimensional bounded Hirsch counterexample known to date. In \cite{msw-15}, Matschke, Santos, and Weibel gave two smaller counterexamples, both of which are constructed from 5-dimensional spindles of length 6 with $28$ and $25$ facets, respectively. The first one, $S^{28}_5$, from \cite[Corollary 2.9]{msw-15} has the same symmetries as $S_5^{48}$ and can be shown to have circuit length at most 2 by the same argument as in \cref{cor:santos-width}.
The second spindle with 25 facets from \cite[Theorem 2.14]{msw-15} is given by $S_5^{25} = \{ \mathbf{x} \in \mathbb{R}^5 \colon A^+ \mathbf{x} \le \mathbf{1}, A^- \mathbf{x} \le \mathbf{1} \}$ where
\begingroup
\renewcommand*{\arraystretch}{1.1}
\[
A^+ = \begin{pmatrix}
1 & 0 & 0 & -20 & -4\\
1 & 0 & 0 & 20 & -4\\
1 & 0 & 0 & 21 & -7\\
1 & 0 & 0 & -21 & -7\\
1 & 0 & 0 & 16 & -15\\
1 & 0 & 0 & -16 & -15\\
1 & 0 & 0 & 0 & 32\\
1 & 0 & 0 & 0 & -32\\
1 & \frac{3}{50} & -\frac{1}{25} & 0 & -30\\
1 & -\frac{3}{50} & -\frac{1}{25} & 0 & 30\\
1 & \frac{3}{1000} & \frac{7}{1000} & 0 & -\frac{159}{5}\\
1 & -\frac{3}{1000} & \frac{7}{1000} & 0 & \frac{159}{5}
\end{pmatrix}
%
\text{ and }
%
A^- = \begin{pmatrix}
-1 & 60 & 0 & 0 & 0\\
-1 & 8 & -30 & 0 & 0\\
-1 & 0 & -33 & 0 & 0\\
-1 & -2 & -32 & 0 & 0\\
-1 & -55 & 0 & 0 & 0\\
-1 & -34 & 36 & 0 & 0\\
-1 & 0 & 76 & 0 & 0\\
-1 & 44 & 34 & 0 & 0\\
-1 & -20 & 0 & \frac{1}{5} & -\frac{1}{5}\\
-1 & \frac{2999}{50} & 0 & -\frac{3}{25} & -\frac{1}{5}\\
-1 & \frac{299999}{5000} & 0 & 0 & \frac{1}{100}\\
-1 & -\frac{549}{10} & 0 & \frac{1}{5000} & \frac{1}{800}\\
-1 & -54 & 0 & \frac{1}{500} & -\frac{1}{80}\\
\end{pmatrix}.
\]
\endgroup
Using Polymake \cite{polymake} and Python, we computed circuit walks of length $5$ between the apices $\mathbf{v}^+ = (1,0,0,0,0)$ and $\mathbf{v}^- = (-1,0,0,0,0)$ of $S_5^{25}$. An initial computation yielded that $S^{25}_5$ has $17454$ circuits, which is far too large for a brute-force search across all length $5$ circuit walks. To find our length $5$ walks, we restricted to a suitably chosen subset of circuits, for which an enumerative approach was computationally feasible.
\begin{theorem} \label{thm:weibel-length}
The circuit length of $S^{25}_5$ is at most $5$.
\end{theorem}
\begin{proof}
We exhibit explicit circuit walks between the apices of $S^{25}_{5}$. We restricted to circuit walks using the rays of $C^{+} \cap C^{-}$, $C^{+}$, and $C^{-}$. This restriction yielded $132$ pairs of circuits $\pm \mathbf{g}$. From this restricted set, we found the following circuit walks.
From $\mathbf{v}^{+}$ to $\mathbf{v}^{-}$, the circuits used are:
\begin{align*}
&(-53592, -609, 1624, 2112, -1320), (-262276, 4060, -3451, -10336, -6460), \\
&(-339416, -4263, -4466, 13376, -8360), (-26752, -336, -352, -83979, 83381), \\
&\text{and } (-83600, -1050, -1100, 3605210, -2060021).
\end{align*}
From $\mathbf{v}^{-}$ to $\mathbf{v}^{+}$, the circuits used are:
\begin{align*}
&(53592, 609, -1624, -2112, 1320), (7197600, 118440, -208336, -404865, -224925),\\
&(5280, -96, 56, 297, -165), (160, -2500, 4000, 9, -5),\\
& \text{and } (4320, 5500, -1500, 243, -135).
\end{align*}
We note that for both walks, the first three steps are circuits from $C^+ \cap C^-$ and the final two are edge directions of the cones $C^-$ and $C^+$, respectively.
\qed
\end{proof}
\subsection{Todd's Monotone Hirsch Counterexample}\label{sec:Todd}
The Todd polytope $M_{4}$ is given by $M_{4} = \{ \mathbf{x} \in \mathbb{R}^4 \colon A \mathbf{x} \leq \mathbf{b}, \mathbf{x} \geq \mathbf{0} \}$ where
\[ A = \begin{pmatrix} 7 & 4 & 1 & 0\\ 4 & 7 & 0 & 1 \\ 43 & 53 & 2 & 5 \\ 53 & 43 & 5 & 2 \end{pmatrix} \text{ and } \mathbf{b} = \begin{pmatrix} 1 \\ 1 \\ 8 \\ 8 \end{pmatrix}. \]
The polytope has $8$ facets in dimension $4$. Consider the linear program $\min \{ (1,1,1,1)^\top \mathbf{x} \colon \mathbf{x} \in M_{4} \}$. Todd showed that the shortest monotone path from the vertex $(1,1,8,8)/19$ to the optimum $\mathbf{0}$ of this LP has length at least $5$, a contradiction to the monotone Hirsch conjecture \cite{ToddExample}.
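As a quick exact check (our own, in rational arithmetic), one can verify that $(1,1,8,8)/19$ makes all four rows of $A\mathbf{x} \le \mathbf{b}$ tight, so together with $\mathbf{x} \ge \mathbf{0}$ it is indeed a vertex of $M_4$, with objective value $18/19 > 0$:

```python
from fractions import Fraction as F

# Todd polytope data as given in the text.
A = [(7, 4, 1, 0), (4, 7, 0, 1), (43, 53, 2, 5), (53, 43, 5, 2)]
b = (1, 1, 8, 8)
v = tuple(F(k, 19) for k in (1, 1, 8, 8))   # the starting vertex (1,1,8,8)/19

# Evaluate A v exactly and compare with b row by row.
lhs = [sum(F(aij) * vj for aij, vj in zip(row, v)) for row in A]
all_tight = all(l == bi for l, bi in zip(lhs, b))
obj = sum(v)                                 # c = (1,1,1,1)
print(all_tight, obj)  # -> True 18/19
```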
However, in the forty years since, there has not been a detailed analysis of how many different orientations of $M_{4}$ have large monotone diameters. We will first address how rigid the selection of orientation is to obtain a counterexample to the monotone Hirsch conjecture. To do this computation, we first enumerate all orientations of the graph induced by a linear objective function and then we compute the monotone diameter using a breadth-first search. In \cite{edgotope}, the authors show that this set of orientations corresponds to the set of vertices of a zonotope they call the \emph{edgotope}. To compute this zonotope for a polytope $P$ with vertices $V(P)$ and edges $E(P) = \{(\mathbf{u},\mathbf{v}): \mathbf{u}, \mathbf{v} \in V(P), \mathbf{u} \text{ is adjacent to } \mathbf{v}\}$, one computes the following:
\[EZ(P) = \sum_{(\mathbf{u},\mathbf{v}) \in E(P)} \text{conv}(\{\mathbf{u},\mathbf{v}\}).\]
Zonotopes are dual to hyperplane arrangements, so this statement is equivalent to the observation that the set of orientations are in bijection with the set of regions of the hyperplane arrangement
\[\mathcal{H} = \bigcup_{(\mathbf{u},\mathbf{v}) \in E(P)} \{ \mathbf{x} \colon (\mathbf{u}-\mathbf{v})^\top \mathbf{x} = 0\}.\]
A region $R$ of $\mathcal{H}$ is uniquely determined by its sign vector $\mathbf{z} \in \{+,-\}^{E(P)}$ with entry $\mathbf{z}_{(\mathbf{u},\mathbf{v})}$ denoting whether $\mathbf{c}^\top(\mathbf{u} - \mathbf{v}) > 0$ or $\mathbf{c}^\top(\mathbf{u} - \mathbf{v}) < 0$ for each $(\mathbf{u},\mathbf{v}) \in E(P)$ and all $\mathbf{c} \in R$. Equivalently, the sign vector determines whether $\mathbf{c}^\top \mathbf{u} < \mathbf{c}^\top \mathbf{v}$ or $\mathbf{c}^\top \mathbf{u} > \mathbf{c}^\top \mathbf{v}$ for all $(\mathbf{u},\mathbf{v}) \in E(P)$ and $\mathbf{c} \in R$, which uniquely determines the orientation of the polytope.
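The sign-vector correspondence can be illustrated on a toy example (the vertices and edges below form a hypothetical square, not the graph of $M_4$): two generic objectives lying in the same region of the arrangement induce the same sign vector and hence the same orientation.

```python
# Hypothetical miniature example: a unit square's vertices and edges.
verts = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

def sign_vector(c):
    """Orientation sign vector z of the edge graph under objective c:
    z_(u,v) records the sign of c^T (u - v) (assumes generic c, so the
    inner product is never zero)."""
    sv = []
    for i, j in edges:
        d = sum(ck * (uk - vk) for ck, uk, vk in zip(c, verts[i], verts[j]))
        sv.append('+' if d > 0 else '-')
    return tuple(sv)

# Objectives in the same region of the arrangement give the same orientation.
print(sign_vector((2, 1)) == sign_vector((3, 1)))  # -> True
```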
To enumerate all the regions of the edgotope arrangement for $M_{4}$, we first found the graph $G(M_{4})$. This graph has precisely $40$ edges, which leads to an arrangement of $40$ hyperplanes. We used Sage \cite{sage} to compute the set of regions of this arrangement and found that there are exactly $7112$ regions and therefore $7112$ orientations. We enumerated the possible oriented graphs of $M_{4}$ for those orientations, and only five have diameters that contradict the monotone Hirsch conjecture. Five representatives of choices of $\mathbf{c}$ for which there is a bad orientation are $(1, 1, 1, 1),$ $(10716, 13680, 3477, 4465),$ $(13680, 10716, 4465, 3477),$ $(912, 1824, 513, 817),$ and $(1824, 912, 817, 513)$. These four orientations other than $(1,1,1,1)$ of the graph of $M_{4}$ are new and only differ from the Todd orientation on one edge in the first two cases and on two edges in the final two cases.
Each of those orientations has $\mathbf{0}$ as the optimum and the only vertex of distance $5$ away is $(1,1,8,8)/19$. In all of the bad orientations, $(1,1,8,8)/19$ is not a maximizer of $\mathbf{c}$. It follows then that $M_{4}$ is not a counterexample to Ziegler's strict monotone Hirsch conjecture, which asks whether the Hirsch bound is satisfied for paths from maxima to minima across all orientations (see Chapter $3$ of \cite{z-95} for more details). Furthermore, there are $1832$ orientations for which $\mathbf{0}$ is the unique sink, so even among those oriented graphs, a large diameter is rare.
While these observations are interesting on their own, in our context they allow us to reduce the set of orientations for which we need to prove there is always a monotone circuit walk down to those with $\mathbf{0}$ as a unique sink and coming from $(1,1,8,8)/19$. Note that the cones for this spindle are given by $\{ \mathbf{x} \in \mathbb{R}^{4} \colon \mathbf{x} \geq \mathbf{0} \}$ and $\{ \mathbf{x} \in \mathbb{R}^{4} \colon A \mathbf{x} \leq \mathbf{b} \}$. With these observations, we may prove our main result:
\begin{theorem} \label{thm:todd}
The monotone circuit diameter of $M_{4}$ is $4$.
\end{theorem}
\begin{proof}
From our computations, for all orientations that do not have $\mathbf{0}$ as the unique optimum, there is always a monotone edge walk from any starting vertex of length at most $4$ and therefore always a monotone circuit walk of length at most $4$. Furthermore, the only case for which the shortest monotone edge walk is of length $5$ is when the starting vertex is $(1,1,8,8)/19$. Note that $\mathbf{0}$ is the apex of the cone $\{ \mathbf{x} \in \mathbb{R}^4 \colon \mathbf{x} \geq \mathbf{0} \}$ and $(1,1,8,8)/19$ is the apex of the cone $\{ \mathbf{x} \in \mathbb{R}^{4} \colon A \mathbf{x} \leq \mathbf{b} \}$.
Observe also that, since the entries of $-A$ are all non-positive, we have that $ \{ \mathbf{x} \in \mathbb{R}^{4} \colon \mathbf{x} \geq \mathbf{0}\} \subseteq \left\{ \mathbf{x} \in \mathbb{R}^{4} \colon -A \mathbf{x} \leq \mathbf{0} \right\}.$ Hence, by \cref{thm:spindle-containment}, there must always exist a monotone circuit walk of length at most $4$ from $(1,1,8,8)/19$ to $\mathbf{0}$ for any orientation for which $\mathbf{0}$ is minimal. Therefore, the monotone circuit diameter of the Todd example $M_4$ is at most $4$. We may show by a direct computation that it is exactly $4$ by noting that $(1,1,8,8)/19$ is not a linear combination of any $3$ circuits, meaning that there is no circuit walk of length $3$ from $\mathbf{0}$ to $(1,1,8,8)/19$. \qed
\end{proof}
A natural concern with our argument is whether it breaks after perturbation. Since $M_{4}$ is simple, perturbation does not change the graph of the polytope. However, any sufficiently generic perturbation will increase the set of possible orientations, since for such a perturbation, the set of edge directions will not satisfy any nontrivial linear relations. Nevertheless, we suspect that the set of orientations is always the same for any sufficiently small perturbation even if it is distinct from the set of orientations we started with. If that is the case, then one can extend our argument to allow for perturbation by evaluating the circuit diameter for those new orientations.
\subsection*{Acknowledgements}
We would like to thank Nicholas Crawford, Jes\'{u}s A. De Loera, Michael Wigal, and Youngho Yoo for insightful discussions.
\bibliographystyle{splncs04}
\section{Introduction}
The physics of interacting vortex lines in high-tem\-perature superconductors
subject to strong thermal fluctuations and pointlike or extended disorder is
amazingly rich and has been a major research focus in condensed matter physics
in the past two decades \cite{BLAT}.
Vortex motion plays a crucial role in the transport properties of type-II
superconductors in external magnetic fields.
In the presence of a sufficiently large applied current magnetic flux lines
will experience a Lorentz force and drift perpendicular to both the current and
applied magnetic field.
The motion of these current-encircled magnetic flux filaments induces an
electric field parallel to the applied current resulting in power dissipation
and hence an Ohmic voltage drop across the material proportional to the
velocity of the vortices.
While an externally applied current serves as a driving force, underlying
defects in the superconducting material inhibit vortex motion.
Material defects pin vortices below a critical applied force, and also play a
significant role in their motion above that threshold, providing, on a
coarse-grained description level, an effective friction or viscosity.
Aside from the obvious relevance for technological applications, driven
magnetic flux lines in type-II superconductors also represent one of the few
cleanly experimentally realizable systems of interacting particles in a
non-trivial non-equilibrium steady state.
A thorough understanding of the ensuing phase diagram and full characterization
of each emerging steady state should shed light on the rich and still rather
incompletely understood features of non-equilibrium systems in general.
Since stochastic fluctuations and intrinsic correlations typically play a significant role away from thermal equilibrium, it is desirable to attain a thorough quantitative understanding of the fluctuations in each emerging stationary state.
In superconductors specifically, pinning effects on vortex motion are also
reflected in the voltage noise power spectrum \cite{CLEM}.
For instance, slightly abo\-ve the critical force, in the presence of strong
disorder, depinning of vortices is observed to proceed via flow through plastic
channels \cite{BRANDT}.
These `rivers' of vortices form at different locations in the sample, and flow
around `islands' of temporarily trapped flux lines resulting in incoherent
motion.
Such behavior has been observed experimentally \cite{MATS1}, as well as in
two-dimensional computer simulations \cite{JENS1}, and is characterized in the
velocity or voltage frequency power spectrum by a broadband noise signal which
obeys a $1/\omega^\alpha$ power law, as also demonstrated in recent
three-dimensional numerical work \cite{VEST}.
Well above the critical force it has been observed that vortices are more
translationally ordered than at low velocities \cite{YARIN}.
It might be expected that at a sufficiently high drive, the vortices would form
a moving Abrikosov lattice since the effective pinning force from the disorder
on each vortex varies rapidly and would therefore be less effective
\cite{KOSH}.
However, it has been shown by Le~Doussal and Giamarchi \cite{DOUS1} that some
modes of the disorder are not affected by the motion even at large velocities.
As a result the vortices enter what has been termed a `moving glass' phase.
Here, subtle competitions between elastic energy, disorder, and dissipation
lead to the transverse displacements becoming pinned into preferred
time-independent configurations resulting in stationary two-dimensional
channels.
In the moving glass phase, vortices thus follow each other in a manner similar
to beads on a wire.
There exist a few possible coupling regimes between these elastic channels.
For strong point disorder approximately parallel elastic channels are
completely decoupled, while the periodicity along the direction transverse to
the drive is maintained.
Here sharp delta-function Bragg peaks with nonzero reci\-procal-lattice vector
components along the direction of motion are lost, while peaks with only
transverse components remain.
This regime is known as the `moving transverse glass' or `moving smectic'
phase \cite{DOUS1}, and is supported by recent numerical simulations in two and
three dimensions \cite{GOTCHA,CHEN}.
On the other hand, for weak point disorder, or large velocities, relative
deformations grow only logarithmically with distance; hence, the vortex
structure maintains quasi long-range order corresponding to complete elastic
coupling between vortices.
This state is known as a `moving Bragg glass' and is characterized by
algebraically divergent structure factor peaks at small reciprocal lattice
vectors \cite{DOUS1}.
The interaction of the moving Bragg glass with the underlying material defects
is manifested by a characteristic peak in the power spectrum corresponding to
the periodicity of the vortex lattice \cite{DOUS1}.
Defects in the material temporarily slow vortices resulting in `stick-slip'
motion.
In a structurally ordered phase such as the moving Bragg glass, this behavior
is repeated, resulting in a periodically varying average overall velocity.
With the lattice vector of the Bragg glass oriented in the flow direction, the
resulting characteristic frequency associated with this motion is known as the
`washboard' frequency.
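The washboard signal can be illustrated with a toy velocity trace (our own synthetic example in arbitrary units, not simulation data): a mean drift velocity $v_0$ modulated at $f_w = v_0/a$, where $a$ is the lattice constant along the flow; a naive discrete Fourier transform of the trace then exhibits the narrowband peak at $f_w$.

```python
import math, cmath

# Synthetic stick-slip velocity trace: drift v0 plus a small modulation at
# the washboard frequency f_w = v0 / a (a = lattice constant along the flow).
v0, a, dt, N = 2.0, 0.5, 0.025, 1000
f_w = v0 / a                                  # 4.0 in these toy units
signal = [v0 + 0.3 * math.cos(2 * math.pi * f_w * k * dt) for k in range(N)]
mean = sum(signal) / N

def power(f):
    """Power of the mean-subtracted signal at frequency f (naive DFT)."""
    z = sum((s - mean) * cmath.exp(-2j * math.pi * f * k * dt)
            for k, s in enumerate(signal))
    return abs(z) ** 2 / N

# The spectrum is sharply peaked at the washboard frequency.
print(power(f_w) > 50 * power(0.7 * f_w))  # -> True
```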
Related phenomena are certainly not unique to dri\-ven flux lines in type-II
superconductors, but are, for example, well-established in charge- and
spin-density wave systems \cite{GRUNER}.
Random point disorder need not be the only pinning structure in superconducting
materials.
Col\-umnar disorder may also be introduced into materials for the purpose of
increasing the critical current \cite{CIVALE}.
Far above the depinning threshold a `moving Bose glass' is formed in the
presence of columnar pins \cite{DOUS1}.
Owing to the correlated nature of the disorder along the length of each vortex,
the structure function tends to resemble that of the moving transverse glass.
Furthermore, akin to the equilibrium Bose glass (i.e., the disorder-dominated
amorphous structure formed by flux lines localized by correlated defects
\cite{NEL1}), the moving Bose glass displays a diverging tilt modulus
\cite{DOUS1,OLIVE}.
Whether a unique voltage noise signal exists for this phase (as well as for the
moving transverse glass) is unclear, and subject to this present investigation.
The washboard frequency has been observed in a number of experiments
\cite{FIORY,HARRIS,TROY}.
However, direct observation of the washboard noise was achieved by Togawa
{\em et al.} \cite{TOGAWA}, who obtained voltage noise spectra of BSCCO
crystals in the mixed state subject to a constant current for various applied
magnetic field strengths.
For low magnetic fields broadband noise was observed and attributed to plastic
vortex flow.
As the applied magnetic field was increased, the broadband noise signal reached
a maximum value and then decreased again, while a narrowband noise peak that
corresponded to the washboard frequency emerged.
Upon increasing the magnetic field further, the characteristic frequency of the
narrowband noise grew owing to a tighter flux line packing of the vortices, and
hence a shorter vortex lattice constant.
The narrowband noise signal also decreased in height and increased in width.
The reason for this is apparently still not fully understood.
Washboard noise has also been detected unambiguously in a number of
two-dimensional numerical simulations; as characteristic examples, we mention
the following:
Olson {\em et al.} \cite{OLSON1} performed mole\-cular dynamics simulations of
vortices that were dri\-ven through a system of randomly placed defects.
Upon varying the drive strength they noticed that the number of regimes
available to the system above the plastic flow phase depended on the vortex
interaction strength.
For systems with intermediate to high interaction strength and large applied
drive, the vortices entered a coupled channel regime indicated by sixfold
coordination of the structure.
In this regime washboard noise was observed.
Furthermore, it was noted that the washboard signal intensity decreased as the
system size was increased.
It is believed that this was due to multiple domains forming in the vortex
lattice resulting in decoherence of the noise signal.
Kolton {\em et al.} \cite{KOLTON1} also performed two-dimension\-al molecular
dynamics simulations to investigate Fiory steps \cite{FIORY} for different
vortex velocities.
In order to understand the relationship between these steps and the temporal
order in the different regi\-mes, the voltage power spectrum was investigated
without an applied ac drive.
The authors observed the evolution of the power spectrum from broadband noise
to the emergence of a narrowband peak as the applied dc drive was increased,
and also found higher harmonics.
Two-dimensional computer simulations employing Langevin dynamics and varying
drive and pinning strength have been performed as well \cite{FANG}, with
similar washboard noise results.
In contrast, we are aware of only one three-dimensional investigation of the
washboard noise.
Using an anisotropic XY model Chen and Hu \cite{CHEN} investigated the
first-order transition from the moving Bragg glass to the moving smectic.
In the moving Bragg glass phase narrowband noise that corresponded to the
washboard frequency was observed along with harmonics for various driving
strengths at zero temperature.
The washboard peak itself persisted for $T>0$, while its harmonics disappeared.
The authors also found that the moving Bragg glass turned into a moving liquid
at high drive, and hence that the washboard signal was destroyed.
They argue this to be due to thermally activated vortex loops inducing
dislocations in the Bragg glass \cite{CHEN}.
The purpose of this work \cite{TOM} is threefold.
First, we wish to establish confidence in our novel simulation approach to
modeling vortex motion by qualitatively comparing our results to well-known
superconducting vortex behavior.
Second, we wish to investigate the evolution of the narrowband noise associated
with the washboard frequency for increasing vortex density (i.e., increasing
magnetic flux density) in the presence of randomly distributed point as well as
columnar defects.
Finally, we are interested in studying the effects of different types of
pinning centers on the narrowband voltage noise.
Below the critical current, in the presence of columnar defects, it has been
predicted that vortices hop between pinning sites temporarily trading elastic
deformation energy in the form of double-kinks and half-loops for a lower
overall energy configuration \cite{NEL1}.
Well above the depinning current remnants of these half-loop excitations and
double kinks may still exist.
At high driving values these remnant excitations would occur predominantly in
the direction of the drive, while other excitations would be suppressed by the
localizing effect of the columnar pins.
Obviously, the impact of such vortex excitations on the power spectrum cannot
be addressed by a two-dimensional simulation, but require a full
three-dimensional model.
Based on the effective free energy for interacting magnetic flux lines in the
London approximation, and subject to attractive pinning centers \cite{NEL1}, we
have developed a three-dimensional Monte Carlo simulation code \cite{DAS} to
study the effects of disorder on the velocity / voltage power spectrum and the
two-dimensional static vortex structure factor in the plane transverse to the
magnetic field for driven vortices in the non-equilibrium steady state.
Specifically, we compare results for point and columnar defects.
We also measure the average radius of gyration in order to examine the effects
of the different defect types on the shape or thermal `wandering' of the
elastic flux lines along the magnetic field ($z$) direction.
The simulation results reported here should be contrasted with our earlier
findings for non-interacting flux lines in the presence of various disorder
distributions \cite{DAS}.
Our results display many similar features for both defect types.
As the vortex density is increased for systems with either weak point or
correlated disorder, positional ordering is observed to increase in the
structure factor plot.
For columnar defects the vortex structure factor is found to change from that
characteristic of a typical liquid, to a smectic, and eventually an ordered
triangular lattice.
For the case of point defects, we only observe the triangular array in the
parameter region studied here.
We find that the structure factor plots at low vortex densities in the presence
of point disorder appear qualitatively similar to the results for columnar
defects at higher densities.
As the structure factor begins to display positional order in the direction of
the drive a narrowband noise signal in the velocity noise power spectrum
corresponding to the washboard effect is detected for both defect types.
Associated with the washboard peak are harmonics, the ratios of
which initially appear to be related to the type of pinning defect in the
system.
We present results that suggest these harmonic ratios are in fact not dependent
on the type of pinning centers present in the sample.
To further examine these potentially distinct effects on the velocity or
voltage power spectrum, we vary the effectiveness of the pinning centers.
For a fixed vortex density the positional arrangement of the pinning sites in
the system is changed from randomly distributed (i.e., point-like defects) to
correlated along the $z$ axis (i.e., columnar defects).
The velocity fluctuation power spectra as well as the structure factor plots
are examined for this series of simulations as well.
We then compare our findings to the results obtained with increasing point
defect pinning strength.
We find that whereas the power spectrum and structure factor evolve in a
qualitatively similar manner when varying the `pinning effectiveness' through
either method, marked differences appear in the behavior of the mean
radius of gyration.
Namely, as the point pinning strength increases, vortices tend to stretch and deform, following $r_g \propto U^2$ behavior, where $U$ represents the pinning potential depth.
In contrast, we observe the radius of gyration to saturate for increasing
columnar defect length.
The behavior is best described phenomenologically as $r_g \propto e^{-l_0/l}$,
where $l$ denotes the length of the columnar defects (and $l_0$ gives the
length scale in $z$ direction).
In the following section 2, we describe our model and the simulation algorithm
in detail.
Section 3 contains our simulation results, as already summarized above.
Finally, in section 4, we conclude and provide an outlook for further
investigations.
\section{Model Description and Simulation Algorithm}
In our Monte Carlo simulations, vortices are considered in the London
approximation (with the London penetration depth large compared to the
coherence length, $\lambda \gg \xi$) as discretized elastic lines \cite{NEL1}
(see also Refs.~\cite{ROSSO,SEN}).
The elastic energy associated with the line tension of $N_v$ flux lines is
taken to be
\begin{equation}
E_L = \frac{\epsilon_1}{2} \sum_{i=1}^{N_v} \int_{0}^{L} dz
\bigg\arrowvert\frac{d\boldsymbol r_i(z)}{dz}\bigg\arrowvert^2 ,
\label{etension}
\end{equation}
where $\boldsymbol{r}_i(z)$ describes the configuration of the $i$th vortex by
specifying its two-dimensional position $\boldsymbol{r}$ as function of the
coordinate $z$ ($0 \leq z \leq L$) along the magnetic field direction.
The line stiffness is given by $\epsilon_{1} = \epsilon_{0} \ln \bigl( \lambda_{ab} / \xi_{ab} \bigr) \Gamma^{-2}$, where $\lambda_{ab}$ is the
in-plane London penetration depth, and $\xi_{ab}$ the in-plane superconducting
coherence length.
$\epsilon_{0} = \bigl( \phi_{0} / 4\pi\lambda_{ab} \bigr)^2$ sets the overall
energy scale, and $\phi_{0} = hc / 2e$ is the magnetic flux quantum.
The expression (\ref{etension}) for the elastic energy holds if
$\big\vert d\boldsymbol{r}_i(z) / dz \big\vert^2 \ll \Gamma^{-1}$, where
$\Gamma^2 = M_{z}/ M_\perp$ denotes the effective mass ratio for the elastic
line.
In this study we model high-$T_{c}$ materials for which $\Gamma \gg 1$.
In the simulation each flux line is represented by $N_{p}$ points located at
$(\boldsymbol{r}_{i},z_{i})$.
Each point is confined to a constant $z_{i}$ (a separate $ab$ plane) and
interacts with its nearest neighbors above and below via a simple harmonic
potential.
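A minimal sketch of this discretization (our own illustration; the function signature and the restriction to a single line are assumptions, not the authors' code). Each pair of beads in neighboring $ab$ planes contributes a harmonic term $(\epsilon_1/2)\,|\Delta\boldsymbol{r}|^2/\Delta z$ to the line-tension energy above:

```python
def elastic_energy(line, eps1, dz):
    """Discretized line-tension energy of one flux line: harmonic
    nearest-neighbor coupling between beads in adjacent ab planes,
    periodic along z.  `line` is a list of (x, y) positions, one bead
    per plane; `dz` is the plane spacing."""
    n = len(line)
    e = 0.0
    for k in range(n):
        xa, ya = line[k]
        xb, yb = line[(k + 1) % n]   # neighbor in the plane above (periodic)
        e += ((xb - xa) ** 2 + (yb - ya) ** 2) / dz
    return 0.5 * eps1 * e
```

A perfectly straight line costs no elastic energy; displacing a single bead by $d$ costs $\epsilon_1 d^2/\Delta z$ (two deformed segments).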
The total interaction energy between all pairs of distinct vortices is
\begin{equation}
E_{\rm int} = \sum_{i \not= j}^{N_v} \int_{0}^{L}
V\Bigl(|\boldsymbol{r}_{i}(z)-\boldsymbol{r}_{j} (z)|\Bigr) \, dz \, ,
\end{equation}
with the pair potential $V(r) = 2\epsilon_{0} K_{0}(r / \lambda_{ab})$.
Here, $K_{0}$ is the modified Bessel function of zeroth order, and can be
described qualitatively as diverging logarithmically as $r\rightarrow 0$ and
decreasing exponentially for long distances $r \gg \lambda_{ab}$.
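Since $K_0$ has no elementary closed form, a simple sketch (our own; the quadrature parameters are illustrative choices) evaluates it from its integral representation $K_0(x) = \int_0^\infty e^{-x\cosh t}\,dt$ and assembles the pair potential:

```python
import math

def bessel_k0(x, tmax=20.0, n=2000):
    """Modified Bessel function K0 via its integral representation
    K0(x) = int_0^inf exp(-x cosh t) dt, by trapezoidal quadrature;
    accurate here because the integrand decays double-exponentially."""
    h = tmax / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)))
    for k in range(1, n):
        s += math.exp(-x * math.cosh(k * h))
    return s * h

def pair_potential(r, eps0, lam):
    """In-plane vortex-vortex repulsion V(r) = 2 eps0 K0(r / lam)."""
    return 2.0 * eps0 * bessel_k0(r / lam)
```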
Interactions between vortices occur only within the $ab$ planes (i.e.,
consistent with the London limit we neglect any cross-plane interactions); this
approximation is valid as long as the requirements for Eq.~(\ref{etension}) are
satisfied.
For the simulation, we consider a system of extension $L_x$ and $L_y$ in the
$x$ and $y$ directions with periodic boundary conditions.
For each vortex pair, we compute its minimal distance within this rectangle and
its periodic images adjacent to it, and use that distance to evaluate the
interaction potential.
As a consequence of this nearest-image approximation, the interaction potential
is cut off at distance $\min(L_x/2, L_y/2)$.
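The nearest-image convention just described amounts to wrapping each in-plane coordinate difference into $[-L/2, L/2]$ before measuring the distance; a minimal helper of our own:

```python
import math

def min_image_dist(ri, rj, Lx, Ly):
    """In-plane distance between two beads under the nearest-image
    convention: each coordinate difference is wrapped into [-L/2, L/2]
    before the Euclidean distance is taken."""
    dx = ri[0] - rj[0]
    dy = ri[1] - rj[1]
    dx -= Lx * round(dx / Lx)
    dy -= Ly * round(dy / Ly)
    return math.hypot(dx, dy)
```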
To minimize the effects of the cut-off, $\lambda_{ab}$ has been decreased to
prevent numerical artifacts observed in the simulation \cite{fnote1}.
We have also run Monte Carlo simulations for a cutoff length twice the original
length used in the bulk of this study.
To accommodate the increase in length the system area was increased by a factor
of four.
For the larger system the washboard noise is recovered for both columnar and
point defects, and higher harmonics are observed for point defects.
Material defects in the system are represented by a distribution of cylindrical
potential wells,
\begin{equation}
E_D = \sum_{j=1}^{N_v} \int_{0}^{L}
V_D\Bigl(\boldsymbol{r}_j(z)\Bigr) \, dz \, ,
\end{equation}
with $V_D\Bigl(\boldsymbol{r}_j(z)\Bigr) = \sum_{k=1}^{N_D} U
\Theta\Bigl(b_0-|\boldsymbol{r}_j(z)-\boldsymbol{r}_k^{(p)}|\Bigr)$.
Here $b_0$ is the pin radius in the $ab$ plane, $\Theta$ denotes the Heaviside
step function, $\boldsymbol{r}_k^{(p)}$ indicates the location of the $k$th
pinning center, and $U$ characterizes the well depth.
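For one flux-line element in a given $ab$ plane, the sum over cylindrical wells can be sketched as below (an illustrative stand-in, taking $U<0$ for attractive wells; the treatment of the boundary $|\boldsymbol{r} - \boldsymbol{r}^{(p)}| = b_0$, where the Heaviside function is ambiguous, is immaterial):

```python
import math

def pinning_energy_plane(r, pins, U=-1.0, b0=1.0):
    """Pinning energy V_D felt by one flux-line point at in-plane
    position r: a sum of flat cylindrical wells of depth U and radius
    b0 (the Heaviside factor in the text).  U < 0 models attraction."""
    e = 0.0
    for (px, py) in pins:
        if math.hypot(r[0] - px, r[1] - py) < b0:
            e += U
    return e
```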
Finally, in the presence of an external current the flux lines experience a
force per unit length
$\boldsymbol{f}_L = \phi_{0} \hat{z} \times \boldsymbol{J} / c$,
therefore a corresponding work term is introduced:
\begin{equation}
W = -\sum_{i=1}^{N_v} \int_{0}^{L} \boldsymbol{f}_L \cdot
\boldsymbol{r}_{i}(z) \, dz \, .
\end{equation}
This work contribution favors vortex motion along the direction of the force
while suppressing motion against it.
For this investigation, the force is always applied in the $x$ direction.
The total energy of the system reads
$E_{\rm tot} = E_L + E_{\rm int}+ E_D + W$.
In our simulations the applied magnetic field is taken parallel to the $c$ axis
(oriented parallel to $\hat{z}$); therefore, at time $t=0$ straight lines are
placed vertically in a system of size $L_x \times L_y \times L_z$ with periodic
boundary conditions in all directions.
We have found that the initial configurations of the individual lines do not
affect the steady state attained at long times.
Defect centers at positions $\boldsymbol{r}_k^{(p)}$ are also distributed
throughout the system, either randomly or aligned parallel to the $c$ axis to
model columnar pins.
The state of the system is then updated according to standard Monte Carlo
Metropolis rates.
When the number of attempted updates is equal to the number of points that make
up a flux line multiplied by the total number of lines in the system, this
constitutes a single Monte Carlo step (MCS) and serves as the unit of time in
the simulation.
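The Metropolis acceptance rule underlying these updates can be written compactly; in the sketch below (our own formulation) the uniform random number $u \in [0,1)$ is passed in explicitly so that the rule is deterministic and testable. One MCS then amounts to $N_p \times N_v$ such attempted single-point moves:

```python
import math

def metropolis_accept(delta_e, kT, u):
    """Standard Metropolis rule: accept a trial move with probability
    min(1, exp(-delta_e / kT)).  delta_e is the total energy change of
    the proposed move; u is a uniform random number in [0, 1)."""
    if delta_e <= 0.0:
        return True          # downhill moves are always accepted
    return u < math.exp(-delta_e / kT)
```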
While this `driven diffusive system' simulation approach, introduced by Katz,
Lebowitz, and Spohn \cite{KATZ}, is likely best suited to model thermally
activated motion close to equilibrium, we find that our results in the driven
regime, considerably away from equilibrium, are quite similar to those observed
in other studies, as mentioned above.
Nevertheless, it is not at all obvious even for steady states far from thermal
equilibrium which choices of `microscopic' Monte Carlo rates yield the most
`realistic' representations of an experimental system, even in a coarse-grained
view.
Gotcheva {\em et al.} \cite{GOTCHA} have recently brought into question the
general validity of the driven diffusive Metropolis Monte Carlo dynamical
method in their simulations for vortices on a two-dimensional lattice.
The non-equilibrium steady states obtained using Metropolis Monte Carlo and
continuous-time Monte Carlo dynamical rules, respectively, were compared as a
function of temperature and driving force.
The results differed dramatically depending on which dynamical rule was used.
In some instances at least, the Metropolis algorithm yielded a spatially
disordered moving steady state while the continuous-time Monte Carlo rules
preserved positional order (in finite-size systems) over much of the
non-equili\-brium phase diagram.
The authors argue convincingly that the lack of order observed in the
Metro\-polis simulation was due to intrinsic randomness in the updating rules.
It would certainly be interesting and worthwhile to probe to what extent these
findings also apply to three-dimensional off-lattice simulations, which are
presumably less likely to be stuck in long-lived metastable configurations.
In our current study we find that spatial order survives in the non-equilibrium
steady state for an extended range of flux density values.
Quite generally, it is crucial for the analysis of out-of-equilibrium systems
to carefully investigate alternative approaches to the description of their
dynamics in order to probe their actual physical properties rather than
spurious artifacts inherent in any mathematical modeling.
Different mathematical and numerical representations of non-equilibrium
systems in fact rely on various underlying {\em a priori} assumptions that must
be tested {\em a posteriori} by comparing their respective results.
For example, many computer simulation studies of driven flux line systems
invoke stochastic Langevin equations, wherein one assumes that all fast degrees
of freedom are aptly captured in terms of uncorrelated white noise (see, e.g.,
Refs.~\cite{OLSON1,OLSON2,FANG}).
Such a mesoscopic representation of the dynamics is usually adequate in thermal
equilibrium where the form of the input noise correlations is severely
restricted by fluctua\-tion-dissipation relations.
Yet the large-scale and long-time properties of nonlinear Langevin stochastic
differential equations away from thermal equilibrium are well-known to be often
drastically affected by the functional form or even strength of the assumed
noise correlations, which are not uniquely determined by Einstein relations any
more.
One would, e.g., suspect that the flux line correlations along the magnetic
field axis should be reflected in the noise spectrum and relaxational features.
It is therefore imperative to test a variety of different numerical methods and
compare the ensuing results in order to identify those properties that are
generic to the physical system under investigation.
The parameter values used in our simulations correspond to typical high-$T_c$
materials, and are reported in units of $b_0$ (the pin radius) and interaction
energy scale $\epsilon_0$.
The parameters $\xi_{ab}$, $\epsilon_1$, and $\Gamma^2$ are chosen to be
$0.5 \, b_0$, $0.25 \, \epsilon_0$, and $16$, respectively.
In this study we examine vortex motion in the weak pinning regime.
The pinning potential $U$ has been given a value of
$U_0 = 0.03125 \, \epsilon_0$ while the predicted value is approximately
$0.5 \, \epsilon_0$ \cite{NEL1}.
The penetration depth $\lambda_{ab}$ is assigned a value of $16 \, b_0$ which
is about $1/3$ of typical high-$T_c$ values.
As previously mentioned this choice was made in order to minimize artifacts due
to the interaction cut-off.
The average separation distance between randomly distributed defects in each
$ab$ plane is taken to be $15 \, b_0$.
The maximum distance for a point on any flux line to move is limited to
$b_0 / 2$, to help avoid vortices `hopping' over defects.
We have used a discretization along the field direction of $L_z = 20$ parallel
planes.
As is well-known in finite-size vortex simulations, a square planar geometry
favors ordering into a square lattice whose configurational energy is only
slightly above that of the triangular Abrikosov lattice \cite{KLEIN}.
We have thus chosen the system's planar aspect ratio as $L_x:L_y = 2:\sqrt3$ in
order to easily accommodate a triangular lattice.
The vortices are then placed in the system prearranged in a triangular array
with a lattice vector oriented along the $x$ direction.
Choosing this aspect ratio allows an even square number of vortices to `fit'
in the system while arranged in this configuration.
In the simulation runs, the vortex lattice settles into local energy-minimum
configurations by either aligning a lattice vector parallel to the system's
horizontal axis, rotating that lattice vector by $30^{\circ}$ from the
horizontal, or twisting about the system at a chiral angle compatible with the
periodic boundary conditions.
When in our simulations an external force is applied and the vortex system
driven, we find that the lattice maintains its initial orientation.
However, by approximately doubling the defect pinning strength the lattice
reorients such that the principal lattice vector points in the direction of the
applied drive.
Drive-induced reorientation has been observed in experiments \cite{SCHMID} and
simulations \cite{FANG}.
Since the present study is primarily concerned with the effect of defect
correlations on the dynamics, the temperature is chosen such that
$T / T^* <1$, where $T^* = k_{B}^{-1} \sqrt{\epsilon_1 U_0} b_0$ is the
temperature above which entropic corrections due to thermal fluctuations become
relevant for pinned flux lines \cite{NEL1}.
Here, thermally induced stretching and wandering of the flux lines are largely
suppressed (as long as the vortices remain pinned), and the results can be
interpreted in terms of low-temperature kinetics.
Our simulations are thus usually run at $k_{\rm B} T$ per unit length equal to
$0.004 \, \epsilon_0$.
The average or center of mass (CM) velocity for each vortex is then calculated
and averaged over all vortices:
\begin{equation}
\boldsymbol{v}_{\rm cm} = \frac{1}{N_v} \sum_{i=1}^{N_v}
\frac{\boldsymbol{r}_{{\rm cm}_i}(t+\tau)
- \boldsymbol{r}_{{\rm cm}_i}(t)}{\tau} \, ,
\end{equation}
where $\boldsymbol{r}_{{\rm cm}_i}$ denotes the center-of-mass position of the
$i$th vortex, and $\tau$ is the time interval between measurements.
$\tau$ is set to $30$ MCS, and the simulation is then run for $10^5$ MCS to
arrive at a steady state.
Data are subsequently collected for the next $2.5 \times 10^5$ MCS.
From the collected data, we obtain the two-dim\-ensional static structure
factor in the plane transverse to the magnetic field,
\begin{equation}
S({\bf k}) = \int \langle \rho({\bf 0}) \rho({\bf r}) \rangle \,
e^{- i {\bf k} \cdot {\bf r}} \, d{\bf r} \, ,
\end{equation}
where $\langle \rho({\bf 0}) \rho({\bf r}) \rangle$ denotes the density-density
correlation function, for the driven vortices in the non-equilibrium steady
state.
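For $N_v$ point-like vortex positions in a plane, the structure factor reduces to $S(\mathbf{k}) = \bigl|\sum_j e^{-i \mathbf{k}\cdot\mathbf{r}_j}\bigr|^2 / N_v$; a direct evaluation (illustrative sketch, not the analysis code of this study) might look like:

```python
import cmath, math

def structure_factor(positions, k):
    """Discrete estimate of the in-plane structure factor
    S(k) = |sum_j exp(-i k . r_j)|^2 / N for N point vortices;
    Bragg peaks appear at the reciprocal lattice vectors."""
    n = len(positions)
    amp = sum(cmath.exp(-1j * (k[0] * x + k[1] * y)) for (x, y) in positions)
    return abs(amp) ** 2 / n
```

For a perfectly periodic chain of spacing $a$, $S(\mathbf{k})$ equals $N_v$ at $k = 2\pi/a$ and vanishes between the Bragg peaks, as expected.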
We also measure the average radius of gyration,
\begin{equation}
r_g = \left(\frac{1}{N_v L} \sum_{i=1}^{N_v} \int_0^L
[\boldsymbol{r}_{{\rm cm}_i} - \boldsymbol{r}_i(z)]^2 \, dz \right)^{1/2} ,
\end{equation}
in order to examine the effects of the different defect types on the shape or
thermal `wandering' of the elastic flux lines along the magnetic field ($z$)
direction.
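Since each line is discretized into $N_p$ points, the $z$ integral becomes an average over the points of each line; a minimal sketch of this estimator (our own illustration) is:

```python
import math

def radius_of_gyration(lines):
    """Mean radius of gyration of a set of discretized flux lines.
    Each line is a list of (x, y) points, one per ab plane; the z
    integral in the text is replaced by an average over these points."""
    total = 0.0
    n_pts = 0
    for pts in lines:
        cx = sum(p[0] for p in pts) / len(pts)   # center-of-mass position
        cy = sum(p[1] for p in pts) / len(pts)
        total += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in pts)
        n_pts += len(pts)
    return math.sqrt(total / n_pts)
```

A perfectly straight line gives $r_g = 0$, while any stretching away from the center-of-mass position increases $r_g$.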
Additional information about the detailed dynamics is encoded in the velocity
fluctuation power spectra
\begin{equation}
S(\omega) = \left| \int v(t) \, e^{i \omega t} \, dt \right|^2 .
\end{equation}
These velocity power spectra have been appropriately windowed in order to
minimize spectral leakage.
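The windowed spectrum can be sketched as follows. This is an illustrative stdlib-only version: a Hann window is assumed here purely for concreteness (the text does not specify which window was used), and a plain $O(N^2)$ DFT replaces the FFT for clarity:

```python
import cmath, math

def windowed_power_spectrum(v):
    """Velocity-fluctuation power spectrum |FT{v}|^2 with a Hann window
    applied first to reduce spectral leakage.  Returns the one-sided
    spectrum, bins m = 0 .. N/2."""
    n = len(v)
    # Hann window: w_k = 0.5 * (1 - cos(2 pi k / n))
    w = [x * 0.5 * (1.0 - math.cos(2.0 * math.pi * k / n))
         for k, x in enumerate(v)]
    spec = []
    for m in range(n // 2 + 1):
        amp = sum(w[k] * cmath.exp(-2j * math.pi * m * k / n)
                  for k in range(n))
        spec.append(abs(amp) ** 2)
    return spec
```

Applied to a pure sinusoid, the spectrum is sharply peaked at the signal frequency, with the window confining the leakage to the adjacent bins.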
Since vortex motion across the sample induces a voltage drop, the velocity
fluctuations are experimentally directly accessible as the measured voltage
noise.
Each of the above observables is measured for various vortex densities and
defect configurations.
Vortex densities are reported as a count of the number of vortices in a unit
system of area $A = 150 \frac{2}{\sqrt{3}} \, b_0 \times 150 \, b_0$.
The vortex structure factor and velocity noise plots are averaged over time and
typically over $40$-$50$ disorder realizations.
\section{Monte Carlo Simulation Results}
\subsection{Current-Voltage (I-V) Characteristics}
In order to validate our model and simulation algorithm we have examined the
average vortex velocity as a function of the applied force and compared our
results to well-established experimental and numerical findings.
In experiments, the driving force is proportional to the externally applied
current, and the induced voltage across the sample proportional to the mean
flux line velocity, hence the current-voltage (I-V) characteristics are given
by the force-velocity curves in our simulations.
Our simulation results for systems with random\-ly distributed columnar pins
and point defects are plotted for various flux densities in
Fig.~\ref{ivresults}.
In addition to averaging over all flux lines in the system, the data are
averaged over time (25,000 Monte Carlo steps) and five disorder realizations.
For both graphs the number of effective pinning sites is equal, as are the
pinning strengths (per unit length) $U_0$ for all the defects.
\begin{figure}
\begin{center}
\subfigure{\label{randomiv}\includegraphics[scale=.6]{./randomiv.eps}}
\vskip -0.2 truecm
\subfigure{\label{pointsiv}\includegraphics[scale=.6]{./pointsiv.eps}}
\vskip -0.2 truecm
\subfigure{\label{depinthr}\includegraphics[scale=.6]{./f_c_vs_rho.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Velocity (induced voltage) vs. force (applied current) curves
(I-V characteristics) for (a) randomly distributed columnar pins, and
(b) point defects.
For both defect types the depinning threshold decreases with increasing
vortex / flux density.
This can be seen most clearly for point defects in the inset in (b), which
amplifies the low current regime: here, the force values on the $x$ axis of
the inset range from $0$ to $0.0025$, the velocity ranges from $0$ to
$0.0016$ and is given in units of $b_0 /$ MCS (pin radius per Monte Carlo
step).
The following symbols represent vortex densities reported as the areal
density $\rho$ in a fixed area
$A = 150 \frac{2}{\sqrt3}\, b_0 \times 150\, b_0$:
$\blacktriangle - 144 / A = 0.00554/b_0^2$,
$\blacksquare - 100 / A = 0.00385/b_0^2$, $* - 64 / A = 0.00246/b_0^2$,
$\bigcirc - 36 / A = 0.00139/b_0^2$, $\times - 16 / A = 0.00062/b_0^2$.
(c) Estimate of the depinning force $f_c$ as function of the vortex density
$\rho$ for random columnar defects.
The inset depicts how the I-V curve for $\rho = 144 / A$ saturates at high
drive values.
In all plots, the data points are connected as a guide for the eye.}
\label{ivresults}
\end{figure}
The vortices are seen to remain pinned for both columnar and point defects up
to a critical depinning force, beyond which the system takes on an average
velocity that increases with increasing applied drive.
As expected the depinning threshold is higher for columnar defects
[Fig.~\ref{randomiv}] since the attractive force from each pinning site adds
coherently over the length of the flux line.
The inset in Fig.~\ref{pointsiv} depicts the region of the curve associated
with depinning for point defects, showing just the data for the densest and
most dilute systems.
We observe (again as expected) that denser vortices depin at lower driving
currents for either type of pinning center; estimates for the depinning current
for columnar defects as obtained from our data are depicted in
Fig.~\ref{depinthr}.
As the repulsive interactions between neighboring vortices grow with flux
density, this helps the vortices to overcome the defect pinning potentials.
We find that these results are qualitatively comparable to experimental
findings in high-temperature superconducting materials (see, e.g.,
Refs.~\cite{ANDO,AMMOR,QIANG}), and in accord with results obtained by means of
Langevin molecular dynamics simulations \cite{OLSON3} (in the three-dimensional
regime, see also Ref.~\cite{KOLTON2}), giving us confidence in the validity of
our model and algorithm.
However, we note two artifacts in the I-V results occurring at higher driving
currents and large vortex densities.
First, while at large currents linear Ohmic behavior should ensue, we find that
the velocity values saturate at high applied drive values, see inset of
Fig.~\ref{depinthr}.
This is due to the maximal step size limitation imposed to avoid vortices
`hopping' over pinning defects in the system.
Second, the I-V curves are observed to cross at larger densities because the
Metropolis algorithm updates the system locally rather than globally.
At higher vortex densities larger local moves are suppressed by the repulsive
potential of nearest neighbors, resulting in a lower average velocity and an
I-V curve with a smaller slope than that of lower-density systems.
In the following, we mostly report data that have been collected at a driving
force $f=0.04$.
This high value was chosen so that washboard noise could be observed over a
range of accessible vortex densities; however, in some instances the I-V curve
has already begun to saturate.
Since we are studying the velocity noise spectrum relative to the mean
velocity, we do not expect specific artifacts caused by the saturation in the
voltage noise, nor do we observe any as compared to results obtained at the
lower drive value $f=0.01$ (see the discussion at the end of
Sec.~\ref{sec:Columnar Defects}).
\subsection{Narrowband Noise Characteristics}
\begin{figure}
\begin{center}
\subfigure{\label{washboardpts}
\includegraphics[scale=.55]{./washboardpts.eps}}
\subfigure{\label{washboardran}
\includegraphics[scale=.55]{./washboardran.eps}}
\end{center}
\vskip -0.1 truecm
\caption{Washboard frequency versus vortex density $\rho$ for (a) point and (b)
columnar pinning centers.
Vortex density is reported as a count of the number of flux lines in a fixed
area of size $A = 150 \frac{2}{\sqrt{3}} \, b_0 \times 150 \, b_0$.
The results show good agreement between the measured and calculated values
for both defect types.
The frequency $\omega$ is given in units of rad / MCS.
Error bars for the measured values are obtained from the full width at
half-maximum of the washboard peaks in Figs.~\ref{random} and \ref{points}.
Error bars for the calculated values are smaller than the data points.
$\blacktriangle$ - measured, $\blacksquare$ - calculated.}
\label{washboard}
\end{figure}
Power spectral density plots and intensity plots of the structure factor
$S({\bf k})$ have been obtained for several vortex densities as shown in
Figs.~\ref{random} and \ref{points}.
Unless otherwise indicated, all power spectral density plots are
double-logarithmic with the frequency on the $x$ axis ranging from $0.001$ to
$0.1$ rad / MCS and power in normalized units on the $y$ axis in the range
$2.5 \times 10^{-10} \ldots 1 \times 10^{-4}$.
Distinct peaks are observed as well as higher harmonics in many of the power
spectra for columnar defects.
Peaks are observed in all of the power spectral plots for point defects.
In both cases harmonics are always located at integer multiples of the
fundamental frequency.
In Fig.~\ref{washboard} the fundamental frequencies are plotted versus the
number of vortices per unit system size along with the predicted washboard
frequencies for both point and columnar disorder.
The washboard frequency is calculated by simply dividing the measured average
vortex velocity ${\langle v_x \rangle}$ (parallel to the applied drive, and
averaged over time and defect realizations) by the vortex triangular lattice
constant.
Due to the aspect ratio of the system this distance is obtained by dividing the
system length in the direction of the drive ($L_x$) by the square root of the
number of vortices in the system; hence,
\begin{equation}
\omega = 2\pi {\langle v_x \rangle} \sqrt{N_v} / L_x .
\end{equation}
We obtain good agreement between the measured and predicted values indicating
that the fundamental frequency in the power spectra plots is indeed the
washboard frequency.
Error bars for the measured frequencies are estimated by taking the full widths
at half maximum of the fundamental peaks, while the uncertainties for the
calculated values are obtained from the standard deviations of the average
velocities.
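The predicted washboard frequency is a one-line computation; the following trivial sketch (illustrative only) encodes it together with the geometric origin of the lattice period $L_x / \sqrt{N_v}$:

```python
import math

def washboard_frequency(v_x, n_vortices, L_x):
    """Washboard angular frequency omega = 2 pi <v_x> sqrt(N_v) / L_x:
    the mean drift velocity divided by the lattice period along the
    drive, L_x / sqrt(N_v), times 2 pi."""
    return 2.0 * math.pi * v_x * math.sqrt(n_vortices) / L_x
```

At fixed drift velocity the frequency grows as $\sqrt{N_v}$, consistent with the density dependence in Fig.~\ref{washboard}.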
\subsubsection{Randomly Distributed Columnar Defects}
\label{sec:Columnar Defects}
\begin{figure}
\begin{center}
\subfigure{\label{16random}\includegraphics[scale=.25]{./16random_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{36random}\includegraphics[scale=.25]{./36random_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{64random}\includegraphics[scale=.25]{./64random_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{100random}\includegraphics[scale=.25]
{./100random_xnoise.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Velocity / voltage power spectra measured in the direction of the
drive ($x$ direction) and structure factor plots for increasing vortex
density in the presence of randomly distributed columnar defects.
(a) For a density $\rho = 16 / A = 0.00062/b_0^2$ an isotropic liquid is
observed along with a broadband noise signal in the power spectrum.
(b) At $\rho = 36 / A = 0.00139/b_0^2$ two (off-center) peaks appear in the
structure factor located transverse to the drive direction, accompanied by a
narrowband signal in the velocity power spectrum.
(c) Six peaks appear in the structure factor plot for
$\rho = 64 / A = 0.00246/b_0^2$.
The intensities of the peaks with a wave vector component in the $x$
direction are lower than those with only a $y$ component.
In the velocity power spectrum the washboard frequency peak narrows
indicating greater temporal coherence, and higher harmonics become more
visible.
(d) As the density is increased to $\rho = 100 / A = 0.00385/b_0^2$ the
structure becomes more fully ordered into a triangular array, and the
washboard peak sharpens.}
\label{random}
\end{figure}
{\em Noise characteristics and structure.}
Results for flux lines of different densities interacting with random columnar
defects are displayed in Fig.~\ref{random}.
With the average spacing between defects set to $15 \, b_0$ the number of
columns in a unit area $A = 150 \frac{2}{\sqrt{3}} \, b_0 \times 150 \, b_0$ is
$115$.
For a flux density $\rho = 16 / A$ (a filling fraction of $\sim 1/7$) only
broadband noise is observed in the velocity power spectrum, see
Fig.~\ref{16random}.
The corresponding diffraction pattern shows a ring indicating an isotropic
liquid or amorphous solid.
A typical Delaunay triangulation for the vortex positions in a snapshot of a
particular run with $16$ lines is depicted in Fig.~\ref{delauny16random}.
The plot shows the presence of a number of topological defects in the vortex
system.
The `time exposure' plot in Fig.~\ref{16ran_traject} of the flux line
center-of-mass positions produces trajectories reminiscent of the `braided
rivers' observed in two-dimensional plastic flow simulations \cite{OLSON2}.
The trajectories appear to move and cross within winding channels.
For a density $\rho = 36 / A$ the two peaks located perpendicular to the drive
direction fully emerge, suggestive of the predicted moving transverse glass,
see Fig.~\ref{36random}.
Small intensity peaks with wave vector components parallel to the drive are
also visible.
\begin{figure}
\begin{center}
\subfigure[$\rho = 16 / A$]{\label{delauny16random}
\includegraphics[scale=.35]{./delauny16random.eps}} \
\subfigure[$\rho = 64 / A$]{\label{delauny64random}
\includegraphics[scale=.35]{./delauny64random.eps}}
\subfigure[$\rho = 16 / A$]{\label{16ran_traject}
\includegraphics[scale=.125]{./16ran_traject.eps}} \
\subfigure[$\rho = 64 / A$]{\label{64ran_traject}
\includegraphics[scale=.125]{./64ran_traject.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Delaunay triangulation plots for the positions of (a) $16$ and (b)
$64$ flux lines per area $A$ in the presence of random columnar defects.
Topological defects are marked.
A disordered structure is obtained for flux density $\rho = 16 / A$, while a
triangular array free of topological defects is found for $\rho = 64 / A$.
A `time exposure' for densities $16 / A$ and $64 / A$ is plotted in (c) and
(d) to illustrate the flux line motion.
Open circles indicate columnar defect locations, while black lines trace the
trajectories of the average position of each vortex.
For $\rho = 16 / A$, the vortex trajectories cross suggestive of plastic
flow, while for $\rho = 64 / A$ parallel channels form.}
\label{delaunyrandom}
\end{figure}
As the vortex density $\rho$ is increased to $64 / A$, peaks with an $x$
component in the diffraction plot emerge more prominently and sharpen as shown
in Fig.~\ref{64random}.
Just as for $\rho = 36 / A$, these peaks are smaller than those located
perpendicular to the direction of the drive.
In a typical associated Delaunay plot, Fig.~\ref{delauny64random}, topological
defects in the vortex lattice disappear, and the flux line trajectories form
parallel channels, running in comparatively straight lines,
Fig.~\ref{64ran_traject}.
At $\rho = 100 / A$ the structure plot reveals a well-ordered lattice of flux
lines, see Fig.~\ref{100random}.
These results are a good demonstration of the competing energy scales in the
system.
The spatially randomly distributed pinning sites favor disordered flux line
structures while the vortex repulsion induces ordering into a regular array.
For $\rho = 16 / A$ the structure factor displays a random vortex
configuration, indicating that the lattice structure is dominated by the
columnar pinning sites; indeed, the system is close to the depinning threshold,
see Fig.~\ref{depinthr}.
Individual vortices are pinned for periods of time that are long compared to
the time it would take for the flux lattice to move one lattice constant, thus
preventing the formation of the regular structure.
At density $36 / A$, the system has moved away from the depinning threshold,
and the repulsive forces begin to separate the vortices into parallel channels
resulting in spatial periodicity in the $y$ direction (perpendicular to the
drive), and by reaching $\rho = 64 / A$ the system of moving vortices is
dominated by the repulsive vortex interaction potential.
At this stage individual channels begin to couple as additional periodicity
appears in the $x$ (drive) direction.
Distinct peaks emerge in the diffraction plot; those possessing a wave vector
component in the $x$ direction are weaker than the peaks with only a $y$
component, owing to the weaker coupling along the drive.
As the density is increased to $\rho = 100 / A$ the repulsive energy between
the vortices grows further leading to a stronger coupling between transverse
channels and eventually a symmetric triangular lattice structure emerges.
\begin{figure}
\begin{center}
\includegraphics[scale=.55]{./power_curve.eps}
\end{center}
\caption{Washboard peak power for increasing vortex density for columnar
(square data points) and point defects (triangles).
In either case the peak intensity is observed to decrease as the flux density
is increased.}
\label{powertrend}
\end{figure}
The spatial structure of the vortex array is reflected in the associated
velocity or voltage power spectrum.
At $\rho = 16 / A$ a broadband signal is visible corresponding to the isotropic
vortex distribution and their consequent random incoherent motion shown in
Fig.~\ref{16ran_traject}.
Similar broadband noise that is associated with incoherent flux transport has
been experimentally and numerically observed in a number of studies
\cite{OLSON2,VEST,TOGAWA}.
For $\rho = 36 / A$, as the vortices begin to arrange into a lattice a
narrowband peak appears corresponding to the $x$ wave vector components that
appear in the structure factor plot.
At a density $64 / A$ the washboard signal narrows as the vortex system becomes
more structured.
A sharper peak indicates a greater temporal coherence between the vortices,
which is not surprising: as the density of the vortices is increased, each
vortex is held more tightly in its place and each line is `stiffened' by the
repulsive forces exerted by its neighbors.
For similar reasons the remnants of the broadband noise flatten out more.
This trend continues as the flux density is further increased to $100 / A$.
As the density is increased from $\rho = 64 / A$ upward we observe a decrease
in the washboard peak power, as shown in Fig.~\ref{powertrend}.
The decrease of the fundamental peak intensity is due to the increase in
stiffness of the lattice structure with growing vortex density.
As each flux line passes through a defect site it will be less affected by the
pin in a denser system resulting in smaller velocity fluctuations and hence
reduced power output.
\begin{table*}
\caption{Ratio of the three largest peaks observed in the vortex velocity power
spectrum in the presence of randomly distributed columnar defects.
For each vortex density the ratios of the intensities of the second and third
peak to the first are reported, for measurements taken in both the $x$ and
$y$ directions.
The number of runs over which the power spectral density plots were averaged
is also listed.
The corresponding power spectra peak ratios for a periodic piecewise linear
(sawtooth) signal is included for comparison.}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline vortex density $\rho$ & runs & ratio ($x$ direction) &
ratio ($y$ direction) \\
\hline $64 / A = 0.00246/b_0^2$ & $115$ & $1$ : $0.20\pm0.02$ : $0.030\pm0.003$
& $1$ : $0.25\pm0.03$ : $0.079\pm0.009$ \\
\hline $100 / A = 0.00385/b_0^2$ & $44$ & $1$ : $0.31\pm0.04$ : $0.055\pm0.008$
& $1$ : $0.30\pm0.05$ : $0.070\pm0.010$ \\
\hline $144 / A = 0.00554/b_0^2$ & $44$ & $1$ : $0.38\pm0.08$ : $0.120\pm0.030$
& $1$ : $0.36\pm0.06$ : $0.055\pm0.008$ \\
\hline $f(x)=x, \, 0<x<2\pi$ && $1$ : $0.25$ : $0.11$ & \\ \hline
\end{tabular}
\end{center}
\vskip -0.2 truecm
\label{columnpower}
\end{table*}
{\em Washboard peak harmonics.}
In Table~\ref{columnpower} the ratio of the intensities of the first and second
harmonic with respect to the fundamental peaks are recorded for the velocity
noise measured in the $x$ and $y$ directions.
These ratios vary as the flux density increases indicating a change in the
shape of the velocity vs. time trace.
It is observed that the ratios approach values similar to those measured for
point defects (listed in Table~\ref{pointpower} below).
We will investigate the relationship between the underlying defect type and the
harmonics ratios in Sec.~\ref{variablepinstrength}.
A narrowband signal is also measured in the $y$ direction transverse to the
drive for densities of $64 / A$ through $144 / A$ flux lines (not shown).
The ratios of the harmonics to the fundamental peak intensity in the $y$
direction are also reported in Table~\ref{columnpower}.
The frequency of the fundamental peak as well as the higher harmonics are
identical to the frequencies measured in the $x$ direction.
While the power is lower in the $y$ direction (by factors of $10$ to $40$), the
ratios of the peak intensities are similar.
Since there is no drive in the $y$ direction, these observations could perhaps
be interpreted as a suppression of transverse motion occurring at the same
frequency as the washboard motion.
One likely explanation is that as a vortex becomes temporarily trapped in a
pinning potential, fluctuations transverse to the motion are suppressed until
the vortex departs from the pinning site, resulting in periodic behavior.
\begin{figure}
\begin{center}
\subfigure{\label{radgran1}\includegraphics[scale=.55]{./radgran.eps}}
\subfigure{\label{radgpts}\includegraphics[scale=.55]{./radgpts.eps}}
\end{center}
\vskip -0.1 truecm
\caption{Components of the mean radius of gyration for (a) columnar and (b)
point pins.
For both defect types the radius of gyration decreases with increasing vortex
density.
At higher flux densities vortices are caged by nearest neighbors, suppressing
elastic flux line stretching.
Lengths are reported in units of the pin radius $b_0$.
$\blacktriangle$ - $x$ component, $\blacksquare$ - $y$ component.}
\label{radgran}
\end{figure}
{\em Radius of gyration.}
In order to obtain additional information about the three-dimensional shape of
the elastic vortex lines moving through a sample with columnar defects, we have
obtained radius of gyration data, averaged over vortices, time, and defect
configurations.
We interpret the radius of gyration as the distance the elastic line is
stretched from its average (center-of-mass) position.
The components of the radius of gyration in the $x$ and $y$ direction vs. flux
density are plotted in Fig.~\ref{radgran1}.
As the flux density is increased the radius of gyration decreases indicating
that the vortex lines are straightened at higher densities due to the stronger
repulsion exerted by their nearest neighbors.
The data also display anisotropy in the flux line `stretching'.
The magnitude of the radius of gyration is markedly greater along the direction
of the drive than perpendicular to it, with the $x$ and $y$ components
approaching each other only at quite large densities.
For dilute systems we would expect the competition between the drive and the
pinning potential to result in flexible vortices depinning in sections, with
some parts of the flux lines leaving the columnar pins via `double-kink' and
`half-loop' saddle-point configurations \cite{NEL1}.
In the presence of the drive the free flux line sections move forward and grow,
stretching the vortex until it depins completely.
Since there is no drive in the $y$ direction, the $y$ component of the radius
of gyration is smaller; however, the same trend is observed with increasing
density as with the $x$ component.
At high densities the vortices are stiffer due to a smaller vortex separation
distance.
Rather than stretching, depinning tends toward an `all or nothing' process with
the tightly packed vortex ensembles effectively becoming two-dimensional.
\begin{figure}[b]
\begin{center}
\subfigure{\label{144 low drive}
\includegraphics[scale=.25]{./144ran_for01_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{144 high drive}
\includegraphics[scale=.25]{./144random_xnoise.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Voltage power spectra for (a) low ($f=0.01$) and (b) high ($f=0.04$)
driving forces at $\rho = 144 / A$ in the presence of columnar defects.}
\label{highlowdrive}
\end{figure}
{\em Comparison with results obtained at small drive.}
The power spectra results reported above have been obtained at a rather high
applied drive $f=0.04$ chosen so that the washboard peak could be observed over
a range of vortex densities.
However, as discussed above, the velocity in the I-V characteristics
(Fig.~\ref{ivresults}) begins to saturate at such high drive values, especially
for systems with higher flux density.
We believe that this is an artifact of the limited step size in the simulation.
In order to investigate whether the velocity saturation influences the velocity
fluctuations we now examine the power spectra at a lower drive value $f=0.01$
and compare the results to those for $f=0.04$.
We consider here a dense vortex system with $\rho = 144 / A$ subject to
randomly placed columnar defects.
This flux density was chosen to preserve the washboard peak at a lower drive.
Results are displayed in Fig.~\ref{highlowdrive}.
These data suggest that the velocity saturation does {\em not} adversely affect
the velocity power spectra.
The expected drop in washboard frequency corresponding to the decrease in
applied drive is observed.
The full peak widths at half maximum of the fundamental are measured at high
and low drive and found to be comparable
($0.00036~{\rm rad/MCS}$ and $0.00041~{\rm rad/MCS}$, respectively), indicating
that the temporal correlations are not affected by the saturation.
Both spectra display a similar harmonic ratio $1 : 0.40\pm0.05 : 0.10\pm0.01$.
Likewise the structure factors are found to be similar.
These results suggest that data obtained at high drive values do not possess
any pronounced artifacts due to velocity saturation.
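The power spectra discussed throughout are obtained by Fourier transforming a
velocity-time trace. The sketch below does this for a synthetic trace and
locates the washboard peak; all parameter values here are illustrative
placeholders, not those of the actual simulation:

```python
import numpy as np

# Synthetic center-of-mass velocity trace: mean drift plus a washboard
# oscillation and white noise (all values are made up for illustration).
T = 16384                              # number of Monte Carlo steps
t = np.arange(T, dtype=float)
nu_w = 160.0 / T                       # assumed washboard frequency, cycles/MCS
rng = np.random.default_rng(0)
v = 0.2 + 0.05 * np.sin(2 * np.pi * nu_w * t) + 0.01 * rng.normal(size=T)

# One-sided power spectral density of the velocity fluctuations.
dv = v - v.mean()
psd = np.abs(np.fft.rfft(dv)) ** 2 / T
freq = np.fft.rfftfreq(T, d=1.0)

# The washboard peak is the dominant finite-frequency feature.
peak_freq = freq[1:][np.argmax(psd[1:])]
```

Lowering the mean drift velocity (i.e., the drive) moves the located peak to
lower frequency, which is the behavior described above for
Fig.~\ref{highlowdrive}.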
\subsubsection{Randomly Placed Point Defects}
The results for the velocity power spectra and associated structure factors for
vortices driven through randomly distributed point defects show many
qualitative similarities to those obtained for columnar pins, see
Fig.~\ref{points}.
However, the structure factor plot associated with an isotropic liquid is never
observed in our simulations, even at the lowest flux densities.
The intensities of the structure factor peaks increase with growing flux
density, indicating an increasing degree of positional order in the system.
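For reference, the quantity plotted in these diffraction panels is the standard
static structure factor (our notation; normalization conventions may differ),
\[
S(\mathbf{k}) \propto
\left\langle \left| \sum_{j=1}^{N}
e^{i\mathbf{k}\cdot\mathbf{r}_{j}} \right|^{2} \right\rangle ,
\]
where the sum runs over the $N$ vortex positions $\mathbf{r}_{j}$ projected
onto the plane perpendicular to the field, and the average is taken over time
and disorder realizations; sharper Bragg-like peaks indicate stronger
positional order.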
For the lowest density system, Fig.~\ref{16points}, the structure factor plot
implies greater spatial periodicity perpendicular to the direction of the drive
than parallel to it, similar to the case of columnar defects, suggesting that
parallel channels are not well-coupled in this regime.
This is further evident in the Delaunay triangulation plot shown in
Fig.~\ref{delauny16points}.
Compared to the system with a density $16 / A$, the Delaunay plot for
$\rho = 36 / A$, Fig.~\ref{delauny36points}, yields greater alignment
perpendicular to the drive owing to the disappearance of topological defects in
the vortex lattice.
The reason for this is similar to that for columnar defects; the pinning sites
introduce shear between parallel channels resulting in less-ordered rows of
vortices that run perpendicular to the drive.
As the density increases the vortex repulsion becomes the dominant energy in
the system, and the coupling between parallel channels is enhanced.
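The Delaunay analysis behind plots such as Fig.~\ref{delaunypoints} can be
sketched as follows: the triangulation yields each vortex's neighbor count, and
sites with coordination different from six mark topological defects. This is a
minimal sketch that ignores the periodic boundaries of the actual simulation:

```python
import numpy as np
from scipy.spatial import Delaunay

def coordination_numbers(points):
    """Number of Delaunay neighbors of each 2-D point.

    In a triangular lattice, interior points have coordination 6;
    5/7-coordinated sites mark topological defects (disclinations).
    """
    tri = Delaunay(points)
    neighbors = [set() for _ in range(len(points))]
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    return np.array([len(s) for s in neighbors])

# A small perfect triangular patch: interior sites should be six-fold.
pts = np.array([(col + 0.5 * (row % 2), row * np.sqrt(3) / 2)
                for row in range(7) for col in range(7)])
coord = coordination_numbers(pts)
```

Boundary sites of this open patch naturally have fewer neighbors; in the
periodic simulation cell every non-defective site is six-fold coordinated.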
\begin{figure}
\begin{center}
\subfigure{\label{16points}\includegraphics[scale=.25]{./16points_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{36points}\includegraphics[scale=.25]{./36points_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{64points}\includegraphics[scale=.25]{./64points_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{100points}\includegraphics[scale=.25]
{./100points_xnoise.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Power spectra and structure factor plots for increasing vortex density
$\rho$ in the presence of point pins, with
(a) $\rho = 16 / A = 0.00062/b_0^2$, (b) $36 / A = 0.00139/b_0^2$,
(c) $64 / A = 0.00246/b_0^2$, and (d) $100 / A = 0.00385/b_0^2$.
Narrowband washboard noise peaks are observed in all the spectral density
plots, and six-fold coordination in the corresponding vortex structure
factors.}
\label{points}
\end{figure}
\begin{figure}[b]
\begin{center}
\subfigure[$\rho = 16 / A$]{\label{delauny16points}
\includegraphics[scale=.35]{./delauny16points.eps}} \
\subfigure[$\rho = 36 / A$]{\label{delauny36points}
\includegraphics[scale=.35]{./delauny36points.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Delaunay triangulation for vortices with densities
(a) $\rho = 16 / A$ and (b) $36 / A$, interacting with randomly distributed
point pinning centers. Topological defects are indicated.}
\label{delaunypoints}
\end{figure}
In the velocity power spectrum for flux density $\rho = 16 / A$ depicted in
Fig.~\ref{16points}, the narrowband signal rests on a low broadband background
signal.
As the vortex density increases the broadband component diminishes, as do the
power and width of the washboard peak.
In general, the peak widths are smaller than those observed for columnar
defects, as is to be expected for uncorrelated and less efficient pinning
centers.
In the spectra where peaks are resolvable, the fundamental peak and higher
harmonics are located at frequencies that are identical in the $x$ and $y$
directions.
The ratios of the harmonics to the fundamental in either direction are listed
in Table~\ref{pointpower}.
These ratios are comparable for flux densities $\rho = 16 / A$ through
$100 / A$, suggesting a similar shape of the velocity-time traces.
For a system with $\rho = 144 / A$, however, the ratio is markedly different.
At this point it is not clear to us which physical mechanism dictates the
detailed values of these peak intensity ratios.
Yet, as we shall further argue in Sec.~\ref{variablepinstrength} below, we
believe that these ratios characterize the overall deformation of the vortex
lattice rather than reflect the geometry of the pinning centers.
\begin{table*}
\caption{Ratio of the three largest washboard peaks observed in the velocity
power spectrum for point defects.
For each vortex density the ratios of the intensities of the second and third
peak to the first are reported, for measurements taken in both $x$ and $y$
directions.
The number of runs over which the power spectral density plots were averaged
is also included.}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline vortex density $\rho$ & runs & ratio ($x$ direction) & ratio ($y$
direction) \\
\hline $16 / A = 0.00062/b_0^2$ & $44$ & $1$ : $0.67\pm0.09$ : $0.27\pm0.02$
& n / a \\
\hline $36 / A = 0.00139/b_0^2$ & $44$ & $1$ : $0.55\pm0.06$ : $0.22\pm0.02$
& $1$ : $0.4\pm0.1$ : $0.40\pm0.10$ \\
\hline $64 / A = 0.00246/b_0^2$ & $276$ & $1$ : $0.58\pm0.03$ : $0.24\pm0.01$
& $1$ : $0.51\pm0.04$ : $0.16\pm0.01$ \\
\hline $100 / A = 0.00385/b_0^2$ & $88$ & $1$ : $0.56\pm0.05$ : $0.25\pm0.02$
& n / a \\
\hline $144 / A = 0.00554/b_0^2$ & $88$ & $1$ : $0.80\pm0.10$ : $0.19\pm0.02$
& $1$ : $0.43\pm0.06$ : $0.08\pm0.01$ \\
\hline
\end{tabular}
\end{center}
\vskip -0.2 truecm
\label{pointpower}
\end{table*}
The mean flux line radius of gyration (averaged over vortices, time, and
pinning site distributions) vs. vortex density for point defects is plotted in
Fig.~\ref{radgpts}.
The behavior for both components is similar to that of a system subject to
randomly placed columnar defects.
A larger radius of gyration is observed for low-density systems, while it
decreases for both $x$ and $y$ components as the flux density is increased.
Unlike for columnar defects, the pinning force for uncorrelated point disorder
does not add coherently over the length of the flux lines.
Whereas the random spatial distribution of point pins promotes thermal flux
line wandering, the stretching of the vortices while moving through a defect is
not as severe.
Comparing the data to those for columnar defects with identical flux densities,
we note that the magnitudes of both components of the radius of gyration are
smaller for point defects.
As the density is increased both components of the radius of gyration for point
defects approach the same values as also seen in the corresponding curves for
columnar defects.
\begin{figure}
\begin{center}
\subfigure{\label{64random_long}
\includegraphics[scale=.25]{./64random_long_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{64points_long}
\includegraphics[scale=.25]{./64points_long_xnoise.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Velocity fluctuation power spectra and structure factor plots for a
flux density $\rho = 64 / A$ for (a) columnar and (b) point defects in a
spatially extended system with $L_z = 60$, i.e., all flux lines have been
lengthened by a factor of $3$ compared to the previous runs.
The results for columnar defects shown in (a) are very similar to those for
the same density with shorter lines, as depicted in Fig.~\ref{64random}.
On the other hand, for point defects (b) the power drops noticeably compared
to Fig.~\ref{64points}.}
\label{64long}
\end{figure}
We have also examined the effect of the total vortex length $L_z$ on the
velocity power spectrum.
Data for $L_z = 60$ (three times the length of the vortices shown in all
previous results) at a density of $64$ lines have been included in
Figs.~\ref{64random_long} and \ref{64points_long}.
Qualitatively, the findings are quite similar to those for the shorter vortex
length; however, in the presence of point defects, compared to the results
shown in Fig.~\ref{64points}, the narrowband power is lower.
(The peak power for $L_z = 20$ in normalized units is
$1.14 \cdot 10^{-6} \pm 9 \cdot 10^{-8}$ compared to
$4.2 \cdot 10^{-7} \pm 3 \cdot 10^{-8}$ for $L_z = 60$.)
On the other hand, a similar intensity drop is not observed in the case of
columnar defects, compare Fig.~\ref{64random_long} to Fig.~\ref{64random}.
The additional flux line length and the lack of spatial correlation in the $z$
direction for point disorder further demonstrate the difference in pinning
efficiency between point and columnar defects.
For columnar pins the lengths of both the defect and the vortex span the height
of the sample just as they did for the shorter system; hence we observe a
similar effect on the motion and the power spectrum.
For point defects, as the length of the vortex is increased, the relative
effect of a single point defect on the longer vortex decreases, resulting in a
smaller power output: local fluctuations are averaged out.
We note that these distinctions are naturally only observable in
three-dimensional simulations.
\subsection{Variable Pinning Efficiency}
In this section we examine the effect of varying pinning efficiency on the
vortex structure factor and velocity fluctuation power spectrum.
We shall study the cases of different point pin strengths and linear defects of
fixed strength but varying length, thus interpolating between uncorrelated
point disorder and correlated columnar defects.
We are particularly interested in whether point and columnar defects display
different washboard harmonic signatures in the velocity power spectrum.
\subsubsection{Variable Point Pinning Strength}
\label{variablepinstrength}
\begin{figure}
\begin{center}
\subfigure{\label{powergraph}
\includegraphics[scale=.55]{./power_vs_pinstren.eps}}
\vskip -0.1 truecm
\subfigure{\label{pinstrength1_7}
\includegraphics[scale=.25]{./1_7_def_strength_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{pinstrength1_9}
\includegraphics[scale=.25]{./1_9_def_strength_xnoise.eps}}
\end{center}
\vskip -0.3 truecm
\caption{(a) Velocity spectrum washboard peak power for increasing point
pinning strength $U$.
The vortex density is held constant at $\rho = 64 / A$.
As the pinning strength is increased the washboard signal grows to a maximum
value around $1.7 \, U_0$ (b) and then decreases as the system transitions to
a broadband signal around $1.9 \, U_0$ (c).}
\label{power_v_pinstrength}
\end{figure}
To carry out this investigation, simulation runs were performed at fixed flux
density $\rho = 64 / A$ while the point pinning strength was increased from the
value $U_0$ used before up to $U = 1.9 \, U_0$.
For each pinning strength results were obtained by averaging over random
distributions of point defects.
The number of pinning sites per run was held constant; only the pinning
strength was changed.
The power of the washboard peak is plotted versus pinning strength in
Fig.~\ref{powergraph}.
We note that the power reaches a maximum value at approximately $1.7 \, U_0$,
see Fig.~\ref{pinstrength1_7}.
The increase in power is due to the stronger pinning causing larger velocity
fluctuations in the vortex array.
The width of the peak increases along with its power as indicated by the error
bars in Fig.~\ref{powergraph}.
The signal then decreases as the power output transitions to a broadband signal
coinciding with decreasing coherent vortex motion, Fig.~\ref{pinstrength1_9}.
\begin{table}
\caption{Intensity ratio of the three largest velocity power spectrum peaks for
increasing point defect pinning strength, recorded as multiples of the
initial pinning strength $U_0$.
For each pinning strength the ratios of the intensities of the first and
second peak to the third are given for measurements taken in the drive ($x$)
direction.
The number of runs over which the power spectral density plots were averaged
is also included.}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline $U / U_0$ & runs & ratio ($x$ direction) \\
\hline $1.1$ & $44$ & $1$ : $0.55\pm0.06$ : $0.150\pm0.020$ \\
\hline $1.3$ & $44$ & $1$ : $0.33\pm0.04$ : $0.045\pm0.008$ \\
\hline $1.5$ & $44$ & $1$ : $0.20\pm0.03$ : $0.012\pm0.004$ \\
\hline
\end{tabular}
\end{center}
\vskip -0.2 truecm
\label{pointpower_pinstren}
\end{table}
Washboard harmonic peak ratios for a few pinning strengths are recorded in
Table~\ref{pointpower_pinstren}.
We compare these results to the columnar defect data in Table~\ref{columnpower}
and note that similar harmonic ratios are observed for the weaker columnar
defects at a density of $\rho = 64 / A$ and the stronger point defects with
$U = 1.5 \, U_0$.
These findings demonstrate that the harmonic ratios are not unique signatures
of the material defect types compared in this study.
We speculate that the ratios are rather dependent on the amount of deformation
of the vortex lattice by the defects regardless of whether or not the pins
extend along the length of the vortices.
\subsubsection{Variable Linear Defect Length}
\begin{figure}
\begin{center}
\subfigure{\label{len1}\includegraphics[scale=.25]{./16def_len_1_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{len3}\includegraphics[scale=.25]{./16def_len_3_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{len5}\includegraphics[scale=.25]{./16def_len_5_xnoise.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Vortex velocity / voltage power spectra and diffraction plots for
increasing columnar defect lengths at a fixed vortex density
$\rho = 16 / A = 0.00062/b_0^2$ with (a) $l = 1$, (b) $l = 3$, and
(c) $l = 5$.
The results reveal how the system evolves from an ordered to a disordered
configuration as the defect length increases.
The washboard signal decreases as the broadband noise grows.}
\label{def_length}
\end{figure}
To further compare the effects of point and columnar defects on the velocity
power spectrum, we have investigated systems with varying columnar defect
lengths at a constant vortex density $\rho = 16 / A$.
Each set of results is obtained by averaging runs over random distributions of
linear defects of a particular length $l$, with the total number of pinning
sites held constant.
Defect lengths vary from a single pinning site, i.e., random point defects, to
a length of $10$ contiguous pinning sites.
(Recall that a columnar defect extending over $20$ pinning sites spans the
entire system height $L_z = 20$.)
The driving force remains at $f=0.04$.
As the defect length is increased the spatial order in the system decreases as
indicated by the structure factor peaks in Fig.~\ref{def_length}.
Peaks with wave vector components along the drive decrease in intensity at
shorter defect lengths compared to those perpendicular to the drive.
With increasing length of the linear pins (growing defect correlations) the
peak intensities become diminished.
Results for defect length $l = 1$, $3,$ and $5$ are shown in
Fig.~\ref{def_length}.
By length $l = 10$ the washboard peak disappears entirely and is replaced by a
broadband signal (not shown).
We note that a second peak, with a frequency corresponding to the system size,
also appears in the power spectrum for $l = 3$.
While the cause of this peak is unclear, one possible explanation is that it
originates in some type of persistent deformation in the vortex lattice.
With periodic boundary conditions this lattice deformation would repeatedly
travel over the same pinning distribution resulting in a periodicity
corresponding to the `time of flight' across the system.
\begin{figure}
\begin{center}
\subfigure{\label{def_1_1}\includegraphics[scale=.25]{./16def_1_1_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{def_1_3}\includegraphics[scale=.25]{./16def_1_3_xnoise.eps}}
\vskip -0.2 truecm
\subfigure{\label{def_1_4}\includegraphics[scale=.25]{./16def_1_4_xnoise.eps}}
\end{center}
\vskip -0.3 truecm
\caption{Results for simulations with random point defects whose pinning
strength is increased from its original value $U_0$ in (a) to
(b) $U = 1.2 \, U_0$ and $U = 1.4 \, U_0$ at a low vortex density
$\rho = 16 / A = 0.00062/b_0^2$.}
\label{def_strength}
\end{figure}
As a comparison, runs were performed for various point defect pinning strengths
at the same low flux density $\rho = 16 / A$.
The process was identical to the previous section.
However, due to the lower density, when the depth of the pinning potential well
was increased to only $1.5$ times its original value $U_0$, the washboard peak
completely disappeared.
The results of these simulations are shown in Fig.~\ref{def_strength}.
We observe that these results look quite similar to those from increasing the
linear defect lengths.
\begin{figure}
\begin{center}
\subfigure{\includegraphics[scale=.6]{variable_def_len_rad.eps}}
\vskip -0.2 truecm
\subfigure{\includegraphics[scale=.6]{variable_point_def_rad.eps}}
\end{center}
\vskip -0.6 truecm
\caption{Mean flux line radius of gyration and average velocity for
(a) increasing linear defect length and (b) enhanced point pinning strength.
The radius of gyration data for increasing lengths initially follows an
exponential of the form $r_g \propto \exp(- l_0/l)$ while the data for
increasing pinning strength is best described by a quadratic fit.
$\blacktriangle$ - average velocity, $\blacksquare$ - radius of gyration
($x$ component).}
\label{def_str_vs_col_len}
\end{figure}
While the above power spectra are qualitatively comparable, the growth of the
mean vortex line radius of gyration turns out to be quantitatively different in
both situations, as shown in Fig.~\ref{def_str_vs_col_len}.
The data for columnar disorder initially grows as an exponential function,
$r_g \propto \exp(- l_0/l)$ where $l$ is the length of the columnar defects,
while the data for point defects is best described by a quadratic fit
$r_g \propto (U / U_0)^2$.
While we cannot offer a quantitative theory for these trends, they can be
understood qualitatively by examining the average vortex velocity as these two
pinning types are varied.
In both cases, as the `pinning effectiveness' is increased, the velocity
decreases and the radius of gyration increases.
We interpret the increase in radius of gyration as the vortices being stretched
as they are simultaneously held by the defects and pulled by the externally
applied drive.
As the linear defect length is increased the velocity decreases, but by length
$l = 20$ (the height of the system) the vortices are still moving.
[Compare the velocity trends in Fig.~\ref{def_str_vs_col_len}(a) and (b).]
This indicates that the columnar defects cannot stop the vortices at this
particular flux density and applied drive, and the `stretching' of the vortices
is limited leading to saturation of the radius of gyration.
On the other hand, for the point pins the pinning strength is not limited,
leading to greater stretching.
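The functional forms quoted above can be fit with standard nonlinear least
squares; the sketch below recovers the parameters of the exponential form from
synthetic radius-of-gyration data. The helper names, the constant offset in the
quadratic form, and all parameter values are our illustrative assumptions, not
the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def rg_columnar(l, A, l0):
    """Saturating exponential r_g(l) = A exp(-l0 / l) vs. linear-defect length."""
    return A * np.exp(-l0 / l)

def rg_point(u, B, C):
    """Quadratic form r_g ~ B + C (U/U0)^2 assumed for the point-pin data."""
    return B + C * u ** 2

# Illustrative, noiseless data generated from the exponential form.
lengths = np.arange(1.0, 11.0)
data = rg_columnar(lengths, 2.5, 4.0)
popt, _ = curve_fit(rg_columnar, lengths, data, p0=(1.0, 1.0))
```

With real (noisy, disorder-averaged) data the same call returns the covariance
matrix as its second output, from which parameter uncertainties follow.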
\section{Summary and Conclusion}
\label{sec:Summary and Conclusion}
To summarize, a non-equilibrium Monte Carlo simulation code has been developed
to study the effects of different types of pinning centers on the velocity
power spectrum of vortices driven through a disordered medium in the
non-equilibrium steady state \cite{TOM,DAS}.
Specifically, we have investigated the evolution of the washboard signal as we
have increased the vortex / magnetic flux density in the presence of point and
columnar defects.
In order to achieve a more complete understanding of the velocity power
spectra, we have also examined the corresponding two-dimensional vortex
structure factor and the average flux line radius of gyration.
We have confirmed that our numerical model displays the appropriate physical
behavior over a large section of parameter space.
For instance, vortices arrange into a six-fold lattice when interacting with
only weak material defects.
In the presence of an applied current and sufficiently strong pinning sites,
the vortex lattice reorients with a lattice vector along the drive direction.
Furthermore, we have obtained the current-voltage characteristics and found
these to be qualitatively similar to I-V curves obtained in experiments.
In our simulation vortices depin above a critical applied force and gain
velocity as the applied force is increased.
Changes in the I-V curves due to different pinning site types and vortex
densities occur as expected.
At high applied drive, our algorithm produces saturating I-V curves, an
artifact that originates in a necessary limitation of the maximum allowed
displacements.
However, we have provided evidence that suggests this limitation does not
adversely affect the associated power spectra and other physical quantities in
the regime investigated here.
Expected physical behavior is also observed in the vortex structure factor.
For both pointlike and linear extended defects, as the vortex density is
increased spatial ordering is observed to increase in the diffraction plots.
When interacting with columnar pins, the vortex system structure factor is
found to evolve from that of a liquid, to a smectic, and finally a triangular
lattice.
For point defects only the regular triangular array is realized in the
parameter space studied here.
We remark that the structure factor plots with point defects at low vortex
densities appear qualitatively similar to those for columnar defects at higher
flux densities.
Velocity fluctuation or voltage noise power spectra measured parallel to the
drive have been obtained for various vortex densities in the presence of both
columnar and point defects.
A narrowband signal is observed over a large vortex density interval, with the
peak coinciding with the washboard frequency.
The power spectrum has also been measured perpendicular to the drive, and a
signal at the washboard frequency is observed there as well.
Harmonics have been detected at multiples of the washboard frequency for both
types of disorder.
For columnar defects the ratios of the power of the first and second harmonic
to the third turn out to be larger than for point defects.
However, as the density is increased, the ratio decreases for columnar defects.
By varying the pinning strength of the point defects at a constant vortex
density we have obtained similar harmonic ratios to that of columnar defects.
This indicates that the harmonic ratios are not unique indicators of the type
of material defects present in the sample.
Rather, we think that the harmonic ratios should be a function of the degree of
deformation of the vortex lattice.
We remark that the detailed features of the flux flow power spectrum are also
influenced by the configuration of the leads above the sample surface
\cite{CLEM}.
In real experimental setups, these effects may mask the noise signatures
observed in our simulations.
In order to investigate the shape of the fluctuating vortex lines we have
determined the average radius of gyration in the presence of point and columnar
defects in the $x$ and $y$ directions.
The general (and expected) trend is for the radius of gyration to decrease as
the density of the vortices is increased.
Our results for the gyration radius also reveal that transverse fluctuations do
not appear to play a large role in the thermal flux line wandering for the
range of parameters investigated here:
The transverse component in fact rarely exceeds the radius of a pinning site
except for the lowest vortex densities studied.
In addition, we have measured these various observables as the columnar defect
length, i.e., the degree of correlation in the disorder, was varied, and
compared the results to the effects of varying just the pinning strength of
point defects.
Changing the defect character from point-like to columnar shows similar results
to increasing point defect pinning strength in both the diffraction and power
spectral density plots.
Different effects on the vortices through adjusting these two distinct pinning
mechanisms become apparent upon comparing the growth of the radius of gyration
in the direction along the drive.
While the results seem to correspond to physical intuition, a precise theory
explaining the quantitative growth of the radius of gyration in either
situation is currently lacking and will have to be developed.
\begin{acknowledgement}
This work was in part supported through the U.S. National Science Foundation,
Division of Materials Research, grants NSF DMR-0075725 and 0308548, and
through the Bank of America Jeffress Memorial Trust, research grant J-594.
Some of the data shown were obtained from simulations run on Virginia Tech's
Anantham cluster.
We gratefully acknowledge helpful discussions with I. Georgiev,
T. Klongcheongsan, E. Lyman, M. Pleimling, G. Pruessner, B. Schmittmann,
S. Teitel, and R.K.P. Zia.
\end{acknowledgement}
\section{INTRODUCTION}\label{sec:intro}
The most prevalent theory for the origin of galaxy spins is the tidal torque
theory, which explains that a galaxy acquired its spinning motion (i.e., its
angular momentum) via the tidal interaction with the surrounding matter
fluctuations at its proto-galactic stage \citep{peebles69,dor70,white84}.
One of the key predictions of the tidal torque theory is the existence of the local correlations between
the galaxy spin and the tidal shear fields
\citep{CT96a,CT96b,LP00,PLS00,LP01,porciani-etal02a,porciani-etal02b}:
The initial tidal torques which lasted till the turn-around moments caused the alignments between the spin axes
of the proto-galaxies and the intermediate principal axes of the linear tidal fields. In the subsequent
evolution the initially established spin-shear alignments would become weaker as the non-linear effects like
galaxy-galaxy interactions should play a role of randomizing the galaxy spin axes
\citep{dubinski92,CT96b,cri-etal01,cri-etal02,porciani-etal02a,LP02,LP08}.
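In this linear picture, the angular momentum acquired to first order can be
written schematically as (the standard tidal torque result; see the references
above for the precise prefactors)
\[
L_{i}(t) \propto a^{2}(t)\,\dot{D}(t)\,\epsilon_{ijk}\,T_{jl}\,I_{lk},
\]
where $a$ is the scale factor, $D$ the linear growth factor, $T_{jl}$ the tidal
shear tensor (the second derivatives of the gravitational potential), and
$I_{lk}$ the inertia tensor of the proto-galactic region; the torque vanishes
when the two tensors share principal axes, which is the origin of the predicted
spin-shear alignments.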
Plenty of observational evidence has been found for the existence of the
intrinsic spin-shear correlations
\citep[see][for a comprehensive review]{schafer09}.
For instance, \citet{LE07} provided a direct observational evidence for the existence of the intrinsic
spin-shear correlations by using the tidal shear fields reconstructed from the 2MASS
redshift survey \citep{erdogdu-etal06} and the galaxy catalog compiled by B. Tully \citep{PLS00}.
However, even though the main result of \citet{LE07} was seemingly consistent with the prediction of the linear
tidal torque theory, they noted that the detected environmental dependence of the spin-shear alignments could
not fit well into the tidal torque picture. The nonlinear effects which would
destroy the initially established spin-shear alignments should be stronger in
denser environments. Therefore, in the tidal torque picture, the galaxies
located in denser regions were expected to show weaker spin-shear alignments. In contrast, what was found
by \citet{LE07} was an opposite trend that the galaxies located in denser environments exhibit stronger intrinsic
spin-shear alignments \citep[see Figure 6 in][]{LE07}.
A similar trend was also detected by \citet{lee11} who measured the spin-spin alignments by using the nearby
disk galaxies from the seventh data release of the Sloan Digital Sky Survey (SDSS DR7) \citep{sdssdr7}.
They showed that the galaxies which have ten or more neighbors within separation distance of
$2\,h^{-1}$Mpc exhibit stronger spin-spin alignments. If the intrinsic spin-spin alignments were related to the
tidally induced spin-shear alignments
\citep[e.g.,][]{PLS00,cri-etal01,cri-etal02,jing02,mac-etal02,porciani-etal02a,porciani-etal02b},
this observational finding of \citet{lee11} should be interpreted as another evidence for the stronger
spin-shear alignments in denser environments, consistent with the result of \citet{LE07}.
Although \citet{lee11} attempted to explain the observed environmental dependence of the intrinsic spin-spin
alignments in the tidal torque picture, ascribing it to the dependence of the tidal strength on the local density, it
remains unanswered why and how the initially established spin-shear alignments survive better in denser
environments.
Several studies have also suggested that the tidal torque theory alone could
not provide a fully satisfactory explanation for the observed dependence of the
galaxy intrinsic alignments on the cosmic web environment
\citep[see, e.g.,][]{hah-etal07b,jones-etal10,codis-etal12,libes-etal12a,GS13,tro-etal13}.
The general consensus reached by the previous numerical works is that the observed galaxy intrinsic
alignments are not just weakened version of the initial spin-shear correlations but evolved version driven
by some other mechanism than the tidal shears. What has so far suggested as a possible candidate
for the required mechanism in the literatures includes the large-scale cosmic flow, peculiar motions, satellite
infalls, bulk flows and etc.
Very recently, \citet{libes-etal12b} proposed an interesting new scenario in
which the cosmic vorticity drives the nonlinear evolution of the galaxy angular
momentum after the turn-around moments. In the linear regime the
peculiar velocity field is curl free as it is described as a gradient of the perturbation potential.
In the subsequent nonlinear evolution, however, it develops the curl mode which could affect the galaxy angular
momentum. By analyzing the data from high-resolution N-body simulations, \citet{libes-etal12b}
demonstrated that the halo angular momentum vectors are strongly aligned with the local vorticity vectors,
indicating the presence of the dominant effect of the vorticity field on the nonlinear evolution of the
galaxy angular momentum vectors.
\citet{libes-etal12b} also showed that the vorticity vectors are strongly anti-aligned with the major
(minor) principal axes of the local tidal shear fields in the knot (void) environments.
We note here that the puzzling environmental dependence of the galaxy spin alignments can be coherently
explained by assuming the vorticity driven evolution of the galaxy angular momentum. The goal of this
paper is to look for observational evidence for this new scenario of \citet{libes-etal12b}. To achieve this goal,
we reconstruct the cosmic vorticity and the galaxy spin fields from the large galaxy surveys and investigate if the
vorticity-spin and vorticity-shear alignments found in N-body simulations exist in the real Universe. The
upcoming three sections present the following highlights, respectively: the procedure to reconstruct the cosmic
vorticity field and the result on the vorticity-shear alignments in section \ref{sec:vor}; a brief review of the
algorithm to measure the galaxy spin axes with high accuracy and the result on the vorticity-spin alignments in
section \ref{sec:spin}; a concise summary of the final results and discussion of the cosmological
implication of our results in section \ref{sec:conclusion}.
\section{VORTICITY-SHEAR ALIGNMENTS}\label{sec:vor}
\citet{erdogdu-etal06} reconstructed the density contrast $\delta({\bf x})$ and peculiar velocity ${\bf v}({\bf x})$
fields by applying the Wiener reconstruction algorithm to the galaxy data from the 2MASS redshift survey
\citep{2mass} under the assumption of a $\Lambda$CDM universe. The fields were provided on $64^{3}$
pixels in a regular cubic space of linear size $400\,h^{-1}$Mpc, for which the position vectors ${\bf x}$ were
expressed in the supergalactic coordinate system and the peculiar velocities ${\bf v}$ were measured
in the cosmic microwave background (CMB) frame. Given the spatial resolution of $ 6.25\,h^{-1}$Mpc,
the reconstructed fields represented the density and velocity fluctuations on the quasi-nonlinear scale.
\citet{erdogdu-etal06} proved the robustness of their reconstruction procedure by demonstrating that
the predicted density field recovers well the observed large scale structures of the Universe.
Using the density field $\delta({\bf x})$ reconstructed by \citet{erdogdu-etal06} from the 2MASS redshift survey,
\citet{LE07} calculated the tidal fields as $T_{ij}({\bf x})=\partial_{i}\partial_{j}\nabla^{-1}\delta({\bf x})$.
Given that on those pixel points more distant than $100\,h^{-1}$Mpc from the center the reconstructed
density and velocity fields suffered from large uncertainties (private communication with P.Erdogdu),
\citet{LE07} selected only those $32^{3}$ pixels whose separation distances from the center
are less than $100\,h^{-1}$ Mpc and determined the major, intermediate and minor principal axes of
$T_{ij}({\bf x})$ at each selected pixel point. The detailed descriptions of the reconstructed density,
velocity and tidal fields, and the 2MASS redshift survey can be found in \citet{erdogdu-etal06},
\citet{LE07} and \citet{2mass}, respectively.
Now, we attempt to reconstruct the cosmic vorticity field on the selected $32^{3}$ pixel points by calculating the
curl of the peculiar velocity field as ${\bf w}\equiv{\bf \nabla}\times{\bf v}$. We first perform the Fourier transform
of the peculiar velocity field to obtain its Fourier amplitudes $\tilde{\bf v}$ with the help of the Fast Fourier
Transformation (FFT) code \citep{pre-etal92}. The Fourier amplitude of the vorticity field
can be written as $\tilde{\bf w}=i\,{\bf k}\times\tilde{\bf v}$ where ${\bf k}$ is the wave vector in the
Fourier space, and thus the three components of $\tilde{\bf w}$ are calculated in Fourier space as
\begin{equation}
\tilde{w}_{1}=i\left(k_{2}\tilde{v}_{3}-k_{3}\tilde{v}_{2}\right)\, ,\qquad
\tilde{w}_{2}=i\left(k_{3}\tilde{v}_{1}-k_{1}\tilde{v}_{3}\right)\, ,\qquad
\tilde{w}_{3}=i\left(k_{1}\tilde{v}_{2}-k_{2}\tilde{v}_{1}\right)\, .
\end{equation}
Finally, we perform the inverse Fourier transform of $\tilde{\bf w}$ to obtain the real-space
vorticity field ${\bf w}$. Figure \ref{fig:contour} plots the contours of the absolute magnitude of
${\bf w}$ in the supergalactic $x$-$y$ plane, showing the deviation of $\vert{\bf w}\vert$ from zero.
Recalling the fact that the peculiar velocity field reconstructed from the 2MASS redshift survey
was filtered on the quasi-nonlinear scale of $\sim 6.25\,h^{-1}$Mpc, the result shown in Figure
\ref{fig:contour} implies that even in the quasi-nonlinear regime the velocity field is no longer irrotational,
developing the curl mode.
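For concreteness, the curl computation described above can be sketched with a short NumPy routine. This is an illustrative reconstruction, not the code used for the actual analysis; the grid size and box length passed below are placeholders, and the factor $i$ arises from $\partial_j\to i k_j$ under the sign convention of \texttt{numpy.fft}.

```python
import numpy as np

def vorticity(v, box=400.0):
    """Curl of a periodic velocity field v (shape (3, N, N, N)) via FFT.

    w = curl v is computed in Fourier space as w~ = i k x v~,
    i.e. w~_1 = i (k2 v~_3 - k3 v~_2), etc., then transformed back.
    """
    n = v.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)   # angular wavenumbers
    k1, k2, k3 = np.meshgrid(k, k, k, indexing="ij")
    vt = np.fft.fftn(v, axes=(1, 2, 3))              # Fourier amplitudes v~(k)
    wt = np.empty_like(vt)
    wt[0] = 1j * (k2 * vt[2] - k3 * vt[1])
    wt[1] = 1j * (k3 * vt[0] - k1 * vt[2])
    wt[2] = 1j * (k1 * vt[1] - k2 * vt[0])
    return np.real(np.fft.ifftn(wt, axes=(1, 2, 3)))
```

Applied to the Wiener-reconstructed velocity field on the $64^{3}$ grid, such a routine returns ${\bf w}$ on the same pixels.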
As done in \citet{libes-etal12b}, we first divide the $32^{3}$ pixels into the knots, filaments, sheets and voids
according to the signs of the shear eigenvalues \citep{hah-etal07a}: The pixels at which all three eigenvalues
have positive (negative) values are marked as knots (voids), while the pixels at which one eigenvalue is
negative (positive) and the other two are positive (negative) are marked as filaments (sheets). Then, we
calculate the alignment between the vorticity vector and the principal axes of the tidal tensor at each marked
pixel. Let $\{{\bf e}_{1},\ {\bf e}_{2},\ {\bf e}_{3}\}$ be the major, intermediate and minor principal axes of the tidal
tensor at each marked pixel.
Calculating $\mu\equiv \vert{\bf w}\cdot{\bf e}_{i}\vert $ (for $i=1,\ 2,\ 3$) and binning the values of $\mu$
in the range of $[0,\ 1]$, we determine the probability density distribution of $\mu$. If there were no alignment,
the distribution $p(\mu)$ would be uniformly unity. If $p(\mu)$ increases (decreases) with $\mu$, then there
should be strong alignments (anti-alignments) between the vorticity vectors and the principal axes of the
tidal tensors. We repeat the same calculation for the knot, void, filament and sheet regions to separately
determine $p(\mu)$ for each case.
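The eigenvalue-sign classification and the alignment statistic can be sketched as follows. This is a minimal NumPy illustration (the function names are ours, degenerate zero eigenvalues are not treated, and the bin number is a placeholder).

```python
import numpy as np

def classify_web(eigvals):
    """Cosmic-web type from the signs of the three tidal-shear eigenvalues.

    eigvals: array of shape (npix, 3). Three positive eigenvalues mark a
    knot, three negative a void, two positive a filament, one positive
    a sheet.
    """
    npos = (eigvals > 0).sum(axis=1)
    names = np.array(["void", "sheet", "filament", "knot"])
    return names[npos]

def alignment_pdf(w, e, nbin=10):
    """Probability density p(mu) of mu = |w . e| for unit vectors w, e."""
    mu = np.abs(np.einsum("ij,ij->i", w, e))
    hist, _ = np.histogram(mu, bins=nbin, range=(0.0, 1.0), density=True)
    return hist
```

For isotropically oriented vectors the returned density is consistent with the uniform value $p(\mu)=1$, the no-alignment case described in the text.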
Figure \ref{fig:evor_knot} plots the probability density distributions, $p(\mu)$, with Poisson errors for the knot
regions, showing how the vorticity vectors are aligned with the major, intermediate and minor principal axes of
the tidal tensors in the left, middle and right panels, respectively. In each panel the horizontal dotted line
indicates the uniform distribution of $\mu$ for the case of no vorticity-shear alignments.
As can be seen, the vorticity vectors are strongly anti-aligned with the major principal axes of the tidal
shear tensors, preferentially lying in the plane spanned by their intermediate and minor principal axes in the knot
regions. In other words, in the highly dense knot environments the vorticity vectors are anti-aligned with the
directions of the maximum matter compression.
Figure \ref{fig:evor_void} plots $p(\mu)$ for the void regions, revealing that in the void regions the vorticity
vectors are anti-aligned with the minor principal axes of the tidal tensors, preferentially lying in the plane
spanned by the major and intermediate principal axes, which is directly opposite to the case of the knot regions.
This result is consistent with the numerical finding of \citet{libes-etal12b} even though in their work
the vorticity field was filtered on much smaller scale.
Figures \ref{fig:evor_fil} and \ref{fig:evor_pan} plot $p(\mu)$ for the filament and sheet cases, respectively.
As can be seen, the filament regions show no strong signal of vorticity-shear alignments, while for the sheet
case a clear signal of alignments (anti-alignments) with the major (minor) principal axes is found, but no
strong alignment with the intermediate principal axes.
\section{VORTICITY-SPIN ALIGNMENTS}\label{sec:spin}
The galaxy spin axes are hard to determine in practice, even under the simplified assumption that
the minor axes of the galaxies are aligned with their spin axes.
In the measurements of the alignments between the galaxy spin axes and the local vorticity vectors,
the largest uncertainty would come from the inaccurate determination of the galaxy spin axes. Therefore,
it is important to select carefully only those galaxies whose spin orientations can be determined with relatively
high accuracy. It is often assumed that for the case of the late-type spiral galaxies whose shapes
are close to circular thin discs their spin axes are orthogonal to the disc planes \citep{HG84}.
Provided that information on the position angles and axial ratios of the late-type spiral galaxies is available,
their unit spin vectors can be determined up to a two-fold ambiguity in the sign of their radial components
\citep{PLS00}. The remaining uncertainty due to this two-fold ambiguity in the determination of the spin axes
can be minimized by considering only face-on or edge-on spirals.
As it is essential for our investigation to determine the directions of the galaxy angular momentum vectors as
accurately as possible, we restrict our analysis only to the nearby large late-type spiral galaxies viewed either
face on or edge on. A sample of the nearby large late-type spiral galaxies was already obtained by \citet{lee11}
from the SDSS DR7. The sample contains those SDSS galaxies which have type Scd on the Hubble sequence
\citep{huertas-etal11}, angular sizes larger than $D_{\rm c}=7.92$ arcseconds in the redshift range of
$0\le z\le 0.02$. The value of this size cut-off $D_{\rm c}$ was imposed to remove the dwarfs which turned
out to cause large uncertainties in the measurements of the spin axes due to their irregular shapes.
Among the nearby large Scd galaxies in the sample of \citet{lee11}, we make a further selection of only those
ones which have axial ratios larger than $0.9$ (nearly face-on) or smaller than $0.15$ (nearly edge-on) to
minimize the uncertainties associated with the two-fold ambiguity. A total of $585$ nearby large late-type face-on
(or edge-on) spirals are finally selected for our analysis.
The unit spin vector of each selected galaxy, $\hat{\bf t}\equiv (\hat{t}_{x},\ \hat{t}_{y},\ \hat{t}_{z})$,
is determined as \citep{lee11}
\begin{eqnarray}
\label{eqn:lx}
\hat{t}_{x}&=& \pm\cos\xi\sin(\pi/2-\delta)\cos\alpha + \vert\sin\xi\vert\sin P\cos(\pi/2-\delta)\cos\alpha -
\vert\sin\xi\vert\cos P\sin\alpha ,\\
\label{eqn:ly}
\hat{t}_{y} &=& \pm\cos\xi\sin(\pi/2-\delta)\sin\alpha + \vert\sin\xi\vert\sin P\cos(\pi/2-\delta)\sin\alpha +
\vert\sin\xi\vert\cos P\cos\alpha ,\\
\label{eqn:lz}
\hat{t}_{z} &=& \pm\cos\xi\cos(\pi/2-\delta) - \vert\sin\xi\vert\sin P\sin(\pi/2-\delta) ,
\end{eqnarray}
where $P$ is the position angle of each selected galaxy, and $(\delta, \alpha)$ are the declination
and right ascension of each galaxy's position vector expressed in the equatorial coordinate system, and
the plus and minus signs in front of the first terms in Equations (\ref{eqn:lx})--(\ref{eqn:lz}) represent the
two-fold ambiguity mentioned above.
Here, $\xi$ is the galaxy's inclination angle, related to its axial ratio $q$ and intrinsic flatness
parameter $p$ as $\cos^{2}\xi = (q^{2}-p^{2})/(1-p^{2})$. For the galaxies of type Scd, the intrinsic
flatness parameter has the value of $p=0.1$ \citep{HG84}.
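A sketch of Equations (\ref{eqn:lx})--(\ref{eqn:lz}) in NumPy, returning both candidate spin vectors for the two-fold sign ambiguity, reads as follows. This is an illustration only: angles are in radians, and the inclination is taken from $\cos^{2}\xi=(q^{2}-p^{2})/(1-p^{2})$ with $p=0.1$ for Scd galaxies.

```python
import numpy as np

def spin_vectors(P, alpha, delta, q, p=0.1):
    """Two candidate unit spin vectors (equatorial frame) of a disc galaxy.

    P: position angle; (alpha, delta): RA/Dec of the galaxy; q: axial
    ratio; p: intrinsic flatness. The +/- sign of the cos(xi) terms
    encodes the two-fold ambiguity in the radial spin component.
    """
    cosxi2 = np.clip((q**2 - p**2) / (1.0 - p**2), 0.0, 1.0)
    cosxi, sinxi = np.sqrt(cosxi2), np.sqrt(1.0 - cosxi2)  # sin(xi) >= 0
    out = []
    for sign in (+1.0, -1.0):
        tx = (sign * cosxi * np.cos(delta) * np.cos(alpha)
              + sinxi * np.sin(P) * np.sin(delta) * np.cos(alpha)
              - sinxi * np.cos(P) * np.sin(alpha))
        ty = (sign * cosxi * np.cos(delta) * np.sin(alpha)
              + sinxi * np.sin(P) * np.sin(delta) * np.sin(alpha)
              + sinxi * np.cos(P) * np.cos(alpha))
        tz = sign * cosxi * np.sin(delta) - sinxi * np.sin(P) * np.cos(delta)
        out.append(np.array([tx, ty, tz]))
    return out
```

Both returned vectors are unit vectors by construction, and for a perfectly face-on galaxy ($\sin\xi=0$) they reduce to the two signs of the radial direction.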
Since the tidal shear and the vorticity fields from the 2MASS redshift survey have been reconstructed
in the super-galactic coordinate systems, we find a supergalactic expression for the unit spin vector of
each selected galaxy, $\hat{\bf s}$, through the coordinate transformation of
$\hat{\bf s} = R\hat{\bf t}$ where $R$ is the orthogonal matrix that transforms the equatorial to the
supergalactic frames.
By applying the Cloud-in-Cell interpolation (CIC) algorithm \citep{HE88} to the tidal shear fields reconstructed
by \citet{LE07} from the 2MASS redshift survey, we determine the tidal shear tensors at the positions of the
selected SDSS galaxies and determine their principal axes, $\{{\bf e}_{1},\ {\bf e}_{2},\ {\bf e}_{3}\}$.
Then, we calculate the cosines of the alignment angles, $\mu$, between the galaxy spin axes
and the principal axes of the local tidal tensors as $\mu\equiv\vert\hat{\bf s}\cdot{\bf e}_{i}\vert$.
Recall that there are two different unit spin vectors assigned to each selected galaxy which differ from
each other by the sign of the radial components. As done in \citet{LE07}, we treat two spin vectors assigned
to each galaxy as two independent realizations to end up having twice as many values of $\mu$
as the total number of the selected SDSS galaxies.
Binning the values of $\mu$ and counting the number of those realizations, $n_{\mu}$, belonging to each
$\mu$-bin, we finally determine the probability density distribution, $p(\mu)$, of the spin-shear alignments,
calculating Poisson errors as $1/(n_{\mu}-1)^{1/2}$ associated with the determination of $p(\mu)$.
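The binning procedure can be sketched as below. This is an illustrative NumPy version in which the quoted Poisson error $1/(n_{\mu}-1)^{1/2}$ is read as a fractional error on $p(\mu)$; the bin number is a placeholder.

```python
import numpy as np

def pdf_with_errors(mu, nbin=10):
    """Probability density p(mu) on [0, 1] with Poisson error bars.

    mu: array of alignment cosines (two entries per galaxy when both
    signs of the radial spin component are kept as independent
    realizations). The error on each bin is p / sqrt(n - 1).
    """
    counts, edges = np.histogram(mu, bins=nbin, range=(0.0, 1.0))
    width = edges[1] - edges[0]
    p = counts / (counts.sum() * width)            # normalised density
    err = p / np.sqrt(np.maximum(counts - 1, 1))   # Poisson error
    return p, err
```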
Figure \ref{fig:spinshear} plots the probability density distributions of the cosines of the alignment angles
between the unit spin vectors of the selected SDSS galaxies and the major, intermediate, and minor
principal axes of the local tidal shear tensors with Poisson errors in the left, middle and right panels,
respectively. As can be seen, the spin axes of the selected galaxies seem to be strongly anti-aligned
with the major principal axes of the local tidal tensors, but aligned with the intermediate and the minor
principal axes. That is, the spin axes of the selected SDSS galaxies tend to lie in the plane perpendicular
to the local direction of maximum matter compression.
Our result can be compared with that of \citet{LE07}, who found a significant signal of correlation between the spin
axes of the Tully galaxies and the intermediate principal axes of the local tidal tensors but no alignment signal
with the other two principal axes. We believe that the difference resulted from the inaccurate measurements
of the spin axes of the Tully galaxies in the analysis of \citet{LE07}, which included not only the Scd galaxies
but also earlier-type spirals with thick bulges, without taking into account the two-fold ambiguity.
What is newly found from our analysis is that the anti-alignments between the spin axes and the major principal
axes are strongest while the alignments of the spin axes with the minor principal axes are as strong as
that with the intermediate principal axes. Although the result is not completely against the tidal torque theory
which predicts that the spin axes are preferentially aligned with the intermediate principal axes, we would like
to see whether there exist strong spin-vorticity alignments which may help better explain the detected spin-shear
alignment tendency.
Applying the CIC algorithm to the vorticity fields reconstructed in section \ref{sec:vor}, we calculate the local
vorticity vectors at the positions of the selected SDSS disk galaxies. Then, we determine the probability
distribution of the cosines of the angles between the unit spin vectors and the local vorticity vectors ${\bf w}$
in a similar manner; the result is plotted in Figure \ref{fig:spinvor}. As can be seen, the probability density
increases sharply and almost monotonically as $\mu$ increases, revealing a strong signal of the
spin-vorticity alignments. We test the null hypothesis of no alignment (i.e., $p(\mu)=1$)
with the help of the $\chi^{2}$-statistics and find that the null hypothesis is rejected at the $99.9999\%$
confidence level.
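A minimal sketch of such a $\chi^{2}$ test against the uniform expectation $p(\mu)=1$ is given below. This is our own illustration; the text does not specify the binning, and in practice the statistic would be compared with a $\chi^{2}$ distribution with degrees of freedom set by the number of bins.

```python
import numpy as np

def chi2_uniform(p, err):
    """Chi-square statistic of the null hypothesis p(mu) = 1.

    p: binned probability density values; err: their error bars.
    A large value rejects the no-alignment hypothesis.
    """
    return float(np.sum(((p - 1.0) / err) ** 2))
```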
The result shown in Figure \ref{fig:spinvor} is consistent with the numerical finding of \citet{libes-etal12b},
providing an observational support for the scenario that the galaxy angular momentum evolves via its
interaction with the local vorticity in the nonlinear regime. Recalling that in the work of
\citet{libes-etal12b} the vorticity field was filtered on the galactic scale of $\le 1\,h^{-1}$Mpc
while in the current work the vorticity field is filtered on the much larger scale of $\sim 6\,h^{-1}$Mpc,
we conclude that the vorticity effect wins over the tidal shear effect on the evolution of the galaxy angular
momentum even in the quasi-nonlinear regime.
\section{SUMMARY AND DISCUSSION}\label{sec:conclusion}
Utilizing the nearby large face-on (or edge-on) late-type spiral galaxies from the SDSS DR7 \citep{sdssdr7}
and the tidal shear and vorticity fields reconstructed from the 2MASS redshift survey \citep{erdogdu-etal06},
we have measured the vorticity-shear, the spin-shear and the spin-vorticity alignments. The reconstructed
vorticity fields have spatial resolution of $\sim 6\,h^{-1}$Mpc, corresponding to the quasi-nonlinear regime.
First, the vorticity vectors have been found to be strongly anti-aligned (aligned) with the major
principal axes of the tidal shear tensors in the knot (void) regions, lying in the plane spanned by the other two
principal axes. Second, the galaxy spin axes have turned out to be strongly anti-aligned with the major principal
axes of the local tidal shear tensors, while aligned with the intermediate and minor principal axes.
Finally, a clear signal of the spin-vorticity alignments has been detected, rejecting the null hypothesis of no spin-
vorticity alignment at $99.9999\%$ significance level. This result observationally stands by the new scenario of
\citet{libes-etal12b} that the tidally generated angular momentum of a galaxy subsequently evolves under the
dominant effect of the vorticity field in the quasi-nonlinear and nonlinear regime.
The spin-shear alignment tendency as well as its environmental dependence reported in the previous
works can now be explained as follows. The cosmic flow in the nonlinear regime develops local vorticities
which are preferentially inclined onto the plane orthogonal to the directions of either the maximum or the
minimum volume compression depending on the web environment. The developed vorticities affect the galaxy angular
momentum, modifying the spin-shear alignments from the initial tendency.
Since it depends on the environment how fast the velocity field develops the vorticities and what
directions the vorticity vectors get aligned with, the spin-shear alignments in the nonlinear regime come to
take on the environmental dependence. The stronger spin-shear alignments found in denser environments should
result from the stronger vorticity effect there via which the nonlinear spin-shear alignments are established.
An interesting cosmological implication of our result is that the nonlinear vorticity fields and the
vorticity-induced galaxy alignments might be useful as a probe of cosmology and gravity.
As shown in \citet{kitaura-etal12} and mentioned in \citet{libes-etal12b}, the vorticity of cosmic fluid which
equals zero in the linear regime grows in the nonlinear regime at third order. The more rapidly a cosmic fluid
develops the third order nonlinearity, the stronger the spin-vorticity alignments become due to the earlier onset of
the vorticity effect on the evolution of the galaxy angular momentum. In some modified gravity or dynamical dark
energy models the high-order nonlinearity grows faster than in the $\Lambda$CDM case due to the presence of
a fifth force. Thus, in these alternative models the faster growth of the nonlinearity would lead
to the more rapid development of the vorticity field, generating stronger spin-vorticity alignments.
Very recently, \citet{li-etal13} claimed that the small scale powers of the divergence of the
peculiar velocities should be a more sensitive probe of modified gravity than the density power spectrum.
Given our result, the small-scale powers of the absolute magnitude of the curl of the peculiar velocities might
work powerfully as a complementary probe.
\acknowledgments
I thank P. Erdogdu for providing me with the data of the density and peculiar velocity fields from the
2MASS redshift survey. I also thank M. Huertas-Company for providing information on the galaxy
position angles and magnitudes.
This work was supported by the National Research Foundation of Korea (NRF)
grant funded by the Korea government (MEST, No.2012-0004195). Support for this work was also
provided by the National Research Foundation of Korea to the Center for Galaxy Evolution
Research (NO. 2010-0027910).
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation,
the Participating Institutions, the National Science Foundation, the U.S. Department of
Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho,
the Max Planck Society, and the Higher Education Funding Council for England. The
SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the Participating
Institutions. The Participating Institutions are the American Museum of Natural History,
Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case
Western Reserve University, University of Chicago, Drexel University, Fermi lab, the
Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University,
the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle
Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences
(LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA),
the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State
University, University of Pittsburgh, University of Portsmouth, Princeton University,
the United States Naval Observatory, and the University of Washington.
\clearpage
\section{Introduction}
\noindent
Consider a complete Riemannian manifold $(M^n,g, \nabla)$ endowed with a
metric connection $\nabla$. The torsion $T$ of $\nabla$, viewed as a
$(3,0)$ tensor, is defined by
\begin{displaymath}
T(X,Y,Z)\ :=\ g(T(X,Y),Z)\ =\ g(\nabla_XY-\nabla_YX-[X,Y] ,Z).
\end{displaymath}
Metric connections for which $T$ is antisymmetric in all arguments,
i.\,e.~$T\in\Lambda^3(M^n)$ are of particular interest, see
\cite{Agricola06}.
They correspond precisely to those metric connections that have the same
geodesics as the Levi-Civita connection. In this note we will investigate
\emph{flat} connections of that type.
The observation that any simple Lie group carries in fact two flat
connections,
usually called the $(+)$- and the $(-)$-connection, with
torsion $T(X,Y)=\pm [X,Y]$ is due to \'E.~Cartan and J.\,A.~Schouten \cite{Cartan&Sch26a}
and is explained in detail in \cite[p.~198-199]{Kobayashi&N2}. If one then chooses a
biinvariant metric, these connections are metric and the torsion becomes
a $3$-form, as desired. Hence, the question is whether there are any further
examples of flat metric connections with antisymmetric torsion beside products
of Lie groups.
The answer can be found in Cartan's work.
In fact, \'E.~Cartan and J.\,A.~Schouten
published a second joint paper very shortly after the one mentioned above,
\cite{Cartan&Sch26b}.
There is only one additional such geometry, realized on $S^7$. Their proof that no more cases can occur is
by diligent
inspection of some defining tensor fields.
Motivated by the problem of when the Laplacian of a Riemannian manifold can,
at least locally, be written as a sum $\Delta = -\sum X_i\circ X_i$,
D'Atri and Nickerson investigated in 1968 manifolds which admit
an orthonormal frame consisting of Killing vector fields. This question is
almost equivalent to the previous one.
In two beautiful papers, Joe Wolf picked
up the question again in the early 1970s and provided a complete
classification of
all complete (reductive) pseudo-Riemannian manifolds admitting absolute
parallelism, thus reproving the Cartan-Schouten result by other means
\cite{Wolf72a}, \cite{Wolf72b}. The key observation was that the
Riemannian curvature of such a space must, for three $\nabla$-parallel
vector fields, be given by $R(X,Y)Z=- [[X,Y],Z]/4$, and thus defines a
Lie triple system. The proof is then reduced to an (intricate) algebraic
problem about Lie triple systems, and $S^7$ (together with two
pseudo-Riemannian siblings) appears because of the outer automorphism
inherited from triality.\\
\noindent
The main topic of this paper is to understand this very interesting result
in terms of special geometries with torsion. We will give a new and elementary
proof of the
result not using the classification of symmetric spaces. Moreover, we
describe explicitly
the family of flat metric connections with antisymmetric torsion
on $S^7$ and make the link to
$G_2$ geometry apparent.
\section{The case of skew symmetric torsion}\noindent
Let $(M^n,g,\nabla)$ be a connected Riemannian
manifold endowed with a flat metric connection $\nabla$. The
parallel transport of any orthonormal
frame at a point will define a local orthonormal frame $e_1,\ldots,e_n$
in all other points. In the sequel, no distinction will be
made between vector fields and $1$-forms. The standard formula for the
exterior derivative of a $1$-form yields
\begin{eqnarray*}
d e_i(e_j,e_k)& =& e_j\langle e_i,e_k\rangle - e_k\langle e_i,e_j\rangle
-\langle e_i,[e_j,e_k]\rangle\ =\ -\langle e_i,[e_j,e_k]\rangle \notag \\
&=& -\langle e_i, \nabla_{e_j} e_k -\nabla_{e_k}e_j - T(e_j,e_k)\rangle
\ =\ \langle e_i, T(e_j,e_k)\rangle. \label{dei}
\end{eqnarray*}
Hence, the torsion can be computed from the frame
$e_i$ and their differentials.
As was shown by Cartan in 1925, the torsion $T$ of $\nabla$ can basically be
of three possible types---a $3$-form, a vector, and a more difficult type
that has no simple geometric interpretation \cite{Tricerri&V1}, \cite{Agricola06}.
We shall first study the case that the torsion is a $3$-form. We state the explicit formula for the torsion and
draw some first conclusions from the identities relating the curvatures
of $\nabla$ and $\nabla^g$, the Levi-Civita connection.
Let the flat connection $\nabla$ be given by
\begin{displaymath}
\nabla_X Y\ =\ \nabla^g_X Y +\frac{1}{2}T(X,Y,-)
\end{displaymath}
for a $3$-form $T$.
The general relation between $\ensuremath{\mathrm{Ric}}^g$ and $\ensuremath{\mathrm{Ric}}^\nabla$
\cite[Thm A.1]{Agricola06} yields for any orthonormal frame $e_1,\ldots,e_n$
that the Riemannian Ricci tensor can be computed directly from $T$,
\begin{displaymath}
\ensuremath{\mathrm{Ric}}^g(X,Y)\ =\ \frac{1}{4}\sum_{i=1}^n \langle T(X,e_i),T(Y,e_i) \rangle, \quad
\ensuremath{\mathrm{Scal}}^g\ = \ \frac{3}{2}\|T\|^2.
\end{displaymath}
In particular, $\ensuremath{\mathrm{Ric}}^g$ is non-negative, $\ensuremath{\mathrm{Ric}}^g(X,X)\geq 0$ for all $X$,
and $\ensuremath{\mathrm{Ric}}^g(X,X)=0$ if and only if $X\haken T=0$.
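The stated relations between $T$, $\ensuremath{\mathrm{Ric}}^g$ and $\ensuremath{\mathrm{Scal}}^g$ can be verified numerically for a random $3$-form written in an orthonormal frame. This is an illustrative check, not part of the argument; we use the convention $\|T\|^{2}=\sum_{i<j<k}T_{ijk}^{2}$.

```python
import numpy as np

def random_three_form(n, seed=0):
    """Random totally antisymmetric (3,0)-tensor (a 3-form in an
    orthonormal frame), obtained by antisymmetrising a random tensor."""
    A = np.random.default_rng(seed).normal(size=(n, n, n))
    return (A - A.transpose(0, 2, 1) + A.transpose(1, 2, 0)
            - A.transpose(1, 0, 2) + A.transpose(2, 0, 1)
            - A.transpose(2, 1, 0)) / 6.0

def ricci_from_torsion(T):
    """Ric^g(e_a, e_b) = (1/4) sum_i <T(e_a, e_i), T(e_b, e_i)>,
    together with its trace Scal^g."""
    ric = 0.25 * np.einsum("aik,bik->ab", T, T)
    return ric, float(np.trace(ric))
```

The trace identity $\ensuremath{\mathrm{Scal}}^g=\frac{3}{2}\|T\|^{2}$ and the non-negativity of $\ensuremath{\mathrm{Ric}}^g$ then hold for every such $T$, as the check below confirms.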
The torsion form $T$ is coclosed, $\delta T=0$,
because it coincides with the skew-symmetric part of $\ensuremath{\mathrm{Ric}}^\nabla = 0$. We
define the $4$-form $\sigma_T$ by the formula
\begin{displaymath}
\sigma_T \ := \, \frac{1}{2} \sum_{i=1}^n ( e_i \haken T) \wedge (e_i \haken
T) \ ,
\end{displaymath}
or equivalently by the formula
\begin{displaymath}
\sigma_T(X,Y,Z,V)\ :=\ \langle T(X,Y),T(Z,V)\rangle
+\langle T(Y,Z),T(X,V)\rangle
+\langle T(Z,X),T(Y,V)\rangle \ .
\end{displaymath}
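The equivalence of the two expressions for $\sigma_T$ can likewise be checked numerically for a random $3$-form; the routine below expands the wedge of two $2$-forms explicitly (an illustration only, with our own function names).

```python
import numpy as np

def random_three_form(n, seed=0):
    """Random totally antisymmetric (3,0)-tensor."""
    A = np.random.default_rng(seed).normal(size=(n, n, n))
    return (A - A.transpose(0, 2, 1) + A.transpose(1, 2, 0)
            - A.transpose(1, 0, 2) + A.transpose(2, 0, 1)
            - A.transpose(2, 1, 0)) / 6.0

def sigma_contract(T):
    """sigma_T(X,Y,Z,V) = <T(X,Y),T(Z,V)> + <T(Y,Z),T(X,V)>
                          + <T(Z,X),T(Y,V)>."""
    P = np.einsum("abm,cdm->abcd", T, T)
    return P + P.transpose(1, 2, 0, 3) + P.transpose(2, 0, 1, 3)

def sigma_wedge(T):
    """sigma_T = (1/2) sum_i (e_i _| T) ^ (e_i _| T), using the
    component expansion of the wedge product of two 2-forms."""
    S = np.zeros(4 * (T.shape[0],))
    for a in T:                      # a = e_i _| T, a 2-form
        S += (np.einsum("ab,cd->abcd", a, a)
              - np.einsum("ac,bd->abcd", a, a)
              + np.einsum("ad,bc->abcd", a, a))
    return S
```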
Denote by $\nabla^{1/3}$ the metric connection with torsion $T/3$. Then we
can formulate some properties of the Riemannian manifold and the torsion form.
\begin{prop}
Let $\nabla$ be a flat metric connection with torsion
$T\in\Lambda^3(M^n)$. Then
\begin{displaymath}
3 \, dT \ = \ 2 \, \sigma_T \, , \quad \nabla^{1/3} T \ = \ 0 \, ,
\quad \nabla^{1/3} \sigma_T \ = \ 0 \ .
\end{displaymath}
The covariant derivative $\nabla T$ is a $4$-form and given by
\begin{displaymath}
(\nabla_V T)(X,Y,Z)\ =\ \frac{1}{3}\sigma_T(X,Y,Z,V) \, \quad
\mbox{or} \quad \nabla_V T \ = \ - \, \frac{1}{3} \, (V \haken
\sigma_T) \ ,
\end{displaymath}
\begin{displaymath}
(\nabla^g_V T)(X,Y,Z)\ =\ - \, \frac{1}{6}\sigma_T(X,Y,Z,V) \, \quad
\mbox{or} \quad \nabla^g_V T \ = \ \frac{1}{6} \, (V \haken
\sigma_T) \ .
\end{displaymath}
In particular, the length $||T||$
and the scalar curvature are constant.
The full Riemann curvature tensor is given by
\begin{displaymath}
\ensuremath{\mathcal{R}}^g(X,Y,Z,V)\ =\ - \frac{1}{6}\langle T(X,Y),T(Z,V)\rangle
+\frac{1}{12}\langle T(Y,Z),T(X,V)\rangle
+\frac{1}{12}\langle T(Z,X),T(Y,V)\rangle,
\end{displaymath}
and is $\nabla^{1/3}$-parallel, $\nabla^{1/3} \ensuremath{\mathcal{R}}^g = 0$. Finally,
the sectional curvature is non-negative,
\begin{displaymath}
K(X,Y)\ =\ \frac{\|T(X,Y)\|^2}{4[\|X\|^2\|Y\|^2-\langle X,Y\rangle^2]}\ \geq\ 0.
\end{displaymath}
\end{prop}
\begin{proof}
The first Bianchi identity \cite[Thm 2.6]{Agricola06},
\cite{Friedrich&I1} states for flat $\nabla$
\begin{equation}\label{Bianchi-I}
dT(X,Y,Z,V) - \sigma_T(X,Y,Z,V)+(\nabla_V T)(X,Y,Z)\ =\ 0 .
\end{equation}
By the general formula
\cite[Cor.~3.2]{Friedrich&I1} we have $3dT = 2 \, \sigma_T $
for any flat connection with skew-symmetric torsion.
Together with equation ($\ref{Bianchi-I}$),
this shows the first and second formula.
The expression for the curvature follows from this and
the general identity \cite[Thm A.1]{Agricola06}, \cite{Friedrich&I1}
\bea[*]
\ensuremath{\mathcal{R}}^g(X,Y,Z,V)& =& \ensuremath{\mathcal{R}}^\nabla(X,Y,Z,V) -\frac{1}{2}(\nabla_X T)(Y,Z,V)\\
&& +\frac{1}{2}(\nabla_Y T)(X,Z,V)-\frac{1}{4}\langle T(X,Y),T(Z,V)\rangle
-\frac{1}{4}\sigma_T(X,Y,Z,V).
\eea[*]
Since $\nabla - \nabla^{1/3} = \frac{1}{3} T$, we obtain
\begin{displaymath}
(\nabla_VT)(X,Y,Z) - (\nabla^{1/3}_V T)(X,Y,Z) \ = \
\frac{1}{3} \, T(V , - , - ) [T] (X,Y,Z) ,
\end{displaymath}
where $T(V , - , - ) [T]$ denotes the action of the $2$-form
$T(V , - , - )$ on the $3$-form $T$. Computing this action, we
obtain
\begin{displaymath}
T(V , - , - ) [T] (X,Y,Z) \ = \ \sigma_T(X,Y,Z,V) .
\end{displaymath}
$\nabla^{1/3} T = 0$ follows now directly from the formula for $\nabla T$.
In a similar way we compute $\nabla^gT$,
\begin{displaymath}
\nabla^g_V T \ = \ \nabla_V T - \frac{1}{2} \, (V \haken T)[T] \ = \
- \, \frac{1}{3} \, (V \haken \sigma_T) + \frac{1}{2} \,
(V \haken \sigma_T) \, = \, \frac{1}{6} (V \haken \sigma_T) .\qedhere
\end{displaymath}
\end{proof}
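As a numerical illustration of the proposition (not part of the argument), one can check that the displayed curvature tensor indeed yields the non-negative sectional curvature $K(e_a,e_b)=\|T(e_a,e_b)\|^{2}/4$ for orthonormal pairs:

```python
import numpy as np

def random_three_form(n, seed=0):
    """Random totally antisymmetric (3,0)-tensor."""
    A = np.random.default_rng(seed).normal(size=(n, n, n))
    return (A - A.transpose(0, 2, 1) + A.transpose(1, 2, 0)
            - A.transpose(1, 0, 2) + A.transpose(2, 0, 1)
            - A.transpose(2, 1, 0)) / 6.0

def riemann_from_torsion(T):
    """R^g(X,Y,Z,V) = -1/6 <T(X,Y),T(Z,V)> + 1/12 <T(Y,Z),T(X,V)>
                      + 1/12 <T(Z,X),T(Y,V)>."""
    P = np.einsum("abm,cdm->abcd", T, T)
    return (-P / 6.0 + P.transpose(1, 2, 0, 3) / 12.0
            + P.transpose(2, 0, 1, 3) / 12.0)
```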
\noindent
Observe that the curvature identity of the last proposition is
nothing but formula (6) in \cite{Cartan&Sch26b}, and the formula for
$\nabla^gT$ is formula (7) in the Cartan/Schouten paper.
\begin{cor}\label{parallel-T-pol}
Consider a tensor field $\mathcal{T}$ being a polynomial of the torsion form
$T$. Then we have
\begin{displaymath}
\nabla \mathcal{T} \ = \ - \, 2 \, \nabla^g \mathcal{T} \ .
\end{displaymath}
In particular, $\mathcal{T}$ is $\nabla$-parallel if and only if it is $\nabla^g$-parallel.
\end{cor}
\noindent
We derive the following splitting principle, which can again be found in
\cite{Cartan&Sch26b}.
\begin{prop}
If $M^n = M_1^{n_1} \times M_2^{n_2}$ is the Riemannian product and $T$ is the
torsion form of a flat metric connection, then T splits into $T = T_1 + T_2$,
where $T_i \in \Lambda^3 (M_i^{n_i})$ are $3$-forms on $M_i^{n_i}$. Moreover,
the connection splits,
\begin{displaymath}
(M^n , g , \nabla) \ = \ (M^{n_1} , g_1 , \nabla^1) \, \times \,
(M^{n_2} , g_2 , \nabla^2)
\end{displaymath}
\end{prop}
\begin{proof}
Consider two vectors $X \in T(M^{n_1}) , \, Y \in T(M^{n_2})$. Then the sectional
curvature of the $\{X,Y\}$-plane vanishes,
$\ensuremath{\mathcal{R}}^g(X,Y,Y,X) = 0$. Consequently, we conclude that
$X \haken (Y \haken T) = 0$ holds.
\end{proof}
\noindent
In the simply connected and complete case we can decompose our flat metric
structure into a product of irreducible ones (de Rham decomposition
Theorem). Consequently we assume from now on that $M^n$ is a complete,
simply connected and
irreducible Riemannian manifold and that $T \neq 0$ is non-trivial. The
$\nabla$-parallel vector fields $e_1 , \ldots , e_n$ are Killing
and we immediately obtain the formulas
\begin{displaymath}
\nabla^g_{e_k} e_l \ = \ - \nabla^g_{e_l} e_k , \quad
[e_k , e_l] \ = \ 2 \, \nabla^g_{e_k} e_l \ = \ - T(e_k, e_l)
\end{displaymath}
and
\begin{displaymath}
e_k \big( \langle [e_i,e_j], e_l \rangle \big) \ = \ -
(\nabla_{e_k}T)(e_i,e_j,e_l) \ =
\ - \,\frac{1}{3} \, \sigma_T (e_i , e_j , e_l , e_k) .
\end{displaymath}
In particular, $e_k \big( \langle [e_i,e_j], e_l \rangle \big)$
is totally skew-symmetric
and the function $ \langle [e_i,e_j], e_l \rangle$ is constant if and only
if the torsion form is $\nabla$-parallel, $\nabla T = 0$ (see
\cite{DAtri&N68}, Lemma 3.3 and Proposition 3.7).
\begin{prop}[{see \cite{DAtri&N68}, Lemma 3.4}]
The Riemannian curvature tensor in the frame $e_1, \ldots , e_n$ is given by
the formula
\begin{displaymath}
\ensuremath{\mathcal{R}}^g ( e_i , e_j) e_k\ = - \, \frac{1}{4} \big[ [ e_i, e_j ] , e_k \big] .
\end{displaymath}
In particular, $ \ensuremath{\mathcal{R}}^g ( e_i , e_j ) e_k$ is a Killing vector field.
\end{prop}
\begin{proof}
We compute
\begin{eqnarray*}
\langle [e_i,e_j],[e_k,e_l] \rangle &=& 2 \, \langle [e_i,e_j] ,
\nabla^g_{e_k} e_l \rangle \ = \
- \, 2 \, \langle \nabla^g_{e_k} [e_i,e_j] , e_l \rangle + 2 e_k \big(
\langle [e_i,e_j] , e_l \rangle \big) \\
&=& - 2 \, \big\langle \nabla^g_{[e_i,e_j]}e_k + \big[ e_k , [e_i ,e_j] \big] ,
e_l \big\rangle + 2 \, e_k \big( \langle [e_i,e_j] , e_l \rangle \big) \\
&=& 2 \, \langle \nabla^g_{e_l} e_k , [e_i, e_j] \rangle +
2 \, \langle \big[ [e_i,e_j] , e_k \big] , e_l \rangle +
2 \, e_k \big( \langle [e_i,e_j] , e_l \rangle \big) \\
&=& \langle [e_l, e_k],[e_i,e_j] \rangle +
2 \, \langle \big[ [e_i,e_j] , e_k \big] , e_l \rangle +
2 \, e_k \big( \langle [e_i,e_j] , e_l \rangle \big)
\end{eqnarray*}
and we obtain the following formula
\begin{displaymath}
\langle [e_i,e_j],[e_k,e_l] \rangle \ = \ \langle \big[ [e_i,e_j],e_k \big] ,
e_l \rangle +
e_k \big( \langle [e_i,e_j] , e_l \rangle \big) .
\end{displaymath}
The required formula follows now from the Jacobi identity and the fact that
$e_k \big( \langle [e_i,e_j] , e_l \rangle \big)$ is totally skew-symmetric,
\begin{eqnarray*}
\ensuremath{\mathcal{R}}^g(e_i,e_j, e_k,e_l) &=& - \, \frac{1}{6} \langle [e_i,e_j],[e_k,e_l]
\rangle +
\, \frac{1}{12} \, \langle [e_j,e_k],[e_i,e_l] \rangle +
\frac{1}{12} \, \langle [e_k,e_i],[e_j,e_l] \rangle \\
&=& - \, \frac{1}{6} \, \langle \big[[e_i,e_j],e_k\big] ,e_l \rangle -
\frac{1}{6} \, e_k \big( \langle [e_i,e_j] , e_l \rangle \big)
+ \, \frac{1}{12} \, \langle \big[[e_j,e_k],e_i\big] ,e_l \rangle\\
&& + \frac{1}{12} \, e_i \big( \langle [e_j,e_k] , e_l \rangle \big)
+ \, \frac{1}{12} \, \langle \big[[e_k,e_i],e_j\big] ,e_l \rangle +
\frac{1}{12} \, e_j \big( \langle [e_k,e_i] , e_l \rangle \big) \\
&=& - \, \frac{1}{4} \, \langle \big[[e_i,e_j],e_k \big] , e_l \rangle . \hspace{7cm}
\qedhere
\end{eqnarray*}
\end{proof}
\begin{lem}
Let $X,Y$ be a pair of Killing vector fields such that
$\langle X,Y \rangle$ is constant and let $Z$ be a third Killing
vector field. Then
\begin{displaymath}
X \big( \langle Y , Z \rangle \big) \ = \ - \, Y \big( \langle X , Z \rangle
\big) ,
\end{displaymath}
i.\,e., $X \big( \langle Y , Z \rangle \big)$ is skew-symmetric in $X,Y$.
\end{lem}
\begin{proof}
For any vector field $W$, we obtain
\begin{displaymath}
\langle \nabla^g_X Y , W \rangle \ = \ - \langle \nabla^g_W Y,X \rangle \ = \
- \, W \big(\langle X,Y \rangle \big) + \langle Y , \nabla^g_WX \rangle \
= \ - \, \langle \nabla^g_Y X , W \rangle ,
\end{displaymath}
i.\,e., $\nabla^g_X Y = - \nabla^g_Y X$. Then the result follows,
\begin{displaymath}
X\big( \langle Y , Z \rangle \big) \, = \, \langle \nabla^g_XY , Z \rangle
+ \langle Y , \nabla^g_XZ \rangle \,
= \, - \, \langle \nabla^g_Y X , Z \rangle - \langle X , \nabla^g_Y Z \rangle
\, =\, -
Y \big( \langle X , Z \rangle \big) . \qedhere
\end{displaymath}
\end{proof}
\noindent
Denote by $R_{ijkl} = \ensuremath{\mathcal{R}}^g(e_i,e_j,e_k,e_l)$ the coefficients of the
Riemannian curvature with respect to the $\nabla$-parallel frame $e_1 , \ldots
, e_n$. Since $\big[[e_i,e_j],e_k\big]$ is a Killing vector field, the latter
Lemma reads as $e_m(R_{ijkl}) = - \, e_l(R_{ijkm})$ .
If $m$ is one of the indices $i,j,k,l$, we obtain $e_m(R_{ijkl}) = 0$
immediately. Otherwise we use in addition
the symmetry properties of the curvature tensor,
\begin{eqnarray*}
e_1(R_{2345}) &=& - \, e_5(R_{2341}) \ = \ - \, e_5(R_{1432}) \ = \
e_2(R_{1435}) \ = \ - \, e_2(R_{3541}) \\
&=& e_1(R_{3542}) \ =\ e_1(R_{4235}).
\end{eqnarray*}
Similarly one derives $e_1(R_{4235}) = e_1(R_{3425})$
and the Bianchi identity $R_{2345} + R_{4235} + R_{3425} = 0$ yields the
result, $e_1(R_{2345})= 0$. Consequently, the coefficients are constant and we
proved the following
\begin{thm}[{see \cite{Cartan&Sch26b}, formula (25), and \cite{DAtri&N68},
Theorem 3.6}]\label{curv-parallel}
The Riemannian curvature tensor $\ensuremath{\mathcal{R}}^g$ is $\nabla$- and $\nabla^g$-parallel.
In particular,
\begin{displaymath}
\big[ \, X \haken T \, , \, \ensuremath{\mathcal{R}}^g \, \big] \ = \ 0
\end{displaymath}
holds for any vector $X \in T(M^n)$.
\end{thm}
\begin{proof}
$\ensuremath{\mathcal{R}}^g$ is a polynomial depending on $T$. Consequently, $\nabla^g \ensuremath{\mathcal{R}}^g = 0$
implies $\nabla \ensuremath{\mathcal{R}}^g = 0$, see Corollary \ref{parallel-T-pol}. Hence,
the difference
\begin{displaymath}
0 \ = \ (\nabla_X \, - \, \nabla^g_X) \ensuremath{\mathcal{R}}^g \ = \
\big[ \, X \haken T \, , \, \ensuremath{\mathcal{R}}^g \, \big]
\end{displaymath}
vanishes, too.
\end{proof}
\begin{cor}
Let $(M^n,g , \nabla, T)$ be a simply connected, complete and irreducible
Riemannian manifold equipped with a flat metric connection and totally
skew-symmetric torsion $T \neq 0$. Then $M^n$ is a compact, irreducible
symmetric space. Its Ricci tensor is given by
\begin{displaymath}
\ensuremath{\mathrm{Ric}}^g(X,Y)\ =\ \frac{1}{4}\sum_{i=1}^n \langle T(X,e_i),T(Y,e_i) \rangle \ =
\ \frac{\ensuremath{\mathrm{Scal}}^g}{n} \, \langle X
, Y \rangle \ , \quad
\ensuremath{\mathrm{Scal}}^g\ = \ \frac{3}{2}\|T\|^2 .
\end{displaymath}
\end{cor}
\noindent
Since $\sigma_T$ is $\nabla^{1/3}$-parallel, there are two cases. If $\sigma_T
\equiv 0$, then the scalar products $\langle [e_i,e_j],e_k \rangle$ are constant, i.e.,
the vector fields $e_1, \ldots , e_n$ form a basis of an $n$-dimensional Lie
algebra. The corresponding simply connected Lie group is a simple, compact Lie
group and isometric to $M^n$. The torsion form of the flat connection is
defined by $T(e_k , e_l) = - \, [e_k , e_l]$ (see \cite{Kobayashi&N2}, chapter
X). \\
\noindent
The case $\sigma_T \not\equiv 0$ is more complicated. Since $\sigma_T$ is
a $4$-form, the dimension of the manifold is at least four.
Cartan/Schouten (1926) proved that only the
$7$-dimensional round sphere is possible. A different argument has
been used by
D'Atri/Nickerson (1968) and Wolf (1972), namely the
classification of irreducible, compact symmetric spaces with vanishing Euler
characteristic. This list is very short. Except for the compact, simple Lie groups,
most of them do not admit Killing vector fields of constant length. \\
\noindent
We provide now a new proof that does not use the
classification of symmetric spaces.
Consider, at any point $m \in M^n$, the Lie algebra
\begin{displaymath}
\hat{\ensuremath{\mathfrak{g}}}_T(m) \ := \ Lie \big\{\, X \haken T \, : \ X \in T_m(M^n) \,
\big\} \ \subset \ \ensuremath{\mathg{so}}(T_m(M^n)) ,
\end{displaymath}
that was introduced in \cite{Agri&F04} for the systematic investigation of
algebraic holonomy algebras. Since $T$ is $\nabla^{1/3}$-parallel, the algebras
$\hat{\ensuremath{\mathfrak{g}}}_T(m)$ are $\nabla^{1/3}$-parallel, too.
\begin{prop}
Let $(M^n,g , \nabla, T)$ be a simply connected, complete and irreducible
Riemannian manifold equipped with a flat metric connection and totally
skew-symmetric torsion $T \neq 0$. Then the representation $(\hat{\ensuremath{\mathfrak{g}}}_T(m) ,
T_m(M^n))$ is irreducible.
\end{prop}
\begin{proof}
Suppose that the tangent space splits at some point. Then any tangent space
splits and we obtain a $\nabla^{1/3}$-parallel decomposition $T(M^n) = V_1
\oplus V_2$ of the tangent bundle into two subbundles. Moreover, the torsion
form $T = T_1 \, + \, T_2$ splits into $\nabla^{1/3}$-parallel forms
$T_1 \in \Lambda^3(V_1)$ and $T_2 \in \Lambda^3(V_2)$, see \cite{Agri&F04}.
The subbundles $V_1 , V_2$ are involutive and their
leaves are totally geodesic submanifolds of $(M^n,g)$. This contradicts the
assumption that $M^n$ is an irreducible Riemannian manifold.
\end{proof}
\noindent
If the Lie algebra $\hat{\ensuremath{\mathfrak{g}}}_T \subset \ensuremath{\mathg{so}}(n)$ of a $3$-form acts irreducibly
on the Euclidean space, then there are two possibilities.
Either the $3$-form satisfies the Jacobi
identity or the Lie algebra coincides with the full algebra,
$\hat{\ensuremath{\mathfrak{g}}}_T = \ensuremath{\mathg{so}}(n)$ (see \cite{Agri&F04}, \cite{Nagy07}, \cite{OR08}).
The first case again yields the result that the manifold $M^n$ is a
simple Lie group (we recover the case of $\sigma_T = 0$). Otherwise the Lie
algebra $\hat{\ensuremath{\mathfrak{g}}}_T$ coincides with $\ensuremath{\mathg{so}}(n)$ and Theorem \ref{curv-parallel}
implies that $M^n$
is a space of positive constant curvature,
$\ensuremath{\mathcal{R}}^g = c \cdot \mathrm{Id}$.
The formula for the sectional curvature
\begin{displaymath}
K \ = \ K(X,Y)\ =\ \frac{\|T(X,Y)\|^2}{4[\|X\|^2\|Y\|^2-\langle X,Y\rangle^2]}
\end{displaymath}
means that the $3$-form $T$ defines a metric vector cross product. Such cross
products exist only in dimensions three and seven, and dimension three is
excluded since $\sigma_T \neq 0$ forces $n \geq 4$; hence the dimension of the
sphere is seven.
\begin{thm} [{see \cite{Cartan&Sch26b}}]
Let $(M^n,g , \nabla, T)$ be a simply connected, complete and irreducible
Riemannian manifold equipped with a flat metric connection and totally
skew-symmetric torsion $T \neq 0$. If $\sigma_T = 0$, then $M^n$ is isometric
to a compact simple Lie group. Otherwise ($\sigma_T \neq 0$) $M^n$ is
isometric to $S^7$.
\end{thm}
\section{The case of vectorial torsion}
\noindent
By definition, a metric connection $\nabla$ with vectorial torsion is given by
\begin{displaymath}
\nabla_X Y\ =\ \nabla^g_X Y + \langle X,Y \rangle V - \langle V,Y \rangle X
\end{displaymath}
for some vector field $V$. The general relation between the curvature
transformations for $\nabla$ and $\nabla^g$
\cite[App. B, proof of Thm 2.6(1)]{Agricola06} reduces to
\begin{displaymath}
\ensuremath{\mathcal{R}}^g(X,Y)Z\ =\ \langle X,Z\rangle\nabla_Y V
-\langle Y,Z\rangle\nabla_X V
+ Y\langle \nabla_X V+ \|V\|^2 X,Z\rangle
- X\langle \nabla_Y V+ \|V\|^2 Y,Z\rangle.
\end{displaymath}
Hence, the curvature depends not only on $V$, but also on $\nabla V$.
This remains true when considering the Ricci tensor, which does not
simplify much. However, the following claim may be read off
immediately: If $\nabla V=0$, then $M^n$ is a {\it non-compact}
space of constant
negative sectional curvature $ - \, \|V \|^2$ and the divergence of the vector
field $V$ is constant, $\delta^g(V) = (n - 1 ) \|V
\|^2 = \mathrm{const} > 0$. Moreover, the integral curves of the vector field
$V$ are geodesics in $M^n$, $\nabla^g_V V = 0$. In
\cite{Tricerri&V1}, this case is discussed in detail; in particular,
a flat metric connection with vectorial torsion is explicitly
constructed. \\
\noindent
For the general case ($\nabla V\neq 0$), the first Bianchi identity
\cite[Thm 2.6]{Agricola06} for a flat connection
\begin{displaymath}
0 \ = \ \cyclic{X,Y,Z}\ensuremath{\mathcal{R}}(X,Y)Z\ =\ \cyclic{X,Y,Z} dV(X,Y)Z
\end{displaymath}
yields an interesting consequence: For $\dim M\geq 3$, $X,Y,Z$ can be chosen
linearly independent, hence $dV=0$ and $V$ is locally a gradient field.
Observe that a routine calculation shows that $dV(X,Y)=0$ for all $X$ and
$Y$ is equivalent to $\langle \nabla^g_X V,Y\rangle =
\langle \nabla^g_Y V,X\rangle$, and one checks that the same property holds
for $\nabla^g$ replaced by $\nabla$. The triple $(M^n , g , V)$ defines a Weyl
structure, i.e., a conformal class of Riemannian metrics and a torsion-free
connection $\nabla^w$ preserving the conformal class. In general, the Weyl connection
and its curvature tensor are given by the formulas
\begin{eqnarray*}
\nabla^{w}_X Y &=& \nabla^g_X Y\,+\,\langle X\, ,\, V \rangle
\, Y\,+\,
\langle Y \, , \, V \rangle \, X \, - \, \langle X \, , \, Y \rangle
\, V \, , \\
\mathcal{R}^{\nabla}(X , Y) Z &=& \mathcal{R}^{w}(X,Y)Z \,
- \, d V(X , Y) \, Z \, .
\end{eqnarray*}
The connection $\nabla$ with vectorial torsion is flat if and only if $d V =
0$ and the Weyl connection is flat, $ \mathcal{R}^{w} = 0$.
\begin{prop}
There is a correspondence between triples $(M^n, g, \nabla), \, n \geq 3,$ of Riemannian
manifolds and flat metric connections $\nabla$ with vectorial torsion
and closed, flat Weyl structures.
\end{prop}
\noindent
In particular, if a Riemannian manifold $(M^n,g)\, , \, n \geq 3,$ admits a flat metric
connection with vectorial torsion, then it is locally conformal flat (the Weyl
tensor vanishes). Moreover, we can apply Theorem 2.1 and Proposition 2.2 of
the paper \cite{Agri&F06}. If $M^n$ is compact, then its universal covering
splits and is conformally equivalent to $S^{n-1} \times \ensuremath{\mathbb{R}}^1$.\\
\noindent
Let us discuss the exceptional dimension two. In this case, the curvature
$\mathcal{R}^{\nabla}$ is completely defined by {\it one} function, namely
$\langle \mathcal{R}^{\nabla}(e_1, e_2)e_1,e_2 \rangle$. Using the formula
for the
Riemannian curvature tensor we compute this function and then we obtain immediately
\begin{prop}
Let $(M^2,g)$ be a $2$-dimensional Riemannian manifold with Gaussian curvature
$G$. A metric connection with vectorial torsion is flat if and only if
\begin{displaymath}
G \ = \ \mathrm{div}^g(V)
\end{displaymath}
holds. In particular, if $M^2$ is compact, then $M^2$ is diffeomorphic to the
torus or the Klein bottle.
\end{prop}
\section{A family of flat connections on $S^7$}\label{fam-conn}\noindent
\subsection{Construction}
In dimension $7$, the complex $\ensuremath{\mathrm{Spin}}(7)$-representation $\Delta^\ensuremath{\mathbb{C}}_7$
is the complexification of a real $8$-dimensional representation
$\kappa: \, \ensuremath{\mathrm{Spin}}(7)\rightarrow \ensuremath{\mathrm{End}}(\Delta_7)$,
since the real Clifford algebra $\mathcal{C}(7)$ is isomorphic to
$\M(8)\oplus \M(8)$. Thus, we may identify $\ensuremath{\mathbb{R}}^8$ with the vector space
$\Delta_7$ and embed therein the sphere $S^7$ as the set of all
spinors of length one. Fix your favorite explicit realization of the
spin representation by skew matrices,
$\kappa_i:=\kappa(e_i)\in\ensuremath{\mathg{so}}(8)\subset \ensuremath{\mathrm{End}}(\ensuremath{\mathbb{R}}^8)$, $i=1,\ldots,7$.
We shall use it to define an explicit parallelization of $S^7$ by
Killing vector fields.
Define vector fields $V_1,\ldots,V_7$ on $S^7$ by
\begin{displaymath}
V_i(x)\ =\ \kappa_i \cdot x \text{ for }x\in S^7\subset \Delta_7.
\end{displaymath}
From the antisymmetry of $\kappa_1,\ldots,\kappa_7$,
we easily deduce the following properties of these vector fields:
\begin{enumerate}
\item They are indeed tangential to $S^7$, $\langle V_i (x),x\rangle = 0$.
\item They are of constant length one,
\begin{displaymath}
\langle V_i(x),V_i(x)\rangle\,
=\, \langle \kappa_i x ,\kappa_i x\rangle
\, =\, -\langle \kappa_i^2 x , x\rangle\, =\, -\langle (-1)\cdot x, x\rangle
\ =\ 1.
\end{displaymath}
\item They are pairwise orthogonal for $i\neq j$, since $\kappa_i\kappa_j$ is again skew-symmetric.
\end{enumerate}
The commutator of vector fields is inherited from the ambient space,
hence $[V_i(x),V_j(x)]=[\kappa_i,\kappa_j](x)= 2\kappa_i\kappa_j x$
for $i\neq j$.
In particular, one checks immediately that $[V_i(x),V_j(x)]$ is again
tangential to $S^7$, as it should be. Furthermore, the
vector fields $V_i(x)$ are Killing.
We now define a connection $\nabla$ on $TS^7$ by $\nabla V_i(x)=0$;
observe that this implies that all tensor fields with constant coefficients
are parallel as well.
This connection is trivially flat and metric, and its torsion is given by ($i\neq j$)
\begin{displaymath}
T(V_i,V_j,V_k)(x)\, =\, -\langle [V_i,V_j],V_k\rangle\, =\,
-2\langle \kappa_i \kappa_jx,\kappa_k x\rangle\, =\,
2\langle \kappa_i\kappa_j \kappa_k x,x\rangle.
\end{displaymath}
If $k$ is equal to $i$ or $j$, this quantity vanishes, otherwise
the laws of Clifford multiplication imply that it is antisymmetric
in all three indices. Observe that this final expression is also valid for
$i = j$, though the intermediate calculation is not.
Thus, the torsion lies in $\Lambda^3(S^7)$
as wished, and can be written as
\begin{displaymath}\tag{$*$}
T(x)\ = \ 2\sum_{i<j<k} \langle \kappa_i\kappa_j \kappa_k x,x \rangle
(V_i\wedge V_j\wedge V_k)(x)\ .
\end{displaymath}
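The construction above is easy to spot-check by machine. The following sketch is our own illustration, not part of the paper: it fixes one explicit realization of the $\kappa_i$ as Kronecker products of $2\times 2$ matrices (any other realization of the real Clifford algebra $\mathcal{C}(7)$ would do) and verifies, in exact integer arithmetic, that the $V_i(x)=\kappa_i x$ are tangential and pairwise orthogonal and that the torsion coefficients $T_{ijk}(x) = 2\langle \kappa_i\kappa_j\kappa_k x, x\rangle$ are totally antisymmetric.

```python
# Illustration (not from the paper): an explicit choice of seven pairwise
# anticommuting skew matrices kappa_i in so(8) with kappa_i^2 = -Id (a real
# representation of the Clifford algebra C(7)), and exact checks of the
# claimed properties of V_i(x) = kappa_i x and of the torsion coefficients.

I2 = [[1, 0], [0, 1]]
X2 = [[0, 1], [1, 0]]
Z2 = [[1, 0], [0, -1]]
E2 = [[0, 1], [-1, 0]]              # the only skew 2x2 factor, E2^2 = -I2

def kron(A, B):                     # Kronecker product of square matrices
    m, n = len(B), len(A) * len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def kron3(a, b, c):
    return kron(kron(a, b), c)

def mv(A, v):                       # matrix times vector
    return [sum(a * w for a, w in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# a factor A ⊗ B ⊗ C is skew with square -Id iff it contains an odd number
# of E2 factors; this particular list is pairwise anticommuting
kappa = [kron3(E2, Z2, Z2), kron3(E2, Z2, X2), kron3(E2, X2, I2),
         kron3(X2, X2, E2), kron3(Z2, X2, E2), kron3(I2, E2, I2),
         kron3(I2, Z2, E2)]

x = [1, 2, 3, 4, 5, 6, 7, 8]        # any spinor; here <x, x> = 204
V = [mv(k, x) for k in kappa]

# tangency and pairwise orthogonality of the V_i:
print(all(dot(V[i], x) == 0 for i in range(7)))                      # True
print(all(dot(V[i], V[j]) == (204 if i == j else 0)
          for i in range(7) for j in range(7)))                      # True

# total antisymmetry of the torsion coefficients:
def T(i, j, k):
    return 2 * dot(mv(kappa[i], mv(kappa[j], mv(kappa[k], x))), x)

print(all(T(i, j, k) == -T(j, i, k) == -T(i, k, j)
          for i in range(7) for j in range(7) for k in range(7)))    # True
```

Since all entries are integers, the checks are exact rather than up to floating-point tolerance.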
Since in general $\nabla_X Y = \nabla^g_XY+ T(X,Y,-)/2$, the
definition of $\nabla$ can equivalently be described for the Levi-Civita
connection $\nabla^g$ by
\begin{displaymath}
\nabla^g_{V_i}V_j\ =\ \left\{\begin{array}{ll} \kappa_i\kappa_j x &
\text{ for }i\neq j \\ 0 & \text{ for }i=j\end{array}\right. .
\end{displaymath}
$T$ is not $\nabla$-parallel, as it does not have constant
coefficients. Of course, the choice of the vector fields $V_1(x),\ldots,V_7(x)$ is
arbitrary: they can be replaced by any other orthonormal frame
$W_i(x):=A\cdot V_i(x)$ for a transformation $A\in\ensuremath{\mathrm{SO}}(7)$. However,
any $A\in \mathrm{Stab}\, T \cong G_2\subset \ensuremath{\mathrm{SO}}(7)$ will yield the same
torsion and hence connection, thus we obtain a family of
connections with $7=\dim\ensuremath{\mathrm{SO}}(7)-\dim G_2$ parameters.
\subsection{$\nabla$ as a $G_2$ connection}
The connection $\nabla$ is best understood from the
point of view of $G_2$ geometry. Recall (see \cite[Thm 4.8]{Friedrich&I1})
that a $7$-dimensional Riemannian manifold $(M^7,g)$ with a fixed $G_2$
structure $\omega\in \Lambda^3(M^7)$ admits a `characteristic' connection
$\nabla^c$
(i.\,e., a metric $G_2$ connection with antisymmetric torsion) if and only
if it is of Fernandez-Gray type
$\ensuremath{\mathfrak{X}}_1\oplus\ensuremath{\mathfrak{X}}_3\oplus\ensuremath{\mathfrak{X}}_4$ (see \cite{FG} and \cite[p.~53]{Agricola06} for
this notation).
Furthermore, if existent, $\nabla^c$ is unique, the torsion of $\nabla^c$
is given by
\begin{equation}\tag{$**$}
T^c\ = \ -*d\omega-\frac{1}{6}\langle d\omega,*\omega\rangle \omega +
*(\theta\wedge\omega),
\end{equation}
where $\theta$ is the $1$-form that describes
the $\ensuremath{\mathfrak{X}}_4$-component defined by
$\delta^g(\omega) = - \, (\theta \haken \omega)$.\\
\noindent
Now, \emph{any} generic $3$-form
$\omega\in\Lambda^3(S^7)$ that is parallel with respect to our connection
$\nabla$ admits $\nabla$ as its characteristic connection, and is related to
the torsion $T$ given in $(*)$ by the general formula $(**)$.
Thus, there is a large family of $G_2$ structures $\omega$ (namely, all
generic $3$-forms with constant coefficients) that induce
the flat connection $\nabla$ as their $G_2$ connection. Let us discuss the possible type of the $G_2$ structures $\omega$
inducing $\nabla$.
One sees immediately that none of these $G_2$ structures can be nearly
parallel (type $\ensuremath{\mathfrak{X}}_1$), since $T$ fails to be parallel.
A more elaborate argument shows that they cannot even be
cocalibrated (type $\ensuremath{\mathfrak{X}}_1\oplus \ensuremath{\mathfrak{X}}_3$): by \cite[Thm 5.4]{Friedrich&I1},
a cocalibrated $G_2$ structure on a $7$-dimensional manifold is
$\mathrm{Ric}^\nabla$-flat if and only if its torsion $T$ is harmonic.
Since $H^3(S^7,\ensuremath{\mathbb{R}})=0$, the assertion follows.
Finally, we show that the underlying $G_2$ structures cannot be
locally conformally parallel (type $\ensuremath{\mathfrak{X}}_4$) either: in \cite[Example
3.1]{Agri&F06}, we showed that such a structure always satisfies
$12\, \delta \theta=6\|T\|^2-\ensuremath{\mathrm{Scal}}^\nabla$. Since $\nabla$ is flat, the
divergence theorem implies
\begin{displaymath}
0\ = \ 2\int_{S^7}\delta \theta\,dS^7\ =\ \int_{S^7}\|T\|^2\,dS^7,
\end{displaymath}
a contradiction to $T\neq 0$. To summarize: There exists a multitude of
$G_2$ structures $\omega\in\Lambda^3(S^7)$ that admit the flat
metric connection $\nabla$ as their characteristic connection; all
these $G_2$ structures are of general type
$\ensuremath{\mathfrak{X}}_1\oplus \ensuremath{\mathfrak{X}}_3\oplus\ensuremath{\mathfrak{X}}_4$.
\section{Introduction}\label{S:intro}
Racks and quandles are algebraic structures resembling conjugation in a group
or crossing relations in a knot diagram. They are also invertible solutions of
the Yang-Baxter equation \cite{zbMATH01878448}. Quandles were first observed in
nature by Joyce \cite{zbMATH03744146} and Matveev \cite{zbMATH03828846} and
since then have been spreading through various branches of mathematics
\cite{zbMATH06343943,zbMATH07050797,zbMATH07131234}. Their most successful
applications so far have been in knot theory, where they are used to classify
knots with few crossings. A rack is a slightly more general structure
than a quandle, so that every quandle is a rack but not vice versa. Every rack
admits a maximal quandle quotient.
Although they are usually defined algebraically, as binary operations on a set,
in this paper we propose a more geometric approach. First of all, we mostly
make use of a more geometric definition, which associates to every element $x$ of a
set $X$ a bijection of $X$, subject to a certain compatibility axiom
(Definition \ref{D:geo-quandle}). Secondly, we observe that every rack is
equipped with a family of natural metrics on its connected components
(Definition \ref{D:q-metric}). In other words, a rack is a disjoint union of
metric spaces and the automorphism group of the rack acts on it by isometries
(Proposition \ref{P:iso}).
Our first result establishes a relation between the rack metric and a certain
bi-invariant metric on the group of inner automorphisms of the rack (Theorem
\ref{T:quandle-metric}). As an application of this relatively simple
observation together with the general knowledge of bi-invariant metrics on
groups, we identify families of racks and quandles for which the metric on
every connected component has finite diameter. Here is a sample result
(see Corollaries \ref{C:chevalley} and \ref{C:Lie}).
\begin{theorem}\label{T:bounded-racks}
Let $(X,\triangleright)$ be a rack. If the group of its inner automorphisms is either an
S-arithmetic Chevalley group of higher rank or a semi-simple Lie group with
finite centre then the diameter of every connected component of $(X,\triangleright)$ is
finite.
\end{theorem}
On the other hand, we show that all connected components of a
free product of quandles (satisfying a mild hypothesis) have infinite diameter
(Proposition \ref{P:free-product}). This holds, for example, for free
nontrivial racks and quandles (Corollary \ref{C:free-qr}).
We also show in Example \ref{E:knot} that the quandle metric on the quandle of
a nontrivial knot has infinite diameter.
In the second part of the paper we introduce the bounded cohomology of racks
and quandles with real coefficients. First, we relate the second bounded
cohomology to the geometry of the above metric (Corollary \ref{C:unb} and
Proposition \ref{P:unb}).
\begin{theorem}\label{T:H2b-rack}
A rack or a quandle $(X,\triangleright)$ is unbounded (Definition \ref{D:bounded})
if and only if the comparison map
$H^2_b(X;\B R)\to H^2(X;\B R)$ has nontrivial kernel.
\end{theorem}
Having introduced bounded cohomology, it is natural to test it under a suitable
amenability hypothesis. Functions that are constant on the connected components
of a rack give rise to {\it obvious} nontrivial bounded classes. If the group
of inner automorphisms of a rack is bounded and amenable then this is, in fact,
all (see Theorem \ref{T:a-bounded} for a more precise statement and Remark
\ref{R:a-quandle} for the statement for quandles).
\begin{theorem}\label{T:amenable-rq}
Let $(X,\triangleright)$ be a rack with bounded, amenable group of inner automorphisms.
Then there is an isomorphism
$$
\operatorname{Fun}_b(\pi_0(X,\triangleright)^k,\B R)\cong H^k_b(X;\B R),
$$
where $\operatorname{Fun}_b(\pi_0(X,\triangleright)^k,\B R)$ denotes the set of bounded functions on
the $k$-fold product of the space $\pi_0(X,\triangleright)$ of connected components of
the rack.
\end{theorem}
Examples of amenable bounded groups include semisimple compact Lie groups
\cite{KLM} and affine Coxeter groups \cite{zbMATH07054643,zbMATH05968646}.
Since $\operatorname{Fun}_b(\pi_0(X,\triangleright)^k,\B R)\cong H^k_b(\pi_0(X,\triangleright))$, where
$\pi_0(X,\triangleright)$ is a trivial rack (see Example \ref{E:trivial}, \ref{E:pi0}),
the above theorem is indeed a result about the triviality of the bounded
cohomology. The main difference between the above result and the corresponding
one in group theory is the boundedness hypothesis here. The above theorem is
an extension of a result by Etingof and Gra\~na, who proved an analogous
statement for finite racks \cite[Theorem 4.2]{zbMATH01878448}.
\paragraph{Acknowledgements.} I would like to thank Markus Szymik for
introducing me to racks and quandles and for answering my questions.
\section{Preliminaries}
\subsection{Racks, quandles and their metrics}
\begin{definition}[Algebraic]\label{D:quandle}
A {\em quandle} is a non-empty set $X$ together with a binary operation $\triangleright\colon X\times X\to X$
satisfying the following axioms:
\begin{enumerate}
\item[{\bf A0}] $x\triangleright - \colon X\to X$ is a bijection for every $x\in X$;
\item[{\bf A1}] $x\triangleright (y\triangleright z) = (x\triangleright y)\triangleright(x\triangleright z)$ for every $x,y,z\in X$;
\item[{\bf A2}] $x\triangleright x = x$ for every $x\in X$.
\end{enumerate}
If $(X,\triangleright)$ satisfies the first two axioms only then it is called a {\em rack}.
\end{definition}
\begin{definition}[Geometric]\label{D:geo-quandle}
Let $X$ be a non-empty set and let $\operatorname{Sym}(X)$ denote the group of all bijections of $X$.
A {\em quandle} is a map $\psi\colon X\to \operatorname{Sym}(X)$ such that
\begin{enumerate}
\item[{\bf G1}] $\psi_{\psi_x(y)} = \psi_x\circ\psi_y\circ\psi_x^{-1}$ for all $x,y\in X$;
\item[{\bf G2}] $\psi_x(x)=x$ for all $x\in X$.
\end{enumerate}
If $\psi$ satisfies the first axiom only then it is called a {\em rack}.
The equivalence of the above definitions is given by the formula
$$
\psi_x(y) = x\triangleright y
$$
and a straightforward verification that suitable axioms are equivalent.
\end{definition}
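Both definitions are concrete enough to be checked by machine on a finite set. As an illustration of ours (not part of the paper), the following sketch verifies the axioms A0, A1, A2 for the standard dihedral quandle $x\triangleright y = 2x - y \pmod n$:

```python
# Illustration (not from the paper): verify the rack/quandle axioms A0-A2
# for the dihedral quandle on Z_n, where x |> y = 2x - y (mod n).

def dihedral(n):
    return lambda x, y: (2 * x - y) % n

def is_rack(op, X):
    X = list(X)
    # A0: every left translation psi_x = op(x, -) is a bijection of X
    a0 = all(sorted(op(x, y) for y in X) == sorted(X) for x in X)
    # A1: left self-distributivity (equivalent to the geometric axiom G1)
    a1 = all(op(x, op(y, z)) == op(op(x, y), op(x, z))
             for x in X for y in X for z in X)
    return a0 and a1

def is_quandle(op, X):
    # A2 (= G2) on top of the rack axioms: x |> x = x
    return is_rack(op, X) and all(op(x, x) == x for x in X)

print(is_quandle(dihedral(5), range(5)))               # True
print(is_quandle(lambda x, y: (x + y) % 5, range(5)))  # False: A1, A2 fail
```

The second call shows that an arbitrary binary operation, here addition mod $5$, need not satisfy the axioms.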
\begin{definition}\label{D:component}
Let $(X,\triangleright)$ be a rack. Two elements $x,y\in X$ are equivalent if
there exist $x_1,\ldots,x_n\in X$ such that
$$
y = \left( \psi_{x_1}^{\pm 1}\circ \dots \circ \psi_{x_n}^{\pm 1} \right)(x).
$$
The equivalence class with respect to this equivalence relation is called
a {\em connected component} of $(X,\triangleright)$. The set of connected components
of a rack is denoted by $\pi_0(X,\triangleright)$.
\end{definition}
\begin{definition}\label{D:q-metric}
Let $X_0\subseteq X$ be a connected component of a rack. For $x,y\in X_0$ define
$$
d_0(x,y) =
\min\left\{ n\in \B N\ |\ y=\left( \psi_{x_1}^{\pm 1}\circ \dots \circ \psi_{x_n}^{\pm 1} \right)(x)\right\}.
$$
It defines a metric on $X_0$ called the {\em rack metric}.
\end{definition}
\begin{remark}
The above metric is the graph metric on the Cayley graph of the rack $(X,\triangleright)$
associated with the presentation of $(X,\triangleright)$ given by the multiplication table,
that is, the presentation in which all of $X$ serves as a generating set.
\end{remark}
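For a finite rack, Definition \ref{D:q-metric} can be evaluated by breadth-first search over the one-step moves $y\mapsto \psi_z^{\pm 1}(y)$, which computes the connected components and the metric $d_0$ at the same time. A sketch of ours (the function names are not from the paper):

```python
# Illustration (not from the paper): compute connected components and the
# rack metric of a finite rack by BFS, one step being an application of
# some psi_z or psi_z^{-1}.
from collections import deque

def rack_distances(op, X):
    X = list(X)
    psi = {z: {y: op(z, y) for y in X} for z in X}               # psi_z
    psi_inv = {z: {v: y for y, v in psi[z].items()} for z in X}  # psi_z^{-1}
    dist = {}
    for x in X:
        d = {x: 0}
        queue = deque([x])
        while queue:
            y = queue.popleft()
            for z in X:
                for nxt in (psi[z][y], psi_inv[z][y]):
                    if nxt not in d:
                        d[nxt] = d[y] + 1
                        queue.append(nxt)
        dist[x] = d        # distances from x inside its component
    return dist

# dihedral quandle on Z_6: two components, the even and the odd residues
dist = rack_distances(lambda x, y: (2 * x - y) % 6, range(6))
print(sorted(dist[0]))     # [0, 2, 4]
print(dist[0][4])          # 1
```

Dihedral quandles have very small diameter; larger examples, such as conjugation quandles of bigger groups, produce less degenerate metrics.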
\subsection{Automorphisms of a rack}
A bijective map $\alpha\colon X\to X$ is called an automorphism of a rack
$(X,\triangleright)$ if
$$
\alpha(x\triangleright y) = \alpha(x)\triangleright \alpha(y) \quad \text{for all}\ x,y\in X.
$$
The group of all automorphisms of a rack is denoted by $\operatorname{Aut}(X,\triangleright)$.
Observe that for every $x\in X$ the bijection $\psi_x$ is an automorphism.
The subgroup of $\operatorname{Aut}(X,\triangleright)$ generated by $\psi_x$ for all $x\in X$
is called the group of inner automorphisms of $(X,\triangleright)$ and it is denoted
by $G_{\psi}$ or by $\operatorname{Inn}(X,\triangleright)$. Notice that a connected component
of a rack is an orbit with respect to the action of the inner automorphism
group. For this reason a connected component is sometimes called an orbit.
\begin{lemma}\label{L:con-inv}
The subset $\psi(X)\subseteq \operatorname{Aut}(X,\triangleright)$ is invariant under conjugation.
\end{lemma}
\begin{proof}
Let $x,y\in X$ and let $\alpha\in \operatorname{Aut}(X,\triangleright)$. We have that
\begin{align*}
\left(\alpha\circ\psi_x\circ \alpha^{-1}\right)(y)
&= \alpha\left( \psi_x\left( \alpha^{-1}(y) \right) \right)\\
&= \alpha\left( x\triangleright \alpha^{-1}(y)\right)\\
&= \alpha(x)\triangleright y\\
&= \psi_{\alpha(x)}(y),
\end{align*}
which shows that $\alpha\circ\psi_x\circ\alpha^{-1}=\psi_{\alpha(x)}$.
This proves that $\psi(X)$ is invariant under conjugations by all automorphisms.
Notice that this is a strengthening of Axiom G1.
\end{proof}
\begin{proposition}\label{P:iso}
The automorphism group of a rack acts by isometries. More precisely,
if $\alpha\in \operatorname{Aut}(X,\triangleright)$ and $x,y\in X_s$, where $X_s$ denotes the connected component of $s\in X$, then
$$
d_s(x,y) = d_{\alpha(s)}(\alpha(x),\alpha(y)).
$$
\end{proposition}
\begin{proof}
Let $x,y\in X_s$ be such that $d_s(x,y)=n$. It means that there are $x_1,\ldots,x_n\in X$ such that
$$
y = \left( \psi_{x_1}^{\pm 1}\dots\psi_{x_n}^{\pm 1} \right)(x).
$$
It follows from Lemma \ref{L:con-inv} that $\alpha \circ \psi_z = \psi_{\alpha(z)}\circ \alpha$.
Applying this identity repeatedly yields the following computation.
\begin{align*}
\alpha(y) &= \alpha\left[ \left(\psi_{x_1}^{\pm 1}\dots\psi_{x_n}^{\pm 1} \right)(x)\right]\\
&= \left( \psi_{\alpha(x_1)}^{\pm 1}\dots\psi_{\alpha(x_n)}^{\pm 1} \right)(\alpha(x)),
\end{align*}
which shows that $d_{\alpha(s)}(\alpha(x),\alpha(y))\leq n$. Since $\alpha$ is invertible,
we get equality, which finishes the proof.
\end{proof}
\subsection{Representability of racks}
\begin{example}\label{E:}
Let $G$ be a group, $S\subseteq G$ a subset and $\{H_s\leq G\ |\ s\in S\}$ a family
of subgroups such that $H_s\leq Z(s)$, where $Z(s)\leq G$ denotes the centraliser of $s\in S$.
Let $X = \bigsqcup_{s\in S} G/H_s$. The rack operation on $X$ is defined by
$$
xH_s\triangleright yH_t = xsx^{-1}yH_t.
$$
It is straightforward to verify the axioms. If, moreover, $s\in H_s$ then the above
operation defines a quandle. The rack $(X,\triangleright)$ defined above
is denoted by $(G,S,\{H_s\})$.
\hfill $\diamondsuit$
\end{example}
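As a concrete instance of the construction (the code and the names in it are ours, not the paper's), take $G = S_3$, let $S$ be the set of transpositions and $H_s = \langle s\rangle \leq Z(s)$. Since $s\in H_s$, the resulting $(G,S,\{H_s\})$ is a quandle with nine elements, and its axioms can be verified by brute force:

```python
# Illustration (not from the paper): the quandle (G, S, {H_s}) for G = S_3,
# S = transpositions, H_s = <s>.  Elements are pairs (s, coset of H_s) and
# (x H_s) |> (y H_t) = x s x^{-1} y H_t.
from itertools import permutations

def compose(p, q):                   # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))                        # S_3
e = (0, 1, 2)
S = [s for s in G if s != e and compose(s, s) == e]     # the 3 transpositions
H = {s: frozenset({e, s}) for s in S}                   # H_s = <s> <= Z(s)

def coset(g, s):
    return frozenset(compose(g, h) for h in H[s])

X = list({(s, coset(g, s)) for s in S for g in G})      # disjoint union G/H_s

def op(a, b):
    (s, xHs), (t, yHt) = a, b
    x, y = next(iter(xHs)), next(iter(yHt))   # any reps; H_s <= Z(s) makes
    c = compose(compose(x, compose(s, inverse(x))), y)  # the result well defined
    return (t, coset(c, t))

print(len(X))                                           # 9 = 3 cosets per s
print(all(op(a, a) == a for a in X))                    # quandle axiom A2: True
print(all(op(a, op(b, c)) == op(op(a, b), op(a, c))
          for a in X for b in X for c in X))            # axiom A1: True
```

The representatives chosen by `next(iter(...))` are irrelevant precisely because $H_s\leq Z(s)$, which is the point of the hypothesis in the example above.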
\begin{lemma}[Joyce {\cite[Theorem 7.2]{}}]\label{L:joyce}
Every rack $(X,\triangleright)$ is isomorphic to a rack of the form $(G,S,\{H_s\})$.
\qed
\end{lemma}
\begin{remark}
Joyce proved his theorem for quandles only. The proof for racks is analogous.
\end{remark}
It follows from the above definition that $G$ acts on $(X,\triangleright)=(G,S,\{H_s\})$
by automorphisms.
Moreover, the map $\psi\colon X\to \operatorname{Aut}(X,\triangleright)$ factors through $G$
as
$$
\psi_{gH_s} = gsg^{-1}.
$$
Consequently, the group of inner automorphisms $G_{\psi}$ is a subgroup of $G$
normally generated by the subset $S$.
\begin{lemma}\label{L:inner<G}
The group $G_{\psi}$ of inner automorphisms of a rack $(G,S,\{H_s\})$ is
normally generated by the subset $S$. In particular, if $S$ normally generates
$G$ then $G=G_{\psi}$.
\qed
\end{lemma}
\subsection{The enveloping group of a rack}
\begin{definition}\label{D:env}
Let $(X,\triangleright)$ be a rack. The group $G_X$ defined by the presentation
$$
G_X = \langle X \ |\ x\triangleright y = xyx^{-1},\ x,y \in X\rangle
$$
is called the {\em enveloping group} of the rack $(X,\triangleright)$.
Notice that the constant function $X\to \{1\}\subseteq \B Z$
defines a surjective homomorphism $G_X\to \B Z$ for
any rack. Consequently, the enveloping group is always infinite.
\end{definition}
\begin{lemma}\label{L:central}
The projection $\pi\colon G_X\to G_{\psi}$ defined by
$\pi(x) = \psi_x$ is a central extension.
If, moreover, the natural map $X\to G_X$ is injective then $\ker\pi = Z(G_X)$.
\end{lemma}
\begin{proof}
The projection $\pi$ is obviously surjective. So we need to check
that its kernel is contained in the centre of $G_X$. Let $g=x_1^{\pm 1} \dots x_n^{\pm 1}\in \ker\pi$ which
means that
$$
\psi_{x_1}^{\pm 1}\circ \dots\circ \psi_{x_n}^{\pm 1}=\operatorname{Id}.
$$
Let $x\in X$. Observe that the defining relation can be written as
$$
\psi_{x_i}(x) = x_i x x_i^{-1}.
$$
It follows that
\begin{align*}
gxg^{-1}
&= \left( x_1^{\pm 1} \dots x_n^{\pm 1} \right)\cdot x \cdot \left( x_n^{\mp 1} \dots x_1^{\mp 1} \right)\\
&= \left( \psi_{x_1}^{\pm 1}\circ \dots \circ\psi_{x_n}^{\pm 1} \right)(x) \\
&= x,
\end{align*}
which shows that $g\in Z(G_X)$ since $x\in X$ was an arbitrary generator.
Let $X\to G_X$ be injective.
Let $g=x_1^{\pm 1}\dots x_n^{\pm 1}\in Z(G_X)$. Then
we have that $\left( \psi_{x_1}^{\pm 1}\circ \dots\circ \psi_{x_n}^{\pm 1}\right)(x)=x$ for
every $x\in X$ which implies that $\left( \psi_{x_1}^{\pm 1}\circ \dots\circ \psi_{x_n}^{\pm 1} \right)=\operatorname{Id}$
due to the injectivity of $X\to G_X$.
\end{proof}
\begin{remark}
The map $X\to G_X$ is never injective for racks that are not quandles. Indeed,
if $\psi_x(x)=y\neq x$ then we have that
$$
y=x\triangleright x = xxx^{-1}=x
$$
in $G_X$. It, moreover, follows that $\psi_x = \psi_{\psi_x^n(x)}$ for every $n\in \B Z$.
\end{remark}
\subsection{Conjugation-invariant norms on groups}\label{SS:word-norms}
Let $G$ be a group and let $S\subset G$ be a generating set. The {\em word norm}
associated with $S$ is defined by
$$
\|g\|_S = \min\left\{n\in \B N\ |\ g=s_1^{\pm 1}\dots s_n^{\pm 1},\ s_i\in S\right\}.
$$
If the subset $S\subseteq G$ is invariant under conjugations then the
norm is {\em conjugation-invariant}, that is,
$$
\|hgh^{-1}\|_S = \|g\|_S,
$$
holds for all $g,h\in G$. The {\em associated metric} is defined by $d_S(g,h) =\|g^{-1}h\|_S$.
It is left-invariant and
if the norm is conjugation-invariant then the metric is {\em bi-invariant}. That
is, both left and right multiplications are isometries of the metric.
A group $G$ is called {\em bounded} \cite{zbMATH05526532} if every bi-invariant
metric on $G$ has finite diameter. If a group is generated by a union of
finitely many conjugacy classes then its boundedness is equivalent to the
boundedness of the associated word metric \cite{KLM}. Examples of bounded
groups include S-arithmetic Chevalley groups of higher rank
\cite{zbMATH05936046}, semisimple Lie groups with finite centre,
diffeomorphism groups of compact manifolds and many others
\cite{KLM}.
Let $G$ be equipped with a bi-invariant word metric associated with a normally generating
set $S$.
Let $H\leq G$ be a subgroup. The {\em quotient metric} on the quotient $G/H$
is defined by the distance in $G$ between the cosets. Equivalently,
\begin{align*}
d_{S}(xH,yH)
&= \min\{d_S(xh_1,yh_2)\ |\ h_1,h_2\in H\}\\
&= \min\{d_S(x,yh)\ |\ h\in H\}\\
&= \min\{\|x^{-1}yh\|_S\ |\ h\in H\}\\
&= d_{S}(H,x^{-1}yH).
\end{align*}
In particular, $G$ acts on $G/H$ by isometries.
\begin{example}\label{E:dpsi}
It follows from Lemma \ref{L:con-inv} that $\psi(X)$ generates $G_{\psi}$ and
is invariant under conjugations by elements of ${\rm Aut}(X,\triangleright)$.
Consequently, the associated word norm $\|g\|_{\psi}$ on $G_{\psi}$ is
$\operatorname{Aut}(X,\triangleright)$-invariant (in particular, conjugation-invariant).
\hfill $\diamondsuit$
\end{example}
\section{The geometry of racks}
\subsubsection*{A characterisation of the rack metric}
With the preparations from the previous section the proof of the following
theorem is fairly obvious.
\begin{theorem}\label{T:quandle-metric}
Let $(X,\triangleright) = (G, S, \{H_s\})$ be a rack such that $S\subseteq G$
normally generates~$G$. Let $X_s = G/H_s$ be a connected component of $(X,\triangleright)$.
The rack metric $d_s$ on $X_s$ is equivalent to the quotient metric $d_S$ on $G/H_s$.
\end{theorem}
\begin{proof}
Let $x\in G$.
\begin{align*}
d(xH_s,H_s)
&= \min\left\{n\in \B N\ |\ xH_s = \left( \psi_{x_1H_{s_1}}^{\pm 1}\dots \psi_{x_nH_{s_n}}^{\pm 1} \right)(H_s)\right\}\\
&= \min\left\{n\in \B N\ |\ xH_s = x_1{s_1}^{\pm 1}x_1^{-1}\dots x_ns_n^{\pm 1}x_n^{-1}H_s\right\}\\
&= \min\left\{\|g\|_S \ |\ xH_s = gH_s\right\}\\
&= \min\left\{\|xh\|_S \ |\ h\in H_s\right\}\\
&= d_S(xH_s,H_s)
\end{align*}
Since both metrics are $G$-invariant, the above computation proves the statement.
\end{proof}
\subsubsection*{A canonical rack-quandle extension}
Let $(X,\triangleright)$ be a rack and let $y=\psi_x(x)$. We have that
$$
\psi_y = \psi_{\psi_x(x)}=\psi_x\cdot\psi_x\cdot\psi_x^{-1}=\psi_x.
$$
It follows that if $y=\psi_x^n(x)$ then $\psi_y=\psi_x$ for any $n\in \B Z$.
Call $x$ and $y$ equivalent if $y = \psi_x^n(x)$ for some $n\in \B Z$. It is
straightforward to verify that it is an equivalence relation. Let
$\underline{X}=X/\!\approx$ be the quotient. The rack product descends
to the quotient. That is,
$$
[x]\triangleright [y] = [x\triangleright y]
$$
is well defined and defines a quandle structure on $\underline{X}$.
We call $X\to \underline{X}$ the {\em canonical rack-quandle extension}.
Notice that this extension maps connected components of the rack to
connected components of the quandle.
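For the reader's convenience we spell out why the product descends to $\underline{X}$; the check uses the identity $\psi_x\psi_y\psi_x^{-1}=\psi_{\psi_x(y)}$. In the first argument, if $x'=\psi_x^n(x)$ then $\psi_{x'}=\psi_x$ as observed above, hence $x'\triangleright y = x\triangleright y$. In the second argument, if $y'=\psi_y^n(y)$ then

```latex
\begin{align*}
x\triangleright y'
 = \psi_x\left(\psi_y^n(y)\right)
 = \left(\psi_x\psi_y^n\psi_x^{-1}\right)\left(\psi_x(y)\right)
 = \psi_{\psi_x(y)}^n\left(\psi_x(y)\right)
 \approx x\triangleright y.
\end{align*}
```

Finally, $[x]\triangleright[x] = [\psi_x(x)] = [x]$, so the quotient operation is indeed a quandle structure.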
\begin{lemma}\label{L:r-ext-q}
The canonical rack-quandle extension is $1$-Lipschitz. That is,
$$
d_{\underline{X}}([x],[y]) \leq d_{X}(x,y),
$$
for every $x,y\in X$.
\end{lemma}
\begin{proof}
Suppose that $d_X(x,y)=n$. Then there exist elements $x_1,\ldots, x_n\in X$ such that
$y=\left( \psi_{x_1}^{\pm 1}\dots\psi_{x_n}^{\pm 1} \right)(x)$. It follows that
$$
[y] =\left( \psi_{[x_1]}^{\pm 1}\dots\psi_{[x_n]}^{\pm 1} \right)[x],
$$
which implies that $d_{\underline{X}}([x],[y])\leq n$.
\end{proof}
\subsubsection*{Bounded racks}
\begin{definition}\label{D:bounded}
If the diameter of the rack metric on each connected component of a rack $(X,\triangleright)$
is finite then $(X,\triangleright)$ is called {\em bounded}. Otherwise, it is called
{\em unbounded}.
\end{definition}
In the rack language this means that there exists a number $N>0$ such
that for every $x,y\in X_s$ there exist $x_1,\ldots,x_n\in X$ with $n\leq N$
such that
$$
y =x_n\triangleright (x_{n-1}\triangleright (\dots (x_1\triangleright x)\dots)).
$$
It follows from Lemma \ref{L:r-ext-q} that if the rack in the canonical
rack-quandle extension is bounded then
so is the quotient quandle. Conversely, if the quotient quandle
is unbounded then so is the rack.
\begin{corollary}\label{C:bounded}
Let $(X,\triangleright) = (G,S,\{H_s\})$ be a rack. If the group $G$ is bounded then
the rack $(X,\triangleright)$ is bounded. \qed
\end{corollary}
In what follows we specify the above corollary to classes of groups that are
well known to be bounded.
\begin{corollary}\label{C:chevalley}
Let $(X,\triangleright) =(\Gamma,S,\{H_s\})$, where $\Gamma$ is an $S$-arithmetic
Chevalley group of rank at least $2$, $S\subseteq \Gamma$ is a set of root
elements normally generating $\Gamma$ (such a set can be chosen finite) and
$H_s \subseteq Z(s)$. Then the rack $(X,\triangleright)$ is bounded.
\qed
\end{corollary}
\begin{example}\label{E:bounded}
Consider a rack
$$
\left( \operatorname{SL}(n,\B Z), E_{12}, Z(E_{12}) \right),
$$
where $n\geq 3$ and $E_{12}$ denotes the elementary matrix with entries $e_{ii}=e_{12}=1$ and
zero otherwise. Notice that the above rack is a quandle.
Since the conjugacy class of $E_{12}$ generates $\operatorname{SL}(n,\B Z)$ the above quandle
is connected. It is bounded, since $\operatorname{SL}(n,\B Z)$ is bounded for $n\geq 3$.
\hfill $\diamondsuit$
\end{example}
\begin{corollary}\label{C:Lie}
Let $(X,\triangleright)=(G,S,\{H_s\})$, where $G$ is a semisimple Lie group with finite centre,
$S$ is a normal generating set (which can be chosen finite) and $H_s\leq Z(s)$.
Then the rack metric on each connected component of $(X,\triangleright)$ has finite diameter.
\end{corollary}
\begin{example}\label{E:inoue}
Let $\C P\subseteq \operatorname{PSL}(2,\B C)$ be the quandle consisting of all parabolic elements.
It can be represented as $(\operatorname{PSL}(2,\B C),S,\{Z(s)\})$, where $S\subseteq \operatorname{PSL}(2,\B C)$
is a set of representatives of conjugacy classes of parabolic elements.
Inoue and Kabaya \cite{zbMATH06343943} used the cohomology of this quandle to compute
the complex volume of hyperbolic links. It follows from Theorem \ref{T:quandle-metric}
and boundedness of $\operatorname{PSL}(2,\B C)$ that the quandle $\C P$ is bounded.
\hfill $\diamondsuit$
\end{example}
\begin{example}\label{E:Lie}
Let $G$ be a bounded simple group and let $1\neq g\in G$. Since the conjugacy
class of $g$ generates $G$, due to simplicity, we get that the rack $(G,g, H_g)$,
where $H_g\subseteq Z(g)$,
is bounded. This, for example, holds for simple Lie groups or (the commutator subgroups of)
Higman-Thompson groups \cite{zbMATH06790225,KLM}.
\hfill $\diamondsuit$
\end{example}
All the examples above rely on the fact that the group of inner automorphisms
of the rack is bounded. However, for the boundedness of a connected component
$G/H_s$ it is enough that the embedding $H_s\subseteq G$ is {\em coarsely
surjective}. The latter means that there exists a number $N>0$ such that for
every $g\in G$ there exists $h\in H_s$ with $d_S(g,h)\leq N$.
\begin{example}\label{E:Sp}
Let $\B Z\to \widetilde{\operatorname{Sp}}(2n;\B Z)\to \operatorname{Sp}(2n;\B Z)$ be a nontrivial
central extension of the integral symplectic group. This extension is unbounded
and the inclusion of the centre is coarsely surjective
\cite{zbMATH05936046}. Consequently,
every connected component $\widetilde{\operatorname{Sp}}(2n;\B Z)/H_s$ of a rack $\left(
\widetilde{\operatorname{Sp}}(2n;\B Z),S,\{H_s\} \right)$ such that $H_s$ contains the
centre of $\widetilde{\operatorname{Sp}}(2n;\B Z)$ has finite diameter.
\hfill $\diamondsuit$
\end{example}
\subsubsection*{Unbounded racks and quandles}
\begin{example}\label{E:Zrack}
Let $(\B Z,\triangleright)$ be a rack defined by $k\triangleright \ell = \ell +1$. It is connected
and isometric to $\B Z$ with the standard metric, hence unbounded.
\hfill $\diamondsuit$
\end{example}
\begin{example}\label{E:F2}
Let $\B F_2 = \langle x,y\rangle$ be the free group on two generators.
Let $(X,\triangleright) = (\B F_2,\{x,y\},\{Z(x),Z(y)\})$ be a quandle. It is a union
of conjugacy classes of $x$ and of $y$ and each conjugacy class is a connected
component. Since $Z(x) = \langle x\rangle$ is the cyclic subgroup generated by
$x$ its inclusion into $\B F_2$ is not coarsely surjective (for example
$d_{\{x,y\}}(Z(x),y^n)=n$) and hence the quandle is unbounded.
\hfill $\diamondsuit$
\end{example}
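The equality $d_{\{x,y\}}(Z(x),y^n)=n$ asserted in the example can be verified as follows (a computation we record for completeness). Since $1\in Z(x)$ and $\|y^n\|_{\{x,y\}}\leq n$, we have $d_{\{x,y\}}(Z(x),y^n)\leq n$. For the lower bound, consider the homomorphism $\phi\colon \B F_2\to \B Z$ with $\phi(x)=0$ and $\phi(y)=1$; it maps every conjugate of $x^{\pm 1}$ or $y^{\pm 1}$ to an integer of absolute value at most $1$, hence $|\phi(w)|\leq \|w\|_{\{x,y\}}$ for all $w\in \B F_2$. Since $Z(x)=\{x^k\ |\ k\in \B Z\}$,

```latex
\begin{align*}
d_{\{x,y\}}(Z(x),y^n)
 = \min_{k\in\B Z}\left\|x^{-k}y^n\right\|_{\{x,y\}}
 \geq \min_{k\in\B Z}\left|\phi\!\left(x^{-k}y^n\right)\right|
 = n.
\end{align*}
```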
\begin{proposition}\label{P:Fn}
Let $\B F_n$ be the free group of rank $n\geq 2$ and let
$(X,\triangleright) = (\B F_n,S,\{H_s\})$, where $S\subseteq \B F_n$ is a finite normally generating subset.
Then each connected component $X_s$ of $(X,\triangleright)$ has infinite diameter.
\end{proposition}
\begin{proof}
It is enough to show that the embedding of
the centraliser $Z(g)$ of any nontrivial element $g\in \B F_n$ cannot be coarsely surjective.
First observe that the centraliser $Z(g)$ is a cyclic subgroup containing $g$.
The inclusion $Z(g)\to \B F_n$ is never coarsely surjective which can be seen as follows.
Let $\pi\colon \B F_n\to \B Z^n$ be the abelianisation. The projection is Lipschitz with constant
$1$, provided $\B Z^n$ is equipped with the word metric associated with the (finite) generating
set $\pi(S)$. This metric is Lipschitz equivalent to the standard word metric.
Let $s_1,s_2\in S$ be two generators such that their images $\pi(s_1)$ and $\pi(s_2)$ generate
a free abelian subgroup of rank $2$.
The image $\pi(Z(g))\leq \B Z^n$ is cyclic and hence the distance of either $\pi(s_1^k)$ or $\pi(s_2^k)$
from $\pi(Z(g))$ grows linearly with $k\in \B N$. Since the projection is Lipschitz the same is
true for the distance between $Z(g)$ and $s_1^k$ or $s_2^k$ in $\B F_n$.
\end{proof}
The following proposition deals with free products of quandles. See
\cite[Section 7]{zbMATH07227846} for a definition of a free product of quandles
as well as their presentations. Notice, for example, that the free quandle on
$n$ generators is the free product of $n$ copies of the trivial quandle.
\begin{proposition}\label{P:free-product}
Let $(X_i,\triangleright)$ for $i=1,2$ be quandles with finitely many connected components and
such that the maps $X_i\to G_{X_i}$ are injective.
Let $A_i\subseteq X_i$ be the set of representatives of connected components of $X_i$.
Then every connected component of the free product $X_1* X_2$ has infinite diameter.
\end{proposition}
\begin{proof}
Recall that the enveloping group $G_{X_i}$ is infinite (see Definition
\ref{D:env}). It follows from \cite[Theorem 7.2]{zbMATH07227846} that the free
product $X_1*X_2$ is isomorphic to the quandle $(G_{X_1}*G_{X_2},A_1\cup A_2)$;
the latter denotes a quandle defined in \cite[Section 4]{zbMATH07227846}.
Moreover, $G=G_{X_1}*G_{X_2}$ is the enveloping group of $X_1*X_2$ by
\cite[Lemma 7.1]{zbMATH07227846}. Furthermore, Proposition 4.3 of the same
paper implies that this quandle is isomorphic to $(G_{X_1}*G_{X_2},A_1\cup
A_2,\{Z_G(a_1),Z_G(a_2)\})$. Since both $G_{X_i}$ are infinite, their free
product has trivial centre and we have an isomorphism $G_{X_1}*G_{X_2} \cong
\operatorname{Inn}(X_1*X_2)$. Thus in order to prove the statement it suffices to show
that the inclusion of no centraliser $Z_G(a_1)$ or $Z_G(a_2)$, where $a_i\in
A_i$, is coarsely surjective.
To see this, notice that $Z_G(a_i)\leq G_{X_i}\leq G_{X_1}*G_{X_2}$ and
consider the surjective homomorphism $\varphi\colon G_{X_1}*G_{X_2} \to \B F_2
=\langle u_1,u_2\rangle$ defined by sending every generator $x\in X_i$ to $u_i$. Notice that this homomorphism
is Lipschitz with respect to the bi-invariant metrics associated with $A_1\cup
A_2$ and $\{u_1,u_2\}$, due to the finiteness of $A_i$. If the inclusion of
the centraliser $Z_G(a_i)$ was coarsely surjective then the composition with
$\varphi$ would be coarsely surjective. However, $\varphi(Z(a_i))\leq \langle
u_i\rangle$ and hence it is not coarsely surjective. This shows that every
connected component of $X_1* X_2$ has infinite diameter.
\end{proof}
\begin{example}\label{E:free}
Let $T_1=\{x_1\}$ and $T_2=\{x_2\}$ be trivial quandles.
Each has one connected component represented by $x_i$.
The enveloping group $G_{T_i}$ of $T_i$ is infinite cyclic $G_{T_i}\cong \B Z$
and hence the free product
$$
T_1*T_2 = (\B F_2=\langle x_1,x_2\rangle, \{x_1,x_2\},\{Z(x_1),Z(x_2)\}).
$$
Notice that this is the quandle considered in Example \ref{E:F2}. It follows
from Proposition~\ref{P:free-product} that both connected components
have infinite diameter.
\hfill $\diamondsuit$
\end{example}
\begin{corollary}\label{C:free-qr}
Every connected component of a free quandle $\operatorname{FQ}(X)$ has infinite diameter.
Consequently, the same is true for free racks due to Lemma \ref{L:r-ext-q}.
\qed
\end{corollary}
\begin{example}\label{E:knot}
Let $K\subseteq \B S^3$ be a non-trivial knot and let $G_K = \pi_1(\B
S^3\setminus K)$ be the fundamental group of its complement. Let $Q_K$ be the
associated quandle. Then $Q_K = (G_K,\{s\},P)$, where $s\in G_K$ is the
element represented by the meridian of $K$ and $P$ is the image of
$\pi_1(\partial U)\to G_K$, where $U\subseteq \B S^3$ is a tubular
neighbourhood of $K$ \cite[Corollary 16.2]{zbMATH03744146}.
Notice that $Q_K$ is connected.
Fujiwara \cite[Theorem 1.6]{zbMATH01060878} proved that the second bounded
cohomology of $G_K$ is infinite dimensional. Since $P$ is abelian, it follows
that there is a non-trivial homogeneous quasi-morphism $q\colon G_K\to \B R$
vanishing on $P$. Let $g\in G_K$ be an element such that $q(g)>0$. Then for any
$h\in P$ we have that
$$
d(g^n,h) = \|g^nh^{-1}\| \geq \tfrac{1}{C}\,q(g^nh^{-1}) \geq \tfrac{1}{C}\left(nq(g)-D\right)
$$
is arbitrarily large for a large $n\in \B N$. We used here the fact that
quasimorphisms are Lipschitz with respect to conjugation-invariant norms on
normally finitely generated groups and $C>0$ above is the Lipschitz constant;
$D\geq 0$ is the defect of $q$. This shows that the inclusion $P\subseteq G_K$
is not coarsely surjective and hence the quandle metric has infinite diameter.
\hfill $\diamondsuit$
\end{example}
\section{Bounded cohomology of racks and quandles}\label{S:bcrq}
\begin{remark}
In this paper we consider only the cohomology with real coefficients,
considered as a trivial module. For a general definition of quandle
or rack cohomology see, for example,~\cite{zbMATH01716035,zbMATH01878448}.
\end{remark}
\subsubsection*{Definition of bounded cohomology}
Let $(X,\triangleright)$ be a rack.
Recall that the {\em rack cochain complex} $C^*(X;\B R)$ with real coefficients
is the complex in which $C^k(X;\B R)$ consists of functions
$f\colon X^k\to \B R$ and
the differential is given by
\begin{align*}
\delta f(x_1,x_2,\ldots,x_{k+1}) &=
\sum_{i=1}^{k} (-1)^{i-1}f(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_{k+1}) \\
&- \sum_{i=1}^{k} (-1)^{i-1}f(x_1,\ldots,x_{i-1},x_i\triangleright x_{i+1},\ldots,x_i\triangleright x_{k+1}).
\end{align*}
The cohomology $H^*(X;\B R)$ of the above complex
is called the real {\em rack cohomology} of $(X,\triangleright)$.
The subcomplex $C^*_b(X;\B R)\subseteq C^*(X;\B R)$ consisting
of {\it bounded} functions defines the (real) {\em rack bounded cohomology}
of $(X,\triangleright)$ denoted by $H^*_b(X;\B R)$.
The inclusion $C^*_b(X;\B R)\subseteq C^*(X;\B R)$ induces a homomorphism
$$
H^*_b(X;\B R)\to H^*(X;\B R),
$$
called the {\em comparison} map.
The {\em quandle (bounded) cohomology} is defined as above for the subcomplex
consisting of (bounded) functions $f\colon X^k\to \B R$ satisfying the
additional condition that
\begin{equation}
f(x_1,\ldots,x_k) = 0 \text{ if } x_i=x_{i+1} \text{ for some } i=1,\ldots,k-1.
\label{Eq:q-complex}
\end{equation}
For simplicity we use the same notation for the rack and the quandle
cohomology, with the convention that if $(X,\triangleright)$ is a rack (respectively
a quandle) then $H^*_b(X;\B R)$ denotes its rack (respectively quandle) cohomology.
\begin{remark}
The reason that the quandle cochain complex is smaller is to discard {\it
redundant} cohomology arising from inclusions of singletons which, in the case
of quandles, are retracts. More precisely, if $x\in X$ then $p\colon X\to
\{x\}$ is a morphism of quandles that has a section $i\colon \{x\}\to X$. The
rack cohomology of the trivial rack $\{x\}$ is isomorphic to $\B R$ in each
degree. Taking into account the retracts for every point of the quandle makes
its rack cohomology unnecessarily enormous; see also \cite[Remark
2.6]{zbMATH07050797}.
\end{remark}
\paragraph{Convention:} {\it In order to make the paper less cumbersome, in
what follows, we present examples, arguments and proofs mostly for racks. All
arguments carry over almost verbatim for quandles and their bounded cohomology.
If necessary, a separate statement will be given for quandles. }
\begin{example}\label{E:H1b}
A (bounded) one-cocycle $f\colon X\to \B R$ is constant on each connected
component of $(X,\triangleright)$. Indeed, the defining property yields
$$
0 = \delta f(x,y) = f(y) - f(x\triangleright y),
$$
which implies that $f(x) = f\left( \left( \psi_{x_1}\dots \psi_{x_n} \right)(x) \right)$
for every $x,x_1,\ldots,x_n\in X$. Consequently,
$H^1_b(X;\B R) = \operatorname{Fun}_b(\pi_0(X,\triangleright))$ and
$H^1(X;\B R) = \operatorname{Fun}(\pi_0(X,\triangleright))$.
Notice that a one-cocycle is a rack homomorphism, where $\B R$ is
considered as a trivial rack.
\hfill $\diamondsuit$
\end{example}
\begin{example}\label{E:trivial}
Let $(X,\triangleright)$ be the trivial rack. That is $x\triangleright y=y$ for all
$x,y\in X$ or, equivalently, $\psi\colon X\to \operatorname{Sym}(X)$ is constant
equal to the identity. It is immediate to verify that
the differential in the rack cochain complex is identically zero which
implies that $H^k_b(X;\B R)$ is isomorphic to the space
$\operatorname{Fun}_b(X^k;\B R)$ of bounded functions on $X^k$. If $(X,\triangleright)$ is
a quandle then its quandle bounded cohomology of degree $k$
is isomorphic to the space $\underline{\operatorname{Fun}}_b(X^k,\B R)$ of
bounded functions which are zero on the elements $(x_1,\ldots,x_k)$
such that $x_i=x_{i+1}$ for some $i=1,\ldots,k-1$.
\hfill $\diamondsuit$
\end{example}
\begin{example}\label{E:pi0}
If $(X,\triangleright)$ is a rack then the set $\pi_0(X,\triangleright)$ of its connected components
is a trivial rack with respect to the operation induced from $(X,\triangleright)$. That
is,
$$
\pi(x)\triangleright \pi(y) = \pi(x\triangleright y) = \pi(y).
$$
The latter also means that the
projection $\pi\colon X\to \pi_0(X,\triangleright)$ is a morphism of racks.
Hence, it induces a homomorphism
$$
\pi^*\colon H^k_b(\pi_0(X);\B R)=\operatorname{Fun}_b(\pi_0(X)^k,\B R)\to H^k_b(X;\B R).
$$
There is an analogous homomorphism on quandles defined on
$\underline{\operatorname{Fun}}_b(X^k,\B R)$.
\hfill $\diamondsuit$
\end{example}
The space of cochains $C^k_b(X;\B R)$ is an $\operatorname{Aut}(X,\triangleright)$-module.
Indeed, let $f\colon X^k\to \B R$ be a cochain.
If $\alpha\in \operatorname{Aut}(X,\triangleright)$ is an automorphism of $(X,\triangleright)$ then let
$f\cdot\alpha\colon X^k\to \B R$ be defined by
$$
(f\cdot \alpha)(x_1,\ldots,x_k) = f(\alpha(x_1),\ldots,\alpha(x_k)).
$$
Thus $C^k(X;\B R)$ is a right $\operatorname{Aut}(X,\triangleright)$-module.
\begin{lemma}\label{L:Gpsi-mod}
The differential $\delta\colon C^k(X;\B R)\to C^{k+1}(X;\B R)$
is a map of $\operatorname{Aut}(X,\triangleright)$-modules.
\end{lemma}
\begin{proof}
We apply the identity $\alpha\circ \psi_x=\psi_{\alpha(x)}\circ \alpha$
from Lemma \ref{L:con-inv} in the following computation.
\begin{align*}
(\delta(f\cdot \alpha))(x_1,\ldots,x_{k+1})
&=\sum_{i=1}^k (-1)^{i-1}f(\alpha(x_1),\ldots,\alpha(x_{i-1}),\alpha(x_{i+1}),
\ldots \alpha(x_{k+1}))\\
&-\sum_{i=1}^k (-1)^{i-1} f(\alpha(x_1),\ldots,\alpha(x_{i-1}),\alpha(\psi_{x_i}(x_{i+1})),
\ldots \alpha(\psi_{x_i}(x_{k+1})))\\
&=\sum_{i=1}^k (-1)^{i-1}f(\alpha(x_1),\ldots,\alpha(x_{i-1}),\alpha(x_{i+1}),
\ldots \alpha(x_{k+1}))\\
&-\sum_{i=1}^k (-1)^{i-1} f(\alpha(x_1),\ldots,\alpha(x_{i-1}),\psi_{\alpha(x_i)}(\alpha(x_{i+1})),
\ldots, \psi_{\alpha(x_i)}(\alpha(x_{k+1})))\\
&=((\delta f)\cdot {\alpha})(x_1,\ldots,x_{k+1}).
\end{align*}
\end{proof}
\begin{corollary}\label{C:}
The $\operatorname{Aut}(X,\triangleright)$-invariant cochains form a subcomplex
$C^*_{b}(X;\B R)^{\rm Aut}$.\qed
\end{corollary}
By restricting the action to the inner automorphism group
$G_{\psi}\leq \operatorname{Aut}(X,\triangleright)$ we also obtain a subcomplex
of $G_{\psi}$-invariant cochains, whose cohomology will be
called the {\em invariant bounded} cohomology of $(X,\triangleright)$.
It will be denoted by $H^*_{b,\rm inv}(X;\B R)$.
The inclusion of the complex induces the homomorphism
$$
H^*_{b,\rm inv}(X;\B R)\to H^*_b(X;\B R).
$$
\begin{lemma}\label{L:f=fz}
If $f\colon X^k\to \B R$ is a cocycle then $f$ and $f\cdot g$ are cohomologous
for every $g\in G_{\psi}$. Consequently, $H^*_b(X;\B R)$ is a trivial
$G_{\psi}$-module.
\end{lemma}
\begin{proof}
Given $z\in X$, let $f_z\in C^{k-1}(X;\B R)$ be defined by
$$
f_z(x_1,\ldots,x_{k-1}) = f(z,x_1,\ldots,x_{k-1}).
$$
The following equalities are straightforward to verify. The second one follows
because $f$ is a cocycle.
\begin{align*}
\delta (f_{z})(x_1,\ldots,x_k)
&= (f - f\cdot {\psi_z})(x_1,\ldots,x_k) - \delta f(z,x_1,\ldots,x_k)\\
&= (f-f\cdot{\psi_z})(x_1,\ldots,x_k).
\end{align*}
Since the elements $\psi_z$ generate $G_{\psi}$, we may write $g=\psi_{x_1}\dots\psi_{x_n}$ and compute
\begin{align*}
f-f\cdot g
&= f - f\cdot \psi_{x_1}\dots \psi_{x_n}\\
&= f - f\cdot \psi_{x_1} + f\cdot \psi_{x_1}
- f\cdot \psi_{x_1}\psi_{x_2} + f\cdot \psi_{x_1}\psi_{x_2} -\dots
-f\cdot \psi_{x_1}\dots \psi_{x_n}\\
&=\delta(f_{x_1}) + \delta\left( (f\cdot\psi_{x_1})_{x_2} \right)
+\dots+ \delta\left( (f\cdot\psi_{x_1}\dots\psi_{x_{n-1}})_{x_n} \right)
\end{align*}
which proves the statement for every $g\in G_{\psi}$.
\end{proof}
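As a sanity check of the identity $\delta(f_z)=(f-f\cdot\psi_z)-(\delta f)(z,-)$ used in the proof above, we record the computation in degree $k=2$. For $f\colon X^2\to \B R$ and $f_z(x)=f(z,x)$ we have, on the one hand,

```latex
\begin{align*}
\delta(f_z)(x,y) &= f_z(y) - f_z(x\triangleright y) = f(z,y) - f(z,x\triangleright y),
\intertext{and, on the other hand, using
$(f\cdot\psi_z)(x,y)=f(z\triangleright x,\,z\triangleright y)$,}
(f - f\cdot\psi_z)(x,y) - \delta f(z,x,y)
 &= f(x,y) - f(z\triangleright x,\,z\triangleright y)\\
 &\quad - f(x,y) + f(z\triangleright x,\,z\triangleright y) + f(z,y) - f(z,x\triangleright y)\\
 &= f(z,y) - f(z,x\triangleright y).
\end{align*}
```

Both sides agree, as claimed.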
\section{Rack quasimorphisms and unboundedness}
\begin{definition}\label{D:qqmor}
A {\em rack quasimorphism} is a function $f\colon X\to \B R$ for which
there exists $D\geq 0$ such that
$$
|f(y) - f(x\triangleright y)| \leq D,
$$
for all $x,y\in X$. If $(X,\triangleright)$ is a quandle then $f$ will be called
a quandle quasimorphism (definition is the same).
\end{definition}
\begin{lemma}\label{L:H2b}
If $f\colon X\to \B R$ is a rack quasimorphism then $\delta f$ is a
two-cocycle. The class $[\delta f]\in H^2_b(X;\B R)$ is in the kernel of
the comparison map.
If $f$ is unbounded on a connected component of $(X,\triangleright)$ then
the class $[\delta f]$ is nontrivial.
\end{lemma}
\begin{proof}
The first two statements are obvious. Notice that if $(X,\triangleright)$ is a quandle
then $\delta f$ is a quandle cocycle, since $\delta f(x,x) = f(x) - f(\psi_x(x))=0$.
Suppose that $f$ is unbounded on a connected component of $(X,\triangleright)$. If
$[\delta f]=0$ in $H^2_b(X;\B R)$ then $\delta f=\delta\beta$ for some bounded
cochain $\beta\colon X\to \B R$. It follows that $\delta(f-\beta)=0$, that is
$f-\beta$ is an ordinary one-cocycle and hence it has to be constant on each
connected component of $(X,\triangleright)$, which is impossible, because $f$ is unbounded
and $\beta$ is bounded.
\end{proof}
\begin{example}\label{E:qqm-F2}
Let $\B F_2=\langle x,y\rangle$ be a free group on two generators.
Let $\varphi\colon \B F_2\to \B R$ be a non-trivial homogeneous group quasimorphism.
Let $(X,\triangleright)= (\B F_2,\{x,y\},\{Z(x),Z(y)\})$ be a free quandle on two generators
and let $\widehat{\varphi}\colon X\to \B R$ be defined by
$$
\widehat{\varphi}(gxg^{-1}) = \varphi(g'),
$$
where $g=g'x^k$ and $g'$ is a reduced word finishing with $y$. Define
$\widehat{\varphi}(gyg^{-1})$ analogously.
Since $\varphi$ is a homogeneous quasimorphism on $\B F_2$ with defect
$D\geq 0$, it is constant on conjugacy classes and hence bounded, say by
$B>0$, on $C(x^{\pm 1})\cup C(y^{\pm 1})$. Let $s\in \B F_2$ be a conjugate of a generator.
We thus have that
\begin{align*}
|\widehat{\varphi}(s\triangleright gxg^{-1}) - \widehat{\varphi}(gxg^{-1})|
&= |\widehat{\varphi}(sgxg^{-1}s^{-1}) - \varphi(g')|\\
&=|\varphi(sg') - \varphi(g')|\\
&\leq |\varphi(s)| + D\\
&\leq B+D,
\end{align*}
which shows that $\widehat{\varphi}$ is an unbounded quandle quasimorphism
(a similar computation is done for $gyg^{-1}$).
\hfill $\diamondsuit$
\end{example}
\begin{proposition}\label{P:qqm}
A rack quasimorphism $f\colon X\to \B R$ is Lipschitz with respect
to the rack metric. More precisely, there exists a constant $C>0$
such that
$$
|f(x) - f(y)|\leq Cd(x,y)
$$
for any $x,y\in X_s$, where $X_s\subseteq X$ is a connected component.
\end{proposition}
\begin{proof}
Let $x,y\in X_s$ be such that $d(x,y)=n$. It means that
there exist $x_1,\ldots,x_n\in X$ such that
$y= \left( \psi_{x_1}\dots \psi_{x_n} \right)(x)$.
By applying inductively the defining property $n$ times we get the following
estimate which proves the statement.
\begin{align*}
|f(y)-f(x)|
&=\left |f\left( \left( \psi_{x_1}\dots\psi_{x_n} \right)(x) \right) -f(x)\right |\\
&=\left |f\left( \left( \psi_{x_1}\dots\psi_{x_n} \right)(x) \right)
-f\left( \left( \psi_{x_2}\dots\psi_{x_n} \right)(x) \right)
+f\left( \left( \psi_{x_2}\dots\psi_{x_n} \right)(x) \right)
-f(x)\right |\\
&\leq D + \left|f\left( \left( \psi_{x_2}\dots\psi_{x_n} \right)(x) \right)
-f(x)\right |\\
&\leq \dots
\leq Dn
= Dd(x,y).
\end{align*}
\end{proof}
\begin{corollary}\label{C:unb}
If a rack $(X,\triangleright)$ admits a quasimorphism that is unbounded
on a connected component $X_s$ then this component is unbounded.
Equivalently, if the kernel of the comparison map $H^2_b(X;\B R)\to H^2(X;\B R)$
is nontrivial then $(X,\triangleright)$ is unbounded.
\qed
\end{corollary}
\begin{proposition}\label{P:unb}
If $(X,\triangleright)$ is unbounded then the kernel of the comparison map
$H^2_b(X;\B R)\to H^2(X;\B R)$ is nontrivial. In particular,
$H^2_b(X;\B R)\neq 0$.
\end{proposition}
\begin{proof}
Let $x_s\in X_s$ be a basepoint fixed for each connected component
$X_s$ of $(X,\triangleright)$. Let $f\colon X\to \B R$ be defined by
$$
f(x) = d(x_s,x)
$$
if $x\in X_s$. It is unbounded according to the hypothesis. Then
$$
|\delta f(x,y)| = |f(y) - f(\psi_x(y))| = |d(x_s,y) - d(x_s,\psi_x(y))|\leq 1,
$$
by the triangle inequality, since $d(y,\psi_x(y))\leq 1$.
The cocycle $\delta f$ is non-zero for a non-trivial rack. Its bounded
cohomology class is non-zero by an argument analogous to the one in
Lemma \ref{L:H2b}.
\end{proof}
\section{Amenability}
This section is motivated by the result in group theory that states that the
bounded cohomology of an amenable group is trivial \cite[Section
3.0]{zbMATH03816552}. We prove that the bounded cohomology of a rack or a
quandle $(X,\triangleright)$ with amenable inner automorphism group is as trivial as if
$(X,\triangleright)$ were finite. Our proof follows almost verbatim the proof
of the analogous statement for finite racks by Etingof-Gra\~na
\cite[Theorem 4.2]{zbMATH01878448}.
\begin{definition}\label{D:amenable}
A group $G$ is called {\em amenable} if there exists a functional
$\B m\colon\ell^{\infty}(G)\to \B R$ such that
\begin{enumerate}
\item $\B m(1)=1$;
\item if $\varphi\geq 0$ then $\B m(\varphi) \geq 0$;
\item $\B m(\varphi\circ R_h) = \B m(\varphi)$ for every $h\in G$, where
$R_h\colon G\to G$ denotes the right translation $g\mapsto gh$.
\end{enumerate}
\end{definition}
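As a first instance of Definition \ref{D:amenable}, every finite group is amenable: averaging over the group provides a right-invariant mean (a standard fact, recorded here for concreteness):

```latex
\begin{align*}
\B m(\varphi) = \frac{1}{|G|}\sum_{g\in G}\varphi(g),
\qquad
\B m(\varphi\circ R_h)
 = \frac{1}{|G|}\sum_{g\in G}\varphi(gh)
 = \frac{1}{|G|}\sum_{g'\in G}\varphi(g')
 = \B m(\varphi),
\end{align*}
```

by the substitution $g'=gh$. Abelian groups are amenable as well, while nonabelian free groups are not.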
Let $(X,\triangleright)$ be a rack and assume that the group $G_{\psi}$ of
its inner automorphisms is amenable.
Let $P\colon C^k_b(X;\B R)\to C^k_{b,\rm inv}(X;\B R)$
be defined by
$$
P(f)(x_1,\ldots,x_k) = \B m(g\mapsto f(g(x_1),\ldots,g(x_k))).
$$
\begin{lemma}\label{L:proj}
Let $(X,\triangleright)$ be a rack with amenable $G_{\psi}$.
The projection $P$ is a morphism of complexes and it
induces a surjective homomorphism $P\colon H^*_b(X;\B R)\to H^*_{b,\rm inv}(X;\B R)$.
The map $\iota\colon H^*_{b,\rm inv}(X;\B R)\to H^*_b(X;\B R)$
induced by the inclusion of complexes is the right inverse of $P$.
\end{lemma}
\begin{proof}
First observe that $P(f)$ is an invariant cochain.
\begin{align*}
(Pf)(hx_1,\ldots,hx_k)
&= \B m(g\mapsto f(ghx_1,\ldots,ghx_k))\\
&= \B m(g\mapsto f(gx_1,\ldots,gx_k))\\
&= (Pf)(x_1,\ldots,x_k),
\end{align*}
because the mean $\B m$ is right-invariant.
Secondly, the projection $P$ commutes with differential.
\begin{align*}
P(\delta f)(x_1,\ldots,x_{k+1})
&=\B m(g\mapsto \delta f(gx_1,\ldots,gx_{k+1}))\\
&=
\sum_{i=1}^k (-1)^{i-1}\B m(g\mapsto f(gx_1,\ldots,gx_{i-1},gx_{i+1},\ldots,gx_{k+1}))\\
&-\sum_{i=1}^k (-1)^{i-1}\B m(g\mapsto f(gx_1,\ldots,gx_{i-1},\psi_{gx_i}gx_{i+1},\ldots,
\psi_{gx_i}gx_{k+1}))\\
&=
\sum_{i=1}^k (-1)^{i-1}\B m(g\mapsto f(gx_1,\ldots,gx_{i-1},gx_{i+1},\ldots,gx_{k+1}))\\
&-\sum_{i=1}^k (-1)^{i-1}\B m(g\mapsto f(gx_1,\ldots,gx_{i-1},g\psi_{x_i}x_{i+1},\ldots,
g\psi_{x_i}x_{k+1}))\\
&=
\sum_{i=1}^k (-1)^{i-1} Pf(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_{k+1})\\
&-\sum_{i=1}^k (-1)^{i-1}Pf(x_1,\ldots,x_{i-1},\psi_{x_i}x_{i+1},\ldots,\psi_{x_i}x_{k+1})\\
&=\delta (Pf)(x_1,\ldots,x_{k+1}).
\end{align*}
The third equality follows from the identity $\psi_{gx}=g\psi_x g^{-1}$.
If $f$ is an invariant cocycle then it is immediate that $P(f)=f$ and
hence $P\circ \iota=\operatorname{Id}$ which implies the surjectivity of $P$ and
the last statement.
\end{proof}
\begin{theorem}\label{T:a-bounded}
Let $(X,\triangleright)$ be a rack with amenable group $G_{\psi}$ of inner automorphisms.
If the conjugation-invariant word norm on $G_{\psi}$ associated with
$\psi(X)$ has finite diameter then:
\begin{enumerate}
\item
the projection
$$
P\colon H^*_b(X;\B R)\to H^*_{b,\rm inv}(X;\B R)
$$
is an isomorphism;
\item
the homomorphism (from Example \ref{E:pi0})
$$
\pi^*\colon\operatorname{Fun}_b(\pi_0(X)^k,\B R)\to H^k_b(X;\B R)
$$
is an isomorphism.
\end{enumerate}
Consequently we have isomorphisms
$$
\operatorname{Fun}_b(\pi_0(X)^k,\B R)\cong H^k_{b,\rm inv}(X;\B R)\cong H^k_b(X;\B R).
$$
\end{theorem}
\begin{remark}\label{R:a-quandle}
The statement for quandles is the same except that the
space of bounded functions is replaced by
the subspace $\underline{\operatorname{Fun}}_b(\pi_0(X,\triangleright)^k,\B R)$
from Example \ref{E:trivial}.
\end{remark}
\begin{proof}
We first prove that the projection
$P\colon H^*_b(X;\B R)\to H^*_{b,\rm inv}(X;\B R)$
is an isomorphism.
Since the associated word metric on $G_{\psi}$ has finite
diameter, there exists $N\in \B N$ such that every $g\in G_{\psi}$ can
be written as $g=\psi_{x_1}\dots\psi_{x_n}$ for
some $x_1,\ldots,x_n\in X$ and $n\leq N$. Let $f\colon X^k\to \B R$ be
a bounded cocycle. We know from Lemma \ref{L:f=fz} that $f-f\circ \psi_x = \delta(f_x)$.
Applying this identity inductively, we get that
$$
f\cdot g
= f - \delta\left( (f_{x_1}) + (f\cdot \psi_{x_1})_{x_2} + \dots
+ (f\cdot \psi_{x_1}\dots \psi_{x_{n-1}})_{x_n} \right).
$$
Let $\alpha_g = (f_{x_1}) + (f\cdot \psi_{x_1})_{x_2} + \dots + (f\cdot \psi_{x_1}\dots \psi_{x_{n-1}})_{x_n}$
(notice that a choice of a decomposition of $g$ is embedded in this definition). Since $f$ is bounded
and $n\leq N$, the cochain $\alpha_g$ is bounded, uniformly with respect to $g$.
We claim that $P(f)$ and $f$ are homologous. Indeed,
\begin{align*}
P(f)(x_1,\ldots,x_k)
&= \B m(g\mapsto (f\cdot g)(x_1,\ldots,x_k))\\
&= \B m(g\mapsto (f - \delta\alpha_g)(x_1,\ldots,x_k))\\
&= f(x_1,\ldots,x_k) - \B m(g\mapsto \delta\alpha_g(x_1,\ldots,x_k))\\
&= f(x_1,\ldots,x_k) - \delta \B m(g\mapsto \alpha_g(x_1,\ldots,x_k)).
\end{align*}
Since $\alpha_g$ is uniformly bounded, the cochain $(x_1,\ldots,x_k)\mapsto \B m(g\mapsto \alpha_g(x_1,\ldots,x_k))$
is well defined and bounded.
This shows that if $P[f] = 0$ then, since $P(f)$ and $f$ are homologous,
$[f]=0$, which proves the injectivity of $P$ on cohomology. The surjectivity was
shown in Lemma \ref{L:proj}.
Now we prove the second statement. We start with the injectivity of $\pi^*$
and the proof is by induction with respect to the degree.
The statement is clear in degree zero. Assume it is true for all degrees
smaller than $k$ and let $f\in \operatorname{Fun}_b(\pi_0(X,\triangleright)^k,\B R)$ be such
that $\pi^*(f)=\delta \alpha$ for some $\alpha \in C^{k-1}_b(X;\B R)$.
It follows from the first part that we can take $\alpha$ to be an invariant
cochain which implies that
$$
(\pi^*f)_x=(\delta \alpha)_x = - \delta(\alpha_x),
$$
for every $x\in X$. However, $(\pi^*f)_x = \pi^*(f_{[x]})$, where
$f_{[x]}\colon \pi_0(X,\triangleright)^{k-1}\to \B R$ is defined analogously.
The induction hypothesis implies that $f_{[x]}=0$ for every $x\in X$
which proves the statement.
In order to prove surjectivity assume that the mean $\B m$ on
$G_{\psi}$ is bi-invariant. Such a mean always exists on an amenable
group $G$ \cite[Exercise 1.26]{zbMATH00193576} which can be seen by taking
the left-right action of $G\times G$ on the compact convex set of
means on $G$. Since $G\times G$ is amenable as well, this action
has a fixed point.
The second ingredient of the proof is the cochain complex decomposition
$$
C^*_b(X;\B R)=C_{b,\rm inv}^*(X;\B R)\oplus (1-P)C^*_b(X;\B R);
$$
notice that the second summand is acyclic.
Let $[f]\in H^k_b(X;\B R)$ be any element. We can assume it is represented
by an invariant cocycle $f$. The invariance of $f$ implies
that $f_x$ is a cocycle for every $x\in X$. Consider the decomposition
$$
f_x = P(f_x) + (1-P)(f_x)
$$
for every $x\in X$. Let $f^+,f^-\in C_b^k(X;\B R)$ be cochains such that
$$
f^+_x = P(f_x)\qquad\text{and}\qquad f^-_x = (1-P)(f_x),
$$
for every $x\in X$. The aim is to show that $f^+$ is an invariant cocycle
homologous to $f$. We first show the invariance, starting with
the following computation. Let $h\in G_{\psi}$ and $x\in X$ be any elements.
\begin{align*}
f^+_{hx}(x_2,\ldots,x_k)
&= P(f_{hx})(x_2,\ldots,x_k)\\
&= \B m(g\mapsto f_{hx}(gx_2,\ldots,gx_k))\\
&= \B m(g\mapsto f_{hx}(hgx_2,\ldots,hgx_k))
&\text{\tt by left-invariance of $\B m$}\\
&= \B m(g\mapsto f(hx, hgx_2,\ldots,hgx_k))\\
&= \B m(g\mapsto f(x, gx_2,\ldots,gx_k)) &\text{\tt by invariance of $f$}\\
&= P(f_x)(x_2,\ldots,x_k)\\
&= f^+_x(x_2,\ldots,x_k).
\end{align*}
It follows that
\begin{align*}
(f^+\cdot h)_x &= f^+_{hx}\cdot h &\text{\tt obvious}\\
&= f^+_x\cdot h &\text{\tt by the previous computation}\\
&= P(f_x)\cdot h\\
&= P(f_x) &\text{\tt by the invariance of $P(f_x)$}\\
&= f^+_x.
\end{align*}
The invariance of $f^+$ is now clear:
\begin{align*}
f^+(hx_1,\ldots,hx_k)
&= f^+_{hx_1}(hx_2,\ldots,hx_k)= f^+_{x_1}(hx_2,\ldots,hx_k)\\
&= (f^+_{x_1}\cdot h)(x_2,\ldots,x_k)
= f^+_{x_1}(x_2,\ldots,x_k) = f^+(x_1,\ldots,x_k).
\end{align*}
Next observe that $f^+$ is a cocycle:
\begin{align*}
\delta f^+(x,x_1,\ldots,x_{k})
&= f^+(x_1,\ldots,x_{k}) - f^+(\psi_{x}x_1,\ldots,\psi_{x}x_{k})
- \delta f^+_x(x_1,\ldots,x_k) =0.
\end{align*}
The sum of the first two terms vanishes by the invariance of $f^+$, and
$\delta f^+_x = \delta P(f_x)=0$ because $f_x$ is a cocycle,
since $f$ is an invariant one.
We obtain that both summands in the decomposition $f=f^+ + f^-$
are invariant cocycles, since $f$ and $f^+$ are.
Next we show that $f^-$ is a coboundary. Let $\alpha\in C^{k-1}_b(X;\B R)$
be such that $\delta(\alpha_x) = f^-_x$ for every $x\in X$. We have
that
$$
\delta\left( (\alpha \cdot g)_x \right)
= \delta(\alpha_{gx}\cdot g)
= \delta(\alpha_{gx})\cdot g
= f^-_{gx}\cdot g
= (f^-\cdot g)_x
= f^-_x,
$$
for every $g\in G_{\psi}$ and $x\in X$.
It follows that
\begin{align*}
\delta\left( (P\alpha)_x \right)(x_1,\ldots,x_k)
&= (P\alpha)_x(x_2,\ldots,x_k)
- (P\alpha)_x(\psi_{x_1}x_2,\ldots,\psi_{x_1}x_k)-\dots\\
&= (P\alpha)(x,x_2,\ldots,x_k)
- (P\alpha)(x,\psi_{x_1}x_2,\ldots,\psi_{x_1}x_k)-\dots\\
&=\B m(g\mapsto \alpha(gx,gx_2,\ldots,gx_k)
- \alpha(gx,g\psi_{x_1}x_2,\ldots,g\psi_{x_1}x_k) - \dots)\\
&=\B m(g\mapsto (\alpha\cdot g)(x,x_2,\ldots,x_k)
- (\alpha\cdot g)(x,\psi_{x_1}x_2,\ldots,\psi_{x_1}x_k) - \dots)\\
&=\B m(g\mapsto (\alpha\cdot g)_x(x_2,\ldots,x_k)
- (\alpha\cdot g)_x(\psi_{x_1}x_2,\ldots,\psi_{x_1}x_k) - \dots)\\
&=\B m(g\mapsto \delta\left( (\alpha\cdot g)_x \right)(x_1,\ldots,x_k))\\
&=\B m(g\mapsto f^-_x(x_1,\ldots,x_k))
=f^-_x(x_1,\ldots,x_k).
\end{align*}
Since $P\alpha$ is invariant we get that
$$
(\delta(P\alpha))_x = \delta((P\alpha)_x) = f^-_x,
$$
for every $x\in X$, which implies that $\delta(P\alpha) = f^-$.
Hence we can assume that the class $[f]$ is represented
by $f^+$.
Since $f^+_x = P(f_x)$ is an invariant $(k-1)$-cocycle for every $x\in X$,
we can consider $f^+$ as a cocycle of the form
$$
\sum_{s\in \pi_0(X,\triangleright)}I_{s}\otimes f^+_s
\in Z^1_b(X;\B R)\otimes Z^{k-1}_{b,\rm inv}(X;\B R)
\subseteq Z_{b,\rm inv}^k(X;\B R),
$$
where $I_s$ is the indicator function of $\{s\}$, that
is $I_{s}(t) = 1$ if $s=t$ and zero otherwise.
Proceeding by induction we get that every cohomology class
in $H^k_b(X;\B R)$ decomposes as a product of classes
of degree one, and hence is represented by a cocycle from
$\operatorname{Fun}_b(\pi_0(X)^k,\B R)$.
\end{proof}
\section{Introduction}
Two classical identities in the representation theory of real Lie groups are:
\begin{theorem}
For any integer $n \geq 0$ and partition $\lambda$ with at most $n$ parts, we have
\begin{align*}
\int_{O \in O(n)} s_{\lambda}(O) dO
= \begin{cases} 1, &\text{if all parts of $\lambda$ are even } \\
0, & \text{otherwise}
\end{cases}
\end{align*}
(where the integral is with respect to Haar measure on the orthogonal group). Similarly, for $n$ even, we have
\begin{align*}
\int_{S \in Sp(n)} s_{\lambda}(S) dS
=\begin{cases} 1, &\text{if all parts of $\lambda$ have even multiplicity} \\
0, & \text{otherwise}
\end{cases}
\end{align*}
(where the integral is with respect to Haar measure on the symplectic group).
\end{theorem}
Here $s_{\lambda}$ is the Schur function in $n$ variables indexed by the partition $\lambda$. Schur functions have an intimate connection to representation theory: they give the character of an irreducible representation of the unitary group, $U(n)$. In particular, the character's value on a matrix is given by evaluating the Schur function at the matrix's eigenvalues. Thus, the above identities encode the following facts: in the expansion of $s_{\lambda}$ into irreducible characters of $O(n)$ (resp. $Sp(n)$), the coefficient of the trivial character is zero unless all parts of $\lambda$ are even (resp. all parts of $\lambda$ have even multiplicity). These identities can be proved using the Gelfand pairs $(G,K) = (GL_{n}(\mathbb{R}), O(n))$ and $(GL_{n}(\mathbb{H}), U(n, \mathbb{H}))$ and the decomposition of the induced trivial representation into irreducible representations of $G$, see \cite{Mac}. For example, the orthogonal group identity follows from the structure result
\begin{align*}
e_{K} P(G) = P(K \backslash G) \cong \displaystyle \bigoplus_{l(\lambda) \leq n} F_{2\lambda}(V)
\end{align*}
(in the notation of \cite{Mac}) and the fact that the Schur function gives the character of a polynomial representation of $GL_{n}(\mathbb{R})$.
Note that using the eigenvalue densities for the orthogonal and symplectic groups, we may rephrase the above identities in terms of random matrix averages. For example, the symplectic integral above can be rephrased as
\begin{align*}
\frac{1}{2^{n}n!}\int_{T} s_{\lambda}(z_{1}, z_{1}^{-1}, z_{2}, z_{2}^{-1}, \dots, z_{n}, z_{n}^{-1}) \prod_{1 \leq i \leq n} |z_{i} - z_{i}^{-1}|^{2} \prod_{1 \leq i<j \leq n} |z_{i} + z_{i}^{-1} - z_{j} - z_{j}^{-1}|^{2} dT,
\end{align*}
where
\begin{align*}
T &= \{ (z_{1}, \dots, z_{n}) : |z_{1}| = \dots = |z_{n}| = 1 \} \\
dT &= \prod_{j} \frac{dz_{j}}{2 \pi \sqrt{-1} z_{j}}
\end{align*}
are the $n$-torus and Haar measure, respectively. Such identities, and their generalizations, have consequences outside symmetric function theory. For example, in their work dealing with symmetrized generalizations of the Hammersley process \cite{FR}, Forrester and Rains developed an $\alpha$-generalization of the above orthogonal group identity.
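As a quick illustration (our own sanity check, not part of the paper's argument), the $n=1$ case of the symplectic average can be verified by constant-term extraction in $z$, using $s_{(\lambda_1,\lambda_2)}(z,z^{-1})=z^{\lambda_1-\lambda_2}+z^{\lambda_1-\lambda_2-2}+\dots+z^{-(\lambda_1-\lambda_2)}$ and $|z-z^{-1}|^2 = 2-z^2-z^{-2}$ on the unit circle:

```python
from fractions import Fraction

def lmul(p, q):
    """Multiply Laurent polynomials given as {exponent: coefficient} dicts."""
    r = {}
    for a, ca in p.items():
        for b, cb in q.items():
            r[a + b] = r.get(a + b, Fraction(0)) + ca * cb
    return r

def schur_z(l1, l2):
    """s_(l1,l2)(z, 1/z) = z^d + z^(d-2) + ... + z^(-d), where d = l1 - l2."""
    d = l1 - l2
    return {j: Fraction(1) for j in range(-d, d + 1, 2)}

def sp2_average(l1, l2):
    """Constant term of s_(l1,l2)(z, 1/z) * (2 - z^2 - z^-2)/2, i.e. the
    n = 1 torus integral above."""
    weight = {2: Fraction(-1, 2), 0: Fraction(1), -2: Fraction(-1, 2)}
    return lmul(schur_z(l1, l2), weight).get(0, Fraction(0))
```

One finds, for instance, that the average is $1$ for $\lambda=(1,1)$ and $(2,2)$ (all parts of even multiplicity) and $0$ for $\lambda=(1)$, $(2)$, and $(3,1)$, as the theorem predicts.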
A natural question, then, is whether such identities admit a $q,t$ generalization to the level of Macdonald polynomials. In \cite{R}, a number of such identities were conjectured: that is, a suitable choice of density was suggested so that integrating Macdonald polynomials against it should vanish unless the partition is of the appropriate form, and such that when $q=t$, these identities become the known ones for Schur functions. In \cite{RV}, Rains and Vazirani developed Hecke algebra techniques which enabled them to prove many of these results. In fact, only Conjectures 3 and 5 of \cite{R} remain open.
An interesting subfamily of the Macdonald polynomials are the Hall--Littlewood polynomials which are obtained at $q=0$, see Chapter 3 of \cite{Mac}. Unfortunately, none of the above proofs work at this level: they involve $q$-shift operators, which do not behave well at $q=0$. In this paper, we develop a direct method for dealing with these cases. This method allows us to explicitly obtain the nonzero values as well as generalizations involving extra parameters. We also prove Conjectures 3 and 5 from \cite{R} for Hall--Littlewood polynomials.
There are several nice consequences. The first involves a (recent) identity discovered by Warnaar for Hall--Littlewood polynomials \cite{W}. He uses the Rogers--Szeg\H{o} polynomials to unify the Littlewood/Macdonald identities for Hall--Littlewood functions. We find a two-parameter integral identity and, using a method of Rains, we show that in the limit $n \rightarrow \infty$ it becomes Warnaar's identity. Thus, our identity may be viewed as a finite-dimensional analog of Warnaar's summation result. Another unexpected feature of the direct method we employ is an underlying Pfaffian structure in the orthogonal cases. It turns out that Pfaffians of suitable matrices nicely enumerate the nonzero values of these integrals. While Pfaffians are very common in Schur function identities (Schur functions are ratios of determinants), to our knowledge this is the first time they are appearing in the Hall--Littlewood context. Finally, the identities below involve Hall--Littlewood polynomials with a parameter $t$, but in many instances the evaluation of the integral produces a polynomial in $t^{2}$ or $\sqrt{t}$ (see for example, the symplectic and Kawanaka integrals, Corollary 6.3 and 6.4 respectively). Thus these identities may be viewed as quadratic transformations of Hall--Littlewood polynomials.
The outline of the paper is as follows. In the second section, we introduce some basic notation and review Hall--Littlewood polynomials. In the third section, we prove Hall--Littlewood orthogonality to illustrate our method of proof in a basic case. In the next section, we use Pfaffians and some technical arguments to prove $\alpha$ generalizations of the orthogonal integrals. In section 5, we use a Pieri rule to add one more parameter $\beta$ to these identities. In section 6, we discuss special cases of the $\alpha, \beta$ identity. In section 7, we prove that in the limit $n \rightarrow \infty$ of the $\alpha, \beta$ identity, we recover Warnaar's identity. Finally, in the last section, we prove some remaining vanishing results from \cite{RV} and \cite{R}.
We mention some related work in progress. As was discussed above, many of the integral identities for $t=0$ (the Schur case) follow from the theory of symmetric spaces, and thus have a representation theoretic significance. In \cite{MacP}, Macdonald shows that Hall--Littlewood polynomials (and their analogs for other classical root systems) arise as zonal spherical functions on $p$-adic reductive groups. Given this construction, it is natural to wonder whether our identities can be interpreted as $p$-adic analogs of the Schur cases. In a follow-up project, we will show that this is indeed the case: we give another proof via integrals over $p$-adic groups.
Finally, many of the integral identities of \cite{RV} involve Koornwinder polynomials, a $6$-parameter $BC_{n}$-symmetric family of Laurent polynomials that contain the Macdonald polynomials as suitable limits of the parameters. Just as in the Macdonald polynomial case, standard constructions via difference operators do not allow one to control the $q=0$ polynomials. The first step in obtaining an analog of the Hall--Littlewood polynomials is to produce a $q=0$ closed form. Such a formula is not known; in further work we will use orthogonality of Koornwinder polynomials to provide an explicit closed form \cite{VV}. We then use this result to prove the $q=0$ cases of the remaining identities in \cite{RV}.
\medskip
\noindent\textbf{Acknowledgements.} The author would like to thank her advisor, E. Rains, for suggesting the problems in this paper and for all the guidance and support he gave her during this project. She would also like to thank A. Borodin, P. Diaconis, P. Forrester, M. Vazirani, and O. Warnaar for many useful discussions and comments.
\section{Background and Notation}
We will briefly review Hall--Littlewood polynomials; we follow \cite{Mac}. We also set up the required notation.
Let $\lambda = (\lambda_{1}, \dots, \lambda_{n})$ be a partition, in which some of the $\lambda_{i}$ may be zero. In particular, note that $\lambda_{1} \geq \lambda_{2} \geq\cdots \geq \lambda_{n} \geq 0$. Let $l(\lambda)$, the length of $\lambda$, be the number of nonzero $\lambda_{i}$ and let $|\lambda|$, the weight of $\lambda$, be the sum of the nonzero $\lambda_{i}$. We will write $\lambda = \mu^{2}$ if there exists a partition $\mu$ such that $\lambda_{2i-1} = \lambda_{2i} = \mu_{i}$ (equivalently all parts of $\lambda$ occur with even multiplicity). Analogously, we write $\lambda = 2\mu$ if there exists a partition $\mu$ such that $\lambda_{i} = 2\mu_{i}$ (equivalently each part of $\lambda$ is even). Also let $m_{i}(\lambda)$ be the number of $\lambda_{j}$ equal to $i$ for each $i \geq 0$.
Recall the $t$-integer $[i] = [i]_{t} = (1-t^{i})/(1-t)$, as well as the $t$-factorial $[m]! = [m][m-1] \cdots [1]$, $[0]! = 1$. Let
\begin{align*}
\phi_{r}(t) &= (1-t)(1-t^{2}) \cdots (1-t^{r}),
\end{align*}
so that in particular $\phi_{r}(t)/(1-t)^{r} = [r]!$. Then we define
\begin{align*}
v_{\lambda}(t) &= \prod_{i \geq 0} \prod_{j=1}^{m_{i}(\lambda)} \frac{1- t^{j}}{1-t}
= \prod_{i \geq 0} \frac{\phi_{m_{i}(\lambda)}(t)}{(1-t)^{m_{i}(\lambda)}} = \prod_{i \geq 0} [m_{i}(\lambda)]!
\end{align*}
and
\begin{align*}
v_{\lambda+}(t) &= \prod_{i \geq 1} \prod_{j=1}^{m_{i}(\lambda)} \frac{1-t^{j}}{1-t}
= \prod_{i \geq 1} \frac{\phi_{m_{i}(\lambda)}(t)}{(1-t)^{m_{i}(\lambda)}} = \prod_{i \geq 1} [m_{i}(\lambda)]!,
\end{align*}
so that the first takes into account the zero parts, while the second does not. The Hall--Littlewood polynomial $P_{\lambda}(x_{1}, \dots, x_{n};t)$ indexed by $\lambda$ is defined to be
\begin{align*}
\frac{1}{v_{\lambda}(t)} \sum_{w \in S_{n}} w\Big( x^{\lambda} \prod_{1 \leq i<j \leq n} \frac{x_{i}-tx_{j}}{x_{i}-x_{j}} \Big),
\end{align*}
where we write $x^{\lambda}$ for $x_{1}^{\lambda_{1}} \cdots x_{n}^{\lambda_{n}}$ and $w$ acts on the subscripts of the $x_{i}$. The normalization $1/v_\lambda(t)$ has the effect of making the
coefficient of $x^\lambda$ equal to unity. (We will also write $P_{\lambda}^{(n)}(x;t)$ and use $P_{\lambda}(x^{(m)}, y^{(n)};t)$ to denote $P_{\lambda}(x_{1}, \dots, x_{m}, y_{1}, \dots, y_{n};t)$ in the final section). We define the polynomials $\{R_{\lambda}^{(n)}(x;t)\}$ by $R_{\lambda}^{(n)}(x;t)=v_{\lambda}(t)P_{\lambda}^{(n)}(x;t)$. For $w \in S_{n}$, we also define
\begin{equation} \label{Rpol}
R_{\lambda, w}^{(n)}(x;t) = w\Big( x^{\lambda} \prod_{1 \leq i<j \leq n} \frac{x_{i}-tx_{j}}{x_{i}-x_{j}} \Big),
\end{equation}
so that $R_{\lambda, w}^{(n)}(x;t)$ is the term of $R_{\lambda}^{(n)}(x;t)$ associated to the permutation $w$.
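The symmetrized-sum definition can be evaluated directly at rational points, which gives a convenient sanity check (our own sketch, not from the paper); exact rational arithmetic avoids any simplification of the ratios $(x_i-tx_j)/(x_i-x_j)$:

```python
from fractions import Fraction
from itertools import permutations
from collections import Counter

def hl_P(lam, xs, t):
    """P_lambda(x_1, ..., x_n; t) at rational points, straight from the
    symmetrized-sum definition; the x_i must be pairwise distinct."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))  # pad with zero parts
    # v_lambda(t) = prod over part sizes i of [m_i(lambda)]!_t
    v = Fraction(1)
    for m in Counter(lam).values():
        for j in range(1, m + 1):
            v *= (1 - t**j) / (1 - t)
    total = Fraction(0)
    for w in permutations(range(n)):
        term = Fraction(1)
        for k in range(n):
            term *= xs[w[k]] ** lam[k]          # w acting on x^lambda
        for a in range(n):
            for b in range(a + 1, n):           # prod over i < j after w
                term *= (xs[w[a]] - t * xs[w[b]]) / (xs[w[a]] - xs[w[b]])
        total += term
    return total / v
```

For $n=2$ this reproduces, e.g., $P_{(1)}=x_1+x_2$, $P_{(1,1)}=x_1x_2$, $P_{(2)}=m_{(2)}+(1-t)m_{(1,1)}$, and at $t=0$ the Schur function.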
There are two important degenerations of the Hall--Littlewood symmetric functions: at $t=0$, we recover the Schur functions $s_{\lambda}(x)$ and at $t=1$ the monomial symmetric functions $m_{\lambda}(x)$. We remark that the Macdonald polynomials $P_{\lambda}(x;q,t)$ do not have poles at $q=0$, so there is no obstruction to specializing $q$ to zero; in fact we obtain the Hall--Littlewood polynomials (see \cite{Mac}, Ch. 6). Similarly, when $q=t$ (or $q=0$ then $t=0$), $P_{\lambda}(x;q,t)$ reduces to $s_{\lambda}(x)$.
Let
\begin{align*}
b_{\lambda}(t) = \prod_{i \geq 1} \phi_{m_{i}(\lambda)}(t)
= v_{\lambda+}(t) (1-t)^{l(\lambda)}.
\end{align*}
Then we let $Q_{\lambda}(x;t)$ be multiples of the $P_{\lambda}(x;t)$:
\begin{align*}
Q_{\lambda}(x;t) &= b_{\lambda}(t) P_{\lambda}(x;t);
\end{align*}
these form the adjoint basis with respect to the $t$-analog of the Hall inner product. With this notation the Cauchy identity for Hall--Littlewood functions is
\begin{equation} \label{Cauchyid}
\sum_{\lambda} P_{\lambda}(x;t) Q_{\lambda}(y;t) = \prod_{i,j \geq 1} \frac{1-tx_{i}y_{j}}{1-x_{i}y_{j}}.
\end{equation}
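In the simplest specialization, with a single $x$ and a single $y$, only partitions with at most one part contribute, $P_{(k)}(x;t)=x^k$ and $Q_{(k)}(x;t)=(1-t)x^k$ for $k\geq 1$, and the identity reduces to a geometric series. The following check (ours, not part of the text) verifies this truncated form exactly:

```python
from fractions import Fraction

# One-variable specialization of the Cauchy identity:
#   1 + sum_{k>=1} (1-t)(xy)^k = (1 - t*x*y)/(1 - x*y).
t, x, y = Fraction(1, 3), Fraction(1, 7), Fraction(2, 5)
N = 40  # truncation order
lhs = 1 + sum((1 - t) * (x * y) ** k for k in range(1, N + 1))
rhs = (1 - t * x * y) / (1 - x * y)
# The exact truncation error is the geometric tail -(1-t)(xy)^(N+1)/(1-xy).
error = lhs - rhs
```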
We recall the definition of Rogers--Szeg\H{o} polynomials, which appears in Sections 5--7. Let $m$ be a nonnegative integer. Then we let $H_{m}(z;t)$ denote the Rogers--Szeg\H{o} polynomial (see \cite{A}, Ch. 3, Examples 3--9)
\begin{equation} \label{RSpol}
H_{m}(z;t) = \sum_{i=0}^{m} z^{i} {\qbinom{m}{i}}_{t},
\end{equation}
where
\begin{equation*}
{\qbinom{m}{i}}_{t} = \begin{cases}
\frac{[m]!}{[m-i]![i]!}, & \text{ if } m \geq i \geq 0 \\
0, &\text{ otherwise}
\end{cases}
\end{equation*}
is the $t$-binomial coefficient. It can be verified that the Rogers--Szeg\H{o} polynomials satisfy the following second-order recurrence:
\begin{align*}
H_{m}(z;t) &= (1+z)H_{m-1}(z;t) - (1-t^{m-1})zH_{m-2}(z;t).
\end{align*}
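The defining sum and the recurrence are easy to confront numerically (our check, with exact rational arithmetic):

```python
from fractions import Fraction

def qfact(m, t):
    """[m]!_t = prod_{j=1}^{m} (1 - t^j)/(1 - t)."""
    r = Fraction(1)
    for j in range(1, m + 1):
        r *= (1 - t**j) / (1 - t)
    return r

def qbinom(m, i, t):
    """t-binomial coefficient [m choose i]_t, zero outside 0 <= i <= m."""
    if not 0 <= i <= m:
        return Fraction(0)
    return qfact(m, t) / (qfact(i, t) * qfact(m - i, t))

def H(m, z, t):
    """Rogers-Szego polynomial H_m(z;t) from the defining sum."""
    return sum(z**i * qbinom(m, i, t) for i in range(m + 1))
```

One checks, for example, that $H_0=1$, $H_1=1+z$, $H_2=1+(1+t)z+z^2$, and that the second-order recurrence holds for small $m$ at rational test points.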
Also, we define the symmetric $q=0$ Selberg density \cite{RV}:
\begin{align*}
\tilde \Delta_{S}^{(n)}(x;t) &= \prod_{1 \leq i \neq j \leq n} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}}
\end{align*}
and the symmetric Koornwinder density \cite{K}:
\begin{equation} \label{Kd}
\tilde \Delta_{K}^{(n)}(x;a,b,c,d;t) = \frac{1}{2^{n}n!} \prod_{1 \leq i \leq n} \frac{1-x_{i}^{\pm 2}}{(1-ax_{i}^{\pm 1})(1-bx_{i}^{\pm 1})(1-cx_{i}^{\pm 1})(1-dx_{i}^{\pm 1})} \prod_{1 \leq i<j \leq n} \frac{1-x_{i}^{\pm 1}x_{j}^{\pm 1}}{1-tx_{i}^{\pm 1}x_{j}^{\pm 1}},
\end{equation}
where we write $1-x_{i}^{\pm 2}$ for the product $(1-x_{i}^{2})(1-x_{i}^{-2})$ and $1-x_{i}^{\pm 1}x_{j}^{\pm 1}$ for $(1-x_{i}x_{j})(1-x_{i}^{-1}x_{j}^{-1})(1-x_{i}^{-1}x_{j})(1-x_{i}x_{j}^{-1})$ etc. For convenience, we will write $\tilde \Delta_{S}^{(n)}$ and $\tilde \Delta_{K}^{(n)}(a,b,c,d)$ with the assumption that these densities are in $x_{1}, \dots, x_{n}$ with parameter $t$ when it is clear.
We recall some notation for hypergeometric series from \cite{RV} and \cite{R}. We define the $q$-symbol
\begin{align*}
(a;q) &= \prod_{k \geq 0} (1-aq^{k})
\end{align*}
and $(a_{1}, a_{2}, \dots, a_{l};q) = (a_{1};q)(a_{2};q) \cdots (a_{l};q)$. Also, let
\begin{equation*}
(a;q)_{n} = \prod_{j=0}^{n-1} (1-aq^{j})
\end{equation*}
for $n>0$ and $(a;q)_{0} = 1$. We also define the $C$-symbols, which appear in the identities of \cite{RV}. Let
\begin{align*}
C^{0}_{\mu}(x;q,t) &= \prod_{1 \leq i \leq l(\mu)} \frac{(t^{1-i}x;q)}{(q^{\mu_{i}}t^{1-i}x;q)} \\
C^{-}_{\mu}(x;q,t) &= \prod_{1 \leq i \leq l(\mu)} \frac{(x;q)}{(q^{\mu_{i}}t^{l(\mu)-i}x;q)} \prod_{1 \leq i<j \leq l(\mu)} \frac{(q^{\mu_{i}-\mu_{j}}t^{j-i}x;q)}{(q^{\mu_{i}-\mu_{j}}t^{j-i-1}x;q)} \\
C^{+}_{\mu}(x;q,t) &= \prod_{1 \leq i \leq l(\mu)} \frac{(q^{\mu_{i}}t^{2-l(\mu)-i}x;q)}{(q^{2\mu_{i}}t^{2-2i}x;q)} \prod_{1 \leq i<j \leq l(\mu)} \frac{(q^{\mu_{i}+\mu_{j}}t^{3-j-i}x;q)}{(q^{\mu_{i}+\mu_{j}}t^{2-j-i}x;q)}.
\end{align*}
We note that $C_{\mu}^{0}(x;q,t)$ is the $q,t$-shifted factorial. As before, we extend this by $C^{0, \pm}_{\mu}(a_{1}, a_{2}, \dots, a_{l};q) = C^{0, \pm}_{\mu}(a_{1};q) \cdots C^{0, \pm}_{\mu}(a_{l};q)$.
We note that for $q=0$ we have
\begin{align*}
C^{0}_{\mu}(x;0,t) &= \prod_{1 \leq i \leq l(\mu)} (1-t^{1-i}x) \\
C^{-}_{\mu}(t;0,t) &= (1-t)^{l(\mu)}v_{\mu+}(t)\\
C^{+}_{\mu}(x;0,t) &= 1.
\end{align*}
Finally, we explain some notation involving permutations. Let $w \in S_{n}$ act on the variables $z_{1}, \dots, z_{n}$ by
\begin{align*}
w(z_{1} \cdots z_{n}) = z_{w(1)} \cdots z_{w(n)}
\end{align*}
as in the definition of Hall--Littlewood polynomials above. We view the permutation $w$ as this string of variables. For example, the condition ``$z_{i}$ is in the $k$-th position of $w$'' means that $w(k) = i$. Also we write
\begin{align*}
``z_{i} \prec_{w} z_{j}''
\end{align*}
if $i = w(i')$ and $j= w(j')$ for some $i' < j'$, i.e., $z_{i}$ appears to the left of $z_{j}$ in the permutation representation $z_{w(1)} \cdots z_{w(n)}$. For $w \in S_{2n}$, we use $w(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1})$ to represent $z_{w(1)} \cdots z_{w(2n)}$, with $z_{i} = x_{i}$ for $1 \leq i \leq n$ and $z_{j} = x_{j-n}^{-1}$ for $n+1 \leq j \leq 2n$.
\section{Hall--Littlewood Orthogonality}
It is a well known result that Hall--Littlewood polynomials are orthogonal with respect to the density $\tilde \Delta_{S}$. We prove this result using our method below, to illustrate the technique in a simple case.
\begin{theorem}\label{orthog}
We have the following orthogonality relation for Hall--Littlewood polynomials:
\begin{equation*}
\int_{T} P_{\lambda}(x_{1}, \dots, x_{n};t) P_{\mu}(x_{1}^{-1}, \dots, x_{n}^{-1};t) \tilde \Delta_{S}^{(n)}(x;t) dT = \delta_{\lambda \mu} \frac{n!}{v_{\mu}(t)}
\end{equation*}
\end{theorem}
\begin{proof}
Note first that by the definition of Hall--Littlewood polynomials, the LHS is a sum of $(n!)^{2}$ integrals in bijection with $S_{n} \times S_{n}$. Now, since the integral is invariant under inverting all variables, we may restrict to the case where $\lambda \geq \mu$ in the reverse lexicographic ordering (we assume this throughout). We will show that each of these terms vanish unless $\lambda = \mu$, and this argument will allow us to compute the normalization in the case $\lambda = \mu$. By symmetry and (\ref{Rpol}), we have
\begin{equation*}
\int_{T} P_{\lambda}^{(n)}(x;t) P_{\mu}^{(n)}(x^{-1};t) \tilde \Delta_{S}^{(n)} dT = \frac{n!}{v_{\lambda}(t)v_{\mu}(t)} \sum_{\rho \in S_{n}} \int_{T} R_{\lambda, \text{id}}^{(n)}(x;t) R_{\mu, \rho}^{(n)}(x^{-1};t) \tilde \Delta_{S}^{(n)} dT.
\end{equation*}
\begin{claim}
We have the term-evaluation
\begin{equation*}
\int_{T} R_{\lambda, \text{id}}^{(n)}(x;t) R_{\mu, \rho}^{(n)}(x^{-1};t) \tilde \Delta_{S}^{(n)} dT =
t^{i(\rho)}
\end{equation*}
if $x_{1}^{\lambda_{1}} \cdots x_{n}^{\lambda_{n}}x_{\rho(1)}^{-\mu_{1}} \cdots x_{\rho(n)}^{-\mu_{n}} =1$, and is otherwise equal to 0. Here $i(\rho)$ is the number of inversions of $\rho$ with respect to the permutation $x_{1}^{-1} \cdots x_{n}^{-1}$.
\end{claim}
\noindent Note that $i(\rho)$ is the Coxeter length and recall the distribution of this statistic: $\sum_{\rho} t^{i(\rho)} = [n]!$.
To prove the claim, we use induction on $n$. Note first that for $n=1$, the only term is $\int x_{1}^{\lambda_{1}} x_{1}^{-\mu_{1}} dT$, which vanishes unless $\lambda_{1} = \mu_{1}$. Now suppose the result holds for $n-1$ variables. One can compute, by integrating with respect to $x_{1}$ in the iterated integral, that the LHS above is equal to
\begin{equation*}
\int_{T_{n-1}} \Big( \int_{T_{1}} x_{1}^{\lambda_{1}-\mu_{\rho^{-1}(1)}} \prod_{x_{j}^{-1} \prec_{\rho} x_{1}^{-1}} \frac{tx_{j} - x_{1}}{x_{j} - tx_{1}}\frac{dx_{1}}{2\pi \sqrt{-1}x_{1}}\Big) R_{\widehat{\lambda}, \widehat{\text{id}}}^{(n-1)}(x;t) R_{\widehat{\mu}, \widehat{\rho}}^{(n-1)}(x^{-1};t) \tilde \Delta_{S}^{(n-1)}(x;t) dT,
\end{equation*}
where
\begin{align*}
\widehat{\text{id}} &= \text{id} \text{ with $x_{1}$ deleted} \\
\widehat{\rho} &= \rho \text{ with $x_{1}^{-1}$ deleted} \\
\widehat{\lambda} &= \lambda \text{ with $\lambda_{1}$ deleted} \\
\widehat{\mu} &= \mu \text{ with $\mu_{\rho^{-1}(1)}$ deleted}.
\end{align*}
Recall that $\lambda_{1} \geq \mu_{1} \geq \mu_{i}$ for all $1 \leq i \leq n$. Thus, the inner integral in $x_{1}$ is zero if $\lambda_{1} > \mu_{\rho^{-1}(1)}$ and is $t^{|\{ j: x_{j}^{-1} \prec_{\rho} x_{1}^{-1} \}|}$ if $\lambda_{1} = \mu_{\rho^{-1}(1)}$. In the latter case, note that $\widehat{\lambda} \geq \widehat{\mu}$, so we may use the induction hypothesis on the resulting $(n-1)$-dimensional integral, and combining this with the contribution from $x_{1}$ gives the result of the claim.
Note that the claim implies each term is zero if $\lambda \neq \mu$, so the entire integral is zero. Finally, we use the claim to compute the normalization value in the case $\lambda = \mu$. By the above remarks, we have
\begin{equation*}
\int_{T} P_{\lambda}^{(n)}(x;t) P_{\mu}^{(n)}(x^{-1};t) \tilde \Delta_{S}^{(n)} dT = \frac{n!}{v_{\mu}(t)^{2}}\sum_{\substack{\rho \in S_{n} : \\ x_{1}^{\lambda_{1}} \cdots x_{n}^{\lambda_{n}}x_{\rho(1)}^{-\mu_{1}} \cdots x_{\rho(n)}^{-\mu_{n}}=1}} t^{i(\rho)}
\end{equation*}
Note that the permutations in the index of the sum are in statistic-preserving bijection with $S_{m_{0}(\mu)} \times S_{m_{1}(\mu)} \times \cdots$ so, using the comment immediately following the Claim, the above expression is equal to
\begin{equation*}
\frac{n!}{v_{\mu}(t)^{2}} \sum_{\rho \in S_{m_{0}(\mu)} \times S_{m_{1}(\mu)} \times \cdots} t^{i(\rho)} = \frac{n!}{v_{\mu}(t)^{2}} \prod_{i \geq 0} [m_{i}(\mu)]! = \frac{n!}{v_{\mu}(t)},
\end{equation*}
as desired.
\end{proof}
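The normalization step above rests on the classical distribution of the inversion statistic, $\sum_{\rho\in S_n} t^{i(\rho)}=[n]!$, which can be checked directly (our own verification):

```python
from fractions import Fraction
from itertools import permutations

def inversions(w):
    """Number of pairs i < j with w[i] > w[j] (the Coxeter length)."""
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

def t_factorial(n, t):
    """[n]! = [n][n-1]...[1] with [j] = (1 - t^j)/(1 - t)."""
    r = Fraction(1)
    for j in range(1, n + 1):
        r *= (1 - t**j) / (1 - t)
    return r

t = Fraction(3, 11)
checks = [sum(t ** inversions(w) for w in permutations(range(n))) == t_factorial(n, t)
          for n in range(1, 7)]
```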
\section{$\alpha$ version}
In this section, we prove the orthogonal group integrals with an extra parameter $\alpha$. This gives four identities: one for each component of $O(l)$, with $l$ of either parity. First, we use a result of Gustafson \cite{G} to compute some normalizations that will be used throughout the paper.
\begin{proposition} \label{normalizations}
We have the following normalizations:
\begin{enumerate}
\item (\text{symplectic})
\begin{align*}
\int_{T} \tilde \Delta_{K}^{(n)}(x;\pm \sqrt{t},0,0;t) dT &= \frac{(1-t)^{n}}{(t^{2};t^{2})_{n}}
\end{align*}
\item (\text{Kawanaka})
\begin{align*}
\int_{T} \tilde \Delta_{K}^{(n)}(x;1, \sqrt{t}, 0,0;t) dT &= \frac{(1-t)^{n}}{(\sqrt{t};\sqrt{t})_{2n}}
\end{align*}
\item ($O^{+}(2n)$)
\begin{align*}
\int_{T} \tilde \Delta_{K}^{(n)}(x;\pm 1, \pm \sqrt{t};t) dT &= \frac{(1-t)^{n}}{2(t;t)_{2n}}
\end{align*}
\item ($O^{-}(2n)$)
\begin{align*}
\int_{T} \tilde \Delta_{K}^{(n-1)}(x;\pm t, \pm \sqrt{t};t) dT &= \frac{(1-t)^{n-1}}{(t^{3};t)_{2n-2}}
\end{align*}
\item ($O^{+}(2n+1)$)
\begin{align*}
\int_{T} \tilde \Delta_{K}^{(n)}(x;t,-1, \pm \sqrt{t};t) dT &= \frac{(1-t)^{n+1}}{(t;t)_{2n+1}}
\end{align*}
\item ($O^{-}(2n+1)$)
\begin{align*}
\int_{T} \tilde \Delta_{K}^{(n)}(x;1,-t, \pm \sqrt{t};t) dT &= \frac{(1-t)^{n+1}}{(t;t)_{2n+1}}.
\end{align*}
\end{enumerate}
\end{proposition}
We omit the proof, but in all cases it follows from setting $q=0$ and the appropriate values of $(a,b,c,d)$ in the integral evaluation:
\begin{align*}
\int_{T} \tilde \Delta_{K}^{(n)}(x;a,b,c,d;q,t) dT &= \prod_{0 \leq j<n} \frac{(t,t^{2n-2-j}abcd;q)}{(t^{j+1},t^{j}ab, t^{j}ac, t^{j}ad, t^{j}bc, t^{j}bd, t^{j}cd;q)},
\end{align*}
which may be found in \cite{G}.
We remark that at $t=0$ the above densities have special significance. In particular, (i) is the eigenvalue density of the symplectic group and (iii)--(vi) are the eigenvalue densities of $O^{+}(2n), O^{-}(2n), O^{+}(2n+1)$ and $O^{-}(2n+1)$ (in the orthogonal group case, the density depends on the component of the orthogonal group as well as whether the dimension is odd or even). The density in (ii) appears in Corollary \ref{Kawanaka}, and that result corresponds to a summation identity of Kawanaka \cite{Ka1}.
In this section, we want to use a technique similar to the one used to prove Hall--Littlewood orthogonality. Namely, we want to break up the integral into a sum of terms, one for each permutation, and study the resulting term integral. The obstruction to this approach is that in many cases the poles lie on the contour, i.e., occur at $\pm 1$, so the pieces of the integral are not well-defined. However, since the overall integral does not have singularities, we may use the principal value integral which we denote by P.V. (see \cite{Kan}, Section 8.3). We first prove some results involving the principal value integrals.
\begin{lemma} \label{pvl1}
Let $f(z)$ be a function in $z$ such that $zf(z)$ is holomorphic in a neighborhood of the unit disk. Then
$$
\PV \int_{T} f(z) \frac{1}{1-z^{-2}} dT = \frac{f(1) + f(-1)}{4}.
$$
\end{lemma}
\begin{proof}
We have
\begin{multline*}
\PV \frac{1}{2\pi \sqrt{-1}} \int_{|z| =1} f(z) \frac{1}{1-z^{-2}} \frac{1}{z} dz
= \lim_{\epsilon \rightarrow 0^{+}} \frac{1}{2} \Biggl[ \frac{1}{2\pi \sqrt{-1}} \int_{|z| = 1-\epsilon} zf(z) \frac{1}{z^{2} - 1}dz \\
+ \frac{1}{2\pi \sqrt{-1}} \int_{|z| = 1 + \epsilon} zf(z) \frac{1}{z^{2} - 1} dz \Biggr]
\end{multline*}
But now as $zf(z)$ is holomorphic in a neighborhood of the disk, and the singularities of $1/(z^{2}-1)$ lie outside of the disk, the first integral is zero by Cauchy's theorem. Using the residue theorem for the second integral (it has simple poles at $\pm 1$) gives
\begin{align*}
\lim_{\epsilon \rightarrow 0} \frac{1}{2} \Biggl[ \text{Res}_{z=1} \frac{zf(z)}{(z-1)(z+1)} + \text{Res}_{z=-1} \frac{zf(z)}{(z-1)(z+1)} \Biggr]
= \frac{1}{2}\Biggl[\frac{f(1)}{2} + \frac{f(-1)}{2}\Biggr]
= \frac{1}{4}\Big[f(1) + f(-1)\Big].
\end{align*}
\end{proof}
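Lemma \ref{pvl1} can also be confirmed numerically (our sketch, not part of the argument): with an even number of midpoint quadrature nodes on the circle, the poles at $z=\pm 1$ are straddled symmetrically, so their divergent contributions cancel in pairs and the midpoint sum converges to the principal value:

```python
import cmath
import math

def pv_average(f, N=4000):
    """Midpoint-rule approximation of the principal value of the integral
    of f(z)/(1 - z^(-2)) over the unit circle against dT.  N must be even
    so the nodes exp(i(k + 1/2) 2 pi / N) avoid and symmetrically straddle
    the poles at z = 1 and z = -1."""
    total = 0j
    for k in range(N):
        z = cmath.exp(1j * (k + 0.5) * 2 * math.pi / N)
        total += f(z) / (1 - z**-2)
    return total / N

# Lemma: the value is (f(1) + f(-1))/4 when z f(z) is holomorphic near the
# closed unit disk.  E.g. f(z) = z^2 should give 1/2, f(z) = z should give 0.
val = pv_average(lambda z: z**2)
```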
\begin{lemma} \label{pvl2}
Let $p$ be a function in $x_{1}, \dots, x_{n}$ such that $x_{i}p$ is holomorphic in $x_{i}$ in a neighborhood of the unit disk for all $1 \leq i \leq n$ and $p( \pm 1, \dots, \pm 1) = 0$ for all $2^{n}$ combinations. Let $\Delta$ be a function in $x_{1}, \dots, x_{n}$ such that $\Delta( \pm 1, \dots, \pm 1, x_{i+1}, \dots, x_{n})$ is holomorphic in $x_{i+1}$ in a neighborhood of the unit disk for all $0 \leq i \leq n-1$ (again for all $2^{i}$ combinations). Then
$$
\PV \int_{T} p \cdot \Delta \cdot \prod_{1 \leq i \leq n} \frac{1}{1-x_{i}^{-2}} dT = 0.
$$
\end{lemma}
\begin{proof}
We give a proof by induction on $n$.
For $n=1$, since $x_{1} \cdot p \cdot \Delta$ is holomorphic in $x_{1}$ in a neighborhood of the unit disk, we may use Lemma \ref{pvl1}:
$$
\PV \int_{T} p \cdot \Delta \cdot \frac{1}{1-x_{1}^{-2}} dT = \frac{1}{4}[ p(1)\Delta(1) + p(-1)\Delta(-1)].
$$
But then $p(1) = p(-1) = 0$ by assumption, so the integral is zero as desired.
Now suppose the result holds in the case of $n-1$ variables. Consider the $n$ variable case, and let $p, \Delta$ in $x_{1}, \dots, x_{n}$ satisfy the above conditions. Integrate first with respect to $x_{1}$ and note that $x_{1} \cdot p \cdot \Delta$ is holomorphic in $x_{1}$ so we can apply Lemma \ref{pvl1}:
\begin{multline*}
\PV \int_{T} p \cdot \Delta \cdot \prod_{1 \leq i \leq n} \frac{1}{1-x_{i}^{-2}} dT = \frac{1}{4} \PV \int_{T_{n-1}} p(1,x_{2}, \dots, x_{n})\Delta(1,x_{2}, \dots,x_{n}) \prod_{2 \leq i \leq n} \frac{1}{1-x_{i}^{-2}} dT \\
+ \frac{1}{4} \PV \int_{T_{n-1}} p(-1, x_{2}, \dots, x_{n})\Delta(-1, x_{2}, \dots, x_{n}) \prod_{2 \leq i \leq n} \frac{1}{1-x_{i}^{-2}} dT.
\end{multline*}
But now the pairs $p(1,x_{2}, \dots, x_{n}), \Delta(1,x_{2},\dots,x_{n})$ and $p(-1,x_{2},\dots,x_{n}),$
$\Delta(-1,x_{2},\dots,x_{n})$ satisfy the conditions of the theorem for $n-1$ variables $x_{2}, \dots, x_{n}$, so by the induction hypothesis each of the two integrals is zero, so the total integral is zero.
\end{proof}
For this section, we let $\rho_{2n} = (1, 2, \dots, 2n)$. We also let $1^{k} = (1,1, \dots, 1)$ with exactly $k$ ones. As above, we will work with principal value integrals as necessary; for simplicity, we suppress the notation P.V. in what follows.
\begin{theorem}\label{ape}
Let $l(\lambda) \leq 2n$. We have the following integral identity for $O^{+}(2n)$:
\begin{multline*}
\frac{1}{\int \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) dT} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1})dT\\
= \frac{\phi_{2n}(t)}{v_{\lambda}(t) (1-t)^{2n}} \Big[ (-\alpha)^{\# \text{ of odd parts of $\lambda$}} + (-\alpha)^{\# \text{ of even parts of $\lambda$}}\Big] \\
= \frac{[2n]!}{v_{\lambda}(t)} \Big[ (-\alpha)^{\# \text{ of odd parts of $\lambda$}} + (-\alpha)^{\# \text{ of even parts of $\lambda$}}\Big].
\end{multline*}
\end{theorem}
\begin{proof}
We will first show the following:
\begin{align*}
\int R_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1})dT
= \frac{1}{2^{n}(1-t)^{n}}\mathrm{Pf}[a_{j,k}]^{\lambda},
\end{align*}
where $\mathrm{Pf}$ denotes the Pfaffian and the $2n \times 2n$ antisymmetric matrix $[a_{j,k}]^{\lambda}$ is defined by
\begin{align*}
a_{j,k}^{\lambda} &= (1+\alpha^{2})\chi_{(\lambda_{j}-j)-(\lambda_{k}-k) \text{ odd}} + 2(-\alpha)\chi_{(\lambda_{j}-j)-(\lambda_{k}-k) \text{ even}},
\end{align*}
for $1 \leq j<k \leq 2n$.
First, note that by symmetry we can rewrite the above integral as $2^{n} n!$ times the sum over all matchings $w$ of $x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}$, where a matching is a permutation in $S_{2n}$ such that $x_{i}$ occurs to the left of $x_{i}^{-1}$ and $x_{i}$ occurs to the left of $x_{j}$ for $1 \leq i <j \leq n$. In particular, $x_{1}$ occurs first. Thus, we have
\begin{multline*}
\int R_{\lambda}^{(2n)}(x^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1; \pm \sqrt{t}) \prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1})dT\\
= 2^{n}n! \sum_{w} \int R_{\lambda, w}^{(2n)}(x^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1; \pm \sqrt{t}) \prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1})dT,
\end{multline*}
where the sum is over matchings $w$ in $S_{2n}$.
We introduce some notation for a matching $w \in S_{2n}$. We write $w = \{ (i_{1}, i_{1}'), \dots, (i_{n}, i_{n}') \}$ to indicate that $x_{k}$ occurs in position $i_{k}$ and $x_{k}^{-1}$ occurs in position $i_{k}'$ for all $1 \leq k \leq n$. Clearly we have $i_{k} < i_{k}'$ for all $k$ and $i_{j} < i_{k}$ for all $j<k$.
\begin{claim}\label{apet}
Let $\lambda = (\lambda_{1}, \dots, \lambda_{2n})$ with $\lambda_{1} \geq \lambda_{2} \geq \cdots \geq \lambda_{2n} \in \mathbb{Z}$. Then we have the following term-evaluation:
\begin{align*}
2^{n}n! \PV \int_{T} R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT &= \frac{\epsilon(w)}{2^{n}(1-t)^{n}} \prod_{1 \leq k \leq n} a_{i_{k},i_{k}'}^{\lambda},
\end{align*}
where
$\epsilon(w)$ is the sign of $w$ and $a_{i_{k},i_{k}'}^{\lambda}$ is the $(i_{k}, i_{k}')$ entry of the matrix $[a_{j,k}]^{\lambda}$. In particular, the term integral only depends on the parity of the parts $\lambda_{1}, \dots, \lambda_{2n}$.
\end{claim}
Let $\mu$ be such that $\lambda = \mu + \rho_{2n}$. We give a proof by induction on $n$, the number of variables. For $n=1$, there is only one matching---in particular, $x_{1}^{-1}$ must occur in position $2$. The (principal value) integral is
\begin{multline*}
\int_{T} x_{1}^{\lambda_{1} - \lambda_{2}} \frac{(1-tx_{1}^{-2})}{(1-x_{1}^{-2})} \frac{(1-\alpha x_{1})(1-\alpha x_{1}^{-1})}{(1-tx_{1}^{2})(1-tx_{1}^{-2})} dT
= \int_{T} x_{1}^{\lambda_{1} - \lambda_{2}} \frac{(1-\alpha x_{1})(1-\alpha x_{1}^{-1})}{(1-x_{1}^{-2})(1-tx_{1}^{2})} dT \\
= \int_{T} x_{1}^{\lambda_{1}-\lambda_{2}} \frac{(1 + \alpha^{2}) - \alpha(x_{1} + x_{1}^{-1})}{(1-tx_{1}^{2})(1-x_{1}^{-2})} dT
\end{multline*}
and $\lambda_{1}-\lambda_{2} \geq 0$. Note that the conditions for Lemma \ref{pvl1} are satisfied. Applying that result gives that the value of the integral is $2(-\alpha)/(2(1-t))$ if $\lambda_{1}-\lambda_{2}$ is odd, and $(1+\alpha^{2})/(2(1-t))$ if $\lambda_{1}-\lambda_{2}$ is even, which agrees with the above claim.
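Spelled out, this matches the claim as follows: since $(\lambda_{1}-1)-(\lambda_{2}-2) = \lambda_{1}-\lambda_{2}+1$, the relevant matrix entry is
$$
a_{1,2}^{\lambda} = (1+\alpha^{2})\chi_{\lambda_{1}-\lambda_{2}+1 \text{ odd}} + 2(-\alpha)\chi_{\lambda_{1}-\lambda_{2}+1 \text{ even}},
$$
and $\epsilon(w) = 1$ for the unique matching, so the claimed value $\frac{\epsilon(w)}{2(1-t)} a_{1,2}^{\lambda}$ is $2(-\alpha)/(2(1-t))$ when $\lambda_{1}-\lambda_{2}$ is odd and $(1+\alpha^{2})/(2(1-t))$ when it is even.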
Now suppose the result is true for up to $n-1$ variables and consider the $n$ variable case. Note first that $i_{1} = 1$. One can compute, by combining terms involving $x_{1}$ in the iterated integral, that
\begin{multline*}
2^{n}n! \int R_{\lambda, w}^{(2n)}(x^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT \\
=\int_{T_{n-1}} \Big( \int_{T_{1}} x_{1}^{\lambda_{1}-\lambda_{i_{1}'}} \frac{(1-\alpha x_{1})(1-\alpha x_{1}^{-1})}{(1-tx_{1}^{2})(1-x_{1}^{-2})} \prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{1}^{-1} \prec_{w} x_{j}^{-1} }} \frac{(t-x_{1}x_{j})}{(1-tx_{1}x_{j})}\\
\prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{j}^{-1} \prec_{w} x_{1}^{-1}}}\frac{(t-x_{1}x_{j}^{-1})(t-x_{1}x_{j})}{(1-tx_{1}x_{j}^{-1})(1-tx_{1}x_{j})} dT \Big) F_{\widehat{\lambda}, \tilde w} dT,
\end{multline*}
where
\begin{equation*}
F_{\widehat \lambda, \tilde w} = 2^{n-1}(n-1)! R_{\widehat \lambda, \tilde w}(x_{2}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=2}^{n} (1-\alpha x_{i}^{\pm 1})
\end{equation*}
and $\widehat \lambda$ is $\lambda$ with parts $\lambda_{1}, \lambda_{i_{1}'}$ deleted; $\tilde w$ is $w$ with $x_{1}, x_{1}^{-1}$ deleted.
In particular, the conditions for Lemma \ref{pvl1} are satisfied for the inner integral in $x_{1}$. Note that the terms
$$
\frac{(t-x_{1}x_{i})}{(1-tx_{1}x_{i})}\frac{(t-x_{1}x_{i}^{-1})}{(1-tx_{1}x_{i}^{-1})}
$$
give $1$ when evaluated at $x_{1} = \pm 1$, so the above integral evaluates to
\begin{multline*}
\frac{1}{4(1-t)} \int_{T_{n-1}} \Bigg[ F_{\widehat \lambda, \tilde w} \cdot (1 + \alpha^{2} - 2\alpha) \Big(\prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{1}^{-1} \prec_{w} x_{j}^{-1} }} \frac{t-x_{j}}{1-tx_{j}} \Big)\\
+ F_{\widehat \lambda, \tilde w} \cdot(1 + \alpha^{2} + 2\alpha)(-1)^{\lambda_{1}-\lambda_{i_{1}'}} \Big(\prod_{\substack{x_{j}: \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{1}^{-1} \prec_{w} x_{j}^{-1}}}\frac{t+x_{j}}{1+tx_{j}} \Big) \Bigg] dT.
\end{multline*}
But now since $(t-x_{i})/(1-tx_{i})$ and $(t+x_{i})/(1+tx_{i})$ are power series in $x_{i}$, we may apply the inductive hypothesis to each part of the new integral: since the term integral depends only on the parity of the parts, we may reduce the exponents on $x_{i}$ modulo $2$. We get
\begin{multline*}
\frac{1}{4(1-t)} \int_{T_{n-1}} \Bigg[ F_{\widehat \lambda, \tilde w} \cdot (1 + \alpha^{2} - 2\alpha) \Big(\prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{1}^{-1} \prec_{w} x_{j}^{-1} }} (-x_{j})\Big) \\
+ F_{\widehat \lambda, \tilde w} \cdot (1 + \alpha^{2} + 2\alpha) (-1)^{\lambda_{1}-\lambda_{i_{1}'}} \Big(\prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{1}^{-1} \prec_{w} x_{j}^{-1} }} x_{j}\Big)\Bigg] dT.
\end{multline*}
But now note that
\begin{equation*}
\prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{1}^{-1} \prec_{w} x_{j}^{-1} }} (-1) = \prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{1}^{-1} \prec_{w} x_{j}^{-1} }} (-1) \prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{j}^{-1} \prec_{w} x_{1}^{-1} }} (-1)^{2} = (-1)^{i_{1}'-2},
\end{equation*}
since $i_{1}'-2$ is the number of variables between $x_{1}$ and $x_{1}^{-1}$ in the matching $w$. We can compute
\begin{multline*}
(1 + \alpha^{2} - 2\alpha)(-1)^{i_{1}'-2} + (1 + \alpha^{2} + 2\alpha)(-1)^{\lambda_{1}-\lambda_{i_{1}'}} = (1 + \alpha^{2})[(-1)^{i_{1}'} + (-1)^{\lambda_{1}-\lambda_{i_{1}'}} ] - 2\alpha[(-1)^{i_{1}'} + (-1)^{\lambda_{1} -\lambda_{i_{1}'} +1}] \\
= \begin{cases} 2(-1)^{i_{1}'}(1+\alpha^{2}) & \text{if $\lambda_{1}-\lambda_{i_{1}'}+i_{1}'-1$ is odd,}
\\
-4(-1)^{i_{1}'}\alpha &\text{if $\lambda_{1}-\lambda_{i_{1}'} +i_{1}'-1$ is even.} \\
\end{cases}
\end{multline*}
Combining this with the factor $1/4(1-t)$ and noting that
\begin{align*}
F_{\widehat \lambda, \tilde w} \cdot \Big(\prod_{\substack{x_{j} : \\ x_{1} \prec_{w} x_{j} \prec_{w} x_{1}^{-1} \prec_{w} x_{j}^{-1} }} x_{j}\Big) &= F_{\tilde \lambda, \tilde w},
\end{align*}
with
\begin{align*}
\tilde \lambda &= (\lambda_{2}+1, \dots, \lambda_{i_{1}'-1}+1, \lambda_{i_{1}'+1}, \dots, \lambda_{2n}),
\end{align*}
gives that
\begin{multline*}
2^{n}n! \int R_{\lambda, w}^{(2n)}(x^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT \\
= \frac{2^{n-1}(n-1)!}{2(1-t)} a_{i_{1}, i_{1}'}^{\lambda} (-1)^{i_{1}'} \int_{T} R_{\tilde \lambda, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT.
\end{multline*}
Now set $\widehat{\mu} = (\mu_{2}, \dots, \mu_{i_{1}'-1}, \mu_{i_{1}'+1}, \dots, \mu_{2n})$, and note that $\tilde \lambda$ and $\widehat{\mu} + \rho_{2n-2}$ have equivalent parts modulo $2$. Thus, using the induction hypothesis twice, the above is equal to
\begin{multline*}
\frac{2^{n-1}(n-1)!}{2(1-t)} a_{i_{1}, i_{1}'}^{\lambda} (-1)^{i_{1}'} \int_{T} R_{\widehat{\mu} + \rho_{2n-2}, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT \\
= \frac{a_{i_{1}, i_{1}'}^{\lambda} (-1)^{i_{1}'}}{2(1-t)} \frac{\epsilon(\tilde w)}{2^{n-1}(1-t)^{n-1}} \prod_{2 \leq k \leq n} a_{i_{k},i_{k}'}^{\widehat{\mu} + \rho_{2n-2}} = \frac{\epsilon(w)}{2^{n}(1-t)^{n}} \prod_{1 \leq k \leq n} a_{i_{k},i_{k}'}^{\lambda}
\end{multline*}
as desired. This proves the claim.
Note in particular this result implies that the integral of a matching $w$ is the term in $\frac{1}{2^{n}(1-t)^{n}} \mathrm{Pf}[a_{j,k}]^{\lambda}$ corresponding to $w$.
Now using the claim, we have
\begin{multline*}
\int_{T} R_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1- \alpha x_{i}^{\pm 1}) dT \\
= 2^{n} n! \sum_{\substack{w \text{ a matching }\\ \text{in }S_{2n}}} \PV \int_{T} R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT \\
= \frac{1}{2^{n}(1-t)^{n}} \mathrm{Pf}[a_{j,k}]^{\lambda}
\end{multline*}
since the term integrals are in bijection with the terms of the Pfaffian.
Now we use this to prove the theorem. Using Proposition \ref{normalizations}(iii), we have
\begin{multline*}
\frac{1}{\int \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) dT} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1})dT\\
= \frac{2(1-t)(1-t^{2}) \cdots (1-t^{2n})}{(1-t)^{n}} \frac{1}{v_{\lambda}(t)2^{n}(1-t)^{n}}\mathrm{Pf}[a_{j,k}]^{\lambda} \\
= \frac{(1-t)(1-t^{2}) \cdots (1-t^{2n})}{(1-t)^{2n}} \frac{1}{v_{\lambda}(t)2^{n-1}}\mathrm{Pf}[a_{j,k}]^{\lambda}.
\end{multline*}
But now by \cite[5.17]{FR}
\begin{align*}
\mathrm{Pf}[a_{j,k}]^{\lambda} &= 2^{n-1}\Big[ (-\alpha)^{\sum_{j=1}^{2n} [\lambda_{j} \bmod 2]}+ (-\alpha)^{\sum_{j=1}^{2n} [(\lambda_{j}+1) \bmod 2]}\Big],
\end{align*}
which gives the result.
\end{proof}
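As a consistency check on the Pfaffian evaluation (ours, not taken from \cite{FR}), consider $n=1$: the Pfaffian of the $2 \times 2$ antisymmetric matrix is the single entry $a_{1,2}^{\lambda}$, while the right-hand side of the evaluation is
$$
2^{0}\Big[ (-\alpha)^{[\lambda_{1} \bmod 2] + [\lambda_{2} \bmod 2]} + (-\alpha)^{[(\lambda_{1}+1) \bmod 2] + [(\lambda_{2}+1) \bmod 2]}\Big],
$$
which equals $1 + \alpha^{2}$ when $\lambda_{1}, \lambda_{2}$ have the same parity (so $(\lambda_{1}-1)-(\lambda_{2}-2)$ is odd) and $2(-\alpha)$ when their parities differ, in agreement with the definition of $a_{1,2}^{\lambda}$.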
\begin{theorem}\label{ame}
Let $l(\lambda) \leq 2n$. We have the following integral identity for $O^{-}(2n)$:
\begin{multline*}
\frac{(1-\alpha^{2})}{\int \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) dT} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT\\
= \frac{\phi_{2n}(t)}{v_{\lambda}(t) (1-t)^{2n}} \Big[ (-\alpha)^{\# \text{ of odd parts of $\lambda$}} - (-\alpha)^{\# \text{ of even parts of $\lambda$}}\Big].
\end{multline*}
\end{theorem}
\begin{proof}
We will first show the following:
\begin{align*}
\int R_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT
= \frac{(1+t)}{2} \frac{1}{2^{n-1}(1-t)^{n-1}}\mathrm{Pf}[M]^{\lambda},
\end{align*}
where the $(2n+2) \times (2n+2)$ antisymmetric matrix $[M]^{\lambda}$ is defined by
\begin{align*}
\begin{cases} M_{1,2}^{\lambda} =0 \\
M_{1,k}^{\lambda} = (-1)^{\lambda_{k-2}-(k-2)} &\text{if $k \geq 3$} \\
M_{2,k}^{\lambda} = 1 &\text{if $k \geq 3$} \\
M_{j,k}^{\lambda} = a_{j-2,k-2}^{\lambda} &\text{if $3 \leq j<k \leq 2n+2$}
\end{cases}
\end{align*}
and the $2n \times 2n$ matrix $[a_{j,k}]^{\lambda}$ is as in Theorem \ref{ape}.
Note first that the integral is a sum of $(2n)!$ terms, but by symmetry we may restrict to the ``pseudo-matchings''---those with $\pm 1$ anywhere, but $x_{i}$ to the left of $x_{i}^{-1}$ for $1 \leq i \leq n-1$ and $x_{i}$ to the left of $x_{j}$ for $1 \leq i<j \leq n-1$. There are $(2n)!/2^{n-1}(n-1)!$ such pseudo-matchings, and each corresponds to $2^{n-1}(n-1)!$ permutations with identical integral value.
\begin{claim} \label{ame1}
Let $w$ be a fixed pseudo-matching with $(-1)$ in position $j$ and $(+1)$ in position $k$ (here $1 \leq j \neq k \leq 2n$). Then we have the following:
\begin{multline*}
2^{n-1}(n-1)! \PV \int R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1}, \pm 1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT \\
= 2^{n-1}(n-1)! (-1)^{\lambda_{j}+k-2 + \chi_{j>k}} \frac{(1+t)}{2} \PV \int R_{\tilde \lambda, \tilde w}^{(2(n-1))}(x^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n-1}(1- \alpha x_{i}^{\pm 1})dT,
\end{multline*}
where $\tilde w$ is $w$ with $\pm 1$ deleted (in particular, a matching in $S_{2n-2}$) and $\tilde \lambda$ is $\lambda$ with parts $\lambda_{j}, \lambda_{k}$ deleted and all parts between $\lambda_{j}$ and $\lambda_{k}$ increased by $1$, so that (in the case $j<k$, for example)
\begin{align*}
\tilde \lambda &= (\lambda_{1}, \dots, \lambda_{j-1}, \lambda_{j+1}+1, \dots, \lambda_{k-1}+1, \lambda_{k+1}, \dots, \lambda_{2n}).
\end{align*}
\end{claim}
We prove the claim. First, using (\ref{Kd}), we have
\begin{multline*}
2^{n-1}(n-1)!\tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \\
= \prod_{1 \leq i \leq n-1} \frac{1-x_{i}^{\pm 2}}{(1+tx_{i}^{\pm 1})(1-tx_{i}^{\pm 1})(1+\sqrt{t}x_{i}^{\pm 1})(1-\sqrt{t}x_{i}^{\pm 1})} \prod_{1 \leq i<j \leq n-1} \frac{1-x_{i}^{\pm 1}x_{j}^{\pm 1}}{1-tx_{i}^{\pm 1}x_{j}^{\pm 1}}.
\end{multline*}
Define the set $X = \{ (x_{i}^{\pm 1}, x_{j}^{\pm 1}) : 1 \leq i \neq j \leq n-1 \}$, and let $u_{\lambda, w}^{(n-1)}(x;t)$ be defined by
\begin{equation*}
R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1}, \pm 1;t) = u_{\lambda, w}^{(n-1)}(x;t) \prod_{\substack{(z_{i}, z_{j}) \in X: \\ z_{i} \prec_{w} z_{j}}} \frac{z_{i} - tz_{j}}{z_{i} - z_{j}}.
\end{equation*}
Also define $p_{1}$ and $\Delta_{1}$ by
\begin{equation*}
u_{\lambda, w}^{(n-1)}(x;t) \prod_{1 \leq i \leq n-1} \frac{1-x_{i}^{\pm 2}}{(1+tx_{i}^{\pm 1})(1-tx_{i}^{\pm 1})(1+\sqrt{t}x_{i}^{\pm 1})(1-\sqrt{t}x_{i}^{\pm 1})} \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1})= p_{1} \prod_{i=1}^{n-1} \frac{1}{1-x_{i}^{-2}}
\end{equation*}
and
\begin{equation*}
\prod_{1 \leq i<j \leq n-1} \frac{1-x_{i}^{\pm 1}x_{j}^{\pm 1}}{1-tx_{i}^{\pm 1}x_{j}^{\pm 1}}\prod_{\substack{(z_{i}, z_{j}) \in X: \\ z_{i} \prec_{w} z_{j}}} \frac{z_{i} - tz_{j}}{z_{i} - z_{j}} = \Delta_{1}.
\end{equation*}
Note that
\begin{equation*}
R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1}, \pm 1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) = p_{1} \Delta_{1} \prod_{i=1}^{n-1} \frac{1}{1-x_{i}^{-2}} .
\end{equation*}
Define analogously $p_{2}$ and $\Delta_{2}$ using $R_{\tilde \lambda, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1};t)$ and $\tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t})$ instead of $R_{\lambda, w}^{(2n)}(x^{\pm 1}, \pm 1;t)$ and $\tilde \Delta_{K}^{(n-1)}( \pm t, \pm \sqrt{t})$.
Then one can check $\Delta_{1} = \Delta_{2} =: \Delta$ and $\Delta(\pm 1, \dots, \pm 1, x_{i+1}, \dots, x_{n-1})$ is holomorphic in $x_{i+1}$ for all $0 \leq i \leq n-2$ and all $2^{i}$ combinations. Also, the function $p =p_{1} - (-1)^{\lambda_{j}+k-2}\frac{(1+t)}{2}p_{2}$ (resp. $p=p_{1} - (-1)^{\lambda_{j}+k-1} \frac{(1+t)}{2}p_{2}$) satisfies the conditions of Lemma \ref{pvl2} if $j<k$ (resp. $j>k$). So using that result, we have
\begin{align*}
\int p_{1} \cdot \Delta \cdot \prod_{1 \leq i \leq n-1} \frac{1}{1-x_{i}^{-2}} dT = (-1)^{\lambda_{j}+k-2} \frac{(1+t)}{2}\int p_{2} \cdot \Delta \cdot \prod_{1 \leq i \leq n-1} \frac{1}{1-x_{i}^{-2}} dT
\end{align*}
if $j<k$ and
\begin{align*}
\int p_{1} \cdot \Delta \cdot \prod_{1 \leq i \leq n-1} \frac{1}{1-x_{i}^{-2}} dT = (-1)^{\lambda_{j}+k-1} \frac{(1+t)}{2} \int p_{2} \cdot \Delta \cdot \prod_{1 \leq i \leq n-1} \frac{1}{1-x_{i}^{-2}} dT
\end{align*}
if $j>k$. Thus, in the case $j<k$ we obtain
\begin{multline*}
\int R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1}, \pm 1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT \\
= (-1)^{\lambda_{j}+k-2} \frac{(1+t)}{2} \int R_{\tilde \lambda, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n-1}(1- \alpha x_{i}^{\pm 1}) dT,
\end{multline*}
and analogously for the case $j>k$, which proves the claim.
As in Theorem \ref{ape}, we introduce notation for pseudo-matchings. We write $\{(j,k), (i_{1}, i_{1}'), \dots, (i_{n-1}, i_{n-1}') \}$ for the pseudo-matching with $-1$ in position $j$, $1$ in position $k$ and $x_{l}$ in position $i_{l}$, $x_{l}^{-1}$ in position $i_{l}'$ for all $1 \leq l \leq n-1$. Note that we have $i_{l} < i_{l}'$ and $i_{l} < i_{m}$ for $l<m$. We may extend this to a matching in $S_{2(n+1)}$ by $\{(1,j+2), (2,k+2), (i_{1} + 2, i_{1}'+2), \dots, (i_{n-1}+2, i_{n-1}'+2) \} = \{ (j_{1}=1 , j_{1}' = j+2), (j_{2} =2, j_{2}' = k+2), \dots, (j_{n+1},j_{n+1}') \}$, with $i_{l}+2 = j_{l+2}$ and $i_{l}'+2 = j_{l+2}'$ for all $1 \leq l \leq n-1$.
\begin{claim}\label{amet}
Let $w = \{ (j,k), (i_{1}, i_{1}'), \dots, (i_{n-1},i_{n-1}') \}$ be a pseudo-matching in $S_{2n}$, and extend it to a matching $\{ (j_{1}=1, j_{1}' = j+2), (j_{2}=2, j_{2}' = k+2) \dots, (j_{n+1},j_{n+1}') \}$ of $S_{2(n+1)}$ as discussed above. Let $\lambda = (\lambda_{1}, \dots, \lambda_{2n})$ with $\lambda_{1} \geq \lambda_{2} \geq \cdots \geq \lambda_{2n} \in \mathbb{Z}$. Then we have the following term-evaluation:
\begin{multline*}
2^{n-1}(n-1)! \PV \int_{T} R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1}, \pm 1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT \\
= \frac{1+t}{2} \frac{\epsilon(w)}{2^{n-1}(1-t)^{n-1}} \prod_{1 \leq k \leq n+1} M_{j_{k},j_{k}'}^{\lambda}.
\end{multline*}
\end{claim}
We prove the claim. Let $\mu$ be such that $\lambda = \mu + \rho_{2n}$. By Claim \ref{ame1} the above LHS is equal to
\begin{multline*}
\begin{cases} 2^{n-1}(n-1)! (-1)^{\lambda_{j}+k-2} \frac{(1+t)}{2} \int R_{\tilde \lambda, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n-1}(1- \alpha x_{i}^{\pm 1})dT& \text{$j<k$,} \\
2^{n-1}(n-1)! (-1)^{\lambda_{j}+k-1} \frac{(1+t)}{2} \int R_{\tilde \lambda, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n-1}(1- \alpha x_{i}^{\pm 1})dT&\text{$j>k$,}
\end{cases}\\
= 2^{n-1} (n-1)! \frac{1+t}{2} (-1)^{j_{1}'+j_{2}'-1-c_{2}(w)} M_{1,j_{1}'}^{\lambda} M_{2, j_{2}'}^{\lambda} \\ \cdot \int_{T} R_{\tilde \lambda, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT,
\end{multline*}
where $c_{2}(w)$ is $0$ if $j_{1}' > j_{2}'$ (i.e., $(1,j_{1}')$ and $(2, j_{2}')$ do not cross) and $1$ if they do. Now we may use Claim \ref{apet} on the $(n-1)$-dimensional integral: let $\widehat{\mu}$ be the partition $\mu$ with parts $\mu_{j}$ and $\mu_{k}$ deleted; note that $\tilde \lambda$ and $\widehat{\mu} + \rho_{2n-2}$ have equivalent parts modulo $2$. Using this, we find that the above is equal to
\begin{multline*}
2^{n-1} (n-1)! \frac{1+t}{2} (-1)^{j_{1}'+j_{2}'-1-c_{2}(w)} M_{1,j_{1}'}^{\lambda} M_{2, j_{2}'}^{\lambda} \\\cdot \int_{T} R_{\widehat{\mu} + \rho_{2n-2}, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1};t) \tilde \Delta_{K}^{(n-1)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT \\
= \frac{1+t}{2} (-1)^{j_{1}'+j_{2}'-1-c_{2}(w)} M_{1,j_{1}'}^{\lambda} M_{2, j_{2}'}^{\lambda} \frac{\epsilon(\tilde w)}{2^{n-1}(1-t)^{n-1}} \prod_{1 \leq k \leq n-1} a_{i_{k},i_{k}'}^{\widehat{\mu} + \rho_{2n-2}} \\
= \frac{1+t}{2} \frac{\epsilon(w)}{2^{n-1}(1-t)^{n-1}} \prod_{1 \leq k \leq n+1} M_{j_{k},j_{k}'}^{\lambda},
\end{multline*}
as desired.
Note that in particular this result shows that the integral of a matching is a term in $\frac{(1+t)}{2^{n}(1-t)^{n-1}} \mathrm{Pf}[M]^{\lambda}$.
Now using the claim, we have
\begin{align*}
\int R_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT
= \frac{(1+t)}{2} \frac{1}{2^{n-1}(1-t)^{n-1}}\mathrm{Pf}[M]^{\lambda},
\end{align*}
since the terms of the Pfaffian are in bijection with the integrals of the pseudo-matchings.
Finally, to prove the theorem, we use Proposition \ref{normalizations}(iv) to obtain
\begin{multline*}
\frac{(1-\alpha^{2})}{\int \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) dT} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1}) dT\\
= \frac{(1-\alpha^{2}) (1-t)(1-t^{2}) \cdots (1-t^{2n})}{v_{\lambda}(t)(1-t)^{n+1}} \frac{1}{2^{n}(1-t)^{n-1}}\mathrm{Pf}[M]^{\lambda} \\
= \frac{\phi_{2n}(t)}{v_{\lambda}(t)(1-t)^{2n}} \frac{(1-\alpha^{2})}{2^{n}}\mathrm{Pf}[M]^{\lambda}.
\end{multline*}
Following the computation in \cite[5.21]{FR} (but noting that they are missing a factor of $2$), $\mathrm{Pf}[M]^{\lambda}$ may be evaluated as
\begin{align*}
\frac{2^{n}}{(1-\alpha^{2})}\Big[ (-\alpha)^{\sum_{j=1}^{2n} [\lambda_{j} \bmod 2]} - (-\alpha)^{\sum_{j=1}^{2n} [(\lambda_{j}+1) \bmod 2]} \Big],
\end{align*}
which proves the theorem.
\end{proof}
\begin{theorem}\label{apo}
Let $l(\lambda) \leq 2n+1$. We have the following integral identity for $O^{+}(2n+1)$:
\begin{multline*}
\frac{(1-\alpha)}{\int \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t}) dT} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, 1;t) \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t})\prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1})dT \\
= \frac{\phi_{2n+1}(t)}{v_{\lambda}(t)(1-t)^{2n+1}} \Big[ (-\alpha)^{\# \text{ of odd parts of $\lambda$}} + (-\alpha)^{\# \text{ of even parts of $\lambda$}} \Big].
\end{multline*}
\end{theorem}
\begin{proof}
We use an argument analogous to the $O^{-}(2n)$ case. We will first show the following:
\begin{align*}
\int R_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, 1;t) \tilde \Delta_{K}^{(n)}(t,-1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT
= \frac{1}{2^{n}(1-t)^{n}} \mathrm{Pf}[M]^{\lambda},
\end{align*}
where the $(2n+2) \times (2n+2)$ antisymmetric matrix $[M]^{\lambda}$ is given by
\begin{align*}
\begin{cases} M_{1,k}^{\lambda} = 1 &\text{if $1 <k \leq 2n+2$} \\
M_{j,k}^{\lambda} = a_{j-1,k-1}^{\lambda} &\text{if $2 \leq j < k \leq 2n+2$},
\end{cases}
\end{align*}
and as usual $[a_{j,k}]^{\lambda}$ is the $(2n+1) \times (2n+1)$ antisymmetric matrix specified by Theorem \ref{ape}. The integral is a sum of $(2n+1)!$ terms, one for each permutation in $S_{2n+1}$. But note that by symmetry we may restrict to pseudo-matchings in $S_{2n+1}$: those with $1$ anywhere, but $x_{i}$ to the left of $x_{i}^{-1}$ for all $1 \leq i \leq n$, and $x_{i}$ to the left of $x_{j}$ for $1 \leq i<j \leq n$. There are $(2n+1)!/2^{n}n!$ such pseudo-matchings, and each corresponds to exactly $2^{n}n!$ permutations with identical integral value.
\begin{claim}\label{apo1}
Let $w$ be a fixed pseudo-matching with $1$ in position $k$, for some $1 \leq k \leq 2n+1$. Then we have the following:
\begin{multline*}
2^{n}n! \PV \int R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1},1;t) \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t}) \prod_{i=1}^{n}(1- \alpha x_{i}^{\pm 1}) dT \\
= 2^{n} n! (-1)^{k-1} \PV \int R_{\tilde \lambda, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT,
\end{multline*}
where $\tilde w$ is $w$ with $1$ deleted (in particular, a matching in $S_{2n}$) and $\tilde \lambda$ is $\lambda$ with $\lambda_{k}$ deleted and the parts to the left of $\lambda_{k}$ increased by $1$, i.e.,
\begin{align*}
\tilde \lambda &= (\lambda_{1} + 1, \dots, \lambda_{k-1}+1, \lambda_{k+1}, \dots, \lambda_{2n+1}).
\end{align*}
\end{claim}
We prove the claim; note that this proof is very similar to Claim \ref{ame1} for the $O^{-}(2n)$ case. First, using (\ref{Kd}), we have
\begin{equation*}
2^{n}n! \tilde \Delta_{K}^{(n)}(t,-1, \pm \sqrt{t})
= \prod_{1 \leq i \leq n} \frac{1-x_{i}^{\pm 2}}{(1-tx_{i}^{\pm 1})(1+x_{i}^{\pm 1})(1-\sqrt{t}x_{i}^{\pm 1})(1+ \sqrt{t}x_{i}^{\pm 1})} \prod_{1 \leq i<j \leq n} \frac{1-x_{i}^{\pm 1}x_{j}^{\pm 1}}{1-tx_{i}^{\pm 1}x_{j}^{\pm 1}}.
\end{equation*}
Define the set $X = \{ (x_{i}^{\pm 1}, x_{j}^{\pm 1}): 1 \leq i \neq j \leq n \}$, and let $u_{\lambda, w}^{(n)}(x;t)$ be defined by
\begin{equation*}
R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1},1;t) = u_{\lambda, w}^{(n)}(x;t) \prod_{\substack{(z_{i},z_{j}) \in X: \\ z_{i} \prec_{w} z_{j}}} \frac{z_{i} - tz_{j}}{z_{i} - z_{j}}.
\end{equation*}
Also define $p_{1}$ and $\Delta_{1}$ by
\begin{equation*}
u_{\lambda, w}^{(n)}(x;t) \prod_{1 \leq i \leq n} \frac{1-x_{i}^{\pm 2}}{(1-tx_{i}^{\pm 1})(1+x_{i}^{\pm 1})(1-\sqrt{t}x_{i}^{\pm 1})(1+ \sqrt{t}x_{i}^{\pm 1})} \prod_{i=1}^{n}(1- \alpha x_{i}^{\pm 1}) = p_{1} \prod_{i=1}^{n} \frac{1}{1-x_{i}^{-2}}
\end{equation*}
and
\begin{equation*}
\prod_{1 \leq i<j \leq n} \frac{1-x_{i}^{\pm 1}x_{j}^{\pm 1}}{1-tx_{i}^{\pm 1}x_{j}^{\pm 1}} \prod_{\substack{(z_{i},z_{j}) \in X: \\ z_{i} \prec_{w} z_{j}}} \frac{z_{i} - tz_{j}}{z_{i} - z_{j}} = \Delta_{1}.
\end{equation*}
Note that
\begin{equation*}
R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, 1;t) \tilde \Delta_{K}^{(n)}(t,-1, \pm \sqrt{t}) \prod_{i=1}^{n}(1- \alpha x_{i}^{\pm 1}) = p_{1} \Delta_{1} \prod_{i=1}^{n} \frac{1}{1-x_{i}^{-2}}.
\end{equation*}
Define analogously $p_{2}$ and $\Delta_{2}$ using $R_{\tilde \lambda, \tilde w}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t)$ and $\tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t})$ instead of $R_{\lambda, w}^{(2n+1)}(x^{\pm 1},1;t)$ and $\tilde \Delta_{K}^{(n)}(t,-1, \pm \sqrt{t})$.
Then note that $\Delta_{1} = \Delta_{2} =: \Delta$. Some computation shows that $\Delta(\pm 1, \dots, \pm 1, x_{i+1}, \dots, x_{n})$ is holomorphic in $x_{i+1}$ for all $0 \leq i \leq n-1$ and all $2^{i}$ combinations. Further computation shows that the function $p = p_{1} - (-1)^{k-1}p_{2}$ satisfies the conditions of Lemma \ref{pvl2}, so we have
\begin{align*}
\int p \cdot \Delta \cdot \prod_{i=1}^{n} \frac{1}{1-x_{i}^{-2}} dT &= 0
\end{align*}
or equivalently,
\begin{align*}
\int p_{1} \cdot \Delta_{1} \cdot \prod_{i=1}^{n} \frac{1}{1-x_{i}^{-2}} dT &= (-1)^{k-1} \int p_{2} \cdot \Delta_{2} \cdot \prod_{i=1}^{n} \frac{1}{1-x_{i}^{-2}} dT,
\end{align*}
which proves the claim.
In keeping with the notation of the previous two theorems, we write $\{(k), (i_{1}, i_{1}'), \dots, (i_{n}, i_{n}') \}$ for the pseudo-matching $w$ with $1$ in position $k$ and $x_{l}$ in position $i_{l}$, $x_{l}^{-1}$ in position $i_{l}'$, for all $1 \leq l \leq n$. We can extend this to a matching in $S_{2(n+1)}$ by $\{ (1, k+1), (i_{1} + 1, i_{1}'+1), \dots, (i_{n} + 1, i_{n}' + 1) \} = \{ (j_{1} = 1, j_{1}' = k+1), \dots, (j_{n+1}, j_{n+1}') \}$, with $i_{l} + 1 = j_{l+1}$ and $i_{l}'+1 = j_{l+1}'$ for $1 \leq l \leq n$.
\begin{claim}\label{apot}
Let $w = \{ (k), (i_{1}, i_{1}'), \dots, (i_{n}, i_{n}') \}$ be a pseudo-matching in $S_{2n+1}$, and extend it to a matching $\{ (j_{1} = 1, j_{1}' = k+1), \dots, (j_{n+1}, j_{n+1}') \}$ as discussed above. Let $\lambda = (\lambda_{1}, \dots, \lambda_{2n+1})$ with $\lambda_{1} \geq \lambda_{2} \geq \cdots \geq \lambda_{2n+1} \in \mathbb{Z}$. Then we have the following term-evaluation:
\begin{align*}
2^{n}n! \PV \int_{T} R_{\lambda, w}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1},1;t) \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT
= \frac{\epsilon(w)}{2^{n}(1-t)^{n}} \prod_{1 \leq k \leq n+1} M_{j_{k}, j_{k}'}^{\lambda}.
\end{align*}
\end{claim}
We prove the claim. Let $\mu$ be such that $\lambda = \mu + \rho_{2n+1}$. By Claim \ref{apo1} the above LHS is equal to
\begin{align*}
& 2^{n}n! (-1)^{k-1} \int_{T} R_{\tilde \lambda, \tilde w} (x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT \\
&= 2^{n}n! (-1)^{j_{1}'-j_{1} + 1} M_{j_{1}, j_{1}'}^{\lambda} \int_{T} R_{\tilde \lambda, \tilde w} (x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT.
\end{align*}
Now we use Claim \ref{apet}: let $\widehat{\mu}$ be $\mu$ with part $\mu_{k}$ deleted; note $\tilde \lambda - 1^{2n} = \widehat{\mu} + \rho_{2n}$. Using that result, the above is equal to
\begin{multline*}
2^{n}n! (-1)^{j_{1}'-j_{1} + 1} M_{j_{1}, j_{1}'}^{\lambda} \int_{T} R_{\widehat{\mu} + \rho_{2n}, \tilde w} (x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT \\
= (-1)^{j_{1}'-j_{1} + 1} M_{j_{1}, j_{1}'}^{\lambda} \frac{\epsilon(\tilde w)}{2^{n}(1-t)^{n}} \prod_{1 \leq k \leq n} a_{i_{k},i_{k}'}^{\widehat{\mu} + \rho_{2n}}= \frac{\epsilon(w)}{2^{n}(1-t)^{n}} \prod_{1 \leq k \leq n+1} M_{j_{k},j_{k}'}^{\lambda},
\end{multline*}
as desired.
Note that in particular this result shows that the integral of a matching is a term in $\frac{1}{2^{n}(1-t)^{n}} \mathrm{Pf}[M]^{\lambda}$.
Now using the claim, we have
\begin{align*}
\int R_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, 1;t) \tilde \Delta_{K}^{(n)}(t,-1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) dT
= \frac{1}{2^{n}(1-t)^{n}} \mathrm{Pf}[M]^{\lambda},
\end{align*}
since the terms of the Pfaffian are in bijection with the integrals of the pseudo-matchings.
Finally, to prove the theorem, we use Proposition \ref{normalizations}(v) to obtain
\begin{multline*}
\frac{(1-\alpha)}{\int \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t}) dT} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, 1;t) \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t})\prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1})dT \\
= \frac{(1-\alpha)\phi_{2n+1}(t)}{v_{\lambda}(t)(1-t)^{n+1}} \frac{1}{2^{n}(1-t)^{n}}\mathrm{Pf}[M]^{\lambda},
\end{multline*}
but by a change of basis $[M]^{\lambda}$ is equivalent to the one defined in \cite[5.24]{FR}, and that Pfaffian was computed to be
\begin{align*}
\frac{2^{n}}{(1-\alpha)}\Big[ (-\alpha)^{\sum_{j=1}^{2n+1} [\lambda_{j} \text{ mod}2]} + (-\alpha)^{\sum_{j=1}^{2n+1} [(\lambda_{j}+1) \text{ mod}2]} \Big],
\end{align*}
which proves the theorem.
\end{proof}
\begin{theorem}\label{amo}
Let $l(\lambda) \leq 2n+1$. We have the following integral identity for $O^{-}(2n+1)$:
\begin{multline*}
\frac{(1+\alpha)}{\int \tilde \Delta_{K}^{(n)}(1,-t,\pm \sqrt{t}) dT} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, -1;t) \tilde \Delta_{K}^{(n)}(1,-t,\pm \sqrt{t})\prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1})dT \\
= \frac{\phi_{2n+1}(t)}{v_{\lambda}(t)(1-t)^{2n+1}} \Big[ (-\alpha)^{\# \text{ of odd parts of $\lambda$}} - (-\alpha)^{\# \text{ of even parts of $\lambda$}} \Big].
\end{multline*}
\end{theorem}
\begin{proof}
We obtain the $O^{-}(2n+1)$ integral from the $O^{+}(2n+1)$ integral. See the discussion for the $O^{-}(2n+1)$ integral in the next section. The upshot is that the $O^{-}(2n+1)$ integral is $(-1)^{|\lambda|}$ times the $O^{+}(2n+1)$ integral with parameter $-\alpha$. Using Theorem \ref{apo}, we get
\begin{align*}
(-1)^{|\lambda|} \frac{\phi_{2n+1}(t)}{v_{\lambda}(t)(1-t)^{2n+1}} \Big[ \alpha^{\# \text{ of odd parts of $\lambda$}} + \alpha^{\# \text{ of even parts of $\lambda$}} \Big].
\end{align*}
But note that $(-1)^{\lambda_{i}}$ is $-1$ if $\lambda_{i}$ is odd, and $1$ if $\lambda_{i}$ is even, so that $(-1)^{|\lambda|} = (-1)^{\# \text{ of odd parts of $\lambda$}}$. Also,
\begin{align*}
(-1)^{\# \text{ of odd parts of $\lambda$}} (-1)^{\# \text{ of even parts of $\lambda$}} = (-1)^{2n+1}
= -1.
\end{align*}
Combining these facts gives the result.
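Spelled out (a routine check): writing $o$ and $e$ for the numbers of odd and even parts of $\lambda$, so that $o + e = 2n+1$,

```latex
\begin{align*}
(-1)^{|\lambda|}\big[ \alpha^{o} + \alpha^{e} \big]
= (-\alpha)^{o} + (-1)^{o}\alpha^{e}
= (-\alpha)^{o} - (-\alpha)^{e},
\end{align*}
```

using $(-1)^{|\lambda|} = (-1)^{o}$ and $(-1)^{o} = -(-1)^{e}$.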
\end{proof}
We briefly mention some existing results related to Theorems \ref{ape}, \ref{ame}, \ref{apo}, and \ref{amo}. First, note that these four results are $t$-analogs of the results of Proposition $2$ of \cite{FR}. For example, in the $O^{+}(2n)$ case, that result states
\begin{align*}
\langle \text{det}(1_{2n} + \alpha U) s_{\rho}(U)\rangle_{U \in O^{+}(2n)} = \frac{1}{2^{n-1}} \mathrm{Pf}[a_{jk}]
= \alpha^{\sum_{j=1}^{2n} [\rho_{j} \text{ mod}2]} + \alpha^{\sum_{j=1}^{2n} [(\rho_{j}+1) \text{ mod}2]},
\end{align*}
where $\langle \cdot \rangle_{O^{+}(2n)}$ denotes the integral with respect to the eigenvalue density of the group $O^{+}(2n)$.
Also, note that the $\alpha = 0$ case of these identities gives that the four integrals
\begin{align*}
\frac{1}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) dT \\
\frac{1}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1}, \pm 1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) dT \\
\frac{1}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1},1;t) \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t}) dT \\
\frac{1}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, -1;t) \tilde \Delta_{K}^{(n)}(1,-t, \pm \sqrt{t})dT
\end{align*}
vanish unless all $2n$ or $2n+1$ (as appropriate) parts of $\lambda$ have the same parity (see Theorem 4.1 of \cite{RV}). Here $Z$ is the normalization: it makes the integral equal to unity when $\lambda$ is the zero partition.
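To illustrate the vanishing condition (a direct instance of the statement above), take $2n = 2$:

```latex
\begin{itemize}
\item $\lambda = (2,1)$: the parts have mixed parity, so all four integrals vanish;
\item $\lambda = (3,1)$ or $\lambda = (2,2)$: all parts share a parity, so the integrals need not vanish.
\end{itemize}
```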
\section{$\alpha, \beta$ version}
In this section, we further generalize the identities of the previous section by using the Pieri rule to add an extra parameter $\beta$. The values are given in terms of Rogers--Szeg\H{o} polynomials (\ref{RSpol}).
\begin{theorem}\label{absum}
We have the following integral identities:
\begin{enumerate}
\item for $O(2n)$
\begin{multline*}
\frac{1}{\int \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t})dT} \int P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) (1-\beta x_{i}^{\pm 1}) dT\\
+ \frac{(1-\alpha^{2})(1-\beta^{2})}{\int \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t})dT} \int P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1})dT\\
= \frac{2\phi_{2n}(t)}{v_{\mu}(t) (1-t)^{2n}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\mu)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\mu)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \mu} \Big].
\end{multline*}
\item for $O(2n+1)$
\begin{multline*}
\frac{(1-\alpha)(1-\beta)}{\int \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t}) dT } \int P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, 1;t) \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t})\prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1}) (1-\beta x_{i}^{\pm 1}) dT \\
+ \frac{(1+\alpha)(1+\beta)}{\int \tilde \Delta_{K}^{(n)}(1,-t, \pm \sqrt{t}) dT} \int P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, -1;t) \tilde \Delta_{K}^{(n)}(1,-t,\pm \sqrt{t})\prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1})dT \\
= \frac{2\phi_{2n+1}(t)}{v_{\mu}(t)(1-t)^{2n+1}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\mu)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\mu)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \mu} \Big].
\end{multline*}
\end{enumerate}
\end{theorem}
\begin{proof}
The proof follows Warnaar's argument (Theorem 1.1 of \cite{W}), with the only difference being that we take into account zero parts in the computation whereas Warnaar's infinite version is concerned only with nonzero parts. The basic method is to use the Pieri rule for $P_{\mu}(x;t)e_{r}(x)$ in combination with the results of the previous section (the sum of the results of Theorems \ref{ape}, \ref{ame} for $O(2n)$ and similarly Theorems \ref{apo}, \ref{amo} for $O(2n+1)$). Note that Warnaar starts with the case $a=b=0$ in his notation (the orthogonal group case) and successively applies the Pieri rule two times, introducing a parameter each time. Because we proved the $\alpha$ case in the previous section, we need only use the Pieri rule once.
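For reference, the Pieri rule being applied is the standard vertical-strip rule for Hall--Littlewood polynomials (see \cite{Mac}; we recall the statement rather than rederive it):

```latex
\begin{align*}
P_{\mu}(x;t)\, e_{r}(x) &= \sum_{\lambda} \psi'_{\lambda/\mu}(t)\, P_{\lambda}(x;t),
\end{align*}
```

where the sum is over partitions $\lambda \supseteq \mu$ such that $\lambda/\mu$ is a vertical strip with $r$ boxes, and $\psi'_{\lambda/\mu}(t)$ is the usual product of $t$-binomial factors.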
\end{proof}
\begin{theorem}\label{abindiv}
Write $\lambda = 0^{m_{0}(\lambda)} \; 1^{m_{1}(\lambda)} \; 2^{m_{2}(\lambda)} \cdots$, with total number of parts $2n$ or $2n+1$ as necessary. Then we have the following integral identities for the components of the orthogonal group:
\begin{enumerate}
\item for $O^{+}(2n)$
\begin{multline*}
\frac{1}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) (1-\beta x_{i}^{\pm 1}) dT\\
= \frac{\phi_{2n}(t)}{v_{\lambda}(t) (1-t)^{2n}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \lambda} \\
+ \Big(\prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of even parts of } \lambda} \Big]
\end{multline*}
\item for $O^{-}(2n)$
\begin{multline*}
\frac{(1-\alpha^{2})(1-\beta^{2})}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1}) dT\\
= \frac{\phi_{2n}(t)}{v_{\lambda}(t) (1-t)^{2n}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \lambda} \\
- \Big(\prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of even parts of } \lambda} \Big]
\end{multline*}
\item for $O^{+}(2n+1)$
\begin{multline*}
\frac{(1-\alpha)(1-\beta)}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, 1;t) \tilde \Delta_{K}^{(n)}(t,-1,\pm \sqrt{t})\prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1}) (1-\beta x_{i}^{\pm 1}) dT \\
= \frac{\phi_{2n+1}(t)}{v_{\lambda}(t)(1-t)^{2n+1}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0 } H_{m_{2i+1}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \lambda} \\
+ \Big(\prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of even parts of } \lambda} \Big]
\end{multline*}
\item for $O^{-}(2n+1)$
\begin{multline*}
\frac{(1+\alpha)(1+\beta)}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, -1;t) \tilde \Delta_{K}^{(n)}(1,-t,\pm \sqrt{t})\prod_{i=1}^{n}(1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1})dT \\
= \frac{\phi_{2n+1}(t)}{v_{\lambda}(t)(1-t)^{2n+1}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \lambda} \\
- \Big(\prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of even parts of } \lambda} \Big],
\end{multline*}
\end{enumerate}
where $Z$ is the normalization at $\alpha = 0, \beta=0$ and $\lambda = 0^{2n}, 0^{2n+1}$ as appropriate.
\end{theorem}
\begin{proof}
Note that the Hall--Littlewood polynomials satisfy the following property:
\begin{align*}
\Big( \prod_{i=1}^{l} z_{i} \Big) P_{\lambda}(z_{1}, \dots, z_{l};t) &= P_{\lambda + 1^{l}}(z_{1}, \dots, z_{l};t).
\end{align*}
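For instance (a minimal check of the shift property), with $l = 2$ and $\lambda = (1,0)$:

```latex
\begin{align*}
z_{1}z_{2}\, P_{(1,0)}(z_{1}, z_{2};t) = z_{1}z_{2}(z_{1}+z_{2}) = z_{1}^{2}z_{2} + z_{1}z_{2}^{2} = P_{(2,1)}(z_{1}, z_{2};t).
\end{align*}
```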
So in the case $O(2n)$, for example, since $\prod_{i=1}^{n} x_{i} \cdot x_{i}^{-1} = 1$ and $\prod_{i=1}^{n-1} x_{i} \cdot x_{i}^{-1} \cdot 1 \cdot (-1) = -1$, we have
\begin{align*}
P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) &= P_{\mu+1^{2n}}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t), \\
P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) &= -P_{\mu + 1^{2n}}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t).
\end{align*}
Thus,
\begin{multline*}
\frac{1}{\int \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t})dT} \int P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) (1-\beta x_{i}^{\pm 1}) dT\\
- \frac{(1-\alpha^{2})(1-\beta^{2})}{\int \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t})dT} \int P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1})dT\\
=\frac{1}{\int \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t})dT} \int P_{\mu + 1^{2n}}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) (1-\beta x_{i}^{\pm 1}) dT\\
+ \frac{(1-\alpha^{2})(1-\beta^{2})}{\int \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t})dT} \int P_{\mu + 1^{2n}}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1})dT\\
= \frac{2\phi_{2n}(t)}{v_{\mu + 1^{2n}}(t) (1-t)^{2n}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\mu + 1^{2n})}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\mu + 1^{2n})}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \mu + 1^{2n}} \Big],
\end{multline*}
where the last equality follows from Theorem \ref{absum}(i). Now note that $v_{\mu + 1^{2n}}(t) = v_{\mu}(t)$, $m_{i}(\mu + 1^{2n}) = m_{i-1}(\mu)$ for all $i \geq 1$, and the number of odd parts in $\mu + 1^{2n}$ is the same as the number of even parts in $\mu$. Thus the above is equal to
\begin{equation*}
\frac{2\phi_{2n}(t)}{v_{\mu}(t) (1-t)^{2n}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i+1}(\mu)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i}(\mu)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of even parts of } \mu} \Big].
\end{equation*}
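As a check of this bookkeeping, take $2n = 4$ and $\mu = (2,1,1,0)$, so $\mu + 1^{4} = (3,2,2,1)$:

```latex
\begin{align*}
m_{1}(\mu + 1^{4}) = m_{0}(\mu) = 1, \qquad m_{2}(\mu + 1^{4}) = m_{1}(\mu) = 2, \qquad m_{3}(\mu + 1^{4}) = m_{2}(\mu) = 1,
\end{align*}
```

and $\mu + 1^{4}$ has two odd parts ($3$ and $1$), matching the two even parts ($2$ and $0$) of $\mu$.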
Then, taking the sum/difference of this equation and Theorem \ref{absum}(i), we obtain
\begin{multline*}
\frac{2}{\int \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t})dT} \int P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1}) (1-\beta x_{i}^{\pm 1}) dT\\
= \frac{2\phi_{2n}(t)}{v_{\mu}(t) (1-t)^{2n}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\mu)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\mu)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \mu} \\
+ \Big(\prod_{i \geq 0} H_{m_{2i+1}(\mu)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i}(\mu)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of even parts of } \mu} \Big],
\end{multline*}
and
\begin{multline*}
\frac{2(1-\alpha^{2})(1-\beta^{2})}{\int \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t})dT} \int P_{\mu}(x_{1}^{\pm 1}, \dots, x_{n-1}^{\pm 1},1,-1;t) \tilde \Delta_{K}^{(n-1)}(\pm t, \pm \sqrt{t}) \prod_{i=1}^{n-1} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1}) dT\\
= \frac{2\phi_{2n}(t)}{v_{\mu}(t) (1-t)^{2n}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\mu)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\mu)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \mu} \\
- \Big(\prod_{i \geq 0} H_{m_{2i+1}(\mu)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i}(\mu)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of even parts of } \mu} \Big],
\end{multline*}
as desired. The $O(2n+1)$ result is analogous; use instead Theorem \ref{absum}(ii). Note alternatively that as in the $\alpha$ case, we can obtain the $O^{-}(2n+1)$ integral directly from the $O^{+}(2n+1)$ integral, since the change of variables $x_{i} \rightarrow -x_{i}$ gives
\begin{multline*}
\int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, -1;t) \tilde \Delta_{K}^{(n)}(1,-t,\pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1}) dT \\
= \int P_{\lambda}(-x_{1}^{\pm 1}, \dots, -x_{n}^{\pm 1}, -1;t) \tilde \Delta_{K}^{(n)}(-1,t, \pm \sqrt{t}) \prod_{i=1}^{n} (1+ \alpha x_{i}^{\pm 1})(1+\beta x_{i}^{\pm 1}) dT\\
= (-1)^{|\lambda|}\int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, 1;t) \tilde \Delta_{K}^{(n)}(-1, t, \pm \sqrt{t}) \prod_{i=1}^{n} (1+\alpha x_{i}^{\pm 1})(1+\beta x_{i}^{\pm 1}) dT,
\end{multline*}
and $\int \tilde \Delta_{K}^{(n)}(1,-t,\pm \sqrt{t}) dT = \int \tilde \Delta_{K}^{(n)}(-1,t,\pm \sqrt{t}) dT$, so that
\begin{multline*}
\frac{(1+\alpha)(1+\beta)}{\int \tilde \Delta_{K}^{(n)}(1,-t,\pm \sqrt{t}) dT}\int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}, -1;t) \tilde \Delta_{K}^{(n)}(1,-t,\pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1}) dT \\
= \frac{(-1)^{|\lambda|}(1+\alpha)(1+\beta)}{\int \tilde \Delta_{K}^{(n)}(-1,t,\pm \sqrt{t})dT} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1},1;t) \tilde \Delta_{K}^{(n)}(-1,t,\pm \sqrt{t}) \prod_{i=1}^{n}(1+\alpha x_{i}^{\pm 1})(1+\beta x_{i}^{\pm 1}) dT,
\end{multline*}
which is $(-1)^{|\lambda|}$ times the $O^{+}(2n+1)$ integral with parameters $-\alpha, -\beta$.
\end{proof}
We remark that Theorem \ref{abindiv}(i) may be obtained using the direct method of the previous section. One ultimately obtains a recursive formula, for which the Rogers--Szeg\H{o} polynomials are a solution. However, this argument does not easily work for $O^{-}(2n), O^{+}(2n+1)$ and $O^{-}(2n+1)$. Thus, it is more practical to use the Pieri rule to obtain the $O(l)$ ($l$ odd or even) integrals, and then solve for the components.
\section{Special Cases}
We will use the results of the previous section to prove some identities that correspond to particular values of $\alpha$ and $\beta$.
\begin{corollary}\label{aminus1}
($\alpha = -1$) We have the following identity:
\begin{align*}
\frac{1}{Z} \int P_{\lambda}^{(2n)}(x^{\pm 1};t) \tilde \Delta_{K}^{(n)}( \pm 1, \pm \sqrt{t}) \prod_{i=1}^{n}(1+x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1}) dT
= \frac{2\phi_{2n}(t)}{v_{\lambda}(t)(1-t)^{2n}} \prod_{i \geq 0} H_{m_{i}(\lambda)}(-\beta;t),
\end{align*}
where the normalization $Z = \int \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) dT$.
\end{corollary}
\begin{proof}
Just put $\alpha = -1$ into Theorem \ref{abindiv}(i).
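Indeed, at $\alpha = -1$ one has $\alpha\beta = \beta/\alpha = -\beta$ and $(-\alpha)^{k} = 1$ for all $k$, so both bracketed terms in Theorem \ref{abindiv}(i) reduce to the same product:

```latex
\begin{align*}
\prod_{i \geq 0} H_{m_{2i}(\lambda)}(-\beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(-\beta;t) &= \prod_{i \geq 0} H_{m_{i}(\lambda)}(-\beta;t),
\end{align*}
```

and the bracket contributes twice this product, which accounts for the factor $2$.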
\end{proof}
\begin{corollary}
($\alpha = -\beta$) We have the following identity:
\begin{multline*}
\frac{1}{Z} \int P_{\lambda}^{(2n)}(x^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{i=1}^{n} (1-\alpha^{2} x_{i}^{\pm 2}) dT\\
= \frac{\phi_{2n}(t)}{v_{\lambda}(t)(1-t)^{2n}}\Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\lambda)}(-\alpha^{2};t) \prod_{i \geq 0}H_{m_{2i+1}(\lambda)}(-1;t)\Big)
(-\alpha)^{\# \text{ of odd parts of } \lambda}
\\+ \Big(\prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(-\alpha^{2};t) \prod_{i \geq 0} H_{m_{2i}(\lambda)}(-1;t)\Big)(-\alpha)^{\# \text{ of even parts of } \lambda} \Big],
\end{multline*}
where the normalization $Z = \int \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) dT$. In particular, this vanishes unless all odd parts of $\lambda$ have even multiplicity, or all even parts of $\lambda$ have even multiplicity.
\end{corollary}
\begin{proof}
Just put $\alpha = -\beta$ into Theorem \ref{abindiv}(i). For the second part, we use \cite[1.10b]{W}: $H_{m}(-1;t)$ vanishes unless $m$ is even, in which case it is $(t;t^{2})_{m/2} = (1-t)(1-t^{3}) \cdots (1-t^{m-1})$.
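As a sanity check on the cited vanishing: $H_{1}(z;t) = 1 + z$ vanishes at $z = -1$, while

```latex
\begin{align*}
H_{2}(-1;t) = 1 - (1+t) + 1 = 1 - t = (t;t^{2})_{1}.
\end{align*}
```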
\end{proof}
\begin{corollary}
Symplectic Integral (see Theorem 4.1 of \cite{RV}). We have the following identity:
\begin{align*}
\frac{1}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(\pm \sqrt{t},0,0)dT = \frac{\phi_{n}(t^{2})}{(1-t^{2})^{n}v_{\mu}(t^{2})} =\frac{C^{0}_{\mu}(t^{2n};0,t^{2})}{C^{-}_{\mu}(t^{2};0,t^{2})},
\end{align*}
when $\lambda = \mu^{2}$ for some $\mu$ and $0$ otherwise (here the normalization $Z = \int \tilde \Delta_{K}^{(n)}( \pm \sqrt{t},0,0) dT$).
\end{corollary}
\begin{proof}
Use the computation
\begin{align*}
\tilde \Delta_{K}^{(n)}(\pm \sqrt{t},0,0) &= \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{1 \leq i \leq n} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1}) \big|_{\alpha = -1, \beta = 1},
\end{align*}
and Corollary \ref{aminus1} with $\beta = 1$. The result then follows from \cite[1.10b]{W}: $H_{m_{i}(\lambda)}(-1;t)$ vanishes unless $m_{i}(\lambda)$ is even, in which case it is $(1-t)(1-t^{3}) \cdots (1-t^{m_{i}(\lambda)-1})$.
\end{proof}
We remark that this integral identity may also be proved directly, using techniques similar to those used for the orthogonal group integrals of Section 4. In fact, in this case, there are no poles on the unit circle so the analysis is much more straightforward.
\begin{corollary} \label{Kawanaka}
Kawanaka's identity (see \cite{Ka2}, \cite{Ka1}). We have the following identity:
\begin{align*}
\frac{1}{Z} \int P_{\lambda}(x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1};t) \tilde \Delta_{K}^{(n)}(1,\sqrt{t},0,0) dT &= \frac{\phi_{2n}(\sqrt{t})}{(1-\sqrt{t})^{2n}v_{\lambda}(\sqrt{t})} = \frac{C^0_\lambda(t^{n};0,\sqrt{t})}{C^-_\lambda(\sqrt{t};0,\sqrt{t})}
\end{align*}
(here the normalization $Z = \int \tilde \Delta_{K}^{(n)}(1,\sqrt{t},0,0) dT$).
\end{corollary}
\begin{proof}
Use the computation
\begin{align*}
\tilde \Delta_{K}^{(n)}(1, \sqrt{t},0,0) &= \tilde \Delta_{K}^{(n)}(\pm 1, \pm \sqrt{t}) \prod_{1 \leq i \leq n} (1-\alpha x_{i}^{\pm 1})(1-\beta x_{i}^{\pm 1}) \big|_{\alpha = -1, \beta = -\sqrt{t}},
\end{align*}
and Corollary \ref{aminus1} with $\beta = -\sqrt{t}$. The result then follows from \cite[1.10d]{W}: $H_{m}(\sqrt{t};t) = \prod_{j=1}^{m} (1+ (\sqrt{t})^{j})$.
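For example, at $m = 2$ the cited evaluation reads

```latex
\begin{align*}
H_{2}(\sqrt{t};t) = 1 + (1+t)\sqrt{t} + t = (1+\sqrt{t})(1+t),
\end{align*}
```

in agreement with $\prod_{j=1}^{2} (1+(\sqrt{t})^{j})$.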
\end{proof}
\section{Limit $n \rightarrow \infty$}
In this section, we show that the $n \rightarrow \infty$ limit of Theorem \ref{abindiv}(i) in conjunction with the Cauchy identity gives Warnaar's identity (\cite[Theorem 1.1]{W}). Thus, Theorem \ref{abindiv}(i) may be viewed as a finite dimensional analog of that particular generalized Littlewood identity.
\begin{proposition}
(Gaussian result for $O^{+}(2n)$) For any symmetric function $f$,
\begin{align*}
\lim_{n \rightarrow \infty} \frac{\displaystyle \int f(x^{\pm 1}) \tilde \Delta_{K}^{(n)}(x;t;\pm 1, t_{2}, t_{3})dT}{\displaystyle \int \tilde \Delta_{K}^{(n)}(x;t;\pm 1, t_{2}, t_{3})dT} &= I_{G}(f; m; s),
\end{align*}
where $|t|, |t_{2}|, |t_{3}| < 1$ and $m$ and $s$ are defined as follows:
\begin{align*}
m_{2k-1} &= \frac{t_{2}^{2k-1} + t_{3}^{2k-1}}{1-t^{2k-1}} \\
m_{2k} &= \frac{t_{2}^{2k} + t_{3}^{2k} + 1-t^{k}}{1-t^{2k}}\\
s_{k} &= \frac{k}{1-t^{k}}.
\end{align*}
Here $I_{G}(\cdot;m;s)$ is the Gaussian functional on symmetric functions defined by
\begin{align*}
\int_{\mathbb{R}^{\deg(f)}} f \prod_{j=1}^{\deg(f)} (2 \pi s_{j})^{-1/2} e^{-(p_{j}-m_{j})^{2}/2s_{j}}dp_{j}.
\end{align*}
\end{proposition}
\begin{proof}
This is formally a special case of \cite[Theorem 7.17]{R}. That proof relies on Theorem 6 of \cite{DS} and Section 8 of \cite{BR}. The fact that two of the parameters $(t_{0}, \dots, t_{3})$ are $\pm 1$ makes that argument fail; however, replacing the symplectic group with $O^{+}(2n)$ resolves the issue.
\end{proof}
Note that a similar argument would work for the components $O^{-}(2n), O^{+}(2n+1)$ and $O^{-}(2n+1)$.
\begin{proposition}
We have the following:
\begin{multline*}
\lim_{n \rightarrow \infty} \frac{ \displaystyle \int \prod_{j,k} \frac{1-tx_{j}y_{k}^{\pm 1}}{1-x_{j}y_{k}^{\pm 1}} \displaystyle \prod_{k} (1-\alpha y_{k}^{\pm 1})(1-\beta y_{k}^{\pm 1}) \tilde \Delta_{K}^{(n)}(y;t; \pm 1, t_{2}, t_{3})dT}{\displaystyle \int \tilde \Delta_{K}^{(n)}(y;t; \pm 1, t_{2}, t_{3}) dT} \\
= \frac{(t_{2}\alpha, t_{3}\alpha, t_{2}\beta, t_{3}\beta;t)}{(\alpha^{2}t, \beta^{2}t; t^{2}) (\alpha\beta;t)} \prod_{j<k} \frac{1-tx_{j}x_{k}}{1-x_{j}x_{k}} \prod_{j} \frac{(1-tx_{j}^{2})(1-\alpha x_{j})(1-\beta x_{j})}{(1-t_{2}x_{j})(1-t_{3}x_{j})(1-x_{j})(1+x_{j})}.
\end{multline*}
\end{proposition}
\begin{proof}
Put
\begin{align*}
f = \prod_{j,k} \frac{1-tx_{j}y_{k}^{\pm 1}}{1-x_{j}y_{k}^{\pm 1}} \prod_{k} (1-\alpha y_{k}^{\pm 1})(1-\beta y_{k}^{\pm 1}) = \exp\Big( \sum_{1 \leq k} \frac{p_{k}(x)p_{k}(y)(1-t^{k})}{k} - \frac{p_{k}(y)(\alpha^{k} + \beta^{k})}{k} \Big)
\end{align*}
(see \cite{Mac} for more details). Then use the previous result, and complete the square in the Gaussian integral.
\end{proof}
\begin{corollary}\label{cor}
We have the following identity in the limit:
\begin{multline*}
\lim_{n \rightarrow \infty} \frac{\displaystyle \int \prod_{j,k} \frac{1-tx_{j}y_{k}^{\pm 1}}{1-x_{j}y_{k}^{\pm 1}} \prod_{k} (1-\alpha y_{k}^{\pm 1})(1-\beta y_{k}^{\pm 1}) \tilde \Delta_{K}^{(n)}(y;t; \pm 1, \pm \sqrt{t})dT}{\displaystyle \int \tilde \Delta_{K}^{(n)}(y;t; \pm 1, \pm \sqrt{t}) dT} \\
= \frac{1}{(\alpha\beta;t)} \prod_{j<k} \frac{1-tx_{j}x_{k}}{1-x_{j}x_{k}} \prod_{j} \frac{(1-\alpha x_{j})(1-\beta x_{j})}{(1-x_{j})(1+x_{j})}.
\end{multline*}
\end{corollary}
\begin{proof}
Put $t_{2}, t_{3} = \pm \sqrt{t}$ in the previous result. Also note that
\begin{equation*}
(\sqrt{t}\alpha;t)(-\sqrt{t}\alpha;t) = (t\alpha^{2};t^{2})
\end{equation*}
so that
\begin{align*}
\frac{(\sqrt{t}\alpha, -\sqrt{t}\alpha, \sqrt{t}\beta, -\sqrt{t}\beta;t)}{(\alpha^{2}t, \beta^{2}t;t^{2})} &= 1.
\end{align*}
\end{proof}
\begin{theorem}
We have the following formal identity (\cite{W} Theorem 1.1):
\begin{multline*}
\sum_{\lambda} P_{\lambda}(x;t)\Big[ \Big(\prod_{i > 0} H_{m_{2i}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \lambda} \Big] \\
= \prod_{j<k} \frac{1-tx_{j}x_{k}}{1-x_{j}x_{k}} \prod_{j} \frac{(1-\alpha x_{j})(1-\beta x_{j})}{(1-x_{j})(1+x_{j})}.
\end{multline*}
\end{theorem}
\begin{proof}
We prove the result for $|\alpha|, |\beta| < 1$, then use analytic continuation to obtain it for all $\alpha, \beta$. We start with the Cauchy identity for Hall--Littlewood polynomials (\ref{Cauchyid}).
Using this in the LHS of Corollary \ref{cor}, and multiplying both sides by $(\alpha \beta;t)$ gives
\begin{multline*}
(\alpha \beta;t) \sum_{\lambda} P_{\lambda}(x;t) \lim_{n \rightarrow \infty} \Big[ \frac{ b_{\lambda}(t)\int P_{\lambda}(y_{1}^{\pm 1}, \dots, y_{n}^{\pm 1};t) \prod_{k} (1-\alpha y_{k}^{\pm 1})(1-\beta y_{k}^{\pm 1}) \tilde \Delta_{K}^{(n)}(y;t; \pm 1, \pm \sqrt{t}) dT}{ \int \tilde \Delta_{K}^{(n)}(y;t;\pm 1, \pm \sqrt{t})dT}\Big] \\
= \prod_{j<k} \frac{1-tx_{j}x_{k}}{1-x_{j}x_{k}} \prod_{j} \frac{(1-\alpha x_{j})(1-\beta x_{j})}{(1-x_{j})(1+x_{j})}.
\end{multline*}
Now note that the quantity within the limit is the $\alpha, \beta$ version of the $O^{+}(2n)$ integral, see Theorem \ref{abindiv}(i). Using that result, the above equation becomes
\begin{multline*}
(\alpha\beta;t)\sum_{\lambda}P_{\lambda}(x;t)\lim_{n \rightarrow \infty} \frac{b_{\lambda}(t)\phi_{2n}(t)}{v_{\lambda}(t) (1-t)^{2n}} \Big[ \Big(\prod_{i \geq 0} H_{m_{2i}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of odd parts of } \lambda} \\
+ \Big(\prod_{i \geq 0} H_{m_{2i+1}(\lambda)}(\alpha \beta;t) \prod_{i \geq 0} H_{m_{2i}(\lambda)}(\beta/\alpha;t)\Big)(-\alpha)^{\# \text{ of even parts of } \lambda} \Big] \\
= \prod_{j<k} \frac{1-tx_{j}x_{k}}{1-x_{j}x_{k}} \prod_{j} \frac{(1-\alpha x_{j})(1-\beta x_{j})}{(1-x_{j})(1+x_{j})}.
\end{multline*}
But note that
\begin{align*}
\frac{b_{\lambda}(t)}{v_{\lambda}(t)}&= \frac{(1-t)^{2n}}{\phi_{m_{0}(\lambda)}(t)},
\end{align*}
so that
\begin{align*}
\frac{b_{\lambda}(t)\phi_{2n}(t)}{v_{\lambda}(t)(1-t)^{2n}} &= \frac{\phi_{2n}(t)}{\phi_{m_{0}(\lambda)}(t)} = (1-t^{m_{0}(\lambda) +1}) \cdots (1-t^{2n}),
\end{align*}
which goes to $1$ as $m_{0}(\lambda), n \rightarrow \infty$. Moreover, as $m_{0}(\lambda) \rightarrow \infty$, we have
\begin{multline*}
H_{m_{0}(\lambda)}(\alpha\beta;t) = \sum_{j=0}^{m_{0}(\lambda)} {\qbinom{m_{0}(\lambda)}{j}}_{t} (\alpha\beta)^{j} = \sum_{j=0}^{m_{0}(\lambda)} \frac{\phi_{m_{0}(\lambda)}(t)}{\phi_{j}(t)\phi_{m_{0}(\lambda)-j}(t)} (\alpha\beta)^{j}\\
= \sum_{j=0}^{m_{0}(\lambda)} \frac{(1-t^{m_{0}(\lambda)-j+1})(1-t^{m_{0}(\lambda)-j+2}) \cdots (1-t^{m_{0}(\lambda)})}{(1-t)(1-t^{2}) \cdots (1-t^{j})}(\alpha \beta)^{j}
\rightarrow \sum_{j=0}^{\infty} \frac{(\alpha\beta)^{j}}{(t;t)_{j}}.
\end{multline*}
But for $|\alpha\beta|<1$, this sum equals $1/(\alpha\beta;t)$.
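The identity in question is Euler's $q$-exponential summation, valid for $|z| < 1$ and $|t| < 1$ (written here with the paper's convention of omitting the $\infty$ subscript on the Pochhammer symbol):

```latex
\begin{align*}
\sum_{j=0}^{\infty} \frac{z^{j}}{(t;t)_{j}} &= \frac{1}{(z;t)},
\end{align*}
```

applied with $z = \alpha\beta$.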
Finally, we show that the second term in the sum vanishes. We must look at
\begin{align*}
\lim_{m_{0}(\lambda),k \rightarrow \infty }(-\alpha)^{k} H_{m_{0}(\lambda)}(\beta/\alpha;t) ,
\end{align*}
where $k$ is the number of even parts, so in particular $k \geq m_{0}(\lambda)$. We have the following upper bound:
\begin{align*}
\lim_{m_{0}(\lambda) \rightarrow \infty} \alpha^{m_{0}(\lambda)} \sum_{j=0}^{m_{0}(\lambda)} \frac{(\beta/\alpha)^{j}}{(1-t)^{j}};
\end{align*}
the sum is geometric with ratio $\beta/\bigl(\alpha(1-t)\bigr)$. Thus, this is equal to
\begin{equation*}
\lim_{m_{0}(\lambda) \rightarrow \infty} \alpha^{m_{0}(\lambda)} \frac{1-\Big(\frac{\beta}{\alpha(1-t)} \Big)^{m_{0}(\lambda)+1}}{1-\frac{\beta}{\alpha(1-t)} }
= \lim_{m_{0}(\lambda) \rightarrow \infty} \frac{\alpha^{m_{0}(\lambda)}- \frac{\beta^{m_{0}(\lambda)+1}}{\alpha (1-t)^{m_{0}(\lambda)+1}}}{1-\frac{\beta}{\alpha(1-t)} }.
\end{equation*}
But since $\alpha,\beta$ are sufficiently small (take $|\beta| < |1-t|$), this is zero, giving the result.
\end{proof}
\section{Other Vanishing Results}
We introduce notation for dominant weights with negative parts: if $\mu, \nu$ are partitions with $l(\mu) + l(\nu) \leq n$ then $\mu \bar{\nu}$ is the dominant weight vector of $SL_{n} \times GL_{1}$, $\mu \bar{\nu} = (\mu_{1}, \dots, \mu_{l(\mu)},0, \cdots, 0, -\nu_{l(\nu)}, \dots, -\nu_{1})$. Often, we will use $\lambda$ for a dominant weight with negative parts, i.e., $\lambda = \mu \bar{\nu}$.
In this section, we prove four other vanishing identities from \cite{RV} and \cite{R}. In all four cases, the structure of the partition that produces a nonvanishing integral is the same: opposite parts must add to zero ($\lambda_{i} + \lambda_{l+1-i} = 0$ for all $1 \leq i \leq l$, where $l$ is the total number of parts). Note that an equivalent condition is that there exists a partition $\mu$ such that $\lambda = \mu \bar{\mu}$.
We comment that the technique is similar to that of previous sections: we first use symmetries of the integrand to restrict to the term integrals associated to specific permutations. Then, we obtain an inductive evaluation for the term integral, and use this to give a combinatorial formula for the total integral. We mention that the first result corresponds to the symmetric space $(U(m+n), U(m) \times U(n))$ in the Schur case $t=0$.
\begin{theorem}(see \cite[Conjecture 3]{R})
Let $m$ and $n$ be integers with $0 \leq m \leq n$. Then for a dominant weight $\lambda = \mu \bar{\nu}$ of $U(n+m)$,
\begin{align*}
\frac{1}{Z} \int_{T} P_{\mu \bar{\nu}}(x_{1}, \dots, x_{m}, y_{1}, \dots, y_{n};t) \frac{1}{n!m!} \prod_{1 \leq i \neq j \leq m} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}} \prod_{1 \leq i \neq j \leq n} \frac{1-y_{i}y_{j}^{-1}}{1-ty_{i}y_{j}^{-1}} dT &= 0,
\end{align*}
unless $\mu = \nu$ and $l(\mu) \leq m$, in which case the integral is
\begin{align*}
\frac{C^{0}_{\mu}(t^{n}, t^{m};0,t)}{C^{-}_{\mu}(t;0,t)C^{+}_{\mu}(t^{m+n-2}t;0,t)}.
\end{align*}
Here the normalization $Z$ is the integral for $\mu = \nu = 0$.
\end{theorem}
\begin{proof}
Note first that the integral is a sum of $(n+m)!$ terms, one for each element in $S_{n+m}$. But by the symmetry of the integrand, we may restrict to the permutations with $x_{i}$ (resp. $y_{i}$) to the left of $x_{j}$ (resp. $y_{j}$) for $1 \leq i<j \leq m$ (resp. $1 \leq i<j \leq n$). Moreover, by symmetry we can deform the torus to
\begin{align*}
T= \{ |y| = 1 + \epsilon; |x| = 1 \},
\end{align*}
and preserve the integral.
Thus, we have
\begin{multline*}
\int_{T} R_{\mu \bar{\nu}}(x^{(m)}, y^{(n)};t) \frac{1}{n!m!} \prod_{1 \leq i \neq j \leq m} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}} \prod_{1 \leq i \neq j \leq n} \frac{1-y_{i}y_{j}^{-1}}{1-ty_{i}y_{j}^{-1}} dT \\
= \sum_{\substack{w \in S_{n+m}\\ x_{i} \prec_{w} x_{j} \text{ for }1 \leq i<j \leq m \\ y_{i} \prec_{w} y_{j} \text{ for }1 \leq i<j \leq n}} \int_{T} R_{\mu \bar{\nu}, w}(x^{(m)}, y^{(n)};t) \prod_{1 \leq i \neq j \leq m} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}} \prod_{1 \leq i \neq j \leq n} \frac{1-y_{i}y_{j}^{-1}}{1-ty_{i}y_{j}^{-1}} dT.
\end{multline*}
We first compute the normalization.
\begin{claim}\label{conj3norm}
We have
\begin{equation*}
Z = \int_{T} P_{0^{n+m}} (x^{(m)}, y^{(n)};t) \frac{1}{n!m!} \prod_{1 \leq i \neq j \leq m} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}} \prod_{1 \leq i \neq j \leq n} \frac{1-y_{i}y_{j}^{-1}}{1-ty_{i}y_{j}^{-1}} dT= \frac{(1-t)^{m+n}}{\phi_{n}(t)\phi_{m}(t)}.
\end{equation*}
\end{claim}
Since
\begin{align*}
\frac{1}{v_{(0^{n+m})}(t)} &= \frac{(1-t)^{m+n}}{\phi_{m+n}(t)},
\end{align*}
this is equivalent to showing
\begin{align*}
&\int R_{0^{n+m}} (x^{(m)}, y^{(n)};t) \frac{1}{n!m!} \prod_{1 \leq i \neq j \leq m} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}} \prod_{1 \leq i \neq j \leq n} \frac{1-y_{i}y_{j}^{-1}}{1-ty_{i}y_{j}^{-1}} dT = \frac{\phi_{m+n}(t)}{\phi_{n}(t)\phi_{m}(t)}.
\end{align*}
We may use the above discussion to rewrite the LHS as a sum over suitable permutations. Let $w \in S_{n+m}$ be a permutation with the $x$, $y$ variables in order and consider
\begin{equation*}
\int_{T} R_{0^{n+m}, w}(x^{(m)}, y^{(n)};t) \prod_{1 \leq i \neq j \leq m} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}} \prod_{1 \leq i \neq j \leq n} \frac{1-y_{i}y_{j}^{-1}}{1-ty_{i}y_{j}^{-1}} dT.
\end{equation*}
Integrating with respect to $x_{1}, \dots, x_{m}, y_{1}, \dots, y_{n}$ in order shows that this is $t^{\# \text{inversions of } w}$, where inversions are in the sense of the multiset $M = \{ 0^{n}, 1^{m} \}$, and we define $y_{1} \cdots y_{n} x_{1} \cdots x_{m}$ to have $0$ inversions. But now by an identity of MacMahon
\begin{align*}
\sum_{\text{ multiset permutations $w$ of }\{0^{n}, 1^{m} \} } t^{\text{\# inversions of $w$}} &= {\qbinom{m+n}{n}}_{t} = \frac{\phi_{m+n}(t)}{\phi_{n}(t)\phi_{m}(t)},
\end{align*}
which proves the claim. Note that we could also prove the claim by observing that
\begin{equation*}
\int_{T} P_{0^{n+m}} (x^{(m)}, y^{(n)};t) \frac{1}{n!m!} \prod_{1 \leq i \neq j \leq m} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}} \prod_{1 \leq i \neq j \leq n} \frac{1-y_{i}y_{j}^{-1}}{1-ty_{i}y_{j}^{-1}} dT = \frac{1}{n!m!} \int_{T} \tilde \Delta_{S}^{(m)}(x;t) \tilde \Delta_{S}^{(n)}(y;t) dT
\end{equation*}
and using the results of Theorem \ref{orthog}.
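MacMahon's identity can also be checked numerically. The sketch below (plain Python with rational arithmetic; $\phi_n(t)=\prod_{k=1}^{n}(1-t^k)$ as used throughout, and the test values of $m$, $n$, $t$ are arbitrary) enumerates the multiset permutations of $\{0^{n},1^{m}\}$ and compares the inversion generating function with the $t$-binomial coefficient:

```python
from fractions import Fraction
from itertools import permutations

def phi(n, t):
    # phi_n(t) = (1 - t)(1 - t^2) ... (1 - t^n)
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= 1 - t**k
    return out

def inv_poly(n, m, t):
    # sum of t^{#inversions} over distinct multiset permutations of {0^n, 1^m}
    total = Fraction(0)
    for w in set(permutations([0] * n + [1] * m)):
        inv = sum(1 for i in range(n + m) for j in range(i + 1, n + m) if w[i] > w[j])
        total += t**inv
    return total

t = Fraction(1, 3)
for n, m in [(1, 1), (2, 2), (2, 3), (3, 4)]:
    # MacMahon: inversion generating function equals the t-binomial coefficient
    assert inv_poly(n, m, t) == phi(n + m, t) / (phi(n, t) * phi(m, t))
```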
For convenience, from now on we will write
\begin{align*}
\Delta(x^{(m)}; y^{(n)};t) = \prod_{1 \leq i \neq j \leq m} \frac{1-x_{i}x_{j}^{-1}}{1-tx_{i}x_{j}^{-1}} \prod_{1 \leq i \neq j \leq n} \frac{1-y_{i}y_{j}^{-1}}{1-ty_{i}y_{j}^{-1}} = \tilde \Delta_{S}^{(m)}(x;t) \tilde \Delta_{S}^{(n)}(y;t),
\end{align*}
for the density function.
\begin{claim}\label{conj3cl1}
Let $w \in S_{n+m}$ be a permutation of $\{x^{(m)},y^{(n)}\}$ with $x_{i} \prec_{w} x_{j}$ for all $1 \leq i<j \leq m$ and $y_{i} \prec_{w} y_{j}$ for all $1 \leq i<j \leq n$. Suppose
\begin{align*}
\int_{T} R_{\mu\bar{\nu}, w}(x^{(m)},y^{(n)};t) \Delta(x^{(m)};y^{(n)};t) dT \neq 0.
\end{align*}
Then $w$ has $y_{1} \dots y_{l(\mu)}$ in the first $l(\mu)$ positions, and $x_{m-l(\nu)+1} \dots x_{m}$ in the last $l(\nu)$ positions. Consequently $l(\nu) \leq m$ and $l(\mu) \leq n$.
\end{claim}
We prove the claim. We will first show that if, in $w(x,y)^{\mu \bar{\nu}}$, the exponent of $x_{1}$ is a strictly positive part, then the integral is zero. Indeed, one can compute that the integral restricted to the terms in $x_{1}$ is:
\begin{align*}
\int_{T_{1}} x_{1}^{\mu_{i}} \prod_{1 <i \leq m} \frac{x_{i}-x_{1}}{x_{i}-tx_{1}} \prod_{y_{j} \prec_{w} x_{1}} \frac{y_{j}-tx_{1}}{y_{j}-x_{1}} \prod_{x_{1} \prec_{w} y_{j}} \frac{x_{1}-ty_{j}}{x_{1}-y_{j}} dT &= 0,
\end{align*}
since by assumption $\mu_{i} > 0$.
Dually, if in $w(x,y)^{\mu \bar{\nu}}$ the exponent of $y_{n}$ is a strictly negative part, we can show the integral is zero. The integral restricted to the terms in $y_{n}$ is:
\begin{multline*}
\int_{T_{1}} y_{n}^{\bar{\nu}_{i}} \prod_{1 \leq i < n} \frac{y_{n}-y_{i}}{y_{n}-ty_{i}} \prod_{x_{j} \prec_{w} y_{n}} \frac{x_{j}-ty_{n}}{x_{j}-y_{n}} \prod_{y_{n} \prec_{w} x_{j}} \frac{y_{n}-tx_{j}}{y_{n}-x_{j}}dT \\
= \int_{T: |x| > |y|} y_{n}^{-\bar{\nu}_{i}} \prod_{1 \leq i < n} \frac{y_{i}-y_{n}}{y_{i}-ty_{n}} \prod_{x_{j} \prec_{w} y_{n}} \frac{y_{n}-tx_{j}}{y_{n}-x_{j}} \prod_{y_{n} \prec_{w} x_{j}} \frac{x_{j}-ty_{n}}{x_{j}-y_{n}}dT,
\end{multline*}
where in the second step we have inverted all variables which preserves the integral. But now by assumption $\bar{\nu}_{i} < 0$, so integrating with respect to $y_{n}$ gives that the above integral is zero. This gives the desired structure of $w$ to have nonvanishing associated integral.
\begin{claim}\label{conj3cl2}
Let $w \in S_{n+m}$ be a permutation of $\{x^{(m)},y^{(n)}\}$ with $x_{i} \prec_{w} x_{j}$ for all $1 \leq i<j \leq m$ and $y_{i} \prec_{w} y_{j}$ for all $1 \leq i<j \leq n$. Suppose also that $y_{1}, \dots, y_{l(\mu)}$ are in the first $l(\mu)$ positions and $x_{m-l(\nu)+1}, \dots, x_{m}$ are in the last $l(\nu)$ positions.
Let $l(\mu) > 0$. Then we have the following formula for the term integral associated to $w$:
\begin{multline*}
\int_{T} R_{\mu \bar{\nu}, w}(x^{(m)},y^{(n)};t) \Delta(x^{(m)};y^{(n)};t) dT \\
= (1-t) \Big(\displaystyle \sum_{\substack{i: \\ \lambda_{1} + \lambda_{i} = 0}} t^{n+m - i} \Big) \int R_{\widehat{\lambda}, \widehat{w}}(x^{(m-1)}, y^{(n-1)};t) \Delta(x^{(m-1)};y^{(n-1)};t) dT
\end{multline*}
where $\widehat{w}$ is $w$ with $y_{1}, x_{m}$ deleted and $\widehat{\lambda}$ is $\lambda$ with $\lambda_{1}$ and $\lambda_{i}$ deleted (where index $i$ is such that $\lambda_{1} + \lambda_{i} = 0$).
Similarly, if $l(\nu) > 0$, we have
\begin{multline*}
\int_{T} R_{\mu \bar{\nu}, w}(x^{(m)},y^{(n)};t) \Delta(x^{(m)};y^{(n)};t) dT \\
= (1-t) \Big(\displaystyle \sum_{\substack{i: \\ \lambda_{i} + \lambda_{n+m} = 0}} t^{i-1} \Big) \int R_{\widehat{\lambda}, \widehat{w}}(x^{(m-1)}, y^{(n-1)};t) \Delta(x^{(m-1)};y^{(n-1)};t) dT
\end{multline*}
where $\widehat{w}$ is $w$ with $y_{1}, x_{m}$ deleted and $\widehat{\lambda}$ is $\lambda$ with $\lambda_{i}$ and $\lambda_{n+m}$ deleted (where index $i$ is such that $\lambda_{i} + \lambda_{n+m} = 0$).
\end{claim}
For the first statement, integrate with respect to $y_{1}$. We have the following integral restricted to the terms involving $y_{1}$:
\begin{align*}
\int_{T_{1}} y_{1}^{\lambda_{1}} \prod_{1<i \leq n} \frac{y_{i}-y_{1}}{y_{i}-ty_{1}} \prod_{1 \leq j \leq m} \frac{y_{1}-tx_{j}}{y_{1} - x_{j}} dT,
\end{align*}
with $\lambda_{1} = \mu_{1} > 0$. Evaluating gives a sum of $m$ terms, one for each residue $y_{1} = x_{j}$. We consider one of these residues: suppose $x_{j}$ is in position $i$, then the resulting integral in $x_{j}$ is:
\begin{multline*}
(1-t) \int_{T_{1}} \negthickspace\negthickspace x_{j}^{\lambda_{1}+\lambda_{i}} \prod_{1<i \leq n} \frac{y_{i} - x_{j}}{y_{i}-tx_{j}} \prod_{i \neq j} \frac{x_{j}-tx_{i}}{x_{j}-x_{i}} \prod_{\substack{y_{i} \prec_{w} x_{j} \\ y_{i} \neq y_{1}}} \frac{y_{i}-tx_{j}}{y_{i}-x_{j}} \prod_{x_{j} \prec_{w} y_{i}} \frac{x_{j}-ty_{i}}{x_{j}-y_{i}} \prod_{i<j} \frac{x_{j}-x_{i}}{x_{j}-tx_{i}} \prod_{j<i} \frac{x_{i}-x_{j}}{x_{i}-tx_{j}} dT \\
= (1-t) \int_{T_{1}} x_{j}^{\lambda_{1} + \lambda_{i}} \prod_{x_{j} \prec_{w} y_{i}}(-1) \frac{x_{j}-ty_{i}}{y_{i}-tx_{j}} \prod_{j<i} (-1) \frac{x_{j}-tx_{i}}{x_{i}-tx_{j}}dT,
\end{multline*}
where we may assume $\lambda_{i} \leq 0$, by the structure of $w$. Note first that if $\lambda_{1} + \lambda_{i} >0$, the integral is zero. One can similarly argue that the term integral is zero if $\lambda_{1} + \lambda_{i} <0$ (use $\lambda_{n+m} + \lambda_{k} < 0$ for any $1 \leq k <n+m$ and integrate with respect to $x_{m}$, and take the residue at any $x_{m} = y_{i}$). Thus for a nonvanishing residue term we must have $\lambda_{1} = -\lambda_{i}$, and in this case one can verify that the above integral evaluates to
\begin{align*}
(1-t) t^{| \{ z: x_{j} \prec_{w} z \}| } = (1-t) t^{n+m-i},
\end{align*}
as desired.
The second statement is analogous, except integrate with respect to $x_{m}$ instead of $y_{1}$, and invert all variables. This proves the claim.
Thus,
\begin{align*}
\int_{T} R_{\mu \bar{\nu}, w}(x^{(m)}, y^{(n)};t) \Delta(x^{(m)};y^{(n)};t) dT &= 0
\end{align*}
unless $\mu = \nu$ and $l(\mu) \leq m$, which gives the vanishing part of the theorem. For the second part, suppose $\mu = \nu$ and $l(\mu) \leq m$. Then by the above claims,
\begin{multline*}
\int_{T} R_{\mu\bar{\mu}, w}(x^{(m)}, y^{(n)};t) \Delta(x^{(m)};y^{(n)};t) dT \\
= (1-t)^{l(\mu)} v_{\mu+}(t) \int R_{0^{(n-l(\mu)) + (m-l(\nu))}, \delta}(x^{(m-l(\nu))}, y^{(n-l(\mu))};t)\Delta(x^{(m-l(\nu))};y^{(n-l(\mu))};t) dT
\end{multline*}
if $w = y_{1} \dots y_{l(\mu)} \delta x_{m-l(\nu)+1} \dots x_{m}$ for some permutation $\delta$ of $\{y_{l(\mu)+1}, \dots, y_{n}, x_{1}, \dots, x_{m-l(\nu)} \}$, and $0$ otherwise.
By Claim \ref{conj3norm}, we have
\begin{align*}
\int R_{0^{(n-l(\mu)) + (m-l(\mu))}}(x^{(m-l(\mu))}, y^{(n-l(\mu))};t)\frac{\Delta(x^{(m-l(\mu))};y^{(n-l(\mu))};t)}{(m-l(\mu))!(n-l(\mu))!} dT = \qbinom{m+n-2l(\mu)}{n-l(\mu)}_{t}.
\end{align*}
So we have
\begin{align*}
\int_{T} P_{\mu \bar{\mu}}(x^{(m)}, y^{(n)};t) \frac{1}{n!m!} \Delta(x^{(m)};y^{(n)};t) dT &= \frac{1}{v_{\mu\bar{\mu}}(t)} (1-t)^{l(\mu)} v_{\mu+}(t) \qbinom{m+n-2l(\mu)}{n-l(\mu)}_{t}.
\end{align*}
Noting that $v_{\mu\bar{\mu}}(t) = v_{\mu+}(t)^{2} v_{(0^{m+n-2l(\mu)})}(t)$ and multiplying by the reciprocal of the normalization gives
\begin{multline*}
\frac{1}{Z}\int_{T} P_{\mu \bar{\mu}}(x^{(m)}, y^{(n)};t)\frac{1}{n!m!} \Delta(x^{(m)};y^{(n)};t) dT= \frac{\phi_{n}(t)\phi_{m}(t)}{(1-t)^{m+n}}\frac{(1-t)^{l(\mu)}}{v_{\mu+}(t)v_{(0^{m+n-2l(\mu)})}(t)} \qbinom{m+n-2l(\mu)}{n-l(\mu)}_{t} \\
= (1-t^{n-l(\mu)+1}) \cdots (1-t^{n}) (1-t^{m-l(\mu)+1}) \cdots (1-t^{m}) \frac{ \phi_{m+n-2l(\mu)}(t) }{(1-t)^{m+n-l(\mu)}v_{\mu+}(t) v_{(0^{m+n-2l(\mu)})}(t)} \\
=\frac{(1-t^{n-l(\mu)+1}) (1-t^{n-l(\mu)+2}) \cdots (1-t^{n}) (1-t^{m-l(\mu)+1})(1-t^{m-l(\mu)+2}) \cdots (1-t^{m})}{(1-t)^{l(\mu)} v_{\mu+}(t)},
\end{multline*}
where the last equality follows from the definition of $v_{(0^{m+n-2l(\mu)})}$.
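The chain of equalities above is straightforward to confirm numerically. The following sketch cancels the common factor $v_{\mu+}(t)$ from both sides (it appears identically in each) and checks that the first and last expressions agree, using $v_{(0^k)}(t) = \phi_k(t)/(1-t)^k$ and the $t$-binomial from the normalization claim; the test values of $n$, $m$, $l(\mu)$, $t$ are arbitrary:

```python
from fractions import Fraction

def phi(n, t):
    # phi_n(t) = (1 - t)(1 - t^2) ... (1 - t^n)
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= 1 - t**k
    return out

def check(n, m, l, t):
    # compare first and last lines of the displayed chain,
    # with the common factor v_{mu+}(t) removed from both sides
    v0 = phi(m + n - 2*l, t) / (1 - t)**(m + n - 2*l)       # v_{(0^{m+n-2l})}(t)
    qb = phi(m + n - 2*l, t) / (phi(n - l, t) * phi(m - l, t))
    lhs = phi(n, t) * phi(m, t) / (1 - t)**(m + n) * (1 - t)**l / v0 * qb
    rhs = Fraction(1)
    for i in range(1, l + 1):
        rhs *= (1 - t**(n + 1 - i)) * (1 - t**(m + 1 - i))
    return lhs == rhs / (1 - t)**l

for n, m, l in [(3, 3, 2), (4, 2, 1), (5, 4, 3)]:
    assert check(n, m, l, Fraction(1, 5))
```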
One can check from the definition of the $C$-symbols that
\begin{align*}
C_{\mu}^{+}(t^{m+n-2}t;0,t) &= 1 \\
C^{-}_{\mu}(t;0,t) &= v_{\mu+}(t) (1-t)^{l(\mu)} \\
C^{0}_{\mu}(t^{n}, t^{m} ;0,t) &= \prod_{1 \leq i \leq l(\mu)} (1-t^{n+1-i})(1-t^{m+1-i}),
\end{align*}
so that our formula gives
\begin{align*}
\frac{C^{0}_{\mu}(t^{n},t^{m};0,t)}{C^{-}_{\mu}(t;0,t)C^{+}_{\mu}(t^{m+n-2}t;0,t)},
\end{align*}
as desired.
\end{proof}
\begin{theorem}
(see \cite[Conjecture 5]{R}) Let $n \geq 0$ be an integer and $\lambda = \mu \bar{\nu}$ a dominant weight of $U(2n)$. Then
\begin{align*}
\frac{1}{Z} \int_{T} P_{\mu \bar{\nu}}(x_{1}, \dots, x_{n}, y_{1}, \dots, y_{n};t) \frac{1}{(n!)^{2}} \prod_{1 \leq i,j \leq n} \frac{1}{(1-tx_{i}y_{j}^{-1})(1-ty_{i}x_{j}^{-1})} \prod_{1 \leq i \neq j \leq n} (1-x_{i}x_{j}^{-1})(1-y_{i}y_{j}^{-1})dT,
\end{align*}
is equal to 0 unless $\mu = \nu$, in which case the integral is
\begin{align*}
\frac{C^{0}_{\mu}(t^{n}, -t^{n};0,t)}{C^{-}_{\mu}(t;0,t)C^{+}_{\mu}(t^{2n-2}t;0,t)}.
\end{align*}
Here the normalization $Z$ is the integral for $\mu = \nu = 0$.
\end{theorem}
\begin{proof}
Note first that the integral is a sum of $(2n)!$ terms, one for each element in $S_{2n}$. But by the symmetry of the integrand, we may restrict to the permutations with $x_{i}$ (resp. $y_{i}$) to the left of $x_{j}$ (resp. $y_{j}$) for $1 \leq i<j \leq n$. By symmetry, we can deform the torus to
\begin{align*}
T = \{ |y| = 1 + \epsilon; |x| = 1 \}.
\end{align*}
For convenience, we will write $\Delta(x^{(n)};y^{(n)};t)$ for the density
\begin{align*}
\prod_{1 \leq i,j \leq n} \frac{1}{(1-tx_{i}y_{j}^{-1})(1-ty_{i}x_{j}^{-1})} \prod_{1 \leq i \neq j \leq n} (1-x_{i}x_{j}^{-1})(1-y_{i}y_{j}^{-1}).
\end{align*}
We first compute the normalization.
\begin{claim}\label{conj5norm}
We have
\begin{align*}
Z = \int_{T} P_{0^{2n}}(x^{(n)}, y^{(n)};t) \frac{1}{(n!)^{2}} \Delta(x^{(n)};y^{(n)};t) dT = \frac{1}{\phi_{n}(t^{2})}.
\end{align*}
\end{claim}
By the definition of $v_{(0^{2n})}(t)$, this is equivalent to showing
\begin{align*}
&\int_{T} R_{0^{2n}}(x^{(n)}, y^{(n)};t) \frac{1}{(n!)^{2}} \Delta(x^{(n)};y^{(n)};t) dT
= \frac{\phi_{2n}(t)}{(1-t)^{2n}\phi_{n}(t^{2})}.
\end{align*}
We prove this statement by induction on $n$. For $n=1$, we have
\begin{align*}
\int_{T} \frac{x_{1}y_{1}}{(x_{1}-y_{1})(y_{1}-tx_{1})}dT = 0
\end{align*}
and
\begin{align*}
\int_{T} \frac{x_{1}y_{1}}{(y_{1}-x_{1})(x_{1}-ty_{1})} dT = \frac{1}{1-t}
= \frac{\phi_{2}(t)}{(1-t)^{2}\phi_{1}(t^{2})}
\end{align*}
as desired. Now suppose the claim holds for $n-1$; with this assumption we show that it holds for $n$.
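Both $n=1$ integrals can be confirmed by direct numerical integration over the torus $\{|x|=1,\,|y|=1+\epsilon\}$. The sketch below approximates the torus averages by the trapezoid rule on the two circles; the sample values of $t$, $\epsilon$, and the grid size $M$ are arbitrary:

```python
import numpy as np

t, eps, M = 0.3, 0.1, 800
th = 2 * np.pi * np.arange(M) / M
x = np.exp(1j * th)[:, None]                  # |x| = 1
y = (1 + eps) * np.exp(1j * th)[None, :]      # |y| = 1 + eps
# torus averages approximating the two n = 1 integrals
I1 = np.mean(x * y / ((x - y) * (y - t * x)))
I2 = np.mean(x * y / ((y - x) * (x - t * y)))
assert abs(I1) < 1e-10                        # first integral vanishes
assert abs(I2 - 1 / (1 - t)) < 1e-10          # second equals 1/(1-t)
```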
Consider permutations $w$ with $x_{1}$ first. We claim $\int_{T} R_{\mu \bar{\nu}, w}(x^{(n)},y^{(n)};t) \Delta(x^{(n)};y^{(n)};t) dT =0$. Indeed, we have the following integral restricted to the terms in $x_{1}$:
\begin{multline*}
\int_{T_{1}} \prod_{1 \leq i \leq n} \frac{x_{1} - ty_{i}}{x_{1}-y_{i}} \prod_{1<i \leq n} \frac{x_{1}-tx_{i}}{x_{1}-x_{i}} \prod_{1 \leq j \leq n} \frac{x_{1}y_{j}}{(y_{j}-tx_{1})(x_{1}-ty_{j})} \prod_{1<j \leq n} \frac{(x_{j}-x_{1})(x_{1}-x_{j})}{x_{1}x_{j}}dT \\
= \int_{T_{1}} \prod_{1 \leq j \leq n} \frac{x_{1}y_{j}}{(x_{1}-y_{j})(y_{j}-tx_{1})} \prod_{1<j \leq n} \frac{(x_{1}-tx_{j})(x_{j}-x_{1})}{x_{1}x_{j}} dT \\
= \int_{T_{1}} x_{1} \prod_{1 \leq j \leq n} \frac{1}{(x_{1}-y_{j})(y_{j}-tx_{1})} \prod_{1<j \leq n} (x_{1}-tx_{j})(x_{j}-x_{1}) dT
= 0.
\end{multline*}
Thus, we may suppose $y_{1}$ occurs first in $w$. A similar calculation for the integral restricted to the terms in $y_{1}$ yields:
\begin{align*}
\int_{T_{1}} y_{1} \prod_{1<j \leq n} (y_{1} - ty_{j})(y_{j}-y_{1}) \prod_{1 \leq i \leq n} \frac{1}{(y_{1} - x_{i})(x_{i}-ty_{1})}dT.
\end{align*}
We may evaluate this as the sum of $n$ residues, one for each $y_{1} = x_{i}$ for $1 \leq i \leq n$. We compute the residue at $y_{1} = x_{i}$, and look at the resulting integral in $x_{i}$:
\begin{multline*}
\frac{1}{1-t}\int_{T_{1}} \prod_{1<j \leq n} (x_{i}-ty_{j})(y_{j}-x_{i}) \prod_{j \neq i} \frac{1}{(x_{i}-x_{j})(x_{j}-tx_{i})}
\prod_{i' < i} (x_{i'} - tx_{i})(x_{i}-x_{i'}) \prod_{i<i''} (x_{i}-tx_{i''})(x_{i''}-x_{i}) \\
\cdot \prod_{x_{i} \prec_{w} y_{j}} \frac{1}{(x_{i}-y_{j})(y_{j}-tx_{i})} \prod_{\substack{y_{j} \prec_{w} x_{i} \\ y_j \neq y_{1}}} \frac{1}{(y_{j}-x_{i})(x_{i}-ty_{j})} dT
= \frac{1}{1-t}\int_{T_{1}} \prod_{i< i''} \frac{(tx_{i''}-x_i)}{(x_{i''}-tx_{i})} \prod_{x_{i} \prec_{w} y_{j}} \frac{(ty_{j}-x_i)}{(y_{j}-tx_{i})} dT.
\end{multline*}
But, letting $2 \leq k \leq 2n$ be the position of $x_{i}$ in $w$, this evaluates to
\begin{align*}
\frac{1}{1-t} \prod_{i< i''} t \prod_{x_{i} \prec_{w} y_{j}} t = \frac{t^{2n-k}}{1-t}.
\end{align*}
Thus, varying over all such permutations with $y_{1}$ first gives a factor of
\begin{align*}
\frac{1}{1-t} (t^{2n-2} + t^{2n-3} + \cdots +t+1) &= \frac{(1-t^{2n-1})}{(1-t)^{2}}.
\end{align*}
Note that permutations of $\{y_{1}, \dots, y_{n}, x_{1}, \dots, x_{n}\}$ with $y_{1}$ in position $1$ and $x_{i}$ in position $k$ are in bijection with permutations of $\{y_{2}, \dots, y_{n}, x_{1}, \dots, \widehat{x_{i}}, \dots, x_{n}\}$. So using the induction hypothesis, the total integral evaluates to
\begin{align*}
\frac{(1-t^{2n-1})}{(1-t)^{2}} \frac{\phi_{2(n-1)}(t)}{(1-t)^{2(n-1)}\phi_{n-1}(t^{2})} &= \frac{\phi_{2n}(t)}{(1-t)^{2n} \phi_{n}(t^{2})},
\end{align*}
as desired.
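The resulting recursion, and hence the closed form for the normalization, is easy to verify numerically; a sketch with $\phi_n(t)=\prod_{k=1}^n(1-t^k)$ and an arbitrary rational value of $t$:

```python
from fractions import Fraction

def phi(n, t):
    # phi_n(t) = (1 - t)(1 - t^2) ... (1 - t^n)
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= 1 - t**k
    return out

def f(n, t):
    # the claimed normalization: phi_{2n}(t) / ((1-t)^{2n} phi_n(t^2))
    return phi(2*n, t) / ((1 - t)**(2*n) * phi(n, t*t))

t = Fraction(2, 7)
for n in range(1, 6):
    # induction step: f(n) = (1 - t^{2n-1})/(1-t)^2 * f(n-1)
    assert f(n, t) == (1 - t**(2*n - 1)) / (1 - t)**2 * f(n - 1, t)
```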
Note that the density is not of a standard form (i.e., as a product of Koornwinder or Selberg densities), so we cannot appeal to an earlier result (compare with Claim \ref{conj3norm}).
\begin{claim}
Let $w \in S_{2n}$ be a permutation of $\{x^{(n)},y^{(n)}\}$ with $x_{i} \prec_{w} x_{j}$ for all $1 \leq i<j \leq n$ and $y_{i} \prec_{w} y_{j}$ for all $1 \leq i<j \leq n$. Suppose
\begin{align*}
\int_{T} R_{\mu \bar{\nu},w}(x^{(n)},y^{(n)};t) \Delta(x^{(n)};y^{(n)};t) dT \neq 0.
\end{align*}
Then $w$ has $y_{1} \dots y_{l(\mu)}$ in the first $l(\mu)$ coordinates, and $x_{n-l(\nu)+1} \dots x_{n}$ in the last $l(\nu)$ coordinates. Consequently $l(\nu) \leq n, l(\mu) \leq n$.
\end{claim}
The proof is analogous to that of Claim \ref{conj3cl1} of the previous theorem.
\begin{claim}
Let $w \in S_{2n}$ be a permutation of $\{x^{(n)},y^{(n)}\}$ with $x_{i} \prec_{w} x_{j}$ for all $1 \leq i<j \leq n$ and $y_{i} \prec_{w} y_{j}$ for all $1 \leq i<j \leq n$. Suppose also that $y_{1}, \dots, y_{l(\mu)}$ are in the first $l(\mu)$ coordinates and $x_{n-l(\nu)+1}, \dots, x_{n}$ are in the last $l(\nu)$ coordinates.
Let $l(\mu) > 0$. Then we have the following formula for the term integral associated to $w$:
\begin{multline*}
\int_{T} R_{\mu \bar{\nu}, w}(x^{(n)},y^{(n)};t) \Delta(x^{(n)};y^{(n)};t) dT \\
= \frac{1}{1-t} \Big(\displaystyle \sum_{\substack{i : \\ \lambda_{1} + \lambda_{i} = 0}} t^{2n-i} \Big) \int R_{\widehat{\lambda}, \widehat{w}}(x^{(n-1)}, y^{(n-1)};t) \Delta(x^{(n-1)};y^{(n-1)};t) dT
\end{multline*}
where $\widehat{w}$ is $w$ with $y_{1}, x_{n}$ deleted and $\widehat{\lambda}$ is $\lambda$ with $\lambda_{1}$ and $\lambda_{i}$ deleted (where the index $i$ is such that $\lambda_{1} + \lambda_{i} = 0$).
Similarly, if $l(\nu) > 0$, we have
\begin{multline*}
\int_{T} R_{\mu \bar{\nu}, w}(x^{(n)},y^{(n)};t) \Delta(x^{(n)};y^{(n)};t) dT \\
= \frac{1}{1-t} \Big(\displaystyle \sum_{\substack{i :\\ \lambda_{i} + \lambda_{2n} = 0}} t^{i-1} \Big) \int R_{\widehat{\lambda}, \widehat{w}}(x^{(n-1)}, y^{(n-1)};t) \Delta(x^{(n-1)};y^{(n-1)};t) dT
\end{multline*}
where $\widehat{w}$ is $w$ with $y_{1}, x_{n}$ deleted and $\widehat{\lambda}$ is $\lambda$ with $\lambda_{i}$ and $\lambda_{2n}$ deleted (where the index $i$ is such that $\lambda_{i} + \lambda_{2n} = 0$).
\end{claim}
The proof is analogous to the proof of Claim \ref{conj3cl2} of the previous theorem.
Thus,
\begin{align*}
\int_{T} R_{\mu \bar{\nu}, w}(x^{(n)}, y^{(n)};t) \Delta(x^{(n)};y^{(n)};t) dT &= 0
\end{align*}
unless $\mu = \nu$. Moreover, if $\mu = \nu$, the integral is
\begin{align*}
\frac{1}{(1-t)^{l(\mu)}} v_{\mu+}(t) \int R_{0^{2n-2l(\mu)}, \delta}(x^{(n-l(\mu))}, y^{(n-l(\mu))};t)\Delta(x^{(n-l(\mu))}; y^{(n-l(\mu))};t) dT
\end{align*}
if $w = y_{1} \dots y_{l(\mu)} \delta x_{n-l(\nu)+1} \dots x_{n}$ for some permutation $\delta$ of $\{ y_{l(\mu)+1}, \dots, y_{n}, x_{1}, \dots, x_{n-l(\nu)} \}$ and $0$ otherwise.
By Claim \ref{conj5norm}, we have
\begin{equation*}
\int_{T} R_{0^{2n-2l(\mu)}}(x^{(n-l(\mu))},y^{(n-l(\mu))};t) \frac{\Delta(x^{(n-l(\mu))};y^{(n-l(\mu))};t)}{\Big((n-l(\mu))!\Big)^{2}} dT = \frac{\phi_{2n-2l(\mu)}(t)}{(1-t)^{2n-2l(\mu)}\phi_{n-l(\mu)}(t^{2})}.
\end{equation*}
Thus,
\begin{multline*}
\frac{1}{Z}\int_{T} P_{\mu \bar{\mu}}(x^{(n)},y^{(n)};t)\frac{1}{(n!)^{2}} \Delta(x^{(n)};y^{(n)};t) dT = \frac{\phi_{n}(t^{2}) }{v_{\mu+}(t)^{2}v_{(0^{2n-2l(\mu)})}(t)}\frac{v_{\mu+}(t)}{(1-t)^{l(\mu)}} \frac{\phi_{2n-2l(\mu)}(t)}{(1-t)^{2n-2l(\mu)}\phi_{n-l(\mu)}(t^{2})} \\
= \frac{(1-(t^{2})^{n-l(\mu)+1}) \cdots (1-(t^{2})^{n})}{v_{\mu+}(t)(1-t)^{2n-l(\mu)}}\frac{\phi_{2n-2l(\mu)}(t)}{v_{(0^{2n-2l(\mu)})}(t)} \\
=\frac{(1-(t^{2})^{n-l(\mu)+1}) \cdots (1-(t^{2})^{n})}{v_{\mu+}(t) (1-t)^{l(\mu)}},
\end{multline*}
where the last equality follows from the definition of $v_{(0^{2n-2l(\mu)})}(t)$.
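As before, this chain of equalities can be confirmed numerically. The sketch below cancels the common factor $v_{\mu+}(t)$ (one copy from each side; the remaining copies cancel against the $v_{\mu+}(t)^{2}$ in the first line) and checks the first and last expressions agree, using $v_{(0^k)}(t) = \phi_k(t)/(1-t)^k$; the test values of $n$, $l(\mu)$, $t$ are arbitrary:

```python
from fractions import Fraction

def phi(n, t):
    # phi_n(t) = (1 - t)(1 - t^2) ... (1 - t^n)
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= 1 - t**k
    return out

def check(n, l, t):
    # both sides with the common factor v_{mu+}(t) omitted
    v0 = phi(2*n - 2*l, t) / (1 - t)**(2*n - 2*l)           # v_{(0^{2n-2l})}(t)
    lhs = phi(n, t*t) / v0 / (1 - t)**l \
        * phi(2*n - 2*l, t) / ((1 - t)**(2*n - 2*l) * phi(n - l, t*t))
    rhs = Fraction(1)
    for k in range(n - l + 1, n + 1):
        rhs *= 1 - (t*t)**k
    return lhs == rhs / (1 - t)**l

for n, l in [(3, 1), (4, 2), (5, 5)]:
    assert check(n, l, Fraction(1, 4))
```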
Finally, one can check from the definition of the $C$-symbols that
\begin{align*}
C^{+}_{\mu}(t^{2n-2}t;0,t) &= 1 \\
C^{0}_{\mu}(t^{n}, -t^{n};0,t) &= \prod_{1 \leq i \leq l(\mu)} (1-t^{2(n+1-i)}) \\
C^{-}_{\mu}(t;0,t) &= (1-t)^{l(\mu)}v_{\mu+}(t),
\end{align*}
so that our formula gives
\begin{align*}
\frac{C^{0}_{\mu}(t^{n},-t^{n};0,t)}{C^{-}_{\mu}(t;0,t) C^{+}_{\mu}(t^{2n-2}t;0,t)},
\end{align*}
as desired.
\end{proof}
\begin{theorem}
(see \cite[Theorem 4.4]{RV}) Let $\lambda$ be a weight of the double cover of $GL_{2n}$, i.e., a half-integer vector such that $\lambda_{i} - \lambda_{j} \in \mathbb{Z}$ for all $i,j$. Then
\begin{align*}
\frac{1}{Z}\int P_{\lambda}^{(2n)}( \cdots t^{\pm 1/2}z_{i} \cdots ;t) \frac{1}{n!} \prod_{1 \leq i<j \leq n} \frac{(1-z_{i}/z_{j})(1-z_{j}/z_{i})}{(1-t^{2}z_{i}/z_{j})(1-t^{2}z_{j}/z_{i})} dT &=0,
\end{align*}
unless $\lambda = \mu \bar{\mu}$. In this case, the nonzero value is
\begin{align*}
\frac{\phi_{n}(t^{2})}{(1-t)^{n}v_{\mu}(t) (1+t)(1+t^{2}) \cdots (1+t^{n-l(\mu)})}&= \frac{C^{0}_{\mu}(t^{n}, -t^{n};0,t)}{C^{-}_{\mu}(t;0,t) C^{+}_{\mu}(t^{2n-2}t;0,t)}.
\end{align*}
\end{theorem}
\begin{proof}
As usual, note that $P_{\lambda}^{(2n)}( \cdots t^{\pm 1/2}z_{i} \cdots ;t)$ is a sum of $(2n)!$ terms, one for each permutation in $S_{2n}$. We first note that many of these have vanishing integrals:
\begin{claim}
Let $w \in S_{2n}$ be a permutation of $(t^{\pm 1/2}z_{1}, \dots, t^{\pm 1/2}z_{n})$, such that for some $1 \leq i \leq n$, $\sqrt{t}z_{i}$ appears to the left of $\frac{z_{i}}{\sqrt{t}}$ in $w$. Then
\begin{align*}
\int R_{\lambda, w}^{(2n)}(\cdots t^{\pm 1/2}z_{i} \cdots ;t) \tilde \Delta_{S}^{(n)}(z;t^{2}) dT = 0.
\end{align*}
\end{claim}
To prove the claim note that $R_{\lambda, w}^{(2n)}( \cdots t^{\pm 1/2}z_{i} \cdots;t ) = 0$ in this case. Indeed, we have the term
\begin{align*}
\frac{\sqrt{t}z_{i} - tz_{i}/\sqrt{t}}{\sqrt{t}z_{i} - z_{i}/\sqrt{t}} = \frac{tz_{i} - tz_{i}}{z_{i}(t-1)} =0
\end{align*}
appearing in the product defining the Hall--Littlewood polynomial.
Thus, we may restrict our attention to those permutations $w$ with $z_{i}/\sqrt{t}$ to the left of $\sqrt{t}z_{i}$ for all $1 \leq i \leq n$. Moreover, we may order the variables so that $z_{i}/\sqrt{t}$ appears to the left of $z_{j}/\sqrt{t}$ for all $1 \leq i<j \leq n$.
We compute the normalization first.
\begin{claim}\label{8.3.2norm}
We have
\begin{align*}
Z = \int_{T} P_{0^{2n}}^{(2n)}(\cdots t^{\pm 1/2}z_{i} \cdots ;t) \frac{1}{n!}\tilde\Delta_{S}^{(n)}(z;t^{2}) dT = \frac{1}{v_{(0^{n})}(t^{2})} = \frac{(1-t^{2})^{n}}{(1-t^{2})(1-t^{4}) \cdots (1-t^{2n})}.
\end{align*}
\end{claim}
The proof follows by noting that $P_{0^{2n}}^{(2n)}( \cdots t^{\pm 1/2}z_{i} \cdots ;t) = 1$ and applying Theorem \ref{orthog}.
\begin{claim}
Let $w \in S_{2n}$ be a permutation with $z_{i}/\sqrt{t}$ to the left of $\sqrt{t}z_{i}$ for all $1 \leq i \leq n$ and $z_{i}/\sqrt{t}$ to the left of $z_{j}/\sqrt{t}$ for all $1 \leq i<j \leq n$, and $\sqrt{t}z_{1}$ in position $k$ for some $2 \leq k \leq 2n$. Then
\begin{multline*}
\int_{T} R_{\lambda, w}^{(2n)}(\cdots t^{\pm 1/2}z_{i} \cdots ;t) \tilde\Delta_{S}^{(n)}(z;t^{2}) dT \\
=
\chi_{\lambda_{1} + \lambda_{k}=0} (1+t) t^{2n-k} \int_{T} R_{\widehat{\lambda}, \widehat{w}}^{(2(n-1))}( \cdots t^{\pm 1/2}z_{i} \cdots ;t) \tilde\Delta_{S}^{(n-1)}(z;t^{2}) dT
\end{multline*}
where $\widehat{w}$ is the permutation $w$ with $z_{1}/\sqrt{t}$ and $\sqrt{t}z_{1}$ deleted, and $\widehat{\lambda}$ is the partition $\lambda$ with parts $\lambda_{1}$ and $\lambda_{k}$ deleted.
\end{claim}
To prove the claim, integrate with respect to $z_{1}$. Note that if $\lambda_{1} + \lambda_{k} > 0$, the integral vanishes. If $\lambda_{1} + \lambda_{k} < 0$, note that $\lambda_{2n} + \lambda_{j} < 0$ for all $1 \leq j \leq 2n-1$. Integrate with respect to the last variable in $w$, and invert all variables to find the integral vanishes, as desired.
The above claim implies that the integral $\int_{T} R_{\lambda, w}^{(2n)}( \cdots t^{\pm 1/2}z_{i} \cdots ;t) \tilde\Delta_{S}^{(n)}(z;t^{2}) dT$ vanishes unless $\lambda = \mu \bar{\mu}$ for some $\mu$. Moreover, if $\lambda = \mu \bar{\mu}$, the term integral vanishes unless
\begin{align*}
w( \cdots t^{\pm 1/2}z_{i} \cdots )^{\lambda}
\end{align*}
is a pure power of $t$ (i.e., independent of the $z_{i}$).
Thus, in the case $\lambda = \mu \bar{\mu}$, a computation gives that the total integral
\begin{multline*}
\int_{T} R_{\lambda}^{(2n)}( \cdots t^{\pm 1/2}z_{i} \cdots ;t)\frac{1}{n!} \tilde\Delta_{S}^{(n)}(z;t^{2}) dT \\
= (1+t)^{l(\mu)}v_{\mu+}(t) \int_{T} R_{0^{2(n-l(\mu))}}^{(2(n-l(\mu)))}(\cdots t^{\pm 1/2}z_{i} \cdots;t) \frac{1}{(n-l(\mu))!} \tilde\Delta_{S}^{(n-l(\mu))}(z;t^{2}) dT \\
= (1+t)^{l(\mu)}v_{\mu+}(t) \frac{(1-t^{2})^{n-l(\mu)}}{(1-t^{2})(1-t^{4}) \cdots (1-t^{2(n-l(\mu))})}v_{(0^{2(n-l(\mu))})}(t).
\end{multline*}
Multiplying this by $\frac{1}{Z v_{\lambda}(t)} = \frac{1}{Z v_{\mu+}(t)^{2}v_{(0^{2(n-l(\mu))})}(t)}$ and simplifying gives the result.
\end{proof}
\begin{theorem}
(see \cite[Corollary 4.7(ii)]{RV}) Let $\lambda$ be a partition with $l(\lambda) \leq n$. Then the integral
\begin{align*}
\int P_{\lambda}(x_{1}, \dots, x_{n};t^{2}) P_{m^{n}}(x_{1}^{-1}, \dots, x_{n}^{-1};t) \frac{1}{n!} \tilde \Delta_{S}^{(n)}(x;t) dT
\end{align*}
vanishes unless $\lambda = (2m)^{n} - \lambda$.
\end{theorem}
Note that the above integral gives the coefficient of $P_{m^{n}}(x;t)$ in the expansion of $P_{\lambda}(x;t^{2})$ as Hall--Littlewood polynomials with parameter $t$.
\begin{proof}
Since $P_{m^{n}}(x_{1}^{-1}, \dots, x_{n}^{-1};t) = (x_{1}^{-1} \cdots x_{n}^{-1})^{m}$, an equivalent statement is the following:
Let $\lambda$ be a weight of $GL_{n}$ with possibly negative parts. Then the integral
\begin{align*}
\frac{1}{Z} \int P_{\lambda}(x_{1}, \dots, x_{n};t^{2}) \frac{1}{n!} \tilde \Delta_{S}^{(n)}(x;t) dT
\end{align*}
vanishes unless $\lambda = \mu \bar{\mu}$, and in this case it is
\begin{align*}
\frac{(1-t^{n-2l(\mu)+1}) \cdots (1-t^{n}) t^{|\mu|}}{(1-t^{2})^{l(\mu)} v_{\mu+}(t^{2})}.
\end{align*}
We first compute the normalization $Z = \frac{1}{n!}\int P_{0^{n}}^{(n)}(x;t^{2}) \tilde \Delta_{S}^{(n)}(x;t) dT$. Note that $P_{0^{n}}(x;t^{2}) = 1$, so we have
\begin{multline*}
Z = \frac{1}{n!}\int \tilde \Delta_{S}^{(n)}(x;t) dT
= \frac{1}{n!} \int P_{0^{n}}^{(n)}(x;t) P_{0^{n}}^{(n)}(x^{-1};t) \tilde \Delta_{S}^{(n)}(x;t) dT
= \frac{1}{n!} \frac{n!}{v_{(0^{n})}(t)}
= \frac{(1-t)^{n}}{(1-t)(1-t^{2}) \cdots (1-t^{n})}
\end{multline*}
using Theorem \ref{orthog}.
Now we look at $\frac{1}{n!}\int R_{\lambda}(x_{1}, \dots, x_{n}; t^{2}) \tilde \Delta_{S}^{(n)}(x;t) dT$, which is a sum of $n!$ integrals---one for each $w \in S_{n}$. By symmetry we have
\begin{align*}
\frac{1}{n!}\int R_{\lambda}^{(n)}(x; t^{2}) \tilde \Delta_{S}^{(n)}(x;t) dT &= \int R_{\lambda, \text{id}}^{(n)}(x;t^{2}) \tilde \Delta_{S}^{(n)}(x;t) dT,
\end{align*}
so we may restrict to the case $w = \text{id}$. We assume $\lambda_{1} >0$: note that if $\lambda_{1} \leq 0$ we have $\lambda_{n} < 0$ (we are assuming $\lambda$ is not the zero partition) and we can invert all variables and make a change of variables to reduce to the case $\lambda_{1} > 0$. Then the integral restricted to terms in $x_{1}$ is
\begin{multline*}
\int_{T_{1}} x_{1}^{\lambda_{1}} \prod_{j > 1} \frac{x_{1} - t^{2}x_{j}}{x_{1} - x_{j}} \prod_{j > 1} \frac{(x_{1} - x_{j})(x_{j}-x_{1})}{(x_{1}-tx_{j})(x_{j}-tx_{1})} \frac{dx_{1}}{2\pi \sqrt{-1}x_{1}}
= \int_{T_{1}} x_{1}^{\lambda_{1}} \prod_{j>1} \frac{(x_{1} - t^{2}x_{j})(x_{j}-x_{1})}{(x_{1}-tx_{j})(x_{j}-tx_{1})}\frac{dx_{1}}{2\pi \sqrt{-1}x_{1}} \\
= \sum_{j > 1} \frac{t^{\lambda_{1}}(1-t)^{2}}{(1-t^{2})} x_{j}^{\lambda_{1}} \prod_{i \neq 1,j} \frac{(tx_{j} - t^{2}x_{i})(x_{i}-tx_{j})}{(tx_{j}-tx_{i})(x_{i}-t^{2}x_{j})}
= \sum_{j > 1} \frac{t^{\lambda_{1}}(1-t)^{2}}{(1-t^{2})}x_{j}^{\lambda_{1}} \prod_{i \neq 1,j} \frac{(x_{j} - tx_{i})(x_{i}-tx_{j})}{(x_{j}-x_{i})(x_{i}-t^{2}x_{j})},
\end{multline*}
where the second line follows by evaluating the residues at $x_{1} = tx_{j}$ for $j>1$.
For each $j > 1$, we can combine this with the terms in $x_{j}$ from the original integrand. The integral restricted to terms in $x_{j}$ is
\begin{multline*}
\frac{t^{\lambda_{1}}(1-t)^{2}}{(1-t^{2})}\int_{T_{1}} x_{j}^{\lambda_{1}} \prod_{i \neq 1,j} \frac{(x_{j} - tx_{i})(x_{i}-tx_{j})}{(x_{j}-x_{i})(x_{i}-t^{2}x_{j})} x_{j}^{\lambda_{j}}\prod_{1 \neq i<j} \frac{x_{i} - t^{2}x_{j}}{x_{i}-x_{j}} \prod_{j<i} \frac{x_{j}-t^{2}x_{i}}{x_{j}-x_{i}}\\
\cdot \prod_{i \neq 1,j} \frac{(x_{i}-x_{j})(x_{j}-x_{i})}{(x_{i}-tx_{j})(x_{j}-tx_{i})} \frac{dx_{j}}{2\pi \sqrt{-1}x_{j}}
= \frac{t^{\lambda_{1}}(1-t)^{2}}{(1-t^{2})} \int x_{j}^{\lambda_{1} + \lambda_{j}} (-1)^{n-j} \prod_{j < i} \frac{x_{j}-t^{2}x_{i}}{x_{i} - t^{2}x_{j}} \frac{dx_{j}}{2 \pi \sqrt{-1} x_{j}}.
\end{multline*}
Now, this is $0$ if $\lambda_{1} + \lambda_{j} > 0$ and
\begin{align*}
\frac{t^{\lambda_{1}}(1-t)(t^{2})^{n-j}}{(1+t)}
\end{align*}
if $\lambda_{1} + \lambda_{j} = 0$. Finally, if $\lambda_{1} + \lambda_{j} < 0$ note that $\lambda_{n} + \lambda_{i} < 0$ for all $1 \leq i < n$. We can invert all variables and make a change of variables to arrive at the case $\lambda_{1} + \lambda_{j} > 0$, so the integral is zero by the above argument.
Iterating this argument shows that the partition $\lambda$ must satisfy $\lambda_{i} + \lambda_{n+1-i} = 0$ for the integral to be nonvanishing. Thus $\lambda = \mu \bar{\mu}$ for some $\mu$. In this case, we compute from the above remarks:
\begin{multline*}
\frac{1}{Z} \int P_{\lambda}^{(n)}(x;t^{2}) \frac{1}{n!} \tilde \Delta_{S}^{(n)}(x;t) dT
= \frac{1}{Z} \frac{1}{v_{\lambda}(t^{2})} \int R_{\lambda, \text{id}}^{(n)}(x;t^{2}) \tilde \Delta_{S}^{(n)}(x;t) dT \\
= \frac{\phi_{n}(t)}{(1-t)^{n}} \frac{t^{|\mu|}}{v_{\mu+}(t^{2})^{2} v_{(0^{n-2l(\mu)})}(t^{2})} \frac{(1-t)^{l(\mu)}}{(1+t)^{l(\mu)}} v_{\mu+}(t^{2}) \int R_{0^{n-2l(\mu)}}(x;t^{2}) \frac{1}{(n-2l(\mu))!}\tilde \Delta_{S}^{(n-2l(\mu))}(x;t)dT.
\end{multline*}
Using the computation of $Z$, this is equal to
\begin{multline*}
\frac{\phi_{n}(t)}{(1-t)^{n}} \frac{t^{|\mu|}}{v_{\mu+}(t^{2})} \frac{(1-t)^{l(\mu)}}{(1+t)^{l(\mu)}} \int P_{0^{n-2l(\mu)}}^{(n-2l(\mu))}(x;t^{2}) \frac{1}{(n-2l(\mu))!}\tilde \Delta_{S}^{(n-2l(\mu))}(x;t) dT\\
= \frac{\phi_{n}(t)}{(1-t)^{n}} \frac{t^{|\mu|} }{v_{\mu+}(t^{2})} \frac{(1-t)^{l(\mu)}}{(1+t)^{l(\mu)}} \frac{(1-t)^{n-2l(\mu)}}{\phi_{n-2l(\mu)}(t)}
= \frac{\phi_{n}(t)}{\phi_{n-2l(\mu)}(t)} \frac{t^{|\mu|}}{(1-t^{2})^{l(\mu)}v_{\mu+}(t^{2})} \\
= \frac{(1-t^{n-2l(\mu)+1}) \cdots (1-t^{n}) t^{|\mu|}}{(1-t^{2})^{l(\mu)}v_{\mu+}(t^{2})},
\end{multline*}
as desired.
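This final chain of equalities can likewise be confirmed numerically. The sketch below divides out the common factor $t^{|\mu|}/v_{\mu+}(t^{2})$ from all three expressions and checks the rest with rational arithmetic; the test values of $n$, $l(\mu)$, $t$ are arbitrary:

```python
from fractions import Fraction

def phi(n, t):
    # phi_n(t) = (1 - t)(1 - t^2) ... (1 - t^n)
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= 1 - t**k
    return out

def check(n, l, t):
    # all three expressions with the common factor t^{|mu|}/v_{mu+}(t^2) removed
    lhs = phi(n, t) / (1 - t)**n * ((1 - t)**l / (1 + t)**l) \
        * (1 - t)**(n - 2*l) / phi(n - 2*l, t)
    mid = phi(n, t) / (phi(n - 2*l, t) * (1 - t*t)**l)
    rhs = Fraction(1)
    for k in range(n - 2*l + 1, n + 1):
        rhs *= 1 - t**k
    rhs /= (1 - t*t)**l
    return lhs == mid == rhs

for n, l in [(4, 1), (5, 2), (6, 3)]:
    assert check(n, l, Fraction(1, 6))
```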
\end{proof}
When a seismic wave propagates through a water-saturated porous medium, it produces a
relative motion between the fluid phase and the rock matrix \citep{biot_low}.
This flow transports an electrical excess charge, contained in the electrical double layer
located along grain surfaces, thus producing an electrical current source.
This is the physical principle of
the seismolectric phenomenon, which \cite{pride94} formalized by coupling
Biot's (\citeyear{biot_low}) and Maxwell's equations.
For seismic waves propagating through water-saturated porous media,
the theory predicts two kinds of seismoelectric conversions:
(1) the coseismic field, and (2) the interface response.
The coseismic field is a consequence of the wavelength-scale flow accompanying a compressional
wave, which generates a current source even in homogeneous media.
The resulting electric field travels with the wave
and has a very limited extent outside of the support of the wave. Conversely,
when a seismic wave encounters a contrast in mechanical or electrical
properties, there is a corresponding variation in the current source
distribution, thus generating electrical potential differences that
can be measured outside of the wave support. The associated electric fields are highly sensitive
to the fluid pressure gradients in the vicinity of the interface. Accurate
modeling of seismic wave conversions at interfaces and, in particular,
of the associated Biot slow waves, is therefore critical for a realistic simulation of
the seismolectric response \citep{pride2002biot}.
A particular interface-type response is expected to take place in the presence of mesoscopic
heterogeneities, that is, heterogeneities having sizes larger than the
characteristic pore scale but smaller than the prevailing wavelengths.
It is well known that the propagation of seismic waves through a medium containing
these kinds of heterogeneities can induce significant
oscillatory fluid flow as, in response to the spatial variations in
elastic compliances, the stresses associated with the passing seismic wave produce a pore
fluid pressure gradient.
Indeed, the energy dissipation
associated with this phenomenon is widely considered to be one of the
most important intrinsic seismic
attenuation mechanisms in the shallower parts of the crust \citep[e.g.,][]{muller-et-al10}.
The characteristics of such wave-induced flow are mainly controlled by the compressibility contrast
between the heterogeneities and the embedding matrix as well as permeabilities and
geometrical characteristics of the heterogeneities.
Given that the amount of flow produced by this phenomenon can
be significant, corresponding strong seismoelectric signals carrying
valuable information about these properties are also expected to arise.
Modeling wave-induced fluid flow is problematic because the corresponding diffusion
lengths, that is, the spatial scales at which fluid flow occurs,
are very small compared with the seismic wavelengths. Together with the theoretical complications
arising from the coupling of the poro-elastic and electromagnetic responses, this may
explain why the generation of seismoelectric effects due to
mesoscopic heterogeneities is largely unexplored. Arguably, the most important work
on this topic is by \cite{haartsen1997Electroseismic}, who modeled the seismoelectric
response from a single sand layer having a thickness much smaller
than the predominant seismic wavelengths.
More recently, \cite{zhu2008electroseismic}
performed seismoelectric laboratory experiments demonstrating that electromagnetic waves are generated at horizontal fractures intersecting boreholes.
In this paper, we propose a novel approach to address this problem based on a numerical oscillatory
compressibility test coupled with a model for seismoelectric
conversion and signal generation. We illustrate the methodology
by analyzing the electrical potential produced by mesoscopic
fractures in an otherwise homogeneous water-saturated sandstone sample.
The reason for this choice of model is that
the amount of wave-induced fluid flow scales with the compressibility
contrasts between the mesoscopic heterogeneities and their embedding
matrix, which in turn implies that a particularly prominent seismoelectric
response can be expected in fractured media.
\section{Methodological background}
We consider a square rock sample containing mesoscopic heterogeneities and
apply a time-harmonic compression
\begin{equation}\label{source}
P(t)=\Delta P\cos{\left(\omega t\right)},
\end{equation} at its top boundary,
where $\omega$ is the angular frequency and $t$ time.
No tangential forces are applied on the boundaries of the sample
and the solid is neither allowed to move on the bottom boundary nor to
have horizontal displacements on the lateral boundaries.
The fluid is not allowed to flow into or out of the sample.
To obtain the response of the sample,
we numerically solve the equations of quasi-static poroelasticity
under corresponding boundary conditions \citep{rubino-et-al09}. This
methodology allows for
computing the relative fluid velocity field $\dot{\boldsymbol w}$ which
is then employed to calculate the corresponding seismoelectric signal.
Wave-induced flow exerts a drag on the excess electrical charges of the double layer
surrounding grain surfaces, thereby generating a source current density of the
form \citep{jardani2010stochastic}
\begin{equation}
\textbf{j}_s = \bar{Q}_v^{eff} \dot{\boldsymbol w},
\label{eq:Js}
\end{equation}
where $\bar{Q}_v^{eff}$ is the effective excess charge density.
In the absence of an external current density, the electrical potential $\varphi$ in
response to a given source current density is described by \citep{Sill1983SP}
\begin{equation}
\nabla \cdot (\sigma \nabla \varphi) = \nabla \cdot \textbf{j}_s,
\label{eq:poisson}
\end{equation}
where $\sigma$ denotes the electrical conductivity. Given the
fluid velocity field $\dot{\boldsymbol w}$ inferred from the oscillatory compressibility test,
the seismoelectric signal induced by the presence of mesoscopic heterogeneities
is then obtained by numerically solving equations (\ref{eq:Js}) and (\ref{eq:poisson}).
As proposed by \cite{revil2013coupled}, we choose to ignore electroosmotic phenomena.
Moreover, we assume the electrical conductivity to be invariant with frequency,
because the frequency dependence of the effects of the wave-induced fluid flow
is orders of magnitude larger than that of the electrical conductivity
(see \cite{kruschwitz2010textural} for electrical measurements on sandstones).
\section{Numerical example: Seismoelectric response of fractured rocks}
We consider a model of a homogeneous water-saturated clean sandstone
permeated by three mesoscopic fractures (Figure \ref{fig:Geom&Vf}a).
The sample is a square with a side length of $2.5$~cm, and the mean
aperture of the fractures is $h=0.033$~cm.
The poroelastic response of this medium is modeled by
representing the mesoscopic fractures as highly compliant and
permeable porous regions embedded in a
stiffer porous matrix \citep{rubino2013fracture}.
The background and the fracture materials are hereafter denoted through the
superscripts \textit{b} and \textit{f} \citep{rubino2013fracture}:
drained-frame bulk moduli $K_m^b =$ 23~GPa and $K_m^f =$ 0.02~GPa,
shear moduli $\mu_m^b =$ 27 GPa and $\mu_m^f =$ 0.01~GPa,
porosities $\phi^b =$ 0.1 and $\phi^f =$ 0.5,
permeabilities $\kappa^b =$ 2.37 $\times$ 10$^{-14}$~m$^2$ and $\kappa^f =$ 10$^{-10}$~m$^2$,
and solid grain bulk moduli $K_s^b=K_s^f=37$~GPa.
The sample is fully saturated with water, with bulk modulus $K_w =$ 2.25~GPa and
viscosity $\eta_w =$ 0.003~Pa s. The amplitude of the applied compression is $\Delta P=1$~kPa.
We determine the electrical conductivities using $\sigma = \sigma_w \phi^{m}$,
with $m$ denoting the cementation exponent from Archie's law
and $\sigma_w$ the pore water conductivity.
Given the considered medium, the surface conductivity can be neglected.
We use $m^b$=2, $m^f$=1.3, and $\sigma_w$ = 10$^{-2}$~S/m, which represents
a typical value for pore water conductivity in laboratory experiments.
The simulated flow is in the viscous laminar regime and we can therefore estimate
the effective excess charge for the two materials using the empirical relationship proposed by
\cite{jardani2007tomography}
\begin{equation}
\log{(\bar{Q}_v^{eff})} = - 9.2349 - 0.8219 \log{(\kappa)}.
\label{eq:Qveff}
\end{equation}
This yields $\bar{Q}_v^{eff,b}$=87.6~C/m$^{3}$ and
$\bar{Q}_v^{eff,f}$=9.33~$\times 10^{-2}$~C/m$^{3}$. These values are consistent with those obtained by \cite{jouniaux1995permeability}
from voltage coupling coefficient measurements on sandstones with similar permeabilities.
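As a quick arithmetic cross-check of the material parameters above, the following sketch evaluates Archie's law and the empirical excess-charge relation of equation (\ref{eq:Qveff}); all numerical values are taken from the text, and the few-percent differences from the quoted charge densities presumably reflect rounding of the fit coefficients.

```python
import numpy as np

# Parameters quoted in the text
sigma_w = 1e-2                       # pore-water conductivity [S/m]
phi_b, m_b = 0.1, 2.0                # background porosity and cementation exponent
phi_f, m_f = 0.5, 1.3                # fracture porosity and cementation exponent
kappa_b, kappa_f = 2.37e-14, 1e-10   # permeabilities [m^2]

# Archie's law (surface conductivity neglected): sigma = sigma_w * phi^m
sigma_b = sigma_w * phi_b**m_b       # background conductivity
sigma_f = sigma_w * phi_f**m_f       # fracture conductivity

def Qv_eff(kappa):
    """Empirical excess-charge relation: log10(Qv_eff) = -9.2349 - 0.8219*log10(kappa)."""
    return 10.0**(-9.2349 - 0.8219 * np.log10(kappa))

print(sigma_b, sigma_f)                       # ~1.0e-4 S/m and ~4.1e-3 S/m
print(Qv_eff(kappa_b), Qv_eff(kappa_f))       # within a few percent of 87.6 and 9.33e-2 C/m^3
```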
\begin{figure}\vskip -3cm
\centering\includegraphics[angle=0,width=0.90\textwidth]{figure01.pdf}
\caption{(a) Numerical sample with three fractures.
(b and c) Horizontal and (d and e) vertical components of the
relative fluid velocity field at $t=0$ in equation (\ref{source}). The results correspond to an
oscillatory compressibility test with an amplitude of 1~kPa and frequencies of
(b and d) 1~Hz and (c and e) 10~kHz.}
\label{fig:Geom&Vf}
\end{figure}
Figure \ref{fig:Geom&Vf} displays the horizontal ($x$) and vertical ($y$) components
of the relative fluid velocity field at $t=0$ in equation (\ref{source}). For 1~Hz (Figures \ref{fig:Geom&Vf}b
and \ref{fig:Geom&Vf}d), we observe that the induced fluid velocity field is negligible.
This is expected, as for such a low frequency the diffusion lengths are larger than the size of
the considered heterogeneities and there is enough time during each oscillatory
half-cycle for the pore fluid pressure to equilibrate to a common value.
For a frequency of 10~kHz, the oscillatory compression produces
a significant fluid pressure increase in the highly compliant
fractures as compared to the stiffer embedding matrix, thus establishing
a strong fluid pressure gradient.
Correspondingly, the amount of fluid flow is much larger in this case
(Figures \ref{fig:Geom&Vf}c and \ref{fig:Geom&Vf}e). Significant fluid flow
occurs inside the fractures (Figure \ref{fig:Geom&Vf}c), but there is also
a substantial fluid exchange between the fractures and the embedding matrix material
(Figure \ref{fig:Geom&Vf}e).
\begin{figure}
\centering\includegraphics[angle=0,width=1\textwidth]{figure02_corr.pdf}
\caption{(a and b) Electrical potential distribution at $t=0$ in equation (\ref{source}) and (c and d)
resulting electrical potential differences at two electrodes with respect to
the reference electrode as functions of the normalized time.
The grey square and the two circles (in a and b) highlight the position of the reference
and the potential electrodes, respectively.
The results correspond to an oscillatory compressibility test with an amplitude of 1~kPa
and frequencies of (a and c) 1~Hz and (b and d) 10~kHz.}
\label{fig:Voltage}
\end{figure}
The relative fluid velocity field obtained from the oscillatory compressibility test
is used to solve the electrical problem through equations (\ref{eq:Js}) and (\ref{eq:poisson}).
Neumann boundary conditions (electrical insulation) are considered at the boundaries of
the sample, in conjunction with a Dirichlet boundary condition ($\varphi=0$~V) at the right bottom corner
of the sample ($x=$2.5 and $y=$0~cm).
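This boundary-value problem can be sketched with a simple five-point finite-difference solver. The sketch below is only an illustration of the setup, not the solver used for our simulations: it assumes a homogeneous conductivity and a toy dipolar source term in place of the heterogeneous $\sigma$ and the flow-derived $\nabla \cdot \textbf{j}_s$, and the grid size is arbitrary.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def solve_potential(f, sigma, h):
    """Solve sigma * laplacian(phi) = f (f standing for div j_s) with zero-flux
    Neumann boundaries, pinning phi = 0 at the bottom-right node as in the text."""
    ny, nx = f.shape
    N = ny * nx
    idx = lambda i, j: i * nx + j
    A = lil_matrix((N, N))
    b = (f * h * h / sigma).ravel()
    for i in range(ny):
        for j in range(nx):
            k = idx(i, j)
            if i == 0 and j == nx - 1:        # grounded reference corner (Dirichlet)
                A[k, k] = 1.0
                b[k] = 0.0
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < ny and 0 <= jj < nx:
                    A[k, idx(ii, jj)] += 1.0
                    A[k, k] -= 1.0
                # neighbours outside the grid are dropped: the mirrored ghost
                # node enforces the insulating (zero normal flux) condition
    return spsolve(csr_matrix(A), b).reshape(ny, nx)

n = 12
h = 2.5e-2 / (n - 1)                          # 2.5 cm sample
sigma = 1e-4                                  # toy homogeneous conductivity [S/m]
f = np.zeros((n, n))
f[5, 5], f[5, 6] = 1.0, -1.0                  # toy dipolar source divergence
phi = solve_potential(f, sigma, h)
```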
Figures \ref{fig:Voltage}a and \ref{fig:Voltage}b show the
electrical potential at $t=0$ for 1~Hz and 10~kHz, respectively. The magnitude and distribution
of the seismoelectric signal strongly depend on the frequency of the applied compression.
In particular, at 10~kHz the fluid flow occurring in the vicinity
of the fractures induces a strong divergence of the source current density
$\textbf{j}_s$, which in turn generates seismoelectric signals with measurable amplitudes of a few mV
(Figure \ref{fig:Voltage}b).
For a frequency of 1~Hz, the signal is too small to be measured (Figure \ref{fig:Voltage}a),
which is expected given the smaller magnitude of fluid flow (Figures \ref{fig:Geom&Vf}b and \ref{fig:Geom&Vf}d).
To evaluate the feasibility of observing such seismoelectric
signals in laboratory experiments,
we consider the responses at two measurement electrodes with respect to a reference electrode
($\varphi=0$~V) as functions of ``normalized'' time ($t \times f$)
(Figures \ref{fig:Voltage}c and \ref{fig:Voltage}d).
While no clear signal can be seen for a frequency of 1~Hz (Figure \ref{fig:Voltage}c),
the seismoelectric responses are on the order of a few mV at 10~kHz
(Figure \ref{fig:Voltage}d), which can be readily measured under laboratory conditions.
We also observe a discrepancy, both in magnitude $\vert \varphi \vert$ and
phase $\theta$, between the two electrodes simulated for this higher frequency.
Indeed, even though the electrode located at the top boundary is
further from the reference than the one located in the middle,
its signal is 1.8 times weaker. This electrode is almost in phase
($\theta\approx 20^\circ$) with the oscillatory compression, while the middle electrode
shows a considerably larger phase shift ($\theta\approx 140^\circ$).
This phase shift is due to both the viscosity-related lag experienced by
the wave-induced fluid flow and the relative position of the electrode with respect to the fractures.
\begin{figure}
\centering\includegraphics[angle=0,width=0.5\textwidth]{figure03_corr.pdf}
\caption{Effect of background permeability upon (a) the amplitude
$\vert \varphi \vert$ and (b) the phase $\theta$ of the electrical voltage
recorded at the top electrode ($y$ = 2.5 cm) for different frequencies.}
\label{fig:Kappa_test}
\end{figure}
Additional tests were conducted for a wide frequency range using different background permeabilities.
Figures \ref{fig:Kappa_test}a and \ref{fig:Kappa_test}b show that the electrical
potential measured at the top electrode is strongly frequency-dependent in terms of its amplitude and
phase. The amplitude has a first peak at a frequency that depends on the background
permeability, followed by a general increase at higher frequencies (Figure \ref{fig:Kappa_test}a).
The phase spectrum also shows a dependence on $\kappa^b$,
as the transition from high to low phase angles $\theta$ shifts to lower
frequencies as the background permeability decreases (Figure \ref{fig:Kappa_test}b).
These spectral signatures are explained by the
fact that the frequency range where fluid flow prevails scales with the background
permeability, together with the effects produced by the variations of the effective excess charge
according to equation (\ref{eq:Qveff}).
\section{Discussion}
When a seismic wave is incident perpendicular to a mesoscopic fracture plane,
or equivalently, when an oscillatory compression is applied to a rock sample
containing sub-horizontal fractures,
an oscillatory flow is induced from the fracture into the pore space of the embedding matrix
and vice versa. The spatial scale at which this flow occurs is limited
by the spacing between the fractures and is characterized by the diffusion length $L_{\rm d}\equiv\sqrt{D/\omega}$,
where $D$ is the pressure diffusivity of the embedding matrix, a
parameter directly proportional to the background permeability. Therefore,
the frequency range where significant flow prevails scales with this hydraulic parameter and
the spacing between the fractures. The amount of fluid flow is mainly governed by the compressibility contrast
between the fractures and the embedding matrix.
Therefore, while the frequency dependence of the seismoelectric signal is mainly governed
by the separation between fractures and the embedding matrix permeability, its magnitude is expected to be
controlled by the compressibility contrast between the fractures and the background,
in addition to the permeability of the background, which also affects the effective excess charge (equation \ref{eq:Qveff}).
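To make this scaling argument concrete, the back-of-the-envelope sketch below evaluates $L_{\rm d}=\sqrt{D/\omega}$ for the background material of the numerical example. The crude diffusivity estimate $D\approx \kappa^b K_w/(\eta_w\phi^b)$ is our assumption for illustration only, not a value used in the simulations.

```python
import numpy as np

# Background parameters from the numerical example
kappa_b = 2.37e-14   # background permeability [m^2]
K_w     = 2.25e9     # water bulk modulus [Pa]
eta_w   = 0.003      # water viscosity [Pa s]
phi_b   = 0.1        # background porosity

# Crude order-of-magnitude pressure diffusivity (our assumption)
D = kappa_b * K_w / (eta_w * phi_b)   # ~0.18 m^2/s

def L_d(f):
    """Diffusion length L_d = sqrt(D / omega) at frequency f [Hz]."""
    return np.sqrt(D / (2.0 * np.pi * f))

print(L_d(1.0), L_d(1e4))   # ~17 cm at 1 Hz, ~2 mm at 10 kHz
```

At 1~Hz the diffusion length exceeds the 2.5~cm sample, so pore pressures equilibrate, while at 10~kHz it is smaller than the fracture spacing, consistent with the flow patterns of Figure \ref{fig:Geom&Vf}.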
Additional numerical simulations indicate that the seismoelectric
signal is highly sensitive to the orientation of the fractures. This is expected, as
the amount of fluid flow in response to a vertical compression is maximum for sub-horizontal fractures
and minimum for the sub-vertical case \citep{rubino2013fracture}.
Moreover, fracture connectivity is also expected to play an important
role in determining the characteristics of
seismoelectric signals \citep{rubino2013fracture}.
Consequently, the seismoelectric responses due to mesoscopic effects
are expected to contain key information on structural and hydraulic
properties of the rock samples.
The most relevant result of this work is that mesoscopic heterogeneities can produce
measurable seismoelectric signals in response to an oscillatory compression
(Figures \ref{fig:Voltage} and \ref{fig:Kappa_test}). Corresponding laboratory experiments are already being conducted
for seismic purposes \citep[e.g.][]{batzle2006fluid}
and could be extended to seismoelectric measurements. Although our analysis was performed
considering laboratory-size samples, the results of this study also
have corresponding implications for seismoelectric conversions at the field scale.
Due to ubiquitous fractal scaling laws, virtually all
geological formations contain mesoscopic heterogeneities and, therefore,
seismic waves will produce seismoelectric signals as they travel through such heterogeneities.
Indeed, it is likely that some of the notorious difficulties encountered in seismoelectric field applications,
notably the generally high noise levels \citep[e.g.][]{strahser2011dependence},
could be related to heterogeneities of different nature and sizes,
yielding a multiplicity of seismoelectric source currents
dispersed over the volume traversed by the seismic waves.
A better understanding of the role played by mesoscopic heterogeneities
is therefore needed to improve the generation, recording and interpretation of seismoelectric
signals.
\section{Conclusions}
Based on a novel methodological framework that couples recent improvements in the modeling
of wave-induced fluid flow and seismoelectric conversion mechanisms,
we have shown for the first time that the presence of mesoscopic heterogeneities
can produce measurable seismoelectric signals for typical laboratory
setups. In particular, we find a measurable
frequency-dependent response of the seismoelectric signal caused by mesoscopic fractures
in an otherwise homogeneous water-saturated sandstone sample.
The magnitude of the seismoelectric signal is mainly governed by the compressibility contrast
between the embedding matrix and the mesoscopic heterogeneities. Therefore,
prominent seismoelectric effects are expected to arise not only in fractured media,
but also in partially saturated porous rocks.
This in turn opens the perspective of developing seismoelectric spectroscopy
as a novel method for characterizing such media.
Given the ever increasing interest in the measurement and interpretation of seismoelectric signals,
the results of this pilot study are expected to be of interest in a wide range of domains
of the Earth, environmental, and engineering sciences, including
nondestructive testing, groundwater and contaminant hydrology,
hydrothermal operations, nuclear waste storage as well as hydrocarbon exploration and production,
among many others.
Correspondingly, further computational and experimental work on this topic is needed, as it promises to provide
deeper insights on the dependence of the recorded signals on pertinent hydraulic and mechanical
properties as well as to improve the acquisition, recording, and interpretation of seismoelectric data
{\it per se}.
\begin{acknowledgments}
This work was supported in part by grants from the Swiss National Science Foundation. The authors modified Maflot (maflot.com), kindly provided by I. Lunati and R. Kunze. The authors thank the Editor Michael Wysession and two anonymous reviewers for constructive comments that helped to improve the quality of this manuscript.
\end{acknowledgments}
\section{Introduction and Notation}
Perfect digraphs have been introduced by Andres and Hochst\"attler
\cite{perfectdigraphs} as the class of digraphs where the clique
number equals the dichromatic number for every induced subdigraph.
Reed \cite{semistrong} showed that, if two graphs are
$P_4$-isomorphic, then either both are perfect or none of them is. In
this note we will derive an analogous result for perfect digraphs.
We start with some definitions. For basic terminology we refer to
Bang-Jensen and Gutin~\cite{digraphs}. For the rest of the paper, we
only consider digraphs without loops. Let $D=(V,A)$ be a digraph. The
\emph{symmetric part} $S(D)$ of $D=(V,A)$ is the digraph $(V,A_2)$
where $A_2$ is the union of all pairs of antiparallel arcs of $D$, the
\emph{oriented part} $O(D)$ of $D$ is the digraph $(V,A_1)$ where
$A_1=A\setminus A_2$.
A proper $k$-coloring of $D$ is an assignment $c:V\to \{1,\ldots,k\}$
such that for all $1\le i \le k$ the digraph induced by
$c^{-1}(\{i\})$ is acyclic. The \emph{dichromatic number} $\chi(D)$
of $D$ is the smallest nonnegative integer $k$ such that $D$ admits a proper $k$-coloring.
A \emph{clique} in a digraph $D$ is a subdigraph in which for any two
distinct vertices $v$ and $w$ both arcs $(v,w)$ and $(w,v)$ exist.
The \emph{clique number} $\omega(D)$ of $D$ is the size of the largest
clique in $S(D)$. The clique number is an obvious lower bound for the
dichromatic number. $D$ is called \emph{perfect} if, for any induced
subdigraph $H$ of~$D$, $\chi(H)=\omega(H)$.
An (undirected) graph $G=(V,E)$ can be considered as the symmetric
digraph $D_G=(V,A)$ with $A=\{(v,w),(w,v)\mid vw\in E\}$. In the
following, we will not distinguish between $G$ and $D_G$. In this way,
the dichromatic number of a graph $G$ is its chromatic number
$\chi(G)$, the clique number of $G$ is its usual clique number
$\omega(G)$, and $G$ is perfect as a digraph if and only if $G$ is
perfect as a graph.
A main result of \cite{perfectdigraphs} is the following:
\begin{thm}[\cite{perfectdigraphs}]\label{main}
A digraph $D=(V,A)$ is perfect if and only if $S(D)$ is perfect and $D$
does not
contain any directed cycle $\vec{C}_n$ with $n\ge3$ as induced subdigraph.
\end{thm}
Together with the Strong Perfect Graph Theorem (see e.g.
\cite{golumbic}) this yields a characterization of perfect digraphs in
the form of forbidden induced minors. The Weak Perfect Graph Theorem (see
\cite{golumbic}), though, does not generalize. The directed 4-cycle $\vec{C_4}$ is
not perfect but its complement is perfect, thus perfection is in
general not maintained under taking complements.
Two graphs $G=(V,E_1)$ and $H=(V,E_2)$ are {\em $P_4$-isomorphic}, if
any set $\{a,b,c,d\}\subseteq V$ induces a chordless path, i.e.\ a
$P_4$, in $G$ if and only if it induces a $P_4$ in $H$.
\begin{thm}[Semi-strong Perfect Graph Theorem \cite{semistrong}]\label{reedresult}
\label{semistrong} If $G$ and $H$ are $P_4$-isomorphic, then
\[G \text{ is perfect } \Longleftrightarrow H \text{ is perfect}.\]
\end{thm}
The graphs without an induced $P_4$ are the cographs \cite{corneil}. Thus any
pair of cographs with the same number of vertices is $P_4$-isomorphic.
In order to generalize
Theorem~\ref{reedresult} to digraphs we consider the
class of directed cographs~\cite{dicographs}, which are characterized
by a set $\Fscr$ of eight forbidden induced minors. Since the class of
directed cographs is invariant under taking complements and perfect
digraphs are not, it is clear that isomorphism with respect to
$\Fscr$ will not yield the right notion of isomorphism for our
purposes. It turns out that restricting to five of these minors
yields the desired result.
\section{$P^4C$-isomorphic digraphs}
The five forbidden induced minors from \cite{dicographs} we need are
the symmetric path $P_4$, the directed
3-cycle $\vec{C_3}$, the directed path $\vec{P_3}$
and the two possible augmentations $\vec{P}_3^+$ and
$\vec{P}_3^-$ of the $\vec{P_3}$ with one antiparallel edge (see
Figure~\ref{fig:1}).
\begin{figure}[htbp]
\centering
\includegraphics[width=.7\textwidth]{minors}
\caption{The five induced subdigraphs considered\label{fig:1}}
\end{figure}
\begin{definition}
Let $D=(V,A)$ and $D'=(V,A')$ be two digraphs on the same vertex
set. Then $D$ and $D'$ are said to be {\em $P^4C$-isomorphic} if and
only if
\begin{enumerate}
\item any set $\{a,b,c,d\} \subseteq V$ induces
a~$P_4$ in $S(D)$ if and only if it induces
a~$P_4$ in~$S(D')$,
\item any set $\{a,b,c\} \subseteq V$ induces a $\vec{C_3}$ in $D$
if and only if it induces a $\vec{C_3}$ in~$D'$,
\item any set $\{a,b,c\} \subseteq V$ induces a $\vec{P_3}$ with midpoint $b$ in $D$
if and only if it induces a $\vec{P_3}$ with midpoint $b$ in $D'$ and
\item any set $\{a,b,c\} \subseteq V$ induces a $\vec{P}_3^+$ or a $\vec{P}_3^-$, in either case with midpoint $b$, in $D$
if and only if it induces one of them with midpoint $b$ in $D'$.
\end{enumerate}
\end{definition}
Note that the
$P_4$ in case 1 is not
necessarily induced in $D$, resp.\ in $D'$.
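For small digraphs the conditions of this definition can be verified by exhaustive enumeration. The following Python sketch, which is our illustration and not part of the theory, represents a digraph by its arc set of ordered pairs and tests all triples and quadruples:

```python
from itertools import combinations, permutations

def induced(A, S):
    """Arcs of A induced by the vertex set S."""
    return {(u, v) for (u, v) in A if u in S and v in S}

def sym(A):
    """Symmetric part: arcs occurring together with their reversal."""
    return {(u, v) for (u, v) in A if (v, u) in A}

def is_P4(E, quad):
    """Does the (symmetric) arc set E induce a chordless path on quad?"""
    for p in permutations(quad):
        path = {(p[0], p[1]), (p[1], p[2]), (p[2], p[3])}
        if induced(E, set(quad)) == path | {(v, u) for (u, v) in path}:
            return True
    return False

def P3_mid(B, tri):
    """Midpoint b if B is an induced directed P3 a->b->c, else None."""
    for a, b, c in permutations(tri):
        if B == {(a, b), (b, c)}:
            return b
    return None

def is_C3(B, tri):
    return any(B == {(a, b), (b, c), (c, a)} for a, b, c in permutations(tri))

def P3pm_mid(B, tri):
    """Midpoint b if B is a P3+ or P3- (directed P3 plus one antiparallel arc)."""
    for a, b, c in permutations(tri):
        if B in ({(a, b), (b, a), (b, c)}, {(a, b), (b, c), (c, b)}):
            return b
    return None

def p4c_isomorphic(V, A1, A2):
    S1, S2 = sym(A1), sym(A2)
    if any(is_P4(S1, q) != is_P4(S2, q) for q in combinations(V, 4)):
        return False                                   # condition 1
    for t in combinations(V, 3):
        B1, B2 = induced(A1, set(t)), induced(A2, set(t))
        if is_C3(B1, t) != is_C3(B2, t):               # condition 2
            return False
        if P3_mid(B1, t) != P3_mid(B2, t):             # condition 3
            return False
        if P3pm_mid(B1, t) != P3pm_mid(B2, t):         # condition 4
            return False
    return True
```

For instance, a directed triangle and its reversal are $P^4C$-isomorphic, whereas a directed triangle and a directed path on three vertices are not.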
\begin{prop}\label{prop:cycle}
If $D$ and $D'$ are $P^4C$-isomorphic, then $D$ contains an induced
directed cycle of length $k\ge 3$ if and only if the same is true for $D'$.
\end{prop}
\begin{proof}
By symmetry it suffices to prove that, if $\{v_0,\ldots v_{k-1}\}$
induces a directed cycle $\vec{C_k}$ in $D$, then the same holds for $D'$.
The assertion is clear if $k=3$, thus assume $k \ge 4$. We may,
furthermore, assume that the vertices are traversed in consecutive
order in $D$. Since $D$ and $D'$ are $P^4C$-isomorphic, each set
$\{v_i,v_{i+1}, v_{i+2}\}$ induces a $\vec{P_3}$ with midpoint
$v_{i+1}$ in $D'$, where indices are taken modulo $k$. This yields a
directed cycle $C$ on $v_0,\ldots,v_{k-1}$, possibly with opposite
orientation with respect to~$D$. In that case we relabel the vertices such
that the label coincides with the direction of traversal. We claim
the cycle is induced in $D'$, too.
Assume it is not, i.e.\ $C$ has a chord $(v_i,v_j)$, $j\ne i-1$
in~$D'$. We choose $j$ such that the directed path from $v_j$ to
$v_i$ on $C$ is shortest possible. If $(v_i,v_j)$ is an asymmetric
arc, then, since $\{v_i,v_j,v_{j+1}\}$ does not induce a
$\vec{C_3}$, it must induce a $\vec{P_3}$ with midpoint $v_j$
in~$D'$ and hence the same must hold in $D$, contradicting
$\vec{C_k}$ being induced. If we have a pair of antiparallel edges
between $v_i$ and $v_j$, then, similarly, $\{v_i,v_j,v_{j+1}\}$
induces a $\vec{P}_3^+$ or a $\vec{P}_3^-$ with midpoint $v_j$, also
leading to a contradiction.
\end{proof}
\begin{thm} If $D$ and $D'$ are $P^4C$-isomorphic then
\[D \text{ is perfect } \Longleftrightarrow D' \text{ is perfect}.\]
\end{thm}
\begin{proof}
By assumption $S(D)$ and $S(D')$ are $P_4$-isomorphic, hence using
Theorem~\ref{semistrong} we find that $S(D)$ is perfect if and only
if $S(D')$ is perfect. By Proposition~\ref{prop:cycle}, $D$ contains
an induced directed cycle of length at least three if and only if
the same holds for $D'$. The assertion thus follows from Theorem~\ref{main}.
\end{proof}
\section{Transitive extensions of cographs}
In this section we will analyse the class of digraphs without any of
the five subgraphs, which thus are trivially pairwise
$P^4C$-isomorphic.
Since the symmetric part of such a graph is a cograph, we may consider
its cotree~\cite{corneil} in canonical form, where the labels
alternate between $0$ and $1$. Since the $1$-labeled tree vertices
correspond to complete joins, there is no additional room for
asymmetric arcs. The $0$-labeled vertices correspond to disjoint
unions. Assume the connected components in $S(G)$ are $G_1,\ldots,G_k$.
\begin{prop}
If there exists an asymmetric arc connecting a vertex $v_i$ in $G_i$ to a
vertex $v_j$ in~$G_j$, then $G_i$ and $G_j$ are connected by an orientation
of the complete bipartite graph $K_{V(G_i),V(G_j)}$.
\end{prop}
\begin{proof}
Since $S(G_i)$ and $S(G_j)$ are connected and by symmetry, it
suffices to show that $v_i$ must be connected by an asymmetric arc to all
symmetric neighbors of~$v_j$. Let $w$ be such a neighbor. Since
there is no symmetric arc from $v_i$ to $w$ and $\{v_i,v_j,w\}$ must
neither induce a $\vec{P}_3^-$ nor a $\vec{P}_3^+$, we must have an
asymmetric arc between $v_i$ and $w$.
\end{proof}
Hence, the asymmetric arcs between the components $G_1,\ldots,G_k$
constitute an orientation of a complete $\ell$-partite graph for $1\le
\ell \le k$. The situation is further complicated by the fact that we
must neither create a $\vec{C_3}$ nor a $\vec{P_3}$, where we have to
take into account that there may also be asymmetric arcs within the $G_i$.
We wonder whether this structure is strict enough to make some
problems tractable that are $\mathcal{NP}$-complete in general. In
particular we would be interested in the complexity of the problem to cover all vertices with a minimum number of vertex disjoint directed paths.
\section{\bf 1. Introduction}
In [\putref{Dowsut,Dowcycl}] I computed the operator determinants on lens space factors of
the unit 3--sphere for the Laplacian conformal in four dimensions. This meant that the
eigenvalues were squares of integers and the $\zeta$--function\ analysis was relatively straightforward
and explicit. Nash and O'Connor, [\putref{NandO}], evaluated the determinants for other
operators in connection with analytic torsion. Factors of higher spheres have been
considered by Bauer and Furutani [\putref{BandF}]. Evaluations have also been conducted
lately in string theory and quantum field theory, {\it e.g.}\ Castro, Lashkar and Maloney,
[\putref{CLM}], Gang, [\putref{Gang}], Alday, Fluder and Sparks, [\putref{AFS}],
Radi\u{c}evi\'c, [\putref{Radicevic}].
I here return to this topic still restricting to the 3--sphere, essentially because it is somewhat
special and therefore more interesting (and easier). My attitude is mainly numerical and I
wish to generalise the Laplacian to one that includes mass, in particular to the Laplacian
conformal on the three--sphere. The analysis may be useful when considering the topology
of the Universe, a topic under present scrutiny. I feel that a mostly computational approach
has advantages such as speed and will prove a useful adjunct to the analytical route.
Fixed--point--free factors of the three sphere, S$^3/\Gamma'$, divide into the homogeneous
variety, in which the deck group, $\Gamma'$, acts on the right (or left), the left (or right) action
being trivial, and the inhomogeneous sort with a two--sided action. Left and right here refer
to the SU(2) groups in the symmetry group isomorphism $SO(4)\sim SU(2)\times
SU(2)/\mbox{\open\char90}_2$ acting on S$^3$ in its guise as $SU(2)$, {\it i.e. } $\Gamma'=\Gamma'_L\times\Gamma'_R$.
In this paper I mostly confine my attention to the homogeneous type which is somewhat
easier to deal with. The classification of relevance here is given in Wolf, [\putref{Wolf}],
Corollary 2.7.2 which says that $\Gamma'_R$ is either a cyclic group or a binary polyhedral
group \footnotecheck\defaultoption[]\@footnote{ I consider the dihedron to be a polyhedron. Incidentally, my favourite
potted summary of the classification is that by Ellis, [\putref{Ellis}].}.
I showed in [\putref{Dowsut}] that, because of the homomorphism SO(3)$\sim$
SU(2)/$\mbox{\open\char90}_2$, the binary degeneracies, which are the essential part of the spectrum, can
be reduced to an analysis on the {\it orbifolded} two--sphere, S$^2/\Gamma$, where $\Gamma'$ is
the lift of $\Gamma$. Following Bethe, I refer to $\Gamma'$ as the {\it double} of $\Gamma$. For many
purposes, one does not need the degeneracies themselves, their generating functions often
being sufficient, and these are classically available, {\it e.g.}\ [\putref{Meyer}].
Lens spaces arise from the choice of a cyclic group, $\mbox{\open\char90}_q$, for $\Gamma'_R$. In order to use
the reduction mentioned in the last paragraph, the restriction that $q$ should be even has to
be made because the binary lift of $\mbox{\open\char90}_q$ is $\mbox{\open\char90}_{2q}$. However it is possible to treat
even and odd orders together, which I will do.
\section{\bf 2. Massive determinants on homogeneous lens spaces}
In order to be in accord with a previous notation, the scalar eigenvalues are taken to be
$l^2-\alpha^2$, $l=1,2,\ldots$, with {\it total} degeneracies, $D_l(\Gamma')$. I need the
associated $\zeta$--function\ which I write using a Bessel function relation, [\putref{Minak,CaandWe}], as,
{\it cf }\ [\putref{Dowren}],
$$\eqalign{
Z(s,\Gamma',\alpha)&=\sum_l{D_l(\Gamma')\over(l^2-\alpha^2)^s}\cr
&={\sqrt\pi\over\Gamma(s)}\int_0^\infty d\tau\sum_l D_l(\Gamma')\,e^{-l\tau}\,
\bigg({\tau\over2\alpha}\bigg)
^{s-1/2}\,I_{s-1/2}(\alpha\tau)\cr
&={\sqrt\pi\over\Gamma(s)}\int_0^\infty d\tau\,K_{\Gamma'}^{1/2}(\tau)\,
\bigg({\tau\over2\alpha}\bigg)
^{s-1/2}\,I_{s-1/2}(\alpha\tau)\,,
}
\eql{zet}
$$
where $K_{\Gamma'}^{1/2}$ is the cylinder kernel for the Laplacian conformal in four
dimensions and is, equivalently, the degeneracy generating function used in
[\putref{Dowsut}].
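The representation (\puteqn{zet}) rests on the standard Laplace transform $\int_0^\infty e^{-l\tau}\,\tau^{s-1/2}I_{s-1/2}(\alpha\tau)\,d\tau=(2\alpha)^{s-1/2}\Gamma(s)\,\pi^{-1/2}(l^2-\alpha^2)^{-s}$ and is easily confirmed numerically. The short Python check below, included purely as an illustration, does this for the full sphere ($q=1$, $D_l=l^2$) at the convergent sample point $s=3$, $\alpha=1/2$; the truncation and quadrature limits are my choices:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, gamma

def Z_sum(s, alpha, L=4000):
    """Direct spectral sum for the round 3-sphere (q = 1): D_l = l^2."""
    l = np.arange(1, L + 1, dtype=float)
    return np.sum(l**2 / (l**2 - alpha**2)**s)

def Z_bessel(s, alpha):
    """Right-hand side of (zet) with the q = 1 cylinder kernel."""
    def f(tau):
        K = np.cosh(tau / 2) / (4.0 * np.sinh(tau / 2)**3)
        return K * (tau / (2.0 * alpha))**(s - 0.5) * iv(s - 0.5, alpha * tau)
    # integrand ~ tau^2 near 0 and decays exponentially; 60 is an ample cutoff
    val, _ = quad(f, 1e-9, 60.0, limit=200)
    return np.sqrt(np.pi) / gamma(s) * val

print(Z_sum(3.0, 0.5), Z_bessel(3.0, 0.5))   # the two agree
```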
For the homogeneous case, the total degeneracy is a product of the left and right
degeneracies, the former simply being a factor of $l$. (One should regard $l$ as an SU(2)
representation dimension, $2j+1$, which is just the range of the left index on a Wigner
${\cal D}^{(j)}$ matrix.) Then, $D_l(\Gamma')=l\,d_l(\Gamma')$ and we need the second factor.
For quotient spaces, simple character theory allows a closed form for the generating function
of right degeneracies. In particular, for lens space degeneracies, $d_l(q)$,
$$
G(t,q)\equiv\sum_{l=1}^\infty d_l(q)\,t^l={t(1+t^q)\over(1-t^2)(1-t^q)}\,,
\eql{lsg}
$$
and, therefore, setting $t=e^{-\tau}$,
$$\eqalign{
K_{q}^{1/2}(\tau)&=\sum_{l=1}^\infty l\,d_l(q)\,e^{-l\tau}\cr
&=-{d\over d\tau}{\coth(q\tau/2)\over2\sinh\tau}\,.
}
\eql{cylk}
$$
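Both (\puteqn{lsg}) and (\puteqn{cylk}) can be checked numerically. Multiplying (\puteqn{lsg}) by $(1-t^2)(1-t^q)$ gives the recurrence $d_l=d_{l-2}+d_{l-q}-d_{l-q-2}+\delta_{l,1}+\delta_{l,q+1}$ (with $d_m=0$ for $m\le0$), which the following Python sketch, again only an illustration, uses to generate the degeneracies and to compare the truncated sum with the closed form:

```python
import numpy as np

def degeneracies(q, L):
    """d_l(q), l = 1..L, from the recurrence implied by the generating function."""
    d = np.zeros(L + 1)
    for l in range(1, L + 1):
        d[l] = ((1 if l == 1 else 0) + (1 if l == q + 1 else 0)
                + (d[l - 2] if l >= 2 else 0)
                + (d[l - q] if l >= q else 0)
                - (d[l - q - 2] if l >= q + 2 else 0))
    return d

def K_closed(tau, q):
    """-d/dtau [coth(q tau/2)/(2 sinh tau)], differentiated analytically."""
    return (q * np.sinh(tau) + np.cosh(tau) * np.sinh(q * tau)) / \
           (4.0 * np.sinh(tau)**2 * np.sinh(q * tau / 2)**2)

def K_sum(tau, q, L=300):
    """Truncated sum  sum_l l d_l(q) exp(-l tau)."""
    d = degeneracies(q, L)
    l = np.arange(L + 1, dtype=float)
    return np.sum(l * d * np.exp(-l * tau))
```

As a sanity check, $d_l(1)=l$ reproduces the full-sphere degeneracies $D_l=l^2$, and for $q=2$ only odd $l$ survive.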
The mathematical problem before us is the evaluation of the derivative at zero,
$Z'(0,q,\alpha)$, of the $\zeta$--function\ (\puteqn{zet}) with (\puteqn{cylk}). Since the dimension of the manifold
is odd, I can proceed exactly as in [\putref{CaandWe}], [\putref{Dowren}], [\putref{Chodos1}]
and employ a complex contour method of continuing to $s=0$.
Defining $I(s,q,\alpha)$ by $Z(s,q,\alpha)=I(s,q,\alpha)/\Gamma(s)$, one has $Z'(0,q,\alpha)=I(0,q,\alpha)$ and
a simple calculation which parallels that in [\putref{Dowren}] produces,
$$\eqalign{
I(0,q,\alpha)&=\int_0^\infty dx\, {\rm Re\,}{(q\sinh\tau+
\cosh\tau\sinh q\tau)\,\cosh\alpha\tau\over2\tau \sinh^2\tau\,\sinh^2 q\tau/2}\,,\cr
}
\eql{zedash0}
$$
where $\tau=x+i\Delta$ with $\Delta<2\pi/q$. For S$^3$, $q=1$ and,
$$\eqalign{
I(0,1,\alpha)&=\int_0^\infty dx\, {\rm Re\,}{(1+
\cosh\tau)\,\cosh\alpha\tau\over2\tau \sinh\tau\,\sinh^2 \tau/2}\cr
&=\int_0^\infty dx\, {\rm Re\,}{\coth\tau/2\,\cosh\alpha\tau\over2\tau \sinh^2 \tau/2}\,,
}
$$
which, as a check, agrees with an expression derived in [\putref{Dowren}].
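Evaluating (\puteqn{zedash0}) requires only one--dimensional quadrature. The sketch below, with contour offset and truncation chosen by me, checks the result against two cases known in closed form: for $q=1$, $\alpha=0$ one has $Z(s)=\zeta_R(2s-2)$, $\zeta_R$ being the Riemann $\zeta$--function, and hence $Z'(0)=2\zeta_R'(-2)=-\zeta_R(3)/2\pi^2$, while for $q=2$, $\alpha=0$ only odd $l$ contribute and $Z'(0)=3\zeta_R(3)/2\pi^2$:

```python
import numpy as np
from scipy.integrate import quad

def I0(q, alpha):
    """Z'(0,q,alpha) from the contour integral (zedash0), tau = x + i*Delta."""
    Delta = np.pi / (q + 1)         # any 0 < Delta < 2*pi/q clear of the poles works
    def integrand(x):
        t = x + 1j * Delta
        num = (q * np.sinh(t) + np.cosh(t) * np.sinh(q * t)) * np.cosh(alpha * t)
        den = 2.0 * t * np.sinh(t)**2 * np.sinh(q * t / 2)**2
        return (num / den).real
    # the integrand decays like exp(-x); truncating at x = 40 is ample
    val, _ = quad(integrand, 0.0, 40.0, limit=200)
    return val

apery = 1.2020569031595943          # zeta_R(3)
print(I0(1, 0.0), -apery / (2 * np.pi**2))     # both ~ -0.0609
print(I0(2, 0.0), 3 * apery / (2 * np.pi**2))  # both ~  0.1827
```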
In Fig.1, I plot logdet $=-I(0,q,\alpha)$ as a continuous function of $q$ for $\alpha=0$ and for
$\alpha=1/2$, corresponding to conformal in four and in three dimensions respectively.
\epsfxsize=5truein \epsfbox{lensdet1.eps}
Figure 2 gives, for three values of $q$, the variation of logdet against the ``mass'', $\mu$
defined by $\mu^2=1/4-\alpha^2$, which is, for this purpose, a more convenient variable.
(What should be called the mass is somewhat arbitrary.)
The asymptotic variation for large mass was discussed, in a similar context, in
[\putref{Dowren}] and is best expressed in terms of the short--time expansion coefficients of
the heat--kernel for propagation by the Laplacian conformal in {\it four} dimensions
because this expansion, in the present case, terminates with the first, `volume' term and it
easily follows that $Z'(0,q,im)\sim \pi m^3/3q$ with an exponentially small remainder. The
beginnings of this can be seen in the figure.
\epsfxsize=5truein \epsfbox{lensdet2.eps}
\section{\bf 3. The other quotients}
As a simple `application' of these expressions, the logdets for the binary tetrahedral, (${\bf
T'}$), octahedral, (${\bf O}'$), and icosahedral, (${\bf I}'$), factored spaces can be
determined using the cyclic decomposition of the corresponding SO(3) groups,
[\putref{Meyer,PandM}]. This was used in [\putref{Dowsut}]. Any spectral quantity\footnotecheck\defaultoption[]\@footnote{ I
restrict the notion of spectral quantity by imposing linearity in the sense that, if the
spectrum is composed of the union of two sets, its spectral quantity is the sum of the
spectral quantities of the two parts. A typical example is the $\zeta$--function\ but not the determinant.}
${\cal S}(\Gamma')$ on such an S$^3/\Gamma'$ can be expressed in terms of binary cyclic quantities,
$$
{\cal S}(\Gamma')={1\over2}\bigg(\sum_q {\cal S}(\mbox{\open\char90}_{2q}) -{\cal S}(\mbox{\open\char90}_2)\bigg)\,,
\eql{cycldec}
$$
where the sum is over the (three) $q$--fold axes of rotation in the SO(3) subgroup, $\Gamma$,
of which $\Gamma'$ is the lift. The values of $q$ are contained in the symbol of the polyhedral
group, $(l,m,n)$, {\it i.e. } $l=2,m=3$ and $n=3,4,5$ for ${\bf T,O,I}$, respectively.
I give the numbers for the conformal--in--three and conformal--in--four dimensions
Laplacians. The latter agree with those in [\putref{Dowsut}] computed using a different
algorithm for the even lens spaces. Thus,
$$
{\rm det\,}({\bf T}')=0.159259\,,\quad {\rm det\,}({\bf O}')=0.099650\,,\quad {\rm det\,}({\bf I}')=0.055743,
$$
for conformal--in--three, and,
$$
{\rm det\,}({\bf T}')=0.202089\,,\quad {\rm det\,}({\bf O}')=0.128776\,,\quad {\rm det\,}({\bf I}')=0.073056,
$$
for conformal--in--four dimensions. These values are for real scalar fields.
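These numbers can be reproduced by combining (\puteqn{cycldec}) with the quadrature for (\puteqn{zedash0}). The sketch below is a reconstruction under my reading of the cyclic orders (the $q=2,3,n$ axes lifting to $\mbox{\open\char90}_4$, $\mbox{\open\char90}_6$ and $\mbox{\open\char90}_{2n}$), not the author's actual script.

```python
# Reconstruction: determinants on the binary polyhedral quotients via the
# cyclic decomposition (cycldec), det = exp(-Z'(0)).
import numpy as np
from scipy.integrate import quad

def zdash0(q, alpha):
    # eq. (zedash0): Z'(0,q,alpha) = -logdet on S^3/Z_q
    Delta = np.pi / (q + 1.0)
    def integrand(x):
        if (q + 2.0) * x > 650.0:        # exponentially small tail; avoids overflow
            return 0.0
        t = x + 1j * Delta
        num = (q * np.sinh(t) + np.cosh(t) * np.sinh(q * t)) * np.cosh(alpha * t)
        return (num / (2.0 * t * np.sinh(t)**2 * np.sinh(q * t / 2.0)**2)).real
    return quad(integrand, 0.0, np.inf, limit=400)[0]

def det_binary_polyhedral(n, alpha):
    # symbol (2,3,n), n = 3,4,5 for T', O', I'; axes lift to Z_4, Z_6, Z_{2n}
    s = 0.5 * (zdash0(4, alpha) + zdash0(6, alpha)
               + zdash0(2 * n, alpha) - zdash0(2, alpha))
    return np.exp(-s)
```

With $\alpha=0$ (conformal in four dimensions) and $\alpha=1/2$ (conformal in three) this reproduces the tabulated values.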
\section{\bf 4. Twisted fields}
A simple, and basic, example is a complex scalar field or U(1) line bundle. In general terms,
the quantum field theory is characterised by the homomorphisms of the deck group, $\Gamma'$,
which here is the fundamental group, into the bundle group, ${\cal G}$, {\it i.e. }\ by
$\rm Hom(\Gamma',{\cal G})$, which can be realised as flat connections associated with `fluxes' of
gauge fields through `holes' in the configuration space, {\it e.g.}\
[\putref{DandB,BandD2,dowaustin,dowstat}].
The simplest situation has $\Gamma'=\mbox{\open\char90}_q$ and ${\cal G}=$ U(1), as considered in
[\putref{DandB}], for example. The homomorphisms correspond to the $q$ $q$th roots of
unity and the elements of $\rm Hom(\mbox{\open\char90}_q,U(1))$, {\it i.e. }\ the multiplying representations of
$\mbox{\open\char90}_q$ (the `twists') are generated by $\omega^r$ where $\omega^q=1$ and $0\le r<q$.
Instead of (\puteqn{lsg}), character theory gives the degeneracy generating function
$$
G(t,q,r)\equiv\sum_{l=1}^\infty d_l(q,r)\,t^l={t^{1+r}(1+t^{-2q\delta})\over(1-t^2)(1-t^q)}\,,
$$
where $\delta+1/2=r/q$, which is the flux. For (\puteqn{cylk}) one has,
$$\eqalign{
K_{q}^{1/2}(\tau,r)&=\sum_{l=1}^\infty l\,d_l(q,r)\,e^{-l\tau}\cr
&=-{d\over d\tau}{\cosh(q\tau\delta)\over2\sinh\tau\,\sinh q\tau/2}\,.
}
\eql{cylk2}
$$
The effect of the twisting on the logdet can now be determined numerically from the
equivalent of (\puteqn{zedash0}), {\it i.e. }, ($\tau=x+i\Delta$),
$$\eqalign{
I(0,q,r,\alpha)&=-\int_0^\infty dx\, {\rm Re\,} \bigg({\cosh\alpha\tau\over\tau}{d\over dx}
{\cosh(q\tau\delta)\over2\sinh\tau\,\sinh q\tau/2}\bigg)\,,\cr
}
\eql{zedash1}
$$
where I have not, this time, performed the differentiation.
Figure 3 shows the (typical) variation of the effective action (minus half the logdet), for a
complex field with the twist of the bundle plotted continuously. The mass parameter, $\alpha$,
has been set to $1/2$ to give the Laplacian conformal in 3--d.
\epsfxsize=5truein \epsfbox{twist1.eps}
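For the record, (\puteqn{zedash1}) can be coded with the $\tau$--derivative performed analytically. The following is my sketch (assuming SciPy), with the same contour conventions as before:

```python
# Sketch of eq. (zedash1): twisted -logdet on S^3/Z_q, flux delta = r/q - 1/2,
# with K^{1/2} = -d/dtau [cosh(q*delta*tau)/(2 sinh(tau) sinh(q*tau/2))].
import numpy as np
from scipy.integrate import quad

def zdash_twist(q, r, alpha):
    d = r / q - 0.5                      # the flux delta of (cylk2)
    Delta = np.pi / (q + 1.0)
    def integrand(x):
        if (q + 2.0) * x > 650.0:        # exponentially small tail
            return 0.0
        t = x + 1j * Delta
        N, Np = np.cosh(q * d * t), q * d * np.sinh(q * d * t)
        D = 2.0 * np.sinh(t) * np.sinh(q * t / 2.0)
        Dp = 2.0 * (np.cosh(t) * np.sinh(q * t / 2.0)
                    + 0.5 * q * np.sinh(t) * np.cosh(q * t / 2.0))
        # K = -(N'D - N D')/D^2 = (N D' - N' D)/D^2
        return (np.cosh(alpha * t) / t * (N * Dp - Np * D) / D**2).real
    return quad(integrand, 0.0, np.inf, limit=400)[0]
```

A useful analytic check: for $q=2$, $r=1$ the flux vanishes and the integral evaluates to $-\zeta(3)/\pi^2$; and since the integrand depends on $\delta$ only through even functions, twists $r$ and $q-r$ give identical results.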
\section{\bf 5. More complicated twistings}
More generally, if the field belongs to some representation of the internal symmetry
(bundle) group, ${\cal G}$, a twisting, {\it i.e. } an element $\rho\in\rm Hom(\Gamma',{\cal G})$ is determined
by expressing this representation as a direct sum of irreps, $\mbox{{\ssb\char65}}$, of $\Gamma'$. All that is
needed, for a flat bundle, is that the dimensions match and that any other conditions, such
as unimodularity, be met, [\putref{DandJ}]. This is an enumerative, rather than an
algorithmic, prescription for cataloguing the gauge vacua.
For a flat bundle spectral quantity, ${\cal S}(\rho)$, the direct sum becomes, by additivity, an
algebraic sum of the components, ${\cal S}(\mbox{{\ssb\char65}})$, which act as spectral building blocks. These,
in turn can be expressed linearly in terms of {\it cyclic} spectral quantities. This follows from
the algebraic sufficiency of the cyclic subgroups of $\Gamma'$, which is guaranteed by Artin's
theorem, and an inducing isospectral theorem of Ray and Singer, [\putref{RandS}]. (See
[\putref{Dowgt}], Tsuchiya, [\putref{Tsuchiya}].) Equation (\puteqn{cycldec}) is an example
which I rewrite more explicitly as,
$$
{\cal S}(\mbox{{\ssb\char49}})={1\over2}\big({\cal S}(0;R)+{\cal S}(0;S)+{\cal S}(0;T)-{\cal S}(0;RST)\big)\,,
\eql{cycldet2}
$$
where ${\cal S}(r;\gamma)$ is the twisted spectral quantity for a cyclic quotient generated by
$\gamma$. $R,S$ and $T$ are the generators for the binary polyhedral groups, {\it e.g.}\
[\putref{CandM}], corresponding to $q=2$, $q=3$ and $q=(3,4$ or $5$),
respectively.\footnotecheck\defaultoption[]\@footnote{ Only $R$, $S$ and $T$ are really needed. The inclusion of the
central element $\overline E=RST$ gives a more symmetrical formulation, $\overline E^2=E={\rm
id}$.} The left--hand side of (\puteqn{cycldet2}) refers to the trivial irrep, ${\mbox{{\ssb\char49}}}$, of
$\Gamma'$, {\it i.e. } the trivial bundle.
As another example, derived in [\putref{Dowgt}], I give the first twisted relation,
$$
{\cal S}(\mbox{{\ssb\char50}}_s)={1\over2}\big({\cal S}(1;R)+{\cal S}(1;S)+{\cal S}(1;T)-{\cal S}(1;RST)\big)\,,
\eql{cycldet3}
$$
where $\mbox{{\ssb\char50}}_s$ is the two--dimensional spinor representation of ${\bf T}'$, ${\bf O}'$
or ${\bf I}'$.
Numerical evaluation of (\puteqn{cycldet3}) using (\puteqn{zedash1}) produces,
$$
{\rm det\,}({\bf T}')=0.652112\,,\quad {\rm det\,}({\bf O}')=0.439366\,,\quad {\rm det\,}({\bf I}')=0.260126,
$$
for conformal in three, and,
$$
{\rm det\,}({\bf T}')=0.663348\,,\quad {\rm det\,}({\bf O}')=0.454594\,,\quad {\rm det\,}({\bf I}')=0.272797,
$$
for conformal in four dimensions.
Just to illustrate the sort of calculation that is possible, I consider a (scalar) field belonging to
the fundamental rep of an internal U(3) on icosahedral space, S$^3/{{\bf I}'}$. There are
four non--trivial elements of $\rm Hom\big({\bf I}',U(3)\big)$, {\it viz} $\mbox{{\ssb\char51}}$, $\mbox{{\ssb\char51}}'$,
$\mbox{{\ssb\char49}}\oplus \mbox{{\ssb\char50}}_s$, $\mbox{{\ssb\char49}}\oplus \mbox{{\ssb\char50}}_s'$, together with the trivial one,
$\mbox{{\ssb\char49}}\oplus\mbox{{\ssb\char49}}\oplus\mbox{{\ssb\char49}}$.\footnotecheck\defaultoption[]\@footnote{ The irreps are labeled by their dimensions.
The subscript $s$ stands for `spinor'.} Inducing representations shows that, [\putref{Dowgt}],
$$\eqalign{
{\cal S}(\mbox{{\ssb\char50}}_s')&={\cal S}(1;S)-{1\over2}\,{\cal S}(5;T)-{\cal S}(1;T)\cr
{\cal S}(\mbox{{\ssb\char51}}')&={1\over2}\,{\cal S}(2;R)-{\cal S}(2;T)\cr
{\cal S}(\mbox{{\ssb\char51}})&={1\over2}\,{\cal S}(2;R)-{\cal S}(4;T)\,.
}
$$
Together with (\puteqn{cycldet2}) and (\puteqn{cycldet3}) these yield the following numbers for
the 4d--conformal Laplacian,
$$\eqalign{
{\rm det\,}(\mbox{{\ssb\char49}}\oplus\mbox{{\ssb\char49}}\oplus\mbox{{\ssb\char49}})&=0.000391\cr
{\rm det\,}(\mbox{{\ssb\char49}}\oplus\mbox{{\ssb\char50}}_s)&=0.019929\cr
{\rm det\,}(\mbox{{\ssb\char49}}\oplus\mbox{{\ssb\char50}}_s')&=0.021993\cr
{\rm det\,}(\mbox{{\ssb\char51}})&=0.164545\cr
{\rm det\,}(\mbox{{\ssb\char51}}')&=2.00091\cr
}
$$
A crude physical conclusion might be that, if the smallest effective action ($-$logdet/2) is
preferred, on the basis of some unspecified dynamics, then the $\mbox{{\ssb\char51}}'$ twisting stands
out as the lowest, for this artificial situation. The symmetry is broken from U(3) to U(1).
\section{\bf 6. Application to thermodynamics on factored spaces}
In [\putref{DandJ}] a related analysis was made of Witten's symmetry breaking mechanism
induced, as a toy model, by fluxes in geometries involving factors of the three--sphere. The
Casimir energies were evaluated in the different gauge vacua and closed forms were found
for lens spaces, the fields being in the adjoint representation of ${\cal G},=$ SU($n$). The
space--time was a factored Einstein universe, $T\times$S$^3/\Gamma'$. In
[\putref{Jadhav2,DandJ2}], the system was put at a finite temperature, and symmetry
restoration for increasing temperature analysed. The object of present interest, the effective
action on S$^3/\Gamma'$, enters into this scheme through the high temperature expansions of
thermodynamic quantities such as the free energy.
General asymptotic series are given in [\putref{DandK}] in terms of the coefficients of the
short--time expansion of the heat--kernel, apart from one term which stems from the zero
mode on the thermal circle. To a constant factor, this term is $Z'(0,\Gamma',\alpha)$. Furthermore,
for the case of conformal propagation in four--dimensions, and scalar fields, the
heat--kernel expansion terminates with the first volume, or Weyl, term, on
fixed--point--free factors of the three-sphere, as already stated. The high temperature
expansions, [\putref{DandK}], of the, here finite, free energy, internal energy and entropy
then simplify drastically to, (for a real scalar),
$$\eqalign{
F&\sim -{\pi^4\over45|\Gamma'|}\beta^{-4}-{1\over2}\,Z'(0,\Gamma',0)\,\beta^{-1}\cr
E&\sim {\pi^4\over15|\Gamma'|}\beta^{-4}\cr
S&\sim {4\pi^4\over45|\Gamma'|}\beta^{-3}+{1\over2}\,Z'(0,\Gamma',0)\,,
}
\eql{hitemp}
$$
respectively, with the error being exponentially small.
These expansions have been rederived recently by Asorey {\it et al}, [\putref{ABAS}]. This
reference also contains expressions for the complete thermodynamic quantities.
Thermal theory on factors of the three--sphere had been set up by Kennedy, {\it passim},
[\putref{Kennedy}], and was pursued in more detail by Unwin, [\putref{Unwin1}].
As noted in [\putref{ChandD}] and [\putref{Dowsut}], knowledge of the degeneracy generating
functions allows one to write down the free energy, say, with minimum fuss, in terms of the
cylinder kernel, $K^{1/2}$, as,
$$
F(\beta)=E_0-{1\over\beta}\sum_{m=1}^\infty {1\over m}\, K^{1/2}(m\beta)\,,
$$
where $E_0$ is the zero temperature internal energy or vacuum energy, or Casimir energy.
In particular for twisted $q$--lens spaces, using (\puteqn{cylk2}),
$$
F(\beta,q,\delta)=E_0(q,\delta)+{1\over\beta}\sum_{m=1}^\infty {1\over m^2}\,
{d\over d\beta}{\cosh(qm\beta\delta)\over\sinh m\beta\,\sinh qm\beta/2}\,,
\eql{freen}
$$
which is a way of rewriting the standard statistical sum over states.
The twisted Casimir energies are effectively available in [\putref{Dowsut}]. Equation
(\puteqn{cylk2}) for the twisted cylinder kernel should be compared with equation (46) in
[\putref{Dowsut}]. The integer `degrees' there, here transcribe to the reals,
$$
\delta_0=q\,\delta\,,\quad \delta_1=q/2\,,\quad \delta_2=1\,,
$$
and the Casimir energies are given in [\putref{Dowsut}] equn.(52) \footnotecheck\defaultoption[]\@footnote{ The $\delta_0$ in
the numerator should read $\delta_0^2$.}. Thus,
$$
E_0(q,\delta)={(7-120\delta^2+240\delta^4)\,q^4+40(1-12\delta^2)\,q^2+112\over2880\,q}\,.
\eql{casen}
$$
I have multiplied by two in (\puteqn{freen}) and (\puteqn{casen}) to allow for a complex field
$\in$ U(1).
Everything in (\puteqn{freen}) is explicit and computable. In figure 4, I plot the free energy
versus the temperature, $T=1/\beta$, for an untwisted field for $q=1,2,3$ and in figure 5
the free energy on the $4$--lens for several twistings, {\it i.e. }\ fluxes. The high temperature
behaviour, (\puteqn{hitemp}), can just about be seen.
\epsfxsize=5truein \epsfbox{freen5.eps}
\epsfxsize=5truein \epsfbox{freen6.eps}
\section{\bf 7. Two--sided quotients}
The same basic procedure can be applied to the inhomogeneous quotients, in the simplest
case to the general lens spaces, $L(q;\lambda_1,\lambda_2)$. The only difference is that, when
the degeneracy is being found, the necessary group average has to be done element by
element. Without going into the standard details of the construction of the lens space, the
elements of $\Gamma'$ are determined by the angles $\beta_1$ and $\beta_2$,
$$
{\beta_1\over2\pi}={ p\nu_1\over q}\,,\quad {\beta_2\over2\pi}={p\nu_2\over q}\,,
\eql{angles1}
$$
where $p=0,\ldots,q-1$ labels $\gamma$. $\nu_1$ and $\nu_2$ are integers coprime to
$q$, with $\lambda_1$ and $\lambda_2$ their mod $q$ inverses.\footnotecheck\defaultoption[]\@footnote{ Some authors label the
lens space, equivalently, as $L(q;\nu_1,\nu_2)$.}
By an appropriate selection of a $q$-th root of unity, it is possible to set $\nu_1=1$, {\it i.e. }\
$\lambda_1=1$, without loss of generality. Any pair, $(\nu_1,\nu_2)$, can be reduced to
$(1,\nu)$ by multiplying through by the mod $q$ inverse of $\nu_1$.
The simple, one--sided lens space, $L(q;1,1)$, corresponds to setting $\nu=1$ so that
$\theta_L=0$, $\theta_R=2\pi p/q$.
The degeneracy generating function is, [\putref{Dowsut}], \footnotecheck\defaultoption[]\@footnote{ Higher dimensional
spaces can be treated similarly {\it e.g.}\ [\putref{Ray,BandF}]. A few remarks are in section 8.}
$$\eqalign{
\sum_{l=0}^\infty d_l(q;\lambda_1,\lambda_2) t^l&= {1\over q}\sum_{p=0}^{q-1}
\sum_{l=0}^\infty
{\cos l\beta_1-\cos l\beta_2\over\cos\beta_1-\cos\beta_2}\,t^l\cr
&=t(1-t^2){1\over q}\sum_{p=0}^{q-1}\bigg({1\over 1+t^2-2t\cos\beta_1}\bigg)
\bigg({1\over 1+t^2-2t\cos\beta_2}\bigg)\,,
}
\eql{genfun3}
$$
with (\puteqn{angles1}). The group sum over $p$ has to be left until last.
Again putting $t=e^{-\tau}$, the 4d--conformal cylinder kernel this time is,
$$
K^{1/2}(\tau;\lambda_1,\lambda_2)={\sinh\tau\over 2q}\sum_{p=0}^{q-1}
{1\over (\cosh\tau-\cos\beta_1)(\cosh\tau-\cos\beta_2)}\,,
\eql{cylk3}
$$
with (\puteqn{angles1}). When $K^{1/2}(\tau)$ is antisymmetric in $\tau$, as here and
earlier,\footnotecheck\defaultoption[]\@footnote{ The antisymmetry arises essentially from the harmonic factor,
$(1-t^2)$, in the generating function.} the familiar contour routine can be followed leading
to the general expression for $Z'(0,\alpha)$,
$$\eqalign{
Z'(0,\alpha)&={1\over2}\int_0^\infty dx\, {\rm Re\,}\bigg( {K^{1/2}(\tau)\over\tau}\,
\cosh\alpha\tau\bigg)\,,\quad \tau=x+i\Delta\,,
}
\eql{zedash01}
$$
(of which (\puteqn{zedash0}) and (\puteqn{zedash1}) are examples). $\Delta$ lies between $0$ and
the imaginary part of the first pole of the cylinder kernel off the real $\tau$ axis.
As explained above, (\puteqn{angles1}), it is sufficient to set $\nu_1=1$ and $\nu_2=\nu$ with
$\nu$ coprime to $q$. Then we have the corresponding $-\log{\rm det\,}$,
$$\eqalign{
Z_q'(0,\alpha;\nu)&={1\over q}\sum_{p=0}^{q-1}\int_0^\infty dx\,
{\rm Re\,}\bigg( { \sinh\tau\,\cosh\alpha\tau\over\tau (\cosh\tau-\cos\beta_1)(\cosh\tau-\cos\beta_2)}\,
\bigg)\,,
}
\eql{zedash02}
$$
with $0<\Delta<2\pi/q$. This is easily computed.
It is convenient to make the cyclic order, $q$, an odd prime so that all $\nu$ from 1 to
$q-1$ are relevant. I refer to $\nu$ as the `twist' of the manifold (as opposed to $\delta$, the
twist of the bundle which, in this section, is untwisted).
The logdet for $q=29$ and $1\le\nu\le28$ is plotted in figure 6 for the 4d--conformal
Laplacian. The effects of the isomorphisms between lens spaces can be seen. This is the
same case figured in [\putref{Dowcycl}] computed by a different method so a comparison can
be made. The results for the 3d--conformal Laplacian are very similar.
\epsfxsize=5truein \epsfbox{twistedlens6.eps}
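The element-by-element group average in (\puteqn{zedash02}) is a few lines of code; the sketch below is my reconstruction under the same contour conventions as before:

```python
# Sketch of eq. (zedash02): -logdet on the two-sided lens space L(q;1,nu),
# with the group average over p done element by element.
import numpy as np
from scipy.integrate import quad

def zdash_L(q, nu, alpha):
    Delta = np.pi / (q + 1.0)
    def integrand(x):
        if x > 325.0:                    # exponentially small tail
            return 0.0
        t = x + 1j * Delta
        tot = 0.0
        for p in range(q):
            b1, b2 = 2 * np.pi * p / q, 2 * np.pi * p * nu / q
            tot += (np.sinh(t) * np.cosh(alpha * t)
                    / (t * (np.cosh(t) - np.cos(b1))
                         * (np.cosh(t) - np.cos(b2)))).real
        return tot / q
    return quad(integrand, 0.0, np.inf, limit=400)[0]
```

For $q=1$ the integrand reduces to the three--sphere form, and the lens-space isomorphism $\nu\leftrightarrow\nu^{-1}$ (mod $q$) is visible directly, {\it e.g.}\ $L(5;1,2)$ and $L(5;1,3)$.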
The {\it global} finite temperature considerations of section 6 can be directly extended to
the inhomogeneous quotients. The Casimir energies would again be required to find the
analogue of (\puteqn{freen}) for the total free energy. Analytical results for these are
contained in [\putref{Dowded}] although, to be in keeping with the spirit of this paper, a
simple numerical summation of the group averages would suffice.
\section{\bf 8. Higher dimensions}
The notion of lens space extends to higher odd--dimensional sphere quotients,
S$^{2e-1}/\mbox{\open\char90}_q$ ({\it e.g.}\ [\putref{franz,Wolf}]). If $q=2e-1$ is an odd prime, they are
catalogued by $e$ integers, $\nu_i$ ($i=1$ to $e$), the twists, coprime to $q$. The cylinder
kernel for the Laplacian, conformal in $2e$ dimensions, is then,
$$
K^{1/2}(\tau;\bnu)={\sinh\tau\over 2q}\sum_{p=0}^{q-1}\prod_{i=1}^{e}
{1\over (\cosh\tau-\cos\beta_i)}\,,
\eql{cylk7}
$$
where the angles $\beta_i$ are given by,
$$
\bbe={2\pi\bnu\over q}\,,
$$
and the previous numerical procedure is effected without difficulty. If one just requires a
number, this approach is easier than the more analytical routes leading to Hurwitz $\zeta$--functions, say.
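A sketch of this procedure follows. The overall factor $2^{3-e}$ is my own normalisation guess, fixed by demanding that $e=2$ reproduce (\puteqn{zedash02}); only that case is checked below, so the higher-$e$ output should be taken as illustrative.

```python
# Sketch: the cylinder kernel (cylk7) for S^{2e-1}/Z_q fed into the contour
# routine; nus = (nu_1,...,nu_e) are the twists.
import numpy as np
from scipy.integrate import quad

def zdash_higher(q, nus, alpha):
    e = len(nus)
    Delta = np.pi / (q + 1.0)
    def integrand(x):
        if x > 325.0:                    # exponentially small tail
            return 0.0
        t = x + 1j * Delta
        tot = 0.0
        for p in range(q):
            prod = t + 0j                # the 1/tau factor folded in
            for nu in nus:
                prod = prod * (np.cosh(t) - np.cos(2 * np.pi * p * nu / q))
            tot += (np.sinh(t) * np.cosh(alpha * t) / prod).real
        return 2.0**(3 - e) * tot / (2.0 * q)
    return quad(integrand, 0.0, np.inf, limit=400)[0]
```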
\section{\bf 9. Minimal coupling}
The value $\alpha=1$ gives minimal coupling. There is then a zero mode ($l=1$) which shows
up as an infra--red divergence as $\tau$ tends to infinity in the relevant integrals such as
(\puteqn{zedash0}). Removing the zero mode is conventional and amounts to subtracting
$e^{-\tau}$ from the cylinder functions. This however destroys their antisymmetry and the
contour method used earlier is impossible. Instead I employ a different approach which goes
back to the early days of asymptotic analysis, see [\putref{BaandH}].
To begin with, I explicitly remove the (single) zero mode from the generating functions
(cylinder kernels) by defining the subtracted functions,
$$\eqalign{
\overline K^{1/2}(\tau)&=K^{1/2}(\tau)-e^{-\tau}\cr
\overline H(\tau)&=H(\tau)-e^{-\tau}\cr
\overline Z(s,\Gamma',\alpha)&= Z(s,\Gamma',\alpha)-{1\over(1-\alpha^2)^s}\,,
}
\eql{subf}
$$
and note that $\overline Z(s,\Gamma',1)$ is the minimal coupling $\zeta$--function\ by definition.
Differentiating the basic $\zeta$--function\ definition (\puteqn{zet}) {\it twice} with respect to $\alpha^2$ yields
a higher resolvent,
$$\eqalign{
\bigg({d\over d\alpha^2}\bigg)^2\,\overline Z(s,\Gamma',\alpha)&=s(s+1)\sum_{l=2}^\infty{D_l(\Gamma')
\over(l^2-\alpha^2)^{s+2}}\,.
}
\eql{zetd}
$$
The sum converges at $s=0$ and so
$$
\bigg({d\over d\alpha^2}\bigg)^2\,\overline Z'(0,\Gamma',\alpha)=\sum_{l=2}^\infty{D_l(\Gamma')
\over(l^2-\alpha^2)^{2}}\,.
$$
The idea is to integrate this quantity twice, determining the constants of integration from the
reference point $\alpha=0$ since $\overline Z(s,\Gamma',0)=Z(s,\Gamma',0)-1$, is known,
[\putref{Dowsut}].\footnotecheck\defaultoption[]\@footnote{ Any reference point could be chosen. This seems a convenient
one, numerically.}
First one has,
$$\eqalign{
\sum_{l=2}^\infty{D_l(\Gamma')
\over(l^2-\alpha^2)^{2}}
&=\sqrt\pi\int_0^\infty d\tau\,\overline K_{\Gamma'}^{1/2}(\tau)\,
\bigg({\tau\over2\alpha}\bigg)
^{3/2}\,I_{3/2}(\alpha\tau)\,.
}
\eql{zetd2}
$$
I now use the fact that, ({\it cf } (\puteqn{cylk})),
$$
\overline K_{\Gamma'}^{1/2}(\tau) =-{d\over d\tau}\,\overline H(\tau)\,,
$$
to perform an integration by parts (the endpoint contributions vanish), which gives,
$$\eqalign{
\sum_{l=2}^\infty{D_l(\Gamma')
\over(l^2-\alpha^2)^{2}}
&=\sqrt\pi\int_0^\infty d\tau\,\overline H(\tau)\,{d\over d\tau}
\bigg({\tau\over2\alpha}\bigg)^{3/2}\,I_{3/2}(\alpha\tau)\cr
&=\sqrt\pi\int_0^\infty d\tau\,\overline H(\tau)\,
\bigg({\tau\over2\alpha}\bigg)^{3/2}\,I_{1/2}(\alpha\tau)\cr
&={1\over2}\int_0^\infty d\tau\,\tau\,\overline H(\tau)\,{\sinh\alpha\tau\over\alpha}\,.
}
\eql{zetd3}
$$
A first integration with respect to $\alpha^2$ is now easy to perform, yielding
$$
{d\over d\alpha^2}\,\overline Z'(0,\Gamma',\alpha)=\int_0^\infty d\tau\,
\overline H(\tau) (\cosh\alpha\tau-1)+
{d\over d\alpha^2}\,\overline Z'(0,\Gamma',\alpha)\bigg|_{\alpha=0}\,.
\eql{diff}
$$
The constant of integration (the final term) is found from the standard relation,
$$\eqalign{
{d\over d\alpha^2}\,\overline Z(s,\Gamma',\alpha)&=s\sum_{l=2}^\infty{D_l(\Gamma')
\over(l^2-\alpha^2)^{s+1}}=s\overline Z(s+1,\Gamma',\alpha)\,,\cr
}
\eql{zetd4}
$$
and one obtains for (\puteqn{diff}),
$$\eqalign{
{d\over d\alpha^2}\,\overline Z'(0,\Gamma',\alpha)&=\int_0^\infty d\tau\, \overline H(\tau) (\cosh\alpha\tau-1)+
\overline Z(1,\Gamma',0) \cr
&=\int_0^\infty d\tau\, \overline H(\tau) (\cosh\alpha\tau-1)+
Z(1,\Gamma',0)-1\,, \cr
}
$$
using (\puteqn{subf}). I evaluate the constant later.
A final integration with respect to $\alpha^2$ is required, and produces,
$$\eqalign{
\overline Z'(0,\Gamma',\alpha)
=\int_0^\infty {d\tau\over\tau^2}\, &\overline H(\tau) \big(2\alpha\tau\sinh\alpha\tau
-2\cosh\alpha\tau+2-\alpha^2\tau^2\big)+\cr
&\alpha^2 \big(Z(1,\Gamma',0)-1\big) + Z'(0,\Gamma',0)\,,
}
\eql{zetot}
$$
again using (\puteqn{subf}). The last term can be found numerically from the method in this
paper, {\it e.g.}\ (\puteqn{zedash0}), as can the penultimate term. To be in harmony with the
numerical viewpoint, the contour technique of [\putref{CaandWe}] allows the values of
$Z(n,\Gamma',\alpha)$ ($n\in\mbox{\open\char90}_+$) to be obtained easily. For example, I find, after partial
integration, for $0\le \alpha<1$,
$$
Z(1,\Gamma',\alpha)=\int_0^{\infty} dx\, {\rm Re\,} \big( H(\tau)\,\cosh\alpha\tau\big)\,,
\quad \tau=x+i\Delta\,.
\eql{zed1}
$$
As before, it is sufficient to consider the cyclic quotient, $\Gamma'=\mbox{\open\char90}_q$, for which $H(\tau)$
is given in (\puteqn{cylk}) and so ($\tau=x+i\Delta, 0<\Delta<4\pi/q$),
$$
Z(1,q,\alpha)={1\over2}\int_0^{\infty} dx\, {\rm Re\,}
{\coth( q\tau/2)\,\cosh\alpha\tau\over\sinh\tau}\,.
\eql{zed12}
$$
Finally from (\puteqn{zetot}), the minimal case is got by setting $\alpha=1$ to give my
computational formula for the minimal $-$logdet on homogeneous lens spaces,
$$\eqalign{
\overline Z'(0,q,1)
=\int_0^\infty {d\tau\over\tau^2}\,
&\bigg( {\coth( q\tau/2)\over2\sinh\tau}-e^{-\tau}\bigg) \big(2\tau\sinh\tau
-2\cosh\tau+2-\tau^2\big)\cr
& +Z(1,q,0)-1+ Z'(0,q,0)\,.
}
\eql{zetot2}
$$
One sees that the method of back integration has produced a systematic method of
subtraction that simultaneously tempers the UV and IR divergences.
The value $\alpha=0$ gives the 4d--conformal operator and, although not needed, closed form
expressions for the relevant quantities can be found in [\putref{Dowsut}]. For example, for
even--ordered lens spaces, one has the simple result,
$$
Z(1,2q,0)={\pi\over4q}\sum_{p=1}^{q-1} {\rm cosec\,} (\pi p/q)\,.
\eql{zed13}
$$
This agrees numerically with (\puteqn{zed12}) from which, of course, it can be derived.
The case $q=2$ provides a simple check. The quotient manifold is projective three--space,
P$^3$. Periodicity restricts $l$ to being odd, [\putref{DandB}], and the $\zeta$--function\ rewrites to the
difference of those on the three--sphere for minimal, `$m$', ($\alpha=1$) and 3d--conformal,
`$c$', ($\alpha=1/2$) propagation. Precisely,
$$
\zeta^m_{P^3}(s)=\zeta^m_{S^3}(s)-2^{2-2s}\,\zeta^c_{S^3}(s)\,,
$$
so that,
$$
\zeta^{m'}_{P^3}(0)=\zeta^{m'}_{S^3}(0)-4\,\zeta^{c'}_{S^3}(0)\,,
\eql{proj}
$$
where I have used the fact that $\zeta^c_{S^3}(0)$ vanishes, reflecting the closed nature of
S$^3$ and the absence of a zero mode.
Both values on the right--hand side of (\puteqn{proj}) for the full sphere were evaluated
analytically a long time ago giving $\zeta^{m'}_{P^3}(0)=-0.695171$, agreeing with the
quadrature, (\puteqn{zetot2}).\footnotecheck\defaultoption[]\@footnote{Simplifications arise for this case. For example
$Z(1,2,0)$ vanishes.}
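The whole of (\puteqn{zetot2}) is a short script; the version below is my reconstruction (the series for the bracket at small $\tau$ is an implementation device to avoid catastrophic cancellation, not part of the formula):

```python
# Sketch of the minimal-coupling formula (zetot2), with (zed12) and
# (zedash0) supplying the two reference constants.
import numpy as np
from scipy.integrate import quad

def zdash0(q, alpha):                    # eq. (zedash0)
    Delta = np.pi / (q + 1.0)
    def f(x):
        if (q + 2.0) * x > 650.0:
            return 0.0
        t = x + 1j * Delta
        num = (q * np.sinh(t) + np.cosh(t) * np.sinh(q * t)) * np.cosh(alpha * t)
        return (num / (2.0 * t * np.sinh(t)**2 * np.sinh(q * t / 2.0)**2)).real
    return quad(f, 0.0, np.inf, limit=400)[0]

def z_at_1(q):                           # eq. (zed12) at alpha = 0
    Delta = np.pi / (q + 1.0)
    def f(x):
        if (q + 2.0) * x > 650.0:
            return 0.0
        t = x + 1j * Delta
        return 0.5 * (np.cosh(0.5 * q * t) / (np.sinh(0.5 * q * t) * np.sinh(t))).real
    return quad(f, 0.0, np.inf, limit=400)[0]

def minimal_zdash(q):                    # Zbar'(0,q,1) of eq. (zetot2)
    def f(t):
        if t > 350.0:
            return 0.0
        kern = (np.cosh(0.5 * q * t) / (2.0 * np.sinh(0.5 * q * t) * np.sinh(t))
                - np.exp(-t))
        # 2t sinh t - 2cosh t + 2 - t^2, by series for small t
        B = (t**4 / 4 + t**6 / 72 if t < 0.05
             else 2 * t * np.sinh(t) - 2 * np.cosh(t) + 2 - t * t)
        return kern * B / t**2
    return quad(f, 1e-12, np.inf, limit=400)[0] + z_at_1(q) - 1.0 + zdash0(q, 0.0)
```

For $q=2$ this reproduces the projective--space check: $Z(1,2,0)$ vanishes and the quadrature returns $-0.695171$.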
There do not seem to be many published {\it numerical} values for lens space
determinants. Nash and O'Connor, [\putref{NandO}], give some quite involved expressions
with their corresponding plots, which seem to agree with my results. Figure 7 below gives
an indication of my values, which, as an aside, suggest an inflection. For comparison, the
upper curve is for the 4d conformal Laplacian (see fig.1), {\it i.e. }\ the last term in
(\puteqn{zetot2}).
\epsfxsize=5truein \epsfbox{minimal7.eps}
Using $\alpha=0$ as a reference point is related to the method of binomially expanding the $\zeta$--function,
(\puteqn{zet}), in powers of $\alpha^2$, utilised in [\putref{Dowcmp}]; see [\putref{DoandKi}].
\section{\bf 10. Discussion}
Relatively simple expressions allow the determinants on any homogeneous quotient of the
three--sphere to be computed for any mass and for any twisting by an internal symmetry
group. The inhomogeneous case has been treated in less detail.
I have given some examples and plotted a few graphs, the physical significance of which is
somewhat moot, and are meant only to illustrate possibilities. It is hoped that the numerical
approach will be useful in confirming analytical results.
I have discussed only scalar fields, but the extensions to spins half and one are
straightforward, {\it cf }\ [\putref{Dowsut}].
A different technique has been used for minimal coupling which involves back integrating the
expression for a higher resolvent. This method could also be used for the other couplings
although then the reference $\alpha=0$ values would need to be found independently.
Another set of quotients that lend themselves readily to similar treatment are the flat ones,
classified by Hantzsche and Wendt, on which a certain amount of work has been done. I draw
attention now only to the early analysis of Unwin, [\putref{Unwin1,Unwin2}], on the finite
temperature field theory and resulting thermodynamics. Earlier, zero temperature
calculations can be found in [\putref{DandB,BandD}] and de Witt, Hart and Isham,
[\putref{DWHI}].
\vglue 20truept
\noindent{\bf References.} \vskip5truept
\begin{putreferences}
\reference{Lindelof}{Lindel\"of,E. {\it Le Calcul des Residues} (Gauthier--Villars, Paris,1904).}
\reference{franz}{Franz,W. \jram {173}{1935}{245}.}
\reference{DWHI}{De Witt, B.S., Hart,C.F. and Isham, C.J. {\it Physica} {\bf 96} (1979) 197.}
\reference{CLM}{Castro,A., Lahkari,N. and Maloney,A. \prD{83}{2011}{124027}.}
\reference{Gang}{Gang,D.{\it Chern--Simons theory on L(p,q) lens spaces and Localization},
\break ArXiv:0912.4664.}
\reference{ABAS}{Asorey,M, Beneventano, C.G., D'Ascanio, D. and Santangelo, E.M.
{\it Thermodynamics of conformal fields in topologically non--trivial space--time
backgrounds}, ArXiv:1212.6220.}
\reference{AFS}{Alday,L.F., Fluder,M. and Sparks,J. {\it JHEP} {\bf10} (2012) {057}.}
\reference{Radicevic}{Radi\u{c}evi\'c,D. {\it Singlet Vector Models on Lens Spaces},
ArXiv:1210.0255.}
\reference{dowaustin}{Dowker,J.S. 1979 {\it Selected topics in topology and quantum
field theory}
(Lectures at Center for Relativity, University of Texas, Austin).}
\reference{dowstat}{Dowker,J.S. \jpa{18}{1985}{3521}.}
\reference{Dowded}{Dowker,J.S. \cqg{21}{2004}{4977}.}
\reference{Dowgt}{Dowker,J.S. {\it Group theory aspects of spectral problems on
spherical factors}, ArXiv:0907.1309.}
\reference{LandD}{Laidlaw,M. and De Witt, C. \prD{3}{1971}{1375}.}
\reference{CaandC}{Calabrese,P. and Cardy,J. {\it J.Stat.Phys.} {\bf 0406} (2004) 002.}
\reference{MFS}{Metlitski,M.A., Fuertes,C.A. and Sachdev,S. \prB{80}{2009}{115122}.}
\reference{Gromes}{Gromes, D. \mz{94}{1966}{110}.}
\reference{LMS}{Lewkowycz,A., Myers,R.C. and Smolkin,M. {\it Observations on
entanglement entropy in massive QFTs.} ArXiv:1210.6858.}
\reference{Bierens}{Bierens de Haan,D. {\it Nouvelles tables d'int\'egrales
d\'efinies}, (P.Engels, Leiden, 1867).}
\reference{DowGJMS}{Dowker,J.S. \jpa{44}{2011}{115402}.}
\reference{Dowsut}{Dowker,J.S. \cqg{21}{2004}{4247}.}
\reference{Dowcycl}{Dowker,J.S. \jpa{38}{2005}{1049}.}
\reference{Dowren}{Dowker,J.S. {\it Sphere R\'enyi entropies}, ArXiv:1212.2098.}
\reference{Doweven}{Dowker,J.S. {\it Entanglement entropy on even spheres.}
ArXiv:1009.3854.}
\reference{Dowodd}{Dowker,J.S. {\it Entanglement entropy on odd spheres.}
ArXiv:1012.1548.}
\reference{DeWitt}{DeWitt,B.S. {\it Quantum gravity: the new synthesis} in
{\it General Relativity} edited by S.W.Hawking and W.Israel (CUP,Cambridge,1979).}
\reference{Nielsen}{Nielsen,N. {\it Handbuch der Theorie von Gammafunktion}
(Teubner,Leipzig,1906).}
\reference{KPSS}{Klebanov,I.R., Pufu,S.S., Sachdev,S. and Saddi,B.R.
{\it JHEP} 1204 (2012) 074.}
\reference{KPS2}{Klebanov,I.R., Pufu,S.S. and Safdi,B.R. {\it F-Theorem without
Supersymmetry} 1105.4598.}
\reference{KNPS}{Klebanov,I.R., Nishioka,T, Pufu,S.S. and Safdi,B.R. {\it Is Renormalized
Entanglement Entropy Stationary at RG Fixed Points?} 1207.3360.}
\reference{Stern}{Stern,W. \jram {79}{1875}{67}.}
\reference{Gregory}{Gregory, D.F. {\it Examples of the processes of the Differential
and Integral Calculus} 2nd. Edn (Deighton,Cambridge,1847).}
\reference{Juhl}{Juhl,A. {\it On conformally covariant powers of the Laplacian}
ArXiv:math.DG/0905.3993.}
\reference{Minak}{Minakshisundaram,S. {\it J. Ind. Math. Soc.} {\bf 13} (1949) 41.}
\reference{CaandWe}{Candelas,P. and Weinberg,S. \np{237}{1984}{397}.}
\reference{Chodos1}{Chodos,A. and Myers,E. \aop{156}{1984}{412}.}
\reference{Branson}{Branson,T. \tams{347} {1995}{3671}.}
\reference{Graham}{Graham,C.R. SIGMA {\bf 3} (2007) 121.}
\reference{Graham2}{Graham,C.R. {\it Rend.Circ.Mat.Palermo Suppl.} No.63 (2000) 31.}
\reference{Gover}{Gover,A.R. {\it Laplacian operators and Q-curvature on
conformally Einstein manifolds} ArXiv:math.DG/0506037.}
\reference{Diaz}{Diaz,D.E. JHEP {\bf 7} (2008)103.}
\reference{Laflamme}{Laflamme,R. \np{324} {1989}{233}.}
\reference{NFM}{De Nardo,L., Fursaev,D.V. and Miele,G. \cqg{14}{1997}{3269}.}
\reference{BiandD}{Birrell,N.D, and Davies,P.C.W. {\it Quantum fields in curved
space} (Cambridge Univ. Press, Cambridge, 1982).}
\reference{MRR}{Marolf,D., Rangamani,M. and Van Raamsdonk,M. {\it
Holographic Models of de Sitter QFTs}, ArXiv: 1007.3996.}
\reference{MilneT}{Milne--Thomson,L.M. {\it The Calculus of Finite
Differences} (MacMillan,London, 1933).}
\reference{Birmingham}{Birmingham,D. \prD{36}{1987}{3037}.}
\reference{Dowcascone}{Dowker,J.S. \prD{36}{1987}{3095}.}
\reference{Dowcos}{Dowker,J.S. \prD{36}{1987}{3742}.}
\reference{Dowtherm}{Dowker,J.S. \prD{18}{1978}{1856}.}
\reference{Dowgeo}{Dowker,J.S. \cqg{11}{1994}{L55}.}
\reference{ApandD2}{Dowker,J.S. and Apps,J.S. \cqg{12}{1995}{1363}.}
\reference{HandW}{Hertzberg,M.P. and Wilczek,F. \prl{106}{2011}{050404}.}
\reference{KandB}{Kamela,M. and Burgess,C.P. \cjp{77}{1999}{85}.}
\reference{Dowhyp}{Dowker,J.S. \jpa{43}{2010}{445402}.}
\reference{LNST}{Lohmayer,R., Neuberger,H, Schwimmer,A. and Theisen,S.
\plb{685}{2010}{222}.}
\reference{Allen2}{Allen,B. PhD Thesis, University of Cambridge, 1984.}
\reference{MyandS}{Myers,R.C. and Sinha, A. \prD{82}{2010}{046006}.}
\reference{RyandT}{Ryu,S. and Takayanagi,T. JHEP {\bf 0608}(2006)045.}
\reference{CaandH}{Casini,H. and Huerta,M. {\it Entanglement entropy
for the n--sphere},\break arXiv:1007.1813.}
\reference{CaandH3}{Casini,H. and Huerta,M. \jpa {42}{2009}{504007}.}
\reference{Solodukhin}{Solodukhin,S.N. \plb{665}{2008}{305}.}
\reference{Solodukhin2}{Solodukhin,S.N. {\it Entanglement entropy on round spheres}
ArXiv: 1008.4314.}
\reference{CaandW}{Callan,C.G. and Wilczek,F. \plb{333}{1994}{55}.}
\reference{FandS1}{Fursaev,D.V. and Solodukhin,S.N. \plb{365}{1996}{51}.}
\reference{FandS2}{Fursaev,D.V. and Solodukhin,S.N. \prD{52}{1995}{2133}.}
\reference{Fursaev}{Fursaev,D.V. \plb{334}{1994}{53}.}
\reference{Donnelly2}{Donnelly,H. \ma{224}{1976}{161}.}
\reference{ApandD}{Apps,J.S. and Dowker,J.S. \cqg{15}{1998}{1121}.}
\reference{FandM}{Fursaev,D.V. and Miele,G. \prD{49}{1994}{987}.}
\reference{FandM2}{Fursaev,D.V. and Miele,G. \prD{}{}{}.}
\reference{Dowker2}{Dowker,J.S.\cqg{11}{1994}{L137}.}
\reference{Dowker1}{Dowker,J.S.\prD{50}{1994}{6369}.}
\reference{FNT}{Fujita,M.,Nishioka,T. and Takayanagi,T. JHEP {\bf 0809}
(2008) 016.}
\reference{Hund}{Hund,F. \zfp{51}{1928}{1}.}
\reference{Elert}{Elert,W. \zfp {51}{1928}{8}.}
\reference{Poole2}{Poole,E.G.C. \qjm{3}{1932}{183}.}
\reference{Bellon}{Bellon,M.P. {\it On the icosahedron: from two to three
dimensions}, arXiv:0705.3241.}
\reference{Bellon2}{Bellon,M.P. \cqg{23}{2006}{7029}.}
\reference{McLellan}{McLellan,A,G. \jpc{7}{1974}{3326}.}
\reference{Boiteaux}{Boiteaux, M. \jmp{23}{1982}{1311}.}
\reference{HHandK}{Hage Hassan,M. and Kibler,M. {\it On Hurwitz
transformations} in {Le probl\`eme de factorisation de Hurwitz}, Eds.,
A.Ronveaux and D.Lambert (Fac.Univ.N.D. de la Paix, Namur, 1991),
pp.1-29.}
\reference{Weeks2}{Weeks,Jeffrey \cqg{23}{2006}{6971}.}
\reference{LandW}{Lachi\`eze-Rey,M. and Weeks,Jeffrey, {\it Orbifold construction of
the modes on the Poincar\'e dodecahedral space}, arXiv:0801.4232.}
\reference{Cayley4}{Cayley,A. \qjpam{58}{1879}{280}.}
\reference{JMS}{Jari\'c,M.V., Michel,L. and Sharp,R.T. {\it J.Physique}
{\bf 45} (1984) 1. }
\reference{AandB}{Altmann,S.L. and Bradley,C.J. {\it Phil. Trans. Roy. Soc. Lond.}
{\bf 255} (1963) 199.}
\reference{CandP}{Cummins,C.J. and Patera,J. \jmp{29}{1988}{1736}.}
\reference{Sloane}{Sloane,N.J.A. \amm{84}{1977}{82}.}
\reference{Gordan2}{Gordan,P. \ma{12}{1877}{147}.}
\reference{DandSh}{Desmier,P.E. and Sharp,R.T. \jmp{20}{1979}{74}.}
\reference{Kramer}{Kramer,P., \jpa{38}{2005}{3517}.}
\reference{Klein2}{Klein, F.\ma{9}{1875}{183}.}
\reference{Hodgkinson}{Hodgkinson,J. \jlms{10}{1935}{221}.}
\reference{ZandD}{Zheng,Y. and Doerschuk, P.C. {\it Acta Cryst.} {\bf A52}
(1996) 221.}
\reference{EPM}{Elcoro,L., Perez--Mato,J.M. and Madariaga,G.
{\it Acta Cryst.} {\bf A50} (1994) 182.}
\reference{PSW2}{Prandl,W., Schiebel,P. and Wulf,K.
{\it Acta Cryst.} {\bf A52} (1999) 171.}
\reference{FCD}{Fan,P--D., Chen,J--Q. and Draayer,J.P.
{\it Acta Cryst.} {\bf A55} (1999) 871.}
\reference{FCD2}{Fan,P--D., Chen,J--Q. and Draayer,J.P.
{\it Acta Cryst.} {\bf A55} (1999) 1049.}
\reference{Honl}{H\"onl,H. \zfp{89}{1934}{244}.}
\reference{PSW}{Patera,J., Sharp,R.T. and Winternitz,P. \jmp{19}{1978}{2362}.}
\reference{LandH}{Lohe,M.A. and Hurst,C.A. \jmp{12}{1971}{1882}.}
\reference{RandSA}{Ronveaux,A. and Saint-Aubin,Y. \jmp{24}{1983}{1037}.}
\reference{JandDeV}{Jonker,J.E. and De Vries,E. \npa{105}{1967}{621}.}
\reference{Rowe}{Rowe, E.G.Peter. \jmp{19}{1978}{1962}.}
\reference{KNR}{Kibler,M., N\'egadi,T. and Ronveaux,A. {\it The Kustaanheimo-Stiefel
transformation and certain special functions} \lnm{1171}{1985}{497}.}
\reference{GLP}{Gilkey,P.B., Leahy,J.V. and Park,J-H, \jpa{29}{1996}{5645}.}
\reference{Kohler}{K\"ohler,K.: Equivariant Reidemeister torsion on
symmetric spaces. Math.Ann. {\bf 307}, 57-69 (1997)}
\reference{Kohler2}{K\"ohler,K.: Equivariant analytic torsion on ${\bf P^nC}$.
Math.Ann.{\bf 297}, 553-565 (1993) }
\reference{Kohler3}{K\"ohler,K.: Holomorphic analytic torsion on Hermitian
symmetric spaces. J.Reine Angew.Math. {\bf 460}, 93-116 (1995)}
\reference{Zagierzf}{Zagier,D. {\it Zetafunktionen und Quadratische
K\"orper}, (Springer--Verlag, Berlin, 1981).}
\reference{Stek}{Stekholschkik,R. {\it Notes on Coxeter transformations and the McKay
correspondence.} (Springer, Berlin, 2008).}
\reference{Pesce}{Pesce,H. \cmh {71}{1996}{243}.}
\reference{Pesce2}{Pesce,H. {\it Contemp. Math} {\bf 173} (1994) 231.}
\reference{Sutton}{Sutton,C.J. {\it Equivariant isospectrality
and isospectral deformations on spherical orbifolds}, ArXiv:math/0608567.}
\reference{Sunada}{Sunada,T. \aom{121}{1985}{169}.}
\reference{GoandM}{Gornet,R, and McGowan,J. {\it J.Comp. and Math.}
{\bf 9} (2006) 270.}
\reference{Suter}{Suter,R. {\it Manusc.Math.} {\bf 122} (2007) 1-21.}
\reference{Lomont}{Lomont,J.S. {\it Applications of finite groups} (Academic
Press, New York, 1959).}
\reference{DandCh2}{Dowker,J.S. and Chang,Peter {\it Analytic torsion on
spherical factors and tessellations}, arXiv:math.DG/0904.0744 .}
\reference{Mackey}{Mackey,G. {\it Induced representations}
(Benjamin, New York, 1968).}
\reference{Koca}{Koca, {\it Turkish J.Physics}.}
\reference{Brylinski}{Brylinski, J-L., {\it A correspondence dual to McKay's}
ArXiv alg-geom/9612003.}
\reference{Rossmann}{Rossman,W. {\it McKay's correspondence
and characters of finite subgroups of\break SU(2)} {\it Progress in Math.}
Birkhauser (to appear) .}
\reference{JandL}{James, G. and Liebeck, M. {\it Representations and
characters of groups} (CUP, Cambridge, 2001).}
\reference{IandR}{Ito,Y. and Reid,M. {\it The Mckay correspondence for finite
subgroups of SL(3,C)} Higher dimensional varieties, (Trento 1994),
221-240, (Berlin, de Gruyter 1996).}
\reference{BandF}{Bauer,W. and Furutani, K. {\it J. Geom. and Phys.} {\bf
58} (2008) 64.}
\reference{Luck}{L\"uck,W. \jdg{37}{1993}{263}.}
\reference{LandR}{Lott,J. and Rothenberg,M. \jdg{34}{1991}{431}.}
\reference{DoandKi} {Dowker.J.S. and Kirsten, K. {\it Analysis and Appl.}
{\bf 3} (2005) 45.}
\reference{dowtess1}{Dowker,J.S. \cqg{23}{2006}{1}.}
\reference{dowtess2}{Dowker,J.S. {\it J.Geom. and Phys.} {\bf 57} (2007) 1505.}
\reference{MHS}{De Melo,T., Hartmann,L. and Spreafico,M. {\it Reidemeister
Torsion and analytic torsion of discs}, ArXiv:0811.3196.}
\reference{Vertman}{Vertman, B. {\it Analytic Torsion of a bounded
generalized cone}, ArXiv:0808.0449.}
\reference{WandY} {Weng,L. and You,Y., {\it Int.J. of Math.}{\bf 7} (1996)
109.}
\reference{ScandT}{Schwartz, A.S. and Tyupkin,Yu.S. \np{242}{1984}{436}.}
\reference{AAR}{Andrews, G.E., Askey,R. and Roy,R. {\it Special functions}
(CUP, Cambridge, 1999).}
\reference{Tsuchiya}{Tsuchiya, N.: R-torsion and analytic torsion for spherical
Clifford-Klein manifolds.: J. Fac.Sci., Tokyo Univ. Sect.1 A, Math.
{\bf 23}, 289-295 (1976).}
\reference{Tsuchiya2}{Tsuchiya, N. J. Fac.Sci., Tokyo Univ. Sect.1 A, Math.
{\bf 23}, 289-295 (1976).}
\reference{Lerch}{Lerch,M. \am{11}{1887}{19}.}
\reference{Lerch2}{Lerch,M. \am{29}{1905}{333}.}
\reference{TandS}{Threlfall, W. and Seifert, H. \ma{104}{1930}{1}.}
\reference{RandS}{Ray, D.B., and Singer, I. \aim{7}{1971}{145}.}
\reference{RandS2}{Ray, D.B., and Singer, I. {\it Proc.Symp.Pure Math.}
{\bf 23} (1973) 167.}
\reference{Jensen}{Jensen,J.L.W.V. \aom{17}{1915-1916}{124}.}
\reference{Rosenberg}{Rosenberg, S. {\it The Laplacian on a Riemannian Manifold}
(CUP, Cambridge, 1997).}
\reference{Nando2}{Nash, C. and O'Connor, D-J. {\it Int.J.Mod.Phys.}
{\bf A10} (1995) 1779.}
\reference{Fock}{Fock,V. \zfp{98}{1935}{145}.}
\reference{Levy}{Levy,M. \prs {204}{1950}{145}.}
\reference{Schwinger2}{Schwinger,J. \jmp{5}{1964}{1606}.}
\reference{Muller}{M\"uller, \lnm{}{}{}.}
\reference{VMK}{Varshalovich.}
\reference{DandWo}{Dowker,J.S. and Wolski, A. \prA{46}{1992}{6417}.}
\reference{Zeitlin1}{Zeitlin,V. {\it Physica D} {\bf 49} (1991). }
\reference{Zeitlin0}{Zeitlin,V. {\it Nonlinear World} Ed by
V.Baryakhtar {\it et al}, Vol.I p.717, (World Scientific, Singapore, 1989).}
\reference{Zeitlin2}{Zeitlin,V. \prl{93}{2004}{264501}. }
\reference{Zeitlin3}{Zeitlin,V. \pla{339}{2005}{316}. }
\reference{Groenewold}{Groenewold, H.J. {\it Physica} {\bf 12} (1946) 405.}
\reference{Cohen}{Cohen, L. \jmp{7}{1966}{781}.}
\reference{AandW}{Agarwal, G.S. and Wolf, E. \prD{2}{1970}{2161,2187,2206}.}
\reference{Jantzen}{Jantzen,R.T. \jmp{19}{1978}{1163}.}
\reference{Moses2}{Moses,H.E. \aop{42}{1967}{343}.}
\reference{Carmeli}{Carmeli,M. \jmp{9}{1968}{1987}.}
\reference{SHS}{Siemans,M., Hancock,J. and Siminovitch,D. {\it Solid State
Nuclear Magnetic Resonance} {\bf 31}(2007)35.}
\reference{Dowk}{Dowker,J.S. \prD{28}{1983}{3013}.}
\reference{Heine}{Heine, E. {\it Handbuch der Kugelfunctionen}
(G.Reimer, Berlin. 1878, 1881).}
\reference{Pockels}{Pockels, F. {\it \"Uber die Differentialgleichung $\Delta
u+k^2u=0$} (Teubner, Leipzig. 1891).}
\reference{Hamermesh}{Hamermesh, M., {\it Group Theory} (Addison--Wesley,
Reading. 1962).}
\reference{Racah}{Racah, G. {\it Group Theory and Spectroscopy}
(Princeton Lecture Notes, 1951). }
\reference{Gourdin}{Gourdin, M. {\it Basics of Lie Groups} (Editions
Fronti\'eres, Gif sur Yvette. 1982.)}
\reference{Clifford}{Clifford, W.K. \plms{2}{1866}{116}.}
\reference{Story2}{Story, W.E. \plms{23}{1892}{265}.}
\reference{Story}{Story, W.E. \ma{41}{1893}{469}.}
\reference{Poole}{Poole, E.G.C. \plms{33}{1932}{435}.}
\reference{Dickson}{Dickson, L.E. {\it Algebraic Invariants} (Wiley, N.Y.
1915).}
\reference{Dickson2}{Dickson, L.E. {\it Modern Algebraic Theories}
(Sanborn and Co., Boston. 1926).}
\reference{Hilbert2}{Hilbert, D. {\it Theory of algebraic invariants} (C.U.P.,
Cambridge. 1993).}
\reference{Olver}{Olver, P.J. {\it Classical Invariant Theory} (C.U.P., Cambridge.
1999.)}
\reference{AST}{A\v{s}erova, R.M., Smirnov, J.F. and Tolsto\v{i}, V.N. {\it
Teoret. Mat. Fyz.} {\bf 8} (1971) 255.}
\reference{AandS}{A\v{s}erova, R.M. and Smirnov, J.F. \np{4}{1968}{399}.}
\reference{Shapiro}{Shapiro, J. \jmp{6}{1965}{1680}.}
\reference{Shapiro2}{Shapiro, J.Y. \jmp{14}{1973}{1262}.}
\reference{NandS}{Noz, M.E. and Shapiro, J.Y. \np{51}{1973}{309}.}
\reference{Cayley2}{Cayley, A. {\it Phil. Trans. Roy. Soc. Lond.}
{\bf 144} (1854) 244.}
\reference{Cayley3}{Cayley, A. {\it Phil. Trans. Roy. Soc. Lond.}
{\bf 146} (1856) 101.}
\reference{Wigner}{Wigner, E.P. {\it Gruppentheorie} (Vieweg, Braunschweig. 1931).}
\reference{Sharp}{Sharp, R.T. \ajop{28}{1960}{116}.}
\reference{Laporte}{Laporte, O. {\it Z. f. Naturf.} {\bf 3a} (1948) 447.}
\reference{Lowdin}{L\"owdin, P-O. \rmp{36}{1964}{966}.}
\reference{Ansari}{Ansari, S.M.R. {\it Fort. d. Phys.} {\bf 15} (1967) 707.}
\reference{SSJR}{Samal, P.K., Saha, R., Jain, P. and Ralston, J.P. {\it
Testing Isotropy of Cosmic Microwave Background Radiation},
astro-ph/0708.2816.}
\reference{Lachieze}{Lachi\'eze-Rey, M. {\it Harmonic projection and
multipole Vectors}. astro- \break ph/0409081.}
\reference{CHS}{Copi, C.J., Huterer, D. and Starkman, G.D.
\prD{70}{2003}{043515}.}
\reference{Jaric}{Jari\'c, J.P. {\it Int. J. Eng. Sci.} {\bf 41} (2003) 2123.}
\reference{RandD}{Roche, J.A. and Dowker, J.S. \jpa{1}{1968}{527}.}
\reference{KandW}{Katz, G. and Weeks, J.R. \prD{70}{2004}{063527}.}
\reference{Waerden}{van der Waerden, B.L. {\it Die Gruppen-theoretische
Methode in der Quantenmechanik} (Springer, Berlin. 1932).}
\reference{EMOT}{Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F.G. {
\it Higher Transcendental Functions} Vol.1 (McGraw-Hill, N.Y. 1953).}
\reference{Dowzilch}{Dowker, J.S. {\it Proc. Phys. Soc.} {\bf 91} (1967) 28.}
\reference{DandD}{Dowker, J.S. and Dowker, Y.P. {\it Proc. Phys. Soc.}
{\bf 87} (1966) 65.}
\reference{DandD2}{Dowker, J.S. and Dowker, Y.P. \prs{}{}{}.}
\reference{Dowk3}{Dowker,J.S. \cqg{7}{1990}{1241}.}
\reference{Dowk5}{Dowker,J.S. \cqg{7}{1990}{2353}.}
\reference{CoandH}{Courant, R. and Hilbert, D. {\it Methoden der
Mathematischen Physik} vol.1 \break (Springer, Berlin. 1931).}
\reference{Applequist}{Applequist, J. \jpa{22}{1989}{4303}.}
\reference{Torruella}{Torruella, \jmp{16}{1975}{1637}.}
\reference{Weinberg}{Weinberg, S.W. \pr{133}{1964}{B1318}.}
\reference{Meyerw}{Meyer, W.F. {\it Apolarit\"at und rationale Curven}
(Fues, T\"ubingen. 1883.) }
\reference{Ostrowski}{Ostrowski, A. {\it Jahrsb. Deutsch. Math. Verein.} {\bf
33} (1923) 245.}
\reference{Kramers}{Kramers, H.A. {\it Grundlagen der Quantenmechanik}, (Akad.
Verlag., Leipzig, 1938).}
\reference{ZandZ}{Zou, W.-N. and Zheng, Q.-S. \prs{459}{2003}{527}.}
\reference{Weeks1}{Weeks, J.R. {\it Maxwell's multipole vectors
and the CMB}. astro-ph/0412231.}
\reference{Corson}{Corson, E.M. {\it Tensors, Spinors and Relativistic Wave
Equations} (Blackie, London. 1950).}
\reference{Rosanes}{Rosanes, J. \jram{76}{1873}{312}.}
\reference{Salmon}{Salmon, G. {\it Lessons Introductory to the Modern Higher
Algebra} 3rd. edn. \break (Hodges, Dublin. 1876.)}
\reference{Milnew}{Milne, W.P. {\it Homogeneous Coordinates} (Arnold. London. 1910).}
\reference{Niven}{Niven, W.D. {\it Phil. Trans. Roy. Soc.} {\bf 170} (1879) 393.}
\reference{Scott}{Scott, C.A. {\it An Introductory Account of
Certain Modern Ideas and Methods in Plane Analytical Geometry,}
(MacMillan, N.Y. 1896).}
\reference{Bargmann}{Bargmann, V. \rmp{34}{1962}{300}.}
\reference{Maxwell}{Maxwell, J.C. {\it A Treatise on Electricity and
Magnetism} 2nd. edn. (Clarendon Press, Oxford. 1882).}
\reference{BandL}{Biedenharn, L.C. and Louck, J.D.
{\it Angular Momentum in Quantum Physics} (Addison-Wesley, Reading. 1981).}
\reference{Weylqm}{Weyl, H. {\it The Theory of Groups and Quantum Mechanics}
(Methuen, London. 1931).}
\reference{Robson}{Robson, A. {\it An Introduction to Analytical Geometry} Vol I
(C.U.P., Cambridge. 1940.)}
\reference{Sommerville}{Sommerville, D.M.Y. {\it Analytical Conics} 3rd. edn.
(Bell, London. 1933).}
\reference{Coolidge}{Coolidge, J.L. {\it A Treatise on Algebraic Plane Curves}
(Clarendon Press, Oxford. 1931).}
\reference{SandK}{Semple, G. and Kneebone. G.T. {\it Algebraic Projective
Geometry} (Clarendon Press, Oxford. 1952).}
\reference{AandC}{Abdesselam A., and Chipalkatti, J. {\it The Higher
Transvectants are redundant}, arXiv:0801.1533 [math.AG] 2008.}
\reference{Elliott}{Elliott, E.B. {\it The Algebra of Quantics} 2nd edn.
(Clarendon Press, Oxford. 1913).}
\reference{Elliott2}{Elliott, E.B. \qjpam{48}{1917}{372}.}
\reference{Howe}{Howe, R. \tams{313}{1989}{539}.}
\reference{Clebsch}{Clebsch, A. \jram{60}{1862}{343}.}
\reference{Prasad}{Prasad, G. \ma{72}{1912}{136}.}
\reference{Dougall}{Dougall, J. \pems{32}{1913}{30}.}
\reference{Penrose}{Penrose, R. \aop{10}{1960}{171}.}
\reference{Penrose2}{Penrose, R. \prs{273}{1965}{171}.}
\reference{Burnside}{Burnside, W.S. \qjm{10}{1870}{211}. }
\reference{Lindemann}{Lindemann, F. \ma{23} {1884}{111}.}
\reference{Backus}{Backus, G. {\it Rev. Geophys. Space Phys.} {\bf 8} (1970) 633.}
\reference{Baerheim}{Baerheim, R. {\it Q.J. Mech. appl. Math.} {\bf 51} (1998) 73.}
\reference{Lense}{Lense, J. {\it Kugelfunktionen} (Akad.Verlag, Leipzig. 1950).}
\reference{Littlewood}{Littlewood, D.E. \plms{50}{1948}{349}.}
\reference{Fierz}{Fierz, M. {\it Helv. Phys. Acta} {\bf 12} (1938) 3.}
\reference{Williams}{Williams, D.N. {\it Lectures in Theoretical Physics} Vol. VII,
(Univ.Colorado Press, Boulder. 1965).}
\reference{Dennis}{Dennis, M. \jpa{37}{2004}{9487}.}
\reference{Pirani}{Pirani, F. {\it Brandeis Lecture Notes on
General Relativity,} edited by S. Deser and K. Ford. (Brandeis, Mass. 1964).}
\reference{Sturm}{Sturm, R. \jram{86}{1878}{116}.}
\reference{Schlesinger}{Schlesinger, O. \ma{22}{1883}{521}.}
\reference{Askwith}{Askwith, E.H. {\it Analytical Geometry of the Conic
Sections} (A.\&C. Black, London. 1908).}
\reference{Todd}{Todd, J.A. {\it Projective and Analytical Geometry}.
(Pitman, London. 1946).}
\reference{Glenn}{Glenn. O.E. {\it Theory of Invariants} (Ginn \& Co, N.Y. 1915).}
\reference{DowkandG}{Dowker, J.S. and Goldstone, M. \prs{303}{1968}{381}.}
\reference{Turnbull}{Turnbull, H.A. {\it The Theory of Determinants,
Matrices and Invariants} 3rd. edn. (Dover, N.Y. 1960).}
\reference{MacMillan}{MacMillan, W.D. {\it The Theory of the Potential}
(McGraw-Hill, N.Y. 1930).}
\reference{Hobson}{Hobson, E.W. {\it The Theory of Spherical
and Ellipsoidal Harmonics} (C.U.P., Cambridge. 1931).}
\reference{Hobson1}{Hobson, E.W. \plms {24}{1892}{55}.}
\reference{GandY}{Grace, J.H. and Young, A. {\it The Algebra of Invariants}
(C.U.P., Cambridge, 1903).}
\reference{FandR}{Fano, U. and Racah, G. {\it Irreducible Tensorial Sets}
(Academic Press, N.Y. 1959).}
\reference{TandT}{Thomson, W. and Tait, P.G. {\it Treatise on Natural Philosophy}
(Clarendon Press, Oxford. 1867).}
\reference{Brinkman}{Brinkman, H.C. {\it Applications of spinor invariants in
atomic physics}, North Holland, Amsterdam 1956.}
\reference{Kramers1}{Kramers, H.A. {\it Proc. Roy. Soc. Amst.} {\bf 33} (1930) 953.}
\reference{DandP2}{Dowker,J.S. and Pettengill,D.F. \jpa{7}{1974}{1527}}
\reference{Dowk1}{Dowker,J.S. \jpa{}{}{45}.}
\reference{Dowk2}{Dowker,J.S. \aop{71}{1972}{577}}
\reference{DandA}{Dowker,J.S. and Apps, J.S. \cqg{15}{1998}{1121}.}
\reference{Weil}{Weil,A., {\it Elliptic functions according to Eisenstein
and Kronecker}, Springer, Berlin, 1976.}
\reference{Ling}{Ling,C-H. {\it SIAM J.Math.Anal.} {\bf5} (1974) 551.}
\reference{Ling2}{Ling,C-H. {\it J.Math.Anal.Appl.}(1988).}
\reference{BMO}{Brevik,I., Milton,K.A. and Odintsov, S.D. \aop{302}{2002}{120}.}
\reference{KandL}{Kutasov,D. and Larsen,F. {\it JHEP} 0101 (2001) 1.}
\reference{KPS}{Klemm,D., Petkou,A.C. and Siopsis,G. {\it Entropy
bounds, monotonicity properties and scaling in CFT's}. hep-th/0101076.}
\reference{DandC}{Dowker,J.S. and Critchley,R. \prD{15}{1976}{1484}.}
\reference{AandD}{Al'taie, M.B. and Dowker, J.S. \prD{18}{1978}{3557}.}
\reference{Dow1}{Dowker,J.S. \prD{37}{1988}{558}.}
\reference{Dow30}{Dowker,J.S. \prD{28}{1983}{3013}.}
\reference{DandK}{Dowker,J.S. and Kennedy,G. \jpa{11}{1978}{895}.}
\reference{Dow2}{Dowker,J.S. \cqg{1}{1984}{359}.}
\reference{DandKi}{Dowker,J.S. and Kirsten, K. {\it Comm. in Anal. and Geom.
}{\bf7} (1999) 641.}
\reference{DandKe}{Dowker,J.S. and Kennedy,G.\jpa{11}{1978}{895}.}
\reference{Gibbons}{Gibbons,G.W. \pl{60A}{1977}{385}.}
\reference{Cardy}{Cardy,J.L. \np{366}{1991}{403}.}
\reference{ChandD}{Chang,P. and Dowker,J.S. \np{395}{1993}{407}.}
\reference{DandC2}{Dowker,J.S. and Critchley,R. \prD{13}{1976}{224}.}
\reference{Camporesi}{Camporesi,R. \prp{196}{1990}{1}.}
\reference{BandM}{Brown,L.S. and Maclay,G.J. \pr{184}{1969}{1272}.}
\reference{CandD}{Candelas,P. and Dowker,J.S. \prD{19}{1979}{2902}.}
\reference{Unwin1}{Unwin,S.D. {\it Selected quantum field theory effects in multiply
connected spacetimes}. Thesis, University of Manchester, 1980.}
\reference{Unwin2}{Unwin,S.D. \jpa{13}{1980}{313}.}
\reference{DandB}{Dowker,J.S.and Banach,R. \jpa{11}{1978}{2255}.}
\reference{Obhukov}{Obhukov,Yu.N. \pl{109B}{1982}{195}.}
\reference{Kennedy}{Kennedy,G. \prD{23}{1981}{2884}.}
\reference{CandT}{Copeland,E. and Toms,D.J. \np {255}{1985}{201}.}
\reference{CandT2}{Copeland,E. and Toms,D.J. \cqg {3}{1986}{431}.}
\reference{ELV}{Elizalde,E., Lygren, M. and Vassilevich,
D.V. \jmp {37}{1996}{3105}.}
\reference{Malurkar}{Malurkar,S.L. {\it J.Ind.Math.Soc} {\bf16} (1925/26) 130.}
\reference{Glaisher}{Glaisher,J.W.L. {\it Messenger of Math.} {\bf18}
(1889) 1.}
\reference{Anderson}{Anderson,A. \prD{37}{1988}{536}.}
\reference{CandA}{Cappelli,A. and D'Appollonio,G. \pl{487B}{2000}{87}.}
\reference{Wot}{Wotzasek,C. \jpa{23}{1990}{1627}.}
\reference{RandT}{Ravndal,F. and Tollesen,D. \prD{40}{1989}{4191}.}
\reference{SandT}{Santos,F.C. and Tort,A.C. \pl{482B}{2000}{323}.}
\reference{FandO}{Fukushima,K. and Ohta,K. {\it Physica} {\bf A299} (2001) 455.}
\reference{GandP}{Gibbons,G.W. and Perry,M. \prs{358}{1978}{467}.}
\reference{Dow4}{Dowker,J.S.}
\reference{Rad}{Rademacher,H. {\it Topics in analytic number theory,}
Springer-Verlag, Berlin,1973.}
\reference{Halphen}{Halphen,G.-H. {\it Trait\'e des Fonctions Elliptiques},
Vol 1, Gauthier-Villars, Paris, 1886.}
\reference{CandW}{Cahn,R.S. and Wolf,J.A. {\it Comm.Math.Helv.} {\bf 51}
(1976) 1.}
\reference{Berndt}{Berndt,B.C. \rmjm{7}{1977}{147}.}
\reference{Hurwitz}{Hurwitz,A. \ma{18}{1881}{528}.}
\reference{Hurwitz2}{Hurwitz,A. {\it Mathematische Werke} Vol.I. Basel,
Birkhauser, 1932.}
\reference{Berndt2}{Berndt,B.C. \jram{303/304}{1978}{332}.}
\reference{RandA}{Rao,M.B. and Ayyar,M.V. \jims{15}{1923/24}{150}.}
\reference{Hardy}{Hardy,G.H. \jlms{3}{1928}{238}.}
\reference{TandM}{Tannery,J. and Molk,J. {\it Fonctions Elliptiques},
Gauthier-Villars, Paris, 1893--1902.}
\reference{schwarz}{Schwarz,H.-A. {\it Formeln und
Lehrs\"atzen zum Gebrauche..},Springer 1893.(The first edition was 1885.)
The French translation by Henri Pad\'e is {\it Formules et Propositions
pour L'Emploi...},Gauthier-Villars, Paris, 1894}
\reference{Hancock}{Hancock,H. {\it Theory of elliptic functions}, Vol I.
Wiley, New York 1910.}
\reference{watson}{Watson,G.N. \jlms{3}{1928}{216}.}
\reference{MandO}{Magnus,W. and Oberhettinger,F. {\it Formeln und S\"atze},
Springer-Verlag, Berlin 1948.}
\reference{Klein}{Klein,F. {\it Lectures on the Icosahedron}
(Methuen, London. 1913).}
\reference{AandL}{Appell,P. and Lacour,E. {\it Fonctions Elliptiques},
Gauthier-Villars,
Paris. 1897.}
\reference{HandC}{Hurwitz,A. and Courant,C. {\it Allgemeine Funktionentheorie},
Springer,
Berlin. 1922.}
\reference{WandW}{Whittaker,E.T. and Watson,G.N. {\it Modern analysis},
Cambridge. 1927.}
\reference{SandC}{Selberg,A. and Chowla,S. \jram{227}{1967}{86}. }
\reference{zucker}{Zucker,I.J. {\it Math.Proc.Camb.Phil.Soc} {\bf 82 }(1977)
111.}
\reference{glasser}{Glasser,M.L. {\it Maths.of Comp.} {\bf 25} (1971) 533.}
\reference{GandW}{Glasser, M.L. and Wood,V.E. {\it Maths of Comp.} {\bf 25}
(1971)
535.}
\reference{greenhill}{Greenhill,A.G. {\it The Applications of Elliptic
Functions}, MacMillan, London, 1892.}
\reference{Weierstrass}{Weierstrass,K. {\it J.f.Mathematik (Crelle)}
{\bf 52} (1856) 346.}
\reference{Weierstrass2}{Weierstrass,K. {\it Mathematische Werke} Vol.I,p.1,
Mayer u. M\"uller, Berlin, 1894.}
\reference{Fricke}{Fricke,R. {\it Die Elliptische Funktionen und Ihre Anwendungen},
Teubner, Leipzig. 1915, 1922.}
\reference{Konig}{K\"onigsberger,L. {\it Vorlesungen \"uber die Theorie der
Elliptischen Funktionen}, \break Teubner, Leipzig, 1874.}
\reference{Milne}{Milne,S.C. {\it The Ramanujan Journal} {\bf 6} (2002) 7-149.}
\reference{Schlomilch}{Schl\"omilch,O. {\it Ber. Verh. K. Sachs. Gesell. Wiss.
Leipzig} {\bf 29} (1877) 101-105; {\it Compendium der h\"oheren
Analysis}, Bd.II, 3rd Edn, Vieweg, Brunswick, 1878.}
\reference{BandB}{Briot,C. and Bouquet,C. {\it Th\`eorie des Fonctions
Elliptiques}, Gauthier-Villars, Paris, 1875.}
\reference{Dumont}{Dumont,D. \aim {41}{1981}{1}.}
\reference{Andre}{Andr\'e,D. {\it Ann.\'Ecole Normale Sup\'erieure} {\bf 6} (1877)
265;
{\it J.Math.Pures et Appl.} {\bf 5} (1878) 31.}
\reference{Raman}{Ramanujan,S. {\it Trans.Camb.Phil.Soc.} {\bf 22} (1916) 159;
{\it Collected Papers}, Cambridge, 1927}
\reference{Weber}{Weber,H.M. {\it Lehrbuch der Algebra} Bd.III, Vieweg,
Brunswick 1903.}
\reference{Weber2}{Weber,H.M. {\it Elliptische Funktionen und algebraische
Zahlen},
Vieweg, Brunswick 1891.}
\reference{ZandR}{Zucker,I.J. and Robertson,M.M.
{\it Math.Proc.Camb.Phil.Soc} {\bf 95 }(1984) 5.}
\reference{JandZ1}{Joyce,G.S. and Zucker,I.J.
{\it Math.Proc.Camb.Phil.Soc} {\bf 109 }(1991) 257.}
\reference{JandZ2}{Zucker,I.J. and Joyce.G.S.
{\it Math.Proc.Camb.Phil.Soc} {\bf 131 }(2001) 309.}
\reference{zucker2}{Zucker,I.J. {\it SIAM J.Math.Anal.} {\bf 10} (1979) 192.}
\reference{BandZ}{Borwein,J.M. and Zucker,I.J. {\it IMA J.Math.Anal.} {\bf 12}
(1992) 519.}
\reference{Cox}{Cox,D.A. {\it Primes of the form $x^2+n\,y^2$}, Wiley,
New York, 1989.}
\reference{BandCh}{Berndt,B.C. and Chan,H.H. {\it Mathematika} {\bf42} (1995)
278.}
\reference{EandT}{Elizalde,E. and Tort,A.C. hep-th/.}
\reference{KandS}{Kiyek,K. and Schmidt,H. {\it Arch.Math.} {\bf 18} (1967) 438.}
\reference{Oshima}{Oshima,K. \prD{46}{1992}{4765}.}
\reference{greenhill2}{Greenhill,A.G. \plms{19} {1888} {301}.}
\reference{Russell}{Russell,R. \plms{19} {1888} {91}.}
\reference{BandB}{Borwein,J.M. and Borwein,P.B. {\it Pi and the AGM}, Wiley,
New York, 1998.}
\reference{Resnikoff}{Resnikoff,H.L. \tams{124}{1966}{334}.}
\reference{vandp}{Van der Pol, B. {\it Indag.Math.} {\bf18} (1951) 261,272.}
\reference{Rankin}{Rankin,R.A. {\it Modular forms} C.U.P. Cambridge}
\reference{Rankin2}{Rankin,R.A. {\it Proc. Roy.Soc. Edin.} {\bf76 A} (1976) 107.}
\reference{Skoruppa}{Skoruppa,N-P. {\it J.of Number Th.} {\bf43} (1993) 68 .}
\reference{Down}{Dowker,J.S. \np{104}{2002}{153}; also Dowker,J.S.
hep-th/0007129.}
\reference{Eichler}{Eichler,M. \mz {67}{1957}{267}.}
\reference{Zagier}{Zagier,D. \invm{104}{1991}{449}.}
\reference{Lang}{Lang,S. {\it Modular Forms}, Springer, Berlin, 1976.}
\reference{Kosh}{Koshliakov,N.S. {\it Mess.of Math.} {\bf 58} (1928) 1.}
\reference{BandH}{Bodendiek, R. and Halbritter,U. \amsh{38}{1972}{147}.}
\reference{Smart}{Smart,L.R., \pgma{14}{1973}{1}.}
\reference{Grosswald}{Grosswald,E. {\it Acta. Arith.} {\bf 21} (1972) 25.}
\reference{Kata}{Katayama,K. {\it Acta Arith.} {\bf 22} (1973) 149.}
\reference{Ogg}{Ogg,A. {\it Modular forms and Dirichlet series} (Benjamin,
New York,
1969).}
\reference{Bol}{Bol,G. \amsh{16}{1949}{1}.}
\reference{Epstein}{Epstein,P. \ma{56}{1903}{615}.}
\reference{Petersson}{Petersson.}
\reference{Serre}{Serre,J-P. {\it A Course in Arithmetic}, Springer,
New York, 1973.}
\reference{Schoenberg}{Schoeneberg,B., {\it Elliptic Modular Functions},
Springer, Berlin, 1974.}
\reference{Apostol}{Apostol,T.M. \dmj {17}{1950}{147}.}
\reference{Ogg2}{Ogg,A. {\it Lecture Notes in Math.} {\bf 320} (1973) 1.}
\reference{Knopp}{Knopp,M.I. \dmj {45}{1978}{47}.}
\reference{Knopp2}{Knopp,M.I. \invm {}{1994}{361}.}
\reference{LandZ}{Lewis,J. and Zagier,D. \aom{153}{2001}{191}.}
\reference{DandK1}{Dowker,J.S. and Kirsten,K. {\it Elliptic functions and
temperature inversion symmetry on spheres} hep-th/.}
\reference{HandK}{Husseini and Knopp.}
\reference{Kober}{Kober,H. \mz{39}{1934-5}{609}.}
\reference{HandL}{Hardy,G.H. and Littlewood, \am{41}{1917}{119}.}
\reference{Watson}{Watson,G.N. \qjm{2}{1931}{300}.}
\reference{SandC2}{Chowla,S. and Selberg,A. {\it Proc.Nat.Acad.} {\bf 35}
(1949) 371.}
\reference{Landau}{Landau, E. {\it Lehre von der Verteilung der Primzahlen},
(Teubner, Leipzig, 1909).}
\reference{Berndt4}{Berndt,B.C. \tams {146}{1969}{323}.}
\reference{Berndt3}{Berndt,B.C. \tams {}{}{}.}
\reference{Bochner}{Bochner,S. \aom{53}{1951}{332}.}
\reference{Weil2}{Weil,A.\ma{168}{1967}{}.}
\reference{CandN}{Chandrasekharan,K. and Narasimhan,R. \aom{74}{1961}{1}.}
\reference{Rankin3}{Rankin,R.A. {} {} ().}
\reference{Berndt6}{Berndt,B.C. {\it Trans.Edin.Math.Soc}.}
\reference{Elizalde}{Elizalde,E. {\it Ten Physical Applications of Spectral
Zeta Function Theory}, \break (Springer, Berlin, 1995).}
\reference{Allen}{Allen,B., Folacci,A. and Gibbons,G.W. \pl{189}{1987}{304}.}
\reference{Krazer}{Krazer}
\reference{Elizalde3}{Elizalde,E. {\it J.Comp.and Appl. Math.} {\bf 118}
(2000) 125.}
\reference{Elizalde2}{Elizalde,E., Odintsov.S.D, Romeo, A. and Bytsenko,
A.A and
Zerbini,S.
{\it Zeta function regularisation}, (World Scientific, Singapore,
1994).}
\reference{Eisenstein}{Eisenstein}
\reference{Hecke}{Hecke,E. \ma{112}{1936}{664}.}
\reference{Hecke2}{Hecke,E. \ma{112}{1918}{398}.}
\reference{Terras}{Terras,A. {\it Harmonic analysis on Symmetric Spaces} (Springer,
New York, 1985).}
\reference{BandG}{Bateman,P.T. and Grosswald,E. {\it Acta Arith.} {\bf 9}
(1964) 365.}
\reference{Deuring}{Deuring,M. \aom{38}{1937}{585}.}
\reference{Guinand}{Guinand.}
\reference{Guinand2}{Guinand.}
\reference{Mordell}{Mordell,J. \prs{}{}{}.}
\reference{GandZ}{Glasser,M.L. and Zucker, {}.}
\reference{Landau2}{Landau,E. \jram{}{1903}{64}.}
\reference{Kirsten1}{Kirsten,K. \jmp{35}{1994}{459}.}
\reference{Sommer}{Sommer,J. {\it Vorlesungen \"uber Zahlentheorie}
(1907,Teubner,Leipzig).
French edition 1913 .}
\reference{Reid}{Reid,L.W. {\it Theory of Algebraic Numbers},
(1910,MacMillan,New York).}
\reference{Milnor}{Milnor, J. {\it Is the Universe simply--connected?},
IAS, Princeton, 1978.}
\reference{Milnor2}{Milnor, J. \ajm{79}{1957}{623}.}
\reference{Opechowski}{Opechowski,W. {\it Physica} {\bf 7} (1940) 552.}
\reference{Bethe}{Bethe, H.A. \zfp{3}{1929}{133}.}
\reference{LandL}{Landau, L.D. and Lifshitz, E.M. {\it Quantum
Mechanics} (Pergamon Press, London, 1958).}
\reference{GPR}{Gibbons, G.W., Pope, C. and R\"omer, H., \np{157}{1979}{377}.}
\reference{Jadhav}{Jadhav,S.P. PhD Thesis, University of Manchester 1990.}
\reference{DandJ}{Dowker,J.S. and Jadhav, S. \prD{39}{1989}{1196}.}
\reference{DandJ2}{Dowker,J.S. and Jadhav, S. \prD{39}{1989}{2368}.}
\reference{CandM}{Coxeter, H.S.M. and Moser, W.O.J. {\it Generators and
relations of finite groups} (Springer. Berlin. 1957).}
\reference{Coxeter2}{Coxeter, H.S.M. {\it Regular Complex Polytopes},
(Cambridge University Press, \break Cambridge, 1975).}
\reference{Coxeter}{Coxeter, H.S.M. {\it Regular Polytopes}.}
\reference{Stiefel}{Stiefel, E., J.Research NBS {\bf 48} (1952) 424.}
\reference{BandS}{Brink, D.M. and Satchler, G.R. {\it Angular momentum theory}.
(Clarendon Press, Oxford. 1962.).}
\reference{Rose}{Rose}
\reference{Schwinger}{Schwinger, J. {\it On Angular Momentum}
in {\it Quantum Theory of Angular Momentum} edited by
Biedenharn,L.C. and van Dam, H. (Academic Press, N.Y. 1965).}
\reference{Bromwich}{Bromwich, T.J.I'A. {\it Infinite Series},
(Macmillan, 1947).}
\reference{Ray}{Ray,D.B. \aim{4}{1970}{109}.}
\reference{Ikeda}{Ikeda,A. {\it Kodai Math.J.} {\bf 18} (1995) 57.}
\reference{Ellis}{Ellis,G.F.R. {\it General Relativity} {\bf2} (1971) 7.}
\reference{Dow8}{Dowker,J.S. \cqg{20}{2003}{L105}.}
\reference{IandY}{Ikeda, A and Yamamoto, Y. \ojm {16}{1979}{447}.}
\reference{BandI}{Bander,M. and Itzykson,C. \rmp{18}{1966}{2}.}
\reference{Schulman}{Schulman, L.S. \pr{176}{1968}{1558}.}
\reference{Bar1}{B\"ar,C. {\it Arch.d.Math.}{\bf 59} (1992) 65.}
\reference{Bar2}{B\"ar,C. {\it Geom. and Func. Anal.} {\bf 6} (1996) 899.}
\reference{Vilenkin}{Vilenkin, N.J. {\it Special functions},
(Am.Math.Soc., Providence, 1968).}
\reference{Talman}{Talman, J.D. {\it Special functions} (Benjamin,N.Y.,1968).}
\reference{Miller}{Miller, W. {\it Symmetry groups and their applications}
(Wiley, N.Y., 1972).}
\reference{Dow3}{Dowker,J.S. \cmp{162}{1994}{633}.}
\reference{Dowcmp}{Dowker,J.S. \cmp{162}{1994}{633}.}
\reference{Cheeger}{Cheeger, J. \jdg {18}{1983}{575}.}
\reference{Cheeger2}{Cheeger, J. \aom {109}{1979}{259}.}
\reference{Dow6}{Dowker,J.S. \jmp{30}{1989}{770}.}
\reference{Dow20}{Dowker,J.S. \jmp{35}{1994}{6076}.}
\reference{Dowjmp}{Dowker,J.S. \jmp{35}{1994}{4989}.}
\reference{Dow21}{Dowker,J.S. {\it Heat kernels and polytopes} in {\it
Heat Kernel Techniques and Quantum Gravity}, ed. by S.A.Fulling,
Discourses in Mathematics and its Applications, No.4, Dept.
Maths., Texas A\&M University, College Station, Texas, 1995.}
\reference{Dow9}{Dowker,J.S. \jmp{42}{2001}{1501}.}
\reference{Dow7}{Dowker,J.S. \jpa{25}{1992}{2641}.}
\reference{Warner}{Warner.N.P. \prs{383}{1982}{379}.}
\reference{Wolf}{Wolf, J.A. {\it Spaces of constant curvature},
(McGraw--Hill,N.Y., 1967).}
\reference{Meyer}{Meyer,B. \cjm{6}{1954}{135}.}
\reference{BandB}{B\'erard,P. and Besson,G. {\it Ann. Inst. Four.} {\bf 30}
(1980) 237.}
\reference{PandM}{Polya,G. and Meyer,B. \cras{228}{1948}{28}.}
\reference{Springer}{Springer, T.A. Lecture Notes in Math. vol 585 (Springer,
Berlin,1977).}
\reference{SeandT}{Threlfall, W. and Seifert, H. \ma{104}{1930}{1}.}
\reference{Hopf}{Hopf,H. \ma{95}{1925}{313}. }
\reference{Dow}{Dowker,J.S. \jpa{5}{1972}{936}.}
\reference{LLL}{Lehoucq,R., Lachi\'eze-Rey,M. and Luminet, J.--P. {\it
Astron.Astrophys.} {\bf 313} (1996) 339.}
\reference{LaandL}{Lachi\'eze-Rey,M. and Luminet, J.--P.
\prp{254}{1995}{135}.}
\reference{Schwarzschild}{Schwarzschild, K., {\it Vierteljahrschrift der
Ast.Ges.} {\bf 35} (1900) 337.}
\reference{Starkman}{Starkman,G.D. \cqg{15}{1998}{2529}.}
\reference{LWUGL}{Lehoucq,R., Weeks,J.R., Uzan,J.P., Gausman, E. and
Luminet, J.--P. \cqg{19}{2002}{4683}.}
\reference{Dow10}{Dowker,J.S. \prD{28}{1983}{3013}.}
\reference{BandD}{Banach, R. and Dowker, J.S. \jpa{12}{1979}{2527}.}
\reference{BandD2}{Banach, R. and Dowker, J.S. \jpa{12}{1979}{2545}.}
\reference{Jadhav2}{Jadhav,S. \prD{43}{1991}{2656}.}
\reference{Gilkey}{Gilkey,P.B. {\it Invariance theory,the heat equation and
the Atiyah--Singer Index theorem} (CRC Press, Boca Raton, 1994).}
\reference{BandY}{Berndt,B.C. and Yeap,B.P. {\it Adv. Appl. Math.}
{\bf29} (2002) 358.}
\reference{HandR}{Hanson,A.J. and R\"omer,H. \pl{80B}{1978}{58}.}
\reference{Hill}{Hill,M.J.M. {\it Trans.Camb.Phil.Soc.} {\bf 13} (1883) 36.}
\reference{Cayley}{Cayley,A. {\it Quart.Math.J.} {\bf 7} (1866) 304.}
\reference{Seade}{Seade,J.A. {\it Anal.Inst.Mat.Univ.Nac.Aut\'on
M\'exico} {\bf 21} (1981) 129.}
\reference{CM}{Cisneros--Molina,J.L. {\it Geom.Dedicata} {\bf84} (2001)
207.}
\reference{Goette1}{Goette,S. \jram{526}{2000}{181}.}
\reference{NandO}{Nash,C. and O'Connor,D--J, \jmp {36}{1995}{1462}.}
\reference{Dows}{Dowker,J.S. \aop{71}{1972}{577}; Dowker,J.S. and Pettengill,D.F.
\jpa{7}{1974}{1527}; J.S.Dowker in {\it Quantum Gravity}, edited by
S. C. Christensen (Hilger,Bristol,1984)}
\reference{Jadhav2}{Jadhav,S.P. \prD{43}{1991}{2656}.}
\reference{Dow11}{Dowker,J.S. \cqg{21}{2004}4247.}
\reference{Dow12}{Dowker,J.S. \cqg{21}{2004}4977.}
\reference{Dow13}{Dowker,J.S. \jpa{38}{2005}1049.}
\reference{Zagier}{Zagier,D. \ma{202}{1973}{149}}
\reference{RandG}{Rademacher, H. and Grosswald,E. {\it Dedekind Sums},
(Carus, MAA, 1972).}
\reference{Berndt7}{Berndt,B, \aim{23}{1977}{285}.}
\reference{HKMM}{Harvey,J.A., Kutasov,D., Martinec,E.J. and Moore,G.
{\it Localised Tachyons and RG Flows}, hep-th/0111154.}
\reference{Beck}{Beck,M., {\it Dedekind Cotangent Sums}, {\it Acta Arithmetica}
{\bf 109} (2003) 109-139 ; math.NT/0112077.}
\reference{McInnes}{McInnes,B. {\it APS instability and the topology of the brane
world}, hep-th/0401035.}
\reference{BHS}{Brevik,I, Herikstad,R. and Skriudalen,S. {\it Entropy Bound for the
TM Electromagnetic Field in the Half Einstein Universe}; hep-th/0508123.}
\reference{BandO}{Brevik,I. and Owe,C. \prD{55}{4689}{1997}.}
\reference{Kenn}{Kennedy,G. Thesis. University of Manchester 1978.}
\reference{KandU}{Kennedy,G. and Unwin S. \jpa{12}{L253}{1980}.}
\reference{BandO1}{Bayin,S.S.and Ozcan,M.
\prD{48}{2806}{1993}; \prD{49}{5313}{1994}.}
\reference{Chang}{Chang, P., {\it Quantum Field Theory on Regular Polytopes}.
Thesis. University of Manchester, 1993.}
\reference{Barnesa}{Barnes,E.W. {\it Trans. Camb. Phil. Soc.} {\bf 19} (1903) 374.}
\reference{Barnesb}{Barnes,E.W. {\it Trans. Camb. Phil. Soc.}
{\bf 19} (1903) 426.}
\reference{Stanley1}{Stanley,R.P. \joa {49Hilf}{1977}{134}.}
\reference{Stanley}{Stanley,R.P. \bams {1}{1979}{475}.}
\reference{Hurley}{Hurley,A.C. \pcps {47}{1951}{51}.}
\reference{IandK}{Iwasaki,I. and Katase,K. {\it Proc.Japan Acad. Ser} {\bf A55}
(1979) 141.}
\reference{IandT}{Ikeda,A. and Taniguchi,Y. {\it Osaka J. Math.} {\bf 15} (1978)
515.}
\reference{GandM}{Gallot,S. and Meyer,D. \jmpa{54}{1975}{259}.}
\reference{Flatto}{Flatto,L. {\it Enseign. Math.} {\bf 24} (1978) 237.}
\reference{OandT}{Orlik,P and Terao,H. {\it Arrangements of Hyperplanes},
Grundlehren der Math. Wiss. {\bf 300}, (Springer--Verlag, 1992).}
\reference{Shepler}{Shepler,A.V. \joa{220}{1999}{314}.}
\reference{SandT}{Solomon,L. and Terao,H. \cmh {73}{1998}{237}.}
\reference{Vass}{Vassilevich, D.V. \plb {348}{1995}39.}
\reference{Vass2}{Vassilevich, D.V. \jmp {36}{1995}3174.}
\reference{CandH}{Camporesi,R. and Higuchi,A. {\it J.Geom. and Physics}
{\bf 15} (1994) 57.}
\reference{Solomon2}{Solomon,L. \tams{113}{1964}{274}.}
\reference{Solomon}{Solomon,L. {\it Nagoya Math. J.} {\bf 22} (1963) 57.}
\reference{Obukhov}{Obukhov,Yu.N. \pl{109B}{1982}{195}.}
\reference{BGH}{Bernasconi,F., Graf,G.M. and Hasler,D. {\it The heat kernel
expansion for the electromagnetic field in a cavity}; math-ph/0302035.}
\reference{Baltes}{Baltes,H.P. \prA {6}{1972}{2252}.}
\reference{BaandH}{Baltes.H.P and Hilf,E.R. {\it Spectra of Finite Systems}
(Bibliographisches Institut, Mannheim, 1976).}
\reference{Ray}{Ray,D.B. \aim{4}{1970}{109}.}
\reference{Hirzebruch}{Hirzebruch,F. {\it Topological methods in algebraic
geometry} (Springer-- Verlag,\break Berlin, 1978). }
\reference{BBG}{Bla\v{z}i\'c,N., Bokan,N. and Gilkey, P.B. {\it Ind.J.Pure and
Appl.Math.} {\bf 23} (1992) 103.}
\reference{WandWi}{Weck,N. and Witsch,K.J. {\it Math.Meth.Appl.Sci.} {\bf 17}
(1994) 1017.}
\reference{Norlund}{N\"orlund,N.E. \am{43}{1922}{121}.}
\reference{Duff}{Duff,G.F.D. \aom{56}{1952}{115}.}
\reference{DandS}{Duff,G.F.D. and Spencer,D.C. \aom{45}{1951}{128}.}
\reference{BGM}{Berger, M., Gauduchon, P. and Mazet, E. {\it Lect.Notes.Math.}
{\bf 194} (1971) 1. }
\reference{Patodi}{Patodi,V.K. \jdg{5}{1971}{233}.}
\reference{GandS}{G\"unther,P. and Schimming,R. \jdg{12}{1977}{599}.}
\reference{MandS}{McKean,H.P. and Singer,I.M. \jdg{1}{1967}{43}.}
\reference{Conner}{Conner,P.E. {\it Mem.Am.Math.Soc.} {\bf 20} (1956).}
\reference{Gilkey2}{Gilkey,P.B. \aim {15}{1975}{334}.}
\reference{MandP}{Moss,I.G. and Poletti,S.J. \plb{333}{1994}{326}.}
\reference{BKD}{Bordag,M., Kirsten,K. and Dowker,J.S. \cmp{182}{1996}{371}.}
\reference{RandO}{Rubin,M.A. and Ordonez,C. \jmp{25}{1984}{2888}.}
\reference{BaandD}{Balian,R. and Duplantier,B. \aop {112}{1978}{165}.}
\reference{Kennedy2}{Kennedy,G. \aop{138}{1982}{353}.}
\reference{DandKi2}{Dowker,J.S. and Kirsten, K. {\it Analysis and Appl.}
{\bf 3} (2005) 45.}
\reference{Dow40}{Dowker,J.S. \cqg{23}{2006}{1}.}
\reference{BandHe}{Br\"uning,J. and Heintze,E. {\it Duke Math.J.} {\bf 51} (1984)
959.}
\reference{Dowl}{Dowker,J.S. {\it Functional determinants on M\"obius corners};
Proceedings, `Quantum field theory under
the influence of external conditions', 111-121,Leipzig 1995.}
\reference{Dowqg}{Dowker,J.S. in {\it Quantum Gravity}, edited by
S. C. Christensen (Hilger, Bristol, 1984).}
\reference{Dowit}{Dowker,J.S. \jpa{11}{1978}{347}.}
\reference{Kane}{Kane,R. {\it Reflection Groups and Invariant Theory} (Springer,
New York, 2001).}
\reference{Sturmfels}{Sturmfels,B. {\it Algorithms in Invariant Theory}
(Springer, Vienna, 1993).}
\reference{Bourbaki}{Bourbaki,N. {\it Groupes et Alg\`ebres de Lie} Chap.III, IV
(Hermann, Paris, 1968).}
\reference{SandTy}{Schwarz,A.S. and Tyupkin, Yu.S. \np{242}{1984}{436}.}
\reference{Reuter}{Reuter,M. \prD{37}{1988}{1456}.}
\reference{EGH}{Eguchi,T. Gilkey,P.B. and Hanson,A.J. \prp{66}{1980}{213}.}
\reference{DandCh}{Dowker,J.S. and Chang,Peter, \prD{46}{1992}{3458}.}
\reference{APS}{Atiyah M., Patodi and Singer,I.\mpcps{77}{1975}{43}.}
\reference{Donnelly}{Donnelly.H. {\it Indiana U. Math.J.} {\bf 27} (1978) 889.}
\reference{Katase}{Katase,K. {\it Proc.Jap.Acad.} {\bf 57} (1981) 233.}
\reference{Gilkey3}{Gilkey,P.B.\invm{76}{1984}{309}.}
\reference{Degeratu}{Degeratu.A. {\it Eta--Invariants and Molien Series for
Unimodular Groups}, Thesis MIT, 2001.}
\reference{Seeley}{Seeley,R. \ijmp {A\bf18}{2003}{2197}.}
\reference{Seeley2}{Seeley,R. .}
\reference{melrose}{Melrose}
\reference{berard}{B\'erard,P.}
\reference{gromes}{Gromes,D.}
\reference{Ivrii}{Ivrii}
\reference{DandW}{Douglas,R.G. and Wojciekowski,K.P. \cmp{142}{1991}{139}.}
\reference{Dai}{Dai,X. \tams{354}{2001}{107}.}
\reference{Kuznecov}{Kuznecov}
\reference{DandG}{Duistermaat and Guillemin.}
\reference{PTL}{Pham The Lai}
\end{putreferences}
\bye
The goal of this paper is to study the metastable dynamics of the solutions to the \emph{hyperbolic mass-conserving Allen--Cahn equation}
\begin{equation}\label{eq:hyp-nonlocal}
\tau u_{tt}+g(u)u_t+\int_0^1\left[1-g(u)\right]u_t\,dx=\varepsilon^2 u_{xx}+f(u)-\int_0^1f(u)\,dx,
\end{equation}
where $u=u(x,t): (0,1)\times(0,+\infty)\rightarrow\mathbb R$,
subject to homogeneous Neumann boundary conditions
\begin{equation}\label{eq:Neumann}
u_x(0,t)=u_x(1,t)=0, \qquad \qquad t>0,
\end{equation}
and initial data
\begin{equation}\label{eq:initial}
u(x,0)=u_0(x), \qquad u_t(x,0)=u_1(x), \qquad \qquad x\in[0,1].
\end{equation}
Precisely, we are interested in the behavior of the solutions to the initial boundary value problem \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial},
when the diffusion coefficient $\varepsilon^2$ is very small (and strictly positive), the initial data $u_0,u_1$ satisfy appropriate assumptions that will be specified later,
the damping coefficient $g\in C^1(\mathbb{R})$ is strictly positive, namely
\begin{equation}\label{eq:ass-g}
g(u)\geq\sigma>0,\qquad \forall\,u\in\mathbb{R},
\end{equation}
and $f:\mathbb{R}\to\mathbb{R}$ is a balanced bistable reaction term, that is, we assume $f=-F'$, where $F\in C^3(\mathbb{R})$ satisfies
\begin{equation}\label{eq:ass-F}
F(\pm1)=F'(\pm1)=0, \qquad F''(\pm1)>0, \qquad F(u)>0, \; \, \forall\,u\neq\pm1.
\end{equation}
In other words, $-f$ is the derivative of a double well potential with wells of equal depth located at $\pm1$;
the typical example is $F(u)=\frac14(u^2-1)^2$.
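For this choice of $F$, the assumptions \eqref{eq:ass-F} can be checked directly:
\begin{equation*}
f(u)=-F'(u)=u-u^3,\qquad F''(u)=3u^2-1,\qquad F(\pm1)=F'(\pm1)=0,\qquad F''(\pm1)=2>0.
\end{equation*}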
Formally, by taking $\tau=0$ and $g\equiv1$ in \eqref{eq:hyp-nonlocal}, one obtains the celebrated \emph{mass conserving Allen--Cahn equation} in one space dimension
\begin{equation}\label{eq:maco-AC}
u_t=\varepsilon^2 u_{xx}+f(u)-\int_0^1f(u)\,dx.
\end{equation}
Before presenting our results, we give a short historical review of the mass conserving Allen--Cahn equation \eqref{eq:maco-AC}
and show how to formally derive the hyperbolic variant \eqref{eq:hyp-nonlocal}.
\subsection{Mass conserving Allen--Cahn equation}
In \cite{RubSte}, Rubinstein and Sternberg introduced the following \emph{nonlocal reaction-diffusion equation}
\begin{equation}\label{eq:nonlocal-AC-multiD}
u_t=\Delta u+f(u)-\lambda_f, \qquad \bm x\in\Omega, \, t>0,
\end{equation}
with no-flux boundary conditions
\begin{equation*}
\bm n \cdot \nabla u=0, \qquad \bm x\in\partial\Omega,
\end{equation*}
where $u=u(\bm x,t): \Omega\times(0,+\infty)\rightarrow\mathbb R$, $\Omega\subset\mathbb R^n$ is a smooth bounded domain
with outer unit normal $\bm n$ and total volume $|\Omega|$,
the reaction term $f$ is equal to $-F'$, where $F$ is a double well potential, and
\begin{equation*}
\lambda_f:=\frac{1}{|\Omega|}\int_\Omega f(u)\,dx.
\end{equation*}
Rubinstein and Sternberg proposed equation \eqref{eq:nonlocal-AC-multiD} to model phase separation after rapid cooling of homogeneous binary systems
(such as glasses and polymers).
If we omit the term $\lambda_f$ in \eqref{eq:nonlocal-AC-multiD}, we obtain a (parabolic) reaction-diffusion equation and when $f=-F'$
with $F$ satisfying \eqref{eq:ass-F}, we have the bistable equation known as \emph{Allen--Cahn equation}
\begin{equation}\label{eq:AC-multiD}
u_t=\Delta u+f(u),
\end{equation}
which was originally proposed in \cite{Allen-Cahn} to describe the motion of antiphase boundaries in iron alloys.
The presence of the term $\lambda_f$ implies the conservation of the mass of the solutions:
by integrating equation \eqref{eq:nonlocal-AC-multiD} in $\Omega$ and using the no-flux boundary conditions we infer
\begin{equation*}
m(t):=\int_\Omega u(\bm x,t)\,dx=\int_\Omega u(\bm x,0)\,dx, \qquad \qquad \forall\,t\geq0.
\end{equation*}
Therefore, equation \eqref{eq:nonlocal-AC-multiD} is a reaction-diffusion equation with the important property that
the total mass is preserved in time and it was proposed as a simpler alternative to the \emph{Cahn--Hilliard equation} \cite{Cahn-Hill}
\begin{equation}\label{eq:CH-multiD}
u_t=-\Delta\left(\Delta u+f(u)\right).
\end{equation}
Let us briefly compare the mass conserving Allen--Cahn equation \eqref{eq:nonlocal-AC-multiD}
with the Allen--Cahn \eqref{eq:AC-multiD} and Cahn--Hilliard \eqref{eq:CH-multiD} equations (for details see \cite{Bron-Stoth,MurRin,RubSte}).
Like \eqref{eq:AC-multiD}, equation \eqref{eq:nonlocal-AC-multiD} is a second order PDE and can be seen as the gradient flow in $L^2$ of the functional
\begin{equation*}
E[u]:=\int_\Omega\left[\frac12|\nabla u|^2+F(u)\right]\,dx.
\end{equation*}
More precisely, the solutions of equations \eqref{eq:nonlocal-AC-multiD}-\eqref{eq:AC-multiD} with no-flux boundary conditions satisfy
\begin{equation*}
\frac{d}{dt}E[u](t)=-\int_\Omega u_t^2(\bm x,t)\,dx.
\end{equation*}
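For instance, for equation \eqref{eq:nonlocal-AC-multiD} this identity follows from integration by parts together with the conservation of mass:
\begin{equation*}
\frac{d}{dt}E[u](t)=\int_\Omega u_t\left(-\Delta u-f(u)\right)\,dx
=-\int_\Omega u_t^2\,dx-\lambda_f\int_\Omega u_t\,dx=-\int_\Omega u_t^2\,dx,
\end{equation*}
since $\displaystyle\int_\Omega u_t\,dx=m'(t)=0$.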
In contrast with \eqref{eq:AC-multiD}, equation \eqref{eq:nonlocal-AC-multiD} conserves the mass and its stationary solutions are the same as those of \eqref{eq:CH-multiD}.
In particular, notice that the only constant equilibria for \eqref{eq:AC-multiD} are the zeros of $f$,
while all the constants $c\in\mathbb{R}$ are equilibria for \eqref{eq:nonlocal-AC-multiD} and \eqref{eq:CH-multiD}.
Nonetheless, the behavior of the solutions to the three equations \eqref{eq:nonlocal-AC-multiD}-\eqref{eq:AC-multiD}-\eqref{eq:CH-multiD} is rather different.
It is impossible to mention all the results, but we briefly recall that the solutions of the one-dimensional Allen--Cahn equation exhibit the phenomenon of \emph{metastability},
with persistence of unstable structures for an exponentially long time \cite{Bron-Kohn,Carr-Pego,Carr-Pego2,Chen,Fusco-Hale},
while in the multidimensional case, equation \eqref{eq:AC-multiD} is strictly related to the motion by mean curvature flow \cite{Bron-Kohn2,Chen2,deM-Sch}.
Roughly speaking, if we add a small diffusion coefficient $\varepsilon^2$ in \eqref{eq:AC-multiD} and consider an initial datum with
finitely many sign changes in $\Omega$, then in a first phase, the solution $u$ behaves as if there were no diffusion and develops steep interfaces;
after that, diffusion plays a crucial role and it is very interesting to study the propagation of the \emph{interface} $\Gamma_t:=\left\{\bm x\in\Omega : u(\bm x,t)=0\right\}$.
In the one-dimensional case, $\Gamma_t$ consists of a finite number of points and they move with an exponentially small velocity $\mathcal{O}(\exp(-C/\varepsilon))$ as $\varepsilon\to0^+$;
in the multi-dimensional case the interface moves by mean curvature flow and its velocity is of order $\varepsilon^2$.
It is very interesting to study the propagation of the interface also when the mass is conserved:
for the one-dimensional case, we recall the contributions \cite{ReyWar,SunWard} and \cite{Bates-Xun1,Bates-Xun2},
where the authors study the metastable dynamics of the solutions for the mass conserving Allen--Cahn and the Cahn--Hilliard equations, respectively.
In the multi-dimensional case, we mention \cite{Bron-Stoth,ChHiLo,MurRin} for \eqref{eq:nonlocal-AC-multiD}
and \cite{AlFu,AlFuKa,Pego} for \eqref{eq:CH-multiD}.
In this paper, we are interested in studying the interface motion for some hyperbolic variations of the one-dimensional version of \eqref{eq:nonlocal-AC-multiD}
and in Sections \ref{sec:st-main}-\ref{sec:layerdyn} we describe in detail the layer dynamics for \eqref{eq:hyp-nonlocal},
comparing it with equations \eqref{eq:nonlocal-AC-multiD}, \eqref{eq:AC-multiD} and \eqref{eq:CH-multiD}.
In the next section, we introduce the hyperbolic variation \eqref{eq:hyp-nonlocal} of the mass conserving Allen--Cahn equation.
\subsection{Hyperbolic mass conserving Allen--Cahn equation}
In the previous section, we discussed some properties of the mass conserving Allen--Cahn equation and the link with the classical Allen--Cahn and Cahn--Hilliard equations.
In the past years, hyperbolic variations of the classical versions \eqref{eq:AC-multiD}-\eqref{eq:CH-multiD}
have been proposed to avoid some unphysical behavior of the solutions.
First, (parabolic) reaction-diffusion equations of the form \eqref{eq:AC-multiD} are subject to the same criticism as the linear diffusion equation,
mainly concerning the infinite speed of propagation of disturbances and the lack of inertia.
Hence, following some ideas developed by Maxwell in the context of kinetic theories,
Cattaneo \cite{Cat} proposed a relaxation law instead of the classic Fourier (or Fick) law,
leading to a hyperbolic reaction-diffusion equation (see \cite{JP89a,JP89b}, \cite{FLM17} and references therein).
Second, following the classical Maxwell--Cattaneo modification of Fick's diffusion law,
Galenko \cite{Galenko} proposed a hyperbolic relaxation of \eqref{eq:CH-multiD} in order to describe the early stages of spinodal decomposition
in certain glasses (among others, see \cite{FLMpre} and references therein).
Here, following the same ideas of \cite{Cat} and \cite{Galenko},
we consider a hyperbolic variant of equation \eqref{eq:nonlocal-AC-multiD},
which is obtained by using the Maxwell-Cattaneo law, instead of the classic Fick law.
A generic reaction-diffusion equation of the form \eqref{eq:nonlocal-AC-multiD} can be obtained from the continuity equation
\begin{equation}\label{eq:continuity}
u_t+\nabla\cdot\bm v=f(u)-\lambda_f,
\end{equation}
where $\bm v$ is the flux of $u$, and the Fick (or Fourier) law
\begin{equation}\label{eq:Fick}
\bm v=-\nabla u.
\end{equation}
By substituting \eqref{eq:Fick} into \eqref{eq:continuity}, one obtains equation \eqref{eq:nonlocal-AC-multiD}.
Therefore, equation \eqref{eq:nonlocal-AC-multiD} is a consequence of the instantaneous equilibrium between the flux $\bm v$ and $-\nabla u$ given by \eqref{eq:Fick}.
On the other hand, one can think that such equilibrium is not instantaneous but delayed, namely we assume that there exists $\tau>0$ such that
\begin{equation*}
\bm v(\bm x,t+\tau)=-\nabla u(\bm x,t), \qquad \qquad \forall\,\bm x\in\Omega, \, t>0.
\end{equation*}
By taking $\bm v+\tau \bm v_t$ as a first-order approximation of $\bm v(t+\tau)$, we obtain the \emph{Maxwell--Cattaneo law}
\begin{equation}\label{eq:Max-Cat}
\tau\bm v_t+\bm v=-\nabla u, \qquad \qquad \tau>0,
\end{equation}
which has been proposed to describe heat propagation by conduction with finite speed \cite{Cat}, \cite{JP89a,JP89b}.
Indeed, in the case $f=0$, the system \eqref{eq:continuity}-\eqref{eq:Fick} becomes the linear diffusion equation (heat equation) and it is well known that
the latter allows infinite speed of propagation of disturbances: a small perturbation at a point $\bm{x}_0$ instantaneously changes
the solution $u$ at every point $\bm x$ of the domain $\Omega$.
The relaxation law \eqref{eq:Max-Cat} has been proposed in order to avoid this unphysical property and to take into account inertial effects.
The parameter $\tau$ is a relaxation time and describes the time taken by the flux $\bm v$ to relax to $-\nabla u$.
Using the constitutive equation \eqref{eq:Max-Cat} instead of \eqref{eq:Fick}, we obtain the system
\begin{equation*}
\begin{cases}
u_t+\nabla\cdot\bm v=f(u)-\lambda_f,\\
\tau\bm v_t+\nabla u=-\bm v.
\end{cases}
\end{equation*}
To obtain a single equation for $u$, let us multiply the first equation by $\tau$ and differentiate it with respect to time,
and take the divergence of the second one;
we deduce the following \emph{mass-conserving reaction-diffusion equation with relaxation}
\begin{equation}\label{eq:hyp-nonlocal-multiD}
\tau u_{tt}+\left\{u-\tau f(u)+\tau\lambda_f\right\}_t=\Delta u+f(u)-\lambda_f.
\end{equation}
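Explicitly, applying $\tau\partial_t$ to the first equation and the divergence to the second one gives
\begin{equation*}
\tau u_{tt}+\tau\nabla\cdot\bm v_t=\tau\left\{f(u)-\lambda_f\right\}_t,
\qquad\qquad
\tau\nabla\cdot\bm v_t=-\Delta u-\nabla\cdot\bm v;
\end{equation*}
substituting $\nabla\cdot\bm v=f(u)-\lambda_f-u_t$ from the first equation of the system and eliminating $\tau\nabla\cdot\bm v_t$, one recovers \eqref{eq:hyp-nonlocal-multiD}.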
In the rest of the paper, we consider a more general version of \eqref{eq:hyp-nonlocal-multiD} in $[0,1]$:
for $G:\mathbb{R}\to\mathbb{R}$, we consider the equation
\begin{equation*}
\tau u_{tt}+\left\{G(u)+\int_0^1\left[u-G(u)\right]\,dx\right\}_t=\varepsilon^2u_{xx}+f(u)-\int_0^1f(u)\,dx.
\end{equation*}
Notice that, by expanding the time derivative, one obtains equation \eqref{eq:hyp-nonlocal} with $g=G'$.
The main examples we have in mind are $g\equiv1$, which corresponds to
\begin{equation*}
\tau u_{tt}+u_t=\varepsilon^2 u_{xx}+f(u)-\int_0^1f(u)\,dx,
\end{equation*}
and the relaxation case $g(u)=1-\tau f'(u)$, which corresponds to
\begin{equation*}
\tau u_{tt}+\{1-\tau f'(u)\}u_t+\tau\int_0^1f'(u)u_t\,dx= \varepsilon^2u_{xx}+f(u)-\int_0^1f(u)\,dx.
\end{equation*}
In the latter case, once the reaction term $f$ is fixed, assumption \eqref{eq:ass-g} imposes a restriction on the parameter $\tau$, which must satisfy
\begin{equation*}
0<\tau<\frac{1}{\max_{u\in\mathbb{R}} f'(u)}.
\end{equation*}
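For the typical example $F(u)=\frac14(u^2-1)^2$ the restriction is explicit: since $f(u)=u-u^3$,
\begin{equation*}
\max_{u\in\mathbb{R}}f'(u)=\max_{u\in\mathbb{R}}\left(1-3u^2\right)=f'(0)=1,
\end{equation*}
and assumption \eqref{eq:ass-g} holds for every $0<\tau<1$.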
Further details on the laws \eqref{eq:Fick}, \eqref{eq:Max-Cat} and other choices of the damping coefficient $g$,
corresponding to different modifications of Fick's law, can be found in \cite{LMPS}.
As we will see in Section \ref{sec:st-main}, in general the solutions to the hyperbolic version \eqref{eq:hyp-nonlocal} do not conserve the mass.
However, imposing the following condition on the initial velocity,
\begin{equation}\label{eq:ass-u1}
\int_0^1u_1(x)\,dx=0,
\end{equation}
we obtain conservation of the mass and \eqref{eq:hyp-nonlocal} possesses the energy functional
\begin{equation}\label{eq:energy}
E[u,u_t](t):=\int_0^1\left[\frac\tau2u^2_t(x,t)+\frac{\varepsilon^2}2 u_x^2(x,t)+F(u(x,t))\right]\,dx.
\end{equation}
More precisely, the assumptions \eqref{eq:ass-g} and \eqref{eq:ass-u1} imply that if $u$ is a solution to \eqref{eq:hyp-nonlocal}
with boundary conditions \eqref{eq:Neumann}, then (see Lemma \ref{lem:energy-estimate})
\begin{equation*}
\frac{d}{dt}E[u,u_t](t)\leq-\sigma\int_0^1 u_t^2(x,t)\,dx.
\end{equation*}
Therefore, when the initial velocity is a function of zero mean,
we have a hyperbolic reaction-diffusion equation with the property that the total mass is preserved in time
and with the energy functional \eqref{eq:energy}, which has been used in \cite{JHDE2017} to study hyperbolic reaction-diffusion equations and in \cite{FLM19} to prove
exponentially slow motion for some solutions to a hyperbolic relaxation of the Cahn--Hilliard equation.
In this paper, we assume that \eqref{eq:ass-u1} is satisfied and then we study the metastable dynamics of the solutions when the mass is conserved.
It is worth stressing that, by using the energy functional \eqref{eq:energy} and adapting the procedure of \cite{FLM19},
one can prove the exponentially slow motion of the solutions also without the assumption \eqref{eq:ass-u1} (see Section \ref{sec:energyapp}).
On the contrary, the strict positivity \eqref{eq:ass-g} of the damping coefficient $g$ is crucial,
because it guarantees the dissipative character of equation \eqref{eq:hyp-nonlocal}.
We conclude this Introduction with a short presentation of the main results of this paper.
First of all, we shall prove that there exists an \emph{approximately invariant manifold} $\mathcal{M}_{{}_0}$ for the IBVP \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial}.
Precisely, the manifold $\mathcal{M}_{{}_0}$ is not invariant, but we will construct a tubular neighborhood (slow channel) of $\mathcal{M}_{{}_0}$ satisfying the following property:
any solution to \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial} starting from such a slow channel can leave it
only after an exponentially long time, i.e.\ a time of $\mathcal{O}(\exp(C/\varepsilon))$ as $\varepsilon\to0^+$.
Moreover, inside the slow channel the solution is a function with a finite number ($N>1$) of transitions between the minimum points $\pm1$ of the potential $F$;
we shall derive a system of ODEs which describes the motion of the layers inside the slow channel, and as a consequence
the dynamics of the solution to \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial}.
Summarizing, we shall prove that the phenomenon of metastability is also present in the case of \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}:
some solutions maintain for a very long time an unstable structure with $N>1$ transitions and we describe in detail the exponentially slow motion of the layers.
The approach we use here can also be adapted to study the mass conserving Allen--Cahn equation \eqref{eq:maco-AC}
in order to obtain similar results on the metastable dynamics of the solutions:
existence of an approximately invariant manifold and derivation of the ODEs for the layers.
To the best of our knowledge, the only papers devoted to the metastability for the mass conserving Allen--Cahn equation \eqref{eq:maco-AC}
are \cite{ReyWar,SunWard}, where the authors use formal asymptotic methods and impose the conservation of mass
to derive a system of ODEs describing the layer dynamics for \eqref{eq:maco-AC}.
Then, they compare these asymptotic results with corresponding full numerical results.
As we will see in Sections \ref{sec:st-main} and \ref{sec:layerdyn}, by using a different approach,
we derive a system of ODEs describing the layer dynamics for \eqref{eq:hyp-nonlocal}
and in the limit $\tau\to0^+$, $g\to1$, we obtain the same system as in \cite{ReyWar,SunWard}.
The rest of the paper is organized as follows.
In Section \ref{sec:st-main} we present our main results.
First, we state Theorem \ref{thm:main}, which establishes the existence of a slow channel for \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}
and, as a consequence, the existence of an approximately invariant manifold $\mathcal{M}_{{}_0}$ for \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}.
Second, we present the system of ODEs which describes the motion of the layers.
In Section \ref{sec:base}, we collect some preliminary results needed to prove our main results;
in particular, we introduce a new system of coordinates for functions close to the manifold $\mathcal{M}_{{}_0}$.
Finally, Section \ref{sec:slow} contains the proof of Theorem \ref{thm:main} and
in Section \ref{sec:layerdyn} we derive the ODEs describing the layer dynamics.
\section{Main results}\label{sec:st-main}
The goal of this section is to present the main results of the paper.
Before doing this, we prove some properties of the solution to the IBVP \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial},
valid for a generic reaction term $f$, which are consequences of the assumption \eqref{eq:ass-u1}.
Moreover, we present some energy estimates, which yield persistence of metastable patterns for an exponentially long time as $\varepsilon\to0^+$ in the case of a balanced bistable reaction term, i.e.\ a reaction term $f=-F'$ with $F$ satisfying \eqref{eq:ass-F}.
\subsection{Mass conservation and energy estimates}\label{sec:energyapp}
By integrating \eqref{eq:hyp-nonlocal} in $[0,1]$ and using the homogeneous Neumann boundary conditions \eqref{eq:Neumann},
we deduce the following ODE for the mass $m(t):=\displaystyle\int_0^1 u(x,t)\,dx$:
\begin{equation}\label{eq:ODEmass}
\tau m''(t)+m'(t)=0, \qquad m(0)=\int_0^1 u_0(x)\,dx, \qquad m'(0)=\int_0^1u_1(x)\,dx,
\end{equation}
and, as a consequence, $m(t)=m(0)+\tau m'(0)(1-\exp(-t/\tau))$.
It follows that the mass is conserved, i.e. $m(t)\equiv m(0)$, if and only if \eqref{eq:ass-u1} holds.
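The closed form for $m(t)$ and the equivalence between mass conservation and the zero-mean condition \eqref{eq:ass-u1} can be verified numerically. The following Python sketch (not part of the paper's argument; all numerical values are illustrative) checks the closed form against a finite-difference residual of the ODE \eqref{eq:ODEmass}:

```python
import numpy as np

def mass(t, m0, m1, tau):
    # Closed-form solution of tau*m'' + m' = 0, m(0) = m0, m'(0) = m1.
    return m0 + tau * m1 * (1.0 - np.exp(-t / tau))

tau, m0, m1 = 0.3, 0.7, 0.2            # illustrative values
t = np.linspace(0.0, 5.0, 2001)
m = mass(t, m0, m1, tau)

# Central differences approximate m' and m''; the residual tau*m'' + m'
# of the closed form should vanish up to discretization error.
dt = t[1] - t[0]
mp = np.gradient(m, dt)
mpp = np.gradient(mp, dt)
residual = np.max(np.abs(tau * mpp + mp)[2:-2])   # drop one-sided end stencils

# With m1 = 0 (zero-mean initial velocity, i.e. the assumption on u_1)
# the mass stays constant for all times.
conserved = np.allclose(mass(t, m0, 0.0, tau), m0)
```

Note also that $m(t)\to m(0)+\tau m'(0)$ as $t\to\infty$, so a nonzero-mean initial velocity shifts the mass by exactly $\tau m'(0)$.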
Another consequence of the assumption \eqref{eq:ass-u1} is that if $g$ is a strictly positive function \eqref{eq:ass-g},
then the energy defined in \eqref{eq:energy} is a non-increasing function of $t$ along the solutions to \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}.
Precisely, we have the following energy estimates.
\begin{lem}\label{lem:energy-estimate}
Assume that $g$ satisfies \eqref{eq:ass-g}.
If $(u,u_t)\in C\left([0,T],H^2(0,1)\times H^1(0,1)\right)$ is a solution to \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial} for some $T>0$,
with $u_1$ satisfying \eqref{eq:ass-u1}, then
\begin{equation}\label{eq:energy-dissipation}
\frac{d}{dt} E[u,u_t](t)\leq-\sigma\int_0^1 u_t^2(x,t)\,dx,
\end{equation}
for any $t\in[0,T]$.
\end{lem}
\begin{proof}
By differentiating with respect to $t$ the definition \eqref{eq:energy} and integrating by parts, we infer
\begin{equation*}
\frac{d}{dt} E[u,u_t](t)=\int_0^1 u_t(x,t)\left[\tau u_{tt}(x,t)-\varepsilon^2u_{xx}(x,t)-f(u(x,t))\right]\,dx,
\end{equation*}
where we used the homogeneous Neumann boundary conditions \eqref{eq:Neumann} and the fact that $F'=-f$.
Since $u$ is a solution to \eqref{eq:hyp-nonlocal}, we have
\begin{equation}\label{eq:energy-assu1}
\begin{aligned}
\frac{d}{dt} E[u,u_t](t)=&-\int_0^1 g(u(x,t))u_t(x,t)^2\,dx\\
&-m'(t)\int_0^1\Big\{\big[1-g(u(x,t))\big]u_t(x,t)+f(u(x,t))\Big\}\,dx,
\end{aligned}
\end{equation}
and the estimate \eqref{eq:energy-dissipation} follows from the assumptions \eqref{eq:ass-g}-\eqref{eq:ass-u1} and \eqref{eq:ODEmass}.
\end{proof}
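Lemma \ref{lem:energy-estimate} can also be observed numerically. The following Python sketch is a minimal illustration (assuming $g\equiv1$, a semi-implicit Euler scheme, and purely illustrative parameters; it is a sanity check, not part of the proof): the discrete mass stays constant and the discrete energy \eqref{eq:energy} decays.

```python
import numpy as np

def run(tau=0.1, eps=0.1, M=200, dt=5e-4, steps=2000):
    # Cell-centered grid on [0,1]; homogeneous Neumann BC via reflected ghosts.
    h = 1.0 / M
    x = (np.arange(M) + 0.5) * h
    F = lambda u: 0.25 * (u ** 2 - 1.0) ** 2   # double-well potential
    f = lambda u: u - u ** 3                   # f = -F'
    u = np.cos(np.pi * x)                      # Neumann-compatible initial datum
    w = np.zeros(M)                            # initial velocity u_1 with zero mean

    def laplacian(v):
        vg = np.concatenate(([v[0]], v, [v[-1]]))   # ghost cells enforce v_x = 0
        return (vg[2:] - 2.0 * vg[1:-1] + vg[:-2]) / h ** 2

    def energy(u, w):
        ux = np.diff(u) / h
        return h * (0.5 * tau * np.sum(w ** 2) + np.sum(F(u))
                    + 0.5 * eps ** 2 * np.sum(ux ** 2))

    E0, mass0 = energy(u, w), h * np.sum(u)
    for _ in range(steps):
        fu = f(u)
        # With g = 1 the nonlocal damping term vanishes; h*sum(fu) is the
        # discrete counterpart of the nonlocal reaction average.
        rhs = eps ** 2 * laplacian(u) + fu - h * np.sum(fu) - w
        w = w + (dt / tau) * rhs
        u = u + dt * w
    return E0, energy(u, w), mass0, h * np.sum(u)

E0, E_end, mass0, mass_end = run()
```

Here the zero-mean initial velocity corresponds to \eqref{eq:ass-u1}, so the discrete mass is conserved up to roundoff, while the energy dissipates as predicted by \eqref{eq:energy-dissipation}.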
\begin{rem}
In Lemma \ref{lem:energy-estimate}, we assume that there exists a sufficiently smooth solution to \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial}
and we prove the estimate \eqref{eq:energy-dissipation}.
Studying the well-posedness of the IBVP \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial} is beyond the scope of this paper
and in the following we assume that there exists a sufficiently smooth solution.
However, in the case of a strictly positive damping coefficient \eqref{eq:ass-g} and with initial velocity of zero-mean \eqref{eq:ass-u1},
one can extend to the IBVP \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial} the well-posedness results of \cite[Appendix A]{JHDE2017}.
\end{rem}
Thanks to the dissipative estimate \eqref{eq:energy-dissipation},
one can prove existence of metastable patterns for the boundary problem \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann},
by using the energy approach first introduced in \cite{Bron-Kohn} to study the classical Allen--Cahn equation
\begin{equation}\label{eq:AC}
u_t=\varepsilon^2u_{xx}+f(u),
\end{equation}
and then successfully applied to different models, like the \emph{hyperbolic Allen--Cahn equation}
\begin{equation}\label{eq:hypAC}
\tau u_{tt}+g(u)u_t=\varepsilon^2u_{xx}+f(u),
\end{equation}
and the \emph{hyperbolic Cahn--Hilliard equation}
\begin{equation}\label{eq:hypCH}
\tau u_{tt}+u_t=-\left(\varepsilon^2u_{xx}+f(u)\right)_{xx},
\end{equation}
for details see \cite{JHDE2017,FLM19} and references therein.
In the following, we briefly explain the strategy of such energy approach and how
to apply it to the IBVP \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial} when $F$ satisfies \eqref{eq:ass-F}.
Multiplying \eqref{eq:energy-dissipation} by $\varepsilon^{-1}$ and integrating in $[0,T]$, for any $T>0$, we deduce the estimate
\begin{equation}\label{eq:energy-variation}
\sigma\varepsilon^{-1}\int_0^T\!\int_0^1u_t^2(x,t)\,dxdt\leq E_\varepsilon[u_0,u_1]-E_\varepsilon[u,u_t](T),
\end{equation}
where $E_\varepsilon$ is the renormalized energy
\begin{equation*}
E_\varepsilon[u,u_t](t):=\frac{1}{\varepsilon}E[u,u_t](t):=\int_0^1\left[\frac\tau{2\varepsilon}u^2_t(x,t)+\frac{\varepsilon}2 u_x^2(x,t)+\frac{F(u(x,t))}\varepsilon\right]\,dx.
\end{equation*}
The main idea of the energy approach \cite{Bron-Kohn} is to derive an estimate for the $L^2$--norm of the time derivative $u_t$ from \eqref{eq:energy-variation} when $T\gg1$;
then, we need an \emph{upper bound} on $E_\varepsilon[u_0,u_1]$ and a \emph{lower bound} on $E_\varepsilon[u,u_t](T)$ for some $T$ very large when $\varepsilon\to0^+$.
For the upper bound, we can properly choose the initial datum $(u_0^\varepsilon,u_1^\varepsilon)$ (depending on $\varepsilon$):
fix $N\in\mathbb{N}$, $0<h_1<\dots<h_{N+1}<1$ and assume that
\begin{equation}\label{eq:ass-energyapp}
\lim_{\varepsilon\to0}\|u_0^\varepsilon-v\|_{{}_{L^1}}=0, \qquad \qquad E_\varepsilon[u_0^\varepsilon,u_1^\varepsilon]\leq (N+1)c_{{}_F}+C_1\exp(-C_2/\varepsilon),
\end{equation}
where $v:[0,1]\to\{-1,+1\}$ is a step function with exactly $N+1$ jumps at $h_1<\dots<h_{N+1}$, the constants $C_1,C_2$ are strictly positive and independent of $\varepsilon$, and
\begin{equation}\label{eq:c_F}
c_{{}_F}:=\displaystyle\int_{-1}^1\sqrt{2F(s)}\,ds
\end{equation}
represents the minimum energy to have a transition between $-1$ and $+1$ \cite{Bron-Kohn,JHDE2017,FLM19}.
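For the typical example $F(u)=\frac14(u^2-1)^2$, the constant \eqref{eq:c_F} can be computed explicitly:
\begin{equation*}
c_{{}_F}=\int_{-1}^1\sqrt{2F(s)}\,ds=\frac1{\sqrt2}\int_{-1}^1\left(1-s^2\right)ds=\frac{2\sqrt2}{3}.
\end{equation*}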
An example of initial data satisfying \eqref{eq:ass-energyapp} can be found in \cite{FLM19}.
Concerning the lower bound, it could be obtained by proceeding as in \cite{JHDE2017,FLM19}, because the energy functional $E_\varepsilon$ is the same.
In particular, the lower bound is a consequence of a variational result on the Ginzburg--Landau functional
\begin{equation*}
\int_0^1\left[\frac{\varepsilon}2 u_x^2+\frac{F(u)}\varepsilon\right]\,dx,
\end{equation*}
and it reads as
\begin{equation*}
E_\varepsilon[u,u_t](\varepsilon^{-1}T_\varepsilon)\geq(N+1)c_{{}_F}-C_3\exp(-C_2/\varepsilon),
\end{equation*}
where $T_\varepsilon=\mathcal{O}(\exp(C_2/\varepsilon))$.
Substitution of the latter lower bound and assumption \eqref{eq:ass-energyapp} in the key estimate \eqref{eq:energy-variation} yields the bound
\begin{equation}\label{eq:ut-energyapp}
\int_0^{\varepsilon^{-1}T_\varepsilon}\!\!\int_0^1u_t^2(x,t)\,dxdt\leq C\varepsilon\exp(-C_2/\varepsilon),
\end{equation}
which permits us to prove that some solutions to \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann} maintain the same structure as the initial datum
for the time $T_\varepsilon$ as $\varepsilon\to0^+$; for details see \cite{Bron-Kohn,JHDE2017,FLM19}.
We stress again that the key point of the energy approach is the estimate \eqref{eq:energy-dissipation}, which implies \eqref{eq:energy-variation}.
It is worth noticing that the energy approach also works when the assumption \eqref{eq:ass-u1} on $u_1$ is not satisfied.
For simplicity, consider the case $g\equiv1$; from \eqref{eq:energy-assu1} it follows that
\begin{equation*}
\varepsilon^{-1}\int_0^T\!\int_0^1u_t^2(x,t)\,dxdt\leq E_\varepsilon[u_0,u_1]-E_\varepsilon[u,u_t](T)+C\|u_1\|_{{}_{L^1}}, \qquad \qquad \forall\,T>0,
\end{equation*}
and then, the estimate \eqref{eq:ut-energyapp} could be obtained as in \cite{JHDE2017,FLM19} by using the fact that $\|u_1\|_{{}_{L^1}}=\mathcal{O}(\exp(-C_2/\varepsilon))$.
\subsection{Approximately invariant manifold}
The goal of this paper is to study the metastable dynamics of the solutions to \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial},
by using the dynamical approach proposed by Carr--Pego \cite{Carr-Pego} and Fusco--Hale \cite{Fusco-Hale}
to describe the metastable dynamics of the solutions to \eqref{eq:AC} and then applied to the Cahn--Hilliard equation in \cite{Bates-Xun1,Bates-Xun2},
and to the hyperbolic variants \eqref{eq:hypAC}, \eqref{eq:hypCH} in \cite{FLM17}, \cite{FLMpre}, respectively.
To start with, we introduce some notations and definitions.
In all the paper we denote by $\|\cdot\|$ and $\langle\cdot,\cdot\rangle$ the norm and inner product in $L^2(0,1)$.
Moreover, in what follows we fix $N\in\mathbb{N}$ and define for $\rho>0$ the set of admissible layer positions
\begin{align*}
\Omega_\rho:=\Bigl\{\bm h\in\mathbb{R}^{N+1} \,:\, 0<h_1<\dots<h_{N+1}<1, \, \mbox{ and } \, h_{j+1}&-h_j>\varepsilon/\rho, \\
&\mbox{ for } j=0,\dots,N+1\Bigr\},
\end{align*}
where $h_0=-h_1$ and $h_{N+2}=2-h_{N+1}$, because of the homogeneous Neumann boundary conditions \eqref{eq:Neumann}.
Finally, we fix $\delta\in(0,1/(N+1))$, consider the parameters $\varepsilon$ and $\rho$ such that
\begin{equation}\label{eq:triangle}
\varepsilon\in(0,\varepsilon_0) \qquad \mbox{ and } \qquad \delta<\frac{\varepsilon}{\rho}<\frac{1}{N+1},
\end{equation}
for some $\varepsilon_0>0$ to be chosen appropriately small and we introduce the $(N+1)$--manifold
\begin{equation}\label{eq:M^AC}
\mathcal{M}^{AC}:=\{u^{\bm h} :\bm h\in\Omega_\rho\},
\end{equation}
where $u^{\bm h}$ is a function with $N+1$ transitions, which approximates a metastable pattern with layers at $h_1, \dots,h_{N+1}$.
The construction of $u^{\bm h}$ was introduced in \cite{Carr-Pego} and since
the metastable states are the same for the equations \eqref{eq:AC}, \eqref{eq:hypAC} and \eqref{eq:hypCH},
it was also used in \cite{Bates-Xun1,Bates-Xun2}, \cite{FLM17} and \cite{FLMpre}.
We give the precise definition of $u^{\bm h}$ in Section \ref{sec:base};
here we recall that $u^{\bm h}$ is approximately $\pm1$ except in an $\mathcal{O}(\varepsilon)$-neighborhood of $h_1,\dots,h_{N+1}$, namely
\begin{equation}\label{eq:uh-approx}
u^{\bm h}(x)\approx(-1)^j, \quad \mbox{for } x\in[h_{j-1}+\mathcal{O}(\varepsilon),h_j-\mathcal{O}(\varepsilon)]\cap[0,1] \quad \mbox{ and } \quad j=1,\dots,N+2,
\end{equation}
and $u^{\bm h}$ is well approximated by standing wave solutions to \eqref{eq:AC} in the $\mathcal{O}(\varepsilon)$-neighborhood of $h_j$ (for details see \cite[Proposition 2.2]{Carr-Pego}).
In \cite{Carr-Pego}, the authors show that the manifold $\mathcal{M}^{AC}$ is approximately invariant for the Allen--Cahn equation \eqref{eq:AC},
while in \cite{FLM17} it is proved that the \emph{extended} manifold
\begin{equation*}
\mathcal{M}^{AC}_{{}_0}:=\mathcal{M}^{AC}\times\{0\}=\{(u^{\bm h},0) :u^{\bm h}\in\mathcal{M}^{AC}\}
\end{equation*}
is approximately invariant for the hyperbolic variant \eqref{eq:hypAC}.
The mass conservation allows us to work with the manifolds
\begin{equation}\label{eq:M_0}
\mathcal{M}:=\left\{u^{\bm h}\in \mathcal{M}^{AC} : \, \int_0^1u^{\bm h}(x)\,dx=M\right\}, \qquad\;
\mathcal{M}_{{}_0}:=\{(u^{\bm h},0) :u^{\bm h}\in\mathcal{M}\},
\end{equation}
where $M\in(-1,1)$ represents the mass of the solution (the mass of the initial datum $u_0$).
The manifolds $\mathcal{M}$ and $\mathcal{M}_{{}_0}$ are approximately invariant for the Cahn--Hilliard equation (see \cite{Bates-Xun1})
and its hyperbolic variant \cite{FLMpre}, respectively.
Our goal is to prove that the \emph{base manifold} $\mathcal{M}_{{}_0}$ is also approximately invariant for \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}.
As we already mentioned, the fact that $\mathcal{M}$ is approximately invariant for \eqref{eq:maco-AC}
has not been proved in the literature, but it can be proved with the approach we use here.
To prove that $\mathcal{M}_{{}_0}$ is approximately invariant for equation \eqref{eq:hyp-nonlocal},
we shall construct a tubular neighborhood $\mathcal{Z}_{{}_{\rho}}$ of $\mathcal{M}_{{}_0}$ (see definition \eqref{eq:slowchannel})
and prove that if the initial datum $(u_0,u_1)\in\stackrel{\circ}{\mathcal{Z}}_{{}_{\rho}}$,
then the corresponding solution to the IBVP \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial} can leave $\mathcal{Z}_{{}_{\rho}}$
only after an exponentially long time.
\begin{thm}\label{thm:main}
Let $f\in C^2(\mathbb{R})$ and $g\in C^1(\mathbb{R})$ be such that $f=-F'$ and \eqref{eq:ass-g}-\eqref{eq:ass-F} hold.
Given $N\in\mathbb{N}$ and $\delta\in(0,1/(N+1))$, there exist $\varepsilon_0>0$ and a slow channel $\mathcal{Z}_{{}_{\rho}}$
containing $\mathcal{M}_{{}_{0}}$, such that if $\varepsilon,\rho$ satisfy \eqref{eq:triangle},
and the initial datum satisfies $(u_0,u_1)\in\,\stackrel{\circ}{\mathcal{Z}}_{{}_{\rho}}$,
then the solution $(u,u_t)$ to the initial-boundary value problem \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial}
remains in $\mathcal{Z}_{{}_{\rho}}$ for a time $T_\varepsilon>0$, and there exists $C>0$ such that
for any $t\in[0,T_\varepsilon]$,
\begin{align}
\varepsilon^{1/2}\|u-u^{\bm h}\|_{{}_{L^\infty}}+\|u-u^{\bm h}\|+\tau^{1/2}\|u_t\|&\leq C\exp(-A\ell^{\bm h}/\varepsilon), \label{eq:umenouh}\\
|{\bm h}'|_{{}_{\infty}} &\leq C\left(\varepsilon/\tau\right)^{1/2}\exp(-A\ell^{\bm h}/\varepsilon), \label{eq:|h'|<exp-intro}
\end{align}
where $A:=\sqrt{\min\{F''(-1),F''(1)\}}$, $\ell^{\bm h}:=\min\{h_j-h_{j-1}\}$ and $|\cdot|_{{}_{\infty}}$
denotes the maximum norm in $\mathbb{R}^N$.
Moreover,
\begin{equation*}
T_\varepsilon\geq C\left(\tau/\varepsilon\right)^{1/2}(\ell^{\bm h(0)}-\varepsilon/\rho)\exp(A\delta /\varepsilon).
\end{equation*}
\end{thm}
Thanks to the estimate \eqref{eq:umenouh} and the lower bound on $T_\varepsilon$ we can say that, for an exponentially long time,
the solution $u$ to the IBVP \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann}-\eqref{eq:initial} is well approximated by $u^{\bm h}\in\mathcal{M}$
and the $L^2$--norm of the time derivative $u_t$ is exponentially small as $\varepsilon\to0^+$.
Therefore, $u$ is a function with $N+1$ layers satisfying \eqref{eq:uh-approx}
and \eqref{eq:|h'|<exp-intro} ensures that the layers move with an exponentially small velocity.
Now, we briefly explain the strategy to prove Theorem \ref{thm:main}.
As in the case of the Cahn--Hilliard equation, we work with a different variable describing the position of the layers;
indeed, the manifold $\mathcal{M}$ is a constant mass submanifold of the $(N+1)$--manifold $\mathcal{M}^{AC}$ (cf. definitions \eqref{eq:M^AC}, \eqref{eq:M_0}),
and it can be parametrized by the first $N$ components of the vector $\bm h$, for details see \cite[Lemma 2.1]{Bates-Xun1}.
Then, we introduce the vector $\bm\xi=(h_1,\dots,h_N)$ consisting of the first $N$ components of $\bm h$
and we denote by $u^{\bm\xi}$ an element of $\mathcal{M}$.
Next, we introduce the decomposition $u=u^{\bm\xi}+w$, where the remainder $w$ is orthogonal to appropriate functions $\nu_j^{\bm\xi}$, i.e.
\begin{equation}\label{eq:ortho-sec2}
\langle w,\nu^{\bm\xi}_j\rangle=0, \qquad \qquad j=1,\dots,N.
\end{equation}
The choice of the functions $\nu_j^{\bm\xi}$ is crucial in our work and, as we will see in the definition \eqref{eq:newtangvec},
they are linear combinations of the approximate tangent vectors of $\mathcal{M}^{AC}$ introduced in \cite{Carr-Pego}.
Then, we prove that for any function $u$ having mass equal to $M$ and belonging to a small neighborhood of $\mathcal{M}$,
there exists a unique $u^{\bm\xi}\in\mathcal{M}$ (thus having the same mass as $u$) such that
$u=u^{\bm\xi}+w$, with $w$ satisfying the orthogonality condition \eqref{eq:ortho-sec2}, for details see Theorem \ref{thm:existence-coord}.
Therefore, we extend to the constant mass submanifold $\mathcal{M}$ the results valid for the $(N+1)$--manifold $\mathcal{M}^{AC}$ \cite{Carr-Pego}
and, as we will see in Section \ref{sec:slow}, such a decomposition plays a crucial role in the proof of Theorem \ref{thm:main}.
In particular, we derive an ODE-PDE coupled system \eqref{eq:system-w-v-xi} for the new coordinates $(\bm\xi,w)$ and study it in an appropriate slow channel.
By using some energy estimates, we prove that in $\mathcal{Z}_{{}_{\rho}}$ the estimates \eqref{eq:umenouh}-\eqref{eq:|h'|<exp-intro} hold
and the solution $u$ leaves $\mathcal{Z}_{{}_{\rho}}$ if and only if $\bm h\in\partial\Omega_\rho$,
meaning that $h_{j+1}-h_j=\varepsilon/\rho$ for some $j\in\{1,\dots,N+1\}$ (two transition points are close enough).
Since the layers move with an exponentially small velocity, the time taken for the solution to leave $\mathcal{Z}_{{}_{\rho}}$ is exponentially large.
\begin{rem}\label{rem:main-tau}
The appearance of the relaxation parameter $\tau>0$ in \eqref{eq:|h'|<exp-intro} and in the lower bound for $T_\varepsilon$ is a consequence of the estimate \eqref{eq:umenouh}.
Indeed, as we already mentioned, we first prove that in the slow channel the solution satisfies \eqref{eq:umenouh}-\eqref{eq:|h'|<exp-intro};
in particular, the velocity of the layers can be bounded by the quantity $\|u_t\|$, cf. Proposition \ref{prop:E>},
and as a consequence, $\tau$ appears in the denominator of the right hand side of \eqref{eq:|h'|<exp-intro} and in the lower bound for $T_\varepsilon$,
which is inversely proportional to the velocity of the layers.
Such a way to obtain the exponentially small velocity of the layers is due to the \emph{hyperbolic} character of the equation \eqref{eq:hyp-nonlocal}
(the presence of the inertial term $\tau u_{tt}$);
in the case of the classic Allen--Cahn, Cahn--Hilliard and mass conserving Allen--Cahn equations,
the exponentially small velocity could be directly obtained from the ODEs for the layers,
without using estimates on $\|u_t\|$ (cf. \cite{Carr-Pego}, \cite{Bates-Xun1} and Remark \ref{rem:energy-tau}).
\end{rem}
\subsection{ODE for the layers}
After proving Theorem \ref{thm:main}, in Section \ref{sec:layerdyn} we derive the system of ODEs
describing the layer dynamics, which read as
\begin{equation}\label{eq:ODE-hypnonlocal}
\tau h''_j+\gamma_{{}_{F,g}} h'_j=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{j+1}-\alpha^j+\frac{(-1)^{j+1}}{N+1}\sum_{i=1}^{N+1}(-1)^i(\alpha^{i+1}-\alpha^i)\right),
\end{equation}
for $j=1,\dots,N+1$, where $\gamma_{{}_{F,g}}$ is a positive constant depending only on $F$ and $g$ (see definition below), $c_{{}_F}$ is defined in \eqref{eq:c_F} and $\alpha^j$ depends on $\varepsilon$, $F$ and $\bm h$.
In particular, the term $\alpha^{j+1}-\alpha^j$ determines the speed of the transition point $h_j$
in the case of the classical Allen--Cahn equation \eqref{eq:AC} (see \cite[Section 6]{Carr-Pego}), that is
\begin{equation}\label{eq:ODE-AC}
h'_j=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{j+1}-\alpha^j\right), \qquad \qquad j=1,\dots,N+1.
\end{equation}
The system \eqref{eq:ODE-AC}, which describes the layer dynamics in the case of \eqref{eq:AC}, has been derived and studied in detail in \cite[Section 6]{Carr-Pego};
here, we stress that the velocity of $h_j$ is exponentially small and
depends only on the distances to the nearest layers $h_{j-1}$ and $h_{j+1}$.
Precisely, we recall (see Proposition \ref{prop:alfa,beta}) that if $F$ is an even function, then
\begin{equation*}
\alpha^j=K\exp\left(-\frac{Al_j}{\varepsilon}\right)\left\{1+\mathcal{O}\left(\varepsilon^{-1}\exp\left(-\frac{Al_j}{2\varepsilon}\right)\right)\right\}, \qquad \qquad j=1,\dots,N+1,
\end{equation*}
for some $K>0$, where $A:=\sqrt{F''(\pm1)}$ and $l_j:=h_j-h_{j-1}$.
Hence, the layer dynamics of \eqref{eq:AC} is described by the ODEs
\begin{equation*}
h'_j=\frac{\varepsilon K}{c_{{}_F}}\left[\exp\left\{-\frac{A(h_{j+1}-h_j)}{\varepsilon}\right\}-\exp\left\{-\frac{A(h_j-h_{j-1})}{\varepsilon}\right\}\right],
\end{equation*}
for $j=1,\dots,N+1$.
Moreover, one has
\begin{equation*}
\frac{\alpha^j}{\alpha^i}\leq C\exp\left(-\frac{A}{\varepsilon}(l_j-l_i)\right),
\end{equation*}
for some $C>0$, and if $l_j-l_i\geq \kappa$ for some $\kappa>0$, we deduce
\begin{equation*}
\alpha^j\leq C\exp\left(-\frac{A\kappa}{\varepsilon}\right)\alpha^i.
\end{equation*}
Therefore, if $l_j>l_i$ then $\alpha^j<\alpha^i$, and for $\varepsilon/\kappa\ll1$, $\alpha^j$ is \emph{exponentially small} with respect to $\alpha^i$.
Such properties of $\alpha^j$ allow us to briefly describe the layer dynamics for \eqref{eq:AC} as follows.
For simplicity, assume that there exists a unique $i\in\{1,\dots,N\}$ such that
\begin{equation}\label{eq:ass-i}
h_{i+1}-h_i<h_{j+1}-h_j, \qquad \qquad j\neq i,\quad j=0,\dots,N+1,
\end{equation}
meaning that $h_i$ and $h_{i+1}$ are the closest layers, with $i\neq0,N+1$.
In this case, $h_i$ and $h_{i+1}$ move towards each other with approximately the same speed and the other $N-1$ points are essentially static,
since $\alpha^{i+1}\gg\alpha^j$ for $\varepsilon\ll1$ and $j\neq i+1$.
In the case of equation \eqref{eq:maco-AC}, the situation is different because of the mass conservation.
Taking (formally) the limit as $\tau\to0^+$ and $\gamma_{{}_{F,g}}\to1$ in \eqref{eq:ODE-hypnonlocal}, we find the ODEs
\begin{equation}\label{eq:ODE-nonlocalAC}
h'_j=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{j+1}-\alpha^j+ \frac{(-1)^{j+1}}{N+1}\sum_{i=1}^{N+1}(-1)^i(\alpha^{i+1}-\alpha^i)\right), \qquad j=1,\dots,N+1,
\end{equation}
which describe the dynamics in the case of the mass conserving Allen--Cahn equation \eqref{eq:maco-AC} and were originally proposed in \cite{ReyWar,SunWard}.
Therefore, \eqref{eq:ODE-nonlocalAC} contains new terms with respect to \eqref{eq:ODE-AC},
which take into account the effects of the mass conservation and notably change the motion of the layers.
Indeed, let us assume for definiteness that \eqref{eq:ass-i} holds, $F$ is an even function as above,
and compare equations \eqref{eq:ODE-AC}-\eqref{eq:ODE-nonlocalAC}:
we have that the biggest term $\alpha^{i+1}$ appears in $h'_j$ for any $j=1,\dots,N+1$ in \eqref{eq:ODE-nonlocalAC},
and so, all the layers approximately move with the same exponentially small velocity as $\varepsilon\to0^+$.
This is in contrast with \eqref{eq:ODE-AC}, where (as it was already mentioned) the two closest layers move towards each other and the other points are essentially static.
For instance, in the case $N=1$ ($2$ layers), \eqref{eq:ODE-nonlocalAC} becomes
\begin{equation*}
h'_1=h'_2=\frac{\varepsilon}{2c_{{}_F}}\left(\alpha^3-\alpha^1\right),
\end{equation*}
and the two layers move together in an almost rigid way, that is, they move in the same direction at the same speed.
Precisely, $h_1$ and $h_2$ move to the right if and only if $\alpha^3>\alpha^1$, meaning that $1-h_2<h_1$.
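This equivalence can be checked directly at leading order: assuming for simplicity that $F$ is even, Proposition \ref{prop:alfa,beta} gives $\alpha^j\approx K\exp(-Al_j/\varepsilon)$, with $l_1=2h_1$ and $l_3=2(1-h_2)$, so that
\begin{equation*}
\alpha^3>\alpha^1 \quad \Longleftrightarrow \quad \exp\left(-\frac{2A(1-h_2)}{\varepsilon}\right)>\exp\left(-\frac{2Ah_1}{\varepsilon}\right) \quad \Longleftrightarrow \quad 1-h_2<h_1.
\end{equation*}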
In the case $N=1$, the layer dynamics is very similar to that of the Cahn--Hilliard equation, see \cite{Bates-Xun2} or \cite{FLMpre}.
We stress that the dynamics is very different from that of the Allen--Cahn equation \eqref{eq:AC};
indeed, for $N=1$ \eqref{eq:ODE-AC} becomes
\begin{equation*}
h'_1=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^2-\alpha^1\right), \qquad\qquad
h'_2=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^3-\alpha^2\right),
\end{equation*}
and the layers either move towards each other with speed approximately given by $\varepsilon\alpha^2$ (if $h_2-h_1<2\min\{h_1,1-h_2\}$)
or one of the two layers moves towards the closest boundary point ($0$ or $1$) and the other one is essentially static for $\varepsilon$ very small.
In the case $N=2$ (3 layers), \eqref{eq:ODE-nonlocalAC} becomes
\begin{align*}
h'_1&=\frac{\varepsilon}{3c_{{}_F}}\left(-2\alpha^1+\alpha^2+2\alpha^3-\alpha^4\right),\\
h'_2&=\frac{\varepsilon}{3c_{{}_F}}\left(-\alpha^1-\alpha^2+\alpha^3+\alpha^4\right),\\
h'_3&=\frac{\varepsilon}{3c_{{}_F}}\left(\alpha^1-2\alpha^2-\alpha^3+2\alpha^4\right),
\end{align*}
and we have 3 points moving with approximately the same speed as $\varepsilon\to0^+$;
precisely, two points move with speed satisfying $|h'_i|\approx\varepsilon\alpha^j$ for some $j\in\{1,2,3\}$ and
the speed $v$ of the third one satisfies $|v|\approx2\varepsilon\alpha^j$ as $\varepsilon\to0^+$.
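For instance, if \eqref{eq:ass-i} holds with $i=1$, so that $h_1$ and $h_2$ are the closest layers, then $\alpha^2$ is the dominant term and, neglecting the exponentially smaller terms $\alpha^1,\alpha^3,\alpha^4$, the system above reduces at leading order to
\begin{equation*}
h'_1\approx\frac{\varepsilon\,\alpha^2}{3c_{{}_F}}, \qquad\qquad h'_2\approx-\frac{\varepsilon\,\alpha^2}{3c_{{}_F}}, \qquad\qquad h'_3\approx-\frac{2\varepsilon\,\alpha^2}{3c_{{}_F}}.
\end{equation*}
Hence, $h_1$ and $h_2$ move towards each other, while $h_3$ moves to the left twice as fast: the shrinking of the interval $(h_1,h_2)$, where $u\approx+1$, is compensated at leading order by the enlargement of the interval $(h_3,1)$, where $u\approx+1$ as well, coherently with the conservation of the mass.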
This is very different from the layer dynamics of the classical Allen--Cahn equation, described by \eqref{eq:ODE-AC},
and the Cahn--Hilliard equation \cite{Bates-Xun2,FLMpre}, described by
\begin{align*}
h'_1&=\frac{1}{4(h_2-h_1)}\left(\alpha^3-\alpha^1\right), \\
h'_2&=\frac{1}{4(h_2-h_1)}\left(\alpha^3-\alpha^1\right)+\frac{1}{4(h_3-h_2)}\left(\alpha^4-\alpha^2\right),\\
h'_3&=\frac{1}{4(h_3-h_2)}\left(\alpha^4-\alpha^2\right).
\end{align*}
Indeed, for the classical Allen--Cahn equation we have either one point moving towards the closest boundary point and
the other two essentially static or two points moving towards each other and the third one essentially static;
for the Cahn--Hilliard equation, we have two transitions points moving in the same direction
at approximately the same speed and the third one is essentially static as $\varepsilon\to0^+$.
To conclude this comparison between the layer dynamics of the Allen--Cahn, Cahn--Hilliard and mass-conserving Allen--Cahn equations,
we recall \cite{Bates-Xun2,FLMpre} that, in the case of the Cahn--Hilliard equation with $N\geq3$ and condition \eqref{eq:ass-i} satisfied with $i\in\{2,\dots,N-1\}$,
we have four points moving at approximately the same speed, while all the other layers remain essentially stationary in time.
Precisely, we have
\begin{equation*}
h'_{i-1}>0, \; h'_i>0, \; h'_{i+1}<0, \; h'_{i+2}<0, \; h'_j=\mathcal{O}(e^{-C/\varepsilon}h'_i) \; \mbox{ for } j\notin\{i-1,i,i+1,i+2\},
\end{equation*}
and so, the closest layers move towards each other, each being followed by its nearest transition point from ``behind'',
at approximately the same speed, until the points $h_i$ and $h_{i+1}$ are close enough.
Hence, the loss of mass due to the annihilation of the transitions at $h_i$ and $h_{i+1}$ is compensated by the movement of the nearest neighbors $h_{i-1}$ and $h_{i+2}$.
This is the main difference with respect to the mass conserving Allen--Cahn equation, where the loss of mass is compensated by the movements of all the layers.
There are two cases when the layer dynamics of the mass conserving Allen--Cahn and the Cahn--Hilliard equations are similar:
the previously mentioned case with 2 layers and when we have 4 layers with the closest ones $h_2$ and $h_3$.
Indeed, in such a case, we have 4 layers approximately moving at the same speed in both the mass conserving Allen--Cahn and Cahn--Hilliard equations.
Therefore, we conclude that, under assumption \eqref{eq:ass-i},
the layer dynamics in the case of equation \eqref{eq:maco-AC} is always different with respect to equation \eqref{eq:AC},
while it is similar to the Cahn--Hilliard equation only in the case of 2 layers and 4 layers with $i=2$ in \eqref{eq:ass-i}.
Some numerical experiments comparing the layer dynamics of the mass conserving Allen--Cahn and the Cahn--Hilliard equations can be found in \cite{SunWard}.
In the hyperbolic framework \eqref{eq:hyp-nonlocal}, the right hand side of the ODE \eqref{eq:ODE-hypnonlocal} is the same as in \eqref{eq:ODE-nonlocalAC},
while in the left hand side we have two novelties: the second time derivative $\tau h''_j$ and the coefficient $\gamma_{{}_{F,g}}$ of $h'_j$.
From this point of view, we have the same results as for the hyperbolic Allen--Cahn equation \eqref{eq:hypAC};
indeed, the ODEs describing the motion of the layers for \eqref{eq:hypAC} are \cite{FLM17}
\begin{equation*}
\tau h''_j+\gamma_{{}_{F,g}} h'_j=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{j+1}-\alpha^j\right),
\qquad \qquad j=1,\dots,N+1,
\end{equation*}
and they differ from \eqref{eq:ODE-AC} only in the term $\tau h''_j$ and in the coefficient $\gamma_{{}_{F,g}}$ of $h'_j$.
As we will see in Section \ref{sec:layerdyn}, the constant $\gamma_{{}_{F,g}}$ is the following \emph{weighted average} of $g$
\begin{equation*}
\gamma_{{}_{F,g}}:=\frac{\displaystyle\int_{-1}^1\sqrt{F(s)}g(s)\,ds}{\displaystyle\int_{-1}^1\sqrt{F(s)}\,ds}.
\end{equation*}
In particular, when the damping coefficient is constantly equal to $1$, we have $\gamma_{{}_{F,g}}=1$,
while in the relaxation case $g(u)=1-\tau f'(u)$ one has
\begin{align*}
\gamma_{{}_{F,g}}&=1+\tau\int_{-1}^1\sqrt{F(s)}F''(s)\,ds\left(\int_{-1}^1\sqrt{F(s)}\,ds\right)^{-1}\\
&=1-\tau\int_{-1}^1\frac{F'(s)^2}{2\sqrt{F(s)}}\,ds\left(\int_{-1}^1\sqrt{F(s)}\,ds\right)^{-1}<1,
\end{align*}
where we used $f=-F'$ and integration by parts.
Hence, in the latter case the relaxation time $\tau$ also appears in the coefficient of $h'_j$, which is smaller than in the constant damping case $g\equiv1$.
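For instance, in the case of the prototypical potential $F(u)=\frac14(1-u^2)^2$, which satisfies \eqref{eq:ass-F}, one has $\sqrt{F(s)}=\frac12(1-s^2)$ and $F'(s)=-s(1-s^2)$ for $s\in[-1,1]$, whence
\begin{equation*}
\int_{-1}^1\sqrt{F(s)}\,ds=\frac23, \qquad\qquad \int_{-1}^1\frac{F'(s)^2}{2\sqrt{F(s)}}\,ds=\int_{-1}^1s^2(1-s^2)\,ds=\frac4{15},
\end{equation*}
and the relaxation case gives $\gamma_{{}_{F,g}}=1-\frac25\tau$.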
In general, notice that $\gamma_{{}_{F,g}}\to1$ as $g\to1$ in any reasonable way.
Reasoning as in \cite[Theorem 4.5]{FLM17}, one can compare the solutions to the systems \eqref{eq:ODE-hypnonlocal} and \eqref{eq:ODE-nonlocalAC}
and prove that if $\tau\to0^+$ and $\gamma_{{}_{F,g}}\to1$,
then a solution to \eqref{eq:ODE-hypnonlocal} converges to the corresponding one of \eqref{eq:ODE-nonlocalAC}.
Concerning the conservation of the mass, we recall that the solution $u$ is well approximated by the function $u^{\bm h}$ satisfying \eqref{eq:uh-approx}.
This means that $u\approx\pm1$ and, denoting by $L_-$ and $L_+$ the total lengths of the intervals where the solution is approximately $-1$ and $+1$, respectively, we have
\begin{align*}
L_-&:=\frac{l_1}2+\sum_{i=1}^{N/2} l_{2i+1}, \qquad \qquad \qquad &L_+&:=\sum_{i=1}^{N/2} l_{2i}+\frac{l_{N+2}}{2}, \qquad &\mbox{ if }\, N \mbox{ is even},\\
L_-&:=\frac{l_1}2+\sum_{i=1}^{(N-1)/2} l_{2i+1}+\frac{l_{N+2}}{2}, &L_+&:=\sum_{i=1}^{(N+1)/2} l_{2i}, &\mbox{ if }\, N \mbox{ is odd},
\end{align*}
(recall that $l_j=h_j-h_{j-1}$, $\,j=2,\dots,N+1$, $\,l_1=2h_1$ and $\,l_{N+2}=2(1-h_{N+1})$).
In particular, we have that the mass of the solution is approximately given by $L_+-L_-$.
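More precisely, since $L_++L_-=1$, at leading order the mass constraint reads
\begin{equation*}
M=\int_0^1u(x)\,dx\approx L_+-L_-, \qquad \mbox{ so that } \qquad L_\pm\approx\frac{1\pm M}2.
\end{equation*}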
Let us compute the variation on time of the quantities $L_+$ and $L_-$.
From \eqref{eq:ODE-hypnonlocal}, we derive the following equations for the interval length $l_j=h_j-h_{j-1}$:
\begin{equation*}
\begin{aligned}
\tau l_1''+\gamma_{{}_{F,g}}l'_1&=\frac{2\varepsilon}{c_{{}_F}}\left(\alpha^{2}-\alpha^1+\Sigma\right),\\
\tau l_j''+\gamma_{{}_{F,g}}l'_j&=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{j+1}-2\alpha^j+\alpha^{j-1}+2(-1)^{j+1}\Sigma\right), \qquad j=2,\dots,N+1,\\
\tau l_{N+2}''+\gamma_{{}_{F,g}}l'_{N+2}&=-\frac{2\varepsilon}{c_{{}_F}}\left(\alpha^{N+2}-\alpha^{N+1}+(-1)^{N}\Sigma\right),
\end{aligned}
\end{equation*}
where $\Sigma=\displaystyle\frac{1}{N+1}\sum_{i=1}^{N+1}(-1)^i(\alpha^{i+1}-\alpha^i)$.
Therefore, by using the definitions of $L_\pm$ we end up with
\begin{equation*}
\tau L_{\pm}''+\gamma_{{}_{F,g}}L_{\pm}'=0.
\end{equation*}
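For instance, in the case $N=1$ one has $L_+=l_2$ and $\Sigma=\frac12\left(\alpha^3-2\alpha^2+\alpha^1\right)$, so that the equation for $l_2$ gives
\begin{equation*}
\tau L_+''+\gamma_{{}_{F,g}}L_+'=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{3}-2\alpha^2+\alpha^{1}-2\Sigma\right)=0.
\end{equation*}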
When the ODEs \eqref{eq:ODE-hypnonlocal} describe the layer dynamics of \eqref{eq:hyp-nonlocal},
the positions of the transition points $\bm h(0)$ and their initial velocity $\bm h'(0)$ depend on the initial data $u_0,u_1$
and, in particular, $\bm h'(0)$ is such that $L'_\pm(0)=0$.
Therefore, we have $L_\pm'(t)=0$ for any $t$ and this is coherent with the conservation of mass.
In general, this is different from \eqref{eq:ODE-nonlocalAC}, which directly implies $L_\pm'\equiv0$,
while in the case of \eqref{eq:ODE-hypnonlocal}, we need a further assumption on the initial velocity of the points.
The rest of the paper is devoted to the proof of Theorem \ref{thm:main} and to the derivation of the system \eqref{eq:ODE-hypnonlocal}.
\section{The coordinate system close to the submanifold $\mathcal{M}$}\label{sec:base}
The main result of this section is the smooth decomposition $u=u^{\bm h}+w$,
where $w$ is a function of zero mean satisfying the orthogonality condition \eqref{eq:ortho-sec2},
valid for any function $u$ sufficiently close to the constant mass submanifold $\mathcal{M}$ defined in \eqref{eq:M_0}; for details see Theorem \ref{thm:existence-coord}.
Moreover, we collect some results we use later in the proof of the main results presented in Section \ref{sec:st-main}.
\subsection{Preliminaries}
First of all, we briefly recall some properties of the $(N+1)$--manifold $\mathcal{M}^{AC}$ defined in \eqref{eq:M^AC} and
introduced by Carr and Pego in \cite{Carr-Pego}, where the authors prove that it is approximately invariant for
the Allen--Cahn equation \eqref{eq:AC}.
For any $\bm h\in\Omega_\rho$, we define the function $u^{\bm h}=u^{\bm h}(x)$,
which approximates a metastable state with $N+1$ transition points located at $h_1,\dots,h_{N+1}$.
To do this, we make use of the solutions to the following boundary value problem:
given $\ell>0$, let $\phi(\cdot,\ell,+1)$ be the solution to
\begin{equation}\label{eq:fi}
\mathcal{L}(\phi):=\varepsilon^2\phi_{xx}+f(\phi)=0, \qquad \quad
\phi\bigl(-\tfrac12\ell\bigr)=\phi\bigl(\tfrac12\ell\bigr)=0,
\end{equation}
with $\phi>0$ in $(-\tfrac12\ell,\tfrac12\ell)$,
and $\phi(\cdot,\ell,-1)$ the solution to \eqref{eq:fi} with $\phi<0$ in $(-\tfrac12\ell,\tfrac12\ell)$.
The functions $\phi(\cdot,\ell,\pm1)$ are well-defined if $\ell/\varepsilon$ is sufficiently large, and they depend on $\varepsilon$
and $\ell$ only through the ratio $\varepsilon/\ell$, for details see \cite{Carr-Pego} or \cite{FLM17,FLMpre}.
The function $u^{\bm h}$ is constructed by matching together the functions $\phi(\cdot,\ell,\pm1)$, using smooth cut-off functions:
given $\chi:\mathbb{R}\rightarrow[0,1]$ a $C^\infty$-function with $\chi(x)=0$ for $x\leq-1$ and $\chi(x)=1$ for $x\geq1$, set
\begin{equation*}
\chi^j(x):=\chi\left(\frac{x-h_j}\varepsilon\right) \qquad\textrm{and}\qquad
\phi^j(x):=\phi\left(x-h_{j-1/2},h_j-h_{j-1},(-1)^j\right),
\end{equation*}
where
\begin{equation*}
h_{j+1/2}:=\tfrac12(h_j+h_{j+1})\qquad j=0,\dots,N+1,
\end{equation*}
are the middle points (note that $h_{1/2}=0$, $h_{N+3/2}=1$).
Then, we define the function $u^{\bm h}$ as
\begin{equation}\label{eq:uh}
u^{\bm h}:=\left(1-\chi^j\right)\phi^j+\chi^j\phi^{j+1} \qquad \textrm{in}\quad I_j:=[h_{j-1/2},h_{j+1/2}],
\end{equation}
for $j=1,\dots,N+1$.
A complete list of the properties of $u^{\bm h}$ can be found in \cite{Carr-Pego};
here, we only recall that $u^{\bm h}$ is a smooth function of $\bm h$ and $x$, which satisfies \eqref{eq:uh-approx} and that
$\mathcal{L}(u^{\bm h})=0$ except in an $\varepsilon$--neighborhood of the transition points $h_j$.
Precisely, we have
\begin{equation}\label{eq:properties-uh}
\begin{aligned}
u^{\bm h}(0)&=\phi(0,2h_1,-1)<0,
&\qquad u^{\bm h}(h_{j+1/2})&=\phi\left(0,h_{j+1}-h_j,(-1)^{j+1}\right),\\
u^{\bm h}(h_j)&=0,
&\qquad \mathcal{L}(u^{\bm h}(x))&=0\quad \textrm{for }|x-h_j|\geq\varepsilon,
\end{aligned}
\end{equation}
for any $j=1,\dots,N+1$.
Now, we give the precise definition of the quantities $\alpha^j$ introduced in Section \ref{sec:st-main}
and appearing in the ODEs \eqref{eq:ODE-hypnonlocal}, \eqref{eq:ODE-AC} and \eqref{eq:ODE-nonlocalAC}.
Since $\phi(0,\ell,\pm1)$ depends only on the ratio $r=\varepsilon/\ell$, we can define
\begin{equation*}
\alpha_\pm(r):=F(\phi(0,\ell,\pm1)), \qquad \quad \beta_\pm(r):=1\mp\phi(0,\ell,\pm1),
\end{equation*}
where we recall that $f=-F'$.
By definition, $\phi(0,\ell,\pm1)$ is close to $+1$ or $-1$ and so, $\alpha_\pm(r), \beta_\pm(r)$ are close to $0$.
The next result characterizes the leading terms in $\alpha_\pm$ and $\beta_\pm$ as $r\to 0$.
\begin{prop} [Carr--Pego \cite{Carr-Pego}] \label{prop:alfa,beta}
Let $F$ be such that \eqref{eq:ass-F} holds and set
\begin{equation*}
A_\pm^2:=F''(\pm1), \qquad K_{\pm}=2\exp\left\{\int_0^1\left(\frac{A_\pm}{(2F(\pm t))^{1/2}}-\frac{1}{1-t}\right)\,dt\right\}.
\end{equation*}
There exists $r_0>0$ such that if $0<r<r_0$, then
\begin{equation*}
\begin{aligned}
\alpha_\pm(r)&=\tfrac12K^2_\pm A^2_\pm\,\exp\bigl(-{A_\pm}/r\bigr)\bigl\{1+O\left(r^{-1} \exp(-{A_\pm}/2r)\right)\bigr\},\\
\beta_\pm(r)&=K_\pm\,\exp\bigl(-{A_\pm}/2r\bigr)\bigl\{1+O\left(r^{-1} \exp(-{A_\pm}/2r)\right)\bigr\},
\end{aligned}
\end{equation*}
with corresponding asymptotic formulae for the derivatives of $\alpha_\pm$ and $\beta_\pm$.
\end{prop}
For $j=1,\dots,N+2$, we set
\begin{equation*}
l_j:=h_{j}-h_{j-1}, \qquad \qquad r_{j}:=\frac{\varepsilon}{l_j},
\end{equation*}
and
\begin{equation*}
\alpha^{j}:=\left\{\begin{aligned}
&\alpha_-(r_{j}) &j \textrm{ odd},\\
&\alpha_+(r_{j}) &j \textrm{ even},\\
\end{aligned}\right.
\qquad
\beta^{j}:=\left\{\begin{aligned}
&\beta_-(r_{j}) &j \textrm{ odd},\\
&\beta_+(r_{j}) &j \textrm{ even}.\\
\end{aligned}\right.
\end{equation*}
Finally, let us introduce the \emph{barrier function}
\begin{equation}\label{eq:barrier}
\Psi(\bm h):=\sum_{j=1}^{N+1}{\langle\mathcal{L}\bigl(u^{\bm h}\bigr),k^{\bm h}_j\rangle}^2=\sum_{j=1}^{N+1}\bigl(\alpha^{j+1}-\alpha^{j}\bigr)^2,
\end{equation}
where $\mathcal{L}$ is the Allen--Cahn differential operator introduced above and the functions $k^{\bm h}_j$ are defined by
\begin{equation*}
k^{\bm h}_j(x):=-\gamma^j(x)u^{\bm h}_x(x), \qquad \mbox{ with } \;
\gamma^j(x):=\chi\left(\frac{x-h_{j-1/2}-\varepsilon}\varepsilon\right)\left[1-\chi\left(\frac{x-h_{j+1/2}+\varepsilon}\varepsilon\right)\right].
\end{equation*}
By construction, $k^{\bm h}_j$ are smooth functions of $x$ and $\bm h$ and are such that
\begin{equation*}
\begin{aligned}
k^{\bm h}_j(x)&=0 &\quad \textrm{for}\quad &x\notin[h_{j-1/2},h_{j+1/2}],\\
k^{\bm h}_j(x)&=-u^{\bm h}_x(x) &\quad \textrm{for}\quad &x\in[h_{j-1/2}+2\varepsilon,h_{j+1/2}-2\varepsilon].
\end{aligned}
\end{equation*}
Like the function $u^{\bm h}$, the functions $k^{\bm h}_j(x)$ and $\Psi(\bm h)$ were introduced in \cite{Carr-Pego}:
$k^{\bm h}_j$ are approximate tangent vectors to the manifold $\mathcal{M}^{AC}$ and
the barrier function $\Psi(\bm h)$ may be considered an approximation of the quantity $\|P^{\bm h}\mathcal{L}\bigl(u^{\bm h}\bigr)\|^2$,
where $P^{\bm h}$ is the projection to the tangent space to $\mathcal{M}^{AC}$ at $u^{\bm h}$.
\subsection{The constant mass submanifold}
As it was previously mentioned, we use different variables to describe a function $u$
sufficiently close to the constant mass submanifold $\mathcal{M}$ defined in \eqref{eq:M_0}.
First of all, we recall that the manifold $\mathcal{M}$ can be parametrized by the first $N$ components of the vector $\bm h$
and the component $h_{N+1}$ can be seen as a function of $h_1,\dots,h_N$;
precisely, if $u^{\bm h}\in\mathcal{M}$, then we have $h_{N+1}=z(h_1,\dots,h_N)$ for some $z:\mathbb{R}^N\to\mathbb{R}$ satisfying
\begin{equation}\label{eq:derh_N+1}
z_j:=\frac{\partial z}{\partial h_j}=(-1)^{N-j}+\mathcal{O}\left(\varepsilon^{-1}\exp(-A\ell^{\bm h}/\varepsilon)\right),
\end{equation}
where $A:=\sqrt{\min\{F''(-1),F''(1)\}}$ and $\ell^{\bm h}:=\min\{h_j-h_{j-1}\}$ as in Theorem \ref{thm:main};
for details see \cite[Lemma 2.4]{FLMpre}.
Therefore, in what follows we denote by $\bm\xi\in\mathbb{R}^N$ the vector of the first $N$ components of $\bm h$
and we interchangeably use $\bm\xi$ and $\bm h$, meaning that $\bm h=(\bm\xi,z(\bm\xi))$.
In particular, we use the notations $u^{\bm\xi}$ for $u^{(\bm\xi,z(\bm\xi))}$ and
\begin{equation}\label{eq:uj}
u^{\bm\xi}_j:=\frac{\partial u^{\bm\xi}}{\partial\xi_j}=u^{\bm h}_j+z_j\,u^{\bm h}_{N+1},
\qquad\quad \mbox{where } \quad u^{\bm h}_j:=\frac{\partial u^{\bm h}}{\partial h_j}.
\end{equation}
In accordance with the new variables, we define the functions $\nu^{\bm h}_j$ as
\begin{equation}\label{eq:newtangvec}
\nu^{\bm h}_j:=k^{\bm h}_j+(-1)^{N-j}k^{\bm h}_{N+1}, \qquad \qquad j=1,\dots,N,
\end{equation}
where $k^{\bm h}_j$ are the approximations of the tangent vectors to $\mathcal{M}^{AC}$ introduced above.
As for $u^{\bm\xi}$, we shall use the notation $\nu^{\bm\xi}_j$ for $\nu^{(\bm\xi,z(\bm\xi))}_j$ and
\begin{equation}\label{eq:nuji}
\nu^{\bm\xi}_{ji}:=\frac{\partial \nu^{\bm\xi}_j}{\partial\xi_i}=k^{\bm h}_{ji}+z_ik^{\bm h}_{j,N+1}+(-1)^{N-j}k^{\bm h}_{N+1,i}+(-1)^{N-j}z_ik^{\bm h}_{N+1,N+1}.
\end{equation}
The idea of using the functions $\nu^{\bm h}_j$ defined in \eqref{eq:newtangvec} instead of $k^{\bm h}_j$ is crucial in our study
and can also be applied to the study of the mass conserving Allen--Cahn equation \eqref{eq:maco-AC}.
In the following proposition we collect some estimates concerning $u_j^{\bm\xi}$, $\nu^{\bm\xi}_j$ and their derivatives, which will be useful in the sequel.
\begin{prop}\label{prop:est-u-nu}
Fix $F\in C^3(\mathbb{R})$ satisfying \eqref{eq:ass-F} and define $c_{{}_F}$ as in \eqref{eq:c_F}.
Given $N\in\mathbb{N}$ and $\delta\in(0,1/(N+1))$,
there exist positive constants $\varepsilon_0,C$ such that if $\varepsilon$ and $\rho$ satisfy \eqref{eq:triangle} and $\bm h=(\bm\xi,z(\bm\xi))\in\Omega_\rho$, then
\begin{align}
\varepsilon\|u^{\bm\xi}_j\|_{{}_{L^\infty}}+\varepsilon^{1/2}\|u^{\bm\xi}_j\|+\varepsilon^{1/2}\|\nu^{\bm\xi}_j\|&\leq C,\label{eq:uj-est}\\
\langle u^{\bm\xi}_j,\nu^{\bm\xi}_j\rangle&=2c_{{}_F}\,\varepsilon^{-1}+\mathcal{O}\left(\exp(-C/\varepsilon)\right), \label{eq:uj,nuj}\\
\int_0^1\nu^{\bm\xi}_j\,dx&=\mathcal{O}\left(\exp(-C/\varepsilon)\right),\label{eq:int-nu}
\end{align}
for $j=1,\dots,N$.
Moreover, if $i\neq j$, we have
\begin{equation}\label{eq:uj,nui}
\langle u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle=(-1)^{i+j}c_{{}_F}\,\varepsilon^{-1}+\mathcal{O}\left(\exp(-C/\varepsilon)\right).
\end{equation}
Finally,
\begin{equation}\label{eq:nuij-est}
\varepsilon^{3/2}\|\nu^{\bm\xi}_{ij}\|+\varepsilon\|\nu^{\bm\xi}_{ij}\|_{{}_{L^1}}\leq C,
\end{equation}
for any $i,j=1,\dots,N$.
\end{prop}
\begin{proof}
The proof of the estimates \eqref{eq:uj-est}-\eqref{eq:nuij-est} follows from the definitions of $u_j^{\bm\xi}$, $\nu_j^{\bm\xi}$ \eqref{eq:uj}, \eqref{eq:newtangvec} and the properties of the functions $u_j^{\bm h}$, $k_j^{\bm h}$ proved in \cite{Carr-Pego}.
The estimates \eqref{eq:uj-est} are a consequence of the definitions \eqref{eq:uj}-\eqref{eq:newtangvec},
the formula \eqref{eq:derh_N+1}, the fact that $\varepsilon,\rho$ satisfy \eqref{eq:triangle} and \cite[Proposition 2.3 and Lemma 8.3]{Carr-Pego}.
Similarly, one can obtain the estimates \eqref{eq:nuij-est},
which follow from
\begin{equation*}
\varepsilon^{3/2}\|k^{\bm h}_{ij}\|+\varepsilon\|k^{\bm h}_{ij}\|_{{}_{L^1}}\leq C,
\end{equation*}
for any $i,j=1,\dots,N+1$, and the definition \eqref{eq:nuji}.
Moreover, by using the definitions \eqref{eq:uj}-\eqref{eq:newtangvec}, we infer
\begin{equation*}
\langle u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle=\langle u^{\bm h}_i,k^{\bm h}_j\rangle+(-1)^{N-j}\langle u^{\bm h}_i,k^{\bm h}_{N+1}\rangle
+z_i\langle u^{\bm h}_{N+1},k^{\bm h}_j\rangle+(-1)^{N-j}z_i\langle u^{\bm h}_{N+1},k^{\bm h}_{N+1}\rangle,
\end{equation*}
for $i,j=1,\dots,N$.
Since, by \cite[Theorem 3.5]{Carr-Pego}, one has
\begin{equation}\label{eq:matrix-AC}
\begin{aligned}
\langle u^{\bm h}_i,k^{\bm h}_j\rangle&=\mathcal{O}\left(\exp(-C/\varepsilon)\right), \qquad \qquad\qquad & i\neq j, \\
\langle u^{\bm h}_i,k^{\bm h}_i\rangle&=c_{{}_F}\,\varepsilon^{-1}+\mathcal{O}\left(\exp(-C/\varepsilon)\right), & i=1,\dots, N+1,
\end{aligned}
\end{equation}
by using again \eqref{eq:triangle}-\eqref{eq:derh_N+1}, we obtain \eqref{eq:uj,nuj} and \eqref{eq:uj,nui}.
It remains to prove \eqref{eq:int-nu}.
By definition, we get
\begin{equation*}
\int_0^1\nu^{\bm\xi}_j\,dx=\int_0^1k^{\bm h}_j\,dx+(-1)^{N-j}\int_0^1k^{\bm h}_{N+1}\,dx, \qquad\qquad j=1,\dots,N,
\end{equation*}
and
\begin{align*}
\int_0^1k^{\bm h}_j\,dx&=-\int_{I_j}u_x^{\bm h}\,dx+\int_{I_j}(1-\gamma^j)u_x^{\bm h}\,dx\\
&=u^{\bm h}(h_{j-1/2})-u^{\bm h}(h_{j+1/2})+\mathcal{O}\left(\exp(-C/\varepsilon)\right), \qquad\qquad j=1,\dots,N+1,
\end{align*}
where we used the estimate \cite[Eq. (4.13)]{FLM17}.
Using \eqref{eq:properties-uh}, the definition of $\beta_j$ and Proposition \ref{prop:alfa,beta}, we deduce
\begin{equation*}
u^{\bm h}(h_{j-1/2})=(-1)^{j}+(-1)^{j+1}\beta^{j+1}=(-1)^{j}+\mathcal{O}\left(\exp(-C/\varepsilon)\right), \qquad j=1,\dots,N+2,
\end{equation*}
and, as a consequence
\begin{equation*}
\int_0^1k^{\bm h}_j\,dx=2(-1)^j+\mathcal{O}\left(\exp(-C/\varepsilon)\right), \qquad\qquad j=1,\dots,N+1.
\end{equation*}
Therefore, we end up with
\begin{equation*}
\int_0^1\nu^{\bm\xi}_j\,dx=2(-1)^j+2(-1)^{2N-j+1}+\mathcal{O}\left(\exp(-C/\varepsilon)\right), \qquad\qquad j=1,\dots,N.
\end{equation*}
Since $(-1)^{2N-j+1}=(-1)^{j+1}$, the two leading terms cancel, which gives \eqref{eq:int-nu} and completes the proof.
\end{proof}
Let $S(\bm\xi)$ be the $N\times N$ matrix with elements $s_{ij}(\bm\xi):=\langle u^{\bm\xi}_j,\nu^{\bm\xi}_i\rangle$;
from Proposition \ref{prop:est-u-nu} it follows that
\begin{equation}\label{eq:S-matrix}
S(\bm\xi)=\frac{c_{{}_F}}{\varepsilon}\left(\begin{array}{ccccc} 2 & -1 & 1 & \dots & (-1)^{N+1}\\
-1 & 2 & -1 & \dots & (-1)^{N}\\
1 & -1 & 2 & \dots & (-1)^{N+1}\\
\dots & \dots & \dots & \dots & \dots\\
(-1)^{N+1}& (-1)^{N} & (-1)^{N+1} & \dots & 2
\end{array}\right)+\mathcal{O}\left(\exp(-C/\varepsilon)\right),
\end{equation}
where we recall $c_{{}_F}$ is defined in \eqref{eq:c_F}.
By inverting this matrix, we obtain
\begin{equation}\label{eq:S^-1}
S^{-1}(\bm\xi)=\frac{\varepsilon}{(N+1)c_{{}_F}}\left(\begin{array}{ccccc} N & 1 & -1 & \dots & (-1)^{N}\\
1 & N & 1 & \dots & (-1)^{N+1}\\
-1 & 1 & N & \dots & (-1)^{N}\\
\dots & \dots & \dots & \dots & \dots\\
(-1)^{N}& (-1)^{N+1} & (-1)^{N} & \dots & N
\end{array}\right)+\mathcal{O}\left(\exp(-C/\varepsilon)\right).
\end{equation}
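As a sanity check outside the proof, note that the leading-order part of $S$ in \eqref{eq:S-matrix} is $\frac{c_{{}_F}}{\varepsilon}(\mathbb{I}_N+\bm v\bm v^T)$ with $v_i=(-1)^i$, so \eqref{eq:S^-1} is precisely the Sherman--Morrison inverse $\frac{\varepsilon}{c_{{}_F}}\bigl(\mathbb{I}_N-\frac{\bm v\bm v^T}{N+1}\bigr)$. The following sketch (illustrative only; the function names are ours) verifies the leading-order formula numerically for small $N$.

```python
def s_leading(N):
    # leading-order part of S in (S-matrix), with the c_F/eps prefactor dropped:
    # 2 on the diagonal and (-1)^(i+j) off the diagonal, i.e. I + v v^T with v_i = (-1)^i
    return [[2.0 if i == j else (-1.0) ** (i + j) for j in range(1, N + 1)]
            for i in range(1, N + 1)]

def s_inverse_claimed(N):
    # claimed inverse in (S^-1), with the eps/((N+1)c_F) prefactor dropped:
    # N on the diagonal and (-1)^(i+j+1) off the diagonal
    return [[float(N) if i == j else (-1.0) ** (i + j + 1) for j in range(1, N + 1)]
            for i in range(1, N + 1)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# the two dropped prefactors multiply to 1/(N+1), so the product must equal (N+1)*I
for N in range(1, 8):
    P = matmul(s_leading(N), s_inverse_claimed(N))
    for i in range(N):
        for j in range(N):
            target = N + 1.0 if i == j else 0.0
            assert abs(P[i][j] - target) < 1e-12
print("leading-order formula for the inverse of S verified for N = 1,...,7")
```

Indeed, $(\mathbb{I}_N+\bm v\bm v^T)\bigl((N+1)\mathbb{I}_N-\bm v\bm v^T\bigr)=(N+1)\mathbb{I}_N$, since $\bm v^T\bm v=N$.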
The matrix $S$ differs from the one arising for the Allen--Cahn equation, which is (up to an exponentially small error) diagonal by \eqref{eq:matrix-AC},
and from the one arising for the Cahn--Hilliard equation, which is (up to a small error) lower triangular, see \cite[pag. 18]{FLMpre}.
We will use later the formula for $S^{-1}$ to determine the system of ODEs which describes the movement of the layers.
Now, we have all the tools to prove the existence of the smooth decomposition
$u=u^{\bm h}+w$ with $u^{\bm h}\in\mathcal{M}$ and $w$ satisfying
\begin{equation}\label{eq:cond-w}
\int_0^1 w\,dx=0, \qquad w_x(0)=w_x(1)=0, \qquad \langle w,\nu^{\bm\xi}_j\rangle=0, \;\; j=1,\dots,N,
\end{equation}
for any function $u$ in a small neighborhood of $\mathcal{M}$.
We emphasize that in \cite{Carr-Pego} the authors prove the existence of the coordinates $(\bm h,w)$, with $w$ orthogonal to $k^{\bm h}_j$ and $u^{\bm h}\in\mathcal{M}^{AC}$,
while in our work, we need $u^{\bm h}\in\mathcal{M}$, i.e. $u^{\bm h}$ with mass equal to $M$;
hence, we need a further condition on $w$, that is $\displaystyle\int_0^1 w\,dx=0$.
\begin{thm}\label{thm:existence-coord}
Given $N\in\mathbb{N}$ and $\delta\in(0,1/(N+1))$,
there exists $\varepsilon_0>0$ such that if $\varepsilon$, $\rho$ satisfy \eqref{eq:triangle} and $u$ satisfies
\begin{equation}\label{eq:ass-excoord}
\int_0^1u\,dx=M, \qquad u_x(0)=u_x(1)=0, \qquad \mbox{ and } \qquad \|u-u^{\bm h}\|_{{}_{L^\infty}}\leq\varepsilon^2,
\end{equation}
for some $\bm h\in\Omega_\rho$, then there is a unique $\bar{\bm{h}}\in\Omega_\rho$ such that $u=u^{\bar{\bm h}}+w$ with $w$ satisfying
\begin{equation}\label{eq:w-excoord}
\int_0^1 w\,dx=0, \qquad w_x(0)=w_x(1)=0, \qquad \langle w,\nu^{\bar{\bm h}}_j\rangle=0, \;\; j=1,\dots,N,
\end{equation}
where the functions $\nu_j^{\bm h}$ are defined in \eqref{eq:newtangvec}.
Moreover, if $\|u-u^{\bm{h}^*}\|_{{}_{L^\infty}}=\inf\{ \|u-u^{\bm h}\|_{{}_{L^\infty}}\, :\, \bm h\in\Omega_\rho\}$ for some $\bm{h}^*\in\Omega_\rho$,
then there exists a positive constant $C$ such that
\begin{equation}\label{eq:h-h*}
|\bar{\bm{h}}-\bm{h}^*|\leq C\varepsilon\|u-u^{\bm{h}^*}\|_{{}_{L^\infty}}, \qquad \mbox{ and } \qquad \|u-u^{\bar{\bm{h}}}\|_{{}_{L^\infty}}\leq C\|u-u^{\bm{h}^*}\|_{{}_{L^\infty}}.
\end{equation}
\end{thm}
\begin{proof}
We proceed as in \cite[Section 9]{Carr-Pego} and \cite[Theorem A.7]{Bates-Xun2}.
For any $u$ satisfying \eqref{eq:ass-excoord}, the existence of the decomposition $u=u^{\bm h}+w$ with $w$ satisfying \eqref{eq:cond-w}
is equivalent to the existence of $\bm\xi\in\mathbb{R}^N$ such that $(\xi_1,\dots,\xi_N,z(\xi_1,\dots,\xi_N))=\bm h\in\Omega_\rho$ and
\begin{equation*}
\Theta_j(\bm\xi,u):=\langle u-u^{\bm\xi},\nu_j^{\bm\xi}\rangle=0, \qquad \qquad \mbox{for any }\, j=1,\dots,N.
\end{equation*}
To this aim, we define the functions $\bm\Theta(\bm\xi,u)=(\Theta_1(\bm\xi,u),\dots,\Theta_N(\bm\xi,u))$ and
\begin{equation*}
\bm\Lambda(\bm\xi,u,\bm\xi^*):=\bm\xi+S^{-1}(\bm\xi^*)\bm\Theta(\bm\xi,u),
\end{equation*}
where $\bm{\xi^*}$ is the vector of the first $N$ components of $\bm h^*$ and $\bm h^*$ is the same as in \eqref{eq:h-h*}.
Precisely, our goal is to prove that there exists a unique fixed point of $\bm\Lambda(\cdot,u,\bm\xi^*)$ in a neighborhood of $\bm\xi^*$.
We claim that if $\varepsilon_0$ is sufficiently small and $|\bm\xi-\bm\xi^*|\leq\varepsilon^2$, then $\|\partial\bm\Lambda/\partial\bm\xi\|_{{}_\infty}\leq\frac14$,
where $\|\cdot\|_{{}_\infty}$ is the matrix norm induced by the vector norm $|\cdot|_{{}_\infty}$.
Indeed, since
\begin{equation*}
\frac{\partial\bm\Lambda}{\partial\bm\xi}=\mathbb{I}_N+S^{-1}(\bm\xi^*)\frac{\partial\bm\Theta}{\partial\bm\xi}
=S^{-1}(\bm\xi^*)\left[S(\bm\xi^*)-S(\bm\xi)+B(\bm\xi)\right],
\end{equation*}
where $B_{ij}=\langle u-u^{\bm\xi},\nu_{ij}^{\bm\xi}\rangle$, we deduce
\begin{align*}
\left\|\frac{\partial\bm\Lambda}{\partial\bm\xi}\right\|_{{}_\infty}&\leq\left\|S^{-1}(\bm\xi^*)\right\|_{{}_\infty}
\left\|S(\bm\xi^*)-S(\bm\xi)+B(\bm\xi)\right\|_{{}_\infty}\\
&\leq C\varepsilon\left(\left\|S(\bm\xi^*)-S(\bm\xi)\right\|_{{}_\infty}+\varepsilon^{-1}\|u-u^{\bm\xi}\|_{{}_{L^\infty}}\right),
\end{align*}
where we used the estimates $\left\|S^{-1}(\bm\xi^*)\right\|_{{}_\infty}\leq C\varepsilon$ and $\|\nu_{ij}^{\bm\xi}\|_{{}_{L^1}}\leq C\varepsilon^{-1}$
for some $C>0$ independent of $\varepsilon$ (see \eqref{eq:S^-1} and \eqref{eq:nuij-est}).
Let us estimate the last term as follows
\begin{equation*}
\|u-u^{\bm\xi}\|_{{}_{L^\infty}}\leq\|u-u^{\bm\xi^*}\|_{{}_{L^\infty}}+\|u^{\bm\xi^*}-u^{\bm\xi}\|_{{}_{L^\infty}}\leq\varepsilon^2+C\varepsilon^{-1}|\bm\xi-\bm\xi^*|,
\end{equation*}
where we used the definition of $\bm\xi^*$, \eqref{eq:uj-est} and the assumption \eqref{eq:ass-excoord}.
By using the latter estimate and \eqref{eq:S-matrix}, we conclude that if $|\bm\xi-\bm\xi^*|\leq\varepsilon^2$, then
\begin{equation*}
\left\|\frac{\partial\bm\Lambda}{\partial\bm\xi}\right\|_{{}_\infty}=\mathcal{O}(\varepsilon), \qquad \mbox{ as }\, \varepsilon\to0^+.
\end{equation*}
Hence, we can choose $\varepsilon_0$ so small that $\|\partial\bm\Lambda/\partial\bm\xi\|_{{}_\infty}\leq\frac14$.
This estimate allows us to prove that $\bm\Lambda(\cdot,u,\bm\xi^*)$ is a contraction on the ball $|\bm\xi-\bm\xi^*|\leq\varepsilon^2$.
Indeed, we have
\begin{align}
|\bm\Lambda(\bm\xi,u,\bm\xi^*)-\bm\xi^*|_{{}_\infty}&\leq |\bm\Lambda(\bm\xi,u,\bm\xi^*)-\bm\Lambda(\bm\xi,u^{\bm\xi^*},\bm\xi^*)|_{{}_\infty}+
|\bm\Lambda(\bm\xi,u^{\bm\xi^*},\bm\xi^*)-\bm\Lambda(\bm\xi^*,u^{\bm\xi^*},\bm\xi^*)|_{{}_\infty}\notag\\
&\leq\|S^{-1}(\bm\xi^*)\|_{{}_\infty} |\bm\Theta(\bm\xi,u)-\bm\Theta(\bm\xi,u^{\bm\xi^*})|_{{}_\infty} +\frac14|\bm\xi-\bm\xi^*|_{{}_\infty}\notag\\
&\leq C\varepsilon\|u-u^{\bm\xi^*}\|_{{}_{L^\infty}}+\frac14\varepsilon^2<\frac{\varepsilon^2}{2},\label{eq:Lambda-first}
\end{align}
provided $C\varepsilon<\frac14$.
Therefore, $\bm\Lambda(\cdot,u,\bm\xi^*)$ is a contraction on the ball $|\bm\xi-\bm\xi^*|\leq\varepsilon^2$ and, as a consequence, it has a unique fixed point $\bar{\bm\xi}$.
It follows that there exists $\bar{\bm h}\in\Omega_\rho$ such that $u=u^{\bar{\bm h}}+w$ with $w$ satisfying \eqref{eq:w-excoord}.
Next, we show that such representation is unique and we prove \eqref{eq:h-h*}.
To prove the uniqueness of the tubular coordinates we use \cite[Lemma 9.2]{Carr-Pego} and
the fact that if $\bm h^*$ and $\bm h^{**}$ belong to $\Omega_\rho$ with $\rho$ sufficiently small, then
\begin{equation}\label{eq:h*-h**}
\|u^{\bm h^*}-u^{\bm h^{**}}\|_{{}_{L^\infty}}<2\varepsilon^2 \qquad \quad \Longrightarrow \qquad \quad
|\bm h^*-\bm h^{**}|_{{}_\infty}<\frac{\varepsilon^2}{2}.
\end{equation}
Let us assume that there exists $\bm h^{**}\in\Omega_\rho$, such that $u=u^{\bm h^{**}}+w^{**}$ with
$\|w^{**}\|_{{}_{L^\infty}}<\varepsilon^2$ and $\langle w^{**},\nu^{\bm\xi^{**}}_j\rangle=0$, for $j=1,\dots,N$.
Then, we infer
\begin{equation*}
\|u^{\bm h^*}-u^{\bm h^{**}}\|_{{}_{L^\infty}}\leq\|u^{\bm h^*}-u\|_{{}_{L^\infty}}+\|u-u^{\bm h^{**}}\|_{{}_{L^\infty}}<2\varepsilon^2.
\end{equation*}
Hence, by using \eqref{eq:h*-h**} we obtain $|\bm\xi^{**}-\bm\xi^*|<\varepsilon^2$, and so $\bm\xi^{**}=\bar{\bm\xi}$, which implies $\bm h^{**}=\bar{\bm h}$.
Now, we prove the first inequality of \eqref{eq:h-h*};
by reasoning as in \eqref{eq:Lambda-first} we get
\begin{equation*}
|\bar{\bm\xi}-\bm\xi^*|_{{}_\infty}=|\bm\Lambda(\bar{\bm\xi},u,\bm\xi^*)-\bm\xi^*|_{{}_\infty}\leq C\varepsilon\|u-u^{\bm\xi^*}\|_{{}_{L^\infty}}+\frac14|\bar{\bm\xi}-\bm\xi^*|_{{}_\infty}.
\end{equation*}
Thus, by using \eqref{eq:derh_N+1} we obtain the first inequality of \eqref{eq:h-h*}.
Concerning the second one, we have
\begin{align*}
\|u-u^{\bar{\bm{h}}}\|_{{}_{L^\infty}}&=\|u-u^{\bar{\bm{\xi}}}\|_{{}_{L^\infty}}\leq\|u-u^{\bm\xi^*}\|_{{}_{L^\infty}}+\|u^{\bm\xi^*}-u^{\bar{\bm{\xi}}}\|_{{}_{L^\infty}}\\
&\leq\|u-u^{\bm\xi^*}\|_{{}_{L^\infty}}+C\varepsilon^{-1}|\bar{\bm\xi}-\bm\xi^*|_{{}_\infty}\leq C\|u-u^{\bm\xi^*}\|_{{}_{L^\infty}},
\end{align*}
and the proof is complete.
\end{proof}
\section{Existence of the slow channel}\label{sec:slow}
This section is devoted to the proof of Theorem \ref{thm:main}.
Thanks to Theorem \ref{thm:existence-coord}, we can use the decomposition $u=u^{\bm\xi}+w$ with $w$ satisfying \eqref{eq:cond-w},
to study the dynamics of the solutions in a tubular neighborhood of the manifold $\mathcal{M}$.
Let us rewrite equation \eqref{eq:hyp-nonlocal} as
\begin{equation}\label{eq:system-u-v}
\begin{cases}
u_t=v,\\
\displaystyle\tau v_t=\mathcal{L}(u)-g(u)v-\int_0^1 f(u)\,dx-\int_0^1[1-g(u)]v\,dx,
\end{cases}
\end{equation}
where $\mathcal{L}(u):=\varepsilon^2u_{xx}+f(u)$.
From system \eqref{eq:system-u-v} and the decomposition $u=u^{\bm\xi}+w$, it follows that
\begin{equation*}
\begin{cases}
\displaystyle w_t=v-\sum_{j=1}^{N}u^{\bm\xi}_j\xi_j',\\
\displaystyle\tau v_t=\mathcal{L}(u^{\bm\xi}+w)-g(u^{\bm\xi}+w)v-\int_0^1 f(u^{\bm\xi}+w)\,dx-\int_0^1[1-g(u^{\bm\xi}+w)]v\,dx.
\end{cases}
\end{equation*}
By using the expansion
\begin{equation*}
\mathcal{L}(u^{\bm\xi}+w)=\mathcal{L}(u^{\bm\xi})-L^{\bm\xi}w -f_2w^2, \qquad \mbox{ where } \qquad f_2:=-\int_0^1 (1-s)f''(u^{\bm\xi}+sw)\,ds,
\end{equation*}
and $L^{\bm\xi}$ is the linearized operator of $\mathcal{L}$ about $u^{\bm\xi}$, that is $L^{\bm\xi}w:=-\varepsilon^2w_{xx}-f'(u^{\bm\xi})w$,
we rewrite the system for $(w,v)$ in the form
\begin{equation}\label{eq:system-w-v}
\begin{cases}
\displaystyle w_t=v-\sum_{j=1}^{N}u^{\bm\xi}_j\xi_j',\\
\displaystyle\tau v_t=\mathcal{L}(u^{\bm\xi})-L^{\bm\xi}w -f_2w^2-g(u^{\bm\xi}+w)v-\int_0^1 f(u^{\bm\xi}+w)\,dx-\int_0^1[1-g(u^{\bm\xi}+w)]v\,dx.
\end{cases}
\end{equation}
Now, we derive the equation for $\bm\xi$ by differentiating with respect to $t$ the orthogonality condition in \eqref{eq:cond-w}:
\begin{equation}\label{eq:xi-ort}
\sum_{i=1}^{N}\left\{\langle u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle-\langle w,\nu^{\bm\xi}_{ji}\rangle\right\}\xi_i'=\langle v,\nu^{\bm\xi}_j\rangle, \qquad \qquad j=1,\dots,N,
\end{equation}
which can be rewritten in the compact form
\begin{equation}\label{eq:xi-compact}
\hat{S}(\bm\xi,w)\bm\xi'=\bm Y(\bm\xi,v),
\end{equation}
where
\begin{equation*}
\hat{S}_{ji}(\bm\xi,w):=\langle u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle-\langle w,\nu^{\bm\xi}_{ji}\rangle, \qquad \qquad Y_j(\bm\xi,v):=\langle v,\nu^{\bm\xi}_j\rangle.
\end{equation*}
Combining \eqref{eq:system-w-v} and \eqref{eq:xi-compact}, we end up with the ODE-PDE coupled system
\begin{equation}\label{eq:system-w-v-xi}
\begin{cases}
\displaystyle w_t=v-\sum_{j=1}^{N}u^{\bm\xi}_j\xi_j',\\
\displaystyle\tau v_t=\mathcal{L}(u^{\bm\xi})-L^{\bm\xi}w -f_2w^2-g(u^{\bm\xi}+w)v-\int_0^1 f(u^{\bm\xi}+w)\,dx-\int_0^1[1-g(u^{\bm\xi}+w)]v\,dx,\\
\hat{S}(\bm\xi,w)\bm\xi'=\bm Y(\bm\xi,v).
\end{cases}
\end{equation}
The next step is to study the dynamics of the solutions to \eqref{eq:system-w-v-xi} when $(w,v,\bm\xi)$ satisfies appropriate assumptions.
Precisely, we define the spaces
\begin{equation*}
W:=\left\{w\in H^2(0,1)\, : \, w\, \textrm{ satisfies \eqref{eq:cond-w}}\right\}, \qquad \qquad V:=\left\{v\in L^2(0,1)\, : \, \int_0^1 v\,dx=0\right\},
\end{equation*}
the functional
\begin{equation}\label{eq:functional-Exi}
E^{\bm\xi}[w,v]:=\frac12\langle w,L^{\bm\xi}w\rangle+\frac12\tau\|v\|^2+\varepsilon\tau\langle w,v\rangle,
\end{equation}
and for $\Gamma>0$, the set
\begin{align*}
\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}:=\Bigg\{(w,v,\bm\xi)\, :\, (w,v)\in W\times V, \quad \bm\xi \, \mbox{ is such that }\, &
\bm h=(\bm\xi,z(\bm\xi))\in\bar\Omega_\rho\\
& \;\qquad\mbox{ and } \, E^{\bm\xi}[w,v]\leq\Gamma\Psi(\bm h)\Bigg\},
\end{align*}
where the barrier function $\Psi$ is defined in \eqref{eq:barrier}.
It is well known that the linearized operator $L^{\bm\xi}$ has an infinite sequence of real and simple eigenvalues satisfying
$\lambda_1<\lambda_2<\dots<\lambda_j\to+\infty$ as $j\to+\infty$ and that the first $N+1$ eigenvalues are exponentially small as $\varepsilon\to0$, namely
\begin{equation*}
\max_{1\leq j\leq N+1}|\lambda_j|\leq C\exp(-A\ell^{\bm h}/2\varepsilon), \qquad \qquad \lambda_{N+2}>\Lambda_0,
\end{equation*}
for some $C,\Lambda_0>0$ independent of $\varepsilon$; for details see \cite[Section 4]{Carr-Pego}.
Moreover, Carr and Pego \cite{Carr-Pego} proved that $L^{\bm\xi}$ is coercive in directions not tangent to the manifold $\mathcal{M}^{AC}$;
precisely, they show that if $w\in H^2(0,1)$ satisfies $w_x=0$ at $x=0,1$, and
\begin{equation}\label{eq:condforcoerc}
\sup\left\{\frac{\langle w,\kappa\rangle}{\|w\|\|\kappa\|}\, :\, \kappa\in\mbox{span}\left\{k_1^{\bm h},\dots,k_{N+1}^{\bm h}\right\}\right\}\leq\cos\theta_0,
\end{equation}
for some $\theta_0\in(0,\frac{\pi}{2}]$, then there exists $\Lambda_0>0$ (independent of $\varepsilon$) such that
\begin{equation}\label{eq:coercive}
\frac12\Lambda_0\varepsilon\|w\|^2_{{}_{L^\infty}}\leq\Lambda_0\int_0^1(\varepsilon^2 w_x^2+w^2)\,dx\leq\langle w,L^{\bm\xi}w\rangle.
\end{equation}
In the case of the Allen--Cahn equation \eqref{eq:AC},
the remainder function $w$ is orthogonal to $k^{\bm h}_j$ for any $j$ and, as a trivial consequence, the condition \eqref{eq:condforcoerc} is satisfied.
In our case, $w$ satisfies \eqref{eq:cond-w}, and in particular $\langle w,\nu^{\bm\xi}_j\rangle=0$ for $j=1,\dots,N$.
The latter property does not imply the orthogonality to $k^{\bm h}_j$ for $j=1,\dots,N+1$; hence, before using the property \eqref{eq:coercive},
we need to prove that $w\in W$ implies that $w$ satisfies condition \eqref{eq:condforcoerc}.
\begin{lem}
Given $N\in\mathbb{N}$ and $\delta\in(0,1/(N+1))$,
there exists $\varepsilon_0>0$ such that if $\varepsilon,\rho$ satisfy \eqref{eq:triangle}, $\bm h\in\Omega_\rho$ and $w\in W$, then $w$ satisfies \eqref{eq:coercive}.
\end{lem}
\begin{proof}
Fix $\bm h\in\Omega_\rho$ and $w\in W$.
Let us prove that $w\notin\mbox{span}\left\{k_1^{\bm h},\dots,k_{N+1}^{\bm h}\right\}$.
By contradiction, assume $w=\displaystyle\sum_{i=1}^{N+1}c_i k_i^{\bm h}$, for some $c_i\in\mathbb{R}$, satisfying $\displaystyle\sum_{i=1}^{N+1}|c_i|>0$.
Hence,
\begin{equation*}
\langle w,\nu_j^{\bm\xi}\rangle=\sum_{i=1}^{N+1}c_i\langle k_i^{\bm h},\nu_j^{\bm\xi}\rangle=
c_j\|k_j^{\bm h}\|^2+(-1)^{N-j}c_{N+1}\|k_{N+1}^{\bm h}\|^2, \qquad \qquad j=1,\dots,N,
\end{equation*}
where we used definition \eqref{eq:newtangvec} and the fact that $\langle k_i^{\bm h},k_j^{\bm h}\rangle=0$ for $i\neq j$.
If $c_{N+1}=0$, the condition $\langle w,\nu_j^{\bm\xi}\rangle=0$ for any $j=1,\dots,N$ implies $c_i=0$ for any $i$ and we have a contradiction.
Thus, assume $c_{N+1}\neq0$ and from $\langle w,\nu_j^{\bm\xi}\rangle=0$, it follows that
\begin{equation*}
\frac{c_j}{c_{N+1}}=(-1)^{N-j+1}\frac{\|k_{N+1}^{\bm h}\|^2}{\|k_j^{\bm h}\|^2}, \qquad \qquad j=1,\dots,N.
\end{equation*}
Now, the assumption $\displaystyle\int_0^1 w\,dx=0$ implies
\begin{align*}
0&=\sum_{j=1}^{N}\frac{c_j}{c_{N+1}}\int_0^1k_j^{\bm h}\,dx+\int_0^1k_{N+1}^{\bm h}\,dx\\
&=\sum_{j=1}^{N}\frac{(-1)^{N-j+1}\|k_{N+1}^{\bm h}\|^2}{\|k_j^{\bm h}\|^2}\int_0^1k_j^{\bm h}\,dx+\int_0^1k_{N+1}^{\bm h}\,dx.
\end{align*}
Since
\begin{equation*}
\lim_{\varepsilon\to0}\frac{\|k_{N+1}^{\bm h}\|^2}{\|k_j^{\bm h}\|^2}=1 \qquad \mbox{and} \qquad
\lim_{\varepsilon\to0}\int_0^1k_j^{\bm h}\,dx=2(-1)^j, \quad \forall\, j=1,\dots,N+1,
\end{equation*}
we end up with $2N(-1)^{N+1}+2(-1)^{N+1}=0$, that is $2(N+1)(-1)^{N+1}=0$, which is impossible, and we have a contradiction.
Therefore, $w$ satisfies condition \eqref{eq:condforcoerc} for some $\theta_0\in(0,\frac{\pi}{2}]$, and \eqref{eq:coercive} follows from \cite[Section 4.2]{Carr-Pego}.
\end{proof}
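As an aside, the nonvanishing of the limit obtained at the end of the proof can be checked directly; the following sketch (illustrative only, the function name is ours) confirms that the limiting sum equals $2(N+1)(-1)^{N+1}\neq0$ for every $N$.

```python
def limiting_sum(N):
    # leading-order value, as eps -> 0, of the expression at the end of the proof:
    #   sum_{j=1}^{N} (-1)^(N-j+1) * 2*(-1)^j  +  2*(-1)^(N+1)
    s = sum((-1) ** (N - j + 1) * 2 * (-1) ** j for j in range(1, N + 1))
    return s + 2 * (-1) ** (N + 1)

# each summand equals 2*(-1)^(N+1), so the total is 2*(N+1)*(-1)^(N+1), never zero
for N in range(1, 10):
    assert limiting_sum(N) == 2 * (N + 1) * (-1) ** (N + 1)
    assert limiting_sum(N) != 0
print("the limiting sum never vanishes for N = 1,...,9")
```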
Thanks to the previous lemma, we can prove the following result.
\begin{prop}\label{prop:E>}
Let $F\in C^3(\mathbb{R})$ be such that \eqref{eq:ass-F} holds and $g\in C^1(\mathbb{R})$.
Given $N\in\mathbb{N}$ and $\delta\in(0,1/(N+1))$, there exist $\varepsilon_0, C>0$,
such that for $\varepsilon$ and $\rho$ satisfying \eqref{eq:triangle},
\begin{itemize}
\item[(i)] if $(w,v,\bm\xi)\in\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$, then
\begin{equation}\label{eq:boundsforE}
\begin{aligned}
\tfrac18\Lambda_0\varepsilon\|w\|^2_{{}_{L^\infty}}+ \tfrac14\tau\|v\|^2\leq E^{\bm\xi}[w,v],\\
\tfrac14\Lambda_0\|w\|^2 +\tfrac14\tau\|v\|^2\leq E^{\bm\xi}[w,v],\\
E^{\bm\xi}[w,v]\leq C\Gamma\exp(-2A\ell^{\bm h}/\varepsilon),
\end{aligned}
\end{equation}
where $E^{\bm\xi}[w,v]$ is the functional defined in \eqref{eq:functional-Exi} and $\Lambda_0$ is the positive constant introduced in \eqref{eq:coercive};
\item[(ii)] if $(w,v,\bm\xi)\in\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$ is a solution to \eqref{eq:system-w-v-xi} for $t\in[0,T]$, then
\begin{equation}\label{eq:boundforxi'}
|\bm\xi'|\leq C\varepsilon^{1/2}\|v\|\leq C(\varepsilon/\tau)^{1/2}\exp(-A\ell^{\bm h}/\varepsilon).
\end{equation}
\end{itemize}
\end{prop}
\begin{proof}
We proceed as in the proof of \cite[Proposition 3.1]{FLM17}.
The proof of the three inequalities in \eqref{eq:boundsforE} is very similar to the one in that reference;
here, let us prove \eqref{eq:boundforxi'}.
Assume that $(w,v,\bm\xi)\in\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$ is a solution to \eqref{eq:system-w-v-xi} for $t\in[0,T]$.
To obtain an upper bound on $|\bm\xi'|$, consider the matrix $\hat{S}(\bm\xi,w)$.
We infer
\begin{equation*}
|\langle w,\nu^{\bm\xi}_{ji}\rangle| \leq\|w\|\|\nu^{\bm\xi}_{ji}\|\leq C\varepsilon^{-3/2}\|w\|,
\end{equation*}
where we used the estimate \eqref{eq:nuij-est}.
By using \eqref{eq:boundsforE}, we deduce that $\hat{S}(\bm\xi,w)$
satisfies the formula \eqref{eq:S-matrix} and the inverse matrix $\hat{S}^{-1}(\bm\xi,w)$ satisfies \eqref{eq:S^-1}.
Therefore, by applying $\hat{S}^{-1}(\bm\xi,w)$ to the third equation of \eqref{eq:system-w-v-xi} and using \eqref{eq:uj-est}, we conclude
\begin{equation*}
|\bm\xi'|\leq\|\hat{S}^{-1}(\bm\xi,w)\||\bm Y(\bm\xi,v)|\leq C\varepsilon\|v\|\|\nu^{\bm\xi}_j\|\leq C\varepsilon^{1/2}\|v\|,
\end{equation*}
that is, the first inequality in \eqref{eq:boundforxi'}.
The second one follows from \eqref{eq:boundsforE}.
\end{proof}
Proposition \ref{prop:E>} states that if $(w,v,\bm\xi)\in\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$ is a solution to \eqref{eq:system-w-v-xi} for $t\in[0,T]$,
then $\|w\|_{{}_{L^\infty}}$, $\|v\|$, and $|\bm\xi'|$ are exponentially small as $\varepsilon\to0$.
As a consequence, the solution $u$ to equation \eqref{eq:hyp-nonlocal} is well approximated by $u^{\bm\xi}\in\mathcal{M}$,
the $L^2$--norm of $u_t$ and the speed of the transition points are exponentially small.
Indeed, since $\bm h=(\bm\xi,z(\bm\xi))$ and $z$ satisfies \eqref{eq:derh_N+1}, by using \eqref{eq:boundforxi'} we obtain
\begin{equation}\label{eq:boundforhn+1'}
|h'_{N+1}|=\left|\sum_{j=1}^Nz_j\xi'_j\right|\leq C\varepsilon^{1/2}\|v\|\leq C(\varepsilon/\tau)^{1/2}\exp(-A\ell^{\bm h}/\varepsilon),
\end{equation}
and all the $N+1$ layers move with an exponentially small speed.
\begin{rem}\label{rem:energy-tau}
Here, we comment on the choice of the functional $E^{\bm\xi}[w,v]$ defined in \eqref{eq:functional-Exi}.
First, we mention that by (formally) taking $\tau=0$ in \eqref{eq:functional-Exi}, one obtains the functional used in \cite{Carr-Pego}
to study the Allen--Cahn equation \eqref{eq:AC}, which can also be used to study the mass conserving Allen--Cahn equation \eqref{eq:maco-AC}.
In the latter case, the system describing the metastable dynamics of the solutions can be (formally) obtained
by taking $\tau=0$ and $g\equiv1$ in the ODE-PDE coupled system \eqref{eq:system-w-v-xi}.
In particular, notice that if $\tau=0$ and $g\equiv1$, the second equation of \eqref{eq:system-w-v-xi}
gives the expression for $v$, which has to be substituted in the equations for $w$ and $\bm\xi$.
Hence, the exponentially small velocity of the layers can be deduced by estimating all the terms appearing
in the equation for $\bm\xi$ (cf. the estimates of Section \ref{sec:layerdyn}).
In the \emph{hyperbolic} case $\tau>0$, we add two terms in the definition \eqref{eq:functional-Exi}.
Similarly to the definition of the energy \eqref{eq:energy}, we add the term $\frac\tau2\|v\|^2$, which corresponds to the $L^2$--norm of the time derivative $u_t$.
The presence of the linear term $\varepsilon\tau\langle w,v\rangle$ is perhaps not so natural,
but, as we will see in the following, it is crucial to prove that if $(w,v,\bm\xi)$ is a solution to the system \eqref{eq:system-w-v-xi},
belonging to $\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$ for $t\in[0,T]$, then we have $E^{\bm\xi}[w,v]<\Gamma\Psi(\bm h)$ for any $t\in[0,T]$.
Thanks to this result, we can state that solutions can leave $\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$ only if $\bm h\in\partial\Omega_\rho$,
and we have persistence in the slow channel for (at least) an exponentially long time because the layers move with an exponentially small velocity.
As we already mentioned in Remark \ref{rem:main-tau}, notice that the exponentially small velocity of the layers in the slow channel is a consequence
of the exponential smallness of the $L^2$--norm of $v=u_t$.
\end{rem}
To obtain the lower bound of the time taken for the solution to leave $\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$, we will use the following result.
\begin{prop}\label{prop:d/dtE}
Let $F\in C^3(\mathbb{R})$ and $g\in C^1(\mathbb{R})$ be such that \eqref{eq:ass-F} and \eqref{eq:ass-g} hold.
Given $N\in\mathbb{N}$ and $\delta\in(0,1/(N+1))$, there exist $\Gamma_2>\Gamma_1>0$ and $\varepsilon_0>0$
such that if $\Gamma\in[\Gamma_1,\Gamma_2]$, $\varepsilon,\rho$ satisfy \eqref{eq:triangle}
and $(w,v,\bm\xi)\in\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$ is a solution to \eqref{eq:system-w-v-xi} for $t\in[0,T]$, then
for some $\eta\in(0,1)$, we have
\begin{equation}\label{E-GPsi^2}
\frac{d}{dt}\bigl\{E^{\bm\xi}[w,v]-\Gamma\Psi(\bm h)\bigr\}
\leq-\eta\,\varepsilon\bigl\{E^{\bm\xi}[w,v]-\Gamma\Psi(\bm h)\bigr\}
\qquad\textrm{for}\quad t\in[0,T].
\end{equation}
\end{prop}
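Before giving the proof, we record the standard Gr\"onwall consequence of \eqref{E-GPsi^2} (a sketch, using only the differential inequality itself):

```latex
% multiply \eqref{E-GPsi^2} by e^{\eta\varepsilon t} and integrate on [0,t]:
E^{\bm\xi}[w,v](t)-\Gamma\Psi(\bm h(t))
\leq e^{-\eta\varepsilon t}\bigl\{E^{\bm\xi}[w,v](0)-\Gamma\Psi(\bm h(0))\bigr\},
\qquad t\in[0,T].
% in particular, if E^{\bm\xi}[w,v](0) < \Gamma\Psi(\bm h(0)), then
% E^{\bm\xi}[w,v](t) < \Gamma\Psi(\bm h(t)) for every t in [0,T].
```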
\begin{proof}
Throughout the proof, the symbols $C, c, \eta$ denote generic positive constants, independent of $\varepsilon$,
with $\eta\in(0,1)$.
Let $(w,v,\bm\xi)\in\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$ be a solution to \eqref{eq:system-w-v-xi} for $t\in[0,T]$;
in particular, in the proof we shall use that $w$ and $v$ are functions of zero mean and satisfy inequalities \eqref{eq:boundsforE} and \eqref{eq:boundforxi'}.
Let us start by differentiating with respect to $t$ and estimating all the terms appearing in the functional $E^{\bm\xi}[w,v]$ \eqref{eq:functional-Exi}.
Regarding the first term, we have
\begin{equation*}
\frac{d}{dt}\Bigl\{\tfrac12\langle w,L^{\bm\xi}w\rangle\Bigr\}=\langle w_t,L^{\bm\xi}w\rangle-\tfrac12\sum_{j=1}^N\xi'_j\langle w,f''(u^{\bm\xi})u_j^{\bm\xi}w\rangle,
\end{equation*}
and so, taking the inner product between the first equation of \eqref{eq:system-w-v-xi} and $L^{\bm\xi}w$, we infer
\begin{align*}
\frac{d}{dt}\Bigl\{\tfrac12\langle w,L^{\bm\xi}w\rangle\Bigr\}&=\langle v,L^{\bm\xi}w\rangle-\sum_{j=1}^{N}\xi_j'\langle u^{\bm\xi}_j,L^{\bm\xi}w\rangle
-\tfrac12\sum_{j=1}^N\xi'_j\langle w,f''(u^{\bm\xi})u_j^{\bm\xi}w\rangle\\
&= \langle v,L^{\bm\xi}w\rangle-\sum_{j=1}^{N}\xi_j'\langle L^{\bm\xi}u^{\bm\xi}_j,w\rangle
-\tfrac12\sum_{j=1}^N\xi'_j\langle w,f''(u^{\bm\xi})u_j^{\bm\xi}w\rangle\\
&\leq\langle v,L^{\bm\xi}w\rangle+C\varepsilon^{1/2}\|v\|\|w\|\left(\max_j\|L^{\bm\xi}u^{\bm\xi}_j\|+\max_j\|u_j^{\bm\xi}\|_{{}_{L^\infty}}\|w\|\right),
\end{align*}
where in the last passage we used the first inequality of \eqref{eq:boundforxi'} and the H\"older inequality.
Since
\begin{equation*}
\|L^{\bm\xi}u_j^{\bm\xi}\|=\|L^{\bm\xi}u_j^{\bm h}+z_jL^{\bm\xi}u_{N+1}^{\bm h}\|\leq C\varepsilon^{-1/2}\exp(-A\ell^{\bm h}/\varepsilon), \qquad \qquad j=1,\dots,N,
\end{equation*}
where we used \cite[Proposition 7.2]{Carr-Pego2}, and
\begin{equation*}
\|u_j^{\bm\xi}\|_{{}_{L^\infty}}\|w\|\leq C\varepsilon^{-3/2}\sqrt{\Gamma}\exp(-A\ell^{\bm h}/\varepsilon), \qquad \qquad j=1,\dots,N,
\end{equation*}
because of \eqref{eq:uj-est}-\eqref{eq:boundsforE}, by using Young's inequality and \eqref{eq:triangle}, we conclude
\begin{equation}\label{eq:1termE}
\frac{d}{dt}\Bigl\{\tfrac12\langle w,L^{\bm\xi}w\rangle\Bigr\}\leq\langle v,L^{\bm\xi}w\rangle+C_\Gamma\exp(-c/\varepsilon)\|w\|^2+\eta\|v\|^2,
\end{equation}
where $C_\Gamma$ depends on $\Gamma$, but is independent of $\varepsilon$.
For what concerns the second term in the energy $E^{\bm\xi}[w,v]$ \eqref{eq:functional-Exi},
taking the inner product between the second equation of \eqref{eq:system-w-v-xi} and $v$, we deduce
\begin{align*}
\frac{d}{dt}\Bigl\{\tfrac12\tau\|v\|^2\Bigr\}& =\langle \tau v_t,v\rangle
=\langle\mathcal{L}(u^{\bm\xi})-L^{\bm\xi}w -f_2w^2-g(u^{\bm\xi}+w)v,v\rangle\\
&\qquad -\int_0^1 f(u^{\bm\xi}+w)\,dx\int_0^1v\,dx-\int_0^1[1-g(u^{\bm\xi}+w)]v\,dx\int_0^1v\,dx.
\end{align*}
Since $v$ is a function of zero mean, Young's inequality and the assumption on $g$ \eqref{eq:ass-g} yield
\begin{equation}\label{eq:2termE}
\begin{aligned}
\frac{d}{dt}\Bigl\{\tfrac12\tau\|v\|^2\Bigr\}&\leq-\langle L^{\bm\xi}w,v\rangle+\|\mathcal{L}(u^{\bm\xi})\|\|v\|+C\|w\|_{{}_{L^\infty}}\|w\|\|v\|-\sigma\|v\|^2\\
&\leq -\langle L^{\bm\xi}w,v\rangle-(\sigma-\eta)\|v\|^2+C\|w\|_{{}_{L^\infty}}^2\|w\|^2+C\|\mathcal{L}(u^{\bm\xi})\|^2\\
&\leq -\langle L^{\bm\xi}w,v\rangle-(\sigma-\eta)\|v\|^2+C_\Gamma\exp(-c/\varepsilon)\|w\|^2+C\|\mathcal{L}(u^{\bm\xi})\|^2,
\end{aligned}
\end{equation}
where we again used \eqref{eq:boundsforE} and \eqref{eq:triangle}.
Finally, we estimate the time derivative of the scalar product $\langle w,\tau v\rangle$ as follows
\begin{align*}
\frac{d}{dt}\langle w,\tau v\rangle & =\langle w_t,\tau v\rangle+\langle w,\tau v_t\rangle\\
&=\langle v-\sum_{j=1}^{N}u^{\bm\xi}_j\xi_j',\tau v\rangle+\langle w,\mathcal{L}(u^{\bm\xi})-L^{\bm\xi}w -f_2w^2-g(u^{\bm\xi}+w)v\rangle\\
&\qquad\qquad -\int_0^1 w\,dx\int_0^1 f(u^{\bm\xi}+w)\,dx-\int_0^1 w\,dx\int_0^1[1-g(u^{\bm\xi}+w)]v\,dx.
\end{align*}
Since $w$ is a function of zero mean, one has
\begin{equation}\label{eq:3termE}
\begin{aligned}
\frac{d}{dt}\langle w,\tau v\rangle &\leq\tau\|v\|^2+C\tau\varepsilon^{1/2}\max_j\|u^{\bm\xi}_j\|\|v\|^2-\langle w,L^{\bm\xi}w\rangle+C\varepsilon\|w\|^2\\
&\qquad \qquad +\varepsilon^{-1}\|\mathcal{L}(u^{\bm\xi})\|^2+C\|w\|_{{}_{L^\infty}}\|w\|^2+\varepsilon^{-1}\eta\|v\|^2\\
&\leq -\langle w,L^{\bm\xi}w\rangle+C(\varepsilon+\|w\|_{{}_{L^\infty}})\|w\|^2
+(C+\eta\,\varepsilon^{-1})\|v\|^2+\varepsilon^{-1}\|\mathcal{L}(u^{\bm\xi})\|^2,
\end{aligned}
\end{equation}
where, in particular, the inequalities
\begin{equation*}
\begin{aligned}
\langle w,\mathcal{L}(u^{\bm\xi})\rangle&\leq \tfrac12\varepsilon\|w\|^2+\tfrac12\varepsilon^{-1}\|\mathcal{L}(u^{\bm\xi})\|^2,\\
\langle w,g(u^{\bm\xi}+w)v\rangle&\leq C\varepsilon\|w\|^2+\eta\,\varepsilon^{-1}\|v\|^2
\end{aligned}
\end{equation*}
have been used.
Collecting the estimates \eqref{eq:1termE}, \eqref{eq:2termE} and \eqref{eq:3termE}, we get
\begin{equation*}
\begin{aligned}
\frac{dE^{\bm\xi}}{dt}[w,v] &\leq -\varepsilon\langle w,L^{\bm\xi}w\rangle-[\sigma-C\varepsilon-3\eta]\|v\|^2\\
&\hskip1.0cm +C_\Gamma\bigl\{\exp(-c/\varepsilon)
+\varepsilon (\varepsilon+\|w\|_{{}_{L^\infty}})\bigr\}\|w\|^2+(C+1)\|\mathcal{L}(u^{\bm\xi})\|^2\\
&\leq -\varepsilon\langle w,L^{\bm\xi}w\rangle-\eta\sigma\|v\|^2+C_\Gamma\varepsilon\bigl\{\exp(-c/\varepsilon)+\varepsilon\bigr\}\|w\|^2+C\|\mathcal{L}(u^{\bm\xi})\|^2,
\end{aligned}
\end{equation*}
for $\varepsilon$ and $\eta$ small.
Thus, by using \eqref{eq:coercive} and the following estimate
\begin{equation}\label{L(u^h)<Psi}
\|\mathcal{L}(u^{\bm h})\|^2\leq C\varepsilon\,\Psi(\bm h)\leq C\varepsilon\exp(-2A\ell^{\bm h}/\varepsilon),
\end{equation}
see \cite[Theorem 3.5]{Carr-Pego}, we obtain
\begin{equation*}
\frac{dE^{\bm\xi}}{dt}[w,v]\leq -\varepsilon\bigl\{1-C_\Gamma\bigl(\exp(-c/\varepsilon)+\varepsilon\bigr)\bigr\}\langle w,L^{\bm\xi}w\rangle
-\eta\sigma\|v\|^2+C\varepsilon\Psi.
\end{equation*}
Hence, for $\varepsilon\in(0,\varepsilon_0)$, with $\varepsilon_0$ small (and dependent on $\Gamma$), we deduce the bound
\begin{equation*}
1- C_\Gamma\bigl(\exp(-c/\varepsilon)+\varepsilon\bigr)\geq\eta.
\end{equation*}
Substituting, we infer
\begin{equation*}
\begin{aligned}
\frac{dE^{\bm\xi}}{dt}[w,v] &\leq -\eta\,\varepsilon\langle w,L^{\bm\xi}w\rangle-\eta\sigma\|v\|^2+C\varepsilon\Psi\\
&\leq -\eta\,\varepsilon E^{\bm\xi}[w,v]
-\tfrac12\eta\,\varepsilon\langle w,L^{\bm\xi}w\rangle+\eta\,\varepsilon^2\tau\langle w,v\rangle
-\eta\bigl(\sigma-\tfrac12\varepsilon\tau\bigr)\|v\|^2+C\varepsilon\Psi\\
&\leq -\eta\,\varepsilon E^{\bm\xi}[w,v]
-\tfrac12\eta\,\varepsilon\bigl(1-C\varepsilon\tau\bigr)\langle w,L^{\bm\xi}w\rangle
-\eta\bigl(\sigma-C\varepsilon\tau\bigr)\|v\|^2+C\varepsilon\Psi,
\end{aligned}
\end{equation*}
again from Young's inequality and \eqref{eq:coercive}.
Finally, for $\varepsilon_0$ sufficiently small, we end up with
\begin{equation}\label{eq:E'}
\frac{dE^{\bm\xi}}{dt}[w,v]\leq -\eta\,\varepsilon E^{\bm\xi}[w,v]-\eta\sigma\|v\|^2+C\varepsilon\Psi.
\end{equation}
Now, let us consider the term $\Psi(\bm h)$; direct differentiation gives
\begin{equation*}
\frac{d\Psi}{dt}=2\sum_{i,j=1}^{N+1}\langle\mathcal{L}(u^{\bm\xi}),k^{\bm h}_j\rangle\Bigl\{\langle\mathcal{L}(u^{\bm\xi}),k_{ji}^{\bm h}\rangle
-\langle L^{\bm\xi}u^{\bm h}_i,k^{\bm h}_j\rangle\Bigr\}h'_i.
\end{equation*}
Using the estimates provided by \cite[Proposition 7.2]{Carr-Pego2} and by \eqref{eq:boundforxi'}, \eqref{eq:boundforhn+1'}, we have
\begin{equation*}
\begin{aligned}
\left|h'_i\langle\mathcal{L}(u^{\bm\xi}),k_{ji}^{\bm h}\rangle\right|
&\leq|\bm h'|_{{}_\infty}\|\mathcal{L}(u^{\bm\xi})\|\|k^{\bm h}_{ji}\|
\leq C\varepsilon^{-1}\|\mathcal{L}(u^{\bm\xi})\|\|v\|,\\
\left|h'_i\langle L^{\bm\xi}u^{\bm h}_i,k^{\bm h}_j\rangle\right|
&\leq|\bm h'|_{{}_\infty}\|k^{\bm h}_j\|\|L^{\bm\xi}u^{\bm h}_i\|
\leq C\exp(-c/\varepsilon)\|v\|,
\end{aligned}
\end{equation*}
for any $i,j=1,\dots,N+1$.
Therefore, observing that $|\langle\mathcal{L}(u^{\bm\xi}),k^{\bm h}_j\rangle|\leq C\varepsilon^{-1/2} \|\mathcal{L}(u^{\bm\xi})\|$,
we infer the bound
\begin{equation*}
\left|\frac{d\Psi}{dt}\right|
\leq C\varepsilon^{-1/2} \left\{\varepsilon^{-1}\|\mathcal{L}(u^{\bm\xi})\|+\exp(-c/\varepsilon)\right\}
\|\mathcal{L}(u^{\bm\xi})\|\|v\|.
\end{equation*}
Using the inequality \eqref{L(u^h)<Psi}, we obtain
\begin{equation*}
\begin{aligned}
\left|\Gamma\frac{d\Psi}{dt}\right|
&\leq C\,\Gamma\,\varepsilon^{-1/2}\bigl\{\Psi^{1/2}+\exp(-c/\varepsilon)\bigr\}\|v\|\Psi^{1/2}\\
&\leq \eta\|v\|^2+C\,\Gamma^2\varepsilon^{-1}\bigl\{\Psi^{1/2}+\exp(-c/\varepsilon)\bigr\}^2\Psi.
\end{aligned}
\end{equation*}
Hence, observing that $\Psi\leq C\exp\bigl(-c/\varepsilon\bigr)$, we end up with
\begin{equation}\label{eq:Psi'}
\left|\Gamma\frac{d\Psi}{dt}\right|\leq \eta\|v\|^2+C\,\Gamma^2\exp(-c/\varepsilon)\Psi.
\end{equation}
In conclusion, combining the estimates \eqref{eq:E'} and \eqref{eq:Psi'}, we obtain that if
$(w,v,\bm\xi)\in\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$ is a solution to \eqref{eq:system-w-v-xi}, then
\begin{equation*}
\frac d{dt}\bigl\{E^{\bm\xi}[w,v]-\Gamma\Psi(\bm h)\bigr\} \leq
-\eta\,\varepsilon E^{\bm\xi}[w,v]+C\bigl(\varepsilon+\Gamma^2\exp(-c/\varepsilon)\bigr)\Psi,
\end{equation*}
for some $\eta\in(0,1)$.
Therefore the estimate \eqref{E-GPsi^2} follows from
\begin{equation*}
C\exp(-c/\varepsilon)\Gamma^2-\eta\,\varepsilon\Gamma +C\varepsilon\leq 0,
\end{equation*}
and the latter is verified for $\Gamma\in [\Gamma_1,\Gamma_2]$, provided $\varepsilon\in(0,\varepsilon_0)$,
with $\varepsilon_0$ sufficiently small so that $\eta^2\varepsilon-4C^2\exp(-c/\varepsilon)>0$.
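For the reader's convenience, this last condition is an elementary consequence of the quadratic formula: viewing the inequality as a quadratic in $\Gamma$, its roots are

```latex
% Roots of C\exp(-c/\varepsilon)\Gamma^2-\eta\,\varepsilon\Gamma+C\varepsilon\leq0,
% regarded as a quadratic inequality in the unknown \Gamma:
\Gamma_{1,2}=\frac{\eta\,\varepsilon\mp\sqrt{\eta^2\varepsilon^2-4C^2\varepsilon\exp(-c/\varepsilon)}}
{2C\exp(-c/\varepsilon)},
```

and the discriminant equals $\varepsilon\bigl(\eta^2\varepsilon-4C^2\exp(-c/\varepsilon)\bigr)$, so the smallness requirement on $\varepsilon_0$ is exactly the condition that the two roots $\Gamma_1<\Gamma_2$ are real and distinct.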
\end{proof}
We stress that in the estimates \eqref{eq:1termE}-\eqref{eq:2termE}-\eqref{eq:3termE} it is fundamental that $w$ and $v$ are functions of zero mean;
hence, Theorem \ref{thm:existence-coord} is crucial in proving the persistence of the solution to \eqref{eq:hyp-nonlocal}-\eqref{eq:Neumann} in the slow channel.
Now, we have all the tools needed to prove Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
First of all, we define the slow channel $\mathcal{Z}_{{}_\rho}$.
Fix $\Gamma\in[\Gamma_1,\Gamma_2]$, $\varepsilon_0>0$ small and $\varepsilon,\rho$ satisfying \eqref{eq:triangle}
so that Proposition \ref{prop:d/dtE} holds.
Then, the slow channel is
\begin{equation}\label{eq:slowchannel}
\begin{aligned}
\mathcal{Z}_{{}_\rho}:=\bigl\{(u,v)\,:\;u=u^{\bm\xi}+w,\;\; (w,v)\in W\times V, \quad \bm\xi & \, \mbox{ is such that }\,
\bm h=(\bm\xi,z(\bm\xi))\in\bar\Omega_\rho, \\
&\qquad\mbox{ and } \; E^{\bm{\xi}}[w,v]\leq\Gamma \Psi({\bm h})\bigr\}.
\end{aligned}
\end{equation}
Assume that the initial data $(u_0,u_1)\in\,\stackrel{\circ}{\mathcal{Z}}_{{}_\rho}$, which means $u_0=u^{\bm h_0}+w_0$, $u_1=v_0$,
with $\bm h_0\in\Omega_\rho$ and $E^{\bm{\xi}}[w_0,v_0]<\Gamma \Psi({\bm h_0})$.
Notice that the estimates \eqref{eq:boundsforE} and the smallness of $\varepsilon$ ensure that the assumptions \eqref{eq:w-excoord} of Theorem \ref{thm:existence-coord}
are satisfied and we have the decomposition $u=u^{\bm\xi}+w$.
Studying the dynamics inside the slow channel \eqref{eq:slowchannel}
is equivalent to studying the dynamics of the ODE-PDE coupled system \eqref{eq:system-w-v-xi} in the set $\hat{\mathcal{Z}}_{{}_{\Gamma,\rho}}$.
The estimates \eqref{eq:umenouh}-\eqref{eq:|h'|<exp-intro} inside the slow channel $\mathcal{Z}_{{}_\rho}$ follow from \eqref{eq:boundsforE} and \eqref{eq:boundforxi'}.
Let us give a lower bound on the time taken for the solution to leave the slow channel.
Assume that $(u,v)\in\mathcal{Z}_{{}_\rho}$ for $t\in[0,T_\varepsilon]$, where $T_\varepsilon$ is maximal.
The boundary of $\mathcal{Z}_{{}_\rho}$ consists of two parts: the ``ends'' where ${\bm h}\in\partial\Omega_\rho$,
meaning $h_j-h_{j-1}=\varepsilon/\rho$ for some $j$, and ``sides'' where $E^{\bm\xi}[w,v]=\Gamma\Psi({\bm h})$.
Thanks to Proposition \ref{prop:d/dtE}, we can state that the solution can leave $\mathcal{Z}_{{}_\rho}$ only through the ends.
Indeed, from \eqref{E-GPsi^2} it follows that
\begin{equation*}
\frac d{dt}\Bigl\{\exp(\eta\,\varepsilon t)(E^{\bm\xi}[w,v]-\Gamma\Psi(\bm h))\Bigr\}\leq0,
\quad \qquad t\in[0,T_\varepsilon]
\end{equation*}
and so,
\begin{equation*}
\exp(\eta\,\varepsilon t)\{E^{\bm\xi}[w,v]-\Gamma\Psi(\bm h)\}(t)\leq\{E^{\bm\xi}[w,v]-\Gamma\Psi(\bm h)\}(0)<0,
\qquad \quad t\in[0,T_\varepsilon].
\end{equation*}
Therefore, the solution $(u,v)$ remains in the channel $\mathcal{Z}_{{}_\rho}$ while $\bm h\in\Omega_\rho$
and if $T_\varepsilon<+\infty$ is maximal, then $\bm h(T_\varepsilon)\in\partial\Omega_\rho$, that is
\begin{equation}\label{hfrontiera}
h_j(T_\varepsilon)-h_{j-1}(T_\varepsilon)=\varepsilon/\rho, \quad \qquad \textrm{for some } j\in\{1,\dots,N+2\}.
\end{equation}
Since the transition points move with an exponentially small velocity \eqref{eq:boundforxi'}-\eqref{eq:boundforhn+1'},
the solution $(u,v)$ remains in the channel for an exponentially long time.
Precisely, from \eqref{eq:|h'|<exp-intro} we deduce
\begin{equation}\label{dhmax}
|h_j(t)-h_j(0)|\leq C\left(\varepsilon/\tau\right)^{1/2}\exp(-A\ell^{\bm h(t)}/\varepsilon)t \qquad \textrm{for any } j=1,\dots,N+1,
\end{equation}
for all $t\in[0,T_\varepsilon]$, where $\ell^{\bm h(t)}$ is the minimum distance between layers at the time $t$.
Combining \eqref{hfrontiera} and \eqref{dhmax}, we obtain
\begin{equation*}
\varepsilon/\rho\geq \ell^{\bm h(0)}-2C(\varepsilon/\tau)^{1/2}\exp(-A/\rho)T_\varepsilon.
\end{equation*}
Hence, by using \eqref{eq:triangle} we obtain
\begin{equation*}
T_\varepsilon\geq C\bigl(\ell^{\bm h(0)}-\varepsilon/\rho\bigr)(\varepsilon/\tau)^{-1/2}\exp(A/\rho)\geq
C\bigl(\ell^{\bm h(0)}-\varepsilon/\rho\bigr)(\varepsilon/\tau)^{-1/2}\exp(A\delta/\varepsilon),
\end{equation*}
and the proof is complete.
\end{proof}
\section{Layer dynamics}\label{sec:layerdyn}
In this section, we derive the ODEs describing the exponentially slow motion of the $N+1$ layers.
We reason as in the derivation of the ODEs for the layer dynamics in \cite{FLM17,FLMpre}.
Since $w$ is very small, we use the approximation $w\approx0$ in \eqref{eq:xi-ort} and then
\begin{equation}\label{eq:xi'-w=0}
\sum_{i=1}^N\langle u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle\xi'_i=\langle v,\nu^{\bm\xi}_j\rangle, \qquad j=1,\dots,N.
\end{equation}
In order to eliminate $v$, let us differentiate equation \eqref{eq:xi'-w=0} with respect to $t$ and multiply it by $\tau$.
We have
\begin{align*}
\tau\sum_{i,l=1}^N \bigl(\langle u^{\bm\xi}_{il},\nu^{\bm\xi}_j\rangle+\langle u^{\bm\xi}_i,\nu^{\bm\xi}_{jl}\rangle\bigr)\xi'_l\xi'_i
+\tau\sum_{i=1}^N\langle u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle\xi''_i=&\langle\mathcal{L}(u^{\bm\xi}),\nu^{\bm\xi}_j\rangle-\langle g(u^{\bm\xi})v,\nu^{\bm\xi}_j\rangle\\
& -\int_0^1\nu^{\bm\xi}_j\,dx\int_0^1 f(u^{\bm\xi})\,dx\\
&-\int_0^1\nu^{\bm\xi}_j\,dx\int_0^1[1-g(u^{\bm\xi})]v\,dx\\
&+\tau\sum_{l=1}^N\langle v,\nu^{\bm\xi}_{jl}\rangle\xi'_l,
\end{align*}
for $j=1,\dots,N$.
Using the approximation $v\approx\displaystyle\sum_{i=1}^Nu^{\bm\xi}_i\xi'_i$, we obtain
\begin{align*}
\tau\sum_{i,l=1}^N \bigl(\langle u^{\bm\xi}_{il},\nu^{\bm\xi}_j\rangle+\langle u^{\bm\xi}_i,\nu^{\bm\xi}_{jl}\rangle\bigr)\xi'_l\xi'_i
+\tau\sum_{i=1}^N\langle u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle\xi''_i=
&\langle\mathcal{L}(u^{\bm\xi}),\nu^{\bm\xi}_j\rangle-\sum_{i=1}^N\langle g(u^{\bm\xi})u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle\xi'_i\\
& -\int_0^1\nu^{\bm\xi}_j\,dx\int_0^1 f(u^{\bm\xi})\,dx\\
&-\int_0^1\nu^{\bm\xi}_j\,dx\sum_{i=1}^N\xi'_i\int_0^1[1-g(u^{\bm\xi})]u^{\bm\xi}_i\,dx\\
&+\tau\sum_{i,l=1}^N\langle u^{\bm\xi}_i,\nu^{\bm\xi}_{jl}\rangle\xi'_l\xi'_i,
\end{align*}
for $j=1,\dots,N$.
Let us denote by $\nabla^2_{\bm\xi}u^{\bm\xi}$ the Hessian of $u^{\bm\xi}$ with respect to $\bm\xi$ and
by $q(\bm\upsilon):=\displaystyle\sum_{i,l=1}^N u^{\bm\xi}_{il}\upsilon_l \upsilon_i$ the quadratic form associated to $\nabla^2_{\bm\xi}u^{\bm\xi}$.
Simplifying, we get
\begin{equation}\label{h-eq}
\begin{aligned}
\tau\sum_{i=1}^N\langle u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle\xi''_i+\sum_{i=1}^N\langle g(u^{\bm\xi})u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle\xi'_i
+\tau\langle q(\bm\xi'),\nu^{\bm\xi}_j\rangle =&\langle\mathcal{L}(u^{\bm\xi}),\nu^{\bm\xi}_j\rangle-\int_0^1\nu^{\bm\xi}_j\,dx\int_0^1 f(u^{\bm\xi})\,dx\\
&-\int_0^1\nu^{\bm\xi}_j\,dx\sum_{i=1}^N\xi'_i\int_0^1[1-g(u^{\bm\xi})]u^{\bm\xi}_i\,dx,
\end{aligned}
\end{equation}
for $ j=1,\dots,N$.
Let us rewrite equations \eqref{h-eq} in the compact form
\begin{equation}\label{eq:xi-vect}
\tau S(\bm\xi)\bm\xi''+\mathcal{G}(\bm\xi)\bm\xi' +\tau\bm{\mathcal{Q}}(\bm\xi,\bm\xi')=\bm{\mathcal{P}}(\bm\xi)-\mathcal{R}(\bm\xi)\bm\xi',
\end{equation}
where the matrix $S$ has the form \eqref{eq:S-matrix}, the matrices $\mathcal{G},\mathcal{R}\in\mathbb{R}^{N\times N}$ are defined by
\begin{equation*}
\mathcal{G}_{ji}(\bm\xi):=\langle g(u^{\bm\xi})u^{\bm\xi}_i,\nu^{\bm\xi}_j\rangle, \qquad \qquad
\mathcal{R}_{ji}(\bm\xi):=\int_0^1\nu^{\bm\xi}_j\,dx\int_0^1[1-g(u^{\bm\xi})]u^{\bm\xi}_i\,dx,
\end{equation*}
and the vectors $\bm{\mathcal{Q}},\bm{\mathcal{P}}\in\mathbb{R}^{N}$ are given by
\begin{equation*}
\mathcal{Q}_j(\bm\xi,\bm\xi'):=\langle q(\bm\xi'),\nu^{\bm\xi}_j\rangle, \qquad \qquad
\mathcal{P}_j(\bm\xi):=\langle\mathcal{L}(u^{\bm\xi}),\nu^{\bm\xi}_j\rangle-\int_0^1\nu^{\bm\xi}_j\,dx\int_0^1 f(u^{\bm\xi})\,dx.
\end{equation*}
We want to identify the leading terms in \eqref{eq:xi-vect}, having in mind the estimates for $u^{\bm\xi}$, $\nu_j^{\bm\xi}$ and their derivatives;
namely we shall rewrite $\mathcal{G}$, $\bm{\mathcal{Q}}$, $\bm{\mathcal{P}}$ and $\mathcal{R}$
by neglecting the exponentially small remainders in the asymptotic expansion for $\varepsilon\to 0$.
Let us start with the matrix $\mathcal{G}$ and use \cite[Proposition 4.1]{FLM17},
which states that if $\rho$ is sufficiently small and $\bm h\in\Omega_\rho$, then there exists $C>0$ such that
\begin{equation}\label{g(u^h)u_j,k_j}
\begin{aligned}
&\bigl|\langle g(u^{\bm h})u_j^{\bm h},k^{\bm h}_j\rangle-\varepsilon^{-1}C_{F,g}\bigr|
\leq C\varepsilon^{-1}\max\{\beta^{j-1/2},\beta^{j+1/2}\}
\leq C\varepsilon^{-1}\exp(-A\ell^{\bm h}/2\varepsilon), \\
&\bigl|\langle g(u^{\bm h})u_j^{\bm h},k^{\bm h}_{j+1}\rangle\bigr|+\bigl|\langle g(u^{\bm h})u_{j+1}^{\bm h},k^{\bm h}_j\rangle\bigr|
\leq C\varepsilon^{-1}\beta^{j+1/2}
\leq C\varepsilon^{-1}\exp(-A\ell^{\bm h}/2\varepsilon),\\
&\langle g(u^{\bm h})u_j^{\bm h},k^{\bm h}_i\rangle=0 \qquad\mbox{ if } |j-i|>1,
\end{aligned}
\end{equation}
where
\begin{equation*}
C_{F,g}:=\int_{-1}^1\sqrt{2F(s)}g(s)ds.
\end{equation*}
From the definitions of $\mathcal{G}_{ji}(\bm\xi)$, $u^{\bm\xi}_i$ and $\nu^{\bm\xi}_j$, it follows that
\begin{align*}
\mathcal{G}_{ji}(\bm\xi)=&\langle g(u^{\bm\xi})u^{\bm h}_i,k^{\bm h}_j\rangle+(-1)^{N-j}\langle g(u^{\bm\xi})u^{\bm h}_i,k^{\bm h}_{N+1}\rangle\\
&\qquad +z_i\langle g(u^{\bm\xi})u^{\bm h}_{N+1},k^{\bm h}_j\rangle+(-1)^{N-j}z_i\langle g(u^{\bm\xi})u^{\bm h}_{N+1},k^{\bm h}_{N+1}\rangle,
\end{align*}
and, by using \eqref{eq:derh_N+1}, \eqref{eq:triangle} and the estimates \eqref{g(u^h)u_j,k_j},
we obtain the following formula for the matrix $\mathcal{G}$:
\begin{equation*}
\mathcal{G}(\bm\xi)=\frac{C_{F,g}}{\varepsilon}\left(\begin{array}{ccccc} 2 & -1 & 1 & \dots & (-1)^{N+1}\\
-1 & 2 & -1 & \dots & (-1)^{N}\\
1 & -1 & 2 & \dots & (-1)^{N+1}\\
\dots & \dots & \dots & \dots & \dots\\
(-1)^{N+1}& (-1)^{N} & (-1)^{N+1} & \dots & 2
\end{array}\right)+\mathcal{O}\left(\exp(-c/\varepsilon)\right),
\end{equation*}
for some positive constant $c$ (independent of $\varepsilon$).
Therefore, we have
\begin{equation}\label{eq:G-matrix}
\mathcal{G}(\bm\xi)=\gamma_{{}_{F,g}} S(\bm\xi)+\mathcal{O}(\exp(-c/\varepsilon)),
\end{equation}
where $S(\bm\xi)$ satisfies \eqref{eq:S-matrix} and $\gamma_{{}_{F,g}}$ is the constant introduced in Section \ref{sec:st-main}:
\begin{equation*}
\gamma_{{}_{F,g}}:=\frac{C_{F,g}}{c_{{}_F}}=\frac{\displaystyle\int_{-1}^1\sqrt{F(s)}g(s)\,ds}{\displaystyle\int_{-1}^1\sqrt{F(s)}\,ds}.
\end{equation*}
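As a concrete check (a standard example, not taken from the text), consider the prototypical double-well potential and constant damping:

```latex
% Worked example: F(s)=\tfrac14(1-s^2)^2 and g\equiv1.
% Then \sqrt{2F(s)}=\tfrac{1}{\sqrt2}(1-s^2), so
c_{{}_F}=\int_{-1}^1\sqrt{2F(s)}\,ds=\frac{2\sqrt2}{3},
\qquad C_{F,g}=c_{{}_F}, \qquad \gamma_{{}_{F,g}}=1.
```

In this case the damping coefficient in the layer ODEs reduces to $1$, since the factor $g$ cancels in the ratio defining $\gamma_{{}_{F,g}}$.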
Now, let us focus our attention on the term $\tau\bm{\mathcal{Q}}(\bm\xi,\bm\xi')$;
one has
\begin{align*}
\mathcal{Q}_j(\bm\xi,\bm\xi')&=\sum_{i,l=1}^{N}\langle u_{il}^{\bm\xi},\nu^{\bm\xi}_j\rangle\xi'_i\xi'_l\\
&=\sum_{i,l=1}^{N}\langle u_{il}^{\bm h}+z_lu_{i,N+1}^{\bm h}+z_iu_{N+1,l}^{\bm h}+z_iz_lu_{N+1,N+1}^{\bm h},k^{\bm h}_j+(-1)^{N-j}k^{\bm h}_{N+1}\rangle\xi'_i\xi'_l.
\end{align*}
All the elements $\langle u^{\bm h}_{il},k^{\bm h}_j\rangle$ have been estimated in \cite[Section 4]{FLM17} and
we have $\langle u^{\bm h}_{il},k^{\bm h}_j\rangle=\mathcal{O}\left(\exp(-c/\varepsilon)\right)$ for any $i,l,j$, and then
\begin{equation}\label{eq:Q-vector}
\mathcal{Q}_j(\bm\xi,\bm\xi')=\mathcal{O}(\exp(-c/\varepsilon))\sum_{i,l=1}^{N}\xi'_i\xi'_l, \qquad \qquad j=1,\dots,N.
\end{equation}
It remains to identify the leading terms in the right hand side of \eqref{eq:xi-vect};
concerning the first term appearing in $\bm{\mathcal{P}}(\bm\xi)$, we have
\begin{equation}\label{eq:leadP}
\begin{aligned}
\langle\mathcal{L}(u^{\bm\xi}),\nu^{\bm\xi}_j\rangle&=\langle\mathcal{L}(u^{\bm\xi}),k^{\bm h}_j\rangle+(-1)^{N-j}\langle\mathcal{L}(u^{\bm\xi}),k^{\bm h}_{N+1}\rangle\\
&=\alpha^{j+1}-\alpha^{j}+(-1)^{N-j}\left(\alpha^{N+2}-\alpha^{N+1}\right),
\end{aligned}
\end{equation}
for $j=1,\dots,N$, where we used the definition \eqref{eq:newtangvec} and \cite[Lemma 3.3]{Carr-Pego}.
In the next result, we give an estimate on $\int_0^1 f(u^{\bm\xi})\,dx$.
\begin{lem}
Let $f=-F'$ with $F$ satisfying \eqref{eq:ass-F} and $u^{\bm\xi}\in\mathcal{M}$ defined by \eqref{eq:uh}.
Then,
\begin{equation}\label{eq:intf}
\left|\int_0^1 f(u^{\bm\xi})\,dx\right|\leq C\varepsilon\sum_{i=1}^{N+1}\left|\alpha^i-\alpha^{i+1}\right|.
\end{equation}
\end{lem}
\begin{proof}
From the definition \eqref{eq:uh}, it follows that
\begin{align*}
\int_0^1 f(u^{\bm\xi})\,dx= \sum_{j=1}^{N+1}\int_{I_j}f(u^{\bm\xi})\,dx=&\sum_{j=1}^{N+1}\Bigg[\int_{h_{j-1/2}}^{h_j}f(\phi^j+\chi^j(\phi^{j+1}-\phi^j))\,dx+\\
&\qquad\quad\int_{h_j}^{h_{j+1/2}}f\left(\phi^{j+1}+(1-\chi^j)(\phi^j-\phi^{j+1})\right)\,dx\Bigg].
\end{align*}
Since for $x\in[h_j-\varepsilon,h_j+\varepsilon]$ one has
\begin{equation*}
|\phi^j(x)-\phi^{j+1}(x)|\leq C|\alpha^j-\alpha^{j+1}|, \qquad\qquad j=1,\dots,N+1,
\end{equation*}
for some $C>0$ independent of $\varepsilon$ (see \cite[Lemma 8.2]{Carr-Pego}), we split
\begin{align*}
\int_{I_j}f(u^{\bm\xi})\,dx&=\int_{h_{j-1/2}}^{h_j-\varepsilon}f(\phi^j)\,dx+\int_{h_j-\varepsilon}^{h_j}\left[f(\phi^j)+f'(\zeta_{j_1})\chi^j(\phi^{j+1}-\phi^j)\right]\,dx\\
&\;+\int_{h_j}^{h_j+\varepsilon}\left[f(\phi^{j+1})+f'(\zeta_{j_2})(1-\chi^j)(\phi^j-\phi^{j+1})\right]\,dx+\int_{h_j+\varepsilon}^{h_{j+1/2}}f(\phi^{j+1})\,dx,
\end{align*}
where we used the definition of $\chi^j$,
and we obtain
\begin{equation*}
\left|\int_{I_j}f(u^{\bm\xi})\,dx\right|\leq\left|\int_{h_{j-1/2}}^{h_j}f(\phi^j)\,dx+\int_{h_j}^{h_{j+1/2}}f(\phi^{j+1})\,dx\right|+C\varepsilon|\alpha^j-\alpha^{j+1}|,
\end{equation*}
for $j=1,\dots,N+1$.
On the other hand, by the definition \eqref{eq:fi} of $\phi^j$ one has $\varepsilon^2\phi^j_{xx}+f(\phi^j)=0$, and so
\begin{equation*}
\left|\int_{I_j}f(u^{\bm\xi})\,dx\right|\leq\varepsilon^2\left|\phi^j_x(h_j)-\phi^{j+1}_x(h_j)\right|+C\varepsilon|\alpha^j-\alpha^{j+1}|,
\end{equation*}
for $ j=1,\dots,N+1$.
By using \cite[Lemma 8.2, estimate (8.2)]{Carr-Pego}, we end up with
\begin{equation*}
\left|\int_{I_j}f(u^{\bm\xi})\,dx\right|\leq C\varepsilon|\alpha^j-\alpha^{j+1}|, \qquad\qquad j=1,\dots,N+1,
\end{equation*}
and, summing over $j$, we conclude \eqref{eq:intf}.
\end{proof}
Combining \eqref{eq:int-nu}, \eqref{eq:leadP} and \eqref{eq:intf}, we deduce that the leading term in $\bm{\mathcal{P}}(\bm\xi)$ is
\begin{equation*}
\mathcal{P}^*_j(\bm\xi):=\alpha^{j+1}-\alpha^{j}+(-1)^{N-j}\left(\alpha^{N+2}-\alpha^{N+1}\right), \qquad \qquad j=1,\dots,N.
\end{equation*}
Indeed, by \eqref{eq:int-nu}, \eqref{eq:leadP} and \eqref{eq:intf} one has
\begin{equation}\label{eq:P-vector}
\left|\bm{\mathcal{P}}(\bm\xi)-\bm{\mathcal{P}}^*(\bm\xi)\right|\leq C\exp\left(-c/\varepsilon\right)|\bm{\mathcal{P}}^*(\bm\xi)|.
\end{equation}
Finally, by using again \eqref{eq:int-nu} we infer
\begin{equation}\label{eq:R-matrix}
\begin{aligned}
|\mathcal{R}_{ji}(\bm\xi)|&=\left|\int_0^1\nu^{\bm\xi}_j\,dx\right|\left|\int_0^1[1-g(u^{\bm\xi})]u^{\bm\xi}_i\,dx\right|\\
&\leq\left|\int_0^1\nu^{\bm\xi}_j\,dx\right|\|1-g(u^{\bm\xi})\|\|u^{\bm\xi}_i\|=\mathcal{O}\left(\exp(-c/\varepsilon)\right), \qquad \qquad i,j=1,\dots,N.
\end{aligned}
\end{equation}
Taking into account \eqref{eq:G-matrix}, \eqref{eq:Q-vector}, \eqref{eq:P-vector}, \eqref{eq:R-matrix} and neglecting the exponentially small terms,
from \eqref{eq:xi-vect} we derive the following system of ODEs
\begin{equation*}
\tau S(\bm\xi)\bm\xi''+\gamma_{{}_{F,g}} S(\bm\xi)\bm\xi'=\bm{\mathcal{P}}^*(\bm\xi).
\end{equation*}
By applying the inverse matrix $S^{-1}(\bm\xi)$, we end up with
\begin{equation*}
\tau\bm\xi''+\gamma_{{}_{F,g}}\bm\xi'=S^{-1}(\bm\xi)\bm{\mathcal{P}}^*(\bm\xi).
\end{equation*}
Hence, using the formula \eqref{eq:S^-1} for $S^{-1}(\bm\xi)$, we obtain the following ODE for $\xi_j$
\begin{equation*}
\tau \xi''_j+\gamma_{{}_{F,g}}\xi'_j=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{j+1}-\alpha^j+\frac{(-1)^{j+1}}{N+1}\sum_{i=1}^{N+1}(-1)^i\left(\alpha^{i+1}-\alpha^i\right)\right),
\end{equation*}
for $j=1,\dots,N$.
Since $\bm\xi$ represents the vector of the first $N$ components of $\bm h$, we have derived the ODEs for the first $N$ transition points;
to obtain the equation for $h_{N+1}$ we use the first equality in \eqref{eq:boundforhn+1'} and
we neglect the exponentially small terms in \eqref{eq:derh_N+1}, namely we consider the approximation
\begin{equation*}
h'_{N+1}\approx\sum_{j=1}^{N}(-1)^{N-j}\xi'_j, \qquad \qquad
h''_{N+1}\approx\sum_{j=1}^{N}(-1)^{N-j}\xi''_j.
\end{equation*}
Thus, we get
\begin{align*}
\tau h''_{N+1}+\gamma_{{}_{F,g}}h'_{N+1}&=\frac{\varepsilon}{c_{{}_F}}\left(\sum_{j=1}^{N}(-1)^{N-j}\left(\alpha^{j+1}-\alpha^j\right)+\sum_{j=1}^{N+1}\frac{N(-1)^{N+j+1}}{N+1}\left(\alpha^{j+1}-\alpha^j\right)\right)\\
&=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{N+2}-\alpha^{N+1}+\frac{(-1)^N}{N+1}\sum_{j=1}^{N+1}(-1)^j\left(\alpha^{j+1}-\alpha^j\right)\right).
\end{align*}
We conclude that the dynamics of the transition points $(h_1,\dots,h_{N+1})$ is described by the ODEs \eqref{eq:ODE-hypnonlocal}, that is
\begin{equation*}
\tau h''_j+\gamma_{{}_{F,g}} h'_j=\frac{\varepsilon}{c_{{}_F}}\left(\alpha^{j+1}-\alpha^j+\frac{(-1)^{j+1}}{N+1}\sum_{i=1}^{N+1}(-1)^i(\alpha^{i+1}-\alpha^i)\right),
\end{equation*}
for $j=1,\dots,N+1$.
By (formally) taking $\tau=0$ and $\gamma_{{}_{F,g}}=1$,
one obtains the ODEs describing the layer dynamics in the case of the mass conserving Allen--Cahn equation \eqref{eq:maco-AC}.
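Purely as a qualitative illustration of the slow dynamics encoded by these ODEs, the following explicit-Euler sketch integrates the damped second-order system, \emph{assuming} for simplicity that $\alpha^j$ decays exponentially with the length of the $j$-th interval ($\alpha^j=\exp(-\ell_j/\varepsilon)$); the actual $\alpha^j$ are defined through the Carr--Pego construction and are not reproduced here, and all numerical parameters are illustrative.

```python
import math

# Explicit-Euler sketch of the layer ODEs
#   tau*h_j'' + gamma*h_j' = (eps/c_F)*(alpha^{j+1}-alpha^j + correction).
# ASSUMPTION (illustrative only): alpha^j = exp(-gap_j/eps), where gap_j is
# the length of the j-th interval between consecutive transition points.
def simulate_layers(h0, tau=0.1, gamma=1.0, eps=0.05, cF=1.0,
                    dt=1e-3, steps=2000):
    h = list(h0)               # N+1 transition points in (0, 1)
    hp = [0.0] * len(h)        # layer velocities h'
    for _ in range(steps):
        pts = [0.0] + h + [1.0]
        # a[k] plays the role of alpha^{k+1}, k = 0, ..., N+1
        a = [math.exp(-(pts[k + 1] - pts[k]) / eps)
             for k in range(len(pts) - 1)]
        # d[j] = alpha^{j+2} - alpha^{j+1} (python index shifted by one)
        d = [a[j] - a[j - 1] for j in range(1, len(a))]
        n1 = len(h)                                          # N + 1
        s = sum((-1) ** (j + 1) * d[j] for j in range(n1))   # alternating sum
        for j in range(n1):
            rhs = (eps / cF) * (d[j] + ((-1) ** j / n1) * s)
            hp[j] += dt * (rhs - gamma * hp[j]) / tau        # tau*h'' = rhs - gamma*h'
            h[j] += dt * hp[j]
    return h

print(simulate_layers([0.2, 0.45, 0.75]))
```

Since the right-hand sides are of order $\exp(-\ell/\varepsilon)$, the layers barely move over $\mathcal{O}(1)$ time intervals, in agreement with the metastability result.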
\section{Introduction}
The field of metapopulation ecology deals with the study of spatial systems describing the behavior of interacting populations that live in fragmented habitats \cite{Hanski}. The purpose of these models is to understand how the local and global dynamics of metapopulation systems, usually balanced between local extinctions and new colonizations of unoccupied patches, depend on the spatial arrangement of the habitat. Consequently, relevant insights into related fields of ecological research, such as evolutionary ecology or conservation and landscape management, can be achieved. Indeed, the topology of fragmented habitats potentially holds relevant implications for the persistence of populations and their robustness against natural or anthropogenic disturbance \cite{habitat_mosaics}.
Recently, in addition to the ever-increasing applications of graph-based methods for the analysis of complex networks in cell biology \cite{graph_cellbiology,scalefree_cellbiology}, graph theory has also been applied to the study of metapopulation systems. In graph models of metapopulations, nodes are used to represent habitat patches, and graph edges are used to denote some functional connections between patches (typically related to the dispersal of individuals). Attributes can be associated with nodes, describing the quality or dimension of patches, while different types of edges can be exploited to represent the distance between connected patches, the rate of dispersal between a couple of patches, or simply whether two patches are connected or not.
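As a minimal sketch of this representation (patch names, attributes and rates below are purely illustrative, not taken from the paper), a patch graph with node attributes and weighted dispersal edges can be encoded as:

```python
# Nodes carry patch attributes; weighted directed edges carry dispersal rates.
patches = {
    'A': {'area': 2.0, 'quality': 0.9},
    'B': {'area': 1.0, 'quality': 0.5},
    'C': {'area': 1.5, 'quality': 0.7},
}
# dispersal[(u, v)] = rate of individual movement from patch u to patch v
dispersal = {('A', 'B'): 0.05, ('B', 'A'): 0.05,
             ('B', 'C'): 0.02, ('C', 'B'): 0.02}

def neighbours(p):
    """Patches reachable from p in one dispersal step."""
    return sorted(v for (u, v) in dispersal if u == p)

def isolation(p):
    """A crude isolation index: inverse of the total outgoing dispersal rate."""
    out = sum(rate for (u, _), rate in dispersal.items() if u == p)
    return float('inf') if out == 0.0 else 1.0 / out

print(neighbours('B'), isolation('B'))
```

On this encoding, connectivity queries (neighbours, isolation of a patch) reduce to simple lookups over the edge dictionary.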
Metapopulation models using graph-based methods \cite{habitat_mosaics,spatial_graphs} are simple to implement and require relatively few data for their definition, while individual-based models implement more detailed aspects, concerning the nature and the interaction of populations \cite{spatial_explicit,spatially_explicit_review}. Both types of modeling approaches are useful for the analysis of specific features of metapopulations but, while the first focuses on the properties of the habitat topology, the second is more concerned with the emergent dynamics. In this paper, we present a stochastic multivolume model of metapopulations, which integrates the explicit representation of interactions between the individuals of the populations -- and therefore allows us to simulate the emergent local and global dynamics -- with a graph description of the habitat topology -- which allows us to investigate the influence of distinct spatial structures on the dynamics.
This model, which represents a simplified extension of a previous metapopulation model that we introduced in \cite{metapop,bicta07}, is based on the multivolume stochastic simulation algorithm tau-DPP \cite{tauWMC7,VolumeRozenberg}, a stochastic class of membrane systems. Membrane systems, or P systems, were introduced in \cite{Paun00} as a class of unconventional computing devices of distributed, parallel and nondeterministic
type, inspired by the compartmental structure and the functioning of living
cells. The basic model consists of a membrane structure where multisets of
objects evolve according to given evolution rules. A
comprehensive overview of P systems and of its many applications in various research areas, ranging
from Biology to Linguistics to Computer Science, can be found in \cite{PaunBook,VAPS,oxford_handbook_MC}.
In tau-DPP, the distinct compartments of any multivolume model can be arranged according to a specified hierarchy (e.g., a membrane structure), under the additional assumption that the topological structure and the volume dimensions do not change during the system evolution (each volume is assumed to satisfy the standard requirements of the classical stochastic simulation algorithm, see \cite{Gill77} and \cite{BioSimWare} for more details). Inside each volume, two different types of rules can be defined: the \emph{internal rules}, which modify the objects contained inside the volume where they take place (in the case of metapopulations, they describe the growth and death of population individuals according to the Lotka-Volterra model of preys and predators), and the \emph{communication rules}, which are used to move the objects between adjacent volumes (in the case of metapopulations, they describe the migration of population individuals).
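To illustrate how internal (Lotka-Volterra) and communication (migration) rules interact in a multivolume setting, the following minimal sketch simulates two patches with Gillespie's direct method. This is a simplification of tau-DPP, and all rate constants and initial counts are hypothetical illustrative values, not taken from the model described here.

```python
import random

# Two-patch prey/predator system: internal rules (birth, predation, death)
# plus a communication rule (predator migration), via Gillespie's SSA.
def simulate(t_end=3.0, seed=42, max_events=50000):
    rng = random.Random(seed)
    state = [[50, 10], [50, 0]]   # state[p] = [preys, predators]
    k_growth, k_pred, k_death, k_mig = 1.0, 0.01, 0.5, 0.1
    t = 0.0
    for _ in range(max_events):
        # propensities of every rule in every patch
        props = []
        for p, (x, y) in enumerate(state):
            props += [(k_growth * x, ('birth', p)),
                      (k_pred * x * y, ('pred', p)),
                      (k_death * y, ('death', p)),
                      (k_mig * y, ('mig', p))]
        total = sum(a for a, _ in props)
        if total == 0.0:
            break
        t += rng.expovariate(total)
        if t >= t_end:
            break
        r = rng.uniform(0.0, total)
        for a, (kind, p) in props:   # pick one reaction proportionally
            r -= a
            if r <= 0.0:
                break
        if kind == 'birth':
            state[p][0] += 1
        elif kind == 'pred':         # one prey is converted into one predator
            state[p][0] -= 1
            state[p][1] += 1
        elif kind == 'death':
            state[p][1] -= 1
        else:                        # one predator moves to the other patch
            state[p][1] -= 1
            state[1 - p][1] += 1
    return state

print(simulate())
```

Here migration simply moves one predator to the other patch; in tau-DPP, communication rules would instead target specific adjacent volumes of the membrane structure, and the tau-leaping machinery would fire many rules per step.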
In this paper, tau-DPP is exploited to analyze the emergent dynamics of metapopulation systems, where the focus is on the influence that the topology of patches has on the migration of individuals, and on their capability to colonize other patches in the habitat. For this purpose, we consider six different habitat topologies, formally described by graph structures, and analyze how the topological structure of patch-to-patch connections, and the rate of individual dispersal between connected patches, influence the local and global dynamics of a metapopulation. In particular, we will first consider how a given topology and a fixed dispersal rate between patches can influence the prey-predator dynamics, and then we will focus on the colonization of empty patches, starting from the dispersal of predators that live in a few patches which occupy peculiar positions in the given network topology.
The paper is structured as follows: in Section \ref{sec:metapop} we present the concept of metapopulations in Ecology, and then describe the multivolume model of metapopulations by focusing, in particular, to the different habitat topologies. In Section \ref{sec:sim_dyn} we will show the simulation results concerning the influence of these habitat topologies on the emergent dynamics of metapopulations, considering the effects of predators dispersal and colonization. Finally, in Section \ref{sec:concl} we conclude the paper with some final remarks and several proposals for further research issues concerning metapopulations.
\section{Metapopulations}\label{sec:metapop}
In this section, we first provide a brief introduction to the most relevant features of metapopulations, concerning both the topology of the habitats and the emergent dynamics. Then, we describe the modeling approach used in this paper, that is based on a stochastic class of membrane systems, which will be used in Section \ref{sec:sim_dyn} to analyze the influence of different network topologies on the dynamics of metapopulations.
\subsection{Dynamical models of interacting populations in Ecology}\label{subsec:eco_metapop}
Since its introduction in \cite{Levins}, the concept of metapopulations (also called
\emph{multi-patch systems}) has been extensively applied in Ecology to analyze the behavior of interacting
populations, with the purpose of determining how fragmented habitats can
influence various aspects of these systems, such as local and global population persistence, or the evolution
of species \cite{genetic}. Lately, this concept has been widely employed for many population species, living in both
natural and artificial/theoretical fragmented landscapes \cite{Hanski}.
A metapopulation consists of local populations, living in spatially separated
habitats called \emph{patches} -- which can be characterized by different areas, quality
or isolation -- connected to each other through a \emph{dispersal pool}, which is the spatial place where
individuals from a population spend some lifetime during the migration among
patches. In multi-patch systems, two principal
types of dynamics exist: on the one hand, the individuals of the different populations can have \emph{local} interactions inside each patch (according to a given dynamical model, e.g., the Lotka-Volterra system of interaction between preys and predators \cite{Murray}); on the other hand, the dispersal of individuals among mutually connected patches can influence the \emph{global} behavior of the whole system \cite{Jansen-2,Jansen,Taylor,Weisser}.
The dispersal of individuals, which is usually dependent on the distance between patches, may reduce the
local population growth, and thus increase the extinction risk, which can
be due also to environmental and demographical stochasticity. Hence, the
persistence of populations is assumed to be balanced between local extinctions
and the process of colonization, that is, the establishment of new populations in empty patches \cite{Hanski}.
Several theoretical frameworks for metapopulation analysis have been defined up
to now, highlighting specific properties of multi-patch systems which have been either
explicitly or implicitly considered in these modeling methods (see, e.g., \cite{dunning,Hanski,SPOMSIM,Hastings} for
further details). For instance, referring to the landscape, most theoretical
models take into account the spatial structure of the habitat, the local quality of the environment, the
patch areas and their mutual connectivity (or isolation), in order to capture the effect of
habitat fragmentation on species persistence. In fact, good local conditions can
determine the growth and the survival of populations inside the patches, and high patch
connectivity can decrease local extinction risk. Moreover, as dispersal and
colonization are distance-dependent elements, they can be used to account for the
importance of real landscape structures. Referring to population interactions
and dynamics, colonization can depend or not on the cooperation of migrating
individuals (in the first case, it is called ``Allee effect''). Models not accounting for within-patch dynamics -- but only assuming whether a patch is occupied or not -- usually consider local dynamics
on a faster time scale with respect to the global dynamics, and also neglect the
dependence of colonization and extinction rates on population sizes. Finally,
regional stochasticity can account for ``bad'' or ``good'' years over the local
environmental quality, which depends on, e.g., the weather conditions which
affect sustenance resource availability and, once more, they can influence the
growth and survival of populations.
Recently, graph-based models for metapopulations have become increasingly common, owing to the intuitive and visual way in which they represent these ecological systems (see \cite{habitat_mosaics,MinorUrban2008,Urban_Ecology_2001} and references therein). In these models, nodes represent habitat patches and graph edges denote functional connections between patches (typically related to the dispersal of individuals). In addition, attributes can be associated with nodes, describing the quality or dimension of patches, and different types of edges can be adopted to represent the distance between connected patches, the rate of dispersal between a pair of patches, or simply whether two patches are connected or not. These models provide insights into the features of habitat distribution, such as the predominant importance of some nodes or clusters of nodes, with respect to other characteristics of metapopulations, like their dynamics, their vulnerability to disturbance, the persistence of populations under dispersal, and so on. These results open promising perspectives in related research fields such as evolutionary ecology, conservation biology, epidemiology, and the management and design of natural reserves.
\subsection{A P system--based model of metapopulations: focusing on network topologies}\label{subsec:Psyst_metapop}
Most of the issues discussed in Section \ref{subsec:eco_metapop} were explicitly considered in our previous models for metapopulations \cite{bicta07,metapop}. In those works, metapopulation models were based on a class of membrane systems called DPP \cite{DNA11,IJFCS}, which were used to execute qualitative stochastic simulations of the local and global dynamics of metapopulations. In particular, in \cite{metapop} we introduced a model of metapopulations with predator-prey dynamics, where additional features were used in order to capture and better describe relevant properties of the modeled system.
For instance, the regions of the membrane structure were represented as nodes of a weighted graph with attributes, where the weight associated with an edge corresponds to the ``distance''
between connected regions, while attributes specify their surface dimension. These new features are necessary in order to outline the spatial distribution of patches and the relevant additional
features associated with them: the dimension of a patch is needed to
define the density of the populations living inside that patch, while the distance is
needed to identify isolated patches, as well as to define the dispersal rates
of migrating individuals.
Moreover, by using some rules which do not modify the objects on which
they act (the so-called \virg mute rules''), we modified the classical view of maximal parallelism,
allowing the maximal application of rules while, at the same time, reducing the maximal
consumption of objects. The model was applied to investigate some emergent metapopulation
behaviors, such as the influence of patch dimension, patch-to-patch distance, stochastic breeding, the dynamics underlying migration and colonization, the effects due to isolated patches, etc.
Then, in \cite{bicta07} we extended the analysis of that model by focusing on periodic resource feeding strategies, and compared different systems where either increasing, decreasing, stationary or purely stochastic feeding phases were
defined inside each patch. We have shown there, for instance, how the seasonal
variance can transform the basic Lotka-Volterra dynamics inside each patch into
a more complex dynamics, where the different phases of a feeding cycle can be
identified through the effect that they have on the standard oscillations of preys and predators.
In this section, we present a simplified model of metapopulations, which exploits the multivolume stochastic simulation algorithm tau-DPP \cite{tauWMC7,BioSimWare}. With respect to the previous model, here we do not need the concept of mute rules, as the probabilistic choice and application of rules is already embedded in the tau leaping algorithm \cite{Gill06}, on which tau-DPP is based. Moreover, we do not consider the presence of the dispersal pool, but instead focus our analysis on the direct communication of individuals among interconnected patches, according to some fixed network topologies. In order to compare the influence of each network, we have decided to perform our analysis on a total of 6 patches, spatially arranged in different ways. More precisely, we assume that these network topologies can be described by graphs having the same number of nodes but distinct connections, namely the chain, grid, star, ring, complete and random structures (see graphs $a, b, c, d, e, f$, respectively, in Fig. \ref{fig:topologies}). From now on, we will refer to the formal data structure by using the term `graph', and use the term `network' to denote the topological relationship on each graph.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{eps/graphs}
\end{center}
\caption{Network topologies.} \label{fig:topologies}
\end{figure}
Formally, each network topology $\nu \in \{a, b, c, d, e, f\}$ can be described by a weighted undirected graph $G_{\nu}=(N_{\Delta}^{\nu}, E^{\nu}, w^{\nu})$ where:
\begin{itemize}
\item $N_{\Delta}^{\nu}$ is the set of nodes, such that each node $p_i \in N_{\Delta}^{\nu}$, $i$=1, $\dots$, 6, is characterized by a value $\delta(p_i) \in \Delta$ (with $\Delta$ being a set of attributes of some kind);
\item $E^{\nu} \subseteq \{(p_i,p_j) \mid p_i,p_j \in N_{\Delta}^{\nu}\}$ is the set of (undirected) edges between nodes;
\item $w^{\nu} : E^{\nu} \ra \mathbb{R}^+$ is the weight function associating a cost to
each edge.
\end{itemize}
In the case of metapopulations, the set of nodes $N_{\Delta}^{\nu}$ coincides with the set of patches, the attribute of a node represents the area of the patch, the edges characterize which patches are directly
reachable from any given patch (self-edges might exist as well, but will not be considered in this work), and the weight $w^{\nu}_{i,j}$ of an edge $(p_i, p_j)$ represents a cost measuring the effort that individuals have to face when moving from patch $p_i$ to $p_j$. Given a network topology $\nu$, we denote by $Adj(p_i)^{\nu}$ the set of nodes that are directly connected to a node $p_i$, that is, $Adj(p_i)^{\nu}=\{p_j \in N_{\Delta}^{\nu} \mid \exists \: (p_i,p_j) \in E^{\nu}\}$. We also denote by $deg(p_i)^{\nu}$ the degree of patch $p_i$, that is, the number of patches directly connected to $p_i$ (formally, $deg(p_i)^{\nu}=card(Adj(p_i)^{\nu})$). We point out that, in what follows, we will assume that: (1) $w^{\nu}_{i,j}=1$ for each $(p_i,p_j) \in E^{\nu}$ and each $\nu \in \{a, b, c, d, e, f\}$, that is, all edges have the same cost; (2) $\delta(p_i)=1$ for each $p_i \in N_{\Delta}^{\nu}$ and each $\nu \in \{a, b, c, d, e, f\}$, that is, all patches have the same dimension. The rationale behind this is that, in this paper, we focus our attention on the influence that different topologies of the habitat network can have on the local and global dynamics of metapopulations, regardless of the local features of each patch or of the distances between patches. These features might naturally be added in further work related to this model, where real data can be used to define a specific model of some metapopulation system.
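As a concrete illustration, the six topologies and the quantities $Adj(p_i)^{\nu}$ and $deg(p_i)^{\nu}$ can be sketched in Python as follows. This is not part of the original model: the edge lists for the chain, grid, star, ring and complete graphs are reconstructed from Fig. \ref{fig:topologies} and from the degrees discussed later in the text, while the edge list of the random graph $f$ is only an illustrative placeholder.

```python
# Sketch of the graph structure G = (N, E, w): patches are nodes 0..5,
# all edge weights w and all patch areas delta are equal to 1.
# The edge list of "f_random" is an illustrative guess, not the paper's graph.
TOPOLOGIES = {
    "a_chain":    [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)],
    "b_grid":     [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)],
    "c_star":     [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)],
    "d_ring":     [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)],
    "e_complete": [(i, j) for i in range(6) for j in range(i + 1, 6)],
    "f_random":   [(0, 4), (1, 4), (2, 4), (3, 4), (2, 5)],  # illustrative only
}

def adjacency(edges, n=6):
    """Adj(p_i): the set of nodes directly connected to each patch p_i."""
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    return adj

def degree(edges, n=6):
    """deg(p_i) = card(Adj(p_i))."""
    return {i: len(nbrs) for i, nbrs in adjacency(edges, n).items()}
```

For instance, `degree(TOPOLOGIES["a_chain"])` confirms that the extremes of the chain have degree 1 while the inner patches have degree 2.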
In addition to the chosen network topology, this model of metapopulations also considers the presence of species individuals, which locally interact according to a chosen dynamics, and give rise to global dynamics thanks to the dispersal processes. To this purpose, in this paper we assume that each patch is characterized by the Lotka-Volterra (LV) model describing the interaction between the individuals of two populations, namely preys and predators. Inside each patch, the LV model is described by the following set of internal rules:
\begin{eqnarray*}
r_1 :& AX \ra XX \\
r_2 :& XY \ra YY \\
r_3 :& Y \ra \lambda
\end{eqnarray*}
\noindent where $X$ denotes the preys, $Y$ denotes the predators, $A$ denotes the sustenance resources and $\lambda$ is the empty symbol. Rules $r_1$ and $r_2$ model the growth of preys and predators, respectively, while rule $r_3$ models the death of predators. Each rule is also characterized by a stochastic constant (expressed in $time^{-1}$), which is used -- together with the current amounts of individuals occurring in the patch -- to evaluate its application probability step by step, according to the tau leaping algorithm (see \cite{Gill06,tauWMC7,VolumeRozenberg} for more details). All the simulations shown hereafter have been executed using the following values of the stochastic constants and of the initial amounts of preys, predators and sustenance resources: $c_1$=0.1, $c_2$=0.01, $c_3$=10, $X_0$=$Y_0$=1000, $A_0$=200 (the value of $A$ is fixed for the entire duration of each simulation). The simulations have been performed with the software BioSimWare \cite{BioSimWare}, which implements different stochastic simulation algorithms for both single and multivolume systems. The software is available for free download at http://bimib.disco.unimib.it/index.php/Software.
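To make the stochastic setup concrete, the following Python sketch simulates the single-patch LV model with an exact Gillespie SSA, used here as a simpler stand-in for the tau leaping algorithm employed by BioSimWare; the function name and structure are illustrative assumptions, not taken from that software.

```python
import random

def lv_ssa(x0=1000, y0=1000, a=200, c1=0.1, c2=0.01, c3=10.0,
           t_end=0.5, seed=42):
    """Exact Gillespie SSA for the single-patch LV model:
    r1: A + X -> X + X   (propensity c1 * A * X)
    r2: X + Y -> Y + Y   (propensity c2 * X * Y)
    r3: Y -> (empty)     (propensity c3 * Y)
    The amount A of resources is held constant, as in the paper."""
    rng = random.Random(seed)
    t, x, y = 0.0, x0, y0
    while t < t_end and x + y > 0:
        a1, a2, a3 = c1 * a * x, c2 * x * y, c3 * y
        a_tot = a1 + a2 + a3
        if a_tot == 0.0:
            break
        t += rng.expovariate(a_tot)   # time to the next reaction
        r = rng.uniform(0.0, a_tot)   # which reaction fires
        if r < a1:
            x += 1                    # prey reproduction
        elif r < a1 + a2:
            x, y = x - 1, y + 1       # predation
        else:
            y -= 1                    # predator death
    return x, y
```

With the constants above, the deterministic fixed point is $X^*=c_3/c_2=1000$ and $Y^*=c_1 A_0/c_2=2000$; the stochastic trajectories oscillate around it, consistently with the behavior shown in Fig. \ref{fig:LV_singlepatch}.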
In Fig. \ref{fig:LV_singlepatch} we show the oscillating dynamics (left side) of preys and predators in the single patch, obtained with this choice of parameters, and the corresponding phase space (right side). These figures can be considered as reference to compare and discuss the dynamics obtained in the multi-patch model, as described in Section \ref{sec:sim_dyn}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{eps/LVdyn} \hspace{-0.5cm}
\includegraphics[width=8cm]{eps/LVps}
\end{center}
\caption{The Lotka-Volterra dynamics in the single patch: oscillations in preys, $X$, and predators, $Y$ (left side), and corresponding phase space (right side).} \label{fig:LV_singlepatch}
\end{figure}
The single patch model is then extended to a multi-patch model where, inside each patch $p_i$ of each network topology $\nu$, we add as many communication rules as the number of patches connected to $p_i$ (that is, a total of $deg(p_i)^{\nu}$ rules inside each patch). These rules are needed to move population individuals among the various patches of the network, thus allowing us to analyze the effects of migration and colonization in the metapopulation. This is done by attaching a destination target to each communication rule, specifying the destination patch, as is usually done in P systems. Formally, in each patch $p_i$ of network $\nu$, we add the so-called \emph{dispersal rules}
$$r_{d_{p_j}} : Y \ra (Y, target(p_j)),$$
for each $p_j \in Adj(p_i)^{\nu}$. Similarly to the local rules $r_1, r_2, r_3$, the probability of applying each dispersal rule is determined by using its stochastic constant $c_{d_{p_j}}$, whose values will be given in the next section to consider different migration rates.
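In code, adding the dispersal rules of a patch amounts to creating one rule per directed edge of the graph; the following Python sketch (with illustrative names, not taken from BioSimWare) builds the rule list from an adjacency set and computes the corresponding propensities $c_{d_{p_j}} \cdot Y_i$ from the current predator counts.

```python
def dispersal_rules(adj, c_d):
    """One rule r_{d_{p_j}} : Y -> (Y, target(p_j)) per directed edge,
    encoded as a (source, target, constant) triple; each patch p_i
    thus gets deg(p_i) such rules."""
    return [(i, j, c_d) for i in sorted(adj) for j in sorted(adj[i])]

def dispersal_propensities(rules, y):
    """Propensity of each dispersal rule, c_d * Y_i, given the
    predator count y[i] in each patch."""
    return [c * y[i] for (i, j, c) in rules]

# Example: the chain topology a, with uniform constant c_d = 1.
chain_adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
rules = dispersal_rules(chain_adj, c_d=1.0)
```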
\section{The influence of network topologies on metapopulation dynamics}\label{sec:sim_dyn}
In this section we analyze how the topological structure of patch-to-patch connections, and the rate of individual dispersal between connected patches, influence the local and global dynamics of a metapopulation. In particular, in Section \ref{subsec:communication} we consider how a given topology and a fixed dispersal rate can influence the prey-predators dynamics, while in Section \ref{subsec:colonization} we focus on the capability of colonization of empty patches, starting from the dispersal of predators living in a few patches which occupy peculiar positions in the given network topology.
\subsection{Network topologies and migration}\label{subsec:communication}
In this section, we analyze the role of migration and compare the six network topologies with respect to four different conditions for the dispersal rules. Namely, we assume that each patch of each topology is initialized with a complete LV model as given in Section \ref{subsec:Psyst_metapop}, where the value of the stochastic constant $c_{d_{p_j}}$ for the dispersal of predators, in each patch $p_i \in N_{\Delta}^{\nu}$, can assume one of the following values:
\begin{enumerate}
\item $c_{d_{p_j}}$=1, for each $p_j \in Adj(p_i)^{\nu}$;
\item $c_{d_{p_j}}$=10, for each $p_j \in Adj(p_i)^{\nu}$;
\item $c_{d_{p_j}}$=20, for each $p_j \in Adj(p_i)^{\nu}$;
\item $c_{d_{p_j}}$=$\frac{10}{deg(p_i)}$, for each $p_j \in Adj(p_i)^{\nu}$.
\end{enumerate}
\smallskip
Taking the first condition as reference, the power of dispersal in the second (third) condition is ten-fold (twenty-fold) that of the first one, irrespective of the position that patch $p_i$ occupies in the considered network. In other terms, in the first three conditions the flux of dispersal from each patch is amplified by the number of connections that the patch has with the other patches in the network. On the contrary, the fourth condition corresponds to the situation where, for each patch $p_i$, the sum of the values of the constants of the dispersal rules in $p_i$ is always equal to 10, but the rate of dispersal along each edge from $p_i$ to $p_j$ depends on the degree of $p_i$. For instance, in the network topology $a$ (Fig. \ref{fig:topologies}), the value of $c_{d_{p_j}}$ in patches $p_0$ and $p_5$ is equal to 10, while the value of $c_{d_{p_j}}$ in patches $p_1$, $\dots$, $p_4$ is equal to 5; in the network topology $c$ (Fig. \ref{fig:topologies}), the value of $c_{d_{p_j}}$ in patch $p_0$ is equal to 2, while the value of $c_{d_{p_j}}$ in all other patches is equal to 10, and so on. In so doing, we can weight the dispersal of predators according to the position of each patch in the network, and simulate a situation where the flux of dispersal from each patch towards its adjacent patches is uniform throughout the whole network.
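The fourth condition can be stated compactly: each patch $p_i$ assigns the constant $10/deg(p_i)$ to every outgoing dispersal rule, so that the constants leaving $p_i$ always sum to 10. A minimal Python sketch (illustrative names, reproducing the values quoted above for topologies $a$ and $c$):

```python
def condition4_constants(adj, total=10.0):
    """c_{d_{p_j}} = total / deg(p_i) for every p_j in Adj(p_i), so the
    dispersal constants leaving each patch always sum to `total`."""
    return {i: total / len(nbrs) for i, nbrs in adj.items() if nbrs}

# Chain (topology a) and star (topology c) adjacencies:
chain_adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
star_adj = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
```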
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{eps/chainPs3D} \hspace{-0.8cm}
\includegraphics[width=8cm]{eps/gridPs3D}
\includegraphics[width=8cm]{eps/starPs3D} \hspace{-0.8cm}
\includegraphics[width=8cm]{eps/ringPs3D}
\includegraphics[width=8cm]{eps/completePs3D} \hspace{-0.8cm}
\includegraphics[width=8cm]{eps/randomPs3D}
\end{center}
\caption{The power of migration: LV dynamics in the phase space of each network topology.} \label{fig:phasespace_networks}
\end{figure}
Owing to space limits, in Fig. \ref{fig:phasespace_networks} we present the phase spaces of all network topologies, obtained from simulations of the fourth condition only. For each network, in particular, we show the phase space of the local dynamics of each patch. The graphics show that, in the case of the chain graph (phase space (a)), patches having different degrees are characterized by different dynamics: in fact, patches $p_0$ and $p_5$ show a different behavior with respect to the other patches. In addition to the role of the patch degree, we can see that the position of patches in the graph also plays a central role: despite the fact that patches $p_1, p_2, p_3$ and $p_4$ all have the same degree, the dynamics inside $p_1$ and $p_4$ differs from that of patches $p_2$ and $p_3$. This is due to the different power of the dispersal rules of their two neighbors, namely $c_{d_{p_j}}=10$ in patches $p_0$, $p_5$, and $c_{d_{p_j}}=5$ in patches $p_2$, $p_3$, which causes a larger flux of predator dispersal towards patches $p_1$ and $p_4$. The global effect is the presence of three different dynamics (one in $p_0$, $p_5$, another in $p_1$, $p_4$, and a third in $p_2$, $p_3$), all of which are characterized by oscillations in $X$ and $Y$ with no regular amplitudes (compare these phase spaces with the standard LV phase space of the single patch model given in Fig. \ref{fig:LV_singlepatch}, right side, and also with the phase spaces in Fig. \ref{fig:phasespace_networks}, graphics (d) and (e)). Furthermore, we can observe that these oscillations are characterized by an initially wider amplitude, which is reduced over time.
Similarly, the dynamics of the patches in the grid graph (phase space (b)) is influenced only by the number of edges; in this phase space we can identify two different types of dynamics: one for the patches with three edges ($p_1$, $p_4$) and another for those with two connections.
In the star graph (phase space (c)), the LV dynamics endures in all patches apart from $p_0$, where the number of preys $X$ collapses to an attractor at zero, and no oscillations in $X$ and $Y$ according to the LV dynamics can be established. In this patch, the number of predators fluctuates within a certain range, because of their dispersal from/to the other patches. Basically, under this condition patch $p_0$, which represents the center of the star, becomes a local area of the habitat where only dispersal occurs.
The simulations for the ring and complete graphs (phase spaces (d), (e)) show very similar results: in both cases, all patches in each graph have the same degree (two in the first configuration and five in the second one), leading to regular oscillations in $X$ and $Y$ with almost constant amplitude.
The results concerning the last configuration, the random graph (phase space (f)), show a combination of the effects described above. In particular, the dynamics of the patches differ from each other depending on the degree of the patches themselves; moreover, in $p_4$, which is characterized by the highest degree, the high number of incoming predators (migrating from the four adjacent patches) leads to the extinction of preys (similarly to what happens in patch $p_0$ of the star graph).
We also tested, for each network topology, the other three conditions listed above. In these cases, the results have shown that the amplification of the power of dispersal with respect to the patch degree gives rise to a balance between incoming and outgoing individuals, which leads to comparable LV dynamics in all networks, with regular oscillations inside each patch (data not shown).
\subsection{Network topologies and colonization}\label{subsec:colonization}
In this section, we compare the six network topologies with respect to the capability of colonizing the empty patches that each network contains, starting from the patches that contain a complete LV model and occupy a peculiar position in that network. We recall that in this work we consider only the migration of predators; hence, the empty patches are assumed to contain no predators but only an initial amount of preys. In each network $\nu$, the set of patches initialized with the complete LV model will be denoted by $p_{LV}^{\nu}$. To test the capability of colonization, we consider four different initial conditions, hereby denoted as IC$k$, $k$=$1, \dots, 4$, where $Y_0$=0 and:
\begin{enumerate}
\item IC1 is characterized by $c_{d_{p_j}}$=1 and $X_0$=10;
\item IC2 is characterized by $c_{d_{p_j}}$=1 and $X_0$=100;
\item IC3 is characterized by $c_{d_{p_j}}$=10 and $X_0$=10;
\item IC4 is characterized by $c_{d_{p_j}}$=10 and $X_0$=100.
\end{enumerate}
In each given network, all empty patches are initialized with the same chosen condition IC$k$, except for the patches in the set $p_{LV}^{\nu}$, which are initialized with a standard LV model, having the communication constant $c_{d_{p_j}}$ equal to the one given in the chosen IC$k$, and all other parameters as given in Section \ref{subsec:Psyst_metapop}.
With this type of analysis, we expect to determine which features of the network topologies are more relevant with respect to the colonization of empty patches, under a given initial condition. All conditions have been tested for each network and, for each fixed initial condition, different sets $p_{LV}^{\nu}$ have been considered. In the following, owing to space limits, we present only some results of these simulations, and briefly discuss the results obtained in the other analyzed conditions. In each of the following graphs, preys ($X$) are represented with solid lines, while predators ($Y$) are represented with dashed lines.
We start by considering the network $\nu=a$, that is, the chain graph. In this case, we present the results obtained under all the initial conditions IC$1$, IC$2$, IC$3$, IC$4$, considering three sets of LV patches, namely $p_{LV}^{a}$=$\{p_0, p_5\}$, $p_{LV}^{a}$=$\{p_2\}$ and $p_{LV}^{a}$=$\{p_0\}$. In the first case ($p_{LV}^{a}$=$\{p_0, p_5\}$, shown in Fig. \ref{fig:chain_patches0and5}) we can see that, when the power of dispersal is low (IC1, IC2), the time required by the predators to reach patches $p_2$ and $p_3$, which are at the greatest distance from $p_0$ and $p_5$, allows an initial uncontrolled growth of the preys in $p_2$ and $p_3$, which subsequently undergo extinction as soon as the predators enter the patch. Such \virg delay'' in the local establishment of a population of predators is the effect that prevents the formation of the LV dynamics; this effect, as shown hereafter, is a common aspect of all network topologies. Concerning the chain network, this is more evident in condition IC2, where the initial amount of preys inside the empty patches is higher than in IC1: in this case, the LV dynamics can be established only in four of the six patches.
On the other hand, with the initial conditions IC3 and IC4, the power of dispersal is sufficient to colonize all of the patches, irrespective of the number of preys initially present in the empty patches and of the position of the LV complete patch.
Similar results for the chain network have been obtained in the second analyzed case ($p_{LV}^{a}$=$\{p_2\}$, shown in Fig. \ref{fig:chain_patch2}) and in the third case ($p_{LV}^{a}$=$\{p_0\}$, data not shown).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{eps/chainDyn1c} \hspace{-0.7cm}
\includegraphics[width=8cm]{eps/chainDyn2c}
\includegraphics[width=8cm]{eps/chainDyn3c} \hspace{-0.7cm}
\includegraphics[width=8cm]{eps/chainDyn4c}
\end{center}
\caption{Colonization in the chain topology, with $p_{LV}^{a}$=$\{p_0, p_5\}$ and initial conditions IC1 (top left), IC2 (top right), IC3 (bottom left), IC4 (bottom right).} \label{fig:chain_patches0and5}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{eps/chainDyn1b} \hspace{-0.7cm}
\includegraphics[width=8cm]{eps/chainDyn2b}
\includegraphics[width=8cm]{eps/chainDyn3b} \hspace{-0.7cm}
\includegraphics[width=8cm]{eps/chainDyn4b}
\end{center}
\caption{Colonization in the chain topology, with $p_{LV}^{a}$=$\{p_2\}$ and initial conditions IC1 (top left), IC2 (top right), IC3 (bottom left), IC4 (bottom right).} \label{fig:chain_patch2}
\end{figure}
For the network topology $\nu=b$, that is, the grid graph, we show the results obtained in the case IC$1$, when $p_{LV}^{b}$=$\{p_0\}$ (Fig. \ref{fig:grid}, left side) and $p_{LV}^{b}$=$\{p_1\}$ (Fig. \ref{fig:grid}, right side). According to the position of the LV complete patches in this network topology, we can see that, in the first case, the predators are capable of colonizing patches $p_1$ and $p_3$, which are directly connected to $p_0$, and patch $p_4$, which is directly connected to both $p_1$ and $p_3$. However, patches $p_2$ and $p_5$ cannot be colonized. In the second case, the higher degree of the LV complete patch $p_1$ allows the colonization of all patches. With the initial condition IC2 (data not shown), in the other tested cases $p_{LV}^{b}$=$\{p_0\}$ and $p_{LV}^{b}$=$\{p_1\}$, only the patches directly connected to $p_0$ and $p_1$, respectively, are colonized by the predators.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=7.5cm]{eps/gridDyn1a} \hspace{-0.7cm}
\includegraphics[width=7.5cm]{eps/gridDyn1b}
\end{center}
\caption{Colonization in the grid topology, with initial condition IC1 and $p_{LV}^{b}$=$\{p_0\}$ (left), $p_{LV}^{b}$=$\{p_1\}$ (right).} \label{fig:grid}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=7.5cm]{eps/starDyn1b} \hspace{-0.7cm}
\includegraphics[width=7.5cm]{eps/starDyn1c}
\end{center}
\caption{Colonization in the star topology, with initial condition IC1 and $p_{LV}^{c}$=$\{p_1\}$ (left), $p_{LV}^{c}$=$\{p_1, p_3\}$ (right).} \label{fig:star}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=7.5cm]{eps/ringDyn1a} \hspace{-0.7cm}
\includegraphics[width=7.5cm]{eps/ringDyn2a}
\end{center}
\caption{Colonization in the ring topology, with $p_{LV}^{d}$=$\{p_0\}$ and initial condition IC1 (left) and IC2 (right).} \label{fig:ring}
\end{figure}
For the network topology $\nu=c$, that is, the star graph, we show the results obtained in the case IC$1$, when $p_{LV}^{c}$=$\{p_1\}$ (Fig. \ref{fig:star}, left side) and $p_{LV}^{c}$=$\{p_1, p_3\}$ (Fig. \ref{fig:star}, right side). According to the position of the LV complete patches in this network topology, we can see that, in the first case, no patches are colonized, because the high degree of $p_0$ (which is the only patch connected to $p_1$) spreads the predators over the other patches, thus preventing the formation of the LV dynamics. In the second case, the combined effect of migration from $p_1$ and $p_3$ allows the colonization of patch $p_0$, which is directly connected to both of them. We then performed other simulations starting from conditions IC3 and IC4: in these cases, the higher value of $c_{d_{p_j}}$ allows the colonization of every patch (except for patch $p_0$) independently of the initial position of the LV complete patch (data not shown). On the contrary, when we assume $p_{LV}^{c}$=$\{p_0\}$, that is, the center of the star, then all patches are fully colonized, independently of the considered initial condition.
For the network topology $\nu=d$, that is, the ring graph, we show the results obtained in the cases IC$1$ and IC$2$, when $p_{LV}^{d}$=$\{p_0\}$ (Fig. \ref{fig:ring}, left and right sides, respectively). Starting from the initial condition IC2, the predators are capable of colonizing only the patches directly connected to the LV complete patch $p_0$, while in the case IC1, patch $p_4$ (at distance 2 from the LV complete patch) is also colonized. These results highlight, in particular, another aspect that was more marginal in the other simulations: the stochastic nature of the communication process and of the growth of preys, which leads to the extinction of preys in patch $p_2$, while in patch $p_4$ it drives the local behavior to an oscillatory dynamics.
For the network topology $\nu=e$, that is, the complete graph, we show the results obtained in the case IC$1$, when $p_{LV}^{e}$=$\{p_0\}$ (Fig. \ref{fig:complete}, left side) and $p_{LV}^{e}$=$\{p_0, p_3\}$ (Fig. \ref{fig:complete}, right side). While in the second case -- where the LV dynamics is initially placed in two patches -- the predators can colonize all patches, in the first case the colonization of all empty patches fails. Once more, this is an effect of the stochastic noise combined with the low amounts of predators, which is in turn caused by the fact that the higher the number of adjacent patches, the lower the number of predators that persist inside each patch. In all other simulations performed with initial conditions IC3 and IC4, all patches have always been colonized, as the higher values of the dispersal constants ensure a more uniform spread of predators throughout the network, thus flattening the influence of the migration delay (data not shown).
For the network topology $\nu=f$, that is, the random graph, we show the results obtained in the case IC$1$, when $p_{LV}^{f}$=$\{p_0\}$ (Fig. \ref{fig:random}, left side) and $p_{LV}^{f}$=$\{p_2\}$ (Fig. \ref{fig:random}, right side). According to the position of the LV complete patches in this network topology, we can see that, in the first case, all patches are colonized by predators (similar results are obtained by placing the LV complete model in patch $p_4$ -- data not shown). In the second case, patch $p_5$ is not colonized because there is only one path, of length 2, connecting it to the initial complete LV patch $p_2$; the same holds for patch $p_3$, whose distance from $p_2$ is equal to 3. For similar reasons, considering the case of initial condition IC1 with the LV complete model in patch $p_3$, the only patch that is not colonized by predators is $p_2$ (data not shown).
In all the simulations performed with the initial condition IC2, some of the patches have not been colonized because of the high amount of preys initially occurring in the patches. On the other hand, with the initial conditions IC3, IC4, the power of dispersal allows the colonization of all patches (data not shown).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=7.5cm]{eps/completeDyn1a} \hspace{-0.7cm}
\includegraphics[width=7.5cm]{eps/completeDyn1b}
\end{center}
\caption{Colonization in the complete topology, with initial condition IC1 and $p_{LV}^{e}$=$\{p_0\}$ (left), $p_{LV}^{e}$=$\{p_0, p_3\}$ (right).} \label{fig:complete}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=7.5cm]{eps/randomDyn1a} \hspace{-0.7cm}
\includegraphics[width=7.5cm]{eps/randomDyn1d}
\end{center}
\caption{Colonization in the random topology, with initial condition IC1 and $p_{LV}^{f}$=$\{p_0\}$ (left), $p_{LV}^{f}$=$\{p_2\}$ (right).} \label{fig:random}
\end{figure}
\section{Discussion}\label{sec:concl}
The fragmented habitats of real metapopulations are usually characterized by complex network topologies. In this paper, we have analyzed six small topologies that can be considered representative of local areas in a structured habitat, and we have investigated the influence that the degree and the position of each patch in the topology can have on the migration of individuals, as well as on the capability of colonizing empty patches. Our analysis suggests that, with respect to the power of migration (Section \ref{subsec:communication}), we can identify different behaviors that depend on two characteristics of the topology. On a first level, the local behavior inside each patch is influenced by its degree. This is especially evident if we compare the network topologies described by the ring or complete graphs with the topology described by the star graph: while in the first case (where all nodes have the same degree) all patches are characterized by a similar (regular) oscillating dynamics, in the second case the most critical node is the center of the star (which has a much higher degree than all other nodes in the same graph). In the latter case, this patch is likely to undergo a local modification of its initial dynamics, due to a much higher incoming migration of individuals from all adjacent patches. On a second level, assuming that the degrees of the nodes are equal, the position of each patch in the topology also matters: for instance, we have seen that in the network topology described by the chain graph -- where all nodes, except the ones at the extremes of the chain, have the same degree -- the local dynamics is also influenced by the dynamics of the adjacent patches in the graph. Therefore, in hypothetical habitats where many patches are connected in a linear way, our results suggest that the length of the chain might have a negative role in the establishment and maintenance of local dynamics.
Considering the feature of colonization (Section \ref{subsec:colonization}), we have shown that, in most network topologies, the lack of colonization can be due to the delay of migrating predators with respect to the (uncontrolled) local growth of preys, which then leads to the extinction of preys and prevents the establishment of the LV dynamics. To effectively measure how strong the effect of this delay is, it would be interesting to understand whether the local growth of preys can be controlled by inducing their death, thus potentially allowing the establishment of oscillations. Besides this aspect, which deserves further investigation, our analysis has shown that the colonization of empty patches occurs more easily in those patches that are adjacent to the patch(es) initialized with the LV complete model. Once more, this highlights the relevance of the position of the patch(es) where standard oscillations in preys and predators are already settled at the beginning of the simulation. Indeed, the power of colonization is stronger in the ring and complete networks -- where the position of the LV complete patch is irrelevant (as the spread of migrating individuals throughout the network is uniform) -- and it is weaker in the star network -- where the position of the LV complete patch is of primary importance (as the spread of migrating individuals throughout the network strongly depends on whether the patch is placed at the center or at the tips of the star).
In addition to the investigations presented in this work, further analyses that we plan to perform on metapopulation systems concern, for instance, the study of the aspects considered in this paper (migration, colonization, network topologies, etc.) under other local and global dynamics, e.g., population growth according to the logistic function. Moreover, an interesting issue that might be investigated is the synchronization of local population dynamics (e.g., by considering the establishment and decay of oscillations in prey and predators) during migration through a given network topology, or in the process of colonization.
Concerning the use of graphs, other relevant questions regard the analysis of the dynamics with respect to graph properties, such as different measures of habitat connectivity (centrality indexes) \cite{evolution_networks,newman_siam}. In this context, for example, the star graph can resemble the notion of hub (a node with high degree) in a typical scale-free network, a structure that is known to be robust to random disturbances but highly vulnerable to deliberate attacks on the hubs \cite{strogatz,barabasi}.
Another topic of interest concerns the fact that various populations can coexist in a common habitat, but have distinct (inter)species dynamics or different dispersal capabilities in that habitat \cite{Bunn00}. In cases like this, it would be interesting to construct and analyze different metapopulation models, one for each target species, according to both the patch-to-patch connections and to the specific population dynamics. By comparing and intersecting the results obtained on the distinct network topologies of the common habitat derived in this way, it would be possible to determine the locations of the habitat that are most important for each species. This, in turn, would aid the design of natural reserve systems offering the most appropriate solution for all species, in terms of the maximal improvement of dispersal (reduction of species isolation) and the minimal spread of disturbances (diseases, pathogens, invasive species, etc.) \cite{habitat_mosaics}.
We believe that our modeling approach opens interesting perspectives and can represent a useful tool for the investigation of a wide range of properties in metapopulation systems. We expect that applications of this model to real cases -- characterized by complex habitat networks (where each patch possesses its own features of quality, occupancy, connectivity) and different population dynamics -- will aid in the achievement of important results and new perspectives in ecology.
\bibliographystyle{eptcs}
\section{Introduction}
Yukawa systems, i.e. many particle systems where the pair interaction potential energy is
\begin{eqnarray}
\phi(r)&=&Z_1Z_2\varphi(r) \nonumber \\
\varphi(r)&=&\frac{\exp(-\kappa r)}{r}
\end{eqnarray} have been of interest for some time. The Yukawa potential has the unique feature that by varying the screening parameter $\kappa$ the potential can assume the features both of a short range (hard sphere like) and of a long range (Coulomb like) interaction potential. Since the 1970s this feature has motivated a number of investigations relating to the properties of Yukawa liquids and solids, their phases and phase transitions \cite{LOWEN12,LOWEN15,LOWEN16}. Quite apart from this academic interest, the Yukawa potential has been recognized as a good approximation for the interaction potential between charged particles in colloids \cite{LOWEN} and, more recently, in complex (dusty) plasmas, where the original Coulomb interaction between the main constituents is transformed by Debye screening into a Yukawa-type potential (for a review see, e.g. \cite{MorfREV,BonREV}). Complex plasmas constitute an especially suitable medium for the study of waves and collective excitations because these are much less damped than in colloidal systems. In recent years a host of papers, both theoretical \cite{WangPRL01,PeetersPRA87,MurilloPRL00,DonkoJPC,PRL-Einstein,HartmannJPA06,IEEED,GoldenPRE10,KalmanEPL10} and experimental \cite{NosenkoPRL06,PielPOP06}, have studied collective modes in Yukawa systems \cite{SmithASR08,MatthewsASR06,QiaoPRE03,RaoPSS90,MelzerPRE03,LinI00,LinI01,Fortov04}.
Most of the experimental work on colloidal systems and complex plasmas has focused attention on two dimensional (2D) layers. We note that the physics of the 2D and 3D systems, the dynamics of the collective excitations in particular, is, in fact, quite different and addressing 2D and 3D systems separately is also warranted on theoretical grounds \cite{LOWEN}.
The strength of the coupling between the particles can be characterized by the nominal bare coupling parameter $\Gamma=(Z^2e^2)/(k_{\rm B}Ta)$, with $a$ being the Wigner-Seitz radius. A physically more meaningful $\Gamma_{\rm eff}$, which basically represents the ratio of the potential and kinetic energies, can be defined for orientation purposes as $\Gamma_{\rm eff}=\Gamma\exp(-\kappa a)$ \cite{IkeziPF86}, although more sophisticated expressions are available \cite{Vau02,HaPRE05,OtteffG}.
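As an orientation, the screened coupling estimate can be evaluated directly; the following minimal sketch (our own illustration, not part of the analysis) implements $\Gamma_{\rm eff}=\Gamma\exp(-\kappa a)$:

```python
import math

def gamma_eff(gamma, kappa_a):
    """Ikezi-style orientation estimate of the effective coupling:
    Gamma_eff = Gamma * exp(-kappa * a)."""
    return gamma * math.exp(-kappa_a)

# e.g. a nominal Gamma = 120 at kappa*a = 1 is reduced to about 44
```

The more sophisticated $\Gamma_{\rm eff}(\Gamma,\kappa)$ prescriptions cited above refine this; the simple exponential form suffices for orientation.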
The main interest lies in the behavior of the strongly coupled state, $\Gamma_{\rm eff} \gg 1$. In this strongly coupled state the system can be either in the dense liquid or in the crystalline solid phase.
Past works on the dynamics have overwhelmingly concentrated on Yukawa systems consisting of one single component, the equivalent of the One Component Plasma, both in three and two dimensions (Y3dOCP and Y2dOCP, respectively). A great deal of theoretical \cite{MurilloPRL00,MurilloPOP98,MurilloPOP00,PRL-Einstein,HartmannJPA06,IEEED,GoldenPRE10,KalmanEPL10} and computer simulation \cite{HamaguchiPRE97,OhtaPRL00,DonkoJPC,PPL10,DonkoJPA09} effort has been devoted to the mapping and understanding of the collective mode structures in these systems, both in the liquid and solid phases. The theoretical methods required in the two situations are, of course, quite different. Once the lattice structure is identified, the crystalline solid is amenable to the standard harmonic phonon analysis. Concerning the treatment of the collective modes in the strongly coupled liquid phase, the Quasi Localized Charge Approximation (QLCA) approach developed by Kalman and Golden \cite{KalmanPRA90,GoldenPOP00} has turned out to be quite successful. The QLCA borrows ideas from the harmonic phonon approximation and describes the liquid in terms of particles trapped and oscillating in local potential wells of the fluctuating potential (for details, see \cite{GoldenPOP00}). As a result of these works, the collective mode spectra of the Y3dOCP and Y2dOCP are well understood and this understanding is well corroborated by observations \cite{NosenkoPRL06,PielPOP06}.
As to strongly coupled Yukawa mixtures consisting of more than one single species, in particular binary Yukawa mixtures (Y3dBM, Y2dBM), the collective dynamics of these systems constitutes a largely unexplored area (see, however, a recent work by Daligault \cite{DaligaultPRL12}), even though the problem is of great theoretical interest. (For a related one dimensional problem see \cite{FerreiraPRB08,FerreiraJPC10}). One expects that the simple analytic structure of the Yukawa potential allows one to derive nearly exact solutions, which will elucidate the common features of the dynamics of binary liquids and solids \cite{ScaliseJCE09,ScaliseFPE10}. Also, the flexibility of the Yukawa interaction makes the qualitative features of the results serve as paradigms for the collective mode structures of binary systems interacting through other potentials as well (alloys, dipole systems, etc.). From the point of view of actual applications, the creation of binary complex plasmas poses technical problems, but, nevertheless, one expects that such strongly coupled complex plasmas of two different grain species will become available in the near future.
This paper, the first in a series, presents a systematic study of the collective mode spectra of the Y2dBM system. The system consists of two species, with charges $Z_1$ and $Z_2$, masses $m_1$ and $m_2$ and densities $n_1$ and $n_2$ (or concentrations $c_1$ and $c_2$), respectively. Our strategy is similar to the one followed in our previous works on the Y3dOCP and Y2dOCP: for the theoretical analysis of the liquid state we apply the QLCA formalism; for the crystalline solid we calculate dispersion relations by the standard method. In both cases, we parallel our theoretical analysis with detailed Molecular Dynamics (MD) simulations of the density and current fluctuation spectra of the system; it is, then, the positions of the peaks of the fluctuation spectra from which the dispersion relations are inferred. It has to be noted, however, that following this road map is fraught with questions stemming from the fundamental difference between the binary and single component systems. Concerning the QLCA, is it justified to represent the system through separate collective coordinates for each of the species in a liquid where the two species are spatially not separated? Concerning the MD, if different partial fluctuation spectra provide conflicting information, which one of them should be accepted as most relevant to the actual dispersion? Finally, one has to be aware of the fact that in the presence of different $Z_1$ and $Z_2$ charges with different $c_1$ and $c_2$ concentrations, the liquid phase is governed by a complex phase diagram \cite{ScaliseJCE09,ScaliseFPE10,OgataPRE93,IyetomiPRB89,LowenJPC12,JiangEPL11,JiangIEE12} in which only certain combinations of these parameters allow a homogeneous system. In the solid phase, similarly, with a given set of parameters only certain lattice structures are permissible \cite{LOWEN1,LOWEN2}.
We address these issues in the course of the paper. We have to emphasize, though, that our goal is restricted to determining the existence, interrelationships and dispersion of the collective modes. We do not address a number of related problems: the damping of the modes, the detailed structures of and the link between the various fluctuation spectra, the critical freezing values of $\Gamma$, the nature of the underlying order in the liquid phase, lattice stability and structures, etc. The issues investigated in this paper are organized according to the following plan: Section II is devoted to the description of the liquid phase and Section III to that of the crystalline solid phase. In each case we first analyze the qualitative features of the optic ($\omega$ finite at $k=0$) and then the acoustic ($\omega \rightarrow 0$ as $k \rightarrow 0$) excitations, before presenting the description of the full mode structure. In the concluding Section we compare the mode structures in the two phases and draw conclusions. (For a preliminary account of some of the results pertaining to the optic modes see \cite{KalmanCPP12} and to the acoustic modes see \cite{PRL-2011}.)
Whenever not noted otherwise, we measure frequencies in units of $\omega_1$, the plasma frequency of species 1, use $\Gamma_1$, the bare coupling value for species 1 to characterize the coupling strength, and adopt $\kappa a=1$ ($a=\sqrt{a_1 a_2}$) for the screening parameter.
\section{Strongly Coupled Liquid Phase}
The theoretical analysis of the mode structure in the liquid state is based on the QLCA approach, as discussed above. The fundamental equation for the dynamical matrix is
\begin{eqnarray}
C_{AB}^{\mu\nu}({\bf k})&=&-Z_AZ_Be^2\frac{\sqrt{n_An_B}}{\sqrt{m_Am_B}}\Bigg[\int {\rm d}^2r \Bigg\{\Psi^{\mu\nu}({\bf r})\left(\exp(-i{\bf k}\cdot {\bf r})-\delta_{AB}\right)\left[1+h_{AB}(r)\right]- \nonumber \\
& &\delta_{AB}\sum_{C\ne A}\frac{Z_Cn_C}{Z_An_A}\Psi^{\mu\nu}({\bf r})\left[1+h_{AC}(r)\right] \Bigg\} \Bigg] \nonumber \\
&=& -\omega^2_{AB}\frac{1}{2\pi}\int{\rm d}^2\bar{r}\Psi^{\mu\nu}({\bf \bar{r}})\left(\exp(-i{\bf k}\cdot {\bf r})-\delta_{AB}\right)\left[1+h_{AB}(\bar{r})\right]+ \nonumber \\
& & \delta_{AB}\sum_{C\ne A}\Omega^2_{AC}\int{\rm d}^2\bar{r}\Psi^{\mu\nu}({\bf \bar{r}})\left[1+h_{AC}(\bar{r})\right]
\end{eqnarray}
with
\begin{equation}
\Psi^{\mu\nu}(r) = \partial_\mu \partial_\nu \varphi(r),
\end{equation}
where $\varphi(r)$ is the Yukawa interaction, $\varphi(r)=\exp(-\kappa r)/r$ characterized by the screening constant $\kappa$. Then
\begin{eqnarray}
\Psi^{\mu\nu}(r) &=& \frac{\exp(-\kappa r)}{r}\left(3\frac{r^\mu r^\nu}{r^2} a(\kappa r)-\delta^{\mu \nu} b (\kappa r)\right) \nonumber \\
a(y) &=& 1+y+\frac13 y^2, ~~~~~~~~b(y)=1+y.
\end{eqnarray}
Additional notational conventions are
\begin{eqnarray}
\Omega^2_{AB} &=&\frac{2\pi e^2 Z_AZ_Bn_B}{m_A a} \nonumber \\
\omega^2_{AB} &=&\frac{2\pi e^2 Z_AZ_B\sqrt{n_An_B}}{\sqrt{m_Am_B} a} \nonumber \\
a &=& \sqrt{a_1a_2} \nonumber \\
a_A &=& 1/\sqrt{\pi n_A} \nonumber \\
\bar{\kappa} &=& \kappa a \nonumber \\
\bar{r} &=& r/a \nonumber \\
y &=& \kappa r
\end{eqnarray}
with $Z$, $m$, $n$ and $a$ representing the charge number, mass, density and Wigner-Seitz radius for the respective components. The $\Omega_{AB}$ and $\omega_{AB}$ frequencies are the nominal Einstein and nominal plasma frequencies of the system. The $h_{AB}$ pair correlation functions are to be obtained from the MD simulations, as described below.
The elements of the $C$-matrix can be expressed in terms of the kernel functions $\cal{K}$, $\cal{L}$:
\begin{eqnarray} \label{eq:kernels}
C^L_{AB} &=& \frac{\omega^2_{AB}}{2} \int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal K}(kr,y)\left[1+h_{AB}(\bar{r})\right] - \nonumber \\
& & \delta_{AB}\sum_{C(all)}\frac{\Omega^2_{BC}}{2}\int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal K}(0,y)\left[1+h_{BC}(\bar{r})\right] \nonumber \\
C^T_{AB} &=& \frac{\omega^2_{AB}}{2} \int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal L}(kr,y)\left[1+h_{AB}(\bar{r})\right] - \nonumber \\
& & \delta_{AB}\sum_{C(all)}\frac{\Omega^2_{BC}}{2}\int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal L}(0,y)\left[1+h_{BC}(\bar{r})\right]
\end{eqnarray}
with the kernel functions given by
\begin{eqnarray}
{\cal K}(u,y) &=& -\exp(-y)\left\{\left[1+y+y^2\right]J_0(u)-3\left[1+y+y^2/3\right]J_2(u) \right\} \nonumber \\
{\cal L}(u,y) &=& -\exp(-y)\left\{\left[1+y+y^2\right]J_0(u)+3\left[1+y+y^2/3\right]J_2(u) \right\}
\end{eqnarray}
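For numerical work these kernels can be evaluated directly; the sketch below is our own illustration, with the Bessel functions computed from their standard integral representation rather than from a library:

```python
import math

def bessel_j(n, u, steps=4000):
    # J_n(u) = (1/pi) * Integral_0^pi cos(n*t - u*sin(t)) dt  (midpoint rule)
    h = math.pi / steps
    return sum(math.cos(n * (i + 0.5) * h - u * math.sin((i + 0.5) * h))
               for i in range(steps)) * h / math.pi

def kernel_K(u, y):
    # longitudinal kernel K(u, y), with u = k r and y = kappa r
    return -math.exp(-y) * ((1 + y + y * y) * bessel_j(0, u)
                            - 3 * (1 + y + y * y / 3) * bessel_j(2, u))

def kernel_L(u, y):
    # transverse kernel L(u, y); note K(0, y) = L(0, y) = -exp(-y)(1 + y + y^2)
    return -math.exp(-y) * ((1 + y + y * y) * bessel_j(0, u)
                            + 3 * (1 + y + y * y / 3) * bessel_j(2, u))
```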
In order to clearly display the behavior in the vicinity of $k=0$
we also introduce
\begin{eqnarray}
{\cal G}(u,y) &=& {\cal K}(u,y)-{\cal K}(0,y) \nonumber \\
{\cal H}(u,y) &=& {\cal L}(u,y)-{\cal L}(0,y) \nonumber \\
{\cal F}(y) &=& -{\cal K}(0,y) = -{\cal L}(0,y)
\end{eqnarray}
The integrals of the kernel functions over the pair correlation functions $1+h(r)$ are
\begin{eqnarray} \label{eq:kerint}
K_{AB}(k) &=& \int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal K}(kr,y)\left[1+h_{AB}(r)\right] \nonumber \\
L_{AB}(k) &=& \int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal L}(kr,y)\left[1+h_{AB}(r)\right] \nonumber \\
F_{AB} &=& \int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal F}(y)\left[1+h_{AB}(r)\right]
\end{eqnarray}
These integrals would be divergent at $r=0$, were it not for the pair correlation function $1+h(r)$ that becomes 0 at $r=0$. Similarly
\begin{eqnarray}
G_{AB}(k) &=& \int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal G}(kr,y)\left[1+h_{AB}(r)\right] \nonumber \\
H_{AB}(k) &=& \int\frac{{\rm d}\bar{r}}{\bar{r}^2}{\cal H}(kr,y)\left[1+h_{AB}(r)\right]
\end{eqnarray}
Introducing the asymmetry parameters $p$ and $q$
\begin{eqnarray}
p^2 &=& Z_2n_2 / Z_1n_1 \nonumber \\
q^2 &=& Z_2m_1 / Z_1m_2
\end{eqnarray}
one obtains for the longitudinal elements
\begin{eqnarray} \label{eq:Clong}
C^L_{11}(k) &=& \frac{\omega_1^2}{2}\left[ G_{11}(k)+p^2F_{12} \right] \nonumber \\
C^L_{12}(k) &=& \frac{\omega_1^2}{2}pq\left[ G_{12}(k)-F_{12} \right] \nonumber \\
C^L_{22}(k) &=& \frac{\omega_1^2}{2}\left[ p^2q^2G_{22}(k)+q^2F_{12} \right]
\end{eqnarray}
while the transverse elements are
\begin{eqnarray} \label{eq:Ctran}
C^T_{11}(k) &=& \frac{\omega_1^2}{2}\left[ H_{11}(k)+p^2F_{12} \right] \nonumber \\
C^T_{12}(k) &=& \frac{\omega_1^2}{2}pq\left[ H_{12}(k)-F_{12} \right] \nonumber \\
C^T_{22}(k) &=& \frac{\omega_1^2}{2}\left[ p^2q^2H_{22}(k)+q^2F_{12} \right]
\end{eqnarray}
We have found it useful to introduce $\omega_1$ ($=\omega_{11}$) as the reference frequency. In general, there exist 4 modes as the roots of the characteristic equations,
\begin{equation}
||C^{L,T}_{AB}-\omega^2|| = 0,
\end{equation}
which will be labeled $\omega^L_{+}$, $\omega^T_{+}$, $\omega^L_{-}$, $\omega^T_{-}$. The $\pm$ notation identifies the polarizations in species space of the modes: the ``+'' sign designates polarization where the two components move in-phase, while the ``--'' sign designates polarization where the two components move out-of-phase. The two + modes are acoustic ($\omega \rightarrow 0$ as $k \rightarrow 0$) and the two -- modes are optic modes ($\omega$ finite for $k=0$). In addition, the modes are labeled as Longitudinal $L$ or Transverse $T$, referring to their polarization with respect to $k$ when the propagation is along the principal axes. We note that the elements of the $C$-matrix, and consequently the eigenfrequencies, depend only on the two combinations $p$ and $q$ of the originally introduced three parameters $Z=Z_2/Z_1$, $M=m_2/m_1$, $N=n_2/n_1$.
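This reduction of the three ratios to the two combinations $p$ and $q$ is easily encoded; a minimal sketch (our own illustration of the definitions above):

```python
import math

def asymmetry_params(Z, M, N):
    """p^2 = Z2 n2/(Z1 n1) = Z*N and q^2 = Z2 m1/(Z1 m2) = Z/M,
    for the ratios Z = Z2/Z1, M = m2/m1, N = n2/n1."""
    return math.sqrt(Z * N), math.sqrt(Z / M)
```

For identical species ($Z=M=N=1$) this gives $p=q=1$, and the binary formalism collapses onto the one-component case.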
\subsection{Optic modes}
For $k \rightarrow 0$, $G_{AB}(k)$ and $H_{AB}(k)$ are of $O(k^2)$; thus $\omega_{-}(k=0)$, the gap frequency, is longitudinal/transverse degenerate, as it should be for an isotropic liquid:
\begin{eqnarray} \label{eq:Qopt}
\omega_{\rm GAP} &=& \omega^L_{-}(k=0) = \omega^T_{-}(k=0) = \omega_1\sqrt{\frac12\left(p^2+q^2\right)F_{12}} \nonumber \\
&=& \sqrt{\frac12\left(\Omega_{12}^2+\Omega_{21}^2\right) F_{12}} = \sqrt{\frac12\left(\bar{\Omega}_{12}^2+\bar{\Omega}_{21}^2\right)}.
\end{eqnarray}
In view of Eqs. (\ref{eq:kernels}) through (\ref{eq:kerint}) $F_{AB}$ can be interpreted as the average potential generated by species $B$ in the environment of a particle of species $A$.
\begin{eqnarray} \label{eq:FFF}
F_{AB} &=& \frac{1}{2\pi}\int {\rm d}^2\bar{r}\langle\Psi(r)\rangle \left[1+h_{AB}(r)\right] \nonumber \\
\bar{\Omega}^2_{AB} &=& \Omega^2_{AB} F_{AB}
\end{eqnarray}
with $\langle \dots \rangle$ designating angular averaging. The $\bar{\Omega}_{AB}$ frequency represents the oscillation frequency of a particle of species $A$ in the frozen environment of particles of species $B$. We note that it is the correlation dependent $\bar{\Omega}$-s, rather than the nominal $\Omega$-s that are the real Einstein frequencies of the system \cite{PPL10}, with a similar definition being used in the theory of liquids \cite{CPP43}. In a single component system the Einstein frequency $\bar{\Omega}$ also provides the $\omega(k\rightarrow \infty)$ limiting frequency \cite{PRL-Einstein}.
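As an illustration of these definitions, the correlation integral and the resulting gap frequency can be evaluated numerically; in the sketch below, the step-function pair distribution is an assumed ``correlation hole'' model standing in for the MD-generated $g_{12}(r)$:

```python
import math

def corr_integral(kappa_bar, g, r_max=50.0, n=20000):
    """F_AB = Int dr/r^2 exp(-y)(1 + y + y^2) g_AB(r), y = kappa*r, r in units of a.
    The vanishing of g(r) at small r regularizes the 1/r^2 singularity."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        y = kappa_bar * r
        total += math.exp(-y) * (1.0 + y + y * y) / (r * r) * g(r) * dr
    return total

def gap_frequency(p, q, F12, omega1=1.0):
    """omega_GAP = omega_1 * sqrt((p^2 + q^2) * F12 / 2)."""
    return omega1 * math.sqrt(0.5 * (p * p + q * q) * F12)

# assumed correlation-hole model: g(r) = 0 inside r_c = a, 1 outside
hole = lambda r: 1.0 if r >= 1.0 else 0.0
```

With the MD pair correlations in place of the model $g(r)$, the same routine yields the $\Gamma$-dependent $F_{12}$ shown in the Inset of FIG \ref{fig:GapEins}.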
In order to find the $k\rightarrow \infty$ limits for the binary systems we re-express the elements of the $C$-matrix as
\begin{eqnarray}
C^L_{11} &=& \frac{\omega^2_{11}}{2}K_{11}(k)+\frac12\left(\bar{\Omega}^2_{11} + \bar{\Omega}^2_{12}\right) \nonumber \\
C^L_{12} &=& \frac{\omega^2_{12}}{2}K_{12}(k) \nonumber \\
C^L_{22} &=& \frac{\omega^2_{22}}{2}K_{22}(k)+\frac12\left(\bar{\Omega}^2_{22} + \bar{\Omega}^2_{21}\right)
\end{eqnarray}
In the $k\rightarrow \infty$ limits the $K$-terms vanish. This can be seen by observing that
\begin{equation}
K_{AB}(k) = k\int\frac{{\rm d}u}{u^2}{\cal K}(u,y)\left[1+h_{AB}(u/k)\right]
\end{equation}
and that $\left[1+h(r\rightarrow 0)\right] \rightarrow0$ fast enough to make this happen. Similar considerations apply to the transverse elements.
Thus the $k\rightarrow \infty$ limits yield the respective upper and lower effective Einstein frequencies $\bar{\Omega}_{I}$ and $\bar{\Omega}_{II}$:
\begin{eqnarray} \label{eq:EinF}
\omega_{-}(k\rightarrow \infty) &=& \sqrt{\frac12 \left(\bar{\Omega}^2_{11} + \bar{\Omega}^2_{12}\right)} = \bar{\Omega}_{I} \nonumber \\
\omega_{+}(k\rightarrow \infty) &=& \sqrt{\frac12 \left(\bar{\Omega}^2_{22} + \bar{\Omega}^2_{21}\right)} = \bar{\Omega}_{II}.
\end{eqnarray}
The calculated gap frequencies and the effective Einstein frequencies as functions of $\Gamma$, together with the results obtained by MD simulations (see below) are shown in FIG \ref{fig:GapEins}; also shown is the variation of the correlation integral $F_{12}$. In anticipation of the results of the next Section, we have also indicated the gap frequencies in the crystal lattices. We will further comment on the relationships between these gap frequencies in the next Section.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure01}
\caption{Liquid state: QLCA gap (\ding{108}) and Einstein (\ding{110}) frequencies vs. $\Gamma$. The arrows indicate the positions of the corresponding gap in the lattice. The Inset shows the $\Gamma$ dependence of the correlation integral $F_{12}$ (symbols), which does not vary with the mass ratio. (a) $n_2 = n_1/2$; (b) $n_2=n_1$.}
\label{fig:GapEins}
\end{center}
\end{figure}
\subsection{Acoustic modes and sound speed}
We now turn to the calculation of the acoustic modes in the binary system. We are interested primarily in the small-$k$ behavior, which will lead to the determination of the sound speed.
First we observe that by dropping $h(r)$ in the integrals for the $G(k)$ and $H(k)$ functions the resulting $G^0(k)$ and $H^0(k)$ integrals can be evaluated analytically and provide the RPA expressions:
\begin{eqnarray} \label{eq:G0H0}
G^0(k) &=& \int\frac{{\rm d}\bar{r}}{\bar{r}^2}\exp(-y)\left[\left\{1+y+y^2\right\}\left\{1-J_0(u)\right\} + 3\left\{1+y+y^2/3\right\}J_2(u)\right] = \frac{2\bar{k}^2}{\sqrt{\bar{\kappa}^2+\bar{k}^2}} \nonumber \\
H^0(k) &=& \int\frac{{\rm d}\bar{r}}{\bar{r}^2}\exp(-y)\left[\left\{1+y+y^2\right\}\left\{1-J_0(u)\right\} - 3\left\{1+y+y^2/3\right\}J_2(u)\right] = 0
\end{eqnarray}
The $C$-matrix equivalent to the cold RPA approximation would be obtained by dropping the $F_{12}$ terms in (\ref{eq:Clong}) and (\ref{eq:Ctran}), and using (\ref{eq:G0H0}) for $C_{11}$, $C_{12}$ and $C_{22}$. Then one obtains the RPA result
\begin{eqnarray}
(\omega_{+}^{L})^2 &=& \omega_0^2\frac{\bar{k}^2}{\sqrt{\bar{\kappa}^2+\bar{k}^2}} \nonumber \\
\omega_{-}^L &=& 0 \nonumber \\
\omega_0 &=& \omega_1\sqrt{1+p^2q^2} = \sqrt{\omega_1^2+\omega_2^2}.
\label{eq:RPAw}
\end{eqnarray}
Note that the intuitively more natural route to the RPA limit, setting $h_{12}$ equal to zero everywhere in (\ref{eq:Clong}) and (\ref{eq:Ctran}), would result in a meaningless divergent integral for $F_{12}$. This feature shows that there is no smooth transition from the QLCA expression to the RPA. In other words, in contrast to the case of the YOCP, in the YBM the RPA Eq. (\ref{eq:G0H0}) cannot simply be amended by adding correlational corrections in order to obtain the strong coupling expression: the strong correlations show up in an essentially non-perturbative fashion.
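For reference, the cold-RPA in-phase mode can be evaluated directly; a minimal sketch (our own illustration, with frequencies in units of $\omega_1$ and lengths in units of $a$):

```python
import math

def rpa_longitudinal(k_bar, kappa_bar, p, q, omega1=1.0):
    """Mean-field (cold RPA) in-phase longitudinal mode:
    omega^2 = omega0^2 * k^2 / sqrt(kappa^2 + k^2),
    with omega0^2 = omega1^2 * (1 + p^2 q^2) = omega1^2 + omega2^2."""
    omega0_sq = omega1 * omega1 * (1.0 + (p * q) ** 2)
    return math.sqrt(omega0_sq * k_bar * k_bar
                     / math.sqrt(kappa_bar ** 2 + k_bar ** 2))
```

At long wavelength this branch is acoustic, with the RPA sound speed $s=\omega_0 a/\sqrt{\bar{\kappa}}$.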
Returning now to Eqs. (\ref{eq:Clong}) and (\ref{eq:Ctran}), we calculate the small-$k$ expansion. The result is given in terms of the integrals
\begin{eqnarray}
U_{AB} &=& -\frac{5}{16}\int_0^\infty {\rm d}y\left[1+y+\frac35y^2\right]\exp(-y)h_{AB}(r), \nonumber \\
V_{AB} &=& -\frac{1}{16}\int_0^\infty {\rm d}y\left[1+y-y^2\right]\exp(-y)h_{AB}(r).
\end{eqnarray}
Thus the longitudinal and transverse $C^L_{AB}$ and $C^T_{AB}$ matrix elements in the $k\rightarrow 0$ limit become
\begin{eqnarray} \label{eq:221}
C^L_{11}(k\rightarrow 0) &=& \frac{\omega_1^2}{2}\left\{2(1-U_{11})\frac{\bar{k}^2}{\bar{\kappa}} + p^2 F_{12}\right\} \nonumber \\
C^L_{12}(k\rightarrow 0) &=& \frac{\omega_1^2}{2}pq\left\{2(1-U_{12})\frac{\bar{k}^2}{\bar{\kappa}} - F_{12}\right\} \nonumber \\
C^L_{22}(k\rightarrow 0) &=& \frac{\omega_1^2}{2}\left\{2p^2q^2(1-U_{22})\frac{\bar{k}^2}{\bar{\kappa}} + q^2 F_{12}\right\} \nonumber \\
C^T_{11}(k\rightarrow 0) &=& \frac{\omega_1^2}{2}\left\{2V_{11}\frac{\bar{k}^2}{\bar{\kappa}} + p^2 F_{12}\right\} \nonumber \\
C^T_{12}(k\rightarrow 0) &=& \frac{\omega_1^2}{2}pq\left\{2V_{12}\frac{\bar{k}^2}{\bar{\kappa}} - F_{12}\right\} \nonumber \\
C^T_{22}(k\rightarrow 0) &=& \frac{\omega_1^2}{2}\left\{2p^2q^2V_{22}\frac{\bar{k}^2}{\bar{\kappa}} + q^2 F_{12}\right\}.
\end{eqnarray}
Proceeding now from (\ref{eq:221}), after some algebra one finds the small-$k$ expansion of the relevant $\omega_{+}^L(k)$, $\omega_{+}^T(k)$ mode frequencies as
\begin{eqnarray}\label{eq:222}
(\omega_{+}^{L})^2(k\rightarrow 0) &=& \bar{\omega}^2\left\{1-\frac{U_{11}+2p^2U_{12}+p^4U_{22}}{\left(1+p^2\right)^2}\right\}\frac{\bar{k}^2}{\bar{\kappa}} \nonumber \\
(\omega_{+}^{T})^2(k\rightarrow 0) &=& \bar{\omega}^2\left\{\frac{V_{11}+2p^2V_{12}+p^4V_{22}}{\left(1+p^2\right)^2}\right\} \frac{\bar{k}^2}{\bar{\kappa}}
\end{eqnarray}
While the first term in $(\omega_{+}^{L})^2$ is RPA-like in appearance, since it shows no explicit dependence on $h(r)$, it in fact reflects an essentially strong coupling behavior, the correlational effects manifesting themselves through the coefficient $\bar{\omega}$, which we will refer to as the ``virtual average atom'' (VAA) frequency (this frequency has also been mentioned in relation to the self-diffusion coefficient of a plasma in \cite{HansenJoly}).
\begin{equation} \label{eq:bon}
\bar{\omega}^2=\omega_1^2\frac{q^2}{p^2+q^2}\left(1+p^2\right)^2.
\end{equation}
The Virtual Average Atom in fact represents an entity created from the averages of the system parameters. To see this, Eq.~(\ref{eq:bon}) can be re-written in terms of the average charge and mass as
\begin{eqnarray} \label{eq:226}
\bar{\omega} &=& \sqrt{\frac{2\pi e^2}{a}\frac{\langle Z\rangle^2}{\langle m \rangle}n}, \nonumber \\
n &=& n_1 + n_2.
\end{eqnarray}
The averages are defined through
\begin{equation}
\langle X \rangle = \frac{\sum_i X_in_i}{\sum_i n_i}.
\end{equation}
Compare now $\bar{\omega}$ with $\omega_0$ of Eq.~(\ref{eq:RPAw}): the dramatic difference in the dependence on the plasma parameters, in particular on the mass ratio, is evident. (A similar result, but restricted to the $Z_1=Z_2$ case, was already anticipated in \cite{PRL-2011}).
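The contrast between the two frequencies is easily quantified; a sketch (our own illustration, in units of $\omega_1$):

```python
import math

def vaa_frequency(p, q, omega1=1.0):
    """VAA frequency: omega_bar^2 = omega1^2 * q^2 * (1 + p^2)^2 / (p^2 + q^2)."""
    return omega1 * (1.0 + p * p) * math.sqrt(q * q / (p * p + q * q))

def rpa_frequency(p, q, omega1=1.0):
    """RPA frequency: omega0^2 = omega1^2 * (1 + p^2 q^2)."""
    return omega1 * math.sqrt(1.0 + (p * q) ** 2)
```

For identical species ($p=q=1$) the two coincide at $\sqrt{2}\,\omega_1$; for a mass-asymmetric mixture with $Z=N=1$ and $M=5$ (i.e. $p=1$, $q=1/\sqrt{5}$) one finds $\bar{\omega}\approx 0.82\,\omega_1$ versus $\omega_0\approx 1.10\,\omega_1$.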
The notion of the VAA originates in the literature on liquid alloys and disordered binary systems \cite{VAA1,VAA2,VAA3}, where it was introduced as a heuristic concept. Here this behavior is derived as the result of the evolution of the system from weak to strong coupling.
All the observations now made on the $k\rightarrow 0$ behavior of the acoustic mode can be translated into
statements about the sound speeds
\begin{equation} \label{eq:225}
s^{L,T} = \left[\omega_{+}^{L,T}(k)/k\right]_{k\rightarrow 0}.
\end{equation}
Thus, according to (\ref{eq:221}) and (\ref{eq:225}), the longitudinal sound speed at weak coupling has its RPA value, governed by $\omega_0$; for strong correlations the sound speed is substantially reduced and strong correlations manifest themselves, in contrast to the YOCP, in two ways: first, by morphing the mean field contribution into one whose properties are dictated by the VAA and do not explicitly depend on the correlations and, second, by generating an explicit correlational correction. For the transverse sound speed, similarly to the YOCP, there is no $h$-independent contribution.
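The translation of Eq. (\ref{eq:222}) into sound speeds can be sketched numerically; in the following illustration of ours, the speeds come out in units of $a\omega_1$ when $\bar{\omega}$ is given in units of $\omega_1$, and the $U_{AB}$, $V_{AB}$ values are assumed inputs (in practice obtained from the MD pair correlations):

```python
import math

def sound_speeds(omega_bar, kappa_bar, p, U, V):
    """Longitudinal/transverse sound speeds from the small-k QLCA expansion:
    s_L = omega_bar * sqrt((1 - Ubar)/kappa),  s_T = omega_bar * sqrt(Vbar/kappa),
    with Ubar, Vbar the p-weighted averages of the species-resolved integrals."""
    w = (1.0 + p * p) ** 2
    Ubar = (U[(1, 1)] + 2 * p * p * U[(1, 2)] + p ** 4 * U[(2, 2)]) / w
    Vbar = (V[(1, 1)] + 2 * p * p * V[(1, 2)] + p ** 4 * V[(2, 2)]) / w
    sL = omega_bar * math.sqrt((1.0 - Ubar) / kappa_bar)
    sT = omega_bar * math.sqrt(Vbar / kappa_bar)
    return sL, sT
```

Positive values of the $U_{AB}$ integrals reduce the longitudinal speed below its $h$-independent VAA value, in line with the discussion above.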
We remark parenthetically that it is not clear to what extent the weak coupling value of the sound velocity is well represented by the RPA (or ``cold fluid'') expression. It is generally assumed that it is \cite{HansenJoly,MARLENE}. Nevertheless, the issue is that while for a Coulomb system there exists a clear rigorous derivation (also supported by ample observational evidence) showing that in the $\Gamma \rightarrow 0$ limit the RPA is correct, no such demonstration is currently available for a Yukawa system. In fact, there is reason to believe \cite{GoldenX} that for a finite range system the description of the behavior of the system in the weak coupling limit is more involved. All this, however, has very little bearing on our conclusion that the sound speeds and the low frequency excitations in the strongly coupled system are governed by the frequency of the VAA and thus are quite different from their weak coupling counterparts.
We have studied the $\Gamma$-dependence of the sound speeds and of the related effective masses, the latter being defined by subtracting the explicitly correlation dependent term from the sound speed coefficient
\begin{equation} \label{eq:efm}
\frac{m_{\rm eff}}{m_1} = \frac{\omega_1^2 a^2}{(s^{L})^2}\frac{\langle Z \rangle^2}{\bar{\kappa}}\left[1+\frac{n_2}{n_1}\right](1-U)
\end{equation}
by MD simulations for the parameter set given previously. Results are shown in Figs. \ref{fig:SoundS} and \ref{fig:effmass} for $\Gamma$ values between $\Gamma=5$ and $\Gamma=120$. For $\Gamma$=1 and $\Gamma$=5 sound speed values calculated through the (Vlasov Equation based) RPA approach are also displayed.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure02}
\caption{Liquid state: Longitudinal sound speeds. (\ding{108}) MD, line: QLCA, gray shaded area/line: lattice value. For $\Gamma=1$ and $\Gamma=5$ the RPA (Vlasov equation based) values of the sound speeds are also indicated (\ding{110}). (a) $n_2=n_1/2$; (b) $n_2=n_1$.}
\label{fig:SoundS}
\end{center}
\end{figure}
At the high $\Gamma$ end the QLCA predicted behavior is in excellent agreement with simulation results. As $\Gamma$ approaches the freezing boundary, the sound speeds smoothly join their values in the crystal lattice, which are also given, in anticipation of the results of the next Section. Some further comments on the relationship between the sound speeds in the two domains will be given there. In the liquid, as $\Gamma$ is lowered, the remarkable decrease of the effective mass and the concomitant increase of the sound speed can be observed. At the same time, the QLCA sound speed, in general, stays below the observed value because the QLCA ignores the modification of the effective mass as $\Gamma$ is reduced. It can be noted that even at the relatively low $\Gamma=5$ value the strong coupling behavior still seems to be dominant and the sound speed is much below its calculated RPA value. The behavior of the sound speed below this $\Gamma$ value is not clear: it is a domain that would require substantial theoretical, simulation and experimental work to arrive at a reliable and coherent picture.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure03}
\caption{Liquid state: Effective masses for $Z_1=Z_2$ vs. $\Gamma$. (a,b) $n_2=n_1/2$; (c,d) $n_2=n_1$. The arrows indicate the mass average $\langle m \rangle =(n_1m_1+n_2m_2)/(n_1+n_2)$. The error bars represent 5\% (10\%) uncertainty in the measurement of the MD sound speed for $M=5$ (20).}
\label{fig:effmass}
\end{center}
\end{figure}
\subsection{Mode Dispersion}
Now we turn to the description of the full mode structure in the liquid state. By solving the characteristic equations for the matrices (\ref{eq:Clong}) and (\ref{eq:Ctran}) one obtains the full $\omega(k)$ dispersion for the 4 liquid modes. The results of this calculation are displayed for $n_2/n_1=1$ and $1/2$ density ratios and for the already chosen $Z=Z_2/Z_1=0.7$ and 1.4 (2.0), $M=m_2/m_1=0.2$ and $5.0$ parameter values. The $Z_2/Z_1$ values have been chosen in the vicinity of the stability boundary for the (staggered rectangular and honeycomb) binary lattices.
Our theoretical analysis of the mode structure was accompanied by detailed Molecular Dynamics studies of the dynamical fluctuation spectra of the system, as described below.
In the Molecular Dynamics simulations we trace the trajectories of individual particles as obtained from the integration of their equations of motion:
\begin{equation}
m_i \frac{d {\bf v}_i}{dt}= - \sum_{j \neq i}^N \nabla \phi_{ij},
\label{eq:lmotion}
\end{equation}
where $\phi_{ij}$ is the interaction potential energy ($\propto Z_i Z_j$) between the particles $i$ and $j$, and $m_i$ are the masses of the particles. We use periodic boundary conditions, the edge lengths of the computational box ($L_x$ and $L_y$) and the total number of particles are chosen to accommodate a perfect lattice for the selected density ratios (and expected associated lattice structures). In the case of $n_2 / n_1 = 1$ we use $N_1=N_2=2040$ particles, while in the case of $n_2 / n_1 = 1/2$ we use $N_1=2720$ and $N_2=1360$ particles.
In simulations of liquid-phase systems, random initial particle configurations are normally set up. In all cases ample time is given to the system to reach thermodynamic equilibrium before measurements on the system start. During this equilibration phase, rescaling of the particle velocities is applied to reach the desired system temperature; this procedure is, however, stopped before data collection.
The central quantities to be calculated in the simulations are the fluctuation spectra of the densities and currents. Static pair distribution functions $g_{AB}(r)=1+h_{AB}(r)$ are also obtained and used as input for the QLCA calculations. Information about the (thermally excited) collective modes is obtained from the Fourier analysis of the correlation spectra of the density fluctuations of the different species ($A,B$= 1,2):
\begin{equation}\label{eq:rho}
\rho_A(k,t)= \sum_{j=1}^{N_A} \exp \bigl[ i k x_j(t) \bigr],
\end{equation}
yielding the dynamical structure functions as \cite{HMP75}:
\begin{equation}\label{eq:sp1}
S_{AB}(k,\omega) = \frac{1}{2 \pi \sqrt{N_A N_B}} \lim_{\Delta t \rightarrow \infty}
\frac{1}{\Delta t} \rho_A(k,\omega) \rho^\ast_B(k,\omega) ,
\end{equation}
where $\Delta t$ is the length of the data recording period and $\rho(k,\omega) = {\cal{F}} \bigl[ \rho(k,t) \bigr]$ is the Fourier transform of (\ref{eq:rho}). The $(A,B)$ combinations label spectra related to component 1, $S_{11}$, to component 2, $S_{22}$, as well as to the cross term $S_{12}$.
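The pipeline of Eqs. (\ref{eq:rho}) and (\ref{eq:sp1}) can be illustrated with a minimal Python sketch; this is our own stand-in for the production Fourier analysis, with the overall normalization prefactors of Eq. (\ref{eq:sp1}) dropped since only the peak positions matter for mode identification.

```python
import cmath, math

def density_mode(xs, k):
    # rho_A(k, t): Eq. (eq:rho) evaluated for one snapshot of x coordinates
    return sum(cmath.exp(1j * k * x) for x in xs)

def power_spectrum(signal):
    # |F[rho(k,t)]|^2 via a direct DFT over the recorded time series;
    # bin m corresponds to omega = 2*pi*m/(n*dt)
    n = len(signal)
    spec = []
    for m_idx in range(n):
        f = sum(signal[t] * cmath.exp(-2j * math.pi * m_idx * t / n)
                for t in range(n))
        spec.append(abs(f) ** 2 / n)
    return spec
```

For a test signal in which a particle oscillates at a fixed frequency, the spectrum peaks at the corresponding DFT bin, which is exactly how the thermally excited mode frequencies are read off the simulated $S_{AB}(k,\omega)$.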
Similarly, the spectra of the longitudinal and transverse current fluctuations, $L(k,\omega)$ and $T(k,\omega)$, are obtained from the Fourier analysis of the microscopic quantities, respectively,
\begin{eqnarray} \label{eq:dyn}
\lambda_A(k,t)&=& \sum_{j=1}^{N_A} v_{j x}(t) \exp \bigl[ i k x_j(t) \bigr], \nonumber \\
\tau_A(k,t)&=& \sum_{j=1}^{N_A} v_{j y}(t) \exp \bigl[ i k x_j(t) \bigr],
\end{eqnarray}
where $x_j$ and $v_j$ are the position and velocity of the $j$-th particle. Here we assume that ${\bf k}$ is directed along the $x$ axis. These calculations allow the determination of the spectra for a series of wave numbers, which are multiples of $k_{min,x(y)} = 2 \pi / L_{x(y)}$, where $L_{x(y)}$ is the edge length of the simulation box in the $x$ (or $y$) direction.
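The microscopic currents of Eq. (\ref{eq:dyn}) and the grid of box-commensurate wave numbers amount to the following; this Python fragment is our own illustration, with naming conventions that are not from the paper.

```python
import cmath, math

def current_modes(x, vx, vy, k):
    # lambda_A(k,t) and tau_A(k,t) of Eq. (eq:dyn) for one snapshot,
    # with the wave vector k directed along the x axis
    lam = sum(v * cmath.exp(1j * k * xi) for xi, v in zip(x, vx))
    tau = sum(v * cmath.exp(1j * k * xi) for xi, v in zip(x, vy))
    return lam, tau

def allowed_wavenumbers(L_edge, n_max):
    # wave numbers commensurate with the periodic box:
    # integer multiples of k_min = 2*pi/L
    return [2.0 * math.pi * n / L_edge for n in range(1, n_max + 1)]
```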
The identification of the collective modes is based on the observation of the extrema of $L_{11}$ and $L_{22}$. When the peak positions do not completely coincide (this may happen for various reasons, which will be discussed elsewhere), the position of the stronger peak is accepted.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure04}
\caption{Liquid state: Examples of the $g(r)$ pair distribution functions at $\Gamma=120$.
(a,c,e) $n_2=n_1/2$; (b,d,f) $n_2=n_1$. Note that for $Z_1=Z_2$ we obtain $g_{11}=g_{22}=g_{12}$, irrespective of the density ratios.}
\label{fig:HCPCF}
\end{center}
\end{figure}
The distribution functions $g_{AB}(r)=1+h_{AB}(r)$ used as input in the QLCA calculations are given in Fig.~\ref{fig:HCPCF} for the previously chosen $n_2/n_1$ and $Z_2/Z_1$ values. We have also added the $Z_2/Z_1=1$ distribution functions, in order to show that in this case the three correlation functions $h_{11}$, $h_{12}$, and $h_{22}$ are identical, independently of the density ratios (the mass ratios obviously do not affect the correlation functions).
Some illustrative current fluctuation spectra are given in Fig. \ref{fig:HCL11}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure05}
\caption{Liquid state: Examples of the longitudinal current fluctuation spectra. (a) left column: $n_2=n_1/2$, $Z_2=2 Z_1$; (b) right column: $n_2=n_1$, $Z_2=1.4 Z_1$.}
\label{fig:HCL11}
\end{center}
\end{figure}
The MD simulated mode structures, together with the QLCA calculated dispersion curves are given in Fig. \ref{fig:MDQLCA}. Although the MD spectra are sometimes quite noisy as the collective modes have rather broad peaks in the spectra, the agreement between the simulated and calculated dispersions, in general, is good. A new feature shown by the simulation but not predicted by the QLCA formalism is the merging of a portion of the longitudinal acoustic and longitudinal optic modes at low $m_2/m_1$ values into a new acoustic mode.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure06}
\caption{Liquid state: Current fluctuation spectra from MD simulation (color map) compared with QLCA calculated dispersion (black lines) for $\Gamma=120$. (a) $n_2=n_1/2$; (b) $n_2=n_1$.}
\label{fig:MDQLCA}
\end{center}
\end{figure}
\section{Binary Lattice}
Depending on the $Z$ and $n$ values of the two components, a variety of ordered and disordered phases should exist in a 2D binary crystal. In combination with the different melting temperature associated with the different phases, a rather complex phase diagram can emerge. The stability of the different structures can be analyzed through a thermodynamic approach \cite{LOWEN1,LOWEN2} (minimizing the free energy) or through a dynamical normal mode analysis. In this paper we restrict ourselves to the study of the $T=0$ ($\Gamma \rightarrow \infty$) lattice structures only, which is amenable to the latter approach.
The lattice calculation is based on the evaluation of the lattice sum for the dynamical matrix
\begin{equation}
C_{AB}^{\mu\nu}({\bf k})=-e^2\frac{Z_AZ_B}{\sqrt{m_Am_B}} \left[ \sum_i\Psi^{\mu\nu}({\bf r}_{i,AB})\left(\exp(-i{\bf k}\cdot{\bf r}_{i,AB}) - \delta_{AB}\right) - \delta_{AB} \sum_{C\ne A} \sum_j \frac{Z_C}{Z_A}\Psi^{\mu\nu}({\bf r}_{j,AC})\right]
\end{equation}
over all the particle pairs with designated $A,B$ and $A,C$ indices, which now run over all the bases in the primitive cell (the number of which may be equal to or greater than the number of species, i.e. 2). The evaluation was done for ca. $10^5$ particles.
In the following we will consider two different lattice structures with the previously studied density ratios $n_2/n_1=1$ and $n_2/n_1=1/2$. These two cases provide reasonable guidance as to what lattice mode spectrum to expect in more general situations. In both cases we choose the equilibrium hexagonal lattice as the skeleton Bravais lattice. The descendent crystal structures should be stable in the vicinity of $Z_1=Z_2$. With $Z_1=1$, $Z_2$ is restricted to $Z_m<Z_2<Z_M$. The values of $Z_m$ and $Z_M$ have been determined by finding the onset of unstable normal modes \cite{stability} and are given for both cases in Table \ref{tab:1}. The resulting lattice structures are the following:
\begin{table}[htbp]
\caption{Stability regions.}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline$n_2/n_1$ & $Z_m$ & $Z_M$ \\
\hline
1 & $0.646 \pm 0.001$ & $1.548 \pm 0.002$ \\
1/2 & $0.51 \pm 0.01$ & $2.88 \pm 0.01$ \\
\hline
\end{tabular}
\end{center}
\label{tab:1}
\end{table}
\begin{enumerate}
\item In the equal-density case with $n_1=n_2$ we obtain a staggered rectangular (SR) lattice with the aspect ratio $\sqrt{3}:1$.
\item In the half density case with $n_2=n_1/2$ we obtain a honeycomb (HC) lattice for species 1, while the particles of species 2 occupy the center sites of the honeycomb and form a hexagonal lattice; the lattice constants of the two lattices are in the ratio $\sqrt{3}:1$, see Fig. \ref{fig:lattstruct}.
\end{enumerate}
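The two structures can be encoded as primitive cells with a species-labeled basis. The following Python sketch is our own illustrative encoding (the coordinate conventions are assumptions, not taken from the paper); it reproduces the stated density ratios and the $\sqrt{3}:1$ ratios quoted above.

```python
import math

def sr_cell(a=1.0):
    # Staggered rectangular lattice (n2 = n1): rectangular primitive cell of
    # aspect ratio sqrt(3):1 with one particle of each species.
    cell = (a, math.sqrt(3.0) * a)
    basis = {1: [(0.0, 0.0)],
             2: [(0.5 * a, 0.5 * math.sqrt(3.0) * a)]}
    return cell, basis

def hc_cell(a=1.0):
    # Honeycomb of species 1 (bond length a, two basis points) with species 2
    # on the hexagon centres (one basis point per primitive cell).
    a1 = (math.sqrt(3.0) * a, 0.0)
    a2 = (0.5 * math.sqrt(3.0) * a, 1.5 * a)
    basis = {1: [(0.0, 0.0), (0.0, a)],
             2: [(0.5 * math.sqrt(3.0) * a, 0.5 * a)]}
    return (a1, a2), basis
```

Counting basis points per primitive cell gives $n_2/n_1 = 1$ for the SR case and $n_2/n_1 = 1/2$ for the HC case, and the nearest-neighbor distances of the two HC sublattices come out in the ratio $\sqrt{3}:1$.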
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure07}
\caption{Principal lattice structures: staggered rectangular (a), and honeycomb (b). Primitive cells are shown with dashed lines. In (b) the positions $1^\prime$ and $1^{\prime\prime}$ are distinguished.}
\label{fig:lattstruct}
\end{center}
\end{figure}
The SR structure is built up from 2 bases in the primitive cell, while the HC structure has 3 bases in the primitive cell. According to \cite{LOWEN1,LOWEN2} other possible structures may exist outside the stability domains of Table \ref{tab:1}, such as various rhombic structures, the asymmetric hexagon (also with 3 bases in the primitive cell) and various pentagonal structures (with 3-5 bases in the primitive cell).
The number of modes $r$ in general is $r= d \times b$, where $d$ is the dimensionality and $b$ is the number of bases in the primitive cell. In general, the polarizations of the modes can be characterized only in the combined $r$-dimensional species--configuration space. In specific situations, however, (i) the $r$-dimensional space factorizes into the $b$-dimensional species- and $d$-dimensional configuration sub-spaces; moreover, (ii) longitudinal and transverse polarizations (with respect to $k$) may become the eigen-polarizations in the latter. This occurs when ${\bf k}$ is along one of the principal axes of the crystal. Thus the $L_{+}$, etc. designations remain still meaningful and, by continuity, can be used for the labeling of the modes, with the proviso that since in general, more than one pair of optic modes may exist, a further index, say $\beta = I, II$ may be needed for the full labeling. The HC mode structure consists of 6 modes altogether, out of which 3 are ``longitudinal'' and 3 are ``transverse'' modes. Due to the rotational symmetry of the reciprocal lattice for $k \rightarrow 0$ the $L$ and $T$ optic modes are degenerate at $k=0$. In this limit, one can identify a pair of acoustic and two pairs of degenerate optic (gapped) modes. The SR mode structure consists of 4 (2 longitudinal and 2 transverse) modes. In the absence of rotational symmetry the $L$ and $T$ modes are not degenerate at $k=0$, and the $\omega_{-}^L$ and $\omega_{-}^T$ gaps are separated.
\subsection{Optic modes}
The simple geometric structure of the primitive cell allows one to obtain a transparent result for the $\omega(k\rightarrow 0)$ frequency gaps. The results are given below and portrayed in Fig. \ref{fig:HCsumP}. For the HC at $k=0$ the elements of the $C$-matrix are
\begin{eqnarray}\label{eq:OpC}
\frac{C_{1^\prime 1^\prime}^L(0)}{\omega_1^2}&=&\frac{1}{2\sqrt{2}}\left[\sum_j \Psi^L({\bf r}_{j,1^\prime 1^\prime})+\Psi^L({\bf r}_{j,1^\prime 2})\right]=\frac{C_{1^{\prime\prime} 1^{\prime\prime}}^L(0)}{\omega_1^2} \\
\frac{C_{2 2}^L(0)}{\omega_1^2}&=&\frac{1}{2\sqrt{2}}\frac{Z}{m}\left[\sum_j \Psi^L({\bf r}_{j,21^\prime})+\Psi^L({\bf r}_{j,21^{\prime\prime}})\right] \nonumber \\
\frac{C_{1^\prime 1^{\prime\prime}}^L(0)}{\omega_1^2}&=&-\frac{1}{2\sqrt{2}}\left[\sum_j \Psi^L({\bf r}_{j,1^\prime 1^{\prime\prime}})\right]\nonumber \\
\frac{C_{1^\prime 2}^L(0)}{\omega_1^2}&=&-\frac{Z}{2\sqrt{2m}}\left[\sum_j \Psi^L({\bf r}_{j,1^\prime 2})\right] = \frac{C_{1^{\prime\prime} 2}^L(0)}{\omega_1^2} \nonumber
\end{eqnarray}
By symmetry, all the lattice sums are equal; the rotational symmetry ($L=T$) can be further exploited to obtain
\begin{equation}
P=\frac{1}{2\sqrt{2}}\sum_{j,21^\prime}\exp(-y)\frac{1}{\bar{r}^3}\left(\frac{1}{2}(1+y+y^2)\right)
\end{equation}
in terms of which the roots of the cubic equation are
\begin{eqnarray} \label{eq:PHCgap}
\omega^{L,T}_{-,I}& =& \sqrt{2\left(p^2+q^2\right)P} \\
\omega^{L,T}_{-,II}&=& \sqrt{2\left(1+p^2\right)P}, \nonumber
\end{eqnarray}
Note that $\omega_{-,II}^{L,T}$ is an ``invariant mode'', where the gap frequency is independent of $m_2$; in this mode the light particles oscillate around the inert heavy particle.
For the SR a similar construction yields
\begin{equation}
Q^{L,T}=\frac{1}{2\sqrt{2}}\left[\sum_{j,12}\Psi^{L,T}({\bf r}_j)\right]
\end{equation}
in terms of which
\begin{eqnarray} \label{eq:PSRgap}
\bar{\omega}^L_{-}&=&\sqrt{\left(p^2+q^2\right)Q^L} \\
\bar{\omega}^T_{-}&=&\sqrt{\left(p^2+q^2\right)Q^T}. \nonumber
\end{eqnarray}
Here the $P$, $Q$-s are lattice sums, characteristic of the lattice structure (SR or HC); they depend on $\kappa$ only. They can be contrasted with the $F_{12}$ factor appearing in the gap frequency expression in the liquid (\ref{eq:FFF}), which depends on $Z_2/Z_1$ as well (see Fig. \ref{fig:GapMZ}); for $\Gamma \rightarrow \Gamma_{\rm freeze}$, however, its value in the $n_2=n_1/2$ and $n_2=n_1$ cases approaches reasonably well the corresponding $4P$ (for the HC) and $2(Q^L+Q^T)/2$ (for the SR) values, respectively.
While the gap frequencies are angle independent, the polarizations associated with them are not: Fig. \ref{fig:SRgappol} shows that $T$ and $L$ polarizations switch place as the propagation angle varies from 0 to 90 degrees.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure08}
\caption{SR lattice: polarizations of the gap frequencies versus propagation angle for $Z_2=Z_1$.}
\label{fig:SRgappol}
\end{center}
\end{figure}
Figure \ref{fig:GapMZ} shows the $Z$ and $m$ dependences of the respective gap frequencies in the SR and HC crystal lattices and the corresponding gap frequency in the liquid. Commenting on the HC case first, we note that the liquid has only one frequency gap and therefore there is no equivalent of the invariant mode in the liquid. Turning to the SR lattice, one observes the separation of the longitudinal $\omega_{-}^L$ and transverse $\omega_{-}^T$ gaps. The $\omega_{\rm GAP }$ frequency in the liquid largely follows the angular average of $\omega_1$ and $\omega_2$, but less closely than it does in the case of the YOCP \cite{IEEEH}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure09}
\caption{Mass and charge ratio dependence of the gap frequencies. The QLCA and MD result are also shown. (a) HC lattice; note the portrayal of the ``invariant mode'' in the right ($Z={\rm const.}$) panels. (b) SR lattice.}
\label{fig:GapMZ}
\end{center}
\end{figure}
Figure \ref{fig:HCsumP} shows the dependence of the $P$, $Q$ lattice sums on the screening parameter; the smooth extrapolation to the $\kappa=0$ value provides the input for the calculation of the noteworthy Coulomb gap frequencies via Eqs. (\ref{eq:PHCgap}) and (\ref{eq:PSRgap}). As an aside, we note that the $P(\kappa=0)$ value bears a close relationship to the $M=\sum r^{-3}$ dipole sum over a hexagonal lattice, whose value is well known \cite{GoldenPRB08}: $M/2=0.7985/b^3$ in terms of the Wigner-Seitz radius $b$. Then $P(\kappa=0)=2^{-13/4}(3^{3/2}-1)M/2$.
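The quoted value $M/2=0.7985/b^3$ is easy to verify by brute force. The following Python sketch (ours; the circular cutoff with a continuum tail correction is an assumed, standard acceleration of the slowly convergent sum) reproduces it to better than a percent.

```python
import math

def hex_dipole_sum(n_side=300):
    # Direct evaluation of M = sum(1/r^3) over a hexagonal (triangular)
    # lattice with lattice constant a = 1, origin excluded; a circular
    # cutoff is used and the remainder is added as a continuum integral.
    a1 = (1.0, 0.0)
    a2 = (0.5, 0.5 * math.sqrt(3.0))
    r_cut = 0.45 * n_side
    total = 0.0
    for i in range(-n_side, n_side + 1):
        for j in range(-n_side, n_side + 1):
            if i == 0 and j == 0:
                continue
            x = i * a1[0] + j * a2[0]
            y = i * a1[1] + j * a2[1]
            r = math.hypot(x, y)
            if r <= r_cut:
                total += 1.0 / r**3
    rho = 2.0 / math.sqrt(3.0)            # sites per unit area for a = 1
    total += 2.0 * math.pi * rho / r_cut  # tail: integral of rho*2*pi*r/r^3
    return total

# Express M/2 in units of 1/b^3, with b the Wigner-Seitz radius:
# pi*b^2 = 1/rho  =>  b = sqrt(sqrt(3)/(2*pi)) for a = 1.
b_ws = math.sqrt(math.sqrt(3.0) / (2.0 * math.pi))
m_half = 0.5 * hex_dipole_sum() * b_ws**3
print(m_half)   # close to the quoted 0.7985
```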
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure15}
\caption{The dependence of the lattice sums on the Yukawa screening parameter ($\kappa$). (a) HC lattice; (b) SR lattice.}
\label{fig:HCsumP}
\end{center}
\end{figure}
\subsection{Acoustic modes and sound speed}
In contrast to the optic modes, whose dispersion is highly structure dependent and is, in general, quite different from the corresponding mode dispersion in the liquid, the $k\rightarrow 0$ behavior of the acoustic phonons in the lattice is largely similar to that of their liquid counterpart. More precisely, the sound speeds, as calculated by the QLCA and verified by simulations, go over quite smoothly to the lattice sound speeds as $\Gamma$ crosses the freezing boundary. This is visible in Fig. \ref{fig:SoundS}. The only difference of some significance arises in the case of the SR lattice, due to the fact that its reciprocal lattice space is, in contrast to the HC structure, anisotropic even in the $k\rightarrow 0$ limit. The most important observation, however, is that the notion of the VAA as a dominant feature for the low frequency excitations both in the liquid and in the solid state, is of universal validity.
\subsection{Mode dispersion}
The full calculated lattice phonon dispersion diagrams both for the HC and SR lattices are portrayed in Fig. \ref{fig:HCldisp}. In order to be able to compare the MD results with lattice summation data, simulations were carried out at very low temperatures, at $\Gamma_1 = 10^4$. In these runs the particles are initially arranged in a perfect lattice and their thermal motion does not disrupt the lattice in the course of the simulations. In Fig. \ref{fig:MDlatt} we display the MD simulation results for these finite temperature lattices: the MD simulations and the results of the lattice calculations are in full agreement.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure10}
\caption{Calculated mode dispersions for different propagation angles. (a) HC lattice, note that the invariant mode frequency at $k=0$ (pointed at by the arrows) remains invariant for any $M=m_2/m_1$ and any angle; (b) SR lattice.}
\label{fig:HCldisp}
\end{center}
\end{figure}
The polarizations of the modes in the combined (in general not factorizable) cartesian and species space can be assessed from Figures \ref{fig:HCpol0}, \ref{fig:HCpol90} and \ref{fig:SRpol0}, \ref{fig:SRpol10}, where the components of the eigenvectors for the HC (SR) lattice modes along the 6 (4) eigendirections of the dynamical matrix for a given $k$ are shown. The lengths of the $L_A$ and $T_A$ labeled bars (components of the eigenvectors) are proportional to the longitudinal and transverse displacements of particles at position $A$. Samples are given for propagation angles along and off the principal axes. Note that in the latter case no overall polarization direction can be assigned to the displacement of the particles belonging to different species.
Finally we address the question of how the collective mode dispersion depends on $\kappa$, the screening parameter of the Yukawa potential and, in particular, how the transition to the $\kappa=0$ Coulomb limit occurs. Figure \ref{fig:HCkappa} shows that there is a smooth evolution of the mode dispersions towards the Coulomb limit and towards the changeover of the longitudinal acoustic mode into the characteristic quasiacoustic $\sqrt{k}$ Coulombic behavior. It will be shown in another publication that this behavior is in sharp contrast to what happens in the 3D case \cite{BIM}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure11}
\caption{Current fluctuation spectra from MD simulation at $\Gamma_1=10,000$ (color map) compared with dispersion from lattice calculations (black lines). (a) HC lattice; (b) SR lattice.}
\label{fig:MDlatt}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{HC_pol0deg}
\caption{HC lattice: Mode polarizations for 0 deg propagation. $Z = 0.7$, $M = 5$, $\alpha = 0^{\circ}$ ($k || x$), $ka = 0.2$ in the panels, particle ``2'' is the heavy one (see Fig. \ref{fig:lattstruct}).}
\label{fig:HCpol0}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{HC_pol90deg}
\caption{HC lattice: Mode polarizations for 90 deg propagation. $Z = 0.7$, $M = 5$, $\alpha = 90^{\circ}$ ($k || y$), $ka = 0.5$ in the panels.}
\label{fig:HCpol90}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{SR_pol0deg}
\caption{SR lattice: Mode polarizations for 0 deg propagation. $Z = 0.7$, $M = 5$, $\alpha = 0^{\circ}$ ($k || x$), $ka = 0.2$ in the panels, particle ``2'' belongs to species ``2''. Note the $L/T$ and 1/2 polarization mixings.}
\label{fig:SRpol0}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{SR_pol10deg}
\caption{SR lattice: Mode polarizations for 10 deg propagation. $Z = 0.7$, $M = 5$, $\alpha = 10^{\circ}$, $ka = 0.5$ in the panels, particle ``2'' belongs to species ``2''.}
\label{fig:SRpol10}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{008figure16}
\caption{The dependence of the HC lattice dispersions on the Yukawa screening parameter ($\kappa$) at $m_2=5m_1$ and $Z_2=0.7 Z_1$.}
\label{fig:HCkappa}
\end{center}
\end{figure}
\section{Comparisons and Conclusions}
The results of earlier analyses \cite{DonkoJPC,PRL-Einstein,HartmannJPA06,IEEED,GoldenPRE10,KalmanEPL10} of the collective mode structure of the YOCP have established the close affinity of the mode structures in the strongly coupled liquid and in the crystalline lattice states. More precisely, what has been found is that the QLCA model, which essentially portrays the strongly coupled liquid as a superposition of randomly oriented microcrystals and determines the eigenmodes as those of the averaged crystal, provides an adequate description of wave propagation in the liquid. Whether such a simple picture would prevail in the binary liquid where the non-random distribution of the particles belonging to the two species is an issue as well, is not a priori obvious. The study presented in the previous Sections shows, however, that this is the case. In the following, we discuss the relationship between the phonon dispersion in the binary crystal, as calculated by lattice summation and corroborated by MD simulations, and collective excitations in the binary liquid, as provided by the QLCA description and the MD simulations. Judged by comparison with the results of the MD simulations, the QLCA results are quite reliable, with two exceptions, that we will discuss below. In comparing mode structures in the liquid and in the solid, the effect of the different density ratios has to be kept in mind: while in the former the difference between the $n_2=n_1$ and $n_2=n_1/2$ cases does not make a major difference, in the latter the two different crystal structures (SR and HC) substantially affect the mode structure.
Focusing first on the low-$k$ acoustic excitations, we see that there is almost perfect agreement between the liquid QLCA and MD sound speeds, on the one hand, and the corresponding values in the liquid and in the two crystal structures studied, on the other hand. The only difference of note, as we have already pointed out, arises in the case of the SR lattice, due to its anisotropy, which results in a narrow band of sound speeds; in the liquid, as represented by the QLCA, it is replaced by an angular average. The most important result that emerges from all this is the fact that the low frequency excitations are governed by the oscillation frequency $\omega$ of the Virtual Average Atom (see Eq. \ref{eq:226}), which is created by the average charges and masses of all the components. This effect, as discussed in some detail elsewhere \cite{PRL-2011}, has its most dramatic manifestation in the effective mass entering the nominal plasma frequency of the binary (with respective masses $m_1$ and $m_2$), which in the weakly coupled case is formed, in general, through the ``parallel connection'' of the two masses ($1/m_{\rm eff}=1/m_1+1/m_2$), but which in the strongly coupled case becomes the ``series connection'' of the two masses ($m_{\rm eff}= m_1+m_2$). While the VAA has been a useful heuristic concept for liquid alloys \cite{VoraFMC08} and for disordered systems \cite{PoonPR66,ElliottRMP74,LangerJMP61,LikalterSCCS99}, and also in connection with self-diffusion \cite{HansenJoly}, here we have been able to give a rigorous demonstration through the QLCA of the emergence of this phenomenon. The MD simulation has shown (see Figs. \ref{fig:GapEins} -- \ref{fig:effmass}) that in the $\Gamma \rightarrow \Gamma_{\rm freeze}$ limit the VAA concept becomes ``exact'', in the sense that after the subtraction of the explicitly identifiable pair correlation $h_{12}$ dependent correlational contribution it determines the sound speed.
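The two mass-combination rules, and the VAA mass average indicated earlier, amount to the following (a trivial but convenient Python illustration of ours):

```python
def mass_parallel(m1, m2):
    # weak coupling: "parallel connection" of the two masses
    return 1.0 / (1.0 / m1 + 1.0 / m2)

def mass_series(m1, m2):
    # strong coupling (VAA): "series connection" of the two masses
    return m1 + m2

def vaa_mass(n1, m1, n2, m2):
    # number-density weighted VAA mass average
    return (n1 * m1 + n2 * m2) / (n1 + n2)
```

For $m_2/m_1=5$ the parallel rule gives $5/6$ while the series rule gives $6$, bracketing the density-weighted average; the contrast between the two regimes is thus an order-of-magnitude effect in the effective mass.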
With decreasing $\Gamma$ the $m_{\rm eff}$ decreases, seemingly marching towards the weak coupling limit, but within the boundaries of our MD simulation, which covers only the $\Gamma>5$ domain, the behavior is still essentially strongly coupled, in that the decrease of $m_{\rm eff}$ from its high $\Gamma$ value is quite slow. However, this decrease of $m_{\rm eff}$ in the moderately coupled domain is not reflected by the QLCA model: there $m_{\rm eff}$ preserves its high $\Gamma$ value (Eq. \ref{eq:efm}) for any $\Gamma$. This is the first inadequacy of the QLCA and it is the consequence of the fact that the appearance of the VAA structure is formally correlation independent. Correlational effects appear only indirectly, through the model from which it is derived and which adopts quasilocalization as its basis. That quasilocalization can lead to such a qualitative effect is a novel feature of the approximation, which manifests itself only in binary systems. In contrast, in the single component system, the weakly coupled and strongly coupled states differ through their explicit correlation function dependence only.
A hallmark of the binary system is the emergence of -- one or more -- optic modes with a $k=0$ gap frequency. In the liquid state there is only one gap frequency, corresponding to the two -- longitudinal and transverse -- modes that become degenerate at $k=0$, due to the isotropy of the liquid. In the crystal lattice this degeneracy may or may not be lifted, depending on the local environment: it is lifted in the SR crystal, but it survives in the HC crystal. In addition, in the crystal lattice the number of optic modes increases with the number of particles in the unit cell, which increases in order to accommodate density ratios $n_2/n_1 \neq 1$: hence an additional degenerate gap frequency in the HC crystal. This latter is the ``invariant mode'' whose gap frequency is independent of the mass of the lower density component, which remains inert in this mode. The mode does not have an equivalent in the liquid. The other, ``normal'' mode does re-appear in the liquid, with the gap frequency in the vicinity of the crystal equivalent (for the HC) or between the longitudinal and transverse gaps (in the SR). It should be emphasized though that the approximation of the liquid dispersion by angle averaging the lattice phonons is not equivalent to the QLCA. This difference was already demonstrated for the YOCP: here it is much more pronounced.
The gap frequencies are not related to the VAA. In the liquid they can be expressed in terms of the nominal Einstein frequencies $\bar{\Omega}_{AB}$ (Eq. \ref{eq:Qopt}) and thus they follow the ``parallel connection'' rule.
In the liquid state one can identify two Einstein frequencies, an upper ($\bar{\Omega}_{I}$) and a lower ($\bar{\Omega}_{II}$) one (Eq. \ref{eq:EinF}). For high $k$ values the two ``acoustic'' (longitudinal and transverse) modes of the liquid merge into $\bar{\Omega}_{II}$, while the two optic modes merge into $\bar{\Omega}_{I}$. These latter cannot be directly identified in the crystal lattice, but they appear in the expressions for its gap frequencies, showing a good agreement with the QLCA calculated liquid $\bar{\Omega}_{I}$ and $\bar{\Omega}_{II}$ quantities.
According to the MD simulation result (Fig. \ref{fig:MDQLCA}) the slopes in the vicinity of $k=0$ of the longitudinal acoustic and longitudinal optic modes match and the two modes fuse into a single acoustic mode. There is no indication of this phenomenon within the QLCA formalism.
As to the dependence on the screening constant $\kappa$, we see (Fig. \ref{fig:HCkappa}) that the qualitative features of the dispersion remain unaffected over a wide range of $\kappa$ values, down to and including the $\kappa=0$ Coulomb limit.
A number of problems relating to the collective dynamics of the system have been identified, but have not been studied in this paper: the damping of the modes, the detailed structures and the link between the various fluctuation spectra, the nature of the underlying order in the liquid phase, lattice stability and structures, etc. These problems will have to be investigated in future works.
\begin{acknowledgments}
This work was supported by NSF Grants PHY-0715227, PHY-1105005, PHY-0812956 and the Hungarian Fund for Scientific Research (OTKA) through grants K77653, IN85261, K105476, NN103150.
\end{acknowledgments}
\section{Radio Astrometry and Extra-Solar Planets}
Radio astrometry has long been the gold standard for the definition of
celestial reference frames (Fey et al. 2004, AJ 127, 3587) and has been
used to obtain the most accurate geometric measurements of any
astronomical technique. Astrometric results include measurement of
the deflection of background sources due to the gravitational
fields of the Sun and Jupiter (Fomalont \& Kopeikin 2003, ApJ, 598, 704),
the parallax and proper motion of pulsars at distances greater than
1 kpc (Chatterjee et al. 2005, ApJ, 630, L61), an upper limit
to the proper motion of Sagittarius A* of a few ${\rm\ km\ s^{-1}}$
(Reid \& Brunthaler 2004, ApJ, 616, 872), the rotation of the disk
of M33 (Brunthaler et al. 2005, Science, 307, 1440), and
a $<1\%$ distance to the Taurus star-forming
cluster (Loinard 2006, BAAS, 209, 1080).
The Very Long Baseline Array (VLBA) images nonthermal radio emission
and can routinely achieve 100 ${\rm\ \mu as}$ astrometric accuracy, but has
achieved an accuracy as high as 8 ${\rm\ \mu as}$ under favorable
circumstances (Fomalont \& Kopeikin 2003).
Nonthermal stellar radio emission has been detected from many stellar
types (G\"udel 2002, ARA\&A, 40, 217), including brown dwarfs
(Berger et al. 2001, Nat, 410, 338),
proto-stars (Bower et al. 2003, ApJ, 598, 1140), massive
stars with winds (Dougherty et al. 2005, ApJ, 623, 447),
and late-type stars
(Berger et al. 2006, ApJ, 648, 629).
Only late-type
stars are sufficiently bright, numerous, nearby, and low mass to
provide a large sample of stars suitable for large-scale astrometric
exoplanet searches. Radio astrometric searches can determine whether
or not M dwarfs, the {\em largest stellar constituent of the Galaxy}, are
surrounded by planetary systems as frequently as FGK stars and if the
planet mass-period relation varies with stellar type. The population
of gas giants at a few AU around low mass stars is an important
discriminant between planet formation models.
Radio astrometric searches have a number of unique qualities:
\begin{itemize}
\item Opportunity to discover planets around nearby active M dwarfs at
large radii;
\item Ability to fully characterize orbits of detected planets,
without degeneracies in mass, inclination, and longitude of ascending
node;
\item Sensitivity to long-period planets with sub-Jovian masses
currently and Earth masses ultimately;
\item Complementary with existing planet searching techniques: most
targets cannot be explored through other methods;
\item Ability to follow-up detected planets with imaging and
spectroscopy; and,
\item Absolute astrometric positions within the radio reference frame
for stars and planets.
\end{itemize}
The quality and uniqueness of radio astrometry for planet searches are
the result of two factors:
$\bullet$ {\bf High precision of radio astrometry:} The VLBA
can routinely achieve 100 ${\rm\ \mu as}$ accuracy through
relative astrometry. This precision is
an order of magnitude better than obtained from laser-guide
star adaptive optics (e.g., Pravdo et al. 2005).
Future instruments will have one to two orders of magnitude
more accurate astrometry, comparable to the best accuracy
achievable with the proposed SIM spacecraft.
$\bullet$ {\bf Active stars are difficult to study in optical
programs:} Our target stars are active M dwarfs,
which have radio fluxes on the order of 1 mJy. These radio stars
are difficult to study through optical radial velocity
techniques because they are faint and because the activity in
these stars distorts line profiles, reducing the accuracy of
radial velocity measurements.
We give a sketch of the parameter space for RIPL, future radio
astrometric searches, the Space Interferometry Mission, radial
velocity searches, and coronagraphic searches in
Figure~\ref{fig:pspace}. A comparison of the radial velocity and
astrometric amplitudes indicates that astrometric techniques are
favored over radial velocity techniques for long period ($\gsim 1$
year) planets for these faint objects, for an astrometric accuracy of
$\sim 100 {\rm\ \mu as}$ (Ford 2006, PASP, 118, 364).
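The relative scaling behind this comparison can be illustrated with standard circular-orbit estimates; the following Python sketch is ours (textbook approximations, not the actual curves of Figure~\ref{fig:pspace}): the astrometric signal grows linearly with semi-major axis while the radial-velocity semi-amplitude falls off as $a^{-1/2}$, which is why astrometry wins for long-period planets.

```python
import math

MJ_OVER_MSUN = 9.546e-4    # Jupiter-to-Sun mass ratio
V_EARTH_MS = 29.78e3       # Earth's orbital speed around the Sun, m/s

def astrom_amp_uas(a_au, d_pc, mp_mj, mstar_msun):
    # Angular semi-amplitude of the stellar wobble in micro-arcsec;
    # a[AU]/D[pc] is the angle in arcsec.
    return (a_au / d_pc) * (mp_mj * MJ_OVER_MSUN / mstar_msun) * 1e6

def rv_amp_ms(a_au, mp_mj, mstar_msun):
    # Circular, edge-on radial-velocity semi-amplitude in m/s:
    # the stellar orbital speed scaled down by the planet/star mass ratio.
    v_orbit = V_EARTH_MS * math.sqrt(mstar_msun / a_au)
    return v_orbit * (mp_mj * MJ_OVER_MSUN / mstar_msun)
```

For a Jupiter-mass planet at 1 AU around a $0.2\,M_\sun$ dwarf at 5 pc, the wobble semi-amplitude is of order 1 mas, well above a 100 $\mu$as noise floor, while the RV semi-amplitude is a few tens of m/s.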
\begin{figure}[tb]
\includegraphics[width=0.75\textwidth]{RIPL+eVLBA_sensitivity_b.eps}
\caption{Sensitivity in planet mass and semi-major axis
space for radio astrometric surveys and other methods.
``Exp. VLBA'' refers to the upgraded VLBA described in \S~\ref{sec:VLBA}.
The semi-major axis at the
minimum in the astrometric search curves is determined by the search duration,
which is 3 years for RIPL and the Exp. VLBA campaign.}
\label{fig:pspace}
\end{figure}
In Section 2, we describe the sensitivity and methods of radio astrometry.
In Section 3, we describe a new program with the VLBA and the Green Bank 100m
telescope to search for planets around nearby M dwarfs. In Section 4, we
demonstrate that a bandwidth upgrade for the VLBA will increase astrometric
accuracy or stellar sample sizes by an order of magnitude. In Section 5,
we discuss the role that the Square Kilometer Array can play with its
three-order-of-magnitude increase in sensitivity over the VLBA.
\section{Radio Astrometry Sensitivity and Methods}
Astrometric exoplanet searches must be able to detect an astrometric
signal that has an amplitude of
\begin{equation}
\theta = 2\,\frac{a}{D}\,\frac{M_{p}}{M_{*}} =
1400 {\rm\ \mu as}\ \frac{a}{1\,{\rm AU}}\ \frac{5 {\rm\ pc}}{D}\ \frac{M_p}{M_J}\ \frac{0.2\,M_\sun}{M_*},
\end{equation}
for a planet of mass $M_p$ orbiting a star of mass $M_*$ with a
semimajor axis $a$ at a distance $D$ from the Sun (a mass of 0.2
$M_\sun$ corresponds to a M5 dwarf). To robustly detect a planet,
observations must span at least a significant fraction of a period
\begin{equation}
T = 2.2 {\rm\ yr}\ \left(\frac{a}{1\,{\rm AU}}\right)^{3/2} \left(\frac{0.2\,M_\sun}{M_*}\right)^{1/2}.
\end{equation}
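As a quick numerical check of the two relations above, the following sketch (ours; the normalizations of $1400 {\rm\ \mu as}$ and 2.2 yr are taken directly from the equations) evaluates the signature and period for a Jupiter analog around an M5 dwarf at 5 pc:

```python
# Worked example (ours) of the astrometric-signature and period scaling
# relations above; the normalizations (1400 microarcsec, 2.2 yr) come
# directly from the equations in the text.

def astrometric_signature_uas(a_au, d_pc, mp_mj, mstar_msun):
    """Peak-to-peak astrometric signal in microarcseconds."""
    return 1400.0 * a_au * (5.0 / d_pc) * mp_mj * (0.2 / mstar_msun)

def orbital_period_yr(a_au, mstar_msun):
    """Orbital period in years (Kepler's third law, scaled as in the text)."""
    return 2.2 * a_au ** 1.5 * (0.2 / mstar_msun) ** 0.5

# Jupiter analog (1 M_J at 1 AU) around a 0.2 M_sun M5 dwarf at 5 pc:
theta = astrometric_signature_uas(1.0, 5.0, 1.0, 0.2)   # 1400 uas
period = orbital_period_yr(1.0, 0.2)                    # 2.2 yr
```

A 3-year program therefore spans a full orbit for semi-major axes out to a little over 1 AU for such a star.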
The ultimate accuracy that can be obtained through
a radio astrometric technique is
\begin{equation}
\sigma_{ast}= \sigma_{beam} / {\rm SNR},
\end{equation}
where $\sigma_{beam}=\lambda/b$ is the synthesized beam size for an
array with maximum baseline $b$, $\lambda$ is the observing
wavelength, and SNR is the signal to noise ratio of the target source
detection. For the VLBA $\sigma_{ast}\approx500 {\rm\ \mu as}/{\rm SNR}$.
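As an illustration of this relation (a sketch, ours), reaching the routine $\sim 100 {\rm\ \mu as}$ relative accuracy quoted earlier requires only a modest detection SNR on the target:

```python
# Sketch (ours) of the accuracy relation sigma_ast = sigma_beam / SNR,
# using the ~500 microarcsec VLBA beam figure quoted in the text.

def required_snr(sigma_beam_uas, sigma_ast_uas):
    """Detection SNR needed to reach a given astrometric accuracy."""
    return sigma_beam_uas / sigma_ast_uas

snr = required_snr(500.0, 100.0)   # SNR of 5 suffices for 100 uas accuracy
```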
The astrometric position is defined relative to nearby ($\sim 1^\circ$)
compact radio sources. Typical observations include switching on
minute timescales between the calibrator and the target sources,
with less frequent observations of secondary calibrators.
The use of multiple calibrators is intended to determine the differential
delay in position on the sky due to varying path length from tropospheric
water vapor. The extent to which this cannot be calibrated sets the
final astrometric accuracy in observations that are not SNR-limited.
The proximity of the
calibrators and the sensitivity at which they can be detected typically
determine this error. The error decreases linearly with decreasing
calibrator-target separation. The increased sensitivity of future arrays will
increase the calibrator density and therefore decrease the typical separation
from calibrator to target and the uncalibrated astrometric error.
For sufficiently small target-calibrator separation, the calibrator will
be in the primary beam of the antenna, enabling simultaneous
observations of the target and calibrator that also remove temporal dependence
of tropospheric variations.
\section{RIPL: Radio Interferometric Planet Search}
RIPL is a 1400-hour, 3-year VLBA and GBT program to search
for planets around 29 nearby, low-mass, active stars. The program will
achieve sub-Jovian planet mass sensitivity. The observing program will
be completed in 2009.
The most serious limitation to astrometric accuracy may be from
stellar activity that jitters the apparent stellar position. Most
evidence, however, indicates that this radio astrometric
jitter is small. For instance,
White, Lim and Kundu (1994, ApJ, 422, 293)
model the radio emission from dMe stars as originating
within $\sim 1$ stellar radius of the photosphere. At a distance of
10 pc, the radius of an M5 dwarf subtends $\sim 100$ $\mu$as, roughly
an order of magnitude smaller than the astrometric signature of a
Jupiter analog. We conducted the VLBA Precursor Astrometric Survey
(VPAS) in Spring 2006 to assess the effect of stellar jitter on
astrometric accuracy (Bower et al. 2007, in prep.).
\begin{figure}[tb]
\center\mbox{\includegraphics[width=0.25\textwidth,angle=-90]{GJ4247_B2.PS}\includegraphics[width=0.25\textwidth,angle=-90]{GJ4247_C2.PS}\includegraphics[width=0.25\textwidth,angle=-90]{GJ4247_A2.PS}}
\caption[]{Images of GJ4247 in three separate epochs on 23, 25, and
26 March 2006 (right to left) from the VLBA Precursor Astrometric Survey.
Contour levels are -3, 3, 4, 5, 6, 7, 8 times the rms noise of 95 $\mu$Jy.
The synthesized beam is shown in the lower left hand corner of each image.
\label{fig:motion}}
\end{figure}
For each star, three VLBA epochs were spread over fewer than 10 days.
Seven stars were detected in at least one epoch and four were detected
in all three epochs (Figure~\ref{fig:motion}).
All stars have proper motions and parallaxes determined by Hipparcos
or other optical methods with a precision of a few mas per year,
yielding predicted relative positions accurate to $\sim 100 {\rm\ \mu as}$ during
the length of the study. For all stars detected with multiple epochs,
the motions match the results of Hipparcos astrometry well, with rms in
each coordinate ranging from 0.08 to $0.26 {\rm\ mas}$.
Deviations in the positions appear to be limited by our
sensitivity; i.e., the effect of stellar
activity on their positions is unimportant.
{\em In fact, the {\bf small} differences in the fitted proper motion
and the Hipparcos proper motion already eliminate brown dwarfs as
companions to these objects (Figure~\ref{fig:accel}).} The measured
differences are consistent with noise in the VLBA astrometry ($200 {\rm\ \mu as}
/ 3\,{\rm day} \sim 20$ mas/yr). The typical reflex motion due to
a long period brown dwarf is $\sim 100$ mas/yr, which would be
apparent. The much longer time baseline and better sensitivity of
RIPL will reduce proper motion errors by $\sim 2$ orders of magnitude.
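The arithmetic behind the quoted noise estimate can be sketched as follows (ours):

```python
# Quick check (ours) of the noise estimate quoted above: a 200 microarcsec
# (0.2 mas) positional scatter over the ~3-day span of the precursor survey
# corresponds to a proper-motion uncertainty of ~24 mas/yr, i.e. the quoted
# ~20 mas/yr, which a ~100 mas/yr brown-dwarf reflex motion clearly exceeds.

def pm_noise_mas_per_yr(scatter_mas, baseline_days):
    return scatter_mas / (baseline_days / 365.25)

pm_noise = pm_noise_mas_per_yr(0.2, 3.0)   # ~24 mas/yr
```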
\begin{figure}[tb]
\includegraphics[width=\textwidth]{acceleration_mp_r.eps}
\caption[]{Region of planetary mass and semi-major axis phase-space rejected by
acceleration upper limits based on combination of 3 epochs of radio astrometric
measurements and optical astrometry, primarily from Hipparcos. Different
contours indicate confidence intervals for excluded regions.
\label{fig:accel}}
\end{figure}
\subsection{Synergy with other Planet Searches}
{\em RIPL\, is synergistic with existing and future planet-search
programs, including current ground-based searches using
radial velocities, transits, adaptive optics, and interferometry.}
RIPL\, provides an opportunity to search for planetary systems in a
unique area of parameter space that will not be targeted by other planet
searches until the launch of NASA SIM-PlanetQuest.
Ground based transit searches are most sensitive for very short
periods ($P \sim$ days), and the Kepler mission aims to detect planets
with orbital periods of slightly more than a year. Thus, RIPL\, will
make a valuable contribution to our understanding of the frequency of
long-period planets around M stars. Further, unlike transits and
radial velocity observations, astrometric measurements directly measure
the planet mass, which is important for testing models of planet
formation. While the unknown inclination is less of an issue for
studying large samples of planets, measuring individual inclinations
will be particularly valuable for planets around M dwarfs, since a
relatively modest number of M dwarfs are being surveyed by RIPL ($\sim
30$ vs $\sim 3000$ stars by radial velocities).
Ground-based optical and near-infrared interferometers (e.g., PTI,
NPOI) require bright stars and are not appropriate for faint low-mass
stars. The RIPL\, astrometric accuracy is an order of magnitude
better than the astrometric error from Keck Laser Guide Star Adaptive
Optics astrometry (Pravdo et al. 2005, ApJ, 630, 528).
{\em Thus, RIPL\, is the best
means for an astrometric search of M dwarfs until SIM launches} (now
estimated for no earlier than 2016).
A long-period planet detected by RIPL\, would enable exciting
scientific investigations such as photometric and spectroscopic
observations to determine the planet's physical properties. While
space based missions such as TPF-C and TPF-I are expected to be
extremely powerful and aim to directly detect terrestrial mass
planets, these missions are not expected to launch for at least another
decade. Knowing which stars have giant planets suitable
for direct imaging would enable direct probes of an extrasolar planet.
\section{VLBA Upgrade and Planet Detection}
\label{sec:VLBA}
The VLBA is presently being upgraded from a typical data rate of 256
Mbit/s to 4 Gbit/s, with project completion estimated by 2010. This
will result in a sensitivity increase by a factor of 4, or about a
factor of 8 increase in areal density of reference sources on the sky.
Thus, the typical distance between a target star and its nearest
reference source will decrease by a factor of $\sim 3$. A few years
later we expect a data rate of 16 Gbit/s, yielding a target-calibrator
separation more than 10 times smaller than current values. Since in
the limit of infinite SNR the astrometric error depends linearly on
the separation from the reference source, relative astrometric errors
of $\lesssim 10$~$\mu$as should be fairly routine; in principle, this
would permit detection of a planet with a mass of less than 10\% of
the mass of Jupiter. The sensitivity increase afforded by these
upgrades will also permit a sizable increase of the late-type dwarf
sample.
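The chain of scalings in this section can be sketched as follows. Two assumptions (ours, not stated explicitly in the text) are that thermal sensitivity scales as the square root of the data rate and that source counts are Euclidean, $N(>S) \propto S^{-3/2}$, which together reproduce the quoted factors of 4 in sensitivity, 8 in reference-source density, and $\sim 3$ in target-calibrator separation:

```python
import math

# Sketch (ours) of the bandwidth-upgrade scalings quoted in the text.
# Assumptions: sensitivity ~ sqrt(data rate); Euclidean source counts
# N(>S) ~ S**-1.5, so calibrator areal density grows as sensitivity**1.5
# and typical target-calibrator separation shrinks as density**-0.5.

def sensitivity_gain(rate_new, rate_old):
    return math.sqrt(rate_new / rate_old)

def density_gain(sens_gain):
    return sens_gain ** 1.5

def separation_shrink_factor(dens_gain):
    return dens_gain ** 0.5

s = sensitivity_gain(4096.0, 256.0)      # factor of 4 in sensitivity
d = density_gain(s)                      # factor of 8 in source density
sep = separation_shrink_factor(d)        # separation ~2.8x (~3x) smaller
```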
\section{Square Kilometer Array}
The Square Kilometer Array (SKA; Carilli \& Rawlings 2004, New AR, 48,
979) is a proposed future radio telescope
that would have a collecting area of a square kilometer, approximately
200 times the collecting area of the VLBA. The SKA would be built
toward the end of the next decade; it is planned to cover the
frequency range from 0.1 to 25~GHz, with the 5--10~GHz range being
most useful for astrometric planet detection. If 25\% of the SKA area
at $\sim 8$~GHz is constructed on baselines of 1000--5000~km, it will
supply revolutionary astrometric accuracy (Fomalont \& Reid 2004, New
AR, 48, 1473). With dish antennas of 12m diameter, the combination of
sensitivity and wide field of view will often enable many astrometric
reference sources to be found in the same antenna field of view as the
target star, allowing all temporal variations in Earth's atmosphere to
be removed. In such a case, the relative astrometric accuracy may
reach $\sim 1$~$\mu$as, competitive with SIM and enabling astrometric
detection of Earth-mass planets.
The sensitivity of the SKA will enable astrometric detection of
thermal emission from stars.
The Sun, for instance, would be detectable to a distance of 10 pc with
the SKA. Thus, the SKA will be capable of detecting and characterizing
planets around Sun-like stars.
\end{document}
\section{Introduction}\label{sec:introduction}
Opinion Mining and Sentiment Analysis (OMSA) are crucial for determining opinion trends and attitudes about commercial products, for companies' reputation management and brand monitoring, or for tracking attitudes by mining social media. Furthermore, given the explosion of information produced and shared via the Internet, especially in social media, it is simply not possible to keep up with the constant flow of new information by manual methods.
Early approaches to OMSA were based on document classification, where the task was to determine the polarity (positive, negative, neutral) of a given document or review \citep{pang_opinion_2008,liu_sentiment_2012}. A well known benchmark for polarity classification at document level is that of \cite{Pangetal:2002}. Later on, a finer-grained OMSA was deemed necessary. This was motivated by the fact that in a given review more than one opinion about a variety of aspects or attributes of a given product is usually conveyed. Thus, Aspect Based Sentiment Analysis (ABSA) was defined as a task which consisted of identifying several components of a given opinion: the opinion holder, the target, the opinion expression (the textual expression conveying polarity) and the aspects or features. Aspects are mostly domain-dependent. In restaurant reviews, relevant aspects would include ``food quality'', ``price'', ``service'', ``restaurant ambience'', etc. Similarly, if the reviews were about consumer electronics such as laptops, then aspects would include ``size'', ``battery life'', ``hard drive capacity'', etc.
In the review shown in Figure \ref{fig:absaexample} there are three different opinions about two different aspects (categories) of the restaurant, namely, the first two opinions are about the quality of the food and the third one about the general ambience of the place. Furthermore, there are just two opinion targets because the target of the third opinion, the restaurant itself, remains implicit. Finally, each aspect is assigned a polarity; in this case all three opinion aspects are negative.
\begin{figure}[ht]\centering
\begin{lstlisting}[language=XML]
<sentence id="1016296:4">
<text>Chow fun was dry; pork shu mai was more than usually greasy and had to share a table with loud and rude family</text>
<Opinions>
<Opinion target="Chow fun" category="FOOD#QUALITY" polarity="negative" from="0" to="8"/>
<Opinion target="pork shu mai" category="FOOD#QUALITY" polarity="negative" from="18" to="30"/>
<Opinion target="NULL" category="AMBIENCE#GENERAL" polarity="negative" from="0" to="0"/>
</Opinions>
</sentence>
\end{lstlisting}
\caption{Aspect Based Sentiment Analysis example.}
\label{fig:absaexample}
\end{figure}
In this work we focus on Opinion Target Extraction, which we model as a sequence labelling task. In order to do so, we convert an annotated review such as the one in Figure \ref{fig:absaexample} into the BIO scheme for learning sequence labelling models \citep{tjong_kim_sang_introduction_2002}. Example (1) shows the review in BIO format. Tokens in the review are tagged depending on whether they are at the beginning (B-target), inside (I-target) or outside (O) of the opinion target expression. Note that the third opinion target in Figure \ref{fig:absaexample} is implicit.
\begin{enumerate}
\item[(1)] \textbf{Chow/B-target fun/I-target} was/O dry/O; \textbf{pork/B-target shu/I-target mai/I-target} was/O more/O than/O usually/O greasy/O and/O had/O to/O share/O a/O table/O with/O loud/O and/O rude/O family/O.
\end{enumerate}
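The conversion from the character-offset annotations of Figure \ref{fig:absaexample} to the BIO tags of Example (1) can be sketched as follows (an illustrative simplification, ours, using whitespace tokenization; the actual pre-processing relies on a proper tokenizer):

```python
# Illustrative sketch (ours) of converting character-offset opinion-target
# annotations into BIO tags over a whitespace tokenization.

def to_bio(text, spans):
    """spans: list of (from, to) character offsets of opinion targets."""
    tagged = []
    pos = 0
    for token in text.split():
        start = text.index(token, pos)   # character offset of this token
        pos = start + len(token)
        tag = "O"
        for s, e in spans:
            if start == s:               # token opens a target span
                tag = "B-target"
            elif s < start < e:          # token continues a target span
                tag = "I-target"
        tagged.append((token, tag))
    return tagged

text = "Chow fun was dry; pork shu mai was more than usually greasy"
tagged = to_bio(text, [(0, 8), (18, 30)])   # offsets as in the XML above
# tagged[0:2] -> [('Chow', 'B-target'), ('fun', 'I-target')]
```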
We learn language independent models which consist of a set of local, shallow features complemented with semantic distributional features based on clusters obtained from a variety of data sources. We show that our approach, despite the lack of hand-engineered, language-specific features, obtains state-of-the-art results in 7 datasets for 6 languages on the ABSA benchmarks \citep{pontiki-EtAl:2014:SemEval,pontiki-EtAl:2015:SemEval,pontiki-EtAl:2016:SemEval}.
The main contribution of this research note is providing an extension or addendum to previous work on sequence labelling \citep{agerri2016robust} by reporting additional experimental results as well as further insights on the performance of our model across languages on a different NLP task such as Opinion Target Extraction (OTE). Thus, we empirically demonstrate the validity and strong performance of our approach for six languages in seven different datasets of the restaurant domain. Every experiment and result presented in this note is novel.
In this sense, we show that our approach is not only competitive across languages and domains for Named Entity Recognition, as shown by \cite{agerri2016robust}, but that it can be straightforwardly adapted to different tasks and domains such as OTE. Furthermore, we release the system and every model trained for public use and to facilitate reproducibility of results.
\section{Background}\label{sec:background}
Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised and deep learning models. To the best of our knowledge, the first work on OTE was published by \cite{hu_mining_2004}. They created a new task which consisted of generating overviews of the main product features from a collection of customer reviews on consumer electronics. They addressed such task using an unsupervised algorithm based on association mining. Other early unsupervised approaches include \cite{popescu_extracting_2005} which used a dependency parser to obtain more opinion targets, and \cite{kim2006extracting} which aimed at extracting opinion targets in newswire via Semantic Role Labelling. From a supervised perspective, \cite{zhuang2006movie} presented an approach which learned the opinion target candidates and a combination of dependency and part-of-speech (POS) paths connecting such pairs. Their results improved the baseline provided by \cite{hu_mining_2004}. Another influential work was \cite{qiu2011opinion}, an unsupervised algorithm called Double Propagation which roughly consists of incrementally augmenting a set of seeds via dependency parsing.
Closer to our work, \cite{jin2009novel}, \cite{li2010structure} and \cite{jakob2010extracting} approached OTE as a sequence labelling task, modelling the opinion targets using the BIO scheme. The first approach implemented HMMs whereas the last two proposed CRFs to solve the problem. In all three cases, their systems included extensive human-designed and linguistically motivated features, such as POS tags, lemmas, dependencies, constituent parsing structure, lexical patterns and semantic features extracted from WordNet \citep{Wordnet:1998}.
Quite frequently these works used a third party dataset, or a subset of the original one, or created their own annotated data for their experiments. The result was that it was difficult to draw precise conclusions about the advantages or disadvantages of the proposed methods. In this context, the Aspect Based Sentiment Analysis (ABSA) tasks at SemEval \citep{pontiki-EtAl:2014:SemEval,pontiki-EtAl:2015:SemEval,pontiki-EtAl:2016:SemEval} provided standard training and evaluation data thereby helping to establish a clear benchmark for the OTE task.
Finally, it should be noted that there is a closely related task, namely, the SemEval 2016 task on Stance Detection\footnote{\url{http://alt.qcri.org/semeval2016/task6/}}. Stance detection is related to ABSA, but there is a significant difference. In ABSA the task is to determine whether a piece of text is positive, negative, or neutral with respect to an aspect and a given target (which in Stance Detection is called ``author's favorability'' towards a given target). However, in Stance Detection the text may express opinion or sentiment about some other target, not mentioned in the given text, and the targets are predefined, whereas in ABSA the targets are open-ended.
\subsection{ABSA Tasks at SemEval}\label{sec:absa-at-semeval}
Three ABSA editions were held within the SemEval Evaluation Exercises between 2014 and 2016. The ABSA 2014 and 2015 tasks consisted of English reviews only, whereas in the 2016 task 7 more languages were added. Additionally, reviews from four domains were collected for the various sub-tasks across the three editions, namely, Consumer Electronics, Telecommunications, Museums and Restaurant reviews. In any case, the only constant in each of the ABSA editions was the inclusion, for the Opinion Target Extraction (OTE) sub-task, of restaurant reviews for every language. Thus, for the experiments presented in this paper we decided to focus on the restaurant domain across 6 languages and the three different ABSA editions. Similarly, this section will be focused on reviewing the OTE results for the restaurant domain.
The ABSA task consisted of identifying, for each opinion, the opinion target, the aspect referred to by the opinion and the aspect's polarity. Figure \ref{fig:absaexample} illustrates the original annotation of a restaurant review in the ABSA 2016 dataset. It should be noted that, out of the three opinion components, only the targets are explicitly represented in the text, which means that OTE can be independently modelled as a sequence labelling problem as shown by Example (1). It is particularly important to notice that the opinion expressions (``dry'', ``greasy'', ``loud and rude'') are not annotated.
Following previous approaches, the first competitive systems for OTE at ABSA were supervised. Among the participants (for English) in the three editions, one team \citep{toh2014dlirec,S15-2083} was particularly successful. For ABSA 2014 and 2015 they developed a CRF system with extensive handcrafted linguistic features: POS, head word, dependency relations, WordNet relations, gazetteers and Name Lists based on applying the Double Propagation algorithm \citep{qiu2011opinion} on an initial list of 551 seeds. Interestingly, they also introduced word representation features based on Brown and K-means clusters. For ABSA 2016, they improved their system by using the output of a Recurrent Neural Network (RNN) to provide additional features. The RNN is trained on the following input features: word embeddings, Name Lists and word clusters \citep{toh2016nlangp}. They were the best system in 2014 and 2016. In 2015 they obtained the second best result; the best system that year, a preliminary version of the one presented in this note, was submitted by the EliXa team \citep{sanvicente-saralegi-agerri:2015:SemEval}.
From 2015 onwards most works have been based on deep learning. \cite{D15-1168} applied RNNs on top of a variety of pre-trained word embeddings, while \cite{cimianoote} presented an architecture in which an RNN-based tagger is stacked on top of the features generated by a Convolutional Neural Network (CNN). These systems were evaluated on the 2014 and 2015 datasets, respectively, but they did not go beyond the state-of-the-art.
\cite{poria2016aspect} presented a 7-layer deep CNN combining word embeddings trained on a $\sim$5 billion word corpus extracted from Amazon \citep{mcauley2013hidden}, POS tag features and manually developed linguistic patterns based on syntactic analysis and SenticNet \citep{cambria2014senticnet}, a concept-level knowledge base built for Sentiment Analysis applications. They only evaluate their system on the English 2014 ABSA data, obtaining the best results to date on that benchmark.
More recently, \cite{wang2017coupled} proposed a coupled multi-layer attention (CMLA) network where each layer consists of a couple of attentions with tensor operators. Unlike previous approaches, their system does not use complex linguistic-based features designed for one specific language. However, whereas previous successful approaches modelled OTE as an independent task, in the CMLA model the attentions interactively learn both the opinion targets and the opinion expressions. As opinion expressions are not available in the original ABSA datasets, they had to manually annotate the ABSA training and testing data with the required opinion expressions. Although \cite{wang2017coupled} did not release the datasets with the annotated opinion expressions, Figure \ref{fig:absaexamplepolarity} illustrates what these annotations would look like. Thus, two new attributes (\texttt{pfrom} and \texttt{pto}) annotate the opinion expressions for each of the three opinions (``dry'', ``greasy'' and ``loud and rude'', respectively). Using this new manual information to train their CMLA network they reported the best results so far for ABSA 2014 and 2015 (English only).
\begin{figure}[ht]\centering
\begin{lstlisting}[language=XML]
<sentence id="1016296:4">
<text>Chow fun was dry; pork shu mai was more than usually greasy and had to share a table with loud and rude family</text>
<Opinions>
<Opinion target="Chow fun" category="FOOD#QUALITY" polarity="negative" from="0" to="8" pfrom="13" pto="16"/>
<Opinion target="pork shu mai" category="FOOD#QUALITY" polarity="negative" from="18" to="30" pfrom="53" pto="59"/>
<Opinion target="NULL" category="AMBIENCE#GENERAL" polarity="negative" from="0" to="0" pfrom="90" pto="103"/>
</Opinions>
</sentence>
\end{lstlisting}
\caption{Adding opinion expression annotations to Example (1) in the ABSA 2016 training set.}
\label{fig:absaexamplepolarity}
\end{figure}
Finally, \cite{li2017deep} develop a multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations. Like \cite{wang2017coupled}, they use opinion expression annotations for joint modelling of opinion targets and expressions. However, unlike \cite{wang2017coupled}, they do not manually annotate the opinion expressions. Instead, they manually add sentiment lexicons and rules based on dependency parsing in order to find the opinion words required to train their system. Using this hand-engineered system, they report state-of-the-art results only for English on the ABSA 2016 dataset. They do not provide evaluation results on the 2014 and 2015 restaurant datasets.
With respect to other languages, the IIT-T team presented systems for 4 out of the 7 languages in ABSA 2016, obtaining the best scores for French and Dutch, the second best for Spanish, but very poor results for English, well below the baseline. The GTI team \citep{S16-1049} implemented a CRF system using POS tags, lemmas and bigrams as features. They obtained the best result for Spanish and rather modest results for English.
Summarizing, the most successful systems for OTE have been based on supervised approaches with rather elaborate, complex and linguistically inspired features. \cite{poria2016aspect} obtains best results on the ABSA 2014 data by means of a CNN with word embeddings trained on 5 billion words from Amazon, POS features, manual patterns based on syntactic analysis and SenticNet. More recently, the CMLA deep learning model has established new state-of-the-art results for the 2015 dataset, whereas \cite{li2017deep} provide the state of the art for the 2016 benchmark.
Thus, there is not currently a multilingual system that obtains competitive results across (at least) several of the languages included in ABSA.
As usual, most of the work has been done for English, with the large majority of the previous systems providing results only for one of the three English ABSA editions and without exploring the multilingual aspect. This could be due to the complex and language-specific systems that performed best for English \citep{poria2016aspect}, or perhaps because the CMLA approach of \cite{wang2017coupled} would require, in addition to the opinion targets, the gold standard annotations of the opinion expressions for each of the 6 languages other than English in the ABSA datasets.
\section{Methodology}\label{sec:methodology}
The work presented in this research note requires the following resources: (i) Aspect Based Sentiment Analysis (ABSA) data for training and testing; (ii) large unlabelled corpora to obtain semantic distributional features from clustering lexicons; and (iii) a sequence labelling system. In this section we will describe each of the resources used.
\subsection{ABSA Datasets}\label{sec:datasets}
\begin{table}[ht]\small
\centering
\begin{tabular}{clrrrrrr} \hline
Language & ABSA & \multicolumn{6}{c}{No. of Tokens and Opinion Targets} \\ \hline \hline
& & \multicolumn{3}{c}{Train} & \multicolumn{3}{c}{Test} \\ \cline{3-8}
& & Token & B-target & I-target & Token & B-target & I-target\\ \hline
en & 2014 & 47028 & 3687 & 1457 & 12606 & 1134 & 524 \\
en & 2015 & 18488 & 1199 & 538 & 10412 & 542 & 264 \\
en & 2016 & 28900 & 1743 & 797 & 9952 & 612 & 274 \\
es & 2016 & 35847 & 1858 & 742 & 13179 & 713 & 173 \\
fr & 2016 & 26777 & 1641 & 443 & 11646 & 650 & 239 \\
nl & 2016 & 24788 & 1231 & 331 & 7606 & 373 & 81 \\
ru & 2016 & 51509 & 3078 & 953 & 16999 & 952 & 372 \\
tr & 2016 & 12406 & 1374 & 516 & 1316 & 145 & 61 \\ \hline
\end{tabular}
\caption{ABSA SemEval 2014-2016 datasets for the restaurant domain. B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets.}
\label{tab:datasets}
\end{table}
Table \ref{tab:datasets} shows the ABSA datasets from the restaurant domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right, each row displays the number of tokens, number of targets and number of multiword targets for each training and test set. For English, it should be noted that the size of the 2015 set is less than half that of the 2014 dataset in terms of tokens, and only one third in number of targets. The French, Spanish and Dutch datasets are quite similar in terms of tokens, although the number of targets in the Dutch dataset is comparatively smaller, possibly due to the tendency to construct compound terms in that language. The Russian dataset is the largest whereas the Turkish set is by far the smallest one.
Additionally, it is also interesting to note the low number of targets that are multiwords. To provide a couple of examples, for Spanish only 35.59\% of the targets are multiwords, whereas for Dutch the percentage goes down to 25.68\%. If we compare these numbers with the CoNLL 2002 data for Named Entity Recognition (NER), a classic sequence labelling task, we find that the proportion of multiword targets in the ABSA data is roughly half or less the proportion of multiword entities in the CoNLL Spanish and Dutch data (35.59\% vs 74.33\% for Spanish and 25.68\% vs 44.96\% for Dutch).
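As a quick sanity check (ours) of the proportions quoted above:

```python
# Ratio (ours) of multiword-target proportions in ABSA versus multiword
# entities in CoNLL 2002, for Spanish and Dutch, using the figures in
# the text.
absa_multiword_pct = {"es": 35.59, "nl": 25.68}
conll_multiword_pct = {"es": 74.33, "nl": 44.96}
ratios = {lang: absa_multiword_pct[lang] / conll_multiword_pct[lang]
          for lang in absa_multiword_pct}
# ratios["es"] ~ 0.48, ratios["nl"] ~ 0.57
```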
\subsection{Unlabelled Corpora}\label{sec:unlabelled-corpora}
Apart from the manually annotated data, we also leveraged large, publicly available, unlabelled data to train the clusters: (i) Brown 1000 clusters and (ii) Clark and Word2vec clusters in the 100-800 range.
\begin{table}[ht]\small
\centering
\begin{tabular}{clrrrr} \hline
& \multicolumn{2}{c}{million words in corpus} & \multicolumn{3}{r}{million words for training}\\ \hline \hline
& & & Brown & Clark & Word2vec \\ \hline
\multirow{4}{*}{en} & Yelp Academic Dataset & 225 & 156 & 225 & 225 \\
& Yelp food & 117 & 82 & 117 & 117 \\
& Yelp food-hotels & 102 & 73 & 102 & 102 \\
& Wikipedia (20141208) & 1700 & 790 & 790 & 1700 \\ \hline
es & Wikipedia (20140810) & 428 & 246 & 246 & 428 \\
fr & Wikipedia (20140804) & 547 & 280 & 280 & 547 \\
nl & Wikipedia (20140804) & 235 & 128 & 128 & 235 \\
ru & Wikipedia (20140727) & 338 & 158 & 158 & 338 \\
tr & Wikipedia (20140806) & 48 & 33 & 48 & 48 \\ \hline
\end{tabular}
\caption{Unlabelled corpora used to induce the clusters. For each corpus and cluster type, the number of words (in millions) is specified. Average training times: depending on the number of words, Brown clusters required between 5 and 48 hours of training; Word2vec required 1--4 hours, whereas Clark clusters training lasted between 5 hours and 10 days.}
\label{tab:unlabeledcorpora}
\end{table}
In order to induce clusters from the restaurant domain we used the \emph{Yelp Academic Dataset}\footnote{http://www.yelp.com/dataset\_challenge}, from which three versions were created. First, the full dataset, containing 225M tokens. Second, a subset created by filtering out those categories that do not correspond directly to food-related reviews \citep{nrcSemeval_2014}. Thus, out of the 720 categories contained in the Yelp Academic Dataset, we kept the reviews from 173 of them. This \emph{Yelp food} dataset contained 117M tokens in 997,721 reviews. Finally, we removed two more categories (Hotels and Hotels \& Travel) from the \emph{Yelp food} dataset to create the \emph{Yelp food-hotels} subset, containing around 102M tokens. For the rest of the languages we used their corresponding Wikipedia dumps. The pre-processing and tokenization were performed with the IXA pipes tools \citep{AGERRI14.775.L14-1605}.
The number of words used for each dataset, language and cluster type are described in Table \ref{tab:unlabeledcorpora}. For example, the first row reads ``Yelp Academic Dataset containing 225M words was used; after pre-processing, 156M words were taken to induce Brown clusters, whereas Clark and Word2vec clusters were trained on the whole corpus''. As explained in \cite{agerri2016robust}, we pre-process the corpus before training Brown clusters, resulting in a smaller dataset than the original. Additionally, due to efficiency reasons, when the corpus is too large we use the pre-processed version to induce the Clark clusters.
\subsection{System}\label{sec:system}
We use the sequence labeller implemented within IXA pipes \citep{agerri2016robust}. It learns supervised models based on the Perceptron algorithm \citep{collins_discriminative_2002}. To avoid duplication of efforts, it uses the Apache OpenNLP project implementation\footnote{\url{http://opennlp.apache.org/}} customized with its own features. By design, the sequence labeller aims to establish a simple and shallow feature set, avoiding any linguistically motivated features, with the objective of removing any reliance on costly extra gold annotations and/or cascading errors across annotations.
The system consists of: (i) Local, shallow features based mostly on orthographic, word shape and n-gram features plus their context; and (ii) three types of simple clustering features, based on unigram matching: (a) Brown \citep{brown1992class} clusters, taking the 4th, 8th, 12th and 20th node in the path; (b) Clark \citep{clark2003combining} clusters; and (c) Word2vec \citep{mikolov2013distributed} clusters, based on K-means applied over the word vectors extracted using the skip-gram algorithm.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.50]{clustering-features}
\caption{Unigram matching in clustering features.}
\label{fig:clustering-features}
\end{figure}
The clustering features look for the cluster class of the incoming token in one or more of the clustering lexicons induced following the three methods listed above. If found, the class is added as a feature (``not found'' otherwise). As we work on a 5-token window, for each token and clustering lexicon at least 5 features are generated. For Brown, the number of features generated depends on the number of nodes found in the path for each token and clustering lexicon used.
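To make the feature-generation step concrete, the following sketch shows how unigram-matching clustering features could be produced for a 5-token window. This is an illustration only, not the IXA pipes implementation: the lexicon contents, feature-name scheme and function names are invented.

```python
# Sketch of unigram-matching clustering features over a 5-token window.
# Each lexicon maps lowercased word forms to a cluster identifier; for
# Brown clusters the identifier is a bit-string path, and one feature is
# generated per prefix taken at the 4th, 8th, 12th and 20th node.

BROWN_NODES = (4, 8, 12, 20)

def window(tokens, i, size=2):
    """Tokens in a [-size, +size] window around position i, with offsets."""
    lo, hi = max(0, i - size), min(len(tokens), i + size + 1)
    return [(j - i, tokens[j]) for j in range(lo, hi)]

def clustering_features(tokens, i, lexicon, name, brown=False):
    feats = []
    for offset, tok in window(tokens, i):
        cls = lexicon.get(tok.lower())
        if cls is None:
            feats.append(f"{name}[{offset}]=NOTFOUND")
        elif brown:
            for n in BROWN_NODES:
                if len(cls) >= n:  # only paths deep enough yield a feature
                    feats.append(f"{name}{n}[{offset}]={cls[:n]}")
        else:
            feats.append(f"{name}[{offset}]={cls}")
    return feats

# A toy Brown lexicon: 'salmon' was seen in the unlabeled corpus.
brown_lex = {"salmon": "110100101110011010011", "place": "0110"}
print(clustering_features(["the", "salmon", "rocks"], 1, brown_lex, "BR", brown=True))
```

Unseen words that share a cluster with words labeled as targets in the training data thus receive the same feature values, which is what drives the generalization discussed next.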
Figure \ref{fig:clustering-features} depicts how our system relates, via clusters, unseen words with those words that have been seen as targets during the training process. Thus, the tokens `french-onions' and `salmon' would be annotated as opinion targets because they occur in the same clusters as seen words which in the training data are labeled as targets.
The word representation features are \emph{combined} and \emph{stacked} using the clustering lexicons induced over the different data sources listed in Table \ref{tab:unlabeledcorpora}. In other words, \emph{stacking} means adding various clustering features of the same type obtained from different data sources (for example, using clusters trained on Yelp and on Wikipedia); \emph{combining} refers to combining different types of clustering features obtained from the same data source (e.g., using features from Brown and Clark clustering lexicons).
To choose the best combination of clustering features we tried, via 5-fold cross validation on the training set, every possible permutation of the available Clark and Word2vec clustering lexicons obtained from the data sources. Once the best combination of Clark and Word2vec clustering lexicons per data source was found, we tried to combine them with the Brown clusters. The result is a rather simple but very competitive system that has proven to be highly successful in the most popular Named Entity Recognition and Classification (NER) benchmarks, both in out-of-domain and in-domain evaluations. Furthermore, it was demonstrated that the system also performed robustly across languages without any language-specific tuning. Details of the system's implementation, including detailed description of the local and clustering features, can be found in \cite{agerri2016robust}\footnote{Table 3 and pages 68-71}, including a section on how to combine the clustering features.
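The model-selection procedure described above can be sketched as follows. This is purely illustrative: \texttt{cross\_val\_f1} stands in for training and scoring the Perceptron via 5-fold cross validation and is faked here with a fixed score table so the sketch runs; the lexicon abbreviations follow those used in Table \ref{tab:english}.

```python
# Sketch of the cluster-combination search: every combination of the
# available Clark and Word2vec lexicons is scored by 5-fold CV on the
# training set, and the best one is kept (Brown clusters would then be
# tried on top of the winner).
from itertools import combinations

def cross_val_f1(feature_sets):
    # Placeholder for training the Perceptron with these clustering
    # features and averaging F1 over 5 folds; faked here with a fixed
    # score table so that the sketch is runnable.
    scores = {frozenset(): 63.7,
              frozenset({"CYF100"}): 66.1,
              frozenset({"CYF100", "CYR200"}): 66.9,
              frozenset({"W2VW400"}): 65.8,
              frozenset({"CYF100", "CYR200", "W2VW400"}): 68.2}
    return scores.get(frozenset(feature_sets), 60.0)

def best_combination(lexicons):
    candidates = [c for r in range(len(lexicons) + 1)
                  for c in combinations(lexicons, r)]
    return max(candidates, key=cross_val_f1)

best = best_combination(["CYF100", "CYR200", "W2VW400"])
print(sorted(best))
```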
A preliminary version of this system \citep{sanvicente-saralegi-agerri:2015:SemEval} was the winner of the OTE sub-task in the ABSA 2015 edition (English only). In the next section we show that this system obtains state-of-the-art results not only across domains and languages for NER, but also for other tasks such as Opinion Target Extraction. The results reported are obtained using the official ABSA evaluation scripts \citep{pontiki-EtAl:2014:SemEval,pontiki-EtAl:2015:SemEval,pontiki-EtAl:2016:SemEval}.
\section{Experimental Results}\label{sec:results}
In this section we report on the experiments performed using the system and data described above. First we will present the English results for the three ABSA editions as well as a comparison with previous work. After that we will do the same for 5 additional languages included in the ABSA 2016 edition: Dutch, French, Russian, Spanish and Turkish. The local and clustering features, as described in Section \ref{sec:system}, are the same for every language and evaluation setting. The only change is the clustering lexicons used for the different languages. As stated in section \ref{sec:system}, the best cluster combination is chosen via 5-fold cross validation (CV) on the training data. We first try every permutation with the Clark and Word2vec clusters. Once the best combination is obtained, we then try with the Brown clusters obtaining thus the final model for each language and dataset.
\subsection{English}\label{sec:english}
Table \ref{tab:english} provides detailed results on the Opinion Target Extraction (OTE) task for English. We show in bold our best model (ALL) chosen via 5-fold CV on the training data. Moreover, we also show the results of the best models using only one type of clustering feature, namely, the best Brown, Clark and Word2vec models, respectively.
The first noteworthy issue is that the same model obtains the best results on the three English datasets. Second, it is also interesting to note the huge gains obtained by the clustering features, between 6-7 points in F1 score across the three ABSA datasets. Third, the results show that the combination of clustering features induced from different data sources is crucial. Fourth, the clustering features improve the recall by 12-15 points in the 2015 and 2016 data, and around 7 points for 2014. Finally, while in 2014 precision also increases, in the 2015 setting it degrades by almost 4 points.
\begin{table}[ht]\footnotesize
\centering
\begin{tabular}{l|ccc|ccc|ccc} \hline
& \multicolumn{3}{c}{2014} & \multicolumn{3}{c}{2015} & \multicolumn{3}{c}{2016} \\ \hline \hline
Features & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline
Local (L) & 81.84 & 74.69 & 78.10 & \textbf{76.82} & 54.43 & 63.71 & 74.41 & 61.76 & 67.50 \\
L + BY & 77.84 & 84.57 & 81.07 & 71.73 & 63.65 & 67.45 & \textbf{74.49} & 71.08 & 72.74\\
L + CYF100-CYR200 & \textbf{82.91} & 84.30 & 83.60 & 73.25 & 61.62 & 66.93 & 74.12 & 72.06 & 73.07\\
L + W2VW400 & 76.82 & 82.10 & 79.37 & 74.42 & 59.04 & 65.84 & 73.04 & 65.52 & 69.08 \\
L + \textbf{ALL} & 81.15 & \textbf{87.30} & \textbf{84.11} & 72.90 & \textbf{69.00} & \textbf{70.90} & 73.33 & \textbf{73.69} & \textbf{73.51} \\ \hline
\end{tabular}
\caption{ABSA SemEval 2014-2016 English results. BY: Brown Yelp 1000 classes; CYF100-CYR200: Clark Yelp Food 100 classes and Clark Yelp Reviews 200 classes; W2VW400: Word2vec Wikipedia 400 classes; ALL: BY+CYF100-CYR200+W2VW400.}
\label{tab:english}
\end{table}
Table \ref{tab:comparisonenglish} compares our results with previous work. MIN refers to the multi-task learning framework consisting of two LSTMs equipped with extended memories and neural memory operations, with manually developed rules for detecting opinion expressions \citep{li2017deep}. CNN-SenticNet is the 7-layer CNN with Amazon word embeddings, POS, linguistic rules based on syntax patterns and SenticNet \citep{poria2016aspect}.
LSTM is a Long Short Term Memory neural network built on top of word embeddings as proposed by \cite{D15-1168}. WDEmb \citep{Yin:2016:UWD:3060832.3061038} uses word and dependency path, linear context and dependency context embedding features as the input to a CRF. RNCRF is a joint model with CRF and a recursive neural network whereas CMLA is the Coupled Multilayer Attentions model described in section \ref{sec:absa-at-semeval}, both systems proposed by \cite{wang2017coupled}. DLIREC-NLANGP is the winning system at ABSA 2014 and 2016 \citep{toh2014dlirec,S15-2083,toh2016nlangp} while the penultimate row refers to our own system for all the three benchmarks (details in Table \ref{tab:english}).
\begin{table}[ht]\small
\centering
\begin{tabular}{lccc} \hline
System & ABSA 2014 & ABSA 2015 & ABSA 2016\\ \hline \hline
MIN$*$ (Li and Lam, 2017) & - & - & 73.44 \\
CNN-SenticNet (Poria et al., 2016) & 86.20 & - & - \\
CNN-SenticNet$*$ (Poria et al., 2016) & \textbf{87.17} & - & - \\
LSTM (Liu et al., 2015) & 81.15 & 64.30 & - \\
WDEmb (Yin et al., 2016) & 84.31 & 69.12 & - \\
WDEmb$*$ (Yin et al., 2016) & 84.97 & 69.73 & - \\
RNCRF (Wang et al., 2017) & 84.05 & 67.06 & - \\
RNCRF$*$ (Wang et al., 2017) & 85.29 & 70.73 & - \\
DLIREC-NLANGP (Toh et al., 2014-2016) & 84.01 & 67.11 & 72.34 \\
\textbf{BY+CYF100-CYR200+W2VW400} & 84.11 & \textbf{70.90} & \textbf{73.51} \\ \hline
Baseline & 47.16 & 48.06 & 44.07 \\ \hline
\end{tabular}
\caption{ABSA SemEval 2014-2016: Comparison of English results in terms of F1 scores; $*$ refers to models enriched with human-engineered linguistic features.}
\label{tab:comparisonenglish}
\end{table}
The results of Table \ref{tab:comparisonenglish} show that our system, despite its simplicity, is highly competitive, obtaining the best results on the 2015 and 2016 datasets and a competitive performance on the 2014 benchmark. In particular, we outperform much more complex approaches tuned via language-specific features, such as DLIREC-NLANGP. Furthermore, while the deep learning approaches (enriched with human-engineered linguistic features) obtain comparable or better results on the 2014 data, that is not the case for the 2015 and 2016 benchmarks, where our system also outperforms the MIN and CMLA models (systems which require manually added rules and gold-standard opinion expressions to obtain their best results, as explained in section \ref{sec:absa-at-semeval}). In other words, our system obtains better results than MIN and CMLA by learning the targets independently, instead of jointly learning the target and the opinion expressions that convey the polarity of the opinion.
There also seems to be a correlation between the size of the datasets and performance, given that the results on the 2014 data are much higher than those obtained using the 2015 and 2016 datasets. This might be due to the fact that the 2014 training set is substantially larger, as detailed in Table \ref{tab:datasets}. In fact, the smaller datasets seem to affect the deep learning approaches (LSTM, WDEmb, RNCRF) the most; only the MIN and CMLA models obtain results similar to ours, albeit using manually added language-specific annotations.
Finally, it would have been interesting to compare MIN, CNN-SenticNet and CMLA with our system on the three ABSA benchmarks, but their systems are not publicly available.
\subsection{Multilingual}\label{sec:multilingual}
We trained our system for 5 other languages on the ABSA 2016 datasets, using the same strategy as for English. We choose the best Clark-Word2vec combination (with and without Brown clusters) via 5-fold cross-validation on the training data. The features are exactly the same as those used for English; the only change is the data on which the clusters are trained. Table \ref{tab:absa2016multilingual} reports the detailed results obtained for each of the languages. In bold we show the best model chosen via 5-fold CV. Moreover, we also show the best models using only one type of clustering feature.
\begin{table}[ht]\small
\centering
\begin{tabular}{llccc}\hline
Language & Features & Precision & Recall & F1 \\ \hline \hline
\multirow{5}{*}{es} & Local (L) & 79.17 & 59.19 & 67.74 \\
& L + BW & 67.96 & 63.67 & 65.75 \\
& L + CW600 & 73.22 & 64.80 & 68.75 \\
& L + W2VW300 & 75.50 & 63.53 & 69.00 \\
& L + \textbf{CW600 + W2VW300} & 75.36 & 65.22 & \textbf{69.92} \\ \hline
\multirow{4}{*}{fr} & Local (L) & 66.92 & 66.41 & 66.67 \\
& L + BW & 63.39 & 72.46 & 67.62 \\
& L + \textbf{CW100} & 69.94 & 69.08 & \textbf{69.50} \\
& L + W2VW100 & 66.52 & 68.77 & 67.62 \\ \hline
\multirow{4}{*}{nl} & Local (L) & 73.14 & 55.50 & 63.11 \\
& L + BW & 68.59 & 57.37 & 62.48 \\
& L + CW100 & 66.94 & 65.15 & 66.03 \\
& L + \textbf{W2VW400} & 68.27 & 64.61 & \textbf{66.39} \\ \hline
\multirow{4}{*}{ru} & Local (L) & 64.87 & 61.87 & 63.33 \\
& L + BW & 61.32 & 64.60 & 62.92 \\
& L + \textbf{CW500} & 64.21 & 66.91 & \textbf{65.53} \\
& L + W2VW700 & 64.41 & 64.81 & 64.61 \\ \hline
\multirow{4}{*}{tr} & Local (L) & 56.82 & 51.72 & 54.15 \\
& L + \textbf{BW} & 62.69 & 57.93 & \textbf{60.22} \\
& L + CW200 & 58.28 & 60.69 & 59.46 \\
& L + W2VW300 & 59.09 & 53.79 & 56.32 \\ \hline
\end{tabular}
\caption{ABSA SemEval 2016 multilingual results.}
\label{tab:absa2016multilingual}
\end{table}
The first difference with respect to the English results is that the Brown clustering features are, in three out of five settings, detrimental to performance. Second, combining clustering features is only beneficial for Spanish. Third, the overall results are in general lower than those obtained on the 2016 English data. Finally, the difference between the best results and the results using the Local features is smaller than for English, even though the Local results are similar to those obtained on the English datasets (except for Turkish, but this is due to the significantly smaller size of the data, as shown in Table \ref{tab:datasets}).
We believe that all these four issues are caused, at least partially, by the lack of domain-specific clustering features in the multilingual experiments. In other words, while for the English experiments we leveraged the Yelp dataset to train the clustering algorithms, in the multilingual setting we first tried with already available clusters induced from the Wikipedia. Thus, it is to be expected that the gains provided by clustering features induced from domain-specific data such as Yelp would be superior to those achieved by clusters trained on out-of-domain data.
In spite of this, Table \ref{tab:comparisonmultilingual} shows that our system outperforms the best previous approaches across the five languages. In some cases, such as Turkish and Russian, the best previous scores were the baselines provided by the ABSA organizers, but for Dutch, French and Spanish our system is significantly better than the current state of the art. In particular, and despite using the same system for every language, we improve over GTI's submission, which implemented a CRF system with linguistic features specific to Spanish \citep{S16-1049}.
\begin{table}[ht]\small
\centering
\begin{tabular}{llc} \hline
Language & System & F1 \\ \hline \hline
\multirow{3}{*}{es} & GTI & 68.51 \\
& L + \textbf{CW600 + W2VW300} & \textbf{69.92} \\
& Baseline & 51.91 \\ \hline
\multirow{3}{*}{fr} & IIT-T & 66.67 \\
& L + \textbf{CW100} & \textbf{69.50} \\
& Baseline & 45.45 \\ \hline
\multirow{3}{*}{nl} & IIT-T & 56.99 \\
& L + \textbf{W2VW400} & \textbf{66.39} \\
& Baseline & 50.64 \\ \hline
\multirow{3}{*}{ru} & Danii. & 33.47 \\
& L + \textbf{CW500} & \textbf{65.53} \\
& Baseline & 49.31 \\ \hline
\multirow{2}{*}{tr} & L + \textbf{BW} & \textbf{60.22} \\
& Baseline & 41.86 \\ \hline
\end{tabular}
\caption{ABSA SemEval 2016: Comparison of multilingual results in terms of F1 scores.}
\label{tab:comparisonmultilingual}
\end{table}
\section{Discussion and Error Analysis}\label{sec:discussion}
Considering the simplicity of our approach, we obtain the best results for 6 languages and 7 different settings in the Opinion Target Extraction (OTE) benchmark for the restaurant domain using the ABSA 2014-2016 datasets.
These results are obtained without linguistic or manually-engineered features, relying on injecting external knowledge from the combination of clustering features to obtain a robust system across languages, outperforming other more complex and language-specific systems. Furthermore, the feature set used is the same for every setting, reducing human intervention to a minimum and establishing a clear methodology for a fast and easy creation of competitive OTE multilingual taggers.
The results also confirm the behaviour of these clustering algorithms when used to provide features for sequence labelling tasks such as OTE and Named Entity Recognition (NER), as previously discussed in \cite{agerri2016robust}. Thus, in every evaluation setting the best results using Brown clusters as features were obtained when data close to the application domain and text genre, even if relatively small, was used to train the Brown algorithm. This can be clearly seen if we compare the English with the multilingual results. For English, the models including Brown clusters improve over the Local features by 3-5 points in F1 score, whereas for Spanish, Dutch and Russian, they worsen performance. The reason is that for English the Yelp dataset is used, whereas for the rest of the languages the clusters are induced using the Wikipedia, effectively an out-of-domain corpus. The exception is Turkish, for which a 6 point gain in F1 score is obtained, but we believe that this is probably due to the small size of the training data used for training the Local model.
In contrast, Word2vec clusters clearly benefit from larger amounts of data, as illustrated by the best English Word2vec model being the one trained using the Wikipedia, and not the Yelp dataset, which is closer to the application domain. Finally, the Clark algorithm seems to be the most versatile as it consistently outperforms the other two clustering methods in 4 out of the 8 evaluation settings presented.
Summarizing: (i) Brown clusters perform better when leveraged from source data close to the application domain, even if small in size; (ii) Clark clusters are the most robust of the three with respect to the size and domain of the data used; and (iii) for Word2vec, size is the crucial factor: the larger the source data, the better the performance. Thus, instead of choosing one clustering type over another, our system provides a method for effectively combining them, depending on the data sources available, to obtain robust and language-independent sequence labelling systems.
Finally, results show that our models are particularly competitive when the amount of training data available is small, allowing us to compete with more complex systems that also include manually-engineered features, as shown especially by the English results on the 2015 and 2016 data.
\subsection{Error Analysis}\label{sec:error-analysis}
We will now discuss the shortcomings and most common errors made by our system in the OTE task. By looking at the overall results in terms of \emph{precision} and \emph{recall}, it is possible to see the following patterns: with respect to the Local models, precision is consistently better than recall or, in other words, the coverage of the Local models is quite low. Tables \ref{tab:english} and \ref{tab:absa2016multilingual} show that adding clustering features to the Local models improves the recall in every evaluation setting, although with different outcomes. Overall, precision suffers, except for French\footnote{It also goes up for Turkish but, as already commented, we believe that due to the small size of the Turkish training set, clustering features improve both precision and recall.}. Furthermore, in three cases (English 2014, 2016 and Russian) precision is lower than recall, whereas the remaining 5 evaluations show that, despite large improvements in F1 score, most errors in our system are caused by \emph{false negatives}, as can be seen in Table \ref{tab:false}.
\begin{table}[ht]\footnotesize
\centering
\begin{tabular}{lcccccccc} \hline
& 2014 & 2015 & \multicolumn{6}{c}{2016} \\ \hline \hline
Error type & en & en & en & es & fr & nl & ru & tr \\ \hline
FP & \textbf{230} & 151 & \textbf{189} & 165 & 194 & 117 & \textbf{390} & 62 \\
FN & 143 & \textbf{169} & 163 & \textbf{248} & \textbf{202} & \textbf{132} & 312 & \textbf{65} \\ \hline
\end{tabular}
\caption{False Positives and Negatives for every ABSA 2014-2016 setting.}
\label{tab:false}
\end{table}
Table \ref{tab:errorexamples} displays the top 5 most common false positive and false negative errors for English, Spanish and French\footnote{According to the authors' knowledge of languages to comment on specific examples from the data.}. By inspecting our system's output, and both the test and training sets, we found that there were three main sources of errors: (a) errors caused by ambiguity in the use of certain source forms that may or may not refer to an opinion target; (b) span errors, where the target has only been partially annotated; and (c) unknown targets, which the system was unable to annotate by generalizing on the training data or clusters.
\begin{table}[ht]\footnotesize
\centering
\begin{tabular}{c|lr|lr|lr|lr|lr} \hline
& \multicolumn{2}{c}{2014} & \multicolumn{2}{c}{2015} & \multicolumn{6}{c}{2016} \\ \hline \hline
& en & & en & & en & & es & & fr & \\ \hline
\multirow{5}{*}{FP} & place & 21 & place & 16 & place & 16 & comida & 11 & restaurant &13 \\
& money & 6 & food & 6 & food & 16 & restaurante & 10 & cuisine & 9 \\
& spot & 4 & waitress & 4 & restaurant & 11 & atenci\'on & 7 & terrasse & 8 \\
& pizza & 3 & chicken & 4 & service & 7 & platos & 6 & repas & 7 \\
& sushi & 3 & salmon & 3 & wait & 3 & servicio & 4 & plats & 6 \\ \hline \hline
\multirow{5}{*}{FN} & place & 4 & restaurant & 8 & place & 7 & restaurante & 12 & restaurant & 5 \\
& food & 3 & place & 7 & sushi & 3 & platos & 7 & cuisine & 5 \\
& waiting & 2 & food & 5 & restaurant & 3 & trato & 6 & carte & 5 \\
& taste & 2 & Casa La Femme & 4 & Ray's & 3 & comida & 6 & plats & 4 \\
& selection & 2 & The Four Seasons & 3 & menu & 3 & carta & 6 & table & 3 \\ \hline
\end{tabular}
\caption{Top five false positive (FP) and negative (FN) errors for English, Spanish and French.}
\label{tab:errorexamples}
\end{table}
With respect to type (a), it is useful to look at the most common errors for all three languages, namely, `place', `food' and `restaurant', which are also among the top 5 most frequent targets in the gold standard sets. By looking at Examples (1-3) we would say that in all three cases `place' should be annotated as opinion target. However, (2) is a false positive (FP), (3) is a false negative (FN) and (1) is an example from the training set in which `place' is annotated as target. This is the case with many instances of `place' for which there seems to be some inconsistency in the actual annotation of the training and test set examples\footnote{Interannotator agreement (91\% F1) was only reported for a small subset of the Spanish data.}.
\vspace{0.5cm}
\noindent Example (1): Avoid this place! \\
\noindent Example (2): this place is a keeper! \\
\noindent Example (3): it is great place to watch sporting events. \\
For other frequent type (a) errors, ambiguity is the main problem. Thus, in Spanish the use of `comida'\footnote{In English: ``food'' or ``meal'', depending on the context.} and `restaurante'\footnote{In English: ``restaurant''.} is highly ambiguous and causes many FPs and FNs because sometimes it is actually an opinion target whereas in many other cases it just refers to the meal or the restaurant itself without expressing any opinion about them. The same phenomenon occurs for ``food'' and ``restaurant'' in English and for `cuisine' and `restaurant' in French.
Span type (b) errors are typically caused by long opinion targets such as ``filet mignon on top of spinach and mashed potatoes'' for which our system annotates ``filet'' and ``spinach'' as separate targets, or ``chicken curry and chicken tikka masala'' which is wrongly tagged as one target. These cases are difficult because on the surface they look similar but the first one refers to one dish only, hence one target, whereas the second one refers to two separate dishes for which two different opinion targets should be annotated. Of course, these cases are particularly hurtful because they count as both FP and FN.
Finally, type (c) errors are usually caused by the lack of generalization of our system to deal with unknown targets. Examples (4)-(8) contain various mentions of the ``Ray's'' restaurant, which is in the top 5 errors for the English 2016 test set.
\vspace{0.5cm}
\noindent Example (4): After 12 years in Seattle Ray's rates as the place we always go back to.\\
\noindent Example (5): We were only in Seattle for one night and I'm so glad we picked Rays for dinner!\\
\noindent Example (6): I love Dungeness crabs and at Ray's you can get them served in about 6 different ways!\\
\noindent Example (7): Imagine my happy surprise upon finding that the views are only the third-best thing about Ray's!\\
\noindent Example (8): Ray's is something of a Seattle institution\\
Examples (4), (5) and (7) are FNs, (6) is a FP caused by wrongly identifying the target as ``Ray's you'', whereas (8) is not even annotated in the gold standard or by our system, although it should have been.
\section{Concluding Remarks}\label{sec:conclusion}
In this research note we provide additional empirical experimentation to \cite{agerri2016robust}, reporting the best results for Opinion Target Extraction for 6 languages and 7 datasets using the same set of simple, shallow and language-independent features. Furthermore, the results provide some interesting insights with respect to the use of clusters to inject external knowledge via semi-supervised features.
First, Brown clusters are particularly beneficial when trained on domain-related data. This is apparent in the multilingual setting, where the Brown clusters (trained on out-of-domain Wikipedia data) worsen the system's performance for every language except Turkish.
Second, the results also show that Clark and Word2vec clusters improve results in general, even if induced on out-of-domain data. Third, for best performance it is convenient to combine clusters obtained from diverse data sources, both from in- and out-of-domain corpora.
Finally, the results indicate that, even when the amount of training data is small, such as in the 2015 and 2016 English benchmarks, our system's performance remains competitive thanks to the combination of clustering features. This, together with the lack of linguistic features, facilitates the easy and fast development of systems for new domains or languages. These considerations thus confirm the hypotheses stated in \cite{agerri2016robust} with respect to the use of clustering features to obtain robust sequence taggers across languages and tasks.
The system and models for every language and dataset are available as part of the \emph{ixa-pipe-opinion} module for public use and reproducibility of results.\footnote{\url{https://github.com/ixa-ehu/ixa-pipe-opinion}}
\section*{Acknowledgments}
First, we would like to thank the anonymous reviewers for their comments to improve the paper. We would also like to thank I\~naki San Vicente for his help obtaining the Yelp data. This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO/FEDER, UE), under the projects TUNER (TIN2015-65308-C5-1-R) and CROSSTEXT (TIN2015-72646-EXP).
\bibliographystyle{apalike}
\section{Introduction}
Quantum computers have the potential to greatly enhance the efficiency of certain computational problems. However, they rely on the storage and manipulation of many quantum systems in superposition, and it is this careful juxtaposition of storage and manipulation of quantum states that renders their development so difficult. Namely, making individual quantum systems easily accessible to control often leads to increased external noise. Suppressing noise in a scalable manner is thus a necessary requirement for any quantum computing architecture, promoting the need for quantum error correction and fault-tolerance.
Quantum error correcting codes come in many different forms, yet the key to any error correcting scheme is the establishment of a fault-tolerance threshold~\cite{Shor96, AB97, Preskill98, KLZ98}. Concatenated codes have played a key role in determining these threshold rates due to their ability to iteratively suppress errors with increasing levels of concatenation. Along these lines, Aliferis, Gottesman, and Preskill established a rigorous lower bound for the fault-tolerance threshold of concatenated codes by introducing a technique called malignant set counting~\cite{AGP06}. Paetznick and Reichardt used this method to establish a circuit-level noise threshold for the 23-qubit Golay~code under physical depolarizing noise, obtaining a threshold error rate of~$1.32 \times 10^{-3}$.
One of the most prominent methods for implementing a logical fault-tolerant gate is to implement the gate transversally, that is, by applying individual physical gates to each of the qubits composing the logical qubit. However, as shown by Eastin and Knill, the set of transversal gates for a given code generates a finite group, and therefore cannot be universal for quantum computation~\cite{EK09}. In order to circumvent this fundamental restriction and potentially reduce the qubit overhead incurred by magic state distillation~\cite{BK05}, many recent fault-tolerant proposals for universal logical quantum computation have focused on code conversion and gauge fixing~\cite{PR13,ADP14, Bombin14, BC15, JB16, JBH15}. In this work, we study a parallel construction for universal fault-tolerant quantum computation through the concatenation of two error correcting codes~\cite{JL14}. The idea behind this scheme is to protect the gate that is not transversal for one code by implementing it using transversal gates in the other code. The concatenation scheme provides dual protection for the purposes of universal fault-tolerant quantum logic.
In this work, we establish a lower bound on the fault-tolerance threshold for the 105-qubit universal concatenated code under depolarizing noise. We show that the dual protection coming from the concatenation of two different error correcting codes provides more than just minimal fault-tolerant protection; it serves as a means for logical error suppression at the second level of concatenation~(and above). We believe that this provides new insights into the development of quantum error correcting codes, and emphasizes an important principle: to logically protect the quantum gates that are most present in the fault-tolerant architecture.
\section{Concatenated 105-qubit scheme} \label{Concatenated 105-qubit scheme}
\begin{figure}[h
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics{fig1a.jpg}
\caption{}
\label{fig:HadCircuit}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\begin{align*}
\xymatrix @*=<0em> @C=2em @R=1.4em {
& \gate{\text{LEC}} & \gate{G} & \gate{\text{TEC}} & \qw
\gategroup{1}{2}{1}{4}{0.7em}{--}
}
\end{align*}
\caption{}
\label{fig:exRecCircuit}
\end{subfigure}
\caption{\subref{fig:HadCircuit}~Logical Hadamard~$H$ circuit for $[[15,1,3]]$~Reed-Muller code. The bold dark lines represent resting qubits subject to storage errors. The dotted vertical lines are used to separate the time steps for which gates are applied in parallel. Logical~$H$ for the 105-qubit code is implemented fault-tolerantly by applying each non-fault-tolerant logical~$H$ gates in parallel. \subref{fig:exRecCircuit}~Extended rectangle consisting of leading and trailing error correcting circuits implementing the desired logical gate $G$.}
\label{fig:Circuits}
\end{figure}
We begin by briefly reviewing the 105-qubit concatenated scheme for universal fault-tolerant logical gates~\cite{JL14}. The logical information is encoded through the concatenation of an outer and an inner quantum code, that is, each logical qubit of the outer code is in turn encoded into the inner code. In the 105-qubit code, the outer code is the 7-qubit Steane code, which has transversal Clifford operations. The inner code is the 15-qubit Reed-Muller code, which has CNOT and $T = \text{diag}(1, e^{i\pi/4})$ as its transversal gates. The gate set generated by Clifford~+~$T$ is universal for quantum computation~\cite{BBC+95}. The overall code is a~$[[105,1,9]]$ code encoding a single logical qubit in 105 qubits with distance~9. Since CNOT and the phase gate~$S = T^2$ are transversal for both codes, both gates remain transversal when the two codes are concatenated. Logical Hadamard~$H$ is obtained by applying a non-transversal logical Hadamard to each of the encoded 15-qubit codeblocks. Although this is not fault-tolerant on an individual 15-qubit codeblock, where a single error can lead to a logical error on that block, due to the protection of the 7-qubit code a single error will never result in a global logical fault and will remain correctable. Figure~\ref{fig:HadCircuit} summarizes the application of the logical~$H$ gate on a 15-qubit codeblock. Note that the circuit construction was optimized to use only 14 CNOT gates with a circuit depth of 9 time steps; it might be possible to find a circuit with fewer gates and lower depth.
Logical~$T$ is constructed by choosing a sequence of logical CNOT and $T$~gates to be implemented on the 7-qubit outer code (see Fig.~\ref{fig:LogicalTgateCircuit} in the Supplementary Material). While this operation is not fault-tolerant on the outer code, as errors can spread between codeblocks, the underlying logical gates are transversal on each of the 15-qubit codeblocks. As such, any single error may result in multiple single errors spread across different codeblocks, which remain correctable by the error correction of each of the 15-qubit codeblocks.
\section{Fault-tolerance threshold theorem}
The key property of fault-tolerant architectures is the presence of an \emph{asymptotic threshold}. For concatenated coding schemes, the asymptotic threshold is the physical error rate $p_{th}$ such that for physical error rates~$p < p_{th}$ the logical error rate can be made arbitrarily small with a sufficiently large number of concatenation levels, while the overall time/space resource overhead scales as~$\mathcal{O}(\text{poly}(\log{(A/\epsilon)})A)$, where $A$ is the required resources for a noiseless circuit.
All currently known fault-tolerant schemes for quantum logic require active error correction between logical gates; error correction steps are interleaved between the implementations of the various fault-tolerant gates. In this study, fault-tolerant syndrome measurement and error correction are implemented using Steane's method~\cite{Steane97}. At a given concatenation level, each component of the logical circuit (gates and error detection/measurement) will itself be composed of many operations from the previous level of concatenation. These components include state preparation and measurement, logical gates, and memory locations. We consider a depolarizing model for each physical location (level-0) in the circuit. Depolarizing noise is modelled in a similar manner to that of Paetznick and Reichardt in their study of the 23-qubit Golay code~\cite{PR12}. Each single-qubit gate (including resting qubits) undergoes Pauli noise with probability~$p/4$ for each Pauli operation, and each two-qubit gate undergoes two-qubit Pauli noise with probability~$p/16$ for each non-trivial two-qubit Pauli. Under this noise, state preparation in the stabilizer $Z \ (X)$~basis is flipped from $\ket{0} \ (\ket{+})$ to $\ket{1} \ (\ket{-})$ with probability~$p/4$. Similarly, measurement in the stabilizer $Z \ (X)$~basis is flipped with probability~$p/4$.
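As an illustrative sketch of this level-0 noise model (the function names are ours, not from Ref.~\cite{PR12}), each single-qubit location draws one of the Paulis $X$, $Y$, $Z$ with probability $p/4$ each, and each two-qubit location draws one of the 15 non-trivial two-qubit Paulis with probability $p/16$ each:

```python
import random

PAULIS = ['X', 'Y', 'Z']

def sample_single_qubit_error(p, rng=random):
    """Single-qubit location (gate or resting qubit): each of X, Y, Z
    occurs with probability p/4; otherwise no error ('I')."""
    r = rng.random()
    if r < 3 * p / 4:
        return PAULIS[int(r / (p / 4))]
    return 'I'

def sample_two_qubit_error(p, rng=random):
    """Two-qubit gate: each of the 15 non-trivial two-qubit Paulis
    occurs with probability p/16; otherwise no error ('I', 'I')."""
    pairs = [(a, b) for a in 'IXYZ' for b in 'IXYZ'][1:]  # drop ('I', 'I')
    r = rng.random()
    if r < 15 * p / 16:
        return pairs[int(r / (p / 16))]
    return ('I', 'I')
```

State-preparation and measurement flips with probability $p/4$ can be sampled the same way with a single Bernoulli draw.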
As first proposed by Aliferis~\emph{et al.}~\cite{AGP06}, we analyze logical gates by considering the whole as an extended rectangle (\emph{exRec}), that is, the logical gate itself along with its leading~(LEC) and trailing~(TEC) error correction circuits (see Figure~\ref{fig:exRecCircuit}). In order to characterize the rate at which logical errors occur, we define \emph{malignant} error events. Let $\ket{\psi_1}$ be the single or two-qubit logical state obtained by applying ideal decoders immediately after the LEC circuit and $\ket{\psi_2}$ the state obtained by applying ideal decoders immediately after the TEC. We define the event $\text{mal}_{E}$ as $\ket{\psi_2} = EU\ket{\psi_1}$, where $E$ is a single or two-qubit logical Pauli error and $U$ is the desired gate; that is, $\text{mal}_{E}$ denotes the malignant logical error~$E$ present at the output of the circuit. In what follows, we will be interested in obtaining estimates of the probability that the event $\mathrm{mal}_{E}$ occurs for the CNOT, Hadamard, and $T$ gates.
We use Monte-Carlo sampling to determine the probability of each malignant event given an underlying physical depolarizing model. Given~$N$ simulations of the logical gate~$G$ at a physical error rate~$p$, we track the number of malignant faults~$a_E$ of each error type~$E$ and estimate the probability of a given logical fault as~$\text{Pr}[\text{mal}_E | G, p] = a_E/N$. The estimate of $\text{Pr}[\text{mal}_E | G, p]$ improves as the number of iterations~$N$ increases, since the standard deviation of the estimator shrinks. Using a least-squares fit of the error probability as a function of the input depolarizing error rate, we can determine the \emph{pseudo-threshold} for each of the logical operations of our error-correcting code. For a level-1 exRec encoding the logical gate~$G$, we define the pseudo-threshold as the crossing point~$p = p_G^{(1)}(p)$, where $p_G^{(1)}(p)=\sum_{E_i} \text{Pr}[\text{mal}_{E_i} | G, p]$ and the sum runs over all possible logical Pauli errors~$E_i$ for the given logical gate~$G$. Intuitively, the pseudo-threshold is the error rate below which the level-1 logical error rate is guaranteed to be lower than the physical error rate. In all previously studied error-correcting codes, the pseudo-threshold was conjectured to be an upper bound on the asymptotic threshold~\cite{SCCA06, PR12}. In this work we show that this intuitive bound need not hold and that the asymptotic threshold can be much larger than the pseudo-threshold. To our knowledge, this is the first exhibition of this type of logical error behaviour, and it is fundamentally related to the structure of the underlying 105-qubit error-correcting code.
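The estimator and the crossing-point search can be sketched as follows (a minimal illustration of the procedure, not the actual simulation code; for simplicity the crossing is located here by bisection on a given level-1 rate function rather than by a least-squares fit):

```python
def estimate_logical_rate(num_malignant, num_trials):
    """Monte-Carlo estimate Pr[mal_E | G, p] = a_E / N together with
    its binomial standard error, which decreases as 1/sqrt(N)."""
    phat = num_malignant / num_trials
    return phat, (phat * (1 - phat) / num_trials) ** 0.5

def pseudo_threshold(level1_rate, lo=1e-7, hi=1e-1, iters=60):
    """Find the crossing p = p_G^(1)(p) by bisection, assuming the
    fitted level-1 polynomial crosses the identity once on [lo, hi]."""
    f = lambda p: level1_rate(p) - p
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, a toy level-1 polynomial $500\,p^2$ crosses the identity at $p = 2\times10^{-3}$, which the bisection recovers.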
At each location of the level-1 exRec, errors are introduced following the depolarizing noise model with noise strength~$p$. Since the logical gates in question are fault-tolerant, a logical fault can only occur if multiple failures occur at the physical level. Namely, we can upper bound the failure probability for each logical fault~$E$ as follows:
\begin{align}
\text{Pr}[\text{mal}_E^{(1)}| G,p] \le \sum_{k = \ceil{\frac{d^*}{2}}}^{L_G} c(k) p^k =: \Gamma_G^{(1)},
\label{eq:Gamma1}
\end{align}
where the coefficients $c(k)$ are positive integers that count the number of possible weight-$k$ errors that can lead to a logical fault, $L_G$~is the total number of circuit locations in the logical gate~$G$, and $d^*$~characterizes the minimal distance of a given logical gate (that is, $\ceil{d^*/2}$~is the minimum weight of an error that must occur to produce a logical fault). For example, in the 105-qubit code the logical CNOT gate has $d^* = 9$, while the Hadamard and $T$~logical gates have~$d^* = 3$, since they sacrifice some of the distance of the code by not being globally transversal. As was shown in~\cite{PR12}, the polynomial $\Gamma_G^{(1)}(p)$ is monotone non-decreasing, which makes its construction straightforward; it upper bounds the logical error probabilities of each logical operation~$G$ at level-1.
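The bounding polynomial in Eq.~\ref{eq:Gamma1} is simple to evaluate numerically once the counting coefficients are known; a sketch with illustrative (not actual) coefficients:

```python
def gamma1(p, coeffs, k_min):
    """Evaluate Gamma_G^(1)(p) = sum_{k >= k_min} c(k) p^k, where
    coeffs maps the error weight k to the malignant count c(k) and
    k_min = ceil(d*/2) is the minimum weight causing a logical fault."""
    return sum(c * p ** k for k, c in coeffs.items() if k >= k_min)
```

Since all $c(k) \ge 0$, the resulting polynomial is monotone non-decreasing in $p$, as used above.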
We can then generalize this notion to concatenation level~$l$, where each physical location is replaced by a logical exRec location of level~$(l-1)$. Taking the worst-case error rate for the level-$(l-1)$ logical components, the error rate of logical gates at the $l$-th concatenation level can be bounded as follows:
\begin{align}
\text{Pr}[\text{mal}_E^{(l)}| G,p] \le \sum_{k = \ceil{\frac{d^*}{2}}}^{L_G} c(k) \left(\Gamma_G^{(l-1)}\right)^k =: \Gamma_G^{(l)},
\label{eq:Gammal}
\end{align}
where the coefficients~$c(k)$ remain the same, since the logical gate is composed of the same operations, with physical locations replaced by logical exRecs from the previous concatenation level.
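The level recursion of Eq.~\ref{eq:Gammal} and the resulting suppression can be illustrated numerically (with toy coefficients, for illustration only):

```python
def gamma_levels(p, coeffs, k_min, max_level):
    """Iterate Gamma^(l) = sum_{k >= k_min} c(k) (Gamma^(l-1))^k
    starting from the physical rate p, returning levels 1..max_level."""
    poly = lambda x: sum(c * x ** k for k, c in coeffs.items() if k >= k_min)
    out, g = [], p
    for _ in range(max_level):
        g = poly(g)
        out.append(g)
    return out
```

For example, with a single quadratic term $c(2)=100$, a physical rate $p=10^{-3}$ yields level bounds $10^{-4}, 10^{-6}, 10^{-10}, \dots$, i.e., doubly exponential suppression below the fixed point $p = 1/c(2)$.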
Finally, we generalize a claim of Ref.~\cite{PR12} required to show the suppression of errors at level-2 and higher concatenation levels when below the fault-tolerance threshold~$p_{th}$. Importantly, there exists a $p_{th}$ such that the upper bound on the level-2 logical error probability is lower than that of level-1, that is $\Gamma_G^{(2)} \le \epsilon \Gamma_G^{(1)}$ with $\epsilon < 1$, and the following holds for all concatenation levels~$m \ge 2$:
\begin{align}
\text{Pr}[\text{mal}_E^{(m)}| G,p] \le \Gamma_G^{(m)} \le \epsilon^{\ceil{\frac{d^*}{2}}^{m-2}+1} \Gamma_G^{(1)},
\label{eq:asymptoticthresh}
\end{align}
that is, the error rate is exponentially suppressed below the crossing point of~$\Gamma_G^{(1)}$ and~$\Gamma_G^{(2)}$, thus providing a lower bound on the asymptotic threshold for the logical gate~$G$. The proof in full generality is presented in Supplementary Material~\ref{app:thresholdlowerbound}.
\section{Concatenated 105-qubit thresholds}
At the level-1 encoding, the logical gate exhibiting the lowest pseudo-threshold is the Hadamard gate~$H$. Due to the complexity of the individual logical Hadamard gates on each of the 15-qubit codeblocks, many errors propagating from the different individual gate locations can lead to logical faults on a codeblock. The predominant failure event occurs when two codeblocks contain a logical fault. The logical error that occurs with the highest probability~$\text{Pr}[\text{mal}_{E} | H, p]$ is a logical~$X$. This can be understood from the sensitivity of the circuit encoding the Hadamard gate to input $Z$ errors from the LEC, which have a high probability of leading to a logical error. If any of the input $Z$ errors lands on the target qubit of the CNOT gates in the Hadamard encoding circuit, it will propagate to the physical Hadamard gate on the fourth qubit (see Figure~\ref{fig:HadCircuit}), resulting in a logical $X$ error.
Unlike the logical Hadamard, the leading level-1 logical errors for both CNOT and $T$ are logical $Z$ errors rather than $X$ errors. This stems from the asymmetry in the stabilizer generators of the 15-qubit code, which results in increased protection against $X$ errors. Due to the transversality of the logical~CNOT gate in both codes, and since there are fewer ways for errors to propagate in the implementation of the logical~$T$, these gates have better pseudo-thresholds than logical~$H$.
In order to lower bound the level-1 pseudo-threshold, the probabilities of all logical error types are summed for each of the logical gates and bounded as in Eq.~\ref{eq:Gamma1}. The resulting polynomials are compared to the input physical error rate, and their crossing points determine the pseudo-thresholds (see Fig.~\ref{fig:PseudoAndLevelThreeThresholds} in the Supplementary Material). The resulting values are presented in Table~\ref{tab:Pseudo-and-asymptotic}.
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline
 & Pseudo-threshold & Asymptotic threshold\tabularnewline
\hline
\hline
CNOT gate & $\left(2.11\pm0.02\right)\times10^{-3}$ & $\left(1.95\pm0.01\right)\times10^{-3}$\tabularnewline
\hline
$T$ gate & $\left(4.89\pm0.11\right)\times10^{-4}$ & $\left(1.58\pm0.02\right)\times10^{-3}$\tabularnewline
\hline
Hadamard gate & $\left(4.47\pm0.29\right)\times10^{-5}$ & $\left(1.28\pm0.02\right)\times10^{-3}$\tabularnewline
\hline
\hline
{\bf 105-qubit} & $\mathbf{\left(4.47\pm0.29\right)\times10^{-5}}$ & $\mathbf{\left(1.28\pm0.02\right)\times10^{-3}}$\tabularnewline
\hline
{\bf 23-qubit Golay} & $\mathbf{\left(1.73\right)\times10^{-3}}$ & $\mathbf{\left(1.32\right)\times10^{-3}}$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{tab:Pseudo-and-asymptotic}Lower bounds on the pseudo-thresholds and asymptotic thresholds for the Hadamard, $T$, and CNOT gates. The Hadamard asymptotic threshold is larger than its pseudo-threshold, a consequence of the double protection of the CNOT gates, as reflected in the high CNOT pseudo-threshold. In bold, the overall thresholds of the 105-qubit and 23-qubit codes are compared.}
\end{table}
It is important to observe that the CNOT pseudo-threshold is nearly two orders of magnitude larger than the Hadamard pseudo-threshold. Furthermore, all other operations in our circuits (resting qubits, measurement in the $X$ and $Z$ bases, and state preparations) are upper bounded by level-1 polynomials with larger pseudo-thresholds than that of the CNOT.
The dominant set of errors leading to logical faults in the level-1 Hadamard gate is a result of input errors from the LEC as well as failures in the CNOT gates within the 15-qubit Hadamard codeblocks. These components are composed of only memory, CNOT, $X$ and~$Z$ basis state preparation and measurement locations. Since the level-1 logical error probability of these gates will be much smaller in the level-2 Hadamard exRec, detrimental faults will be much less likely to occur. Hence, there will be error rates~$p$ above the pseudo-threshold~$p_{1,H}$ such that the level-2 error polynomials characterizing the logical error rate will be below the level-1 bounding polynomial,
\begin{align}
\Gamma^{(2)}(p)\leq \Gamma^{(1)}(p), \ \forall \ p \le p_{2,H},
\label{eq:AsymptoticCond}
\end{align}
where $p_{2,H} > p_{1,H}$. The error rate~$p_{2,H}$ is the rate below which all level-2 logical gates have a lower error rate than their level-1 counterparts. As shown in Ref.~\cite{PR12} and argued in the previous section, the value~$p_{2,H}$ serves as a lower bound on the asymptotic threshold~$p_{th}$.
\begin{figure}
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{fig2a}
\caption{}
\label{fig:AsymptoticHadamard}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{fig2b}
\caption{}
\label{fig:AsymptoticCNOT}
\end{subfigure}
\caption{Probability of logical error as a function of physical error rate for the level-1 and level-2 logical \subref{fig:AsymptoticHadamard}~Hadamard and \subref{fig:AsymptoticCNOT}~CNOT gates. The crossing point of the fitted curves determines a lower bound on the asymptotic threshold for each logical gate. The CNOT gate exhibits a much lower logical error rate than the Hadamard at the first level.}
\label{fig:AsymptoticThresh}
\end{figure}
In previous studies of asymptotic thresholds for the Golay and 7-qubit CSS codes, the CNOT exRec provided a lower bound on the threshold value since it contained the largest number of locations of all the gates in the universal gate set~\cite{AGP06,PR12,CDT09}. Since the CNOT exRec is itself composed entirely of transversal gates, as the error rate approaches the pseudo-threshold value certain malignant events (for example, a logical $ZI$ error at the output of the CNOT circuit, as can be seen in Fig.~\ref{fig:LevelOneCNOT}) become more likely than the level-0 probabilities determined from the depolarizing noise model. Recall that the pseudo-threshold was conjectured to be an upper bound on the asymptotic threshold value; here, however, it is the CNOT locations that are the leading contributors to logical errors. Consequently, the pseudo-threshold of the CNOT gate, as opposed to those of the $H$~and~$T$ gates, is the limiting factor for the asymptotic threshold. As argued above, this gives rise to reduced logical error rates of the~$H$ and~$T$ gates at the second level of concatenation, and using Eq.~\ref{eq:asymptoticthresh}, a lower bound on the asymptotic threshold~$p_{th}$ can be determined. The plots in Fig.~\ref{fig:AsymptoticThresh} show the level-1 and level-2 polynomials upper bounding the logical error rates at the first and second levels for the Hadamard and CNOT gate circuits (see Fig.~\ref{fig:PseudoAndLevelThreeThresholds} for the corresponding $T$-gate plots). As expected, the asymptotic threshold of the CNOT exRec, $\left(1.95\pm0.01\right)\times10^{-3}$, lies below its pseudo-threshold. The Hadamard exRec limits the threshold value of the 105-qubit code to $\left(1.28\pm0.02\right)\times10^{-3}$. Interestingly, the level-2 polynomials satisfy~Eq.~\ref{eq:asymptoticthresh} for error rates nearly 30 times larger than their corresponding level-1 polynomials.
This is a distinctive feature of the 105-qubit concatenated scheme and clearly demonstrates the impact of having an exRec primarily composed of gates which are transversal in both codes with much larger pseudo-threshold rates. The asymptotic threshold derived for the 105-qubit code compares favourably to the $[[23,1,7]]$~Golay code studied under the same depolarizing error model and metric for gate failures under malignant set counting~\cite{PR12}. This scheme does not require magic state distillation in order to achieve fault-tolerance and may lead to reduced overhead~\cite{FMMC12}. Determining the resource overhead remains an interesting open problem.
\section{Conclusion}
In this work, we established the first rigorous lower bound on the asymptotic threshold for the concatenated 105-qubit code. We showed that the pseudo-threshold of $\left(4.47\pm0.29\right)\times10^{-5}$, arising from the $H$ gate, is significantly improved at higher levels of concatenation, yielding a lower bound on the asymptotic threshold of $\left(1.28\pm0.02\right)\times10^{-3}$. The increase in the asymptotic threshold is primarily due to the relatively high threshold of the logical CNOT gate. We believe that this non-traditional behaviour of logical error probabilities at higher concatenation levels is a distinctive property of the studied scheme and points to a promising direction for future error correction research. Due to the high concentration of CNOT gates used for error detection, we believe that tailoring codes to correct for logical errors in encoded CNOT gates, at the expense of perhaps noisier single-qubit gates, would allow for higher asymptotic thresholds for concatenated codes.
\section{Acknowledgements}
T.~J.~would like to acknowledge the support of NSERC and the Vanier-Banting Secretariat through the Vanier~CGS. This work was supported by CIFAR, NSERC, and Industry~Canada. C. C. would like to thank Hemant Katiyar for useful discussions.
\bibliographystyle{ieeetr}
\section{Introduction} \label{sec:introduction}
Conventional (forward) optimization problems find an optimal solution for a given set of parameters. Inverse optimization, on the other hand, infers the parameters of a forward optimization problem given a set of observed solutions (typically a single point). In the literature, inverse optimization \citep{zhang1996calculating} is often employed to derive the cost vector of an optimization problem while the constraint parameters are assumed to be fully known. In this paper, we focus on the opposite case: we impute the constraint parameters (as opposed to the objective function) of a linear forward problem given a cost vector and a set of observations. \add{Hence, we infer the feasible region of the forward problem, which can be used to identify future feasible or infeasible observations and to understand the behaviour of the model under different cost vectors.}
When imputing the cost vector, it is usually assumed that the observed solution is a candidate optimal solution~\citep{Ahuja01, Iyengar05, ghate2020inverse}. More recently, several studies have also considered the case where the observed solution is not necessarily a candidate for optimality; they propose inverse models that minimize error metrics capturing the optimality gap of the observed solution~\citep{Keshavarz11, Chan14, Chan15, Bertsimas15, Aswani16, naghavi2019inverse}. Multiple observations have also been considered, where the cost vector is imputed based on a given set of feasible observations~\citep{Keshavarz11,Troutt06,Troutt08,Chow12, Bertsimas15, esfahani2018IncompleteInfo, babier2019ensemble}.
\citet{tavasliouglu2018structure} find a set of inverse-feasible cost vectors, instead of a single cost vector, that makes feasible observations optimal. \add{A standard assumption in the literature of inverse optimization is that the observed data is noise-free \citep{zhang1999further, Ahuja01}. There are only a few studies that consider noise or uncertainty in the input data \citep{Aswani16, dong2018inferring, ghobadi2018robust} when inferring the cost vector.}
Extending beyond imputing only the cost vector, some studies consider the case where both the objective function and the right-hand side (RHS) of the constraints are imputed simultaneously for specific types of problems~\citep{dempe2006inverse, Chow12, cerny2016inverse}. Note that when the feasible region is being imputed, any given observation can be made optimal, since the constraints can be adjusted so as to position it on the boundary of the feasible region. A few studies focus on imputing only the RHS constraint parameters of the forward problem. Given a single observation, \citet{cerny2016inverse} find the RHS of the constraints from a pre-specified set of possible parameters. In other studies, the RHS is imputed in such a way that the observed solution becomes optimal~\citep{birge2017inverse} or near-optimal according to a pre-specified distance metric~\citep{dempe2006inverse, guler2010capacity, saez2018short}.
\add{There are few previous studies that impute the left-hand side (LHS) parameters of the constraint set. \citet{chan2018inverse} take a single observation as input and propose an inverse optimization method to find the LHS (assuming the RHS is known) as well as the cost vector such that the given observation becomes optimal. Assuming an unknown objective in addition to an unknown feasible region would result in finding a feasible region that makes a given point optimal for \emph{some} objective ({\it i.e.}, any objective that fits the problem mathematically), and hence would make the problem more relaxed and less practical. An unknown objective would also mean that we would not be able to assess the quality of the given solutions.}
\add{Although the forward problem we consider is similar to that of~\cite{chan2019inverse}, our proposed inverse models differ from theirs in several key aspects. Our models infer the full constraint parameters (both LHS and RHS) and consider multiple observations instead of a single one. We assume that the objective function is known and hence, we can identify the preferred solution(s) among all given observations. This assumption is relevant in practical settings and will be discussed in Section~\ref{sec:motivation}. Their models also require a prior belief and additional user-defined conditions on the constraint parameters to avoid trivial (all zero) solutions; our models do not require any additional user input and generate non-trivial solutions by design. We also introduce generalized loss functions that do not necessarily rely on a prior belief on the constraint parameters.}
Solving inverse optimization problems efficiently has been the focus of a few papers in the literature. While inverse optimization problems for imputing the cost vector often retain the complexity of their corresponding forward problems ({\it e.g.}, linear programming), imputing the constraint parameters often constitutes a non-convex problem due to the presence of multiple bilinear terms. Hence, the resulting inverse problems are more complex to solve. To address these concerns, a few studies in the literature focus on specific problem instances and find certain criteria or assumptions to reduce the complexity~\citep{birge2017inverse, brucker2009inverse}. Others propose a solution methodology that solves a sequence of convex optimization problems under a specific distance metric~\citep{chan2018inverse}. In this paper, instead of attempting to solve a nonlinear non-convex problem, we use the problem's theoretical properties and propose an equivalent reformulation that can be linearly constrained, and hence, easier to solve. We also further simplify the problem by providing closed-form solutions or suggesting decomposition approaches for specific cases.
To the best of our knowledge, this paper is the first in the literature to propose an inverse optimization framework for inferring the full constraint matrix (both LHS and RHS) of a linear programming model based on multiple observations. \add{Contrary to the recent literature, the objective function is known in our proposed framework, and the constraint parameters are partially or fully unknown. Our models do not require any additional input data with the observations, but such data can be incorporated in the model if available. The solutions are observed without noise, but our framework allows for potential inclusion of noisy data.}
The contributions of this paper are \add{summarized} as follows:
\begin{itemize}
\item We propose a single-point inverse optimization model that inputs one observation and infers a set of fully or partially unknown constraint parameters of the forward problem so as to make the given observation optimal for a specific cost vector.
\item We extend this model to a multi-point inverse optimization methodology that inputs any number of observations and finds the constraint parameters so as to make all the observations feasible and the preferred observation(s) optimal for a specific cost vector.
\item We develop an equivalent tractable reformulation of the multi-point inverse optimization model that eliminates the bilinearity of the original model.
\item \add{We introduce a generalized loss function that can take any form or input any type of available data and induces the desirable properties of the feasible region of the forward problem. This proposed loss function does not necessarily rely on a prior belief, expert opinion, or other specific user inputs on the constraint parameters.}
\item We test and validate our proposed methodology on numerical \add{examples} and demonstrate the characteristics of each of the loss functions introduced.
\item \add{We demonstrate the application of our approach on a diet recommendation problem and show that the proposed model can use past food consumption observations to impute each user's implicit constraints and generate personalized diets that are palatable.}
\end{itemize}
The rest of this paper is organized as follows. In Section~\ref{sec:motivation}, we motivate the proposed methodology by presenting examples of application areas. Section~\ref{sec:Methodology} introduces our methodology for inverse optimization of constraint parameters, its theoretical properties, and an equivalent reformulation. In Section~\ref{sec:measures}, we introduce examples of the generalized loss function that can be used as the objective function of the inverse optimization problem and discuss the theoretical properties of each. We illustrate the results of our methodology using two numerical examples and a diet recommendation application in Section~\ref{sec:numericalexample}. Section~\ref{sec:discussions} discusses a few extensions to the proposed models, and finally, conclusions and future research directions are provided in Section~\ref{sec:conclusions}.
\section{Motivation} \label{sec:motivation}
Inverse optimization has been applied to several different application areas, from healthcare~\citep{Erkin10, Ayer14} and nutrition~\citep{ghobadi2018robust} to finance~\citep{Bertsimas12} and electricity markets~\citep{birge2017inverse}. In this section, we provide two example applications where imputing the feasible region based on a set of collected observations is of practical importance. These applications serve to motivate the development of our proposed methodology.
\vspace{1em}
\noindent{\bf A. Cancer Treatment Planning:}
Consider the radiation therapy treatment planning problem for cancer patients. The input of the problem is a medical image ({\it e.g.}, CT, MRI) which includes contours that delineate the cancerous region ({\it i.e.}, tumor) and the surrounding healthy organs. The goal of a treatment planner is to find the direction, shape, and intensity of radiation beams such that a set of clinical metrics on the tumor and the surrounding healthy organs are satisfied. \add{While there exists literature on using inverse optimization for inferring the objective function in cancer treatment planning \citep{Chan14, Goli15, babier2018inverse}, to the best of our knowledge, no study infers the constraint parameters for cancer treatment planning.}
In current practice, there are clinical guidelines on the upper/lower limits for different clinical metrics. Planners often try to find an \emph{acceptable} treatment plan based on these guidelines to forward to an oncologist, who will inspect it and either approve it or return it to the planner. If the plan is not approved, the planner receives a set of instructions on which metrics to adjust. It often happens that the final approved plan does not meet all the clinical limits simultaneously, as there are trade-offs between different metrics.
Suppose we have a set of approved treatment plans from previous patients. Even though there are clinical guidelines on acceptability thresholds for different metrics, in reality there may exist approved treatment plans that do not meet these limits. There may also exist plans that do meet the guidelines but were not approved because the oncologist believed a better plan was achievable. Hence, the true feasible region of the forward problem in treatment planning is unknown.
Considering the historically-approved plans as ``feasible points'', we can employ our inverse optimization approach to find the constraint parameters, based on which we can understand the implicit logic of the oncologists in approving a treatment plan. In doing so, we would help both the oncologists and the planners by ({\it i}\,) generating more realistic lower/upper bounds on the clinical metrics based on past observations, ({\it ii}\,) improving the iterative planning process by producing higher quality initial plans given the clear guidelines and hence, reducing the number of preliminary plans passed back and forth between the planner and oncologists, and ({\it iii}\,) improving the quality of plans by preventing low-quality solutions that otherwise satisfy the acceptability metrics, especially for inverse treatment planning methods that heavily rely on these metrics.
\vspace{1em}
\noindent{\bf B. Diet Problem:}
Consider a diet recommendation system that suggests a variety of food items based on a user's dietary needs and/or personal preferences. Each meal can be characterized by a set of features and/or metrics such as meat content or the daily value of each nutrient. The diet problem often consists of minimizing some objective such that a set of requirements on the food intake is met. \add{A limited number of studies in the literature have used inverse optimization for inferring the objective function weights in a diet problem \citep{shahmoradi2019quantile, ghobadi2018robust}. However, to the best of our knowledge, no study infers the constraint parameters for the diet problem.}
Assume the objective function of the underlying (forward) optimization problem in the diet problem is known. Examples of such objective functions would be minimizing total calories in a weight loss program, minimizing sodium intake in a hypertension diet, or minimizing monetary cost. In addition to dietary requirements, each person has a set of implicit constraints that would result in them finding a certain suggestion ``palatable'' or not. Different users would have different such constraints, and it is not explicitly possible to list what these constraints are. In such cases, our inverse optimization model can use historical data to ensure the next suggested meal in the diet recommendation system is palatable.
For example, consider a user who is mostly vegetarian and implicitly limits the number of meat servings during the week, or another user who prefers to limit dairy intake when consuming certain vegetables. If we observe the user's diet choices (feasible observations) over a certain time horizon, the inverse optimization model can find the set of constraints (feasible region) that captures this behaviour by making diets that deviate too far from past observations infeasible, while ensuring that the required amounts of nutrients are met, the diet is palatable (feasible), and the given objective ({\it e.g.}, cost, calories) is minimized. \add{In Section~\ref{sec:numericalexample}, we further discuss this application of our proposed methodology on a diet recommendation problem.}
\section{Methodology} \label{sec:Methodology}
In this section, we first set up the forward optimization problem where, contrary to conventional inverse optimization, the cost vector is known and the unknown parameters are, instead, the constraint parameters. Let $ \mathbf{c} \in\mathbb{R}^n, \mathbf{A} \in\mathbb{R}^{m_1\times n}, \mathbf{b} \in\mathbb{R}^{m_1}, \mathbf{G} \in\mathbb{R}^{m_2\times n}$ and $ \mathbf{h} \in\mathbb{R}^{m_2}$. We define our linear forward optimization ($\mathbf{FO}$) problem as
\begin{subequations}
\begin{align}
\mathbf{FO}: \quad \underset{ \mathbf{x} }{\text{minimize}} & \quad \mathbf{c} ' \mathbf{x} \\
\text{subject to} & \quad \mathbf{A} \mathbf{x} \ge \mathbf{b} ,
\\
& \quad \mathbf{G} \mathbf{x} \ge \mathbf{h} , \\
&{\color{myGreen} \quad \mathbf{x} \in \mathbb{R}^n. }
\end{align}
\end{subequations}
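To make the setup concrete, the following sketch solves a small hypothetical instance of $\mathbf{FO}$ with SciPy's \texttt{linprog}. All numbers (a two-variable, diet-style toy problem) are illustrative assumptions, not data from this paper; \texttt{linprog} expects $\le$ constraints, so the $\ge$ constraints are negated.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-variable diet-style instance of FO:
#   minimize c'x  s.t.  A x >= b  (two nutrient requirements),
#                       G x >= h  (known constraints, here x >= 0).
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

# linprog solves min c'x s.t. A_ub x <= b_ub, so flip the signs
# of the >= constraints; x >= 0 is passed through `bounds`.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimal vertex and optimal value
```

On this instance the optimum is attained at the vertex where both unknown constraints are active.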
Consider the case in which some or all of the constraint parameters of the $\mathbf{FO}$ formulation are unknown, but there exist one or more observations (solutions) that are deemed feasible or optimal for $\mathbf{FO}$ based on expert opinion.
For such settings, we propose inverse optimization models that infer these unknown parameters of $\mathbf{FO}$ and recover the full feasible region. We assume that $ \mathbf{A} $ and $ \mathbf{b} $ are the \emph{unknown constraint} parameters that the inverse optimization aims to infer and $ \mathbf{G} $ and $ \mathbf{h} $ are the \emph{known constraint} parameters.
For every constraint of $\mathbf{FO}$, three cases can be considered: ({\it i}\,) all of the parameters of the constraint are known, in which case we denote it as part of the known constraints, $ \mathbf{G} \mathbf{x} \geq \mathbf{h} $; ({\it ii}\,) all of its parameters are unknown and we denote the constraint as part of the unknown constraints, $ \mathbf{A} \mathbf{x} \geq \mathbf{b} $; or ({\it iii}\,) some of its parameters are known (for instance, $ \mathbf{b} $ is known) or some properties about the constraint(s) are known, in which case we denote the constraint(s) as part of unknown constraint parameters $ \mathbf{A} $ and $ \mathbf{b} $ and add the additional restrictions to the inverse model. We will discuss the latter case in more detail in Section~\ref{sec:discussions}, and without loss of generality, we assume no such partial information is available for the rest of this section.
Throughout this paper, we index the unknown and known constraints by the sets $\mathcal{I}_1 =\{ 1,\dots, m_1\}$ and $\mathcal{I}_2 =\{ 1,\dots, m_2\}$, respectively. Note that if there are no known constraints, $\mathcal{I}_2 = \emptyset$. The $i^\text{th}$ row of the constraint matrices $ \mathbf{A} $ and $ \mathbf{G} $ is referred to as $ \mathbf{a} _i$ and $ \mathbf{g} _i$, respectively. Similarly, the $i^\text{th}$ elements of the $ \mathbf{b} $ and $ \mathbf{h} $ vectors are denoted by $b_i$ and $h_i$, respectively. The set $\mathcal{J}= \{1,\dots,n\}$ denotes the columns in the constraint matrices ({\it i.e.}, the indices of the $ \mathbf{x} $ variable). We use bold numbers $\mathbf{1}$ and $ \mathbf{0} $ to denote the all-ones and the all-zeros vectors, respectively.
In this section, we propose three models to infer the unknown parameters of $\mathbf{FO}$. First, we present a single-point inverse optimization model when only one observation is available. Next, we provide a multi-point inverse optimization model to infer the unknown constraint parameters when several observations are available. Finally, we propose a tractable reformulation for the proposed inverse optimization model.
\subsection{Single-point Inverse Optimization}
In this section, we propose an inverse optimization model for the case where only one observed solution is available. Given a single observation $ \mathbf{x} ^0$, a cost vector $ \mathbf{c} $, and known constraint parameters $ \mathbf{G} $ and $ \mathbf{h} $ (if any), we would like to formulate an inverse optimization model that finds the unknown constraint parameters $ \mathbf{A} $ and $ \mathbf{b} $ such that the observation $ \mathbf{x} ^0$ is optimal for the forward problem $\mathbf{FO}$. Without loss of generality, we assume that the observation $ \mathbf{x} ^0$ is feasible for the known constraints $ \mathbf{G} \mathbf{x} \geq \mathbf{h} $, since the forward problem will otherwise be ill-defined, and the inverse problem will have no solution.
Let $ \mathbf{y} $ and $ \mathbf{w} $ be dual vectors for constraints (1b) and (1c) of $\mathbf{FO}$, respectively. The single-point inverse optimization model ($\IO$) can be written as follows:
\begin{subequations} \label{eq:IO}
\begin{align}
\IO: \underset{ \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} }{\text{minimize}} & \quad \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ) , \\
\text{subject to}
& \quad \mathbf{A} \mathbf{x} ^0 \geq \mathbf{b} , \label{eq:IOprimalfeas}\\
& \quad \mathbf{c} ' \mathbf{x} ^0 = \mathbf{b} ' \mathbf{y} + \mathbf{h} ' \mathbf{w} , \label{eq:IOstrongduality}\\
& \quad \mathbf{A} ' \mathbf{y} + \mathbf{G} ' \mathbf{w} = \mathbf{c} , \label{eq:IOdualfeas1}\\
&\quad || \mathbf{a} _i||= 1, \quad \forall i \in \mathcal{I}_1 \label{eq:IOnorm} \\
& \quad \mathbf{y} \in \mathbb{R}^{m_1}, \quad \mathbf{w} \in \mathbb{R}^{m_2},
\label{eq:IOdualfeas2} \\
& \quad \mathbf{A} \in \mathbb{R}^{m_1 \times n}, \quad \mathbf{b} \in \mathbb{R}^{m_1}. \label{eq:IOprimalfeas2}
\end{align}
\end{subequations}
The objective $ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} )$ is a loss function that drives the desired properties of the feasible region based on some given input parameter $ \mathscrsfs{D} $. \add{For instance, the user may input a prior belief on the shape of the feasible region as parameter $ \mathscrsfs{D} $ and set the objective function $ \mathscr{F} $ to minimize the deviation from such prior belief.} More details and other explicit examples of the loss function $ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} )$ are discussed in Section~\ref{sec:measures}. Constraint~\eqref{eq:IOprimalfeas} enforces primal feasibility of $ \mathbf{x} ^0$. Constraints~\eqref{eq:IOdualfeas1} and~\eqref{eq:IOdualfeas2} are the dual feasibility constraints. Constraint~\eqref{eq:IOstrongduality} is the strong duality constraint that ensures $ \mathbf{x} ^0$ is the optimal solution of $\mathbf{FO}$. Finally, without loss of generality, constraint~\eqref{eq:IOnorm} normalizes the LHS of each unknown constraint based on some norm $||\cdot||$. The introduction of this norm avoids finding multiple scalars of the same constraint parameters or finding trivial (all-zero) solutions, without requiring the user to define application-specific side constraints. Nevertheless, if any such side constraints or partial information on $ \mathbf{A} $ or $ \mathbf{b} $ exists, they can be incorporated in the model. This extension will be discussed later in Section~\ref{sec:discussions}.
We make the following assumption to ensure that the forward problem is not a simple feasibility problem.
\begin{assumption} \label{assum:c}
$ \mathbf{c} \neq \mathbf{0} $.
\end{assumption}
\noindent We note that without Assumption~\ref{assum:c}, the $\IO$ problem will be simplified since it will have many trivial solutions such as $ \mathbf{A} = \mathbf{G} , \mathbf{b} = \mathbf{h} $, and $ \mathbf{y} = - \mathbf{w} $, or alternatively, $ \mathbf{w} = \mathbf{y} = \mathbf{0} $ with any $ \mathbf{A} $ and $ \mathbf{b} $ that satisfy the primal feasibility constraint~\eqref{eq:IOprimalfeas}. For the rest of this paper, we assume Assumption~\ref{assum:c} holds. We next show that the $\IO$ formulation is valid and has non-trivial feasible solutions.
\begin{proposition} \label{prop:IOfeas}
\add{The feasible region of $\IO$ is non-empty.}
\end{proposition}
\proof{Proof.}
Let $ \mathbf{w} = \mathbf{0} $, $ \mathbf{y} = \big(\| \mathbf{c} \|/m_1\big)\mathbf{1}$, \,
$ \mathbf{b} = \big(( \mathbf{c} ' \mathbf{x} ^0)/{\| \mathbf{c} \|}\big)\mathbf{1}$, and $ \mathbf{a} _i= \mathbf{c} /{\| \mathbf{c} \|}$,\, $\forall i \in \mathcal{I}_1$, which is well-defined given $ \mathbf{c} \neq \mathbf{0} $.
Then $ \mathbf{A} \mathbf{x} ^0 = \mathbf{b} $, $ \mathbf{A} ' \mathbf{y} = \mathbf{c} $, and $ \mathbf{b} ' \mathbf{y} = \mathbf{c} ' \mathbf{x} ^0$, so $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ is a feasible solution to $\IO$. \Halmos
\endproof
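The constructive solution in this proof can be checked numerically. The sketch below uses an arbitrary toy cost vector and observation (both assumptions for illustration); note the scaling $ \mathbf{y} = (\| \mathbf{c} \|/m_1)\mathbf{1}$, which makes the rows of $ \mathbf{A} ' \mathbf{y} $ sum back to $ \mathbf{c} $ exactly.

```python
import numpy as np

# Numeric sanity check of the constructive IO solution: w = 0,
# a_i = c/||c||, b_i = c'x0/||c||, y = (||c||/m1) * ones.
# c, x0, and m1 are hypothetical values chosen for illustration.
c = np.array([3.0, 2.0])
x0 = np.array([1.0, 3.0])       # the single observation
m1 = 4                          # number of unknown constraints
nc = np.linalg.norm(c)

A = np.tile(c / nc, (m1, 1))    # every row a_i = c/||c||
b = np.full(m1, c @ x0 / nc)
y = np.full(m1, nc / m1)

print(A @ x0 - b)               # primal feasibility: all entries >= 0
print(A.T @ y - c)              # dual feasibility residual: ~ 0
print(b @ y - c @ x0)           # strong duality gap: ~ 0
```

All three residuals vanish, and each row of $ \mathbf{A} $ has unit norm, so every constraint of $\IO$ is met.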
\add{In general, any feasible region that renders the point $ \mathbf{x} ^0$ optimal for $\mathbf{FO}$ is a feasible solution to the single-point $\IO$ problem. Therefore, the solutions to the single-point $\IO$ can be uninformative, and the applicability of this model can be limited. For instance, when $m_1\geq n$, $\IO$ may force all constraints to pass through $ \mathbf{x} ^0$ and possibly make $ \mathbf{x} ^0$ the only feasible point of $\mathbf{FO}$. As another example, in practice, often more than one feasible observation is available for the forward problem. In such cases, the $\IO$ formulation does not apply because strong duality need not hold for all observations, which would render some of the observations infeasible. As a result, the theoretical properties of the solutions to the inverse optimization problem would also change. Hence, in the next section, we extend our $\IO$ formulation to the case of multiple observations and discuss its properties.}
\subsection{Multi-point Inverse Optimization}
Consider a finite number of observations $ \mathbf{x} ^k, k\in \mathcal{K} =\{1,\dots,K\}$ to the forward problem. A primary goal of multi-point inverse optimization is to find the constraint parameters in such a way that all observations $ \mathbf{x} ^k, k\in \mathcal{K}$, become feasible. We define this property as follows.
\begin{definition}\label{def:valid}
A polyhedron $\mathcal{X} = \{ \mathbf{x} \in \mathbb{R}^n \,|\, \mathbf{D} \mathbf{x} \geq \mathbf{d} \}$ is a \underline{valid} feasible set
for the forward problem if $ \mathbf{x} ^k \in \mathcal{X}$, \, $\forall k\in \mathcal{K}$. \end{definition}
\begin{remark}\label{rem:validSet}
If $\mathcal{X}\subseteq \mathbb{R}^n$ is a valid feasible set, then any set $S \subseteq \mathbb{R}^n$ such that $\mathcal{X} \subseteq S$ is also a valid feasible set.
\end{remark}
Remark~\ref{rem:validSet} states that if a set $\mathcal{X}$ is a valid feasible set, then any set that contains $\mathcal{X}$ is also valid since all observations remain feasible for any superset of $\mathcal{X}$. Any set that is not valid, {\it i.e.}, does not contain some observation $ \mathbf{x} ^k, k \in \mathcal{K}$ cannot be a feasible set to the forward optimization (by definition). Hence, all feasible regions that are imputed from the solutions of a multi-point inverse optimization must be valid feasible sets. In particular, to ensure that the feasible region of the forward problem is well-defined, we assume that the set defined by the known constraints is also a valid feasible set.
\begin{assumption} \label{assum:G}
The set $\mathcal{G} = \{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{G} \mathbf{x} \geq \mathbf{h} \}$ is a valid feasible set.
\end{assumption}
\noindent The feasibility of the observations for the known constraints is similar to the assumption in the single-point $\IO$, except that all observations (as opposed to one observation) are assumed to be feasible for the known constraints $ \mathbf{G} \mathbf{x} \geq \mathbf{h} $. Otherwise, the inverse optimization will not have a solution. Although we have $K$ observations in the multi-point inverse optimization, we can identify the observation(s) that result in the best objective function value for the forward problem, because the $ \mathbf{c} $ vector is known. We define the observation with the best value as the preferred observation, denoted by $ \mathbf{x} ^0$, for which strong duality must hold.
\begin{definition}\label{def:x0} The \underline{preferred} solution in a set of observations $\{ \mathbf{x} ^k\}_{k\in\mathcal{K}}$ is defined as
\begin{equation*}
\mathbf{x} ^0 \in \argmin_{ \mathbf{x} ^k, k\in\mathcal{K}}\{ \mathbf{c} ' \mathbf{x} ^k\}.
\end{equation*}
\end{definition}
\noindent
If multiple observations satisfy Definition~\ref{def:x0}, without loss of generality, we arbitrarily select one of them as $ \mathbf{x} ^0$. The multi-point inverse optimization problem aims to find a set of constraints for the forward problem such that all observations are feasible, and the preferred solution $ \mathbf{x} ^0$ becomes optimal.
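Selecting the preferred solution is a one-line computation once the observations and the cost vector are given. In the sketch below, the observations and $ \mathbf{c} $ are toy values chosen for illustration; ties would be broken arbitrarily, matching the convention above (\texttt{argmin} returns the first minimizer).

```python
import numpy as np

# Preferred solution x^0: the observation with the smallest
# objective value c'x^k. X stacks the observations row-wise;
# all numbers are hypothetical.
c = np.array([3.0, 2.0])
X = np.array([[4.0, 0.0],
              [1.0, 3.0],
              [0.0, 5.0]])

k0 = np.argmin(X @ c)   # index of the preferred observation
x0 = X[k0]
print(k0, x0)
```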
The following multi-point inverse optimization ($\MIO$) formulation finds a feasible region that minimizes some loss function of the inverse optimal solution (desired properties of the feasible region) from a set of input parameters $ \mathscrsfs{D} $.
\begin{subequations}\label{eq:TMIO}
\begin{align}
\mathbf{MIO}: \underset{ \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} }{\text{minimize}} \quad & \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ), \\
\text{subject to} \quad
& \mathbf{A} \mathbf{x} ^k \ge \mathbf{b} , \quad \forall k\in \mathcal{K} \label{eq:TMIO-PrimalFeas}\\ & \mathbf{c} ' \mathbf{x} ^0 = \mathbf{b} ' \mathbf{y} + \mathbf{h} ' \mathbf{w} , \label{eq:TMIO-StrongDual}\\
& \mathbf{A} ' \mathbf{y} + \mathbf{G} ' \mathbf{w} = \mathbf{c} , \label{eq:TMIO-DualFeas}\\
& || \mathbf{a} _i||= 1, \quad \forall i \in \mathcal{I}_1 \label{eq:MIOnorm} \\
& {\color{myGreen} \mathbf{y} \in \mathbb{R}^{m_1}, \quad \mathbf{w} \in \mathbb{R}^{m_2},
\label{eq:MIO-Dualfeas2}} \\
& {\color{myGreen} \mathbf{A} \in \mathbb{R}^{m_1 \times n}, \quad \mathbf{b} \in \mathbb{R}^{m_1}. }\label{eq:MIO-signs}
\end{align}
\end{subequations}
The constraints in $\MIO$ include strong duality~\eqref{eq:TMIO-StrongDual}, dual feasibility (\eqref{eq:TMIO-DualFeas} and~\eqref{eq:MIO-Dualfeas2}), and normalization~\eqref{eq:MIOnorm}. \add{We note that even though $ \mathbf{c} $ is known, the strong duality constraint does not automatically hold. To ensure the optimality of $ \mathbf{x} ^0$, the inverse problem needs to find the constraint parameters $ \mathbf{A} $ and $ \mathbf{b} $ such that $ \mathbf{x} ^0$ is on the boundary of the imputed feasible region and can be optimal with respect to $ \mathbf{c} $. Hence, constraint \eqref{eq:TMIO-StrongDual} is necessary to ensure that strong duality holds for the preferred solution.} In contrast to the single-point $\IO$ formulation, the primal feasibility constraint~\eqref{eq:TMIO-PrimalFeas} is now a set of $K$ constraints that ensures the feasibility of all observations for $\mathbf{FO}$. The formulation of $\MIO$, similar to that of $\IO$, is bilinear and hence, is non-convex in general. Analogous to $\IO$, we first show that the $\MIO$ formulation is feasible.
\begin{proposition}\label{prop:MIOfeas}
\add{The feasible region of $\MIO$ is non-empty.}
\end{proposition}
\proof{Proof.}
Let $ \mathbf{w} = \mathbf{0} $, $ \mathbf{y} = \big(\| \mathbf{c} \|/m_1\big)\mathbf{1}$, $ \mathbf{b} = \big(( \mathbf{c} ' \mathbf{x} ^0)/\| \mathbf{c} \|\big)\mathbf{1}$, and $ \mathbf{a} _i= \mathbf{c} /\| \mathbf{c} \|$ $\forall i \in \mathcal{I}_1$.
The resulting solution $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ satisfies constraints~\eqref{eq:TMIO-StrongDual}--\eqref{eq:MIO-Dualfeas2}. To show that the primal feasibility constraints~\eqref{eq:TMIO-PrimalFeas} also hold, note that if $\exists k \in \mathcal{K}$ such that $ \mathbf{A} \mathbf{x} ^k < \mathbf{b} $, then by substituting the values of $ \mathbf{A} $ and $ \mathbf{b} $, we have $( \mathbf{c} ' \mathbf{x} ^k)/\| \mathbf{c} \| < ( \mathbf{c} ' \mathbf{x} ^0)/\| \mathbf{c} \|$, or equivalently, $ \mathbf{c} ' \mathbf{x} ^k < \mathbf{c} ' \mathbf{x} ^0$, which is a contradiction to the definition of $ \mathbf{x} ^0$ (Definition~\ref{def:x0}). Therefore, the solution $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ is feasible for $\MIO$.
\Halmos \endproof
Constraint parameters $ \mathbf{A} $ and $ \mathbf{b} $ described in Proposition~\ref{prop:MIOfeas} represent the half-space $\mathcal{C}=\{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{c} ' \mathbf{x} \geq \mathbf{c} ' \mathbf{x} ^0 \}$ whose identifying hyperplane is orthogonal to the cost vector $ \mathbf{c} $ and passes through the preferred solution $ \mathbf{x} ^0$ (as an example, see Figure~\ref{fig:noPriorB:a} in Section~\ref{sec:numericalexample}). Therefore, the set $\mathcal{C}$ includes all observations $ \mathbf{x} ^k, k\in\mathcal{K}$ ({\it i.e.}, $\mathcal{C}$ is a valid feasible set) and makes $ \mathbf{x} ^0$ optimal for the forward problem. Hence, $\mathcal{C}$ is a feasible set for the $\mathbf{FO}$ problem that is {\it imputed} from a solution of $\MIO$. In Definition~\ref{def:imputedSet}, we generalize this concept for all valid feasible sets that are derived from $\MIO$ solutions.
\begin{definition} \label{def:imputedSet}
A polyhedron $\mathcal{X} = \{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{D} \mathbf{x} \geq \mathbf{d} \}$ is called an \underline{imputed} feasible set if there exists a feasible solution $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ of $\MIO$ such that $\mathcal{X} = \{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{A} \mathbf{x} \geq \mathbf{b} , \mathbf{G} \mathbf{x} \geq \mathbf{h} \}$.
\end{definition}
An imputed feasible set $\mathcal{X} = \{ \mathbf{x} \in \mathbb{R}^n \,|\, \mathbf{D} \mathbf{x} \geq \mathbf{d} \}$ may be represented by infinitely many sets of constraints. For example, any scalar multiplication of the inequality or any other (perhaps linearly-dependent) reformulation will represent the same set $\mathcal{X}$. The $\MIO$ formulation finds one such set of constraints to characterize $\mathcal{X}$ while satisfying the normalization constraint~\eqref{eq:MIOnorm}. When referring to an imputed feasible set, we consider the set $\mathcal{X}$ and not the exact constraint parameters that define it. Note that any imputed feasible set is always a feasible region for $\mathbf{FO}$ that makes $ \mathbf{x} ^0$ optimal because it is inferred by a solution of $\MIO$, and conversely, any feasible region of $\mathbf{FO}$ that satisfies the known constraints and makes $ \mathbf{x} ^0$ optimal is an imputed feasible set of $\MIO$. We formalize this property in Proposition~\ref{prop:valid}.
\begin{proposition}\label{prop:valid}
The system $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ is feasible for $\mathbf{MIO}$ if and only if the polyhedron $\mathcal{X} = \{ \mathbf{x} \in \mathbb{R}^n|~ \mathbf{A} \mathbf{x} \geq \mathbf{b} , \mathbf{G} \mathbf{x} \geq \mathbf{h} \}$ is a valid feasible set that makes $ \mathbf{x} ^0$ optimal for the forward problem.
\end{proposition}
\proof{Proof.}
Assume that $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ is a solution to $\mathbf{MIO}$. By constraint~\eqref{eq:TMIO-PrimalFeas}, ${ \mathbf{A} \mathbf{x} ^k \geq \mathbf{b} , \forall k\in \mathcal{K}}$, and by Assumption~\ref{assum:G}, we have $ \mathbf{G} \mathbf{x} ^k \geq \mathbf{h} , \forall k\in \mathcal{K}$. Hence, $\mathcal{X}$ is a valid feasible set. Constraint~\eqref{eq:TMIO-StrongDual} ensures that strong duality holds for $ \mathbf{x} ^0$, and hence, $ \mathbf{x} ^0$ must be optimal for $\mathbf{FO}$.
\noindent Now let $\mathcal{X} = \{ \mathbf{x} \in \mathbb{R}^n|~ \mathbf{A} \mathbf{x} \geq \mathbf{b} , \mathbf{G} \mathbf{x} \geq \mathbf{h} \}$ be a valid feasible set that makes $ \mathbf{x} ^0$ optimal for $\mathbf{FO}$. Without loss of generality, we can assume that constraint~\eqref{eq:MIOnorm} holds since we can always normalize $ \mathbf{A} $ and $ \mathbf{b} $ so that $\| \mathbf{a} _i\|=1$. The primal feasibility constraint~\eqref{eq:TMIO-PrimalFeas} is always met by the definition of $\mathcal{X}$. Since $ \mathbf{x} ^0$ is optimal for $\mathbf{FO}$, we have $\underset{ \mathbf{x} \in \mathcal{X}}{\min}\{ \mathbf{c} ' \mathbf{x} \}>-\infty$; therefore, the dual of $\mathbf{FO}$ exists and is feasible, and strong duality holds. Hence, all constraints~\eqref{eq:TMIO-PrimalFeas}--\eqref{eq:MIO-Dualfeas2} are satisfied, which implies that there must exist $ \mathbf{y} $ and $ \mathbf{w} $ such that $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ is feasible for $\MIO$.
\Halmos \endproof
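Both conditions of this characterization (validity and optimality of $ \mathbf{x} ^0$) can be verified numerically for a candidate $( \mathbf{A} , \mathbf{b} )$ by checking feasibility of every observation and re-solving the forward LP over the imputed set. The instance below is a toy example whose numbers are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

# Check (i) validity: every observation satisfies Ax >= b and Gx >= h,
# and (ii) optimality of x^0: minimizing c'x over the imputed set
# returns exactly c'x^0. All numbers are hypothetical.
c = np.array([3.0, 2.0])
X = np.array([[4.0, 0.0], [1.0, 3.0], [0.0, 5.0]])       # observations x^k
x0 = X[np.argmin(X @ c)]                                 # preferred solution

A = np.array([[1.0, 1.0], [2.0, 1.0]]); b = np.array([4.0, 5.0])
G = np.eye(2);                          h = np.zeros(2)  # known: x >= 0

valid = np.all(X @ A.T >= b - 1e-9) and np.all(X @ G.T >= h - 1e-9)
res = linprog(c, A_ub=np.vstack([-A, -G]), b_ub=-np.concatenate([b, h]),
              bounds=[(None, None)] * 2)
print(valid, res.fun, c @ x0)   # True, and the two values coincide
```

Here the imputed set is valid and the forward optimum equals $ \mathbf{c} ' \mathbf{x} ^0$, so this candidate $( \mathbf{A} , \mathbf{b} )$ corresponds to a feasible $\MIO$ solution.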
Proposition~\ref{prop:valid} characterizes the properties of all solutions to $\MIO$ and ensures that $ \mathbf{x} ^0$ is optimal for $\mathbf{FO}$. Although Proposition~\ref{prop:valid} and the $\MIO$ formulation explicitly consider the optimality of only $ \mathbf{x} ^0$, we show in Remark~\ref{rem:StrongDual} that any other observation with the same objective function value as $ \mathbf{x} ^0$ is also optimal for the forward problem.
\begin{remark}\label{rem:StrongDual}
If $\mathcal{X}$ is an imputed feasible set, any $\tilde{ \mathbf{x} } \in \mathcal{X}$ such that $\tilde{ \mathbf{x} } \in \argmin_{ \mathbf{x} ^k, \forall k\in \mathcal{K}}\{ \mathbf{c} ' \mathbf{x} ^k\}$ is an optimal solution of $\mathbf{FO}$.
\end{remark}
\proof{Proof.}
Let $ \mathbf{x} ^0$ be the preferred solution. Assume $\exists \, \tilde{ \mathbf{x} } \in \mathcal{X}$ such that $\tilde{ \mathbf{x} } \in \argmin_{ \mathbf{x} ^k, \forall k\in \mathcal{K}}\{ \mathbf{c} ' \mathbf{x} ^k\}$ and ${\tilde{ \mathbf{x} } \ne \mathbf{x} ^0}$. By constraint~\eqref{eq:TMIO-PrimalFeas}, we know that $\tilde{ \mathbf{x} }$ is feasible for $\mathbf{FO}$. If $\tilde{ \mathbf{x} }$ is not optimal for $\mathbf{FO}$, then $ \mathbf{c} ' \mathbf{x} ^0 < \mathbf{c} ' \tilde{ \mathbf{x} }$ which is a contradiction to $\tilde{ \mathbf{x} } \in \argmin_{ \mathbf{x} ^k, \forall k\in \mathcal{K}}\{ \mathbf{c} ' \mathbf{x} ^k\}$. Hence, $\tilde{ \mathbf{x} }$ must be an optimal solution to $\mathbf{FO}$.
\Halmos \endproof
\add{In this section, we proposed inverse optimization models that can impute the feasible region of a forward problem based on a set of feasible observations, and we discussed the general properties of the solutions. Considering that the proposed models are nonlinear, we next focus on additional properties of the solutions and propose a tractable reformulation that can be used to solve the $\MIO$ problem.}
\subsection{Tractable Reformulation}
\add{The proposed $\MIO$ formulation includes a set of bilinear constraints which makes the formulation non-linear ({\it i.e.}, constraints~\eqref{eq:TMIO-StrongDual} and~\eqref{eq:TMIO-DualFeas}) and therefore, intractable to solve. In this section, we outline specific properties of the solution space of the $\MIO$ model that would allow us to develop a tractable reformulation of this model. To this end, we first characterize the range of possible imputed feasible sets to $\MIO$ and then prove that by considering specific known constraints, strong duality and dual feasibility can be guaranteed without explicitly incorporating the corresponding nonlinear constraints in the formulation. We finally formalize this idea theoretically and discuss how this reformulation can be used to find solutions to $\MIO$.}
We first show that we can find the smallest and largest possible imputed feasible sets of $\MIO$ solely based on the given observations. An imputed feasible set $\mathcal{X}$ of $\MIO$ is a valid feasible set, according to Proposition~\ref{prop:valid}. By Remark~\ref{rem:validSet}, any superset of $\mathcal{X}$ will also be a valid feasible set. In particular, $\mathcal{X}$ is always a subset of the half-space $\mathcal{C} = \{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{c} ' \mathbf{x} \geq \mathbf{c} ' \mathbf{x} ^0 \}$ and a superset of the convex hull of the observations, denoted by $\mathcal{H}$. This property is shown in Lemma~\ref{lem:subset} and plays a fundamental role in reformulating the $\MIO$ model later in Theorem~\ref{thm:MIP_general}. For brevity of notation, we use $\mathcal{H}$ and $\mathcal{C}$ as defined above throughout the rest of this paper.
\begin{lemma}\label{lem:subset}
If $\mathcal{X}$ is an imputed feasible set of $\MIO$, then $\mathcal{H} \subseteq \mathcal{X} \subseteq \mathcal{C}$.
\end{lemma}
\proof{Proof.}
\noindent ($\mathcal{H}\subseteq \mathcal{X}$): Assume $\mathcal{H}\not\subseteq \mathcal{X}$ and $\exists\, \bar{ \mathbf{x} } \in \mathcal{H},$ $\bar{ \mathbf{x} } \not \in \mathcal{X}$. By definition of $\mathcal{H}$, $\exists \, \lambda_k \geq 0, \, \forall k \in \mathcal{K}$ such that $\bar{ \mathbf{x} } = \sum_{k \in \mathcal{K}} \, \lambda_k \mathbf{x} ^k$ and $ \sum_{k \in \mathcal{K}} \, \lambda_k = 1$. This is a contradiction because $\mathcal{X}$ is a polyhedron that is a valid feasible set. Therefore, it contains all observations $ \mathbf{x} ^k$ and any convex combination of them, including $\bar \mathbf{x} $. Hence, $\mathcal{H}\subseteq \mathcal{X}$. \\
($\mathcal{X} \subseteq \mathcal{C}$): Similarly, assume $\mathcal{X} \not \subseteq \mathcal{C}$ and $\exists \, \bar{ \mathbf{x} } \in \mathcal{X},$ $\bar{ \mathbf{x} } \not \in \mathcal{C}$. Since $\bar{ \mathbf{x} } \not \in \mathcal{C}$, we have $ \mathbf{c} '\bar{ \mathbf{x} } < \mathbf{c} ' \mathbf{x} ^0$ (by definition). Therefore, $\bar{ \mathbf{x} }$ has a better objective value than $ \mathbf{x} ^0$, which is a contradiction to $\mathcal{X}$ being an imputed feasible set because $\mathcal{X}$ must make $ \mathbf{x} ^0$ optimal for $\mathbf{FO}$. Hence, $\mathcal{X} \subseteq \mathcal{C}$.
\Halmos \endproof
As Lemma~\ref{lem:subset} illustrates, any imputed feasible set must be a subset of the half-space $\mathcal{C} = \{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{c} ' \mathbf{x} \geq \mathbf{c} ' \mathbf{x} ^0 \}$.
\add{The intuition behind this idea is as follows. The identifying hyperplane of $\mathcal{C}$ ({\it i.e.}, $ \mathbf{c} ' \mathbf{x} = \mathbf{c} ' \mathbf{x} ^0 $) passes through the preferred solution $ \mathbf{x} ^0$ and is orthogonal to the known cost vector $ \mathbf{c} $. The inclusion of this half-space ensures that $ \mathbf{x} ^0$ is on the boundary of the imputed feasible region and is always candidate optimal. In other words, for a valid feasible set $\mathcal{U} \not \subseteq \mathcal{C}$, there will always exist other feasible solutions that have a better objective function value than $ \mathbf{x} ^0$ in the forward problem. Such a set $\mathcal{U}$ cannot be an imputed feasible set of $\MIO$ since it does not make $ \mathbf{x} ^0$ a candidate for optimality. Hence, any imputed feasible set must be a subset of the half-space $\mathcal{C}$. For a visual representation of the half-space $\mathcal{C}$, see Figure~\ref{fig:noPriorB:a} in Section~\ref{sec:numericalexample}.}
Using this property, we can reduce the solution space of $\MIO$ from $\mathbb{R}^n$ to the half-space $\mathcal{C}$.
We can further restrict the solution space of $\MIO$ by noting that the set of known constraints $\mathcal{G} = \{ \mathbf{x} \in \mathbb{R}^n \mid \mathbf{G} \mathbf{x} \geq \mathbf{h} \}$ must also be satisfied by any $\MIO$ solution. Therefore, the solution space of $\MIO$ is always a subset of $\mathcal{S} = \mathcal{C} \cap \mathcal{G}$. Proposition~\ref{prop:CG} shows that this set is the largest imputed feasible set of $\MIO$ and that any $\MIO$ solution is a subset of $\mathcal{S}$.
\begin{proposition}\label{prop:CG}
\add{Let $\mathcal{S} =\mathcal{C} \cap \mathcal{G} = \{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{c} ' \mathbf{x} \geq \mathbf{c} ' \mathbf{x} ^0, \, \mathbf{G} \mathbf{x} \geq \mathbf{h} \}$, then
\begin{enumerate} \setlength\itemsep{0em}
\item [{\normalfont(}i\,{\normalfont)}] $\mathcal{S}$ is an imputed feasible set of $\MIO$,
\item [{\normalfont(}ii\,{\normalfont)}] for any other imputed feasible set $\mathcal{X}$, we have $\mathcal{X} \subseteq \mathcal{S}$,
\item [{\normalfont(}iii\,{\normalfont)}] for any valid feasible set $\mathcal{U}$, the set $\mathcal{U} \cap \mathcal{S}$ is an imputed feasible set.
\end{enumerate}
}
\end{proposition}
\proof{Proof.}
\add{
({\it i\,}) The set $\mathcal{S}$ is a valid feasible set since both $\mathcal{C}$ and $\mathcal{G}$ are valid feasible sets as shown in Lemma~\ref{lem:subset} and Assumption~\ref{assum:G}. The set $\mathcal{S}$ also makes $ \mathbf{x} ^0$ optimal for $\mathbf{FO}$ because it includes the half-space $\mathcal{C}$. Hence, by Proposition~\ref{prop:valid}, $\mathcal{S}$ is an imputed feasible set of $\MIO$. \\
({\it ii\,}) For any imputed feasible set $\mathcal{X}$, it is obvious that $\mathcal{X} \subseteq \mathcal{C}$ (by Lemma~\ref{lem:subset}) and $\mathcal{X} \subseteq \mathcal{G}$ (by definition), and hence $\mathcal{X} \subseteq \mathcal{S}$. \\
({\it iii\,}) Since both $\mathcal{U}$ and $\mathcal{S}$ are valid feasible sets of $\mathbf{FO}$, the set $\mathcal{U} \cap \mathcal{S}$ is also a valid feasible set, and hence, primal feasibility holds. Strong duality also holds for $ \mathbf{x} ^0 \in \mathcal{U} \cap \mathcal{S}$ because the half-space $\mathcal{C}$ is part of $\mathcal{S}$, with $ \mathbf{c} ' \mathbf{x} ^0$ as the optimal value of $\mathbf{FO}$. Hence, dual feasibility also holds, and $\mathcal{U} \cap \mathcal{S}$ is an imputed feasible set of $\MIO$.
}
\Halmos \endproof
Without loss of generality, in the rest of this paper, we assume that $\mathcal{C}=\{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{c} ' \mathbf{x} \geq \mathbf{c} ' \mathbf{x} ^0 \}$ is added as the {\it first} known constraint in the formulation, that is, $ \mathbf{g} _1 = \mathbf{c} , \, h_1 = \mathbf{c} ' \mathbf{x} ^0$. Let the set $\mathcal{S} =\mathcal{C} \cap \mathcal{G} = \{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{c} ' \mathbf{x} \geq \mathbf{c} ' \mathbf{x} ^0, \, \mathbf{G} \mathbf{x} \geq \mathbf{h} \}$ denote the new set of ``known constraints'' hereinafter. With this assumption, Remark~\ref{rem:cs} formally states that the largest feasible set that can be imputed by a solution of $\MIO$ is the set $\mathcal{S}$ itself, as defined in Proposition~\ref{prop:CG}.
\begin{remark}\label{rem:cs}
The set $\mathcal{S}$ is the largest possible imputed feasible set of $\MIO$.
\end{remark}
Considering the set $\mathcal{S}$ as the set of known constraints, we can guarantee that the strong duality and dual feasibility constraints~\eqref{eq:TMIO-StrongDual} and~\eqref{eq:TMIO-DualFeas} hold without explicitly including them in the model. Based on these properties, Theorem~\ref{thm:MIP_general} shows an equivalent reformulation of the $\MIO$ problem when the half-space $\mathcal{C}$ is considered as a known constraint.
\begin{theorem}\label{thm:MIP_general}
Solving $\MIO$ is equivalent to solving the following problem when \add{$\mathcal{S} =\mathcal{C} \cap \mathcal{G} = \{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{c} ' \mathbf{x} \geq \mathbf{c} ' \mathbf{x} ^0, \, \mathbf{G} \mathbf{x} \geq \mathbf{h} \}$ is the set of known constraints} of $\mathbf{FO}$.
\begin{subequations} \label{eq:MIP_general}
\begin{align}
{\mathbf \eMIO:~}\underset{ \mathbf{A} , \mathbf{b} }
{\normalfont{\text{minimize}}} \quad & \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ) \\
{\normalfont{\text{subject to}}} \quad & \mathbf{a} _{i}' \, \mathbf{x} ^k \geq b_i, \qquad \forall i \in \mathcal{I}_1,
\quad k \in \mathcal{K}
\label{eq:mip1}\\
& || \mathbf{a} _i|| = 1, \qquad \forall i \in \mathcal{I}_1, \label{eq:MIPnorm}\\
& {\color{myGreen} \mathbf{A} \in \mathbb{R}^{m_1 \times n}, \quad \mathbf{b} \in \mathbb{R}^{m_1}.} \label{eq:MIPboxConst}
\end{align}
\end{subequations}
\end{theorem}
\proof{Proof.}
({\it i}\,) If $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ is a solution of $\MIO$, then $( \mathbf{A} , \mathbf{b} )$ is a solution to the $\eMIO$ formulation since~\eqref{eq:mip1}--\eqref{eq:MIPboxConst} are also constraints of $\MIO$.
({\it ii}\,) Conversely, for the pair $( \mathbf{A} , \mathbf{b} )$ that is a solution to $\eMIO$, let $ \mathbf{w} = (1, 0, \dots, 0)$, $ \mathbf{y} = (0, 0, \dots, 0)$. The solution $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ is feasible for $\MIO$ since by Proposition~\ref{prop:CG}, the strong duality constraint \eqref{eq:TMIO-StrongDual} and the dual feasibility constraint \eqref{eq:TMIO-DualFeas} hold through the inclusion of the half-space $\mathcal{C}$ as a known constraint in $\mathcal{S}$. Therefore, by ({\it i}\,) and ({\it ii}\,), solving $\eMIO$ is equivalent to solving $\MIO$.
\Halmos \endproof
Theorem~\ref{thm:MIP_general} shows that by considering the half-space as one of the known constraints, instead of solving the bilinear $\MIO$ problem, we can solve a simpler problem that does not explicitly include the strong duality and dual feasibility constraints and hence, does not have any bilinear terms. Note that there are multiple ways to re-write constraint~\eqref{eq:MIPnorm} based on the particular application and the desired properties of the resulting model. Depending on the type of normalization constraint used in~\eqref{eq:MIPnorm}, the complexity of the corresponding $\eMIO$ formulation would differ. For example, popular norms such as $L_1$ or $L_2$ would yield linearly- or quadratically-constrained problems, respectively.
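To make the structure of $\eMIO$ concrete, the short sketch below checks whether a candidate pair $( \mathbf{A} , \mathbf{b} )$ satisfies the primal feasibility constraints~\eqref{eq:mip1} and an $L_2$ version of the normalization~\eqref{eq:MIPnorm}. All data and names are hypothetical, and the use of NumPy is an implementation assumption, not part of the formulation.

```python
import numpy as np

def is_emio_feasible(A, b, X, tol=1e-9):
    """Check the eMIO constraints for a candidate (A, b):
    primal feasibility a_i' x^k >= b_i for every observation x^k,
    and the L2 normalization ||a_i|| = 1 for every row of A.
    X holds one observation per row."""
    A, b, X = np.asarray(A, float), np.asarray(b, float), np.asarray(X, float)
    primal_ok = np.all(X @ A.T >= b - tol)                 # a_i' x^k >= b_i, all i, k
    norm_ok = np.allclose(np.linalg.norm(A, axis=1), 1.0, atol=tol)
    return primal_ok and norm_ok

# Hypothetical 2-D instance: four observations, one candidate constraint.
X = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
A = np.array([[1.0, 1.0]]) / np.sqrt(2.0)                  # unit-normalized row
b = np.array([np.sqrt(2.0)])                               # tight at (1, 1)
print(is_emio_feasible(A, b, X))
```

Any pair passing this check is a feasible solution of $\eMIO$ and hence identifies a valid feasible set together with $\mathcal{S}$.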
Remark~\ref{rem:eMIO} highlights that any valid feasible set for $\mathbf{FO}$ can be a solution to $\eMIO$. This property is intuitive since the $\eMIO$ formulation only includes the primal feasibility constraints for all observations. Hence, the feasible region of $\eMIO$ reduces to the set of valid feasible sets of $\mathbf{FO}$. Therefore, by the inclusion of $\mathcal{C}$ in the known constraints, the complexity of the problem reduces to only finding valid feasible sets of $\mathbf{FO}$ through the $\eMIO$ formulation.
\begin{remark}\label{rem:eMIO}
Any valid feasible set $\mathcal{X}$ of $\mathbf{FO}$ is an imputed feasible set to $\eMIO$.
\end{remark}
\proof{Proof.}
The set $\mathcal{X}$ is a valid feasible set, and the set of known constraints in $\eMIO$ is $\mathcal{S}$. Therefore, by Proposition~\ref{prop:CG}, the set $\mathcal{X} \cap \mathcal{S}$ is an imputed feasible set to $\eMIO$.
\Halmos \endproof
We finally note that solving $\eMIO$ provides constraint parameters $ \mathbf{A} $ and $ \mathbf{b} $ such that $ \mathbf{A} \mathbf{x} \geq \mathbf{b} $ along with the set of known constraints $\mathcal{S}$ shape the imputed feasible region of $\mathbf{FO}$. In other words, any solution to $\eMIO$ can identify an imputed feasible set of $\MIO$ by first finding the constraint parameters $ \mathbf{A} $ and $ \mathbf{b} $ and then finding the intersection of these constraints with the known constraints. Remark~\ref{cor:eMIO} formalizes this concept.
\begin{remark}\label{cor:eMIO}
An imputed feasible set of $\MIO$ can be derived as $\{ \mathbf{x} \in \mathbb{R}^n |~ \mathbf{A} \mathbf{x} \geq \mathbf{b} , \mathbf{x} \in \mathcal{S}\}$ for any solution $( \mathbf{A} , \mathbf{b} )$ to $\eMIO$.
\end{remark}
In this section, we showed that the solution to the $\MIO$ (or $\IO$) formulation can be found by first solving the $\eMIO$ formulation and then deriving the corresponding imputed feasible set through adding the known constraints. The complexity of $\eMIO$ depends on the type of norm in constraint~\eqref{eq:MIPnorm} and the complexity of the loss function. The $\eMIO$ problem will be a linearly-constrained model if a linear norm is used. In the next section, we focus on different types of loss functions and provide specific examples of measures to induce the characteristics of the feasible region.
\section{Loss Functions} \label{sec:measures}
The $\MIO$ formulation minimizes an objective function $ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} )$ which affects the optimal solution $( \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} )$ and hence, drives the desirable properties of the imputed feasible set of $\mathbf{FO}$. This imputed feasible set for the forward problem may take various shapes and forms based on the given parameter set ($ \mathscrsfs{D} $) and the objective function ($ \mathscr{F} $). In this section, we introduce several loss functions that can be used based on the available information on the constraints.
Note that all the models introduced in our framework include a generalized loss function $ \mathscr{F} $, and this function can be tailored by the user to induce properties for the application domain at hand. As shown in Section~\ref{sec:Methodology}, the solutions to both the $\IO$ and $\MIO$ formulations can be found by solving the $\eMIO$ model. Therefore, without loss of generality, we use the $\eMIO$ formulation to develop the theoretical properties of models with different loss functions in this section.
In the literature of inverse optimization, a \emph{prior belief} on the constraint parameter is defined as reasonable or desired values for the constraint parameters, and the inverse problem often attempts to minimize the distance of the parameters from such belief. In our framework, we do not necessarily require the user to provide any such prior belief on the parameters. Therefore, for any unknown constraint in $\mathbf{FO}$, we consider two cases: ({\it i}\,) a prior belief on the constraint is available, and ({\it ii}\,) no such prior information exists. For case ({\it i}\,), which has been considered in the literature, we discuss a specialization of our general loss function that allows the user to minimally perturb these prior beliefs. In case ({\it ii}\,), where no information on the constraint is assumed, we show that it is possible to find a large variety of imputed feasible sets for $\MIO$. We introduce different loss functions that aim to find the appropriate constraints when no prior belief on the constraint parameters is available.
In the rest of this section, we first discuss the theoretical properties of imputed feasible sets of $\eMIO$ when a prior belief on the constraints is available. Next, we present and discuss different loss functions that can be employed in the absence of a prior belief.
\subsection{Prior Belief on Constraints Available} \label{sec:priorB}
When a prior belief on the constraint parameters is available, the objective of the inverse problem is often to minimize some measure of distance ({\it e.g.}, norm) of the imputed constraint parameters from that prior belief. In this section, we study the use of prior belief as a loss function in the objective of our $\eMIO$ model. We refer to this loss function as the {\it Adherence Measure}.
Let the assumed prior belief on the constraint parameters, denoted as $\hat{\bA}$ and $\hat{\bb}$, be given as the input parameter $ \mathscrsfs{D} $. For ease of notation, let $\Delta = [ \mathbf{A} \,\,\, \mathbf{b} ]$ be the matrix that appends the column $ \mathbf{b} $ to the matrix $ \mathbf{A} $, and let $\hat{\Delta} = [\hat{\bA} \,\,\, \hat{\bb}]$ be the corresponding prior belief. We define the Adherence Measure~as the loss function that captures the distance of $\Delta$ from the prior belief $\hat{\Delta}$ according to some norm $||\cdot ||$ as follows:
\[ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ) = \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \hat{\Delta}) = \sum_{i \in \mathcal{I}_1}\omega_i\, ||\Delta_i - \hat{\Delta}_i ||. \qquad \tag{Adherence Measure}\]
Parameter $\omega_i$ is the objective weight capturing the relative importance of constraint $i$, and $\Delta_i$ and $\hat{\Delta}_i$ are the $i^\text{th}$ rows of matrices $\Delta$ and $\hat{\Delta}$, respectively.
Proposition~\ref{prop:prior2} shows that the $\eMIO$ model with the Adherence Measure~can be decomposed into solving a series of smaller problems for each of the $m_1$ unknown constraints.
\begin{proposition}\label{prop:prior2}
The optimal solution of $\eMIO$ with the Adherence Measure~can be found by solving the following problem $m_1$ times, once for each $i \in \mathcal{I}_1 = \{1,\dots,m_1 \}$:
\begin{subequations} \label{eq:prior}
\begin{align}
\underset{ \mathbf{a} _i, b_i }{\normalfont\text{minimize}} \quad & ||\Delta_i - \hat{\Delta}_i || \\
{\normalfont\text{subject to}} \quad & \mathbf{a} _{i}' \, \mathbf{x} ^k \geq b_i, \quad \forall k \in \mathcal{K}
\label{eq:prior1i}\\
& || \mathbf{a} _i|| = 1.
\end{align}
\end{subequations}
\end{proposition}
\proof{Proof.} The objective of the $\eMIO$ problem with the Adherence Measure~is a sum over the constraints $i \in \mathcal{I}_1$, and each term, together with the corresponding primal feasibility and normalization constraints, involves only the pair $( \mathbf{a} _i, b_i)$. The problem is therefore separable for each constraint $i$, which means problem~\eqref{eq:prior} can be solved $m_1$ times to recover each $ \mathbf{a} _i$ and $b_i$ independently.
\Halmos \endproof
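For a concrete prototype of subproblem~\eqref{eq:prior}, the sketch below minimally perturbs a hypothetical prior belief under the $L_2$ norm. The use of SciPy's SLSQP routine and of the squared distance (which has the same minimizer as the norm itself) are implementation assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_one_constraint(a_hat, b_hat, X):
    """One per-constraint subproblem: minimally perturb the prior
    (a_hat, b_hat) subject to a' x^k >= b for every observation x^k
    (rows of X) and the normalization ||a||_2 = 1."""
    n = len(a_hat)
    prior = np.append(a_hat, b_hat)
    z0 = np.append(a_hat / np.linalg.norm(a_hat), b_hat)   # normalized start

    def loss(z):                    # squared L2 distance from the prior
        d = z - prior
        return d @ d

    cons = (
        {"type": "ineq", "fun": lambda z: X @ z[:n] - z[n]},          # a'x^k >= b
        {"type": "eq", "fun": lambda z: np.linalg.norm(z[:n]) - 1.0}, # ||a|| = 1
    )
    res = minimize(loss, z0, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]

# Hypothetical observations and a prior belief that is already valid:
X = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
a_hat, b_hat = np.array([1.0, 0.0]), 1.0     # x_1 >= 1 holds at every x^k
a_opt, b_opt = fit_one_constraint(a_hat, b_hat, X)
print(a_opt, b_opt)      # a valid prior should be returned (near-)unchanged
```

When the prior is invalid ({\it e.g.}, $b_{\hat{}} = 1.5$ above), the routine instead returns a minimally perturbed, feasible hyperplane.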
If the prior belief is not a valid feasible set, then at least one of the observations $ \mathbf{x} ^k, \, k \in \mathcal{K}$ is positioned outside of the prior belief. Therefore, $\hat{\Delta}$ needs to be minimally perturbed to generate a valid feasible set. This is a prevalent occurrence in practice since although a set of {\it a priori} constraints might be available, in reality, these constraints might be too tight to hold for all observations.
If the set identified by the prior belief $\hat{\Delta}$ is a valid feasible set, {\it i.e.}, $\hat{\bA} \mathbf{x} ^k \geq \hat{\bb}, \, \forall k \in \mathcal{K}$, then Proposition~\ref{prop:priorValidSet} shows that there is a closed-form solution to $\eMIO$.
\begin{proposition}\label{prop:priorValidSet}
If $\mathcal{X} = \{ \mathbf{x} \in \mathbb{R}^n \, |~ \hat{\bA} \mathbf{x} \geq \hat{\bb} \}$ is a valid feasible set, then $ \mathbf{A} = \hat{\bA}$ and $ \mathbf{b} = \hat{\bb}$ is an optimal solution to $\eMIO$ under the Adherence Measure.
\end{proposition}
\proof{Proof.}
By assumption, $\mathcal{X}$ is a valid feasible set and hence by Remark~\ref{rem:eMIO}, an imputed feasible set to $\eMIO$. Therefore, $\Delta = \hat{\Delta}$ is a feasible solution to $\eMIO$ with $ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ) = 0$ under the Adherence Measure. Hence, $ \mathbf{A} = \hat{\bA}$, $ \mathbf{b} = \hat{\bb}$ is an optimal solution to $\eMIO$.
\Halmos \endproof
The Adherence Measure, which is most often used in the literature, heavily relies on both the availability and the quality of the prior belief. In particular, if the quality of the prior belief is poor, it enforces the inverse optimization to fit the imputed feasible set to this poor-quality prior belief. In what follows, we propose and discuss other loss functions that can be employed if no quality prior belief is available for the constraint parameters.
\subsection{No Prior Belief on Constraints} \label{noPriorB}
In this section, instead of relying on a prior belief, we propose different loss functions that can incorporate other data ({\it e.g.}, observations) to find the solution of $\eMIO$. We start with a simple constraint satisfaction model, which is sometimes used in the literature of inverse optimization. We then propose three new loss functions that each result in different properties for the imputed feasible set of $\eMIO$. We also consider combining the loss functions to further refine the shape of the imputed feasible region.
\subsubsection{Indifference Measure}
If no preference and no information about the feasible region are given, {\it i.e.}, no data is provided to inform the shape of the imputed feasible set ($ \mathscrsfs{D} =[\,\,]$), then $\eMIO$ reduces to a feasibility problem by setting the objective function to zero, {\it i.e.}, \[ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ) = 0. \tag{Indifference Measure} \]
We refer to this loss function as the {\it Indifference Measure}.
\begin{proposition}\label{prop:feassol}
A closed-form optimal solution for $\eMIO$ with the Indifference Measure~is
\begin{align}
\mathbf{a} _{i} = \frac{ \mathbf{c} }{|| \mathbf{c} ||},
\qquad
b_i = \frac{ \mathbf{c} ' \mathbf{x} ^0 }{|| \mathbf{c} ||}, \qquad \forall i \in \mathcal{I}_1.
\end{align}
\end{proposition}
\noindent As shown in Section~\ref{sec:Methodology}, the solution above is feasible for $\eMIO$ and hence optimal under the Indifference Measure. Intuitively, any feasible solution to $\eMIO$ is an optimal solution in this case. This property is highlighted in Remark~\ref{prop-infinitefeas}.
\begin{remark}\label{prop-infinitefeas}
The $\eMIO$ formulation with the Indifference Measure~has an infinite number of optimal solutions.
\end{remark}
\proof{Proof.}
The convex hull $\mathcal{H}$ of the observations is a valid feasible set by definition, and hence, it is an imputed feasible set of $\eMIO$ under the Indifference Measure. By Remark~\ref{rem:validSet}, any set $\mathcal{X}$ such that $\mathcal{H} \subseteq \mathcal{X}$ is also a valid feasible set, and by Proposition~\ref{prop:CG}, $\mathcal{X}\cap \mathcal{C}$ is an imputed feasible set for $\MIO$ (and hence, for $\eMIO$). Therefore, $\eMIO$ has infinitely many imputed feasible sets, and accordingly, infinitely many optimal solutions.
\Halmos \endproof
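The closed-form solution of Proposition~\ref{prop:feassol} simply replicates the half-space $\mathcal{C}$ for every unknown constraint, and can be verified numerically. The sketch below uses data matching Numerical Case I of Section~\ref{sec:numericalexample}; the NumPy implementation is illustrative.

```python
import numpy as np

# Data of Numerical Case I: cost vector, preferred solution, observations.
c = np.array([-1.0, -1.0])
x0 = np.array([2.0, 2.0])
X = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0], [1.5, 1.5], [2.0, 2.0]])

# Closed form: every unknown constraint copies the half-space C,
# i.e. a_i = c / ||c|| and b_i = c'x0 / ||c||.
a = c / np.linalg.norm(c)
b = c @ x0 / np.linalg.norm(c)

print(np.linalg.norm(a))   # normalization ||a|| = 1 holds
print(X @ a - b)           # primal feasibility: all slacks nonnegative
```

The slack is exactly zero at $ \mathbf{x} ^0 = (2,2)$, confirming that the preferred observation lies on the imputed boundary.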
In practice, the Indifference Measure~may not be the loss function of choice if there exist some properties that are preferred for the feasible set of $\mathbf{FO}$. In the rest of this section, we introduce three other loss functions that can inform the shape of the imputed feasible set using the observations and discuss their properties.
\subsubsection{Adjacency Measure}
The \emph{Adjacency Measure}~finds a feasible region that has the smallest total distance from all of the observations. Here, the given parameter $ \mathscrsfs{D} $ is the matrix that includes all observations, $ \mathscrsfs{D} =[ \mathbf{x} ^1,\dots, \mathbf{x} ^K]$. This loss function minimizes the sum of the distances of each observation from all constraints. Let $d_{ik}$ denote the distance of each observation $ \mathbf{x} ^k, \, k\in \mathcal{K}$ from the identifying hyperplane of the $i^{\text{th}}$ constraint. The {\it Adjacency Measure}~is defined as
\[ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ) = \mathscr{F} ( \mathbf{A} , \mathbf{b} ; [ \mathbf{x} ^1,\dots, \mathbf{x} ^K])=\sum_{k=1}^{K} \sum_{i=1}^{m_1} d_{ik}, \quad \tag{Adjacency Measure}\]
where the distance $d_{ik}$ can be calculated using any distance metric, for example, the \emph{Euclidean distance}, or the \emph{slack distance} defined as $d_{ik}~=~ \mathbf{a} _{i}' \, \mathbf{x} ^{k}- b_{i}$. Similar to the Adherence Measure, this loss function is separable for each constraint, and hence, the resulting $\eMIO$ model can be decomposed and solved for each constraint independently, as shown in Proposition \ref{prop:mindist}.
\begin{proposition}\label{prop:mindist}
The optimal solution of $\eMIO$ with the Adjacency Measure~can be found by solving the following problem $m_1$ times, for each $i\in\mathcal{I}_1$:
\begin{subequations}
\begin{align}
\underset{ \mathbf{a} _i, b_i} {\normalfont\text{minimize}} \quad &\sum_{k=1}^{K}
d_{ik} \\
{\normalfont\text{subject to}} \quad & \mathbf{a} _{i}' \, \mathbf{x} ^k \geq b_i, \quad \forall k \in \mathcal{K}
\label{eq:mindist:i}\\
& || \mathbf{a} _i|| = 1.
\end{align}
\end{subequations}
\end{proposition}
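Under the slack distance and a linear normalization, each subproblem in Proposition~\ref{prop:mindist} is a linear program. The sketch below assumes the proxy $\sum_j a_{ij}=1$ (one sign case of the normalization used later in Section~\ref{sec:numericalexample}) and SciPy's \texttt{linprog}; the data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def adjacency_constraint(X):
    """One Adjacency-Measure subproblem: minimize sum_k (a'x^k - b)
    subject to a'x^k >= b for all observations (rows of X) and the
    linear normalization sum_j a_j = 1. Decision vector z = (a, b)."""
    K, n = X.shape
    obj = np.append(X.sum(axis=0), -float(K))          # sum_k a'x^k - K*b
    A_ub = np.hstack([-X, np.ones((K, 1))])            # -a'x^k + b <= 0
    b_ub = np.zeros(K)
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)   # sum_j a_j = 1
    b_eq = np.array([1.0])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[n], res.fun

# Hypothetical observations at the corners of the unit square shifted to (1,1).
X = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
a, b, total_slack = adjacency_constraint(X)
print(a, b, total_slack)   # total slack 2.0: a supporting line through (1,1)
```

The objective is nonnegative by the primal feasibility constraints, so the LP always returns a supporting hyperplane with the smallest total slack.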
\subsubsection{Fairness Measure}
This loss function aims to find a feasible set {such that all of its constraints are} equally close to all observations and hence, is ``fair''. Using the same notations as those in the Adjacency Measure, we calculate $d_{ik}$ as the distance of each observation $k$ from the identifying hyperplane of each constraint $i$. We then calculate the \add{total} distance for each observation, $d_k=\sum_{i=1}^{m_1} d_{ik}$. The {\it Fairness Measure}~is
\[ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ) = \mathscr{F} ( \mathbf{A} , \mathbf{b} ; [ \mathbf{x} ^1,\dots, \mathbf{x} ^K])=\sum_{k\in\mathcal{K}}{\Big| d_k-\frac{1}{K}\sum_{k'\in \mathcal{K}}d_{k'}\Big|}. \quad \tag{Fairness Measure} \]
This measure minimizes the deviation of the total distances for all observations and ensures that all observations have roughly the same total distance from all constraints. The Fairness Measure~avoids cases where the constraints are all on one side of the observations and far away from others, and hence, it typically results in imputed feasible sets that are more confined compared to the Adjacency Measure.
\subsubsection{Compactness Measure}
The {\it Compactness Measure}~tries to find the constraint parameters such that the minimum distance of each observation from all of the constraints is minimized. In other words, it tries to ensure that each observation is close to at least one constraint, if possible ({\it i.e.}, if the observation is not an interior point). Again, let $d_{ik}$ be the distance of observation $k$ from the identifying hyperplane of constraint $i$. The Compactness Measure~is defined as
\[ \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ) = \mathscr{F} ( \mathbf{A} , \mathbf{b} ; [ \mathbf{x} ^1,\dots, \mathbf{x} ^K])=\sum_{k\in\mathcal{K}}{ \min_{i\in\mathcal{I}_1} d_{ik}}. \quad \tag{Compactness Measure}\]
Minimizing the Compactness Measure~can be written as $\, \min \sum_{k\in\mathcal{K}}{ \min_{i\in\mathcal{I}_1} d_{ik}}$, and this min-min objective can be reformulated using auxiliary binary variables. The resulting $\eMIO$ formulation under the Compactness Measure~is as follows:
\begin{subequations}
\begin{align}
\text{minimize} \quad & \sum_{k \in \mathcal{K}} m_k \label{eq:minmin_obj}\\
\text{subject to} \quad & d_{ik} = \sum_{j \in \mathcal{J}}a_{ij} x^k_{i} - b_i \quad \forall i\in \mathcal{I}_1, \, k \in \mathcal{K} \label{eq:minmin_const_first}\\
& m_k \geq d_{ik} - M \gamma_{ik} , \quad \forall i\in \mathcal{I}_1, \, k \in \mathcal{K}\\
& \sum_{i \in \mathcal{I}_1} \gamma_{ik} = |\mathcal{I}_1| - 1, \quad \forall k \in \mathcal{K} \\
& \gamma_{ik} \in \{0,1\}, \quad \forall i \in \mathcal{I}_1, k \in \mathcal{K} \\
& m_k \geq 0, \quad \forall k \in \mathcal{K} \label{eq:minmin_const_last}\\
&\eqref{eq:mip1}-\eqref{eq:MIPboxConst}.
\end{align}
\end{subequations}
Note that the resulting model is a mixed-integer linear program if a linear norm is used as constraint~\eqref{eq:MIPnorm} of $\eMIO$.
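For a fixed candidate $( \mathbf{A} , \mathbf{b} )$, the Compactness Measure~itself is straightforward to evaluate, which clarifies exactly what the big-M binaries in the model above linearize. The sketch below uses slack distances and hypothetical data.

```python
import numpy as np

def compactness(A, b, X):
    """Compactness Measure for a fixed candidate (A, b): for each
    observation take the slack distance to its *closest* constraint,
    d_ik = a_i'x^k - b_i, then sum over observations. The big-M MILP
    in the text linearizes exactly this inner minimum."""
    D = X @ np.asarray(A, float).T - np.asarray(b, float)   # K x m1 slacks
    return D.min(axis=1).sum()

# Two hypothetical unit-normalized constraints around three observations.
A = np.array([[1.0, 0.0], [0.0, 1.0]])     # x_1 >= 1 and x_2 >= 1
b = np.array([1.0, 1.0])
X = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])
print(compactness(A, b, X))                # 0 + 0 + 0.5 = 0.5
```

The first two observations lie on a constraint boundary and contribute zero; only the interior point $(1.5, 1.5)$ adds to the measure.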
\subsubsection{Combined Loss Functions}
In the literature, inverse optimization formulations tend to produce multiple optimal solutions and return one of them arbitrarily as the optimal solution. Our inverse optimization formulations often demonstrate this property as well, even when the previously-mentioned loss functions are imposed. Given that the $\eMIO$ formulation is tractable, we can utilize the multi-optimum property of inverse optimization to further calibrate the shape and characteristics of the imputed feasible set of $\mathbf{FO}$ by combining multiple loss functions.
Typical approaches for combining different objective functions include using multi-objective optimization and using sequential objectives. The former is trivial to implement but introduces challenges such as deciding on the weights of each of the multiple objectives and is still prone to generating multiple optimal solutions that do not necessarily reflect the desired characteristics. The latter approach (also referred to as {\it secondary objective}) is our suggested method since each iteration narrows down the solution space to further fine-tune the solution to the specific characteristics of interest. We note that this approach does not require significant additional computational burden given that the $\eMIO$ formulation is linearly-constrained for some popular norms ({\it e.g.}, $L_1$) and multiple instances can be solved sequentially.
In the secondary objective approach, the model is solved for a loss function of choice, say $ \mathscr{F} _1$, where the optimal value of $ \mathscr{F} _1^*$ is achieved. Then, to select those imputed feasible sets whose corresponding solutions generate the same optimal value of $ \mathscr{F} _1^*$ but possess other desired properties as well, the $\eMIO$ is solved again with a new loss function $ \mathscr{F} _2$ and an additional constraint of $ \mathscr{F} _1( \mathbf{A} , \mathbf{b} , \mathscrsfs{D} ) = \mathscr{F} _1^*$. This process can be repeated for as many loss functions as desired.
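The secondary-objective pattern is generic and can be sketched on a toy linear program: solve for $ \mathscr{F} _1$, then re-solve for $ \mathscr{F} _2$ with $ \mathscr{F} _1$ pinned at its optimal value (a small tolerance on the pinning constraint guards against numerical infeasibility). The objectives and feasible set below are hypothetical, and SciPy's \texttt{linprog} is one possible solver choice.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical feasible set: the unit box 0 <= x <= 1 in R^2.
bounds = [(0.0, 1.0), (0.0, 1.0)]
f1 = np.array([1.0, 0.0])        # primary loss F1: minimize x_1
f2 = np.array([0.0, -1.0])       # secondary loss F2: maximize x_2

# Stage 1: optimize the primary loss F1.
stage1 = linprog(f1, bounds=bounds)
f1_star = stage1.fun             # = 0, attained by any point (0, x_2)

# Stage 2: optimize F2 over the stage-1 optimal face, F1(x) <= F1* + eps.
eps = 1e-9
stage2 = linprog(f2, A_ub=f1.reshape(1, -1), b_ub=[f1_star + eps],
                 bounds=bounds)
print(stage2.x)                  # selects (0, 1) among the stage-1 optima
```

Stage 1 alone leaves the solution underdetermined on the face $x_1 = 0$; the pinned second stage resolves this degeneracy, mirroring how a secondary loss function refines the imputed feasible set.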
In the next section, we test our approach on two numerical examples and compare the results for different loss functions. We then provide an example of combining the loss functions, by choosing the Fairness Measure~as the primary objective $ \mathscr{F} _1$ and the Adjacency Measure~as the secondary objective $ \mathscr{F} _2$. In this case, we find solutions with the best Fairness Measure~value that are also closer to all observations with regards to the Adjacency Measure.
\section{Numerical Results} \label{sec:numericalexample}
In this section, we test our methodology on two illustrative two-dimensional (2D) numerical examples \add{and a larger-scale diet recommendation case study}. For the ease of visualization, we use the 2D datasets ($n=2$) to graphically show the observations, the feasible region, and the objective vector. In the first example, we consider a small number of equidistant observations ($K=5$) that form a symmetric convex hull. For this example, the inverse solutions are easy to find by visual inspection, which allows us to understand the intuition behind the solutions generated under each loss function and compare their characteristics. The second numerical example considers a relatively larger set of observations ($K = 19$) that are randomly placed and their convex hull has an arbitrary shape. This example further elaborates on the insights from each of the introduced loss functions under non-trivial cases. In each of the two examples, we consider multiple known and unknown constraints in the $\mathbf{FO}$ problem. We solved the first example using both the $\MIO$ and $\eMIO$ formulations which confirmed the equivalence of the results for the two models. The second example, however, was only solved using the $\eMIO$ model because the commercial solver we used was not able to solve the larger non-linear model to optimality. \add{Finally, we apply the $\eMIO$ formulation to a much larger example of a diet recommendation application. In this case study, we consider $K=100$ observations of a dieter's daily food intake from a set of $n=26$ food items. We consider a set of known nutrient constraints and impute multiple implicit constraints of the dieter. We compare the palatability of the resulting diet recommendations with and without the imputed constraints. }
In our numerical results, we use the $L_2$~norm in the Adherence Measure~since it is a popular norm used in the literature. For all other loss functions, for simplicity, we use the linear slack distance ({\it i.e.}, $d_{ik}= \mathbf{a} _i'\, { \mathbf{x} ^k} - b_i$) to calculate the distance of a given point $ \mathbf{x} ^k$ to the identifying hyperplane of the $i^{\text{th}}$ constraint ({\it i.e.}, $ \mathbf{a} _i' \, \mathbf{x} = b_i$). We note that there exist other linear distance metrics ({\it e.g.}, $L_{\infty}$~norm) that can be used, but we find the slack distance to be more illustrative in a two-dimensional setting.
For the normalization constraint~\eqref{eq:MIOnorm}, we use $|\sum_{j\in \mathcal{J}}a_{ij}|=1$ as a proxy for the $L_1$~norm ({\it i.e.}, $\sum_{j\in \mathcal{J}}|a_{ij}|=1$). We chose this normalization method instead of the $L_1$~norm to reduce the number of auxiliary binary variables to only $2n$ (as opposed to $2n\,(m_1+m_2)$) when reformulating it as a linear mixed-integer model.
\subsection{Numerical Case I}\label{sec:NumCase1}
In the first numerical case, we have 5 observations, as listed in Table~\ref{tab:case1}. There are two known constraints with the first one being the half-space $\mathcal{C}$, as discussed in Section~\ref{sec:Methodology}.
The $\MIO$ {and $\eMIO$} models were solved using the nonlinear solver {MINOS}~(\citeyear{saunders2003minos}) Version~5.51 and {CPLEX}~(\citeyear{cplex}) Version~12.9, respectively. Both models were formulated using AMPL~(\citeyear{ampl}) modeling language Version~20190529. While MINOS and other nonlinear solvers are sometimes capable of solving small-scale instances to optimality, they often fail to provide a global optimal solution in larger cases. For this numerical example, {MINOS} was able to solve the $\MIO$ model to optimality, and the $\MIO$ and $\eMIO$ solutions confirmed the same solutions in all instances.
\begin{table}[htbp]
\centering
\begin{tabular}{l l}
\toprule
{\bf Description} & {\bf Value(s)} \\
\midrule
Cost vector $ \mathbf{c} $ & $(-1, -1)$ \\
\midrule
Observations $ \mathbf{x} ^0$; $ \mathbf{x} ^k \qquad$ & $\mathbf{(2,2)}$; $(1,1)$, $(1,2)$, $(2,1)$, $(1.5,1.5)$\\
\midrule
Known constraints
& $0.5 x_1 + 0.5 x_2 \leq 2\quad$ (half-space $\mathcal{C}$) \\
& $x_1 + x_2 \geq 1 $ \\
\midrule
Unknown constraints & 4 constraints\\
\bottomrule
\end{tabular}
\caption{Numerical Case I}
\label{tab:case1}
\end{table}
For this example, we present the results for the Adherence Measure, the four loss functions available when no prior belief is provided, and an example of the combined loss functions, in Figures~\ref{fig:priorB},~\ref{fig:noPriorB}, and~\ref{fig:secondObj}, respectively. In these figures, black dots denote the given observations $ \mathbf{x} ^k$, $\forall k \in \mathcal{K}$, and the preferred observation ($ \mathbf{x} ^0$) is highlighted in red. The blue solid lines are the hyperplanes corresponding to the given prior belief parameters ($\hat{\Delta}$), the dotted red lines represent the known constraints ($\mathcal{S}$), and the dashed black lines demonstrate the constraints found by the inverse optimization model. The resulting imputed feasible set of $\MIO$ (including the known constraints) is marked as a shaded area.
Figure~\ref{fig:priorB} shows the results under the Adherence Measure~for three different possible feasible regions as the prior belief ($\hat{\Delta}$). With the Adherence Measure, the goal of the inverse optimization model is to minimally perturb $\hat{\Delta}$ to ensure all the observations are feasible and $ \mathbf{x} ^0$ is optimal. In Figure~\ref{fig:priorB:a}, the given prior belief $\hat{\Delta}$ identifies a valid feasible set, and as shown in Proposition~\ref{prop:priorValidSet}, the optimal solution $\Delta$ is the same as the prior belief $\hat{\Delta}$. On the contrary, in Figures~\ref{fig:priorB:b} and~\ref{fig:priorB:c}, the given prior beliefs are not valid feasible sets. In Figure~\ref{fig:priorB:b}, the set identified by $\hat{\Delta}$ is a subset of $\mathcal{S}$, while in Figure~\ref{fig:priorB:c}, a part of the prior-belief set is infeasible for the known constraints and hence not contained in $\mathcal{S}$. In both cases, the solution $\Delta$ is a minimally perturbed $\hat{\Delta}$ that makes all the observations feasible. The resulting imputed feasible set ({\it i.e.}, the shaded area) is derived from $\Delta$ and is a subset of $\mathcal{S}$.
\begin{figure}[htbp]
\centering
\subfigure[\label{fig:priorB:a}]{
\begin{tikzpicture} [thick, scale=.95]
\draw[->, >=stealth] (-1,0) -- (3,0);
\draw[->, >=stealth] (0,-.5) -- (0,3);
\draw [fill, color=black] (1.5,1.5) circle [radius=0.07];
\draw [fill, color=black] (1,1) circle [radius=0.07];
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (1,2) circle [radius=0.07];
\draw [fill, color=red] (2,2) circle [radius=0.07];
\node[anchor=south, color=red] at (2.2 , 2) (coord) {$ \mathbf{x} ^0$};
\draw[<-] (3.2,2.2) coordinate -- (3.5,2.5) node[anchor= north west] {$ \mathbf{c} $};
\draw[-] (3.6, 2.4) -- (3.4, 2.6);
\draw[color=blue, very thick] (0.5,0.5) -- (0.5,2.5) -- (2.5,2.5) -- (2.5, 0.5) -- cycle;
\draw[dashed, very thick] (-.5, .5) -- (3.2, .5);
\draw[dashed, very thick] (-.5, 2.5) -- (3.2, 2.5);
\draw[dashed, very thick] (.5, -.5) -- (.5, 3.2);
\draw[dashed, very thick] (2.5, -.5) -- (2.5, 3.2);
\draw[dotted, ultra thick, red] (1, 3) -- (3, 1);
\node[anchor=south east, color=red] at (1.1,2.9) (coord) {$\mathcal{C}$};
\draw[dotted, ultra thick, red] (1.5,-0.5 ) -- (-0.5, 1.5);
\fill[pattern=north east lines, pattern color=gray] (.5, .5) -- (.5,2.5) -- (1.5, 2.5) -- (2.5, 1.5) -- (2.5,.5) -- cycle;
\end{tikzpicture}
}
\subfigure[\label{fig:priorB:b}]{
\begin{tikzpicture} [thick, scale=.95]
\draw[->, >=stealth] (-1,0) -- (3,0);
\draw[->, >=stealth] (0,-.5) -- (0,3);
\draw [fill, color=black] (1.5,1.5) circle [radius=0.07];
\draw [fill, color=black] (1,1) circle [radius=0.07];
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (1,2) circle [radius=0.07];
\draw [fill, color=red] (2,2) circle [radius=0.07];
\node[anchor=south, color=red] at (2.2 , 2) (coord) {$ \mathbf{x} ^0$};
\draw[<-] (3.2,2.2) coordinate -- (3.5,2.5) node[anchor= north west] {$ \mathbf{c} $};
\draw[-] (3.6, 2.4) -- (3.4, 2.6);
\draw[color=blue, very thick] (1.15,1.15) -- (1.15, 1.85) -- (1.85, 1.85) -- (1.85, 1.15) -- (1.15, 1.15) -- cycle;
\draw[dashed, very thick] (-0.5, 1) -- (2.7, 1);
\draw[dashed, very thick] (-.7, 2) -- (3.2, 2);
\draw[dashed, very thick] (1, -.5) -- (1, 2.5);
\draw[dashed, very thick] (2, -.5) -- (2, 2.5);
\draw[dotted, ultra thick, red] (1, 3) -- (3, 1);
\node[anchor=south east, color=red] at (1.1,2.9) (coord) {$\mathcal{C}$};
\draw[dotted, ultra thick, red] (1.5,-0.5 ) -- (-0.5, 1.5);
\fill[pattern=north east lines, pattern color=gray] (1,1)--(1,2)--(2,2)--(2,1)--cycle;%
\end{tikzpicture}
}
\subfigure[\label{fig:priorB:c}]{
\begin{tikzpicture} [thick, scale=.95]
\draw[->, >=stealth] (-1,0) -- (3,0);
\draw[->, >=stealth] (0,-.5) -- (0,3);
\draw [fill, color=black] (1.5,1.5) circle [radius=0.07];
\draw [fill, color=black] (1,1) circle [radius=0.07];
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (1,2) circle [radius=0.07];
\draw [fill, color=red] (2,2) circle [radius=0.07];
\node[anchor=south, color=red] at (2.2 , 2) (coord) {$ \mathbf{x} ^0$};
\draw[<-] (3.2,2.2) coordinate -- (3.5,2.5) node[anchor= north west] {$ \mathbf{c} $};
\draw[-] (3.6, 2.4) -- (3.4, 2.6);
\draw[color=blue, very thick] (-0.5,0.5) -- (1.5,0.5) -- (1.5,1.5) -- (-.5, 1.5) -- (-.5, 0.5) -- cycle;
\draw[dashed, very thick] (-.7, .5) -- (3.2, .5);
\draw[dashed, very thick] (-.7, 2) -- (3.2, 2);
\draw[dashed, very thick] (-.5, -.5) -- (-.5, 2.5);
\draw[dashed, very thick] (2, -.5) -- (2, 2.5);
\draw[dotted, ultra thick, red] (1, 3) -- (3, 1);
\node[anchor=south east, color=red] at (1.1,2.9) (coord) {$\mathcal{C}$};
\draw[dotted, ultra thick, red] (1.5,-0.5 ) -- (-0.5, 1.5);
\fill[pattern=north east lines, pattern color=gray] (.5, .5) -- (-.5,1.5) -- (-.5,2) -- (2,2) -- (2, .5) -- (.5,.5) -- cycle;%
\end{tikzpicture}
}
\caption{Results for Numerical Case I with {the} Adherence Measure. The subfigures illustrate different scenarios for the prior belief: (a) it is a valid feasible set, (b) it is not a valid feasible set but a subset of known constraints {$\mathcal{S}$}, and (c) it is neither a valid feasible set nor a subset of {$\mathcal{S}$}.
}
\label{fig:priorB}
\end{figure}
Figure~\ref{fig:priorB} confirms that the Adherence Measure~heavily relies on the quality of the prior belief on the constraint parameters. Particularly, when the prior belief is an unreasonably large valid feasible set far from the observations ({\it e.g.}, Figure~\ref{fig:priorB:a}, or two of the constraints in Figure~\ref{fig:priorB:c}) the inverse problem will always return the prior belief as the optimal solution. While this measure is the most commonly-used objective function of inverse problems in the literature, the results obtained may not be reliable if a high-quality prior belief does not exist.
\begin{figure}[htbp]
\centering
\subfigure[Indifference Measure \label{fig:noPriorB:a}]{
\begin{tikzpicture} [thick, scale=1]
\draw[->, >=stealth] (-1,0) -- (3,0);
\draw[->, >=stealth] (0,-.5) -- (0,3);
\draw [fill, color=black] (1.5,1.5) circle [radius=0.07];
\draw [fill, color=black] (1,1) circle [radius=0.07];
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (1,2) circle [radius=0.07];
\draw [fill, color=red] (2,2) circle [radius=0.07];
\node[anchor=south, color=red] at (2.2 , 2) (coord) {$ \mathbf{x} ^0$};
\draw[<-] (3.2,2.2) coordinate -- (3.5,2.5) node[anchor= north west] {$ \mathbf{c} $};
\draw[-] (3.6, 2.4) -- (3.4, 2.6);
\draw[dashed, very thick] (1, 3) -- (3, 1);
\node[anchor=west, color=black] at (3 , 1) (coord) {\scriptsize ($4\times$)};
\draw[dotted, ultra thick, red] (1, 3) -- (3, 1);
\node[anchor=south east, color=red] at (1.1,2.9) (coord) {$\mathcal{C}$};
\draw[dotted, ultra thick, red] (1.5,-0.5 ) -- (-0.5, 1.5);
\fill[pattern=north east lines, pattern color=gray] (-0.5, 1.5)--(1,3) -- (3,1) -- (1.5,-0.5) -- cycle;
\end{tikzpicture}
}
\subfigure[Adjacency Measure \label{fig:noPriorB:b}]{
\begin{tikzpicture} [thick, scale=1]
\draw[->, >=stealth] (-1,0) -- (3,0);
\draw[->, >=stealth] (0,-.5) -- (0,3);
\draw [fill, color=black] (1.5,1.5) circle [radius=0.07];
\draw [fill, color=black] (1,1) circle [radius=0.07];
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (1,2) circle [radius=0.07];
\draw [fill, color=red] (2,2) circle [radius=0.07];
\node[anchor=south, color=red] at (2.2 , 2) (coord) {$ \mathbf{x} ^0$};
\draw[<-] (3.2,2.2) coordinate -- (3.5,2.5) node[anchor= north west] {$ \mathbf{c} $};
\draw[-] (3.6, 2.4) -- (3.4, 2.6);
\draw[dashed, very thick] (-0.5, 1) -- (3.5, 1);
\node[anchor=west, color=black] at (3.5 , 1) (coord) {\scriptsize ($4\times$)};
\draw[dotted, ultra thick, red] (1, 3) -- (3, 1);
\node[anchor=south east, color=red] at (1.1,2.9) (coord) {$\mathcal{C}$};
\draw[dotted, ultra thick, red] (1.5,-0.5 ) -- (-0.5, 1.5);
\fill[pattern=north east lines, pattern color=gray] (-0.5, 1.5)--(1,3) -- (3,1) -- (0,1) -- cycle;
\end{tikzpicture}
}
\subfigure[Fairness Measure \label{fig:noPriorB:c}]{
\begin{tikzpicture} [thick, scale=1]
\draw[->, >=stealth] (-1,0) -- (3,0);
\draw[->, >=stealth] (0,-.5) -- (0,3);
\draw [fill, color=black] (1.5,1.5) circle [radius=0.07];
\draw [fill, color=black] (1,1) circle [radius=0.07];
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (1,2) circle [radius=0.07];
\draw [fill, color=red] (2,2) circle [radius=0.07];
\node[anchor=south, color=red] at (2.2 , 2) (coord) {$ \mathbf{x} ^0$};
\draw[<-] (3.2,2.2) coordinate -- (3.5,2.5) node[anchor= north west] {$ \mathbf{c} $};
\draw[-] (3.6, 2.4) -- (3.4, 2.6);
\draw[dashed, very thick] (.5, 3.5) -- (3, 1);
\draw[dashed, very thick] (1, 3.5) -- (1,.5);
\draw[dashed, very thick] (2, -.5) -- (2, 2.5);
\draw[dashed, very thick] (.5, 1.5) -- (2.5, -.5);
\draw[dotted, ultra thick, red] (1, 3) -- (3, 1);
\node[anchor=south west, color=red] at (1,2.9) (coord) {$\mathcal{C}$};
\draw[dotted, ultra thick, red] (1.5,-0.5 ) -- (-0.5, 1.5);
\fill[pattern=north east lines, pattern color=gray] (.5, .5) -- (1,3) -- (2, 2) -- (2, 0) -- (1,1) -- (1,3);
\end{tikzpicture}
}
\subfigure[Compactness Measure \label{fig:noPriorB:d}]{
\begin{tikzpicture} [thick, scale=1]
\draw[->, >=stealth] (-1,0) -- (3,0);
\draw[->, >=stealth] (0,-.5) -- (0,3);
\draw [fill, color=black] (1.5,1.5) circle [radius=0.07];
\draw [fill, color=black] (1,1) circle [radius=0.07];
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (1,2) circle [radius=0.07];
\draw [fill, color=red] (2,2) circle [radius=0.07];
\node[anchor=south, color=red] at (2.2 , 2) (coord) {$ \mathbf{x} ^0$};
\draw[<-] (3.2,2.2) coordinate -- (3.5,2.5) node[anchor= north west] {$ \mathbf{c} $};
\draw[-] (3.6, 2.4) -- (3.4, 2.6);
\draw[dashed, very thick] (-0.5, 1) -- (3.5, 1);
\draw[dashed, very thick] (-0.5, 2) -- (3.5, 2);
\draw[dashed, very thick] (1,-0.5 ) -- (1, 2.5);
\node[anchor=west, color=black] at (3.5 , 1) (coord) {\scriptsize ($2\times$)};
\draw[dotted, ultra thick, red] (1.5, 2.5) -- (3.5, .5);
\node[anchor=south east, color=red] at (1.6, 2.4) (coord) {$\mathcal{C}$};
\draw[dotted, ultra thick, red] (1.5,-0.5 ) -- (-0.5, 1.5);
\fill[pattern=north east lines, pattern color=gray]
(1, 1)--(1,2) -- (2,2) -- (3,1) -- cycle;
\end{tikzpicture}
}
\caption{Results for Numerical Case I with different loss functions.}
\label{fig:noPriorB}
\end{figure}
Figure~\ref{fig:noPriorB} illustrates the results for the other loss functions defined in Section~\ref{sec:measures}. Figure~\ref{fig:noPriorB:a} shows the results for the Indifference Measure, which has infinitely many optimal solutions. In our results, the four inferred constraints happened to be identical and equal to the half-space $\mathcal{C}$. Hence, the resulting inferred feasible region is equivalent to the set of known constraints~$\mathcal{S}$, which is the largest possible imputed feasible set (Remark~\ref{rem:cs}). Figures~\ref{fig:noPriorB:b}, \ref{fig:noPriorB:c}, and \ref{fig:noPriorB:d} show the results for the Adjacency Measure, the Fairness Measure, and the Compactness Measure, respectively. In Figure~\ref{fig:noPriorB:b}, the inferred constraints are four identical lines that pass through observations $(1,1)$ and $(2,1)$. On the contrary, Figure~\ref{fig:noPriorB:c} shows that employing the Fairness Measure~in the objective function results in four distinct constraints. For this example, the constraints have the same total distance from all observations and are hence distributed fairly. As speculated, this measure provides a confined feasible set for the $\mathbf{FO}$ problem. Finally, Figure~\ref{fig:noPriorB:d} illustrates the results for the Compactness Measure, which ensures that each observation is close to some constraint. In this case, each constraint passes through two of the observations, and due to symmetry, two of the constraints are identical.
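The Adjacency result in Figure~\ref{fig:noPriorB:b} can be checked directly. Reading the measure as the total Euclidean point-to-hyperplane distance (our interpretation of Section~\ref{sec:measures}; the exact definition may differ), the line $x_2 = 1$ through $(1,1)$ and $(2,1)$ accumulates a total distance of $0.5 + 0 + 0 + 1 + 1 = 2.5$ over the five observations of Numerical Case I:

```python
import numpy as np

# Observations of Numerical Case I; x^0 = (2, 2).
X = np.array([(1.5, 1.5), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0), (2.0, 2.0)])

def total_distance(a, b, X):
    """Sum of Euclidean distances |a.x - b| / ||a|| from the rows of X
    to the hyperplane a.x = b."""
    a = np.asarray(a, dtype=float)
    return float(np.abs(X @ a - b).sum() / np.linalg.norm(a))

# The line x2 = 1, i.e. a = (0, 1), b = 1, passes through (1,1) and (2,1).
d = total_distance((0.0, 1.0), 1.0, X)
print(d)  # 2.5
```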
Lastly, we used a combined loss function to provide additional control over the properties of the imputed feasible set as shown in Figure \ref{fig:secondObj}. In this example, we first imposed the Fairness Measure~to encourage similar total distances across different constraints, and then applied the Adjacency Measure~as a secondary objective. In other words, we search among those solutions with the optimal Fairness Measure~value that also have the minimum total distance between constraints and observations. As Figure~\ref{fig:secondObj} illustrates, the imputed feasible set is the same as the convex hull $\mathcal{H}$ in this case, which is the smallest possible imputed feasible set for $\mathbf{FO}$ (Lemma~\ref{lem:subset}).
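The combined loss amounts to a lexicographic selection: first optimize the primary measure, then break ties with the secondary one. A minimal finite-candidate sketch of that ordering (an illustration only, not the paper's $\MIO$ formulation, which optimizes over continuous constraint parameters):

```python
def lexicographic_argmin(candidates, primary, secondary, tol=1e-9):
    """Among (near-)minimizers of the primary loss, return one that also
    minimizes the secondary loss."""
    best_primary = min(primary(x) for x in candidates)
    tier = [x for x in candidates if primary(x) <= best_primary + tol]
    return min(tier, key=secondary)

# Toy illustration: the even numbers tie on the primary loss; the
# secondary loss -x then picks the largest even candidate.
print(lexicographic_argmin([0, 1, 2, 3], lambda x: x % 2, lambda x: -x))  # 2
```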
\begin{figure}[htbp]
\centering
\begin{tikzpicture} [thick, scale=1.3]
\draw[->, >=stealth] (-1,0) -- (3,0);
\draw[->, >=stealth] (0,-.5) -- (0,3);
\draw [fill, color=black] (1.5,1.5) circle [radius=0.07];
\draw [fill, color=black] (1,1) circle [radius=0.07];
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (1,2) circle [radius=0.07];
\draw [fill, color=red] (2,2) circle [radius=0.07];
\node[anchor=south, color=red] at (2.2 , 2) (coord) {$ \mathbf{x} ^0$};
\draw[<-] (3.2,2.2) coordinate -- (3.5,2.5) node[anchor= north west] {$ \mathbf{c} $};
\draw[-] (3.6, 2.4) -- (3.4, 2.6);
\draw[dashed, very thick] (-.5, 1) -- (2.5, 1);
\draw[dashed, very thick] (1, -.5) -- (1,2.5);
\draw[dashed, very thick] (-.5, 2) -- (2.5, 2);
\draw[dashed, very thick] (2, -.5) -- (2, 2.5);
\draw[dotted, ultra thick, red] (1.5, 2.5) -- (3.5, .5);
\node[anchor=south east, color=red] at (1.6,2.5) (coord) {$\mathcal{C}$};
\draw[dotted, ultra thick, red] (1.5,-0.5 ) -- (-0.5, 1.5);
\fill[pattern=north east lines, pattern color=gray] (.5, .5) -- (1,1) -- (1, 2) -- (2, 2) -- (2,1) -- (1,1);
\end{tikzpicture}
\caption{Result for Numerical Case I with a combined loss function, where the Fairness Measure~is the primary objective and the Adjacency Measure~is the secondary objective. }
\label{fig:secondObj}
\end{figure}
\subsection{Numerical Case II}\label{sec:NumCase2}
In this section, we test our approach on a relatively larger numerical example. This example considers 19 observations with $ \mathbf{x} ^0 = (1,1)$, two known constraints (the first being the half-space $\mathcal{C}$), and 6 unknown constraints. The details of this numerical example are summarized in Table~\ref{tab:case2}. Given the larger size of this example, the nonlinear solver {MINOS} was not able to find the global optimal solutions for most instances of the problem. Hence, we were only able to solve this example using the $\eMIO$ formulation, which further illustrates the importance and advantage of the proposed formulation in solving larger instances to optimality. This advantage is more pronounced when the normalization constraint~\eqref{eq:MIPnorm} is linear and the $\eMIO$ problem becomes a linearly-constrained optimization model.
\begin{table}[htbp]
\centering
\begin{tabular}{p{.28\linewidth} p{.6\linewidth}}
\toprule
{\bf Description} & {\bf Value(s)} \\
\midrule
Cost vector $ \mathbf{c} $ & $(1, 1)$ \\
\midrule
Observations $ \mathbf{x} ^0; \mathbf{x} ^k \qquad$ & $\mathbf{(1,1)}$; $(2,1)$, $(4,2)$, $(4,5)$, $(3,6)$, $(2,4)$, $(3,4)$, $(3,2)$, $(4,3)$, $(1,3)$, $(2,2.5)$, $(1,5)$, $(5,2.5)$, $(5,4)$, $(2.7,3.2)$, $(2.3,4.7)$, $(1.4,4.8)$, $(3.8,4.3)$, $(4.8,3.3)$ \\
\midrule
Known constraints
& $0.5 x_1 + 0.5 x_2 \geq 1 \quad$ (half-space $\mathcal{C}$) \\
& $-x_1 \geq -5 $ \\
\midrule
Unknown constraints & 6 constraints\\
\bottomrule
\end{tabular}
\caption{Numerical Case II}
\label{tab:case2}
\end{table}
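As a sanity check on the data in Table~\ref{tab:case2}: every observation satisfies the two known constraints, and $\mathbf{x}^0 = (1,1)$ attains the best (smallest, assuming a minimization forward problem) value of $\mathbf{c}^\top\mathbf{x}$ among the 19 observations.

```python
import numpy as np

# Observations of Numerical Case II; x^0 = (1, 1) is listed first.
obs = np.array([(1, 1), (2, 1), (4, 2), (4, 5), (3, 6), (2, 4), (3, 4),
                (3, 2), (4, 3), (1, 3), (2, 2.5), (1, 5), (5, 2.5), (5, 4),
                (2.7, 3.2), (2.3, 4.7), (1.4, 4.8), (3.8, 4.3), (4.8, 3.3)])
c = np.array([1.0, 1.0])

# Known constraints written as a.x >= b:
#   0.5 x1 + 0.5 x2 >= 1   (half-space C)
#   -x1 >= -5
A = np.array([[0.5, 0.5], [-1.0, 0.0]])
b = np.array([1.0, -5.0])

assert np.all(obs @ A.T >= b - 1e-9)   # all observations lie in S
assert np.argmin(obs @ c) == 0         # x^0 = (1,1) has the best objective
```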
\begin{figure}[htbp] \vspace{-2em}
\centering
\subfigure[Prior belief is valid \label{fig:largefigPrior:a}]
{%
\begin{tikzpicture}[thick, scale=1]
\draw[->, >=stealth] (-1,0) -- (7,0);
\draw[->, >=stealth] (0,-.5) -- (0,7);
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (4,2) circle [radius=0.07];
\draw [fill, color=black] (5,2.5) circle [radius=0.07];
\draw [fill, color=black] (5,4) circle [radius=0.07];
\draw [fill, color=black] (4,5) circle [radius=0.07];
\draw [fill, color=black] (3,6) circle [radius=0.07];
\draw [fill, color=black] (1,5) circle [radius=0.07];
\draw [fill, color=black] (2,4) circle [radius=0.07];
\draw [fill, color=black] (3,4) circle [radius=0.07];
\draw [fill, color=black] (3,2) circle [radius=0.07];
\draw [fill, color=black] (4,3) circle [radius=0.07];
\draw [fill, color=black] (1,3) circle [radius=0.07];
\draw [fill, color=black] (2,2.5) circle [radius=0.07];
\draw [fill, color=black] (2.7,3.2) circle [radius=0.07];
\draw [fill, color=black] (2.3,4.8) circle [radius=0.07];
\draw [fill, color=black] (1.4,4.8) circle [radius=0.07];
\draw [fill, color=black] (3.8,4.3) circle [radius=0.07];
\draw [fill, color=black] (4.8,3.3) circle [radius=0.07];
\draw [fill, color=red] (1,1) circle [radius=0.07];
\node[anchor=north east, color=red] at (1 , 1) (coord) {$ \mathbf{x} ^0$};
\draw[->] (6.2,5.1) coordinate -- (6.5,5.4) node[anchor=north west] {$ \mathbf{c} $};
\draw[-] (6.1,5.20) -- (6.30, 5.0);
\draw[blue, very thick] (0.5,0.5) -- (0.5,6) -- (4,6) -- (6,4) -- (6,2) -- (3, 0.5) -- cycle ;
\draw[dashed, very thick] (0.5, -0.5) -- (0.5, 7);
\draw[dashed, very thick] (-0.5, 0.5) -- (4,0.5);
\draw[dashed, very thick] (1,-0.5) -- (6.5,2.25);
\draw[dashed, very thick] (6,0.5) -- (6,5.5);
\draw[dashed, very thick] (6.5,3.5) -- (3,7);
\draw[dashed, very thick] (5.4,6) -- (-0.5,6);
\draw[dotted, red, ultra thick] (2.5,-0.5) -- (-0.5,2.5);
\node[anchor=south east, color=red] at (-0.5, 2.5) (coord) {$\mathcal{C}$};
\draw[dotted, red, ultra thick] (5,-0.5) -- (5,7);
\fill[pattern=north east lines, pattern color=gray] (1.5,0.5) -- (0.5,1.5) --(0.5,6) -- (4,6) -- (5,5) -- (5,1.5) -- (3,0.5) -- cycle;
\end{tikzpicture}
}
\subfigure[Prior belief is not valid \label{fig:largefigPrior:b}]
{%
\begin{tikzpicture}[thick, scale=1]
\draw[->, >=stealth] (-1,0) -- (7,0);
\draw[->, >=stealth] (0,-.5) -- (0,7);
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (4,2) circle [radius=0.07];
\draw [fill, color=black] (5,2.5) circle [radius=0.07];
\draw [fill, color=black] (5,4) circle [radius=0.07];
\draw [fill, color=black] (4,5) circle [radius=0.07];
\draw [fill, color=black] (3,6) circle [radius=0.07];
\draw [fill, color=black] (1,5) circle [radius=0.07];
\draw [fill, color=black] (2,4) circle [radius=0.07];
\draw [fill, color=black] (3,4) circle [radius=0.07];
\draw [fill, color=black] (3,2) circle [radius=0.07];
\draw [fill, color=black] (4,3) circle [radius=0.07];
\draw [fill, color=black] (1,3) circle [radius=0.07];
\draw [fill, color=black] (2,2.5) circle [radius=0.07];
\draw [fill, color=black] (2.7,3.2) circle [radius=0.07];
\draw [fill, color=black] (2.3,4.8) circle [radius=0.07];
\draw [fill, color=black] (1.4,4.8) circle [radius=0.07];
\draw [fill, color=black] (3.8,4.3) circle [radius=0.07];
\draw [fill, color=black] (4.8,3.3) circle [radius=0.07];
\draw [fill, color=red] (1,1) circle [radius=0.07];
\node[anchor=north east, color=red] at (1 , 1) (coord) {$ \mathbf{x} ^0$};
\draw[->] (6.2,5.1) coordinate -- (6.5,5.4) node[anchor=north west] {$ \mathbf{c} $};
\draw[-] (6.1,5.20) -- (6.30, 5.0);
\draw[blue, very thick] (2,2) -- (2,4) -- (3,4) -- (4,3) -- (4,2.5) -- (3,2) -- cycle;
\draw[dashed, very thick] (1, -0.5) -- (1, 8.5);
\draw[dashed, very thick] (-0.5, 1) -- (4,1);
\draw[dashed, very thick] (.5,0.398) -- (6.5,2.805);
\draw[dashed, very thick] (5,0.5) -- (5,5.5);
\draw[dashed, very thick] (6,3) -- (.5,8.5);
\draw[dashed, very thick] (6.97, 0) -- (3.03,8);
\draw[dotted, red, ultra thick] (2.5,-0.5) -- (-0.5,2.5);
\node[anchor=south east, color=red] at (-0.5, 2.5) (coord) {$\mathcal{C}$};
\draw[dotted, red, ultra thick] (5,-0.5) -- (5,7);
\fill[pattern=north east lines, pattern color=gray] (1,1) -- (1,8) -- (5,4) -- (5,2.204) --(2,1) -- cycle;
\end{tikzpicture}
}
\caption{Results for Numerical Case II with the Adherence Measure. (a) The prior belief $\hat{\Delta}$ is a valid feasible set but $\hat{\Delta}\not\subseteq\mathcal{S}$. (b) The prior belief is not a valid feasible set but $\hat{\Delta}\subseteq \mathcal{S}$.
}
\label{fig:largefigPrior}
\end{figure}
\begin{figure}[htbp] \vspace{-1.5em}
\centering
\subfigure[Indifference Measure \label{fig:largefigNoPrior:a}]
{%
\begin{tikzpicture}[thick, scale=0.9]
\draw[->, >=stealth] (-1,0) -- (7,0);
\draw[->, >=stealth] (0,-.5) -- (0,7);
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (4,2) circle [radius=0.07];
\draw [fill, color=black] (5,2.5) circle [radius=0.07];
\draw [fill, color=black] (5,4) circle [radius=0.07];
\draw [fill, color=black] (4,5) circle [radius=0.07];
\draw [fill, color=black] (3,6) circle [radius=0.07];
\draw [fill, color=black] (1,5) circle [radius=0.07];
\draw [fill, color=black] (2,4) circle [radius=0.07];
\draw [fill, color=black] (3,4) circle [radius=0.07];
\draw [fill, color=black] (3,2) circle [radius=0.07];
\draw [fill, color=black] (4,3) circle [radius=0.07];
\draw [fill, color=black] (1,3) circle [radius=0.07];
\draw [fill, color=black] (2,2.5) circle [radius=0.07];
\draw [fill, color=black] (2.7,3.2) circle [radius=0.07];
\draw [fill, color=black] (2.3,4.8) circle [radius=0.07];
\draw [fill, color=black] (1.4,4.8) circle [radius=0.07];
\draw [fill, color=black] (3.8,4.3) circle [radius=0.07];
\draw [fill, color=black] (4.8,3.3) circle [radius=0.07];
\draw [fill, color=red] (1,1) circle [radius=0.07];
\node[anchor=north east, color=red] at (1 , 1) (coord) {$ \mathbf{x} ^0$};
\draw[->] (6.2,5.1) coordinate -- (6.5,5.4) node[anchor=north west] {$ \mathbf{c} $};
\draw[-] (6.1,5.20) -- (6.30, 5.0);
\draw[dashed, very thick] (1,-0.5) -- (1,7);
\node[anchor=south, color=black] at (1 , 7) (coord) {\scriptsize $(6\times)$};
\draw[dotted, red, ultra thick] (2.5,-0.5) -- (-0.5,2.5);
\node[anchor=south east, color=red] at (-0.5, 2.5) (coord) {$\mathcal{C}$};
\draw[dotted, red, ultra thick] (5,-0.5) -- (5,7);
\fill[pattern=north east lines, pattern color=gray] (2,0) --(1,1) -- (1,7) -- (5,7) -- (5,-0.5) -- (2.5,-0.5) -- cycle;
\end{tikzpicture}
}
\subfigure[Adjacency Measure \label{fig:largefigNoPrior:b}]
{%
\begin{tikzpicture}[thick, scale=0.9]
\draw[->, >=stealth] (-1,0) -- (7,0);
\draw[->, >=stealth] (0,-.5) -- (0,7);
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (4,2) circle [radius=0.07];
\draw [fill, color=black] (5,2.5) circle [radius=0.07];
\draw [fill, color=black] (5,4) circle [radius=0.07];
\draw [fill, color=black] (4,5) circle [radius=0.07];
\draw [fill, color=black] (3,6) circle [radius=0.07];
\draw [fill, color=black] (1,5) circle [radius=0.07];
\draw [fill, color=black] (2,4) circle [radius=0.07];
\draw [fill, color=black] (3,4) circle [radius=0.07];
\draw [fill, color=black] (3,2) circle [radius=0.07];
\draw [fill, color=black] (4,3) circle [radius=0.07];
\draw [fill, color=black] (1,3) circle [radius=0.07];
\draw [fill, color=black] (2,2.5) circle [radius=0.07];
\draw [fill, color=black] (2.7,3.2) circle [radius=0.07];
\draw [fill, color=black] (2.3,4.8) circle [radius=0.07];
\draw [fill, color=black] (1.4,4.8) circle [radius=0.07];
\draw [fill, color=black] (3.8,4.3) circle [radius=0.07];
\draw [fill, color=black] (4.8,3.3) circle [radius=0.07];
\draw [fill, color=red] (1,1) circle [radius=0.07];
\node[anchor=north east, color=red] at (1 , 1) (coord) {$ \mathbf{x} ^0$};
\draw[->] (6.2,5.1) coordinate -- (6.5,5.4) node[anchor=north west] {$ \mathbf{c} $};
\draw[-] (6.1,5.20) -- (6.30, 5.0);
\draw[dashed, very thick] (2,7) -- (6,3);
\node[anchor=south, color=black] at (2.2 , 7) (coord) {\scriptsize $(6\times)$};
\draw[dotted, red, ultra thick] (2.5,-0.5) -- (-0.5,2.5);
\node[anchor=south east, color=red] at (-0.5, 2.5) (coord) {$\mathcal{C}$};
\draw[dotted, red, ultra thick] (5,-0.5) -- (5,7);
\fill[pattern=north east lines, pattern color=gray] (2,0) -- (-.5,2.5) -- (-.5,7) --(2,7)-- (5,4) -- (5,-0.5) -- (2.5, -0.5) -- cycle;
\end{tikzpicture}
}
\subfigure[Fairness Measure \label{fig:largefigNoPrior:c}]
{%
\begin{tikzpicture}[thick, scale=0.9]
\draw[->, >=stealth] (-1,0) -- (7,0);
\draw[->, >=stealth] (0,-.5) -- (0,7);
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (4,2) circle [radius=0.07];
\draw [fill, color=black] (5,2.5) circle [radius=0.07];
\draw [fill, color=black] (5,4) circle [radius=0.07];
\draw [fill, color=black] (4,5) circle [radius=0.07];
\draw [fill, color=black] (3,6) circle [radius=0.07];
\draw [fill, color=black] (1,5) circle [radius=0.07];
\draw [fill, color=black] (2,4) circle [radius=0.07];
\draw [fill, color=black] (3,4) circle [radius=0.07];
\draw [fill, color=black] (3,2) circle [radius=0.07];
\draw [fill, color=black] (4,3) circle [radius=0.07];
\draw [fill, color=black] (1,3) circle [radius=0.07];
\draw [fill, color=black] (2,2.5) circle [radius=0.07];
\draw [fill, color=black] (2.7,3.2) circle [radius=0.07];
\draw [fill, color=black] (2.3,4.8) circle [radius=0.07];
\draw [fill, color=black] (1.4,4.8) circle [radius=0.07];
\draw [fill, color=black] (3.8,4.3) circle [radius=0.07];
\draw [fill, color=black] (4.8,3.3) circle [radius=0.07];
\draw [fill, color=red] (1,1) circle [radius=0.07];
\node[anchor=north east, color=red] at (1 , 1) (coord) {$ \mathbf{x} ^0$};
\draw[->] (6.2,5.1) coordinate -- (6.5,5.4) node[anchor=north west] {$ \mathbf{c} $};
\draw[-] (6.1,5.20) -- (6.30, 5.0);
\draw[dashed, very thick] (2.938,-0.938) -- (5.96,4.1);
\draw[dashed, very thick] (1,-0.5) -- (1,7) ;
\draw[dashed, very thick] (0.2,1)--(1.6,8);
\node[anchor=west, color=black] at (1.6,8) (coord) {\scriptsize $(2\times)$};
\draw[dashed, very thick] (1,8)--(6,3);
\node[anchor=north , color=black] at (6,3) (coord) {\scriptsize $(2\times)$};
\draw[dotted, red, ultra thick] (2.5,-0.5) -- (-0.5,2.5);
\node[anchor=south east, color=red] at (-0.5, 2.5) (coord) {$\mathcal{C}$};
\draw[dotted, red, ultra thick] (5,-0.5) -- (5,7);
\fill[pattern=north east lines, pattern color=gray] (2.938,-0.938) --(1,1)-- (1,5)--(1.5,7.5)--(5,4) -- (5,2.5) -- (3.5, 0)-- cycle;
\end{tikzpicture}
}
\subfigure[Compactness Measure \label{fig:largefigNoPrior:d}]
{%
\begin{tikzpicture}[thick, scale=0.9]
\draw[->, >=stealth] (-1,0) -- (7,0);
\draw[->, >=stealth] (0,-.5) -- (0,7);
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (4,2) circle [radius=0.07];
\draw [fill, color=black] (5,2.5) circle [radius=0.07];
\draw [fill, color=black] (5,4) circle [radius=0.07];
\draw [fill, color=black] (4,5) circle [radius=0.07];
\draw [fill, color=black] (3,6) circle [radius=0.07];
\draw [fill, color=black] (1,5) circle [radius=0.07];
\draw [fill, color=black] (2,4) circle [radius=0.07];
\draw [fill, color=black] (3,4) circle [radius=0.07];
\draw [fill, color=black] (3,2) circle [radius=0.07];
\draw [fill, color=black] (4,3) circle [radius=0.07];
\draw [fill, color=black] (1,3) circle [radius=0.07];
\draw [fill, color=black] (2,2.5) circle [radius=0.07];
\draw [fill, color=black] (2.7,3.2) circle [radius=0.07];
\draw [fill, color=black] (2.3,4.8) circle [radius=0.07];
\draw [fill, color=black] (1.4,4.8) circle [radius=0.07];
\draw [fill, color=black] (3.8,4.3) circle [radius=0.07];
\draw [fill, color=black] (4.8,3.3) circle [radius=0.07];
\draw [fill, color=red] (1,1) circle [radius=0.07];
\node[anchor=north east, color=red] at (1 , 1) (coord) {$ \mathbf{x} ^0$};
\draw[->] (6.2,5.1) coordinate -- (6.5,5.4) node[anchor=north west] {$ \mathbf{c} $};
\draw[-] (6.1,5.20) -- (6.30, 5.0);
\draw[dashed, very thick] (-1,-0.5) -- (6.5,3.25);
\node[anchor=south, color=black] at (6.5 , 3.25) (coord) {\scriptsize $(2\times)$};
\draw[dashed, very thick] (1,-0.5) -- (1,8.5);
\node[anchor=west, color=black] at (1,8.4) (coord) {\scriptsize $(2\times)$};
\draw[dashed, very thick] (0.5,8.5) -- (6.5,2.5);
\draw[dashed, very thick] (5,-0.5) -- (5,7);
\draw[dotted, red, ultra thick] (2.5,-0.5) -- (-0.5,2.5);
\node[anchor=south east, color=red] at (-0.5, 2.5) (coord) {$\mathcal{C}$};
\draw[dotted, red, ultra thick] (5,-0.5) -- (5,7);
\fill[pattern=north east lines, pattern color=gray] (1.333, 0.667) -- (1,1) -- (1,8) -- (5,4) -- (5,2.5) -- cycle;
\end{tikzpicture}
}
\caption{Results for Numerical Case II with different loss functions.}
\label{fig:largefigNoPrior}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[thick, scale=1]
\draw[->, >=stealth] (-1,0) -- (7,0);
\draw[->, >=stealth] (0,-.5) -- (0,7);
\draw [fill, color=black] (2,1) circle [radius=0.07];
\draw [fill, color=black] (4,2) circle [radius=0.07];
\draw [fill, color=black] (5,2.5) circle [radius=0.07];
\draw [fill, color=black] (5,4) circle [radius=0.07];
\draw [fill, color=black] (4,5) circle [radius=0.07];
\draw [fill, color=black] (3,6) circle [radius=0.07];
\draw [fill, color=black] (1,5) circle [radius=0.07];
\draw [fill, color=black] (2,4) circle [radius=0.07];
\draw [fill, color=black] (3,4) circle [radius=0.07];
\draw [fill, color=black] (3,2) circle [radius=0.07];
\draw [fill, color=black] (4,3) circle [radius=0.07];
\draw [fill, color=black] (1,3) circle [radius=0.07];
\draw [fill, color=black] (2,2.5) circle [radius=0.07];
\draw [fill, color=black] (2.7,3.2) circle [radius=0.07];
\draw [fill, color=black] (2.3,4.8) circle [radius=0.07];
\draw [fill, color=black] (1.4,4.8) circle [radius=0.07];
\draw [fill, color=black] (3.8,4.3) circle [radius=0.07];
\draw [fill, color=black] (4.8,3.3) circle [radius=0.07];
\draw [fill, color=red] (1,1) circle [radius=0.07];
\node[anchor=north east, color=red] at (1 , 1) (coord) {$ \mathbf{x} ^0$};
\draw[->] (6.2,5.1) coordinate -- (6.5,5.4) node[anchor=north west] {$ \mathbf{c} $};
\draw[-] (6.1,5.20) -- (6.30, 5.0);
\draw[dashed, very thick] (1,-0.5) -- (1,8.5);
\node[anchor=south west, color=black] at (1,8.4) (coord) {\scriptsize $(2\times)$};
\draw[dashed, very thick] (0.5,8.5) -- (6,3);
\node[anchor=west, color=black] at (6 , 3) (coord) {\scriptsize $(2\times)$};
\draw[dashed, very thick] (-1,1.174) -- (6,0.565);
\draw[dashed, very thick] (5,-0.5) -- (5,7);
\draw[dotted, red, ultra thick] (2.5,-0.5) -- (-0.5,2.5);
\node[anchor=south east, color=red] at (-0.5, 2.5) (coord) {$\mathcal{C}$};
\draw[dotted, red, ultra thick] (5,-0.5) -- (5,7);
\fill[pattern=north east lines, pattern color=gray] (1,1) -- (1,8) -- (5,4) -- (5,0.652) -- cycle;
\end{tikzpicture}
\caption{Result for Numerical Case II with a combined loss function, where the Fairness Measure~is the primary objective and the Adjacency Measure~is the secondary objective. }
\label{fig:largefig-secondObj}
\end{figure}
Figure~\ref{fig:largefigPrior} illustrates the results for the Adherence Measure. In Figure~\ref{fig:largefigPrior:a}, the prior belief is a valid feasible set, and the optimal solution $\Delta$ is the same as the given prior belief $\hat{\Delta}$, as demonstrated by Proposition~\ref{prop:prior2}. Note that in this example, although $\hat{\Delta}$ is a valid feasible set, it does not satisfy the known constraints ({\it i.e.}, $\hat{\Delta} \not\subseteq \mathcal{S}$). Conversely, Figure~\ref{fig:largefigPrior:b} illustrates the case where $\hat{\Delta}$ is a subset of the known constraints but is not a valid feasible set. In this case, the prior belief is minimally expanded to include all observations. These results re-emphasize that the shape of the imputed feasible set is heavily affected by the quality of the prior belief on the constraint parameters.
We next illustrate the results for Numerical Case II with the remaining four loss functions that do not require a prior belief in Figure~\ref{fig:largefigNoPrior}. Analogous to Numerical Case I, Figure~\ref{fig:largefigNoPrior:a} shows the results for the Indifference Measure, which reduces to a simple feasibility problem in which any feasible solution to $\MIO$ is optimal. In our results, all six inferred constraints coincide with a hyperplane that passes through three of the observations. Figure~\ref{fig:largefigNoPrior:b} shows the results for the Adjacency Measure~which, as expected, induces an unbounded imputed feasible set. In this case, the problem finds a constraint that has the minimum total distance from all observations and repeats that constraint six times.
Figures~\ref{fig:largefigNoPrior:c} and~\ref{fig:largefigNoPrior:d} demonstrate the results with the Fairness Measure~and the Compactness Measure~as the loss function of $\MIO$, respectively. In both cases, the imputed feasible set is bounded and $ \mathbf{x} ^0$ is an extreme point. The Fairness Measure~spreads the constraints around the observations and obtains a bounded imputed feasible set for this example. Finally, the Compactness Measure, which ensures that there is at least one constraint in the vicinity of each observation on the boundary of the convex hull, results in an imputed feasible region that encapsulates all observations for this example.
As described in Section~\ref{sec:measures}, inverse models often have multiple optimal solutions. Using combined loss functions, we can explore the space of multiple optimal solutions under one loss function and further tailor the desired characteristics of the imputed feasible set with an alternate loss function. In Figure~\ref{fig:largefig-secondObj}, we use a combined loss function to first solve the inverse optimization problem with the Fairness Measure~and then search among its multiple optimal solutions for a set of constraints that minimizes the Adjacency Measure. As a result, we derive an imputed feasible set that not only scatters the constraints fairly around the observations but also minimizes the total distance of the constraints to all observations. In this example, the resulting imputed feasible region is visually tighter than that of the Fairness Measure~alone.
{\color{myGreen}
\subsection{Numerical Case III}
In this section, we validate our proposed methodology on a diet recommendation problem using a dataset of 100 observations of daily food intake choices. As briefly discussed in Section~\ref{sec:motivation}, the goal is to employ our $\MIO$ framework to impute the implicit constraints of a dieter from observations of past food choices. Such implicit constraints are typically difficult to capture in regular forward settings; with our $\MIO$ approach, these additional constraints help identify diets that are more palatable to the user.
\begin{table}[htbp]
\centering
\begin{tabular}{p{.28\linewidth} p{.6\linewidth}}
\toprule
{\bf Description} & {\bf Value(s)} \\
\midrule
Cost vector $ \mathbf{c} $ & ({\it a}\,) Maximize total protein intake \\
& ({\it b}\,) Minimize total sodium intake\\
\midrule
Observations & 100 Daily food intakes of 26 different food items \\
\midrule
Known constraints
& 8 known constraints \& half-space $\mathcal{C}$\\
\it \footnotesize \hspace{0.9cm} Lower bounds: & \it \footnotesize Carbohydrates, Fiber, Calories \\
\it \footnotesize \hspace{0.9cm} Upper bounds: & \it \footnotesize Fat, Sugar, Cholesterol, Calories, \# of servings \\
\midrule
Unknown constraints & 30 constraints\\
\bottomrule
\end{tabular}
\caption{Numerical Case III: Two objective functions were considered for a set of 100 observations on 26 food items. The set of constraints includes 8 known constraints, the half-space, and 30 unknown constraints.}
\label{tab:case3}
\end{table}
In this case study, we consider a set of 100 observations of daily food intake, reported as the number of servings consumed per food item per day of observation. This data is gathered from \citet{CSSEDietData}, which is based on the National Health and Nutrition Examination Survey (NHANES) dietary data. Our data includes 26 food items, each of which was consumed at least once among the 100 daily observations. We consider a set of 8 known constraints on different nutrition values ({\it e.g.}, a lower bound on fiber, an upper bound on cholesterol) that must always be met. To derive the known constraints, we consulted the guidelines provided by the~\citet{DietData_Const}.
We tested the proposed $\MIO$ model with two different known objective functions: ({\it a}\,) maximizing daily protein intake and ({\it b}\,) minimizing daily sodium intake. For each objective, we found the preferred observation $ \mathbf{x} ^0$ ({\it i.e.}, the observation with the best objective value) and recovered a feasible region that made all 100 observations feasible and $ \mathbf{x} ^0$ optimal for the corresponding forward problem. Table~\ref{tab:case3} shows a summary of the known constraints and the inverse problems, and Table~\ref{tab:case3_fooddetails} in the Appendix provides additional information about the observations.
For each of the two objective functions, we solved the $\eMIO$ formulation to impute 30 additional constraints using a combined loss function with the Fairness Measure\ and the Compactness Measure\ as the primary and secondary objectives, respectively. We then found optimal diets by solving the $\mathbf{FO}$ model twice: first with only the eight given constraints, and then with both the known constraints and the constraints imputed by $\MIO$.
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{barplot-protein-noxlabel-cropped.pdf}
\includegraphics[width=.9\textwidth, trim= 0 5 0 0, clip=true]{barplot-sodium-cropped.pdf}
\caption{Comparison of diet recommendations with and without the imputed constraints. The range of past food consumption observations and the preferred observation are illustrated as error bars and yellow circles, respectively.
}\label{fig:barplot}
\end{figure}
\begin{figure}
\centering
\subfigure[\label{fig:spider-protein}]{
\includegraphics[width=0.47\textwidth, trim= 5 5 8 10, clip=true]{spider-protein-cropped.pdf}
}
\subfigure[\label{fig:spider-sodium}]{
\includegraphics[width=0.47\textwidth, trim= 5 5 8 10, clip=true]{spider-sodium-cropped.pdf}
}
\caption{Comparison of diet recommendations with and without the imputed constraints. The shaded grey areas show past observations. Darker grey shows a larger number of past food consumption for a food item. }
\label{fig:spider}
\end{figure}
Figure~\ref{fig:barplot} shows the recommended diets for each of the two objectives, first considering only the known constraints (denoted ``w/o $\MIO$'') and then also considering the additional imputed constraints (denoted ``$\MIO$''), in red and blue bars, respectively. The ranges of the observations for each food item are shown by error bars, and the preferred observation ($ \mathbf{x} ^0$) is highlighted in yellow circles on the error bar. Figure~\ref{fig:barplot} shows that the recommended diet without the imputed constraints includes larger quantities of fewer food items. For instance, when maximizing protein intake, over nine servings of juice and six servings of sausage are recommended to the user in the diet without $\MIO$ constraints. In contrast, the diet with $\MIO$ constraints includes a variety of food items and more moderate quantities of each food item, closer to the food choices the user has made in the past. Similarly, when minimizing sodium intake, without $\MIO$ constraints the suggested diet includes only three food options and consists of large amounts of fruit, while the diet with $\MIO$ constraints provides moderate amounts of six food items that more closely replicate meals consumed by the user in the past.
To better visualize the different suggested diets in this multi-dimensional solution space, we also plotted these solutions using radar charts in Figure~\ref{fig:spider}. Each food item is individually Min-Max normalized. The diagrams serve to further illustrate the past data and compare each of the recommended diets on a relative scale, instead of the absolute scale shown in Figure~\ref{fig:barplot}. The gray shaded areas show the past observations where the darker areas show a larger amount of observed food intake. Similar to Figure~\ref{fig:barplot}, the solutions with and without $\MIO$ constraints are highlighted with blue and red lines, respectively. These plots confirm that in both cases, without $\MIO$ constraints, the diet often consists of food items that were not regularly consumed in the past and are limited in variety, while the diet with $\MIO$ constraints more closely resembles past observations.
Finally, we quantify the differences between the diets recommended with and without the imputed $\MIO$ constraints for each objective by finding the average $L_1$~norm distance of all the observations from each of these diets as depicted in Table~\ref{tab:case3error}. In both cases, the diet with $\MIO$ constraints is closer to the observations.
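The evaluation metric of Table~\ref{tab:case3error} is straightforward to compute; a minimal sketch follows, using synthetic stand-in data (100 days $\times$ 26 foods) rather than the NHANES-based observations.

```python
import numpy as np

# Synthetic stand-ins for the observation matrix and a recommended diet.
rng = np.random.default_rng(0)
observations = rng.integers(0, 4, size=(100, 26)).astype(float)  # servings
diet = np.round(observations.mean(axis=0))  # placeholder recommended diet

# Average L1-norm distance of all observations from the recommended diet.
avg_l1 = np.mean(np.abs(observations - diet).sum(axis=1))
```

A smaller value indicates that the recommendation sits closer to the dieter's past consumption, which is the sense in which the $\MIO$-constrained diets dominate in Table~\ref{tab:case3error}.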
\begingroup
\setlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.2}
\begin{table}[htbp]
\centering
\begin{tabular}{c c c}
\toprule
& \multicolumn{2}{c}{Average $L_1$~norm distance} \\
\cline{2-3}
Objective & w/o $\MIO$ & $\MIO$ \\
\midrule
Maximizing Protein Intake & 30.5 & 15.2 \\
Minimizing Sodium Intake & 19.0 & 14.5 \\
\bottomrule
\end{tabular}
\caption{Average distance of recommended diets for each of the two objectives tested with and without the imputed constraints.}
\label{tab:case3error}
\end{table}
\endgroup
\section{Model Extensions} \label{sec:discussions}
Inverse optimization models typically rely on a number of assumptions that depend on the particular structure of the model. In this section, we provide extensions for when some of these assumptions do not hold and discuss how our models can be adapted accordingly.
In particular, we consider three extensions: (a) unknown cost vector, (b) noisy data, and (c) additional information on constraint parameters. We first briefly discuss the rationale behind the underlying standard assumptions and then provide extensions for lifting these assumptions.
\paragraph{(a) Unknown cost vector:}
When inferring the feasible region of a problem, the constraint parameters are unknown and hence a set of given solutions cannot be labeled as feasible or infeasible. However, when a cost vector is known, the quality of the observations can be compared based on this cost vector. When, on the contrary, the cost vector is also unknown, there is no information about the quality of the given solutions. That is, we do not know whether a solution is feasible, and we are also unable to assess or compare the quality of solutions. Such a problem setting may have limited practical use because it assumes experts have no knowledge about the objective or the constraints of the problem. Nevertheless, our models can be modified to consider an unknown cost vector. Let $ \mathbf{c} $ be an (unknown) decision vector and let a candidate optimal solution $ \mathbf{x} ^0$ be provided by experts. The updated model, denoted by $\MIO^{ \mathbf{c} }$, is
\begin{subequations}\label{eq:TMIOc}
\begin{align*}
\MIO^{ \mathbf{c} }: \underset{ \mathbf{A} , \mathbf{b} , \mathbf{y} , \mathbf{w} , \mathbf{c} }{\text{minimize}} \quad & \mathscr{F} ( \mathbf{A} , \mathbf{b} ; \mathscrsfs{D} ), \\
\text{subject to} \quad & \eqref{eq:TMIO-PrimalFeas}-\eqref{eq:MIO-signs}, \\
\quad & \mathbf{c} \in \mathbb{R}^n .
\end{align*}
\end{subequations}
\noindent We note that the complexity of $\MIO^{ \mathbf{c} }$ is similar to that of $\MIO$ since the addition of $ \mathbf{c} $ as a variable does not introduce any new nonlinear terms into the model. The only difference is that $ \mathbf{c} $ is now a variable in the strong duality and the dual feasibility constraints~\eqref{eq:TMIO-StrongDual} and~\eqref{eq:TMIO-DualFeas}.
\paragraph{(b) Noisy Data:} A standard assumption in many inverse optimization models in the literature is that the data is observed without noise. A few recent studies have considered that such perfect information may not be available, and even if the data is accurate, the models may be prone to overfitting to the given observations. In particular, the inverse model would always make $ \mathbf{x} ^0$ exactly optimal for the forward problem and make any other solution that dominates $ \mathbf{x} ^0$ infeasible.
To consider noisy data and avoid overfitting in our proposed model, a robust optimization approach such as that of \citet{ghobadi2018robust} can be employed by considering uncertainty sets around the observations. Such uncertainty sets can be considered around the preferred solution $ \mathbf{x} ^0$ only, or alternatively around all observations $ \mathbf{x} ^k, k \in \mathcal{K}$. First, let $\mathcal{U}^0$ be an uncertainty set around $ \mathbf{x} ^0$. Since $ \mathbf{c} $ is known, the preferred solution within the uncertainty set $\mathcal{U}^0$ is $\tilde{ \mathbf{x} }^0 = \underset{ \mathbf{x} \in \mathcal{U}^0}{\arg\min} \{ \mathbf{c} ' \mathbf{x} \}$. Using this $\tilde{ \mathbf{x} }^0$ in place of $ \mathbf{x} ^0$ in the $\MIO$ formulation guarantees that the imputed feasible region is robust for all $ \mathbf{x} ^0 \in \mathcal{U}^0$. Next, assume we consider uncertainty sets around all observations. In this case, the feasibility of the uncertain observations needs to be guaranteed as well. A similar robust optimization approach can be employed to consider uncertainty sets $\mathcal{U}^k$ around observations $ \mathbf{x} ^k, \, \forall k \in \mathcal{K}$. In addition to the strong duality constraint, in this case, the primal feasibility constraints are also modified to hold for any realization of the uncertainty set ({\it i.e.}, $\forall \mathbf{x} ^k \in \mathcal{U}^k$).
\paragraph{(c) Additional Information on Constraint Parameters:}
In some inverse optimization models in the literature, additional information (or side constraints) on the parameters of the forward model is considered in the inverse setting. Often, the purpose of this additional information is to avoid finding trivial (all-zero) solutions in the inverse model. Our inverse models avoid such trivial solutions by design and do not require the user to know and input such information on the inferred parameters. However, if such information exists, it can easily be incorporated into the model. Recall that we assumed that each constraint is either entirely known or entirely unknown. If additional information on some constraint parameters is available, the constraint (or set of constraints) is \emph{partially} known. In this case, we can modify the inverse model accordingly by treating these partially-known constraints as part of the unknown constraints and adding the partial information as known constraints in the inverse model. For instance, if specific properties of the $i^{\text{th}}$ constraint are known ({\it e.g.}, $b_i = \beta_i$ for $\beta_i \in \mathbb{R}$, or $a_{i1}\leq 2\,a_{i2}$), these properties can be explicitly added to the inverse model as known constraints. In general, if $\mathcal{A} \subseteq \mathbb{R}^{m_1\times n}$ and $\mathcal{B} \subseteq \mathbb{R}^{m_1}$ capture the additional information on the LHS and RHS parameters, respectively, the $\MIO$ formulation can be adjusted by replacing constraint~\eqref{eq:MIO-signs} with $ \mathbf{A} \in \mathcal{A}, \mathbf{b} \in \mathcal{B}.$
\section{Conclusions} \label{sec:conclusions}
Using inverse optimization to recover the feasible region can be applied to settings in which a set of solutions has previously been identified by experts as ``feasible'' solutions, but the logic behind such labeling is not known. This paper proposes an inverse optimization approach for imputing fully- or partially-unknown constraint parameters of a forward optimization problem. The goal is to infer the feasible region of the forward problem such that all given observations become feasible and the preferred observation(s) become optimal. Identifying such feasible regions would provide a baseline for initial filtering of future observations as feasible or infeasible, before seeking an expert opinion. This information will, in turn, improve the flow of processes in expert-driven systems and reduce the time spent in manually identifying feasible solutions. In some applications, having such a data-driven approach would standardize the practice of quality control across different experts or different institutions.
We demonstrate the theoretical properties of our methodology and propose a new tractable reformulation for the nonlinear non-convex inverse model. We also present and discuss several loss functions that can be used to derive imputed feasible sets that have certain desired properties. Our numerical examples demonstrate the differences between these loss functions and serve as a basic guideline for users to choose the appropriate loss function depending on the available data and the relevant application. \add{We further apply our methodology to a diet recommendation problem and show that the proposed model can impute the implicit constraints for each dieter and result in diet recommendations that are more palatable.}
An important future direction is to apply this methodology to a real-world large-scale dataset and to demonstrate the computational benefits of the proposed tractable reformulation methodology that allows for a more efficient solution of the originally nonlinear non-convex inverse optimization problems.
\clearpage
\section{Introduction}
\label{sec:intro}
For an open subset $\mathcal{U}\subset\mathbb{R}^d$, consider the $2d$-dimensional stochastic differential equation (SDE):
\begin{equation}\label{eq:SDEgeneral}
\left\{\begin{array}{rclrcl}
d\bm{x}_t^m & = & \bm{v}_t^m\,dt &\bm{x}_0^m &=& \bm{x}, \\
d\bm{v}_t^m & = & \left[ \frac{\bm{F}(\bm{x}_t^m)}{m} - \frac{\bm{\bm{\gamma}}(\bm{x}_t^m)}{m}\bm{v}^m_t \right] \,dt + \frac{\bm{\sigma}(\bm{x}_t^m)}{m}\,d\bm{W}_t \quad &\bm{v}_0^m &=& \bm{v},
\end{array}\right.
\end{equation}
with $\bm{F}:\mathcal{U}\mapsto\mathbb{R}^d$, $\bm{\gamma}:\mathcal{U}\rightarrow \mathbb{R}^{d\times d}$ a $d\times d$ invertible matrix-valued function, $\bm{\sigma}:\mathcal{U}\rightarrow \mathbb{R}^{d\times k}$ and $\bm{W}$ a $k$-dimensional Wiener process. The above SDE provides a framework to model many physical systems, from colloidal particles in a fluid {\cite{nelson}} to a camera tracking an object \cite{papanicolaou2010}. For example, the motion of a Brownian particle can be modeled using an SDE where $x$ and $v$ are one-dimensional and ${\gamma}(x) = {k_BT \over D(x)}$ and ${\sigma}(x) = {k_BT\sqrt{2} \over \sqrt{D(x)}}$ (see description below in Section~\ref{sec:BP}). In fact, the original motivation for the present work was to provide a mathematical explanation of the experimental observation of a \emph{noise-induced drift} in \cite{volpe2010}. While in this model the coefficients $\gamma(x)$ and $\sigma(x)$ are constrained by the fluctuation-dissipation relation such that ${\gamma}(x) \propto {\sigma}(x)^2$ \cite{kubo}, our main result, Theorem~\ref{theorem}, does not assume it and has a much more general reach including applications in other fields.
Theorem~\ref{theorem} says that, under the assumptions stated in Section~\ref{sec:SKa}, the $\bm{x}$-component of the solution of equation~(\ref{eq:SDEgeneral}) converges in $L^ 2$, with respect to the topology on $C_{\mathcal{U}}([0,T])$ (i.e. the space of continuous functions from $[0,T]$ to $\mathcal{U}$ with the uniform metric), to the solution of the SDE
\begin{equation}\label{eq:SKlimit}
d\bm{x}_t=\left[ \bm{\gamma}^{-1}(\bm{x}_t)\bm{F}(\bm{x}_t)+\bm{S}(\bm{x}_t) \right] dt + \bm{\gamma}^{-1}(\bm{x}_t)\bm{\sigma}(\bm{x}_t)d\bm{W}_t,
\end{equation}
with the original initial condition $\bm{x}_0 = \bm{x}$, where $\bm{S}(\bm{x}_t)$ is the \emph{noise-induced drift} whose $i^\text{th}$ component equals
\begin{equation}
\label{eq:spurious}
S_i(\bm{x}) = \frac{\partial}{\partial x_{l}}\left[(\gamma^{-1})_{ij}(\bm{x})\right]J_{jl}(\bm{x}),
\end{equation}
where $\bm{J}$ is the matrix solving the Lyapunov equation
\begin{equation}\label{eq:Lyapunov}
\bm{J}\bm{\gamma}^* + \bm{\gamma}\bm{J} = \bm{\sigma}\bm{\sigma}^*.
\end{equation}
Throughout the paper we use the Einstein summation convention and ``$^*$'' denotes the transpose of a matrix. The limiting SDE~(\ref{eq:SKlimit}) is given in the It\^o form, while we provide in Section~\ref{sec4} the corresponding Stratonovich form. Note that for $m > 0$ the process $\bm{x}^m_t$ has bounded variation and thus all definitions of stochastic integral lead to the same form of SDE~(\ref{eq:SDEgeneral}).
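The noise-induced drift~(\ref{eq:spurious}) is easy to evaluate numerically: solve the Lyapunov equation~(\ref{eq:Lyapunov}) for $\bm{J}$ and contract the derivative of $\bm{\gamma}^{-1}$ with it. The sketch below does this for illustrative two-dimensional coefficient fields (the choices of $\bm{\gamma}$ and $\bm{\sigma}$ are arbitrary, chosen only to satisfy Assumption~1 near the test point).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def gamma(x):
    # Illustrative friction matrix with positive-definite symmetric part.
    return np.array([[2.0 + x[0] ** 2, 0.3],
                     [0.1, 3.0 + x[1] ** 2]])

def sigma(x):
    # Illustrative noise matrix.
    return np.array([[1.0 + 0.5 * x[0], 0.0],
                     [0.2, 1.0]])

def J_of(x):
    # solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
    # i.e. gamma J + J gamma^* = sigma sigma^*, which is Eq. (4).
    return solve_continuous_lyapunov(gamma(x), sigma(x) @ sigma(x).T)

def S(x, eps=1e-6):
    # S_i = d/dx_l [(gamma^{-1})_{ij}] J_{jl}, with the x_l-derivative
    # taken by a central finite difference.
    J = J_of(x)
    out = np.zeros(len(x))
    for l in range(len(x)):
        dx = np.zeros(len(x)); dx[l] = eps
        d_ginv = (np.linalg.inv(gamma(x + dx))
                  - np.linalg.inv(gamma(x - dx))) / (2 * eps)
        out += d_ginv @ J[:, l]
    return out
```

In one dimension this reduces to $J = \sigma^2/(2\gamma)$ and $S = (1/\gamma)'\,\sigma^2/(2\gamma)$, which matches the explicit computation of Section~\ref{sec:1D}.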
The zero-mass limits of equations similar to equation~(\ref{eq:SDEgeneral}) have been studied by many authors beginning with Smoluchowski \cite{smoluchowski1916} and Kramers \cite{kramers1940}. In the case where $F=0$ and $\gamma$ and $\sigma$ are constant, the solution to equation~(\ref{eq:SDEgeneral}) converges to the solution of equation~(\ref{eq:SKlimit}) almost surely \cite{nelson}. The case including an external force was treated by entirely different methods in \cite{schuss}. The problem of identifying the limit for position-dependent noise and friction was studied in \cite{hanggi1982} for the case when the fluctuation-dissipation relation is satisfied and in \cite{sancho1982} for the general one-dimensional case (the multidimensional case is also discussed there but without complete proof). The homogenization techniques described in \cite{papanicolaou1975,schuss,pavliotis} were used to compute the limiting backward Kolmogorov equation as mass is taken to zero in \cite{hottovy2012}. In \cite{Pardoux2003} convergence in distribution is proven rigorously for equations of the same type as equation~(\ref{eq:SDEgeneral}), under somewhat stronger assumptions than those made here. The rigorous proof of convergence in probability for $\bm{\gamma}$ constant and $\bm{\sigma}$ position-dependent is given in \cite{freidlin2004}. The present paper contains the first rigorous derivation of the zero-mass limit of equation~(\ref{eq:SDEgeneral}) for a multidimensional system with general friction and noise coefficients.
Systems with colored noise can also be treated within the above (suitably adapted) framework. For example, the one-dimensional equation driven by an Ornstein-Uhlenbeck (OU) noise with a short correlation time $\tau$
\begin{equation}\label{eq:Newton}
m \ddot{x}^m_t =F(x_t^m)-\gamma(x^m_t) \dot{x}^m_t + \sigma(x_t^m){\eta}_t^\tau
\end{equation}
can be rewritten in the form of equation~(\ref{eq:SDEgeneral}), by defining $\bm{v}_t^m = (v_t^m,\eta_t^\tau)^*$, $\bm{x}_t^m = (x_t^m,\zeta_t^\tau)^*$ and $\tau = \tau_0m$ \cite{pavliotis}, as
\begin{equation}\label{eq:SDEOUCN}
\left \{ \begin{array}{rcl}
dx_t^m &=& v_t^m\,dt \\
dv_t^m &=& \left[ \frac{F(x_t^m)}{m}-\frac{\gamma(x_t^m)}{m} v_t^m + \frac{\sigma(x_t^m)}{m}\eta_t^\tau \right] dt \\
d\zeta_t^\tau &=& \eta_t^\tau dt \\
d\eta_t^\tau &=& -\frac{a \eta_t^\tau}{\tau}\,dt + \frac{\sqrt{2\lambda}}{\tau}\,dW_t .
\end{array} \right.
\end{equation}
The details are worked out in Section~\ref{sec:OU}. SDE with colored noise were first studied in \cite{wong1965}, where it was shown that, as the correlation time of the noise goes to zero, the stochastic integral converges to the Stratonovich integral with respect to the Wiener process. This result was generalized in \cite{kurtz91} and similar systems were studied in \cite{kupferman2004} by homogenization methods. Our method is sufficiently general to permit us to recover the results obtained in these works as well as those in \cite{wong1965,freidlin2011}.
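A quick sanity check on the last equation of system~(\ref{eq:SDEOUCN}): the OU noise $d\eta_t^\tau = -\frac{a\eta_t^\tau}{\tau}\,dt + \frac{\sqrt{2\lambda}}{\tau}\,dW_t$ has stationary variance $\lambda/(a\tau)$, which blows up as $\tau \to 0$ while the correlation time $\tau/a$ shrinks. The simulation below (illustrative parameter values, exact one-step OU update) verifies the stationary variance empirically.

```python
import numpy as np

a, lam, tau = 1.0, 1.0, 0.01   # illustrative parameters
dt, n = 0.005, 200_000

rng = np.random.default_rng(2)
decay = np.exp(-a * dt / tau)
stat_var = lam / (a * tau)      # stationary variance lambda/(a*tau)

eta = np.empty(n)
eta[0] = rng.normal(0.0, np.sqrt(stat_var))
for i in range(1, n):
    # Exact transition of the OU process over one time step.
    eta[i] = decay * eta[i - 1] + np.sqrt(stat_var * (1 - decay ** 2)) * rng.normal()

empirical_var = eta.var()
```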
In Section~\ref{sec:SKa} we state and prove the main result, Theorem~\ref{theorem}, for arbitrary dimension $d$. In Section~\ref{sec:1D} we explicitly calculate the limit for $d=1$. In Section~\ref{sec5} we present a series of applications of our result. In Section~\ref{sec:BP} we study the equations describing the experiment on Brownian motion in a diffusion gradient that originally motivated this work \cite{volpe2010}. In Section~\ref{sec:OU} we study the case of SDE driven by OU colored noise and find the explicit limit for constant (Section~\ref{sec:constfric}) and position-dependent (Section~\ref{sec:thermo}) friction. In Section~\ref{sec:3DBP} we study a three-dimensional Brownian particle on which a non-conservative external force is acting, and in Section~\ref{sec:magnetic} we consider the more specific case of a magnetic force. In Section~\ref{sec4} we reformulate the main result using the Stratonovich formalism.
\begin{acknowledgements}
We thank Thomas Kurtz for the crucial references \cite{kurtz91} and \cite{blount1991}, and Krzysztof Gaw\c edzki for pointing out some earlier results. AM was partially supported by the NSF grant DMS 1312711. JW was partially supported by NSF grants DMS 1009508 and DMS 1312711. SH was partially supported by the NSF under grant DMS 1009508 and grant No. DGE0841234. GV was partially supported by the Marie Curie Career Integration Grant (MC-CIG) No. PCIG11 GA-2012-321726.
\end{acknowledgements}
\section{Smoluchowski-Kramers approximation}\label{sec:SKa}
For the main theorem, we assume $\bm{x}_t^m,\bm{x}_t\in\mathcal{U}\subset\mathbb{R}^d$, an open set, and $\bm{v}_t^m\in\mathbb{R}^d$ for all $0\leq t\leq T$. For an arbitrary vector $\bm{a}\in\mathbb{R}^d$, $|\bm{a}|$ is the Euclidean norm and, for a $d\times d$ matrix $\bm{A}\in\mathbb{R}^{d\times d}$, $|\bm{A}|$ is the induced operator norm. We now state the assumptions and main theorem.
\begin{assumption}\label{assume:bddcoeffs}
The coefficients $\bm{F},\bm{\gamma},\bm{\sigma}$ are continuously differentiable functions. Furthermore, the smallest eigenvalue $\lambda_1(\bm{x})$ of the symmetric part ${1\over2} (\bm{\gamma}+\bm{\gamma}^*)$ of the matrix $\bm{\gamma}$ is positive uniformly in ${\bm x}$, i.e.
\begin{equation}\label{eq:AssumeEig}
\lambda_1(\bm{x}) \ge c_\lambda > 0.
\end{equation}
It follows that $(\bm{\gamma}(\bm{x}) \bm{y}, \bm{y}) \geq c_{\lambda} (\bm{y}, \bm{y})$ and $|\bm{\gamma}(\bm{x})|\geq c_{\lambda}$ for all $\bm{x}\in\mathcal{U}, \bm{y} \in \mathbb{R}^d$ and that the real parts of the eigenvalues of $\bm{\gamma}(\bm{x})$ are bounded below by $c_\lambda$.
\end{assumption}
\begin{remark}
The lower bounds on $\bm{\gamma}$ and its eigenvalues are crucial for the estimates of the proof. A system with vanishing friction, i.e. $\bm{\gamma}(\bm{x})={\bm 0}$, behaves differently \cite{freidlin2013}.
\end{remark}
\begin{assumption}\label{assume:existence}
With probability one there exist global unique solutions, {defined on $[0,T]$}, to equation~(\ref{eq:SDEgeneral}) for each $m$ and to equation~(\ref{eq:SKlimit}). In particular, there are no explosions.
\end{assumption}
\begin{assumption}\label{assume:stop}
With probability one there exists a {compact set} $\mathcal{K}\subsetneq\mathcal{U}$ such that, for all $m>0$, $\bm{x}_t^m\in\mathcal{K}$ for all $t \in [0, T]$.
\end{assumption}
The existence of such a set $\mathcal{K}$, together with the continuity of the coefficients $\bm{F}$, $\bm{\gamma}$ and $\bm{\sigma}$, implies that there exists a constant $C_T$, depending only on $T$ (in particular, independent of $m$), such that
\begin{equation}
|\bm{F}(\bm{x}_t^m)|, \, |\bm{\sigma}(\bm{x}_t^m)|, \, |\bm{\gamma}(\bm{x}_t^m)|\leq C_T,
\end{equation}
for all $t \in [0, T]$.
\begin{theorem}\label{theorem}
Suppose SDE~(\ref{eq:SDEgeneral}) satisfies Assumptions~1-3. Let $(\bm{x}_t^m,\bm{v}_t^m)\in\mathcal{U}\times\mathbb{R}^{d}$ be its solution with initial condition $(\bm{x},\bm{v})$ independent of $m$ and let $\bm{x}_t$ be the solution to the It\^o SDE~(\ref{eq:SKlimit}) with the same initial position $\bm{x}_0 = \bm{x}$. Then
\begin{equation}
\lim_{m\rightarrow 0} E\left [\left (\sup_{0\leq t\leq T}|\bm{x}_t^m-\bm{x}_t|\right )^2\right ]=0.
\end{equation}
\end{theorem}
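Theorem~\ref{theorem} can be illustrated numerically (this is a sketch, not part of the proof). In one dimension with $F = 0$ and constant $\gamma$, $\sigma$, the drift $S$ vanishes and the limit is simply $x_t = x_0 + (\sigma/\gamma)W_t$, so we can drive both processes with the same Brownian path and monitor the supremum of their difference; for small $m$ it is of order $\sqrt{m}$.

```python
import numpy as np

gam, sig, m = 1.0, 1.0, 0.01   # illustrative constants; F = 0, so S = 0
T, n = 1.0, 100_000
dt = T / n

rng = np.random.default_rng(3)
dW = rng.normal(0.0, np.sqrt(dt), size=n)

x_m, v, W = 0.0, 0.0, 0.0
sup_diff = 0.0
for i in range(n):
    x_m += v * dt                                   # Euler step for Eq. (1)
    v += (-gam / m) * v * dt + (sig / m) * dW[i]
    W += dW[i]
    x_limit = (sig / gam) * W                       # solution of Eq. (2)
    sup_diff = max(sup_diff, abs(x_m - x_limit))
```

Note that the Euler scheme requires $dt \ll m/\gamma$ here, which is why the limit $m \to 0$ is delicate to probe numerically and the analytical result is valuable.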
Before proving the theorem, we state a lemma about convergence of stochastic integrals, which restates (in a slightly less general form) a theorem of Kurtz and Protter \cite{kurtz91}.
\subsection{Convergence of Stochastic Integrals}\label{sec:convergenceSI}
Let $\{\mathcal{F}_t\}_{t \geq 0}$ be a filtration on a probability space $(\Omega,\mathcal{F},P)$. We assume that it satisfies the usual conditions \cite{revuz}. In our case, $\mathcal{F}_t$ will be (the usual augmentation of) $\sigma(\{\bm{W}_s:s\leq t\})$, the $\sigma$-algebra generated by a $k$-dimensional Wiener process $\bm{W}_t$ up to time $t$.
Suppose $\bm{H}$ is an $\{\mathcal{F}_t\}$-adapted semi-martingale with paths in $C_{\mathbb{R}^n}[0,T]$, whose Doob-Meyer decomposition is $\bm{H}_t = \bm{M}_t + \bm{A}_t$ so that $\bm{M}_t$ is an $\mathcal{F}_t$-local martingale and $\bm{A}_t$ is a process of locally bounded variation \cite{revuz}. For a continuous $\{\mathcal{F}_t\}$-adapted process $\bm{X}$ with paths in $C_{\mathbb{R}^{d\times n}}[0,T]$ and for $t \leq T$ consider the It\^o integral
\begin{equation}\label{eq:defint}
\int_0^t \bm{X}_s\,d\bm{H}_s = \lim \sum_{i}\bm{X}_{t_i}(\bm{H}_{t_{i+1}} - \bm{H}_{t_i}),
\end{equation}
where $\{t_i\}$ is a partition of $[0,t]$ and the limit is taken as the maximum of $t_{i+1}-t_i$ goes to zero. For a continuous process $\bm{X}_s$ such that
\begin{equation}
P\left ( \int_0^T |\bm{X}_s|^2 \,d\langle \bm{M} \rangle_s + \int_0^T |\bm{X}_s|\,dV_s(\bm{A}) <\infty\right ) = 1,
\end{equation}
where $\langle \bm{M} \rangle_s$ is the quadratic variation of $\bm{M}_s$ and $V_s(\bm{A})$ is the total variation of $\bm{A}_s$, the limit in equation~(\ref{eq:defint}) exists in the sense that
$$
\sup_{0 \leq t \leq T}\left (\left | \int_0^t \bm{X}_s\,d\bm{H}_s - \sum_{i}\bm{X}_{t_i}(\bm{H}_{t_{i+1}} - \bm{H}_{t_i})\right |\right )\rightarrow 0,
$$
in probability. This (and related) convergence modes will be used throughout the paper \cite{protter,revuz}.
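The left-endpoint sums in equation~(\ref{eq:defint}) can be probed directly in a simple scalar case: for $\bm{X}_s = \bm{H}_s = W_s$ a standard Brownian motion, the It\^o integral $\int_0^T W_s\,dW_s$ equals $(W_T^2 - T)/2$, and the discrete sums approach it as the mesh is refined. A sketch (one sampled path on a fine grid):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 200_000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])  # W at the grid points t_i

# Left-endpoint Riemann sum: sum_i W_{t_i} (W_{t_{i+1}} - W_{t_i}).
ito_sum = np.sum(W[:-1] * dW)

# Closed form of the Ito integral for this path.
exact = 0.5 * (W[-1] ** 2 - T)
```

Evaluating $W$ at the right endpoints instead would converge to $(W_T^2 + T)/2$, which is one way to see why the choice of evaluation point matters for stochastic integrals.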
Consider $(\bm{U}^m,\bm{H}^m)$ with paths in $C_{\mathbb{R}^d\times\mathbb{R}^n}[0,T]$ adapted to $\{\mathcal{F}_t\}$ where $\bm{H}^m_t$ is a semi-martingale with respect to $\mathcal{F}_t$. Let $\bm{H}^m_t = \bm{M}_t^m + \bm{A}_t^m$ be its Doob-Meyer decomposition. Let $\bm{f}:\mathcal{U}\rightarrow \mathbb{R}^{d\times n}$ be a continuous {matrix-valued} function and let $\bm{X}^m$, with paths in $C_{\mathcal{U}}[0,T]$, satisfy the SDE
\begin{equation}\label{eq:KPthm1}
\bm{X}_{t}^{m} = \bm{X}_0 + \bm{U}_{t}^m + \int_0^{t} \bm{f}(\bm{X}_s^{m})\,d\bm{H}_s^m,
\end{equation}
where $\bm{X}_0^m = \bm{X}_0 \in\mathbb{R}^d$ is the same initial condition for all $m$. Define $\bm{X}$, with paths in $C_{\mathcal{U}}[0,T]$, to be the solution of
\begin{equation}\label{eq:KPlimit}
\bm{X}_t = \bm{X}_0 + \int_0^tf(\bm{X}_s)\,d\bm{H}_s.
\end{equation}
Note that (\ref{eq:KPthm1}) implies $\bm{U}_0^m = \bm{0}$ for all $m$.
\begin{lemma}\cite[Theorem 5.10]{kurtz91}\label{theorem:KP}
Assume $(\bm{U}^m,\bm{H}^m)\rightarrow (\bm{0},\bm{H})$ in probability with respect to $C_{\mathbb{R}^d\times\mathbb{R}^n}[0,T]$, i.e. for all $\epsilon>0$,
\begin{equation}\label{eq:defofprob}
P\left [ \sup_{0\leq s \leq T} \left( |\bm{U}_s^m|+|\bm{H}_s^m-\bm{H}_s| \right) >\epsilon\right ]\rightarrow 0,
\end{equation}
as $m\rightarrow 0$, and the following condition is satisfied:
\begin{condition}[Tightness condition]\label{condition}
The total variations, $\{V_t(\bm{A}^m)\}$, are stochastically bounded for each $t>0$, i.e. $P[V_t(\bm{A}^m)>L]\rightarrow 0$ as $L\rightarrow\infty$, uniformly in $m$.
\end{condition}
Suppose that there exists a unique global solution to equation~(\ref{eq:KPlimit}). Then, as $m\rightarrow0$, $\bm{X}^m$ converges to $\bm{X}$, the solution of equation~(\ref{eq:KPlimit}), in probability with respect to $C_{\mathcal{U}}([0,T])$.
\end{lemma}
To cast the limiting equation in the form of Lemma~\ref{theorem:KP}, it would be enough to rewrite equation~(\ref{eq:SDEgeneral}) and check that Condition~\ref{condition} is satisfied. In our case, the limiting equation is
\begin{equation}
d\bm{x}_t=\left[ \bm{\gamma}^{-1}(\bm{x}_t)\bm{F}(\bm{x}_t)+\bm{S}(\bm{x}_t) \right] dt + \bm{\gamma}^{-1}(\bm{x}_t)\bm{\sigma}(\bm{x}_t)d\bm{W}_t, \quad \bm{x}_0 = \bm{x}.
\end{equation}
To state the limiting equation, it would be enough to define
$$\bm{f}(\bm{x}) = (\bm{\gamma}^{-1}(\bm{x})\bm{F}(\bm{x}), \bm{\gamma}^{-1}(\bm{x})\bm{\sigma}(\bm{x}), \bm{S}(\bm{x})).$$
However, to describe the equations with $m >0$ using the same function $\bm{f}$, we need it to have more components. In the limit $m \to 0$ these additional components will be integrated against zero processes and thus will not contribute to the stochastic integral. That is, we will apply Lemma~\ref{theorem:KP}, with $\bm{f}$ of the form
\begin{equation}
\bm{f}(\bm{x}) = (\bm{\gamma}^{-1}(\bm{x})\bm{F}(\bm{x}), \bm{\gamma}^{-1}(\bm{x})\bm{\sigma}(\bm{x}), \bm{S}(\bm{x}), ... ),
\end{equation}
where $\bm{f}$ contains more components and the limit process $\bm{H}_t$ has zeros in the corresponding rows, i.e.
\begin{equation}
\bm{H}_t = \begin{pmatrix} t \\ \bm{W}_t \\ t \\ 0 \\ \vdots \\ 0 \end{pmatrix}.
\end{equation}
Note also that $\bm{H}_t$ has two components equal to $t$ to separate the noise-induced drift $\bm{S}$ from the term $\bm{\gamma}^{-1}\bm{F}$.
\begin{proof}[Proof of Theorem~\ref{theorem}]
We first state and prove a lemma about the convergence of the processes $m\bm{v}^m$ to zero.
\begin{lemma}\label{lemma:convergenceVas}
For each $m>0$, let $\bm{x}_t^m$ be any $\mathcal{F}_t$-adapted process with continuous paths in $\mathcal{K}$ and define $\bm{v}_t^m$ as the solution to the SDE given by the second equation in~(\ref{eq:SDEgeneral}) with functions $\bm{F}$, $\bm{\gamma}$, and $\bm{\sigma}$ satisfying Assumptions~\ref{assume:bddcoeffs}-\ref{assume:stop}. Then, for any $p \geq 1$, $m\bm{v}^m\rightarrow 0$ as $m\rightarrow 0$ in $L^p$ with respect to $C_{\mathbb{R}^d}[0,T]$, and hence also in probability with respect to $C_{\mathbb{R}^d}[0,T]$, i.e.
\begin{equation}
\label{lemma2assertion}
\lim_{m\rightarrow 0} E\left[ \left (\sup_{0\leq t \leq T} |m\bm{v}_t^m| \right )^p \right] =0,
\end{equation}
and, for all $\epsilon>0$,
\begin{equation}
\lim_{m\rightarrow 0} P\left (\sup_{0\leq t \leq T} |m\bm{v}_t^m|>\epsilon \right )=0.
\end{equation}
\end{lemma}
\begin{proof}
Consider the SDE for $m\bm{v}_t^m$ given by equation~(\ref{eq:SDEgeneral}),
\begin{equation}
d(m\bm{v}_t^m) = \bm{F}(\bm{x}_t^m)\;dt -\frac{\bm{\gamma}(\bm{x}_t^m)}{m}(m\bm{v}_t^m)\;dt + \bm{\sigma}(\bm{x}_t^m)\;
d\bm{W}_t.
\end{equation}
This equation is similar to an Ornstein-Uhlenbeck equation, which we would obtain with $\bm{F} = 0$ and $\bm{\gamma}$ and $\bm{\sigma}$ constant. We exploit this similarity to bound $m\bm{v}^m$, using a technique similar to the proof of Lemma 3.19 in \cite{blount1991}. We first define the function
\begin{equation}
f_m(u) = \frac{2m}{c_{\lambda}} \int_0^{\sqrt{c_{\lambda} u/(2 m \Gamma )}}e^{s^2/2}\int_0^s e^{-r^2/2}\;dr\;ds,
\end{equation}
where $\Gamma = C_T^2d$ ($C_T$ is the bound from Assumption~3 and $d$ is the dimension of $\bm{v}_t^m$, i.e. the dimension of the space).
Note that $f_m(0) = 0$, $f'_m(u),f''_m(u)>0$ for all $u\in[0,\infty)$. Also, $f_m(u)\rightarrow\infty$ and $f_m(m^2u)\rightarrow0$ as $m\rightarrow0$ for all $u>0$. Furthermore,
\begin{equation}
\label{eq:Aidentity}
Af_m(u) = 1
\end{equation}
for all $u\in[0,\infty)$,
where $A$ is the differential operator defined by
\begin{equation}
\label{eq:Aequation}
Af_m(u) \equiv f'_m(u)\left(-\frac{c_\lambda}{m}u+ 2 \Gamma\right) + 4 \Gamma u f''_m(u).
\end{equation}
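The identity $Af_m(u)=1$ can also be checked numerically (a sketch; the values of $m$, $c_\lambda$ and $\Gamma$ below are arbitrary). Writing $f_m(u) = \frac{2m}{c_\lambda}\int_0^{h(u)} g(s)\,ds$ with $g(s) = e^{s^2/2}\int_0^s e^{-r^2/2}\,dr$ and $h(u) = \sqrt{c_\lambda u/(2m\Gamma)}$, $f'_m$ follows from the fundamental theorem of calculus and $f''_m$ is approximated by a central difference.

```python
import numpy as np
from scipy.integrate import quad

m, c_lam, Gam = 0.1, 2.0, 3.0   # arbitrary illustrative values

def g(s):
    # g(s) = e^{s^2/2} * \int_0^s e^{-r^2/2} dr
    return np.exp(s ** 2 / 2) * quad(lambda r: np.exp(-r ** 2 / 2), 0, s)[0]

def fprime(u):
    # f'_m(u) = (2m/c_lam) * g(h(u)) * h'(u) by the chain rule.
    h = np.sqrt(c_lam * u / (2 * m * Gam))
    h_prime = c_lam / (4 * m * Gam * h)
    return (2 * m / c_lam) * g(h) * h_prime

def Af(u, eps=1e-3):
    # A f_m(u) = f'_m(u)(-c_lam*u/m + 2*Gam) + 4*Gam*u*f''_m(u)
    fpp = (fprime(u + eps) - fprime(u - eps)) / (2 * eps)
    return fprime(u) * (-c_lam * u / m + 2 * Gam) + 4 * Gam * u * fpp
```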
We will prove that
\begin{equation}
P\left (\sup_{0\leq t \leq T}|m\bm{v}_t^m|^2\geq \epsilon \right ) \leq
\frac{f_m\left (|m\bm{v}|^2\right ) +T}{f_m(\epsilon)}\rightarrow 0,
\end{equation}
as $m\rightarrow 0$. Using the It\^o product formula for $|m\bm{v}^m_t|^2 = m(\bm{v}_t^m)^*m\bm{v}^m_t$, we obtain
\begin{align}
d(m(\bm{v}_t^m)^*m\bm{v}_t^m) =& m(\bm{v}_t^m)^*d(m\bm{v}_t^m) + d(m\bm{v}_t^m)^*m\bm{v}_t^m +
d(m\bm{v}_t^m)^*d(m\bm{v}_t^m) \\
\label{eq:mv2diff}
=&-\frac{2}{m}(\bm{\gamma}(\bm{x}_t^m)m\bm{v}_t^m,m\bm{v}_t^m)\;dt \\
+& Tr(\bm{\sigma} (\bm{x}_t^m) \bm{\sigma} ^* (\bm{x}_t^m))\;dt + m(\bm{v}_t^m)^* \bm{F}(\bm{x}_t^m)dt + \bm{F}(\bm{x}_t^m)^*m\bm{v}_t^mdt \nonumber \\
+& m(\bm{v}_t^m)^* (\bm{\sigma}(\bm{x}_t^m)\;d\bm{W}_t) + (\bm{\sigma}(\bm{x}_t^m)\;d\bm{W}_t)^* m\bm{v}_t^m. \nonumber
\end{align}
By the It\^o formula for all $t\in[0,T]$,
\begin{align}
f_m\left(|m\bm{v}_t^m|^2\right ) = &f_m\left (|m\bm{v}|^2\right ) \\
+& \int_0^t \left [f'_m\left (|m\bm{v}_s^m|^2 \right)\Big (-\frac{2}{m}(\bm{\gamma}(\bm{x}_s^m)m\bm{v}_s^m,m\bm{v}_s^m) \right .\nonumber\\
+& \left .m(\bm{v}_s^m)^* \bm{F}(\bm{x}_s^m) + \bm{F}(\bm{x}_s^m)^*m\bm{v}_s^m + Tr(\bm{\sigma} (\bm{x}_s^m) \bm{\sigma} ^* (\bm{x}_s^m)) \Big) \right .\nonumber\\
+& 2 f''_m\left (|m\bm{v}_s^m|^2 \right) |m\bm{\sigma} ^* (\bm{x}_s^m) \bm{v}^m_s|^2 \Big ] \;ds
+ M_t \nonumber
\end{align}
where $M_t \in C_{\mathbb{R}} [0,T]$ is a martingale with $E[M_t]=0$. Next we use the bound,
\begin{align}
(m\bm{v}_t^m,\bm{F}(\bm{x}_t^m)) \leq& \frac{1}{2}|m\bm{v}_t^m|^2 + \frac{1}{2}|\bm{F}(\bm{x}_t^m)|^2
\end{align}
and from Assumption~1
\begin{equation}
(\bm{\gamma}(\bm{x}_t^m)m\bm{v}_t^m,m\bm{v}_t^m) \geq c_{\lambda}|m\bm{v}_t^m|^2 .
\end{equation}
Using $f' _m (u), f'' _m (u)>0$ for all $u\in[0,\infty)$, we obtain
\begin{align}
f_m\left(|m\bm{v}_t^m|^2\right) \leq & f_m\left (|m\bm{v}|^2\right )+\int_0^t \Big[f'_m\left(|m\bm{v}_s^m|^2\right )\Big(-\frac{2c_\lambda}{m}|m\bm{v}_s^m|^2 \\
+& |m\bm{v}_s^m|^2 + |\bm{F}(\bm{x}_s^m)|^2 + Tr(\bm{\sigma} (\bm{x}_s^m) \bm{\sigma} ^* (\bm{x}_s^m)) \Big)\nonumber \\
+& 2 f''_m\left(|m\bm{v}_s^m|^2\right)|m\bm{v}_s^m|^2 | \bm{\sigma} (\bm{x}_s^m) |^2 \Big ] \;ds
+ M_t . \nonumber
\end{align}
For small $m>0$, the first term under the integral will dominate the second.
More precisely, for $\bm{x}_s^m$ in the compact set $\mathcal{K}$ and for $m$ sufficiently small so that $\frac{c_\lambda}{m} > 1$, we have
\begin{align}
f_m\left (|m\bm{v}_t^m|^2\right ) \leq & f_m\left (|m\bm{v}|^2\right ) \\
+&\int_0^t [f'_m\left (|m\bm{v}_s^m|^2\right )\Big(-\frac{c_\lambda}{m}|m\bm{v}_s^m|^2 +C_T ^2 + C_{T}^2d\Big)\nonumber \\
+& 2 f''_m\left (|m\bm{v}_s^m|^2\right )|m\bm{v}_s^m|^2 C_T^2\Big ] \;ds
+ M_t . \nonumber
\end{align}
Using the definition of $\Gamma$ and equations~(\ref{eq:Aequation}) and~(\ref{eq:Aidentity}), we get
\begin{align}
f_m(|m\bm{v}_t^m|^2) \leq & f_m\left (|m\bm{v}|^2\right )+\int_0^t [f'_m(|m\bm{v}_s^m|^2)(-\frac{c_\lambda}{m}|m\bm{v}_s^m|^2 +2 \Gamma) \\
+& 4 \Gamma |m\bm{v}_s^m|^2 f''_m(|m\bm{v}_s^m|^2) \Big ] \;ds
+ M_t\nonumber\\
= & f_m\left (|m\bm{v}|^2\right )+\int_0^t Af_m(|m\bm{v}_s^m| ^2)\;ds + M_t \\
= &f_m\left (|m\bm{v}|^2\right )+ t + M_t.
\end{align}
Define $\tau_\epsilon^m=\inf\{t: |m\bm{v}_t^m|^2=\epsilon\}$. Then for all
$\epsilon>0$,
\begin{equation}
P\left ( \sup_{0\leq t\leq T} |m\bm{v}_t^m|^2\geq\epsilon \right ) =
P\left ( |m\bm{v}_{T\wedge \tau_\epsilon^m}^m|^2\geq\epsilon\right ).
\end{equation}
Next, because $f_m$ is an increasing function (since $f' _m (u)>0$ for all $u\geq0$),
\begin{equation}
P\left ( |m\bm{v}_{T\wedge \tau_\epsilon^m}^m|^2\geq\epsilon\right ) =
P\left ( f_m( |m\bm{v}_{T\wedge \tau_\epsilon^m}^m|^2)\geq f_m(\epsilon)\right ).
\end{equation}
Finally we use Chebyshev's inequality and the Optional Stopping Theorem to obtain,
\begin{align}
P\left ( \sup_{0\leq t\leq T} |m\bm{v}_t^m|^2\geq\epsilon \right ) \leq& \frac{E[f_m(|m\bm{v}_{T\wedge \tau_\epsilon^m}^m|^2)]}{f_m(\epsilon)} \leq \frac{E[f_m\left (|m\bm{v}|^2\right )+T\wedge \tau_\epsilon^m]}{f_m(\epsilon)}\\
\leq& \frac{f_m\left (|m\bm{v}|^2\right ) +T}{f_m(\epsilon)}.
\end{align}
Recalling that $f_m \left (m^2|\bm{v}|^2\right ) \rightarrow 0$ and $f_m(\epsilon) \rightarrow \infty$ as $m \rightarrow 0$, this inequality proves that as $m\rightarrow 0$, $ \sup_{0 \leq t \leq T} | m\bm{v}_t^m|^2 \rightarrow 0$ in probability, i.e.,
for all $\epsilon>0$,
\begin{equation}
\label{eq:convergenceinprob}
\lim_{m\rightarrow 0} P \left (\sup_{0\leq t \leq T} |m\bm{v}_t^m| ^2 > \epsilon \right ) =0.
\end{equation}
We now prove that $m\bm{v}^m$ converges to zero in $L^p$ with respect to $C_{\mathbb{R}^d}[0,T]$. Let $q > 1$; then
\begin{align*}
E\left[ \left (\sup_{0\leq t \leq T} |m\bm{v}_t^m|^2 \right )^q \right] &= \int _0 ^{\infty} q x ^{q - 1} P\left ( \sup_{0\leq t\leq T} |m\bm{v}_t^m|^2 \geq x \right ) dx \\
&\leq \int _0 ^{\infty} q x ^{q - 1} \frac{f_m\left (|m\bm{v}|^2\right ) +T}{f_m(x)} dx \\
&\leq q(1 + T) \int _0 ^{\infty} \frac{x ^{q - 1}}{f_m(x)} dx
\end{align*}
for $m$ sufficiently small since $f_m \left (|m \bm{v}|^2\right ) \rightarrow 0$ as $m \rightarrow 0$. Since
\begin{align*}
f_m(x) &= \frac{2m}{c_{\lambda}} \int_0^{\sqrt{c_{\lambda} x/(2 m \Gamma )}}e^{s^2/2}\int_0^s e^{-r^2/2}\;dr\;ds \\
&\geq \frac{2m}{c_{\lambda}} \int_0^{\sqrt{c_{\lambda} x/(2 m \Gamma )}}e^{s^2/2} \left( \frac{s}{2} \right) e^{-s^2/8} \;ds \\
&= \frac{1}{4 \Gamma} \int_0^x e^{\frac{3 c_{\lambda} u}{16 m \Gamma}} du \; \geq \; \frac{1}{4 \Gamma} \left( \frac{x}{2} \right) e^{\frac{3 c_{\lambda} x}{32 m \Gamma}}
\end{align*}
it follows that
\begin{equation*}
E\left[ \left (\sup_{0\leq t \leq T} |m\bm{v}_t^m|^2 \right )^q \right] \leq C(q) < \infty
\end{equation*}
where $C(q)$ depends on $q$ but is independent of $m$. Thus, there exists $m_0 > 0$ such that the family $\{ \sup_{0\leq t \leq T} |m\bm{v}_t^m| ^p \; : \; 0 < m \leq m_0 \}$ is uniformly integrable for $1 \leq p < 2q$ \cite[13.3]{williams}. This fact together with (\ref{eq:convergenceinprob}) implies (\ref{lemma2assertion}) \cite[13.7]{williams}. Q.E.D.
\end{proof}
To determine the limit of SDE~(\ref{eq:SDEgeneral}) as $m\rightarrow 0$, we rewrite the equation for $\bm{v}_t^m$ as
\begin{equation}
\bm{\gamma}(\bm{x}^m_t)\bm{v}^m_t \,dt = \bm{F}(\bm{x}^m_t)\,dt + \bm{\sigma}(\bm{x}^m_t)d\bm{W}_t - md\bm{v}^m_t.
\end{equation}
By Assumption~\ref{assume:bddcoeffs}, $\bm{\gamma}(\bm{x})$ is invertible, thus
\begin{equation}\label{eq:vdt}
d\bm{x}_t^m = \bm{v}_t^m\,dt = \bm{\gamma}^{-1}(\bm{x}^m_t) \bm{F}(\bm{x}^m_t)\,dt +
\bm{\gamma}^{-1}(\bm{x}^m_t) {\bm{\sigma}}(\bm{x}^m_t)d\bm{W}_t - m\bm{\gamma}^{-1}(\bm{x}^m_t)\,d\bm{v}^m_t,
\end{equation}
or, in integral form,
\begin{equation}\label{eq:sub}
\bm{x}_t^m =\bm{x} + \int_0^t\bm{\gamma}^{-1}(\bm{x}^m_s) \bm{F}(\bm{x}^m_s)\,ds +\int_0^t\bm{\gamma}^{-1}(\bm{x}^m_s){\bm{\sigma}}(\bm{x}^m_s)
d\bm{W}_s - \int_0^t m\bm{\gamma}^{-1}(\bm{x}^m_s)\,d\bm{v}^m_s.
\end{equation}
In order to apply Lemma~\ref{theorem:KP} we need to integrate the last term by parts (see Remark~\ref{remark2}).
\subsection{Integration by parts to satisfy assumptions of Lemma~\ref{theorem:KP}} To determine the limit of the expression~(\ref{eq:sub}) as $m\rightarrow 0$, we consider its $i$th component. Integrating by parts the last term on the right-hand side of equation~(\ref{eq:sub}) we obtain, noting that $\bm{v}_0^m = \bm{v}$,
\begin{align}\label{eq:subvvstar}
\int_0^t m\left[ (\gamma^{-1})_{ij}(\bm{x}^m_s)\right]\,d(v^m_s)_j =& (\gamma^{-1})_{ij}(\bm{x}^m_t)m({v}_t^m)_j - (\gamma^{-1})_{ij}(\bm{x})m{v}_j \\
-& \int_0^t \frac{\partial}{\partial x_{l}}[(\gamma^{-1})_{ij}(\bm{x}^m_s)]m({v}^m_s)_jd({x}^m_s)_{l}.\nonumber
\end{align}
Since $d(x^m_s)_{{l}} = (v^m_s)_{{l}}\,ds$, the last integral can be rewritten as
\begin{equation}
\int_0^t \frac{\partial}{\partial x_{l}}[(\gamma^{-1})_{ij}(\bm{x}^m_s)]m({v}^m_s)_j (v^m_s)_{{l}}\,ds.
\end{equation}
Note that $\bm{x}_t^m$ has bounded variation, hence the It\^o term in the integration by parts formula is zero. The product $m({v}^m_s)_j (v^m_s)_{{l}}$ in the above integral is the $(j,{l})$-entry of the (outer product) matrix $m\bm{v}^m_s(\bm{v}^m_s)^*$. We will express this matrix as a solution of an equation. To this end, we calculate, using the It\^o product formula,
\begin{equation}
d[m\bm{v}^m_s(m\bm{v}^m_s)^*] = \,d(m\bm{v}^m_s)(m\bm{v}^m_s)^* + m\bm{v}^m_s\,d(m\bm{v}^m_s)^* + d(m\bm{v}^m_s)\,d(m\bm{v}^m_s)^*.
\end{equation}
We now substitute for $md(\bm{v}^m_s)$ and for its adjoint the expression from equation~(\ref{eq:SDEgeneral}), obtaining
\begin{align}\label{eq:dmvmv}
d[m\bm{v}^m_s(m\bm{v}^m_s)^*] =& \left[ m\bm{F}(\bm{x}_s^m)(\bm{v}^m_s)^* - m\bm{\gamma}(\bm{x}_s^m)\bm{v}^m_s(\bm{v}^m_s)^* \right] \,ds \\
+& m\left (\bm{\sigma}(\bm{x}_s^m)\,d\bm{W}_s \right )(\bm{v}^m_s)^*\nonumber\\
+& \left[ m\bm{v}^m_s\bm{F}(\bm{x}_s^m)^* - m\bm{v}^m_s(\bm{v}^m_s)^*\bm{\gamma}^*(\bm{x}_s^m) \right] \,ds \nonumber\\
+&m\bm{v}^m_s\left (\bm{\sigma}(\bm{x}_s^m)\,d\bm{W}_s \right )^* + \bm{\sigma}(\bm{x}_s^m)\bm{\sigma}^*(\bm{x}_s^m)\,ds \nonumber.
\end{align}
Because of Lemma~\ref{lemma:convergenceVas}, we expect the terms proportional to $m\bm{v}_s^m$ to converge to zero in probability. Defining
\begin{equation}\label{eq:tildeU}
\tilde{\bm{U}}_t^m = \int_0^t m{\bm{v}}^m_s{\bm{F}}^*(\bm{x}_s^m) ds + \int_0^tm{\bm{v}}^m_s({\bm{\sigma}}(\bm{x}_s^m)d\bm{W}_s)^*,
\end{equation}
we can rewrite equation~(\ref{eq:dmvmv}) as
\begin{align}\label{eq:lyapunov}
-& m\bm{v}_t^m(\bm{v}_t^m )^*\bm{\gamma}^*(\bm{x}_t^m)dt - \bm{\gamma}(\bm{x}_t^m)m \bm{v}_t^m(\bm{v}_t^m)^*dt \\ = & d[m\bm{v}_t^m(m\bm{v}_t^m)^*]-\bm{\sigma}(\bm{x}_t^m)\bm{\sigma}^*(\bm{x}_t^m)\,dt
- d\tilde{\bm{U}}_t^m - d(\tilde{\bm{U}}_t^m)^*.\nonumber
\end{align}
Equation~(\ref{eq:lyapunov}) can be written as
\begin{align}\label{eq:lyapunov2}
&[m\bm{v}_t^m(\bm{v}_t^m )^*dt][-\bm{\gamma}^*(\bm{x}_t^m)] + [- \bm{\gamma}(\bm{x}_t^m)][m \bm{v}_t^m(\bm{v}_t^m)^*dt] \\
= & d[m\bm{v}_t^m(m\bm{v}_t^m)^*]-\bm{\sigma}(\bm{x}_t^m)\bm{\sigma}^*(\bm{x}_t^m)\,dt - d\tilde{\bm{U}}_t^m - d(\tilde{\bm{U}}_t^m)^*.\nonumber
\end{align}
Denoting $m\bm{v}_t^m(\bm{v}_t^m )^*dt$ by $\bm{V}$, $-\bm{\gamma}(\bm{x}_t^m)$ by $\bm{A}$ and the right-hand side of equation~(\ref{eq:lyapunov2}) by $\bm{C}$, we obtain
\begin{equation}\label{eq:lyapunovgeneral}
\bm{A}\bm{V} + \bm{V}\bm{A}^* = \bm{C},
\end{equation}
which is a Lyapunov equation \cite{ortega,bellman}. By \cite[Theorem 6.4.2]{ortega}, if the real parts of all eigenvalues of $\bm{A}$ are negative, then the Lyapunov equation has a unique solution, given by \cite[Chapter 11]{bellman}
\begin{equation}\label{eq:lyapunovanalytical}
\bm{V} = -\int_0^\infty e^{\bm{A}y}\bm{C}e^{\bm{A}^*y}\,dy.
\end{equation}
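The closed-form solution~(\ref{eq:lyapunovanalytical}) is easy to check numerically. The sketch below is an illustration only (not part of the proof); it assumes NumPy and SciPy are available, and the Hurwitz matrix $\bm{A}$ and right-hand side $\bm{C}$ are arbitrary choices. It approximates the integral by a trapezoidal rule and compares the result with SciPy's direct Lyapunov solver:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# A Hurwitz matrix A (all eigenvalues have negative real part), playing the
# role of -gamma, and an arbitrary right-hand side C.
A = -np.array([[2.0, 0.3],
               [0.1, 1.5]])
C = np.array([[1.0, 0.2],
              [0.2, 0.5]])

# V = -int_0^infty e^{Ay} C e^{A*y} dy, approximated on [0, Y] by the
# trapezoidal rule; the integrand decays exponentially, so Y = 20 suffices.
ys = np.linspace(0.0, 20.0, 4001)
vals = np.array([expm(A * y) @ C @ expm(A.T * y) for y in ys])
dy = ys[1] - ys[0]
V_quad = -(vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * dy

# SciPy solves A X + X A^H = C directly.
V_scipy = solve_continuous_lyapunov(A, C)

# Residual of the Lyapunov equation for the quadrature solution.
residual = A @ V_quad + V_quad @ A.T - C
```

Up to quadrature error, `V_quad` should agree with `V_scipy`, and the residual should vanish, in accordance with formula~(\ref{eq:lyapunovanalytical}).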
By Assumption~1, this applies to $\bm{A} = -\bm{\gamma}(\bm{x}_t^m)$, giving
\begin{align}\label{eq:mvv}
m\bm{v}_t^m(\bm{v}_t^m)^*dt =& -\int_0^\infty e^{-\bm{\gamma}(\bm{x}_t^m)y}\left ( d[m\bm{v}_t^m(m\bm{v}_t^m)^*]-\bm{\sigma}(\bm{x}_t^m)\bm{\sigma}^*(\bm{x}_t^m)\,dt \right. \\
-& \left . d\tilde{\bm{U}}_t^m - d(\tilde{\bm{U}}_t^m)^*\right )e^{-\bm{\gamma}^*(\bm{x}_t^m)y}\,dy \nonumber\\
=&\underbrace{-\int_0^\infty e^{-\bm{\gamma}(\bm{x}_t^m)y}d[m\bm{v}_t^m(m\bm{v}_t^m)^*]e^{-\bm{\gamma}^*(\bm{x}_t^m)y}\,dy}_{d\bm{C}^1_t}\nonumber \\
+& \underbrace{\int_0^\infty e^{-\bm{\gamma}(\bm{x}_t^m)y}\left (\bm{\sigma}(\bm{x}_t^m)\bm{\sigma}^*(\bm{x}_t^m)\,dt\right )\,e^{-\bm{\gamma}^*(\bm{x}_t^m)y}\,dy}_{d\bm{C}^2_t} \nonumber\\
+& \underbrace{\int_0^\infty e^{-\bm{\gamma}(\bm{x}_t^m)y}\left (d\tilde{\bm{U}}_t^m + d(\tilde{\bm{U}}_t^m)^*\right )e^{-\bm{\gamma}^*(\bm{x}_t^m)y}\,dy}_{d\bm{C}^3_t} . \nonumber
\end{align}
We will treat each term in a different way: after substituting the above expression into equation~(\ref{eq:subvvstar}), the term with $\bm{C}_t^1$ will be included in the $\bm{H}_t^m$ process (in the notation of Lemma~\ref{theorem:KP}), the $\bm{C}_t^2$ term will become a part of the noise-induced drift term $\bm{S}$ in the limiting equation~(\ref{eq:SKlimit}), and the $\bm{C}_t^3$ term will become a part of $\bm{U}_t^m$ which will be shown to converge to zero.
For the first term,
\begin{equation}
d(C^1_t)_{ij} = - \int_0^\infty (e^{-\bm{\gamma}(\bm{x}_t^m)y})_{i k_1}(e^{-\bm{\gamma}^*(\bm{x}_t^m)y})_{k_2 j}\,dy \,d[m({v}_t^m)_{k_1}(m{v}_t^m)_{k_2}^*],
\end{equation}
where the integral exists and is finite for all $t\in[0,T]$.
For the second term, $d\bm{C}^2_t=\bm{J}(\bm{x}_t^m)dt$, where $\bm{J}(\bm{x}):\mathcal{U}\rightarrow\mathbb{R}^{d\times d}$ is the solution to the Lyapunov equation
\begin{equation}\label{eq:Glyapunov}
\bm{J}\bm{\gamma}^* + \bm{\gamma}\bm{J} = \bm{\sigma}\bm{\sigma}^*,
\end{equation}
as follows from differentiating the (Lebesgue) integrals in the identity
\begin{equation}
{\int_0^t[\bm{J}(\bm{x}_s^m)\bm{\gamma}^*(\bm{x}_s^m) + \bm{\gamma}(\bm{x}_s^m)\bm{J}(\bm{x}_s^m)]\,ds = \int_0^t \bm{\sigma}(\bm{x}_s^m)\bm{\sigma}^*(\bm{x}_s^m)\,ds. }
\end{equation}
For the third term, using the equation~(\ref{eq:tildeU}) for $\tilde{\bm{U}}^m$, the entries of $\bm{C}^3$ can be written as
\begin{align}
(\bm{C}^3_t)_{ij} =& \int_0^t \int_0^\infty (e^{-\bm{\gamma}(\bm{x}_s^m)y})_{i k_1}\left ([m{\bm{v}}^m_s{\bm{F}}^*(\bm{x}_s^m)]_{k_1 k_2} ds \right. \\
+&\left . [m{\bm{v}}^m_s({\bm{\sigma}}(\bm{x}_s^m)d\bm{W}_s)^*]_{k_1 k_2}
+ [{\bm{F}}(\bm{x}_s^m) (m{\bm{v}}^m_s)^*]_{k_1 k_2}ds \right .\nonumber \\
+& \left . [{\bm{\sigma}}(\bm{x}_s^m)d\bm{W}_s(m{\bm{v}}^m_s)^*]_{k_1 k_2}\right )(e^{-\bm{\gamma}^*(\bm{x}_s^m)y})_{k_2 j}\,dy \nonumber \\
=& \sum_{k_1 k_2} \int_0^t \int_0^\infty(e^{-\bm{\gamma}(\bm{x}_s^m)y})_{i k_1}(e^{-\bm{\gamma}^*(\bm{x}_s^m)y})_{k_2 j}\,dy\left ([m{\bm{v}}^m_s{\bm{F}}^*(\bm{x}_s^m)]_{k_1 k_2} ds \right .\nonumber\\
+& \left .[m{\bm{v}}^m_s({\bm{\sigma}}(\bm{x}_s^m)d\bm{W}_s)^*]_{k_1 k_2}
+ [{\bm{F}}(\bm{x}_s^m) (m{\bm{v}}^m_s)^*]_{k_1 k_2}ds \right.\nonumber \\
+&\left . [{\bm{\sigma}}(\bm{x}_s^m)d\bm{W}_s(m{\bm{v}}^m_s)^*]_{k_1 k_2} \right ).\nonumber
\end{align}
We substitute the expression for $m\bm{v}_t^m(\bm{v}_t^m)^*\,dt$ back into equation~(\ref{eq:subvvstar}). In the resulting formula for $\bm{x}_t^m$, the contribution from $\bm{C}^3$ will form a vector-valued process $\bm{U}^m$. Integrating equation~(\ref{eq:sub}) by parts and substituting equation~(\ref{eq:mvv}) for $m(v_s^m)_j(v_s^m)_{l}\, ds$,
\begin{align}
\label{eq:xmcomplete}
(&{x}_t^m)_i = {x}_i + ({U}_t^m)_i + \int_0^t (\bm{\gamma}^{-1}(\bm{x}_s^m)\bm{F}(\bm{x}_s^m))_i \,ds \\
+ &\left (\int_0^t (\bm{\gamma}^{-1}(\bm{x}_s^m)\bm{\sigma}(\bm{x}_s^m))d\bm{W}_s\right) _i \nonumber\\
+& \int_0^t\frac{\partial}{\partial x_{l}}[(\gamma^{-1})_{ij}(\bm{x}^m_s)]J_{j{l}}(\bm{x}_s^m)\,ds \nonumber\\
+& \int_0^t\frac{\partial}{\partial x_{l}}[(\gamma^{-1})_{ij}(\bm{x}^m_s)] \times \nonumber \\
&\left [-\int_0^\infty (e^{-\bm{\gamma}(\bm{x}_s^m)y})_{jk_1}(e^{-\bm{\gamma}^*(\bm{x}_s^m)y})_{k_2 l}\,dy \right ] d[(mv_s^m)_{k_1}(mv_s^m)_{k_2}],\nonumber
\end{align}
where $\bm{U}_t^m$ is
\begin{align}\label{eq:fullU}
(\bm{U}^m_t)_i =& -(\gamma^{-1})_{ij}(\bm{x}^m_t)m({v}_t^m)_j + (\gamma^{-1})_{ij}(\bm{x})m{v}_j \\
+&\int_0^t\frac{\partial}{\partial x_{l}}[(\gamma^{-1})_{ij}(\bm{x}^m_s)]\times \nonumber\\
&\left [ \int_0^\infty(e^{-\bm{\gamma}(\bm{x}_s^m)y})_{jk_1}(e^{-\bm{\gamma}^*(\bm{x}_s^m)y})_{k_2 l}\,dy \times \right . \nonumber\\
& \left ([m{\bm{v}}^m_s{\bm{F}}^*(\bm{x}_s^m)]_{k_1 k_2} ds
+[m{\bm{v}}^m_s({\bm{\sigma}}(\bm{x}_s^m)d\bm{W}_s)^*]_{k_1 k_2} \right . \nonumber\\
+ & \left . [{\bm{F}}(\bm{x}_s^m) (m{\bm{v}}^m_s)^*]_{k_1 k_2}ds + [{\bm{\sigma}}(\bm{x}_s^m)d\bm{W}_s(m{\bm{v}}^m_s)^*]_{k_1 k_2} \right ).\nonumber
\end{align}
Now we prove that $\bm{U}_t^m\rightarrow 0$ in $L^2$, and hence in probability, with respect to $C_{\mathbb{R}^d}[0,T]$. By Lemma~\ref{lemma:convergenceVas}, the first two terms on the right-hand side of equation~(\ref{eq:fullU}) go to zero in $L^2$ with respect to $C_{\mathbb{R}^d}[0,T]$. The remaining terms in $\bm{U}^m$ are Lebesgue or It\^o integrals whose integrands are products of continuous functions and $m(v_t^m)_i$. We need a lemma about the convergence of these integrals to zero. Recall that in Lemma~\ref{lemma:convergenceVas} we have shown that $m|\bm{v}_t^m| \rightarrow 0$ in $L^2$. The next lemma proves an explicit bound on the rate of this convergence.
\begin{lemma}
\label{lemma:convergenceVpointwise}
For each $m>0$, let $(\bm{x}_t^m, \bm{v}_t^m)$ be the solution to the system~(\ref{eq:SDEgeneral}) with functions $\bm{F}$, $\bm{\gamma}$, and $\bm{\sigma}$ satisfying Assumptions 1-3. Then for any fixed $t\in[0,T]$,
\begin{equation}
E\left [ m|\bm{v}_t^m|^2\right ] \leq C,
\end{equation}
where $C$ is a constant independent of $m$ and of $t \leq T$. Furthermore, this implies that
\begin{equation}
E\left [|m\bm{v}_t^m|^2\right ] \leq Cm.
\end{equation}
\end{lemma}
\begin{proof}
Consider the generator of the diffusion process defined by the system~(\ref{eq:SDEgeneral}):
\begin{equation}
\mathcal{L} = \frac{\sigma_{ik}(\bm{x})\sigma_{jk}(\bm{x})}{2m^2}\frac{\partial^2}
{\partial v_i \partial v_j} + v_i \frac{\partial}{\partial x_i} + \frac{F_i(\bm{x})}{m}\frac{\partial}{\partial v_i} - \frac{\gamma_{ik}(\bm{x})v_k}{m}\frac{\partial}{\partial v_i} ,
\end{equation}
and apply it to the kinetic energy
\begin{equation}
\label{eq:kineticE}
\phi(\bm{x},\bm{v}) = \frac{m}{2}|\bm{v}|^2.
\end{equation}
The result is
\begin{equation}
\mathcal{L}\phi = \frac{Tr(\bm{\sigma}(\bm{x})\bm{\sigma}^*(\bm{x}))}{2m}+ F_i(\bm{x})v_i
-\gamma_{ik}(\bm{x})v_kv_i.
\end{equation}
Next, from Assumption~1 we have
\begin{equation}
\gamma_{ik}(\bm{x})v_kv_i \geq c_\lambda |\bm{v}|^2.
\end{equation}
We use this fact along with the bound
\begin{equation}
F_{i}(\bm{x})v_i = \left( \frac{F_{i}(\bm{x})}{\sqrt{c_{\lambda}}} \right) (\sqrt{c_{\lambda}}v_i) \leq \frac{1}{2c_{\lambda}}|\bm{F}(\bm{x})|^2 + \frac{c_{\lambda}}{2}|\bm{v}|^2 ,
\end{equation}
to obtain
\begin{equation}
\mathcal{L}\phi \leq - \frac{c_{\lambda}}{2} |\bm{v}|^2 + \frac{1}{2c_{\lambda}}|\bm{F}(\bm{x})|^2 + \frac{Tr(\bm{\sigma}(\bm{x})\bm{\sigma}^*(\bm{x}))}{2m},
\end{equation}
for all $\bm{x} \in \mathcal{U}, \bm{v}\in\mathbb{R}^d$. Recall that for $0 \leq t \leq T$, $\bm{x}^m_t$ lies in the compact set $\mathcal{K}$, so that $|\bm{F}(\bm{x})|$ and $|\bm{\sigma}(\bm{x})|$ are bounded by $C_T>0$ (Assumption 3). Thus, we obtain the bound
\begin{equation}
\mathcal{L}\phi(\bm{v}) \leq -\frac{c_{\lambda}}{m}\phi(\bm{v}) + \frac{C_T^2}{2c_{\lambda}}+ \frac{C_T^2d}{2m}.
\end{equation}
For $m<c_{\lambda}d$, the second term is less than the third and thus
\begin{equation}
\label{eq:MainEst}
\mathcal{L}\phi(\bm{v}) \leq -\frac{c_{\lambda}}{m}\phi(\bm{v}) +\frac{C_T^2d}{m}.
\end{equation}
Applying the It\^o formula to the process $y_t ^m \equiv \exp(\frac{c_{\lambda}}{m}t)(\phi(\bm{v}^m _t)-\frac{C_T^2d}{c_{\lambda}})$ we obtain
\begin{equation}
dy ^m _t =
\left[ \frac{c_{\lambda}}{m}e^{\frac{c_{\lambda}}{m}t}\left(\phi(\bm{v}^m _t)-\frac{C_T^2d}{c_{\lambda}}\right ) +e^{\frac{c_{\lambda}}{m}t}\mathcal{L}\phi(\bm{v} ^m _t) \right] dt + e^{\frac{c_{\lambda}}{m}t} (\bm{v}_t^m)^* \bm{\sigma}(\bm{x}_t^m)\;d\bm{W}_t.
\end{equation}
Using inequality~(\ref{eq:MainEst}) we obtain,
\begin{equation}
\frac{c_{\lambda}}{m}e^{\frac{c_{\lambda}}{m}t}\left(\phi(\bm{v}^m _t)-\frac{C_T^2d}{c_{\lambda}}\right ) +e^{\frac{c_{\lambda}}{m}t}\mathcal{L}\phi(\bm{v} ^m _t)
\leq 0.
\end{equation}
Thus, by Dynkin's formula \cite{oksendal},
\begin{equation}
E\left [e^{\frac{c_{\lambda}}{m}t}\left(\phi(\bm{v} ^m _t)-\frac{C_T^2d}{c_{\lambda}}\right )\right ] \leq \frac{m}{2} |\bm{v}|^2 -\frac{C_T^2d}{c_{\lambda}}.
\end{equation}
This implies
\begin{equation}
\label{eq:mv2}
E\left [\frac{m}{2}|\bm{v} ^m _t|^2\right ] \leq \frac{C_T^2d}{c_{\lambda}} \left( 1 - e^{-\frac{c_{\lambda}}{m}t} \right ) + \frac{me^{-\frac{c_{\lambda}}{m}t}}{2} |\bm{v}|^2 \leq \frac{C_T^2d}{c_{\lambda}} + \frac{m}{2} |\bm{v}|^2 \leq \frac{C}{2},
\end{equation}
for $C$ independent of $m$.
Q.E.D.
\end{proof}
Now we can prove a lemma to show the integrals in $\bm{U}^m$ converge to zero.
\begin{lemma}\label{lemma:intmv}
For each $m > 0$, let $\bm{x}_t^m$ be an $\mathcal{F}_t$-adapted process with values in the compact set $\mathcal{K}\subset\mathcal{U}$ for $t \in [0,T]$. If $g:\mathcal{K}\rightarrow\mathbb{R}$ is a continuous function such that $|g(\bm{x})|\leq C_T$ for all $\bm{x}\in \mathcal{K}$, then
\begin{equation}
\lim_{m\rightarrow 0} E\left [ \left (\sup_{0\leq t\leq T} \left |\int_0^t g(\bm{x}_s^m)m(v_s^m)_i\,ds\right |\right )^2\right ] = 0
\end{equation}
and
\begin{equation}\label{eq:Itomv}
\lim_{m\rightarrow 0} E\left [ \left (\sup_{0\leq t\leq T} \left |\int_0^t g(\bm{x}_s^m)m(v_s^m)_i\,d(W_s)_j\right |\right )^2\right ] = 0,
\end{equation}
for $i=1,...,d, \; j=1,...,k$.
\end{lemma}
\begin{proof}
First note that,
\begin{equation}
E\left [ \left (\sup_{0\leq t\leq T} \left |\int_0^t g(\bm{x}_s^m)m(v_s^m)_i\,ds \right | \right )^2 \right ] \leq E\left [\left(\int_0^T \left |g(\bm{x}_s^m)m(v_s^m)_i\right |\,ds \right ) ^2 \right ].
\end{equation}
By the Cauchy-Schwarz inequality,
\begin{align}
E\left [\left (\int_0^T \left |g(\bm{x}_s^m)m(v_s^m)_i\right |\,ds \right )^2 \right ] \leq & T\int_0^T E\left [ ( g(\bm{x}_s^m)m(v_s^m)_i )^2 \right ]\,ds \\
\leq & C_T^2T \int_0^T E\left [(m(v_s^m)_i)^2\right ]\,ds, \nonumber
\end{align}
where the continuous function $g$ is bounded by $C_T$ on $\mathcal{K}$. From Lemma~\ref{lemma:convergenceVpointwise} we have,
\begin{equation}
E\left [\left (\int_0^T \left |g(\bm{x}_s^m)m(v_s^m)_i\right |\,ds \right )^2 \right ]\leq C_T^2 T^2 Cm.
\end{equation}
Taking the limit of both sides as $m\rightarrow 0$,
\begin{equation}
\lim_{m\rightarrow 0}E\left [\left (\int_0^T |g(\bm{x}_s^m)m(v_s^m)_i|\,ds \right )^2 \right ] = 0.
\end{equation}
Therefore,
\begin{equation}
\lim_{m\rightarrow 0} E\left [ \left (\sup_{0\leq t\leq T} \left |\int_0^t g(\bm{x}_s^m)m(v_s^m)_i\,ds\right |\right )^2\right ]
\le \lim_{m\rightarrow 0} E\left [\left (\int_0^T |g(\bm{x}_s^m)m(v_s^m)_i|\,ds \right )^2 \right ]
= 0.
\end{equation}
To estimate the It\^o integral in (\ref{eq:Itomv}), we first use the It\^o isometry:
\begin{align}
E\left [\left(\int_0^T g(\bm{x}_s^m)m(v_s^m)_i\,d(W_s)_j \right)^2 \right ] = & \int_0^T E\left [ (g(\bm{x}_s^m)m(v_s^m)_i )^2\right ]\,ds \\
\leq & C_T^2 \int_0^T E[ (m(v_s^m)_i)^2]\,ds. \nonumber
\end{align}
Using the Lebesgue dominated convergence theorem and Doob's maximal inequality (see page 14 of \cite{karatzas}),
\begin{equation}
E\left [ \left (\sup_{0\leq t\leq T} \left |\int_0^t g(\bm{x}_s^m)m(v_s^m)_i\,d(W_s)_j\right |\right )^2\right ] \leq 4 E\left [\left( \int_0^T g(\bm{x}_s^m)m(v_s^m)_i\,d(W_s)_j \right)^2 \right ] \rightarrow 0
\end{equation}
as $m\rightarrow 0$.
Q.E.D.
\end{proof}
We use Lemma~\ref{lemma:intmv} to show $\bm{U}^m$ converges to zero in $L^2$ with respect to $C_{\mathbb{R}^d}[0,T]$ as $m\rightarrow0$. Note that all functions in the expression~(\ref{eq:fullU}) for $\bm{U}^m$ are continuous. The integrals $\int_0^\infty(e^{-\bm{\gamma}(\bm{x}_s^m)y})_{jk_1}(e^{-\bm{\gamma}^*(\bm{x}_s^m)y})_{k_2 l}\,dy$ are continuous because $\bm{\gamma}$ is continuous, matrix exponentiation is a continuous operation and the integrand decays exponentially with $y$. Therefore, $\bm{U}^m\rightarrow 0$ as $m\rightarrow 0$ in $L^2$ with respect to $C_{\mathbb{R}^d}[0,T]$.
To verify the rest of the assumptions of Lemma~\ref{theorem:KP}, including Condition~\ref{condition}, we first write equation~(\ref{eq:xmcomplete}) in the form
\begin{equation}
\bm{x}_t^m = \bm{x} + \bm{U}_t^m + \int_0^t \bm{f}(\bm{x}_s^m)\,d\bm{H}_s^m.
\end{equation}
Define $\bm{f}:\mathcal{U}\rightarrow \mathbb{R}^{d\times (1 + k + 1 + d^2)}$ as
\begin{equation}
\bm{f}(\bm{x}) = \begin{pmatrix} \bm{\gamma}^{-1}(\bm{x})\bm{F}(\bm{x}), & \bm{\gamma}^{-1}(\bm{x})\bm{\sigma}(\bm{x}), & \bm{S}(\bm{x}), & \bm{f}^1(\bm{x}), ..., \bm{f}^{d}(\bm{x}) \end{pmatrix}
\end{equation}
where the components of $\bm{S}(\bm{x}):\mathcal{U}\rightarrow \mathbb{R}^d$ are defined as
\begin{equation}
{S}_i(\bm{x}) = \frac{\partial}{\partial x_{l}}[(\gamma^{-1})_{ij}(\bm{x})]J_{j{l}}(\bm{x}),
\end{equation}
$\bm{J}$ is the solution of the Lyapunov equation~(\ref{eq:Glyapunov}) and the components of $\bm{f}^{k_2}(\bm{x}):\mathcal{U}\rightarrow \mathbb{R}^{d\times d}$ are defined as
\begin{equation}
f^{k_2}_{i k_1}(\bm{x}) =\frac{\partial}{\partial x_{l}}[(\gamma^{-1})_{ij}(\bm{x})]
\left [- \int_0^\infty (e^{-\bm{\gamma}(\bm{x})y})_{jk_1}(e^{-\bm{\gamma}^*(\bm{x})y})_{k_2 l}\,dy \right ]
\end{equation}
for $k_1 ,k_2=1,2,...,d$. Next, $\bm{H}^m_t$ with paths in $C_{\mathbb{R}^{1+k+1+d^2}}[0,T]$ is defined as
\begin{equation}
\bm{H}^m_t = \begin{pmatrix} t \\ \bm{W}_t \\ t \\ (mv_t^m)_1m\bm{v}_t^m-mv_1m\bm{v} \\ \vdots \\ (mv_t^m)_d m\bm{v}_t^m-mv_d m\bm{v} \end{pmatrix}.
\end{equation}
By Lemma~\ref{lemma:convergenceVas}, $\bm{H}^m\rightarrow \bm{H}$ as $m\rightarrow 0$ in probability with respect to $C_{\mathbb{R}^{1+k+1+d^2}}[0,T]$, where
\begin{equation}
\bm{H}_t = \begin{pmatrix} t \\ \bm{W}_t \\ t \\ 0 \\ \vdots \\ 0 \end{pmatrix}.
\end{equation}
Therefore, $(\bm{U}^m,\bm{H}^m)\rightarrow (\bm{0},\bm{H})$ as $m\rightarrow 0$ in probability with respect to $C_{\mathbb{R}^d\times\mathbb{R}^{1+k+1+d^2}}[0,T]$. All that is left, to be able to use Lemma~\ref{theorem:KP}, is to check Condition~\ref{condition}.
\subsection{Verification of Condition~\ref{condition}}
We need to find the Doob-Meyer decomposition of $\bm{H}_t^m$ and stochastically bound, uniformly in $m$, the bounded variation part of the decomposition, denoted $\bm{A}_t^m$. Only the last $d^2$ rows of $\bm{H}^m$ depend on $m$. Furthermore, the columns of the matrix $(m\bm{v}_t^m (m\bm{v}_t^m)^*)$ make up the last $d^2$ rows of $\bm{H}^m$. That is, the first column of the matrix $(m\bm{v}_t^m (m\bm{v}_t^m)^*)$ is rows $1+k+1+1$ through $1+k+1+d$ of $\bm{H}^m$. The second column of the matrix $(m\bm{v}_t^m (m\bm{v}_t^m)^*)$ is rows $1+k+1+d+1$ through $1+k+1+2d$ of $\bm{H}^m$ and so on. Consider the expression for $d(m\bm{v}_t^m(m\bm{v}_t^m)^*)$ given by equation~(\ref{eq:dmvmv}). Because the stochastic integrals are local martingales, $\bm{A}_t^m$ contains the columns of the Lebesgue integrals in the above expression. That is,
\begin{equation}
\bm{A}_t^m = \begin{pmatrix} t \\ 0 \\ t \\ (\bm{\mathcal{A}}_t^m)^1 \\ \vdots \\ (\bm{\mathcal{A}}_t^m)^d \end{pmatrix},
\end{equation}
where
\begin{align}\label{eq:Atm}
\begin{pmatrix} (\bm{\mathcal{A}}_t^m)^1 ,& (\bm{\mathcal{A}}_t^m)^2, & \cdots, & (\bm{\mathcal{A}}_t^m)^d \end{pmatrix} =& \int_0^t m\bm{v}_s^m\bm{F}(\bm{x}_s^m)^*\,ds \\
+& \int_0^t\bm{F}(\bm{x}_s^m)(m\bm{v}_s^m)^*ds \nonumber\\
-& \int_0^tm(\bm{v}_s^m) (\bm{\gamma}(\bm{x}_s^m)\bm{v}_s^m)^*\,ds \nonumber\\
-& \int_0^t\bm{\gamma}(\bm{x}_s^m)\bm{v}_s^m m(\bm{v}_s^m)^* \,ds \nonumber \\
+& \int_0^t\bm{\sigma}(\bm{x}_s^m)\bm{\sigma}^*(\bm{x}_s^m)\,ds.\nonumber
\end{align}
We must show that $\bm{A}_t^m$ is stochastically bounded. Because $m\bm{v}^m\rightarrow 0$ in probability, the first and second terms on the right-hand side of equation~(\ref{eq:Atm}) go to zero in probability. By Assumption~3, $\bm{\sigma}(\bm{x}_t^m)\bm{\sigma}^*(\bm{x}_t^m)$ is bounded for all $t\in[0,T]$ and thus the fifth term is stochastically bounded in $m$. To prove stochastic boundedness of the third and fourth terms, it is enough to show $E[|m(v_s^m)_i(\bm{v}_s^m)|]$ is bounded uniformly in $m$ (based on previous works \cite{pavliotis,kupferman2004,hottovy2012}, we expect $\sqrt{m}\bm{v}_s^m$ to be of order one). For the rows we have $|m(v_s^m)_i(\bm{v}_s^m)|\leq m|\bm{v}_s^m|^2$ for every $i=1,...,d$. Using
Lemma~\ref{lemma:convergenceVpointwise} we have
\begin{equation}
E[ m|\bm{v}_s^m|^2] \leq C,
\end{equation}
uniformly in $m$.
Thus, by the Chebyshev inequality, $\{V_t(\bm{A}^m)\}$ is stochastically bounded and this proves that $\bm{H}_t^m$ satisfies Condition~\ref{condition}.
Therefore, $\bm{x}_t^m\rightarrow\bm{x}_t$ in probability as $m\rightarrow 0$. We use this together with boundedness to prove $L^2$ convergence: because $\bm{x}_t^m$ lies in a bounded set $\mathcal{K}$, there exists $N>0$ such that $P(|\bm{x}_t^m|\leq N)=1$ for all $t$ and $m$. Therefore,
\begin{align}
\lim_{m\rightarrow0}E\left [\left (\sup_{0\leq t\leq T}|\bm{x}_t^m-\bm{x}_t|\right )^2\right ] =& \lim_{m\rightarrow 0}\int_0^\infty P\left[ \left (\sup_{0\leq t\leq T}|\bm{x}_t^m-\bm{x}_t|\right )^2\geq x \right] \,dx \\
=& \int_0^{(2N)^2}\lim_{m\rightarrow0}P\left[ \left (\sup_{0\leq t\leq T}|\bm{x}_t^m-\bm{x}_t|\right )^2\geq x \right] \,dx \nonumber \\
=& 0. \nonumber
\end{align}
Q.E.D.
\end{proof}
\begin{remark}\label{remark2}
One may be tempted to apply Lemma~\ref{theorem:KP} to equation (\ref{eq:sub}) without integration by parts, because $m\bm{v}_t^m\rightarrow 0$. However, this would lead to the limiting equation,
\begin{equation}
\label{eq:wronglimit}
d\bm{x}_t = \bm{\gamma}^{-1}(\bm{x}_t)\bm{F}(\bm{x}_t) \,dt + \bm{\gamma}^{-1}(\bm{x}_t)\bm{\sigma}(\bm{x}_t)\,d\bm{W}_t.
\end{equation}
This is not the equation we derived. In view of Lemma~\ref{lemma:convergenceVas}, if $\bm{\gamma}(\bm{x})=\bm{\gamma}_0$ is a constant matrix for all $\bm{x}$, then
\begin{equation}
\lim_{m\rightarrow 0}P\left (\left (\sup_{0\leq t\leq T}\left |\int_0^tm\bm{\gamma}_0^{-1}d\bm{v}_s^m
\right |\right )^2>\epsilon\right ) = \lim_{m\rightarrow 0}P\left (\left (\sup_{0\leq t\leq T}\left |\bm{\gamma}_0^{-1}m\bm{v}_t^m-\bm{\gamma}_0^{-1}m\bm{v}\right |\right )^2>\epsilon\right ) = 0,
\end{equation}
similarly to \cite{nelson,freidlin2004}. However, with $\bm{\gamma}(\bm{x})$ dependent on position, the limit will be non-zero because $m\bm{v}_t^m$ does not satisfy Condition~\ref{condition}. Note that from the SDE~(\ref{eq:SDEgeneral}) for $d\bm{v}_t^m$
\begin{equation}
m\bm{v}_t^m = m\bm{v} + \underbrace{\int_0^t\left (\bm{F}(\bm{x}_s^m)-\bm{\gamma}(\bm{x}_s^m)\bm{v}_s^m\right )ds}_{\bm{A}_t^m \text{ Bounded Variation}} \; + \underbrace{\int_0^t\bm{\sigma}(\bm{x}_s^m)\,d\bm{W}_s}_{\bm{M}_t^m \text{ Local Martingale}}.
\end{equation}
Because the limits of integration are finite, $\bm{A}_t^m$ has bounded variation for fixed $m>0$. Note that $O(V_t(\bm{A}^m)) = O(\bm{v}_t^m)$. It can be shown explicitly in the special case in which the fluctuation-dissipation relation is satisfied (and we expect it to be true in general) that $\bm{v}_t^m$ is of the order $m^{-\frac{1}{2}}$. Therefore $O(V_t(\bm{A}^m)) = O(m^{-1/2})$ and Lemma~\ref{theorem:KP} cannot be used.
\end{remark}
\section{One Dimension}\label{sec:1D}
As the first example, we apply Theorem~\ref{theorem} to a one-dimensional model of a Brownian particle. This is the model studied in \cite{hottovy2012} and earlier in \cite{sancho1982}. The particle's position satisfies
\begin{equation}\label{eq:SDEgeneral1D}
\left \{ \begin{array}{rcl}
dx_t^m &=& v_t^m\,dt\\
dv_t^m &=& \left ( \frac{{F}(x_t^m)}{m} - \frac{{\gamma}(x_t^m)}{m}v_t^m\right )\,dt + \frac{{\sigma}(x_t^m)}{m}dW_t
\end{array} \right .
\end{equation}
with initial conditions $x_0^m = x$ and $v_0^m = v$. For simplicity, we study the system on the whole real line, assuming the coefficients and their derivatives are bounded. These assumptions will be relaxed in Section~\ref{sec:BP}. Equation~(\ref{eq:Glyapunov}) for the noise-induced drift term is in this case
\begin{equation}
2J(x)\gamma(x) = \sigma(x)^2.
\end{equation}
Thus, the limiting equation for $x_t$ is
\begin{equation}
\label{eq:SKlimit1D}
dx_t = \left (\frac{{F}(x_t)}{{\gamma}(x_t)} - \frac{{\gamma}'(x_t)}{2{\gamma}(x_t)^3}{\sigma}(x_t)^2\right )dt + \frac{{\sigma}(x_t)}{{\gamma}(x_t)}dW_t,
\end{equation}
with $x_0 = x$, which recovers prior results \cite{sancho1982,freidlin2011,hottovy2012}.
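As a symbolic sanity check (illustrative only; it assumes SymPy is available), one can verify that the general drift formula $S(x) = \partial_x[\gamma^{-1}(x)]\,J(x)$, with $J = \sigma^2/(2\gamma)$ from the one-dimensional Lyapunov equation, reduces to the noise-induced drift appearing in equation~(\ref{eq:SKlimit1D}):

```python
import sympy as sp

x = sp.symbols('x')
gamma = sp.Function('gamma')(x)
sigma = sp.Function('sigma')(x)

# 1D Lyapunov equation 2 J gamma = sigma^2  =>  J = sigma^2 / (2 gamma)
J = sigma**2 / (2 * gamma)

# General noise-induced drift S(x) = d/dx [gamma^{-1}(x)] * J(x)
S = sp.diff(1 / gamma, x) * J

# Drift correction in the limiting 1D equation
expected = -sp.diff(gamma, x) * sigma**2 / (2 * gamma**3)
diff = sp.simplify(S - expected)
```

The difference `diff` simplifies to zero, confirming the coefficient $-\gamma'\sigma^2/(2\gamma^3)$.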
It is instructive to illustrate on this simple example the key quantities entering the application of Lemma~\ref{theorem:KP}, namely $\bm{f}$ and $\bm{H}_t^m$. Define $\bm{f}$, a continuous function from $\mathbb{R}$ to $\mathbb{R}^4$, as
\begin{equation}
\bm{f}(x) = \begin{pmatrix} \frac{{F}(x)}{{\gamma}(x)}, &\frac{{\sigma}(x)}{{\gamma}(x)}, & - \frac{{\gamma}'(x)}{2{\gamma}(x)^3}{\sigma}(x)^2, & \frac{{\gamma}'(x)}{{\gamma}(x)^3} \end{pmatrix},
\end{equation}
and $\bm{H}_t^m$ with paths in $C_{\mathbb{R}^4}[0,T]$ as,
\begin{equation}
\bm{H}_t^m = \begin{pmatrix} t \\ W_t \\ t \\ \frac{1}{2}\left [(mv_t^m)^2 - (mv)^2\right ] \end{pmatrix}.
\end{equation}
We have $\lim_{m\rightarrow 0} \bm{H}_t^m = ( t, W_t, t, 0)^*$, and the limiting equation~(\ref{eq:SKlimit1D}) is recovered.
The boundedness of the coefficients and their derivatives implies global existence of the strongly unique solutions
$x_t^m$ to SDE~(\ref{eq:SDEgeneral1D}) for every $m>0$, and $x_t$ to SDE~(\ref{eq:SKlimit1D}); Assumptions 1-3 are thus satisfied. However, because the
state space of the process (the real line) is unbounded, we can only conclude convergence in probability
(for comparison, see the last paragraph of the proof of Theorem~\ref{theorem}). Therefore, by Theorem~\ref{theorem}, $x_t^m\rightarrow x_t$ as $m\rightarrow 0$ in probability with respect to $C_{\mathbb{R}}[0,T]$.
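The convergence can also be observed numerically. The following Euler-Maruyama sketch is purely illustrative (it assumes NumPy; the coefficients $F$, $\gamma$, $\sigma$ and all parameter values are arbitrary choices, not those of any particular physical system). Both the underdamped system and its overdamped limit are driven by the same Brownian increments, and for small $m$ the two paths stay close:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative coefficients with position-dependent friction
F = lambda x: -x
gamma = lambda x: 2.0 + np.sin(x)
dgamma = lambda x: np.cos(x)
sigma = lambda x: 1.0

m = 1e-4
T, dt = 1.0, 1e-5
n = int(T / dt)
dW = rng.standard_normal(n) * np.sqrt(dt)

x_m, v = 0.5, 0.0    # underdamped system (eq:SDEgeneral1D)
x_lim = 0.5          # overdamped limit (eq:SKlimit1D), same noise
sup_diff = 0.0
for i in range(n):
    g, s = gamma(x_m), sigma(x_m)
    # Euler-Maruyama step for (x^m, v^m)
    x_m, v = x_m + v * dt, v + ((F(x_m) - g * v) / m) * dt + (s / m) * dW[i]
    # Euler-Maruyama step for the limit, including the noise-induced drift
    gl, sl = gamma(x_lim), sigma(x_lim)
    drift = F(x_lim) / gl - dgamma(x_lim) * sl**2 / (2 * gl**3)
    x_lim += drift * dt + (sl / gl) * dW[i]
    sup_diff = max(sup_diff, abs(x_m - x_lim))
```

For this choice of $m$ the supremum distance `sup_diff` between the two paths is small, consistent with convergence in probability with respect to $C_{\mathbb{R}}[0,T]$.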
\section{Examples and applications}\label{sec5}
\subsection{Brownian particle in a {one-dimensional} diffusion gradient}\label{sec:BP}
The equations studied in this example model the experiment described in \cite{volpe2011}. In this experiment a colloidal particle diffuses in a cylinder filled with water. The friction and noise coefficients depend on the particle's position, as described below, giving rise to a noise-induced drift. Even though we do not verify Assumption~2 in this case, the Smoluchowski-Kramers approximation derived in Theorem~\ref{theorem} agrees with the experimental results of \cite{volpe2011}. The equations are:
\begin{equation}\label{eq:LE}
\left\{\begin{array}{ccl}
dx_t^m &=& v_t^m\;dt \\
dv_t^m &=& \left [\frac{F(x_t^m)}{m} - \frac{k_BT}{mD(x_t^m)}v_t^m\right ]\;dt +
\frac{k_BT\sqrt{2}}{m\sqrt{D(x_t^m)}}\;dW_t
\end{array}\right.
\end{equation}
where $D(x)$ is the diffusion coefficient. Near $x = 0$, $D(x)$ can be expressed analytically \cite{brenner} and has the form shown in Fig.~\ref{fig:D}; an analogous behavior also holds near the top of the cylinder. The force $F$ results from effective gravity and electrostatic repulsion from the bottom and top walls of the container. Away from the lateral walls of the cylinder both forces are vertical, so the horizontal components of the particle's motion can be (and were) separated and the equations are written for the vertical component only.
\begin{figure}[h!]
\resizebox{.5\textwidth}{!}{
\includegraphics{Dperp.pdf}
}
\caption{Plot of the normalized diffusion coefficient $D(x)$ for a spherical particle of radius $1\,\rm{\mu m}$.}
\label{fig:D}
\end{figure}
An application of equation~(\ref{eq:SKlimit1D}) to this case gives the limiting equation
\begin{equation}
dx_t = \left [\frac{D(x_t)F(x_t)}{k_BT} + D'(x_t)\right ]\;dt + \sqrt{2D(x_t)}\;dW_t.
\end{equation}
The noise-induced term in the drift is thus $S(x) = D'(x)$, as observed in \cite{volpe2011}.
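This identity can be checked symbolically. The following sketch (not part of the original derivation) uses sympy with the friction and noise coefficients read off from equation~(\ref{eq:LE}), namely $\gamma(x) = k_BT/D(x)$ and $\sigma(x) = k_BT\sqrt{2/D(x)}$, and the one-dimensional Lyapunov solution $J = \sigma^2/(2\gamma)$:

```python
import sympy as sp

x, kBT = sp.symbols('x k_BT', positive=True)
D = sp.Function('D', positive=True)(x)      # diffusion coefficient D(x)
gamma = kBT / D                             # friction coefficient in (eq:LE)
sigma = kBT * sp.sqrt(2 / D)                # noise coefficient in (eq:LE)
J = sigma**2 / (2 * gamma)                  # 1D Lyapunov equation: 2*gamma*J = sigma^2
S = sp.diff(1 / gamma, x) * J               # 1D noise-induced drift
assert sp.simplify(S - sp.diff(D, x)) == 0  # S(x) = D'(x)
```

Here $J$ simplifies to the constant $k_BT$, so the noise-induced drift reduces to $D'(x)$ as stated.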
\subsection{Systems driven by a colored noise}\label{sec:OU}
The driving mechanisms of real physical systems are typically characterized by a non-zero correlation time. Therefore, models employing colored noise, instead of white noise, are often more appropriate to describe them. We work through two examples with Ornstein-Uhlenbeck colored noise. We calculate the limiting equations without stating explicit conditions for the existence and uniqueness assumed in Theorem~\ref{theorem}. In this Section we consider the multi-dimensional version of equation~(\ref{eq:SDEOUCN}):
\begin{equation}\label{eq830000}
\left \{\begin{array}{rcl}
d\bm{x}_t &=& \bm{v}_t\,dt \\
d\bm{v}_t &=&\left[ \frac{ \bm{F}(\bm{x}_t)}{m} - \frac{\bm{\gamma}(\bm{x}_t)}{m}\bm{v}_t + \frac{\bm{\sigma}(\bm{x}_t)}{m}\bm{\eta}_t \right] dt
\end{array}\right .
\end{equation}
where $\bm{x}_t\in\mathcal{U}\subset\mathbb{R}^d$ and $\bm{\eta}_t$ is a $k$-dimensional stationary random process with zero mean and correlation time $\tau$. To use the framework of Theorem~\ref{theorem}, we consider a special type of noise, the Ornstein-Uhlenbeck process defined as the stationary solution of the SDE
\begin{equation}
\label{eq:OU}
d\bm{\eta}_t = -\frac{\bm{A}}{\tau}\bm{\eta}_t\,dt + \frac{\bm{\lambda}}{\tau}d\bm{W}_t,
\end{equation}
where $\bm{A}$ is a $k$ by $k$ constant invertible matrix, $\bm{\lambda}$ is a $k$ by $\ell$ constant matrix, and $\bm{W}$ an $\ell$-dimensional Wiener process. Defining the variable $\bm{\zeta}_t$ by the equation $d\bm{\zeta}_t=\bm{\eta}_t\,dt$, we use the above framework by setting $\bar{\bm{x}} = (\bm{x},\bm{\zeta})$ and $\bar{\bm{v}} = (\bm{v},\bm{\eta})$. We will now illustrate this use of Theorem~\ref{theorem} to derive the limit, as the correlation time $\tau$ and mass $m$ tend to zero, on two concrete examples. Note that here the initial condition $\bm{\eta}_0$ is taken to be a random variable distributed according to the stationary distribution corresponding to (\ref{eq:OU}), so that it is Gaussian and depends on $\tau$, but this presents no additional difficulty and the theorem can be generalized to include this case.
\subsubsection{A system with colored noise and constant friction}\label{sec:constfric}
Consider the system
\begin{equation}
\left \{ \begin{array}{rcl}
\mu \ddot{x}_t &= &{F}(x_t) + \left[ -\dot{x}_t + f(x_t)\eta_t \right]\\
d{\eta}_t &=& - \frac{a \eta_t}{\epsilon^2} \,dt + \frac{\sqrt{2\lambda}}{\epsilon^2}\,dW_t
\end{array} \right.
\end{equation}
with $x_t$ and $\eta_t$ one-dimensional.
This is equivalent to the example in \cite[Section 11.7.6]{pavliotis} with the substitution $\eta_t = \frac{1}{\epsilon}\tilde{\eta}_t$, where $\tilde{\eta}_t$ is the colored noise used in the reference. Setting $\mu = k \epsilon^2$, we rewrite the above system as
\begin{equation}
\label{eq:OUconstFric}
\left \{ \begin{array}{rcl}
dx_t &=& v_t\,dt \\
dv_t &=& \left[ \frac{F(x_t)}{k \epsilon^2} -\frac{v_t}{k \epsilon^2} + \frac{f(x_t)\eta_t}{k \epsilon^2} \right] dt \\
d\zeta_t &=& \eta_t dt \\
d\eta_t &=& -\frac{a \eta_t}{\epsilon^2}\,dt + \frac{\sqrt{2\lambda}}{\epsilon^2}\,dW_t
\end{array} \right .
\end{equation}
In the framework of Section~\ref{sec:SKa}, defining $\bm{x}_t = (x_t,\zeta_t)^*$ and $\bm{v}_t = (v_t,\eta_t)^*$, and letting $m=\epsilon^2$, the SDE system~(\ref{eq:OUconstFric}) becomes
\begin{equation}
\left \{ \begin{array}{rcl}
d\bm{x}_t & = & \bm{v}_t \,dt \\
md\bm{v}_t &=& \tilde{\bm{F}}(\bm{x}_t)dt - \bm{\gamma}(\bm{x}_t)\bm{v}_t\,dt + \bm{\sigma}(\bm{x}_t)d{W}_t
\end{array}\right .
\end{equation}
with
\begin{equation}
\tilde{\bm{F}}(\bm{x}_t) = \begin{pmatrix} \frac{{F}(x_t)}{k} \\ 0 \end{pmatrix}, \quad \bm{\gamma}(\bm{x}_t) = \begin{pmatrix} \frac{1}{k} & -\frac{f(x_t)}{k} \\ 0 & a \end{pmatrix},\quad \bm{\sigma}(\bm{x}_t) = \begin{pmatrix} 0 \\ \sqrt{2\lambda} \end{pmatrix}.
\end{equation}
To compute the noise-induced drift term, we solve the Lyapunov equation,
\begin{equation}
\bm{\gamma}\bm{J} +\bm{J}\bm{\gamma}^* = \bm{\sigma}\bm{\sigma}^*,
\end{equation}
and note that the Wiener process $W_t$ is one-dimensional.
We use Mathematica\textsuperscript{\textregistered} to find a closed form for $\bm{J}$,
\begin{equation}
\bm{J}(\bm{x}) = \begin{pmatrix} \frac{\lambda f(x)^2}{a(1+a k)} & \frac{\lambda f(x)}{a(1+a k)} \\
\frac{\lambda f(x)}{a(1+a k)} & \frac{\lambda}{a} \end{pmatrix}.
\end{equation}
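This closed form can be cross-checked numerically. A small sketch (with hypothetical parameter values, and $f = f(x)$ frozen at a fixed point) using \texttt{scipy.linalg.solve\_continuous\_lyapunov}, which solves exactly the equation $\bm{\gamma}\bm{J} + \bm{J}\bm{\gamma}^* = \bm{\sigma}\bm{\sigma}^*$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

k, a, lam, f = 0.5, 1.3, 2.0, 0.7     # hypothetical parameters; f = f(x) at a fixed x
gamma = np.array([[1 / k, -f / k],
                  [0.0,    a    ]])
sigma = np.array([[0.0], [np.sqrt(2 * lam)]])
# solves gamma @ J + J @ gamma.T = sigma @ sigma.T
J = solve_continuous_lyapunov(gamma, sigma @ sigma.T)
c = lam / (a * (1 + a * k))
J_exact = np.array([[c * f**2, c * f],
                    [c * f,    lam / a]])
assert np.allclose(J, J_exact)
```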
We compute the noise-induced drift in the first component ($i = 1$) using equation~(\ref{eq:spurious}):
\begin{equation}
\begin{array}{rcl}
S_1(x) &=&\frac{\partial}{\partial x_{l}}[({\gamma}^{-1})_{1j}({x})]J_{j{l}}(x)\\
& = & \frac{\lambda f'(x)f(x)}{a^2(1+ k a)}.
\end{array}
\end{equation}
Therefore, the limiting SDE for $x_t$ is
\begin{equation}
dx_t =\left[ {F}(x_t) + \frac{\lambda f'(x_t)f(x_t)}{a^2(1+ k a)} \right]dt + \sqrt{\frac{2\lambda}{a^2}}f(x_t)\,dW_t,
\end{equation}
in agreement with \cite{pavliotis}.
\subsubsection{Thermophoresis}
\label{sec:thermo}
The same type of equation can be used to model thermophoresis, i.e. the movement of small particles in a temperature gradient \cite{piazza2008}. While theoretical models of this phenomenon are still a matter of debate, thermophoresis has been successfully employed experimentally, e.g., to separate and group small particles \cite{piazza2008} and to influence the motion of DNA \cite{duhr2006pnas}. In \cite{hottovyEPL2012} we used equation~(\ref{eq830000}) to model the motion of a particle of mass $m$ driven by a colored noise $\eta_t$ with a short correlation time $\tau$ in an environment where the temperature $T(x)$ depends on the particle's position $x$, and thus $\gamma(x) = \gamma(T(x))$ and $D(x) = D(T(x))$. In the limit as $m,\tau\rightarrow 0$, the noise-induced drift pushes the particle toward the hotter regions or toward the colder regions depending on the ratio $m/\tau$. This was argued in \cite{hottovyEPL2012} using a multi-scale expansion. We now show this using Theorem~\ref{theorem}. We consider the SDE system
\begin{equation}\label{eq:thermotheta}
\left \{\begin{array}{rcl}
dx_t &=& v_t\,dt \\
dv_t &=&\left[ \frac{F(x_t)}{\theta(x_t) \tau} -\frac{1}{\theta(x_t) \tau}v_t + \frac{\sqrt{2D(x_t)}\eta_t}{\theta(x_t) \tau}\right] dt \\
d\zeta_t &=& \eta_t\,dt \\
d\eta_t &=& -\frac{2\eta_t}{\tau}\,dt + \frac{2}{\tau}\,dW_t
\end{array}\right .
\end{equation}
where $W_t$ is a one-dimensional Wiener process and we have introduced the dimensionless quantity
\begin{equation}
\theta(x) = \theta(T(x)) = \frac{m}{\gamma(T(x))\tau}.
\end{equation}
Unlike in the previous sections, the small parameter is $\tau$, not $m$ (as $\tau$ goes to zero, $m$ goes to zero as well). Define $\bm{x} = (x,\zeta)$, $\bm{v} = (v,\eta)$, and
\begin{equation}
\bm{\gamma}(\bm{x}) = \begin{pmatrix} \frac{1}{\theta(x)} & -\frac{\sqrt{2D(x)}}{\theta(x)} \\
0 & 2 \end{pmatrix}, \quad \bm{\sigma} = \begin{pmatrix} 0 \\ 2 \end{pmatrix}.
\end{equation}
$\bm{\gamma}$ is invertible and
\begin{equation}
\bm{\gamma}^{-1}(\bm{x}) = \begin{pmatrix} \theta(x) & \frac{\sqrt{2D(x)}}{2} \\ 0 & \frac{1}{2} \end{pmatrix}.
\end{equation}
To compute the noise-induced drift term, we solve the Lyapunov equation,
\begin{equation}
\bm{\gamma}\bm{J} +\bm{J}\bm{\gamma}^* = \bm{\sigma}\bm{\sigma}^*.
\end{equation}
A closed form of $\bm{J}$ obtained using Mathematica\textsuperscript{\textregistered} is
\begin{equation}
\bm{J}(\bm{x}) = \begin{pmatrix} \frac{2 D(x)}{1+2\theta(x)} & \frac{ \sqrt{2D(x)}}{1+2\theta(x)}\\ \frac{ \sqrt{2D(x)}}{1+2\theta(x)} & 1\end{pmatrix}
\end{equation}
Using equation~(\ref{eq:SKlimit}), as $\tau,m\rightarrow 0$ so that $m/\tau$ is constant, we see that the limiting equation for $x$ is
\begin{equation}
\label{eq:thermolimit}
dx_t =\left[ \frac{{F}(x_t)}{\theta({x}_t)}+ \frac{\gamma(x_t)D'(x_t)-4\theta(x_t)\gamma'(x_t)D(x_t)}{2\gamma(x_t)(1+2\theta(x_t))} \right] dt + \sqrt{2D(x_t)}\,dW_t,
\end{equation}
which coincides with the result of \cite{hottovyEPL2012}.
\begin{remark}
Strictly speaking, the system~(\ref{eq:thermotheta}) does not obey the fluctuation-dissipation relation as the time correlations of the noise should be reflected in the friction term, which should become an integral over the past \cite[Section 1.5]{zwanzig}. The resulting non-Markovian system requires a more refined analysis.
\end{remark}
\subsection{Three-dimensional Brownian motion in a force field}\label{sec:3DBP}
As a generalization of the example in Section~\ref{sec:BP}, we consider a Brownian particle in $\mathbb{R}^3$. The coefficients consist of a spatially varying noise coefficient $\bm{\sigma}(\bm{x})$ and the fluctuation-dissipation relation \cite{kubo} in multi-dimensional form, i.e.
\begin{equation}
\bm{\gamma}(\bm{x}) = \frac{\bm{\sigma}(\bm{x})\bm{\sigma}^*(\bm{x})}{k_BT}.
\end{equation}
A force $\bm{F}$ is acting on the particle.
Equation~(\ref{eq:SDEgeneral}) becomes
\begin{equation} \label{BMforce}
\left\{\begin{array}{rcl}
d\bm{x}_t^m & = & \bm{v}_t^m\,dt \\
d\bm{v}_t^m & = & \left[ \frac{\bm{F}(\bm{x}_t^m)}{m} - \frac{\bm{\sigma}\bm{\sigma}^*(\bm{x}_t^m)}{mk_BT}\bm{v}^m_t \right] \,dt + \frac{\bm{\sigma}(\bm{x}_t^m)}{m}\,d\bm{W}_t
\end{array}\right.
\end{equation}
To find the limiting equations, we solve the Lyapunov equation
\begin{equation}
\frac{1}{k_BT}\left(\bm{\sigma}\bm{\sigma}^* \bm{J} + \bm{J}\bm{\sigma}\bm{\sigma}^*\right) = \bm{\sigma}\bm{\sigma}^*
\end{equation}
obtaining $\bm{J} = \frac{k_BT}{2}\bm{I}$ where $\bm{I}$ is the identity matrix. The limiting equation~(\ref{eq:SKlimit}), as $m\rightarrow 0$, is
\begin{equation}\label{eq113dhdkodl}
d\bm{x}_t=\left [ (\bm{\sigma}\bm{\sigma}^*(\bm{x}_t))^{-1}k_BT\bm{F}(\bm{x}_t)-k_BT\bm{S}(\bm{x}_t)\right ]dt + [\bm{\sigma}(\bm{x}_t)^*]^{-1}k_BTd\bm{W}_t,
\end{equation}
where the $i^\text{th}$ component of $\bm{S}$ equals
\begin{equation}
\label{eq:spuriousG}
S_{i}(\bm{x}) =\frac{k_BT}{2}\frac{\partial}{\partial x_{l}}([(\bm{\sigma}\bm{\sigma}^*)^{-1}(\bm{x})]_{i{l}}).
\end{equation}
\begin{remark}
If $\bm{F}$ is a conservative force, i.e. $ \bm{F} = -\nabla{U}$, it can be shown (e.g. by solving the corresponding stationary Fokker-Planck equation) that for $m > 0$ equation~(\ref{BMforce}) has a stationary density $C\exp\left \{ -\frac{{U}(\bm{x})}{k_BT} - \frac{m|\bm{v}|^2}{2k_BT}\right \}$ (Gibbs distribution). In this case, one can recover the formula for $\bm{S}$ by requiring that the limiting equation has $C\exp\left \{ -\frac{{U}(\bm{x})}{k_BT} \right\}$ as its stationary density. For a non-conservative force $\bm{F}$, the stationary solution will not be Gibbs and the limit is identified using Theorem~\ref{theorem}. Interestingly these cases have also been studied experimentally in the presence, e.g., of non-conservative forces arising from hydrodynamic interactions in two dimensions \cite{volpe2008} and optical forces in three dimensions \cite{simpson1997,pesce2009}.
\end{remark}
\subsection{Brownian particle in a three-dimensional magnetic field}\label{sec:magnetic}
We consider a particle of mass $m$ and charge $q$, moving in three dimensions under an external force ${\bm F}({\bm x})$ and a friction force $-{\bm \gamma}({\bm x}){\bm v}$ in the presence of (white) noise ${\bm \sigma}({\bm x}){\bm \eta_t}$. We assume there is an additional magnetic (Lorentz) force $q{\bm v} \times {\bm B}({\bm x})$, where ${{\bm B}\in\mathbb{R}^3}$ is a magnetic field. Similar problems were studied in \cite{kwon2005structure,cerrai2011,freidlin2012}. The Lorentz force can be written as an action of an (antisymmetric) matrix ${{\bm H}({\bm x}) \in C_{\mathbb{R}^{3\times3}}[0,T]}$ on ${\bm v}$. While physically ${\bm H}({\bm x})$ does not represent friction, it can be added to the friction term, changing the matrix ${\bm \gamma}$ to a modified one
$$
\tilde{\bm{\gamma}}({\bm x}) = {\bm \gamma}({\bm x}) + {\bm H}({\bm x}).
$$
Note that $\bm{\gamma}$ and $\tilde{\bm{\gamma}}$ have the same symmetric part and, therefore, Assumption~\ref{assume:bddcoeffs} is preserved. Accordingly, the noise-induced drift $\tilde{\bm S}$ is now calculated using the solution of the modified Lyapunov equation
\begin{equation}
\label{eq:LyapMag}
\bm{{J}}\bm{\tilde{\gamma}}^* + \bm{\tilde{\gamma}}\bm{{J}} = \bm{\sigma}\bm{\sigma}^*.
\end{equation}
In particular, if ${\bm \gamma}$ and ${\bm \sigma}$ satisfy the Einstein relation ${\bm \sigma}{\bm \sigma}^* = 2k_BT {\bm \gamma}$, the solution of the Lyapunov equation is
$$
{\bm J} = k_BT {\bm I},
$$
where ${\bm I}$ is the identity matrix, leading to
$$
\tilde{S}_i({\bm x}) = {k_BT}{\partial \over \partial x_j}[( \bm{\gamma}+ \bm{H})^{-1}_{ij}({\bm x})].
$$
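One can verify directly that ${\bm J} = k_BT{\bm I}$ solves the modified Lyapunov equation~(\ref{eq:LyapMag}) under the Einstein relation: the antisymmetric part $\bm{H}$ cancels between $\tilde{\bm\gamma}$ and $\tilde{\bm\gamma}^*$. A small numerical sketch with arbitrary test matrices (values purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
kBT = 0.8                                     # hypothetical value of k_B T
A = rng.standard_normal((3, 3))
gamma = A @ A.T + 3 * np.eye(3)               # symmetric positive definite friction
H = np.array([[ 0.0,  0.4, -0.2],
              [-0.4,  0.0,  0.7],
              [ 0.2, -0.7,  0.0]])            # antisymmetric Lorentz part
gamma_tilde = gamma + H
J = kBT * np.eye(3)
# J gamma_tilde^* + gamma_tilde J = kBT (gamma_tilde^T + gamma_tilde) = 2 kBT gamma
assert np.allclose(J @ gamma_tilde.T + gamma_tilde @ J, 2 * kBT * gamma)
```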
The result in this case is essentially contained (based on different arguments) in \cite{shi2012}. This case is special in that adding an anti-symmetric matrix $\bm{H}$ to $\bm{\gamma}$ does not change the solution of the Lyapunov equation.
\section{Stratonovich form of the limiting equation}\label{sec4}
In general, an It\^o system
\begin{equation}
d(x_t)_i = b_i(\bm{x}_t)\;dt + h_{ij}(\bm{x}_t)\;d (W_t)_j
\end{equation}
has an equivalent Stratonovich form
\begin{equation}
d(x_t)_i = b_i(\bm{x}_t)\;dt - \frac{1}{2} \left( \partial_k(h_{ij}) (\bm{x}_t) \right) h_{kj}(\bm{x}_t) dt + h_{ij}(\bm{x}_t)\circ d(W_t)_j,
\end{equation}
in which the middle term $- \frac{1}{2} \left( \partial_k(h_{ij}) (\bm{x}_t) \right) h_{kj} (\bm{x}_t)$ is the {\it It\^{o}-to-Stratonovich
correction}. We apply it to equation~(\ref{eq:SKlimit}), where $\bm{h} = \bm{\gamma}^{-1}\bm{\sigma}$, getting for the It\^{o}-to-Stratonovich
correction the expression
\begin{equation}
-\frac{1}{2} (\partial_k(\gamma^{-1})_{i\ell}) \sigma_{\ell j}(\gamma^{-1})_{km}\sigma_{mj} - \frac{1}{2} (\gamma^{-1})_{i\ell} (\partial_k(\sigma_{\ell j})) (\gamma^{-1})_{km}\sigma_{mj}.
\end{equation}
In the case when $\bm{\gamma} = \bm{\gamma}^*$ commutes with $\bm{\sigma}$ (and thus
also with $\bm{\sigma}^*$), the solution of the Lyapunov equation~(\ref{eq:Lyapunov}) is
\begin{equation}
\bm{J} = \frac{1}{2}\bm{\sigma}\bm{\sigma}^*\gamma^{-1}.
\end{equation}
Substituting it into the limiting equation~(\ref{eq:spurious}) we see that $\bm{S}$ cancels the first term of the It\^o-to-Stratonovich correction and thus in the Stratonovich language the limiting equation becomes
\begin{equation}
\label{eq:StratSKlimit}
d\bm{x}_t = \left[ \bm{\gamma}^{-1}(\bm{x}_t)\bm{F}(\bm{x}_t)+ \bar{\bm{S}}(\bm{x}_t)
\right] \;dt + \bm{\gamma}^{-1}(\bm{x}_t)\bm{\sigma}(\bm{x}_t)\circ d\bm{W}_t,
\end{equation}
with
\begin{equation}
\bar{S}_i(\bm{x}) = - \frac{1}{2}(\gamma^{-1})_{i\ell} (\bm{x}) ( \partial_k(\sigma_{\ell j})(\bm{x}))(\gamma^{-1})_{km} (\bm{x})\sigma_{mj} (\bm{x}).
\end{equation}
For example, in one dimension, equation~(\ref{eq:StratSKlimit}) is
\begin{equation}
\label{eq:StratSKlimit1D}
dx_t = \left (\frac{F(x_t)}{\gamma(x_t)} -\frac{1}{2}\frac{\sigma(x_t)\sigma'(x_t)}{\gamma^2(x_t) }\right )\;dt + \frac{\sigma(x_t)}{\gamma(x_t)}\;\circ dW_t.
\end{equation}
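The cancellation leading to equation~(\ref{eq:StratSKlimit1D}) can be verified symbolically in one dimension; a sympy sketch with generic $F$, $\gamma$, $\sigma$:

```python
import sympy as sp

x = sp.Symbol('x')
F, g, s = (sp.Function(n)(x) for n in ('F', 'gamma', 'sigma'))
J = s**2 / (2 * g)                          # 1D Lyapunov solution
S = sp.diff(1 / g, x) * J                   # noise-induced drift in the Ito SDE
h = s / g                                   # diffusion coefficient of the limit
ito_drift = F / g + S
# subtract the Ito-to-Stratonovich correction (1/2) h' h
strat_drift = ito_drift - sp.Rational(1, 2) * sp.diff(h, x) * h
# drift claimed in (eq:StratSKlimit1D)
target = F / g - sp.Rational(1, 2) * s * sp.diff(s, x) / g**2
assert sp.simplify(strat_drift - target) == 0
```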
It follows that $\bar{\bm{S}}=0$ if the noise matrix $\bm{\sigma}$ is independent of $\bm{x}$. Note that when $\bm{\gamma}(\bm{x}) = \bm{\gamma}$ is independent of $\bm{x}$, the noise-induced drift in the It\^o SDE~(\ref{eq:SKlimit}) is zero.
\section{Conclusion}
\label{sec:conclusion}
We have proven convergence of solutions of a class of SDE systems in the small-mass limit. Generalizing earlier work by several authors, the results apply in arbitrary dimension and allow us to include position-dependent friction and noise coefficients, as well as colored noises with suitably scaled correlation times. Our main result (Theorem~\ref{theorem}) provides an alternative to homogenization of SDEs obtained by multiscale expansions; while the latter proves convergence in distribution, our method yields stronger $L^2$-convergence. It has a wide range of physically relevant applications, including the explanation of actual experiments and the prediction of new effects. We have, in particular, discussed applications to Brownian motion in a diffusion gradient, thermophoresis of small particles, and Brownian motion in the presence of non-conservative forces.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:introduction}
Pricing of American or Bermudan type options, i.e., options with an early exercise feature, is one of the most classical, but also most difficult problems of computational finance, producing a vast amount of literature. Some examples of popular classes of methods include PDE methods (see, for instance, \cite{achdou2005computational}), tree and stochastic mesh methods (see, for instance, \cite{glasserman2013monte}), and policy iteration (see, e.g., \cite{belomestny2018advanced}). In this paper we consider two other very popular methodologies, namely least squares Monte Carlo methods based on the dynamic programming principles pioneered by \cite{longstaff2001valuing} and dual martingale methods introduced by \cite{rogers2002MCOptions}, both of which were, of course, widely adapted and considerably improved since then. We refer to \cite{ludkovski2020mlosp} for a recent overview together with an open-source implementation.
Both least squares Monte Carlo methods and duality methods require efficient and accurate approximation of functions from a potentially large class. Indeed, the key step of the least squares Monte Carlo method involves the computation of a \emph{continuation value}, i.e., of the conditional expectation $\mathbb{E}_t[v(t+\Delta t, X_{t+\Delta t})]$ of a future \emph{value function} at time $t$.\footnote{Actual algorithms may instead regress on realized future payoffs, as in \cite{longstaff2001valuing}. Note that we ignore discounting at this time.} (For sake of presentation, let us assume that we are using an asset price model based on a Markov process $X$, which contains the asset prices $S$, but possibly also further components, such as stochastic volatilities or interest rates.) This conditional expectation is then approximated within a finite dimensional space spanned by \emph{basis functions} -- often chosen to be polynomials. When the dimension $d$ of the underlying process $X$ is high, we encounter a curse of dimensionality, i.e., we expect that the number of basis functions needed to achieve a certain accuracy increases exponentially in the dimension $d$. This is especially true when the basis functions are chosen by ``tensorization'' of one-dimensional basis functions. E.g., the dimension of the space of polynomials of (total) degree $p$ in $d$ variables is $\binom{d+p}{d}$. Such a polynomial basis becomes inefficient when $d \gg 1$, a realistic scenario for options on baskets or indices. For instance, options on SPY (with $100$ assets) are American, implying that $d \ge 100$, depending on the choice of the model -- in the sense that continuation values also depend on volatilities, not just the asset prices, in stochastic volatility models, for example. Hence, other classes of basis functions are needed.
Duality methods are typically based on parameterizations of families of candidate martingales. In the Markovian case, we may restrict ourselves to martingales representable as stochastic integrals of functions $\phi(t,X_t)$ against the driving Brownian motion, and we again see a potential curse of dimensionality in terms of the dimension of $X$.
When the underlying model is \emph{not Markovian} -- as, e.g., common for \emph{rough volatility models}, see, e.g., \cite{bayer2016pricing} -- the involved dimensions can increase drastically, as then both continuation values and candidate martingales theoretically depend on the entire trajectory of the process $X$ until time $t$. There are only very few rigorously analyzed methods for such non-Markovian problems. We specifically refer to \cite{lelong2018dual,lelong2019pricing}, both of which are based on Wiener chaos expansions of the value process and the candidate martingale, respectively. In this framework, conditional expectations can be computed explicitly, but the curse of dimension enters via the chaos decomposition itself, see Section~\ref{sec:dual} for details.
In either case, we are faced with ``natural'' $d$-dimensional bases which quickly increase in size as $d$ increases. While the curse of dimension is often a real, inescapable fact of complexity theory (in the sense of a worst case dependence over sufficiently general classes of approximation problems), real life problems often exhibit structural properties which lead to a notion of ``effective dimension'' of a problem which may increase much slower than the actual dimension $d$ -- see, for instance, \cite{wang2005high} for a similar phenomenon in finance. This insight has led to efficient approximation strategies for high-dimensional functions of low effective dimension of some sort in numerical analysis. In this paper, we propose to use hierarchical tensor formats, more precisely \emph{tensor trains}, to provide efficient approximations of nominally high-dimensional functions, provided that they allow for accurate \emph{low-rank approximations}.
Hierarchical tensors (HT)~\cite{bachmayr2016tensor,hackbusch2014tensor} rely on the classical concept of~\textit{separation of variables} by means of a generalization of the singular value decomposition (SVD) to higher-order tensors, preserving many of its well-known properties.
The hierarchical SVD (HSVD) yields a notion of multilinear rank and provides an approach to obtain a quasi-optimal low-rank approximation by rank truncation.
For fixed multilinear ranks, the representation and operation complexities of these formats scale only linearly in the order of the tensor.
Central to the HSVD is a tree-based representation of a recursive decomposition of the tensor space into nested subspaces.
For the described algorithms, we use the common tensor train (TT) format~\cite{Oseledets2009,Oseledets-2011,oseledets2013constructive}, which is a ``linearization'' of the HT representation with general binary trees.
Similar to matrices, the set of hierarchical tensors of fixed multilinear rank is not convex but forms a smooth manifold.
Hence, appropriate optimization techniques such as alternating and Riemannian schemes are available.
Tensor trains are a new technique in computational finance. In fact, we are only aware of one other paper in the field using these tensor representations, namely \cite{glau2020low}. In that paper, the authors consider parametric option pricing problems. That is, they are given a model with parameters $\zeta$ and options with parameters $\eta$. The price of these options in the model is then a function $P(\theta)$, $\theta \coloneqq (\zeta, \eta)$, of the model and option parameters, and we can expect $P$ to be regular. Some tasks in financial engineering require rapid option pricing, e.g., for calibrating model parameters to market prices. Following \cite{gass2018chebyshev}, \cite{glau2020low} propose to approximate $\theta \mapsto P(\theta)$ by Chebyshev interpolation. If $\theta$ is high-dimensional, such an interpolation may already involve a very large number of Chebyshev polynomials, and they then proceed to ``compress'' the representation using tensor trains.
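In a single parameter dimension, the Chebyshev step can be sketched as follows; the smooth ``price'' function below is purely illustrative, not a model from either paper:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

price = lambda theta: np.exp(-theta) * np.cos(3.0 * theta)   # illustrative smooth price map
n = 32                                                       # interpolation degree
nodes = np.cos(np.pi * (np.arange(n + 1) + 0.5) / (n + 1))   # Chebyshev points in [-1, 1]
coef = C.chebfit(nodes, price(nodes), n)                     # interpolating coefficients
theta = np.linspace(-1.0, 1.0, 201)
assert np.max(np.abs(C.chebval(theta, coef) - price(theta))) < 1e-8
```

For a vector-valued $\theta$, the coefficient array of the tensorized interpolation grows exponentially in the number of parameters, which is where the tensor-train compression of \cite{glau2020low} enters.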
No discussion of computational methods for high-dimensional problems can today ignore the trend of using machine learning techniques, in particular deep neural networks, often to great success. In the context of American or Bermudan options, we mention the recent paper by \cite{becker2019deep}, who are able to accurately price high-dimensional Bermudan options in dimensions up to $500$ using deep learning techniques based on parameterization of \emph{randomized} stopping times, see also \cite{bayer2020pricing}. A natural question then is whether the successes of deep learning for solving high dimensional problems (\emph{``overcoming'' the curse of dimension}) can also be achieved by other, more traditional methods of numerical analysis.
\subsection*{Main contributions}
\label{sec:contributions}
Our intention is to advocate the use of hierarchical tensor formats for high-dimensional problems in computational finance.
For this, we provide an overview of the main ideas of these formats and illustrate the application of tensor trains with two popular methods using tensorized polynomial spaces for the discretization.
The considered problem sizes would be infeasible without some efficient model order reduction technique.
We demonstrate in particular that the achieved accuracy is comparable to recent Neural Network approaches.
Tensor networks have already been used to alleviate the curse of dimensionality in physics~\cite{vidal2003mps}, parametric PDEs~\cite{bachmayr2016tensor,eigel2017adaptive,eigel2019variational,eigel2020lognormal} as well as other control problems~\cite{dolgov2019tensor, oster2019approximating, fackeldey2020approximative}.
They may significantly reduce the computational complexity~\cite{hackbusch2012book} and are able to represent sparse functions with a constant overhead~\cite{bachmayr2017sparseVsLowrank}.
In this paper we demonstrate the usefulness of tensor networks in computational finance on two examples with discretizations in polynomial tensor product spaces in $d$ dimensions with degree $p$ of the form
\begin{equation}
X = \sum_{\alpha \in [p]^d} X_\alpha P_\alpha
\end{equation}
with coefficient tensor $X\in\mathbb R^{p^d}$.
The first example showcases the application of the alternating least squares algorithm~\cite{Holtz2012a} for the best approximation problem in the primal method of Longstaff and Schwartz~\cite{longstaff2001valuing} where the discounted value is given by
\begin{equation}
v(x) = \sum_{\alpha\in\Lambda} V_\alpha \prod_{k=1}^{d'} B_{\alpha_k}(x_k).
\end{equation}
In the second example we present the application of a Riemannian optimization algorithm~\cite{Kressner2014} to solve the convex minimization problem in the dual method of Lelong~\cite{lelong2018dual}.
For both examples we examine the reduction of the space and time complexity.
In the numerical experiments we compare the originally published and the new methods on standard problems.
The reduced complexity allows us to apply the Longstaff-Schwartz algorithm to problems with up to $1000$ assets.
Problems of this size have only been reported recently with state-of-the-art machine learning methods~\cite{becker2019deep}.
Moreover, in comparison to the Neural Network approach, our method requires significantly fewer samples.
Even though the application of the tensor compression to the dual method turned out to be quite involved (in terms of the tensor optimization), the resulting algorithm produces comparable or better results while considerably reducing the dimensionality of the underlying equation.
This renders this approach tractable for more assets and higher accuracy computations.
We conclude that tensor networks can be a very beneficial technique for high-dimensional problems in financial mathematics.
They rival the performance of Neural Networks, show similar approximation and complexity properties, and exhibit richer mathematical structures that can be exploited (such as in the Riemannian optimization described in Section~\ref{sec:differentiable}).
\section{Bermudan option pricing}
\label{sec:option pricing}
In what follows we introduce our framework and notation for the Bermudan option pricing problem. Furthermore, we recall the celebrated Longstaff-Schwartz algorithm as well as Lelong's version of Rogers' duality approach based on a Wiener chaos expansion.
We fix some finite time horizon $T>0$ and a filtered probability space $\pars{\Omega, \mcal{F}, \pars{\mcal{F}_t}_{0\le t\le T}, \mbb{P}}$, where $\pars{\mcal{F}_t}_{0\le t\le T}$ is supposed to be the natural augmented filtration of a $d$-dimensional Brownian motion $B$ -- the natural setting for the Wiener chaos expansion lying at the core of our duality algorithm.
On this space, we consider an adapted Markov process $\pars{S_t}_{0\le t\le T}$ with values in $\mbb{R}^{d'}$ modeling a $d'$-dimensional underlying asset.
The number of assets $d'$ can be smaller than the dimension $d$ of the Brownian motion to encompass the case of stochastic volatility models or stochastic interest rate.
To simplify notation, we consider the case that $S$ generates the filtration and $d' = d$.
We assume that %
$\mbb{P}$ is an associated risk neutral measure.
We consider an adapted payoff process $\widetilde{Z}$ and introduce its discounted value process
\[
\pars*{Z_t = \exp\pars{-\int_0^t r\pars{s} \dx{s}} \widetilde{Z}_t}_{0\le t\le T}.
\]
We assume that the paths of $Z$ are right continuous and that $\sup_{t\in\bracs{0,T}} \abs{Z_t} \in L^2(\Omega, \mathcal{F}_T, \mathbb{P})$.
The process $\widetilde{Z}$ can obviously take the simple form $\pars{\varphi\pars{S_t}}_{t\le T}$ for some function $\varphi$,
but it can also depend on the whole path of the underlying asset $S$ up to the current time. %
We consider the Bermudan option paying $\widetilde{Z}_{t_k}$ to its holder if exercised at one of the dates $0 = t_1 < \dots < t_N = T$.
Standard arbitrage pricing theory defines the discounted time-$t_n$ value of the Bermudan option to be
\begin{equation}\label{eq:valuefun}
U_{t_n} = \esssup_{\tau\in\mcal{T}_{t_n}} \mbb{E}\bracs{Z_\tau \vert \mcal{F}_{t_n}}
\end{equation}
where $\mcal{T}_{t_n}$ denotes the set of $\mcal{F}$-stopping times taking values in the discrete set of exercise dates $\{t_n, \dots, t_N\}$.
We now recall two of the many algorithms for pricing Bermudan options available in the literature, beginning with the classical Longstaff-Schwartz algorithm. These algorithms will be used to test the efficiency gains achievable by hierarchical tensor formats in the context of option pricing.
\subsection{Primal (Longstaff-Schwartz)}
\label{sec:primal}
In the Longstaff-Schwartz algorithm \cite{longstaff2001valuing}, the dynamic programming principle corresponding to the discounted time-$t$ value of the Bermudan option \eqref{eq:valuefun} is used. It reads
\begin{equation}\label{eq:bellman}
U_{t_n} = \max \{ Z_{t_n}, \mathbb E [U_{t_{n+1}} | \mathcal F_{t_n} ] \}
\end{equation}
with final condition $U_{t_N} = Z_{t_N}$.
If $\mathbb E[U_{t_{n+1}} | \mathcal F_{t_n}]$ is known, an optimal stopping-time policy can be synthesized explicitly by stopping if and only if $Z_{t_n} \geq \mathbb E[U_{t_{n+1}} | \mathcal F_{t_n}]$.
Thus, the problem of finding the optimal stopping time and also the valuation of the option can be reduced to finding $\mathbb E[U_{t_{n+1}} | \mathcal F_{t_n}]$, which is exactly what the Longstaff-Schwartz algorithm approximates.
As this algorithm is pretty standard, we do not give a detailed explanation and instead simply state the algorithm.
Note that we abbreviate the notation by dropping the $t$ in the discretization, i.e. $S_{t_n} = S_n$.
We define the $\itm$ (``in the money'') operator, which maps a set of paths to the subset on which the current payoff is positive.
\begin{algorithm}[H]\label{alg:LS}
\SetAlgoLined
\caption{Longstaff-Schwartz}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Number of samples $M$, exercise dates $0 = t_1 < \dots < t_N = T$, initial value $s_0$.}
\Output{Conditional expectations $ v_n(x) = \mathbb E[U_{n+1} | S_n=x]$, $n \leq N$.}
Set $S_0^m = s_0$ and compute trajectories: $S_n^m$ for $m=1, \dots, M$, $n = 1, \dots, N$.
Set
\begin{equation}
Y^m = Z_n^m
\end{equation}
\For{$k = n-1$ to $1$}{
Find $\itm$ paths $S_{\tilde m}$ for $m \in \itm \subset \{1, \dots, M\}$.
Set
\begin{equation}\label{eq:longstaff_regression}
v_n(\cdot) \approx \argmin_{v \in \mathcal M} \frac 1 {|\text{ITM}|} \sum_{\tilde m \in \text{ITM}} | v(S_n^{\tilde m}) - Y^{\tilde m} |^2.
\end{equation}
\For{$m = 1$ to $M$}{
\uIf{$m \in \itm$ and $Z_n^m > v_n(S_n^m)$}{
$Y^m = Z_n^m$.
}
}
}
Set $v_0(s_0) = \frac{1}{M}\sum_{m = 1}^M Y^m$.
\end{algorithm}
Note that in this formulation of the algorithm, the set $\mathcal M$ in \eqref{eq:longstaff_regression} is traditionally a linear space of polynomials.
Adding the payoff function to the ansatz space is a common trick to improve the result, see e.g.~\cite{glasserman2013monte}.
In this work we use the set of tensor trains, which we introduce in Section~\ref{sec:low-rank tensors}.
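For concreteness, the backward induction of Algorithm~\ref{alg:LS} can be sketched in a few lines of Python. The following minimal implementation is a simplified stand-in, not the method of this paper: a single asset with Black--Scholes dynamics, illustrative parameters, and an ordinary polynomial regression via \texttt{numpy.polyfit} in place of the tensor-train ansatz used later.

```python
import numpy as np

def longstaff_schwartz(s0=100.0, K=100.0, r=0.06, sigma=0.2, T=1.0,
                       N=10, M=50_000, degree=3, seed=0):
    """Minimal Longstaff-Schwartz sketch for a Bermudan put under Black-Scholes."""
    rng = np.random.default_rng(seed)
    dt = T / N
    # simulate GBM paths S_n^m at the exercise dates t_1 < ... < t_N
    dW = rng.standard_normal((M, N)) * np.sqrt(dt)
    S = s0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * dW, axis=1))
    payoff = lambda s: np.maximum(K - s, 0.0)
    # discounted cashflow along each path, initialised with the terminal payoff
    Y = payoff(S[:, -1]) * np.exp(-r * T)
    for n in range(N - 2, -1, -1):        # backward induction over t_{N-1}, ..., t_1
        Z = payoff(S[:, n]) * np.exp(-r * (n + 1) * dt)  # discounted exercise value
        itm = Z > 0                        # regress on in-the-money paths only
        if itm.sum() > degree:
            coeff = np.polyfit(S[itm, n], Y[itm], degree)
            cont = np.polyval(coeff, S[itm, n])   # approximates E[U_{n+1} | S_n]
            ex = Z[itm] > cont             # exercise where immediate value dominates
            Y[np.flatnonzero(itm)[ex]] = Z[itm][ex]
    return float(Y.mean())
```

The routine returns a Monte-Carlo estimate of the time-zero option value and is meant only to illustrate the control flow of the algorithm.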
The key computational challenge is the approximation of the conditional expectation
\begin{equation}
v(S_n) = \mathbb{E}[U_{n+1}|S_n] = \sum_{\alpha\in\mathbb{N}^{d'}} v_\alpha B_\alpha(S_n)
\end{equation}
for some $L^2(\mathbb{R}^{d'}, \mathcal{B}(\mathbb{R}^{d'}), S_*\mathbb{P})$-orthogonal basis $\{B_\alpha\}_{\alpha\in\mathbb{N}^{d'}}$, where we tacitly assume that the payoff has finite second moments.
Since this is an $L^2$-orthogonal projection we can choose a finite set of multi-indices $\Lambda\subset\mathbb{N}^{d'}$ and approximate $\mathbb{E}[Y|S_n]$ by minimizing
\begin{equation}
\left\| Y - \sum_{\alpha\in\Lambda} v_\alpha B_\alpha(S_n) \right\|^2
\approx \frac{1}{m}\sum_{i=1}^m \left(Y^i - \sum_{\alpha\in\Lambda} v_\alpha B_\alpha(S_n^i)\right)^2
. \label{eq:cond_exp_approx}
\end{equation}
We use the index set $\Lambda = [p]^{d'}$ and mitigate the ``curse of dimensionality'' by representing $v$ in the tensor train format as defined in Section~\ref{sec:low-rank tensors}.
\subsection{Chaos-martingale minimization}
\label{sec:dual}
Rogers \cite{rogers2002MCOptions} reformulates the problem of computing $U_{0}$ as the following dual optimization problem
\begin{equation*}
U_0 = \inf_{M\in H^2_0} \mbb{E}\bracs*{\max_{n=1,\ldots,N} \pars{Z_{t_n} - M_{t_n}}}
\end{equation*}
where $H^2_0$ denotes the set of square integrable martingales vanishing at zero.
This approach requires us to optimize over the space of all (square integrable) martingales. As any martingale $M$ can be expressed as conditional expectations $t \mapsto \mathbb{E}[X|\mathcal{F}_t]$ for some square integrable random variable $X$, we may equivalently solve
\begin{equation} \label{eq:dual_cont}
U_0 = \inf_{X\in L^2_0\pars{\Omega, \mcal{F}_T,\mbb{P}}} \mbb{E}\bracs*{\max_{n=1,\ldots,N} \pars{Z_{t_n} - \mbb{E}\bracs{X \vert \mcal{F}_{t_n}}}},
\end{equation}
where $L^2_0\pars{\Omega, \mcal{F}_T,\mbb{P}}$ is the set of square integrable $\mcal{F}_T$-random variables with zero mean.
This allows us to minimize over a (seemingly) simpler space -- namely the space of square integrable random variables rather than the space of martingales -- at the cost of expensive calculations of conditional expectations.
The ingenious idea of Lelong \cite{lelong2018dual} was to use a specific parameterization of the space of square integrable random variables in which conditional expectations w.r.t.~the filtration $(\mathcal{F}_t)$ can be computed explicitly at virtually no cost.
Indeed, a finite-dimensional approximation of $X\in L^2_0\pars{\Omega, \mcal{F}_T,\mbb{P}}$ with the above property is given by the truncated Wiener chaos expansion
\begin{equation}
\label{eq:truncated Wiener}
\widetilde{X} = \sum_{\alpha\in\Lambda} \widetilde{X}_\alpha H_\alpha\pars{G_1,\ldots,G_N},
\end{equation}
where $\Lambda \subseteq \mathbb{N}^{N\times d'}$ is a predefined set of multi-indices, $H_\alpha$ is the tensorized Hermite polynomial with multi-index $\alpha$ and $G_1, \ldots, G_N$ are $d'$-dimensional Gaussian increments.
The tensorized Hermite polynomials are defined by
\begin{equation}
H_{\alpha}\pars{G_1,\ldots,G_N} := \prod_{n=1}^N\prod_{k=1}^{d'} h_{\alpha_{nk}}\pars{G_{n,k}}
\end{equation}
where $h_{\alpha_{nk}}$ are the univariate Hermite polynomials with index $\alpha_{nk}$.
Defining the subset $\Lambda^n := \{\alpha\in\Lambda : \forall k>n, \alpha_k=0\}$ it is easy to see that
\begin{equation}
\mbb{E}\bracs{\widetilde{X} \vert \mcal{F}_{t_n}} = \sum_{\alpha\in\Lambda^n} \widetilde{X}_\alpha H_{\alpha}\pars{G_1, \ldots, G_N} .
\end{equation}
This means that the linear conditional expectation operator $\mbb{E}\bracs{\,\bullet\, \vert \mcal{F}_{t_n}}$ acts on the coefficient tensor simply by dropping the trailing terms of the chaos expansion.
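This dropping property is easy to verify numerically. The following Python sketch (toy coefficients, two one-dimensional Gaussian increments, probabilists' Hermite polynomials from NumPy; all sizes are illustrative assumptions) checks by Monte Carlo that conditioning on $\mcal{F}_{t_1}$ amounts to deleting every term with $\alpha_2 > 0$:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_k

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))   # toy chaos coefficients for degrees 0..2

def chaos(g1, g2):
    # X~ = sum_{a1,a2} X[a1,a2] He_{a1}(g1) He_{a2}(g2)
    e = np.eye(3)
    return sum(X[a1, a2] * hermeval(g1, e[a1]) * hermeval(g2, e[a2])
               for a1 in range(3) for a2 in range(3))

def cond_exp(g1):
    # E[X~ | F_1]: drop all terms with a2 > 0, since E[He_k(G)] = 0 for k >= 1
    return sum(X[a1, 0] * hermeval(g1, np.eye(3)[a1]) for a1 in range(3))

g1 = 0.7                               # a fixed realisation of G_1
g2 = rng.standard_normal(200_000)      # Monte-Carlo average over G_2
mc = chaos(g1, g2).mean()
print(mc, cond_exp(g1))
```

The Monte-Carlo average over $G_2$ agrees with the truncated expansion evaluated at the fixed realisation of $G_1$ up to sampling error.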
The expectation in~\eqref{eq:dual_cont} can thus be estimated by the sample average
\begin{equation}
U_0 = \inf_{\substack{\widetilde{X}_0 = 0 \\ \widetilde{X}_\alpha \in \mbb{R}}} \frac{1}{m}\sum_{i=1}^m \bracs*{\max_{n=1,\ldots, N} \pars*{Z^{\pars{i}}_{t_n} - \sum_{\alpha\in\Lambda^n} \widetilde{X}_\alpha H_{\alpha}\pars{G_1^{\pars{i}}, \ldots, G_N^{\pars{i}}}}} \label{eq:dual_disc},
\end{equation}
where $(Z^{(i)}, G^{(i)})_{1\le i\le m}$ are i.i.d.\ samples from the distribution of $(Z, G)$.
It is shown in~\cite{lelong2018dual} that this is an infimum of a convex, continuous and piece-wise linear cost function over a convex domain and can be calculated easily by a gradient descent method with an Armijo line search.
The choice of the multi-index set $\Lambda$ plays an important role in the performance and applicability of this algorithm.
In~\cite{lelong2018dual} $\Lambda$ is chosen such that the polynomial degree $\sum_{n=1}^N\sum_{k=1}^{d'} \alpha_{nk}$ is bounded by $p$.
This bounds the number of entries of $\widetilde{X}$ that have to be stored by $\binom{Nd'+p}{Nd'} \in \mathcal{O}\left(\frac{(Nd'+p)^p}{p!}\right)$. %
For fixed $p$ this can scale unfavourably when the number of exercise dates $N$ or the dimension of the Brownian motion (i.e.\ the number of assets) $d'$ increases.
We propose to choose $\Lambda = \Lambda_p^N$ with $\Lambda_p := \{\alpha\in\mathbb{N}^{d'} : \sum_{k=1}^{d'} \alpha_{k}\le p\}$, i.e.\ to bound the polynomial degree separately for every exercise date, and to use the tensor train format to alleviate the ensuing ``curse of dimensionality''.
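The difference in storage is easy to quantify. Assuming the illustrative values $N=31$ exercise dates, $d'=5$ assets, degree $p=3$ and a uniform TT-rank $r=4$ (and, as a simplification, counting one mode of size $p+1$ per pair $(n,k)$), a short computation compares the total-degree count with the tensor-train parameter count:

```python
from math import comb

N, d, p = 31, 5, 3                       # exercise dates, assets, degree (illustrative)
full_total_degree = comb(N * d + p, p)   # |Λ| under a global total-degree bound
r = 4                                    # assumed uniform TT-rank
tt_parameters = N * d * (p + 1) * r ** 2 # TT storage with one mode per (n, k)
print(full_total_degree, tt_parameters)
```

Already for these moderate (assumed) sizes the total-degree set is roughly two orders of magnitude larger than the TT parameter count, and the gap widens rapidly with $N$ and $d'$.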
We introduce the relevant notions and central concepts in the following section.
\section{Low-rank tensor representations}
\label{sec:low-rank tensors}
We are concerned with an efficient representation of expansions of the form $\sum_{\alpha\in\Lambda}U_\alpha\prod_{j=1}^d P_{\alpha_j}$ in tensorized polynomials $P_\alpha$ determined by some finite set $\Lambda\subset \mathcal F := \{\alpha\in\mathbb N^\mathbb N\;:\; |\mathrm{supp} \,\alpha|<\infty \}$ of finitely supported multi-indices.
This representation is used for the considered algorithms with tensorized expansions given by~\eqref{eq:truncated Wiener} and~\eqref{eq:approximate value}.
The set $\Lambda$ typically is given as a tensor set $\Lambda = \bigtimes_{j=1}^d \mathcal I_n = [n]^d$ with $\mathcal I_n := [n]$, or as an anisotropic set $\Lambda = \bigtimes_{j=1}^d \mathcal I_{p_j}$, where in our setting $p_j$ denotes the maximal polynomial degree in dimension $j=1,\ldots,d$.
Evidently, $\# \Lambda$ is in $\mathcal O(p^d)$ with $p:=\max\{p_j\;:\; j=1,\ldots,d\}$.
To cope with this exponential complexity, a potentially very efficient approach is the use of low-rank tensor representations as e.g. presented in~\cite{hackbusch2014tensor,nouy2017low}.
Since these modern model reduction techniques are not widely known in the finance community yet, we provide a brief review in order to elucidate some of the central principles.
In the presentation, we follow~\cite{rauhut2017low,bachmayr2016tensor}.
\subsection{Tensor product spaces and subspace approximation}
\label{sec:subspace approximation}
We consider finite dimensional linear spaces $U_i=\mathbb R^{p_i}$ and define the tensor product space
\begin{equation}
\mathcal H_d := \bigotimes_{j=1}^d U_j.
\end{equation}
Fixing the canonical basis for all $U_j$, any tensor $\mathbf u\in\mathcal H_d$ can be represented by
\begin{equation}
\label{eq:u expansion}
\mathbf u = \sum_{\nu_1=1}^{p_1}\cdots\sum_{\nu_d=1}^{p_d} \mathbf U(\nu_1,\ldots,\nu_d)\mathbf e_{\nu_1}^1\otimes\cdots\otimes \mathbf e_{\nu_d}^d,\quad \mathbf U\in\mathbb R^{p_1}\otimes\cdots\otimes\mathbb R^{p_d}.
\end{equation}
Hence, given this basis, any multi-index $\nu\in\mathcal F$ can be identified with a component in the (coefficient) tensor $\mathbf U$, i.e.
\begin{equation}
\nu=(\nu_1,\ldots,\nu_d) \mapsto \mathbf U(\nu_1,\ldots,\nu_d)\in\mathbb R.
\end{equation}
The goal is to obtain a compressed representation of~\eqref{eq:u expansion} in an analytically and numerically more favourable format by exploiting an assumed low-rank structure.
Hierarchical representations have appealing properties making them attractive for the treatment of the problems at hand.
For example, they contain sparse polynomials, but are much more flexible at a price of a slightly larger overhead, see e.g. \cite{bachmayr2018parametric,bachmayr2016adaptive} for a comparison concerning parametric PDEs.
To introduce the concept of \textit{subspace approximations}, which is central to the complexity properties of tensor formats, we start with the classical \textit{Tucker format}.
Given a tensor $\mathbf U$ and a \emph{rank tuple} $\mathbf r:=(r_j)_{j=1}^d$, the approximation problem reads: find subspaces $V_j\subset U_j$ with $\dim V_j=r_j$ such that
\begin{equation}
\min_{\mathbf V\in\mathcal V_d} \|\mathbf U - \mathbf V\|\qquad\text{with}\quad \mathcal V_d := \bigotimes_{j=1}^d V_j
\end{equation}
is minimal over all such choices of $V_1,\ldots,V_d$.
An equivalent problem is to find the corresponding basis vectors $\{b^j_{k_j}\}_{k_j=1,\ldots,r_j}$ of $V_j$ which can be written in the form
\begin{equation}
\label{eq:basis Ud}
b^j_{k_j} := \sum_{\nu_j=1}^{p_j} b^j(\nu_j,k_j)\mathbf e_{\nu_j}^j,\qquad k_j=1,\ldots,r_j<p_j.
\end{equation}
Note that this can be understood as the construction of a reduced basis.
The optimal tensor $\mathbf{V}$ can thus be represented by
\begin{equation}
\label{eq:Tucker}
\mathbf V = \sum_{k_1=1}^{r_1}\cdots\sum_{k_d=1}^{r_d} \mathbf c(k_1,\ldots,k_d)b^1_{k_1}\otimes\cdots\otimes b^d_{k_d} \in \mathcal{V}_d.
\end{equation}
In case of orthonormal bases $\{b^j_{k_j}\}_{k_j=1,\ldots,r_j}$, the \textit{core tensor} $\mathbf c\in\bigotimes_{j=1}^d\mathbb R^{r_j}$ is given entry-wise by projection,
\begin{equation}
\mathbf c(k_1,\ldots,k_d) = (\mathbf V,b^1_{k_1}\otimes\cdots\otimes b^d_{k_d}).
\end{equation}
With a complexity of $\mathcal O(p_jr_j)$ for each basis $\{b^j_{k_j}\}_{k_j=1,\ldots, r_j}$ and a complexity of $\mathcal O(r^d)$ for the core tensor $\mathbf{c}$, the complexity of the Tucker representation~\eqref{eq:Tucker} is $\mathcal O(pdr + r^d)$ with $r:=\max\{r_j:\; j=1,\ldots,d\}$ and $p:=\max\{p_j:\; j=1,\ldots,d\}$.
As such, the Tucker representation is not sufficient to cope with exponential representation complexity and the format exhibits other problems such as non-closedness.
Nevertheless, the ideas described above eventually lead to a very efficient format by hierarchization of the bases as described in what follows.
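The subspace construction above can be illustrated with a minimal HOSVD sketch in Python: each factor is obtained from an SVD of the corresponding matricization, and the core tensor by projection. The tensor sizes and multilinear ranks below are illustrative assumptions.

```python
import numpy as np

def hosvd(T, ranks):
    """Tucker factors from SVDs of the matricizations; core by projection."""
    factors = []
    for j in range(T.ndim):
        unfold = np.moveaxis(T, j, 0).reshape(T.shape[j], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :ranks[j]])                 # orthonormal basis b^j
    core = T
    for Bj in factors:
        core = np.tensordot(core, Bj, axes=([0], [0]))  # axes cycle back in order
    return core, factors

def tucker_full(core, factors):
    V = core
    for Bj in factors:
        V = np.tensordot(V, Bj, axes=([0], [1]))
    return V

# a 5x5x5 tensor of multilinear rank (2, 2, 2), reconstructed exactly
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
Bs = [rng.standard_normal((5, 2)) for _ in range(3)]
T = np.einsum('abc,ia,jb,kc->ijk', G, *Bs)
core, factors = hosvd(T, (2, 2, 2))
print(core.shape, np.linalg.norm(tucker_full(core, factors) - T))
```

Since the test tensor has exact multilinear rank $(2,2,2)$, the truncated factors span the mode subspaces exactly and the reconstruction is exact up to floating-point error.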
\subsection{Hierarchical tensor representations}
\label{sec:HT}
The \textit{hierarchical Tucker} (HT) format introduced in~\cite{Hackbusch-2010} is an extension of the notion of subspace approximation to a hierarchical setting determined by a dimension tree as shown in Figure~\ref{fig:dimension tree} where the indices $j=1,\ldots,d$ correspond to the spaces $U_j$ of the tensor space $\mathcal H_d$.
Note that by cutting any edge in the tree, two subtrees are generated.
Collecting the indices for each subtree, a tensor of order two (a matrix) arises.
By this, fundamental principles from matrix analysis, in particular the singular value decomposition (SVD), can be transferred to the higher-order tensor setting.
To illustrate the central idea, consider the optimal Tucker-subspaces $V_1\otimes V_2 \subseteq U_1\otimes U_2=\mathbb R^{p_1}\otimes\mathbb R^{p_2}$.
For the approximation of $\mathbf u\in\mathcal H_d$, often only a subspace $V_{\{1,2\}}\subset V_1\otimes V_2$ with dimension $\operatorname{dim}(V_{\{1,2\}}) = r_{\{1,2\}} < r_1r_2 = \operatorname{dim}(V_1\otimes V_2)$ is required.
In fact, $V_{\{1,2\}}$ is defined by a basis
\begin{equation}
V_{\{1,2\}} = \mathrm{span}\left\{b^{\{1,2\}}_{k_{\{1,2\}}}:\; k_{\{1,2\}}=1,\ldots,r_{\{1,2\}}\right\}
\end{equation}
with basis vectors
\begin{equation}
b^{\{1,2\}}_{k_{\{1,2\}}} = \sum_{k_1=1}^{r_1}\sum_{k_2=1}^{r_2} \mathbf b^{\{1,2\}}(k_1,k_2,k_{\{1,2\}})b^1_{k_1}\otimes b^2_{k_2},\quad k_{\{1,2\}}=1,\ldots,r_{\{1,2\}}
\end{equation}
and coefficient tensors $\mathbf b^{\{1,2\}}\in\mathbb R^{r_1\times r_2\times r_{\{1,2\}}}$ where
\begin{equation}
b^{\{j\}}_{k_{\{j\}}} := \sum_{\nu_j=1}^{p_j} \mathbf{b}^{\{j\}}(\nu_j,k_{\{j\}})\mathbf e_{\nu_j}^j,\qquad j=1,2\text{ and }k_{\{j\}}=1,\ldots,r_j<p_j,
\end{equation}
are the basis vectors of the Tucker representation~\eqref{eq:basis Ud}.
This can be generalized to the tensor product space $\mathcal H_d$ by the introduction of a \textit{partition tree} (or \textit{dimension tree}) $\mathbb D$ with vertices $\alpha\subset D:=\{1,\ldots,d\}$ and leaves $\{1\},\ldots,\{d\}$ where $D$ is called the root of the tree.
Each vertex $\alpha$ that is not a leaf can be partitioned as $\alpha=\alpha_1\cup\alpha_2$ with $\alpha_1\cap\alpha_2=\emptyset$ and $\alpha_1,\alpha_2\ne\emptyset$.
Although not required, we restrict the topology to a binary tree %
and denote by $\alpha_1, \alpha_2$ the children of $\alpha$.
Figure~\ref{fig:dimension tree} illustrates two such trees for $d=5$: a balanced tree (left) and the linear tree $\mathbb D=\left\{ \{1\},\{2\},\ldots,\{5\},\{4,5\},\{3,4,5\},\{2,3,4,5\},\{1,\ldots,5\} \right\}$ (right), where e.g.\ $\alpha=\{3,4,5\}=\alpha_1\cup\alpha_2=\{3\}\cup\{4,5\}$.
Let $\alpha_1,\alpha_2\subset D$ be the two children of $\alpha\in D$.
Then $V_\alpha\subset V_{\alpha_1}\otimes V_{\alpha_2}$ is defined by a basis
\begin{equation}
\label{eq:HT basis}
b^\alpha_\ell = \sum_{i=1}^{r_{\alpha_1}}\sum_{j=1}^{r_{\alpha_2}} \mathbf b^\alpha(i,j,\ell)b^{\alpha_1}_i\otimes b^{\alpha_2}_j,
\end{equation}
where the tensors $(i,j,\ell)\mapsto \mathbf b^\alpha(i,j,\ell)$ are called \textit{transfer} or \textit{components tensors} and $\mathbf b^D = \mathbf b^{\{1,\ldots,d\}}$ is called the \textit{root tensor}.
To represent a tensor in this hierarchical format it suffices to store the transfer tensors $\mathbf b^\alpha$ along with the root tensor $\mathbf{b}^D$.
More specifically, $\mathbf u\in\mathcal H_d$ is obtained from $(\mathbf b^\alpha)_{\alpha\in\mathbb D}$, via the multilinear function $\tau$
\begin{equation}
(\mathbf b^\alpha)_{\alpha\in\mathbb D} \mapsto \mathbf u = \tau(\{\mathbf b^\alpha:\;\alpha\in\mathbb D\}),
\end{equation}
which is defined by the recursive application of the basis representation~\eqref{eq:HT basis} and is multilinear in its arguments $\mathbf b^\alpha$.
A graphical representation of this mapping is depicted in Figure~\ref{fig:dimension tree}.
In this pictorial description, the contractions of component tensors~\eqref{eq:HT basis} are indicated as edges between vertices of a graph and the indices of the tensor are represented by open edges.
This hierarchical representation has complexity $\mathcal O(pdr + dr^3)$ with $p=\max\{p_1,\ldots,p_d\}$ and $r=\max\{r_\alpha:\;\alpha\in\mathbb D\}$.
\begin{figure}[!ht]
\begin{tikzpicture}[sibling distance=10pt]
\tikzset{frontier/.style={distance from root=140pt}}
\Tree [.\node[draw]{$\mathbf b^{\{1,2,3,4,5\}}$};
[.\node[draw]{$\mathbf b^{\{1,2,3\}}$};
[.\node[draw]{$\mathbf b^{\{1,2\}}$};
[.\node[draw]{$\mathbf b^{\{1\}}$}; $\nu_1$ ]
[.\node[draw]{$\mathbf b^{\{2\}}$}; $\nu_2$ ] ]
[.\node[draw]{$\mathbf b^{\{3\}}$}; $\nu_3$ ] ]
[.\node[draw]{$\mathbf b^{\{4,5\}}$};
[.\node[draw]{$\mathbf b^{\{4\}}$}; $\nu_4$ ]
[.\node[draw]{$\mathbf b^{\{5\}}$}; $\nu_5$ ] ] ]
\end{tikzpicture}
\hfill
\begin{tikzpicture}[sibling distance=10pt]
\tikzset{frontier/.style={distance from root=140pt}}
\Tree [.\node[draw]{$\mathbf b^{\{1,2,3,4,5\}}$};
[.\node[draw]{$\mathbf b^{\{1\}}$}; $\nu_1$ ]
[.\node[draw]{$\mathbf b^{\{2,3,4,5\}}$};
[.\node[draw]{$\mathbf b^{\{2\}}$}; $\nu_2$ ]
[.\node[draw]{$\mathbf b^{\{3,4,5\}}$};
[.\node[draw]{$\mathbf b^{\{3\}}$}; $\nu_3$ ]
[.\node[draw]{$\mathbf b^{\{4,5\}}$};
[.\node[draw]{$\mathbf b^{\{4\}}$}; $\nu_4$ ]
[.\node[draw]{$\mathbf b^{\{5\}}$}; $\nu_5$ ] ] ] ] ]
\end{tikzpicture}
\caption{Dimension trees $\mathbb D$ for $d=5$. Balanced HT tree (left) and linearized TT tree (right).}
\label{fig:dimension tree}
\end{figure}
\paragraph{Tensor trains}
Tensor trains are a subset of the general hierarchical tensors described above.
They were introduced to the numerical mathematics community in~\cite{Oseledets2009,oseledets2010tt} but have been known to physicists for a long time as matrix product states (MPS).
The linear structure is depicted in Figure~\ref{fig:dimension tree} (right), which corresponds to taking $V_{\{j,\ldots,d\}}\subset V_{\{j\}}\otimes V_{\{j+1,\ldots,d\}}$.
In the example, we consider the linear tree $\mathbb D=\left\{ \{1\},\ldots,\{d\},\{d-1,d\},\{d-2,d-1,d\},\ldots,\{1,\ldots,d\} \right\}$.
Applying the recursive construction, any tensor $\mathbf u\in\mathcal H_d$ can be written as
\begin{align}
(\nu_1,\ldots,\nu_d) &\mapsto \mathbf U(\nu_1,\ldots,\nu_d)\notag\\
&= \sum_{k_1=1}^{r_1}\cdots\sum_{k_{d-1}=1}^{r_{d-1}} \mathbf U^1(\nu_1,k_1)\mathbf U^2(k_1,\nu_2,k_2)\cdots \mathbf U^d(k_{d-1},\nu_d), \label{eq:the train}
\end{align}
where %
\begin{align*}
\mathbf{U}^1(\nu_1,k_1) &:= \sum_{\ell=1}^{r_{1}} \mathbf{b}^{\{1\}}(\nu_1, \ell) \mathbf{b}^{D}(k_{1},\ell) , \\
\mathbf{U}^j(k_{j-1},\nu_j,k_j) &:= \sum_{\ell=1}^{r_{j}} \mathbf{b}^{\{j\}}(\nu_j, \ell) \mathbf{b}^{\{j,\ldots,d\}}(k_{j-1},k_j,\ell) , \qquad j=2,\ldots,d-1 \\
\mathbf{U}^d(k_{d-1},\nu_d) &:= \mathbf{b}^{\{d\}}(\nu_d, k_{d-1}) .
\end{align*}
This can be reformulated as matrix products
\begin{equation}
\label{eq:TT}
\mathbf U(\nu_1,\ldots,\nu_d) = \prod_{j=1}^d \mathbf b_j(\nu_j) = \tau(\mathbf b^1, \ldots, \mathbf b^d)(\nu),
\end{equation}
with component matrices $b_j(\nu_j)\in\mathbb R^{r_{j-1}\times r_j}$ given by
\begin{equation}
\left(b_j(\nu_j)\right)_{k_{j-1},k_j} = \mathbf U^j(k_{j-1},\nu_j,k_j),\quad 1< j< d,
\end{equation}
and
\begin{equation}
\left(b_1(\nu_1)\right)_{k_1} = \mathbf U^1(\nu_1,k_1),\quad \left(b_d(\nu_d)\right)_{k_{d-1}} = \mathbf U^d(k_{d-1},\nu_d).
\end{equation}
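The matrix-product evaluation~\eqref{eq:TT} is straightforward to implement. The following Python sketch (random cores, illustrative sizes) evaluates a single entry $\mathbf U(\nu)$ as a product of component matrices and checks it against the fully contracted tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, r = 4, 3, 2     # order, mode size, rank (illustrative)
# random cores b^j of shape (r_{j-1}, p_j, r_j) with boundary ranks r_0 = r_d = 1
cores = [rng.standard_normal((1 if j == 0 else r, p, 1 if j == d - 1 else r))
         for j in range(d)]

def tt_entry(nu):
    # U(nu_1, ..., nu_d) as a product of component matrices b_j(nu_j)
    out = np.eye(1)
    for j, c in enumerate(cores):
        out = out @ c[:, nu[j], :]
    return out[0, 0]

# contract the whole train into a full tensor and compare a single entry
full = cores[0]
for c in cores[1:]:
    full = np.tensordot(full, c, axes=([-1], [0]))
full = full.squeeze(axis=(0, -1))
nu = (1, 0, 2, 1)
print(full[nu], tt_entry(nu))
```

Evaluating a single entry this way costs $\mathcal O(dr^2)$ operations instead of forming the $p^d$ full tensor.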
It has to be pointed out that the representation~\eqref{eq:TT} is not unique since in general there exist $\mathbf b^\alpha\neq \mathbf c^\alpha$ such that $\tau\left(\{\mathbf b^\alpha:\; \alpha\in\mathbb D\}\right)=\tau\left(\{\mathbf c^\alpha:\; \alpha\in\mathbb D\}\right)$.
This can also be seen easily in~\eqref{eq:TT} when introducing arbitrary orthogonal matrices and their respective inverses in between the component tensors.
An illustration of the tensor train structure~\eqref{eq:the train} is depicted in Figure~\ref{TT:fig:hosvd} (right), which is equivalent to the tree structure shown on the left-hand side.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[sibling distance=10pt]
\tikzset{frontier/.style={distance from root=140pt}}
\Tree [.\node[draw]{$\mathbf b^{\{1,2,3,4,5\}}$};
[.\node[draw]{$\mathbf b^{\{1\}}$}; $\nu_1$ ]
[.\node[draw]{$\mathbf b^{\{2,3,4,5\}}$};
[.\node[draw]{$\mathbf b^{\{2\}}$}; $\nu_2$ ]
[.\node[draw]{$\mathbf b^{\{3,4,5\}}$};
[.\node[draw]{$\mathbf b^{\{3\}}$}; $\nu_3$ ]
[.\node[draw]{$\mathbf b^{\{4,5\}}$};
[.\node[draw]{$\mathbf b^{\{4\}}$}; $\nu_4$ ]
[.\node[draw]{$\mathbf b^{\{5\}}$}; $\nu_5$ ] ] ] ] ]
\end{tikzpicture}
\hfill
\begin{tikzpicture}[sibling distance=10pt]
\tikzset{frontier/.style={distance from root=140pt}}
\Tree [.\node[draw]{$\mathbf U^{1}$}; $\nu_1$
[.\node[draw]{$\mathbf U^{2}$}; $\nu_2$
[.\node[draw]{$\mathbf U^{3}$}; $\nu_3$
[.\node[draw]{$\mathbf U^{4}$}; $\nu_4$
[.\node[draw]{$\mathbf U^{5}$}; $\nu_5$ ] ] ] ] ]
\end{tikzpicture}
\caption{An order $5$ tensor in tensor train representation and its linear representation using component tensors as in~\eqref{eq:the train}.}
\label{TT:fig:hosvd}
\end{figure}
It turns out that every tensor has a TT-representation with minimal rank, which means that the TT-rank is well-defined.
Moreover, an efficient algorithm for computing a minimal TT-representation is given by the TT Singular Value Decomposition (TT-SVD)~\cite{holtz2012manifolds}.
Additionally, the set of tensor trains with fixed TT-rank $\mathbf r$ denoted by $\mathcal T_\mathbf r\subseteq \mathcal H_d$ forms a smooth manifold.
If all lower ranks are included, an algebraic variety denoted by $\mathcal T_{\leq\mathbf r}$ is formed \cite{kutschan2017tangent}.
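A minimal Python sketch of the sequential-SVD construction (often called TT-SVD) reads as follows; the truncation threshold and the test tensor of TT-rank $2$ are illustrative assumptions.

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Sequential truncated SVDs turning a full tensor into TT cores."""
    dims, cores, r = T.shape, [], 1
    C = T.copy()
    for n in dims[:-1]:
        C = C.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rank = max(1, int((s > eps * s[0]).sum()))   # drop tiny singular values
        cores.append(U[:, :rank].reshape(r, n, rank))
        C = s[:rank, None] * Vt[:rank]
        r = rank
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    full = cores[0]
    for c in cores[1:]:
        full = np.tensordot(full, c, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# sum of two rank-one tensors: TT-rank at most 2, recovered by tt_svd
rng = np.random.default_rng(0)
a = [rng.standard_normal(4) for _ in range(4)]
b = [rng.standard_normal(4) for _ in range(4)]
T = np.einsum('i,j,k,l->ijkl', *a) + np.einsum('i,j,k,l->ijkl', *b)
cores = tt_svd(T)
print([c.shape for c in cores])
```

The reconstruction is exact and the computed ranks do not exceed the TT-rank of the input tensor.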
\subsection{Tensor Trains as differentiable manifolds}
\label{sec:differentiable}
The multilinear structure of the tensor product enables efficient optimization within the manifold structure.
Endowed with the Euclidean metric induced by the Frobenius scalar product, the set $\mcal{T}_{\mathbf r}$ becomes an embedded Riemannian manifold~\cite{holtz2011TTMfld,uschmajew2020riemannianTT,wolf2019}.
This allows the formulation of different line search algorithms utilizing the \emph{Riemannian gradient}.
For a function $J : \mcal{H}_d\to \mbb{R}$ the Riemannian gradient at $X\in\mcal{T}_r$ can be computed by projecting the Euclidean gradient onto the tangent space $\mathbb{T}_X$ at $X$ (see e.g.~\cite{steinlechner2016thesis,absil2008book}), i.e.
\begin{equation}
P_{\mathbb{T}_{X}} \nabla J\pars{X},
\end{equation}
where $P_{\mathbb{T}_{X}}$ is the orthogonal projector onto the tangent space of $\mcal{T}_{r}$ at the point $X$.
Just as the negative Euclidean gradient, the negative Riemannian gradient can be used as a descent direction for minimizing $V_{p,N}^{m}$.
In theory, the strategy is to move in that direction along a geodesic until a local minimum is reached.
Starting from $\widetilde{X}$, the function that moves in the direction $Z\in\mbb{T}_{\widetilde{X}}$ along a geodesic for a distance of $\norm{Z}$ is called the \emph{exponential map} $\exp_{\widetilde{X}}\pars{Z}$.
Unfortunately, there is no analytic expression for the exponential map available for $\mcal{T}_{r}$.
Instead, one usually resorts to a so-called \emph{retraction} $\mcal{R}_{\widetilde{X}}\pars{Z}$ which is an approximation of the exponential map, see~\cite{absil2008book} for details.
In the tensor train format, an example of a retraction is defined by the TT-SVD via
\begin{equation}
\mcal{R}_{\widetilde{X}}\pars{Z} = \operatorname{TT-SVD}\pars{\widetilde{X} + Z}
\end{equation}
as shown by~\cite{steinlechner2016thesis}.
Using these techniques, a steepest descent update with step size $\beta$ on the manifold $\mcal{T}_{r}$ is given by
\begin{equation}
\widetilde{X}_{k+1} = \mcal{R}_{\widetilde{X}_k}\pars{- \beta P_{\mbb{T}_{\widetilde{X}_k}} \nabla V_{p,N}^m\pars{\widetilde{X}_k} } .
\end{equation}
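The structure of this update is easy to demonstrate in the simplest case $d=2$, where $\mcal{T}_r$ is the manifold of rank-$r$ matrices and the TT-SVD retraction reduces to a truncated SVD. The following Python sketch (random rank-$3$ target, fixed step size, tangent-space projection omitted for brevity) performs the retracted gradient iteration for the model cost $J(X)=\frac12\|X-T\|_F^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 3
T = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-3 target

def retract(Y, rank):
    # rank-truncated SVD: the TT-SVD retraction for an order-2 tensor
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

J = lambda X: 0.5 * np.linalg.norm(X - T) ** 2   # Euclidean gradient: X - T
X = retract(rng.standard_normal((n, n)), r)      # random rank-3 starting point
J0 = J(X)
for _ in range(30):
    X = retract(X - 0.5 * (X - T), r)            # gradient step, then retract
print(J0, J(X))
```

In this toy setting the iterates stay on the rank-$3$ variety and the cost decreases by many orders of magnitude; the actual algorithm additionally projects the gradient onto the tangent space.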
Convergence of Riemannian optimization algorithms is typically only considered for smooth functions.
When this can be assumed, the convergence can be sped up by using higher-order algorithms such as the conjugate gradient method.
This additionally requires a method of ``moving'' tangent vectors $Z_{k-1}\in\mbb{T}_{\widetilde{X}_{k-1}}$ from the tangent space at point $\widetilde{X}_{k-1}$ to the tangent space $\mbb{T}_{\widetilde{X}_{k}}$ at point $\widetilde{X}_{k}$.
Again, the natural differential-geometric tool, the \emph{parallel transport}, is computationally infeasible on the tensor train manifold.
However, the \emph{vector transport} introduced by \cite{absil2008book} defines a class of approximations, which can be used to accomplish this task.
In the tensor train format, such a vector transport is given by the projection $P_{\mbb{T}_{\widetilde{X}_{k}}} Z_{k-1}$.
\section{A version of the Longstaff-Schwartz algorithm based on the Tensor Train format}
\label{sec:primal minimization}
We now combine the tensor train format introduced in Section~\ref{sec:HT} with the Longstaff-Schwartz algorithm for computing Bermudan option prices as detailed in Algorithm~\ref{alg:LS}.
To make the approximation problem~\eqref{eq:cond_exp_approx} concrete a set of basis functions $\{\vec{B}_\alpha\}_{\alpha\in\Lambda}$ has to be chosen.
We prefer to work on a compact sub-domain of the reals, which we choose such that the probability of assets lying outside the domain is minimal.
As a heuristic method for determining the truncation, we set
\[
a = \min_{m,n,k} (S^m_n)_k \qquad\text{and}\qquad b = \max_{m,n,k} (S^m_n)_k
\]
and choose the $H^2(a, b)$-orthogonal basis functions $\{ B_1, \dots, B_{p} \}$ spanning the space of polynomials of degree less than $p$.
We then represent the approximation of the discounted value of the option $v:\mathbb R^{d'} \to \mathbb R$ by
\begin{equation}
\label{eq:approximate value}
v(x) = \sum_{\alpha\in\Lambda} V_\alpha \prod_{k=1}^{d'} B_{\alpha_k}(x_k),
\end{equation}
where we approximate the coefficient tensor $\mathbf V\in(\mathbb{R}^{p})^{\otimes d'}$ in the TT format.
As is common practice in Longstaff-Schwartz type algorithms we augment this basis by the payoff function $\varphi$.
With the definition
\[
B: \mathbb R \to \mathbb R^p,\quad B(x) = [B_1(x), \dots, B_p(x) ],
\]
i.e. $B$ stacks the one-dimensional basis functions into a vector such that they can be contracted with the component tensors,
the resulting approximation $v:\mathbb R^{d'} \to \mathbb R$ is graphically represented by
\begin{center}
\begin{tikzpicture}
\begin{scope}[every node/.style={draw, fill=white}]
\node (A1) at (0,0) {$U_1$};
\node (A2) at (1.25,0) {$U_2$};
\node (A3) at (2.5,0) {$U_3$};
\node (A4) at (4.25,0) {$U_{d'}$};
\node (B1) at (0,-1) {$B(x_1)$};
\node (B2) at (1.25,-1) {$B(x_2)$};
\node (B3) at (2.5,-1) {$B(x_3)$};
\node (B4) at (4.25,-1) {$B(x_{d'})$};
\end{scope}
\node (C0) at (-2,0) {$v(x)$};
\node (C1) at (-1,0) {$=$};
\node[right=0.25 of A4] (plus) {$+$};
\node[right=0.05 of plus] (g) {$c_\varphi \varphi(x)\ .$};
\begin{scope}[every edge/.style={draw=black,thick}]
\path [-] (A1) edge node[midway,left,sloped] [above] {$r_1$} (A2);
\path [-] (A2) edge node[midway,left,sloped] [above] {$r_2$} (A3);
\path [-] (A1) edge node[midway,left] [right] {$p$} (B1);
\path [-] (A2) edge node[midway,left] [right] {$p$} (B2);
\path [-] (A3) edge node[midway,left] [right] {$p$} (B3);
\path [-] (A4) edge node[midway,left] [right] {$p$} (B4);
\path [-] (A3) edge ($(A3)+(0.56,0)$);
\draw[cheating dash=on 2pt off 2pt, thick] ($(A3)+(0.55,0)$) edge ($(A4)-(0.55,0)$);
\path [-] ($(A4)-(0.56,0)$) edge (A4);
\end{scope}
\end{tikzpicture}
\end{center}
Note that on the r.h.s.\ of this equation every open index of $U_i$ and $B(x_i)$ for $1 \leq i \leq d'$ is contracted, which indeed results in a scalar value $v(x)$.
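The contraction in the diagram can be spelled out in a few lines of Python. As a stand-in for the $H^2(a,b)$-orthogonal basis we use Legendre polynomials rescaled to $[a,b]$; the cores, domain, payoff and all sizes below are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(0)
d, p, r = 3, 4, 2                         # assets, basis size, TT-rank (illustrative)
a, b = 60.0, 140.0                        # assumed truncation of the asset domain
K, c_phi = 100.0, 0.1                     # assumed strike and payoff coefficient
cores = [rng.standard_normal((1 if j == 0 else r, p, 1 if j == d - 1 else r))
         for j in range(d)]
phi = lambda x: max(K - np.mean(x), 0.0)  # assumed payoff (basket put)

def B(x):
    # first p Legendre polynomials, rescaled from [a, b] to [-1, 1]
    t = 2.0 * (x - a) / (b - a) - 1.0
    return np.array([legval(t, np.eye(p)[k]) for k in range(p)])

def v(x):
    # contract every core with its basis vector B(x_j), then add the payoff term
    out = np.eye(1)
    for j, c in enumerate(cores):
        out = out @ np.tensordot(c, B(x[j]), axes=([1], [0]))
    return out[0, 0] + c_phi * phi(x)

print(v(np.array([85.0, 95.0, 100.0])))
```

Evaluating $v$ this way never forms the full coefficient tensor; the cost per sample is linear in $d'$.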
To solve the resulting minimization problem~\eqref{eq:cond_exp_approx} we use a rank adaptive version of the \emph{alternating least-squares (ALS)} algorithm \cite{ALS}, the \emph{stable alternating least-squares algorithm (SALSA)} \cite{grasedyck2019stable}.
Using this algorithm relieves us from having to guess an appropriate rank of the solution beforehand.
As a termination condition we check whether the error on the samples or on a validation set decreases sufficiently during one iteration.
In our implementation this validation set is chosen to have $20\%$ of the size of the training set.
We now describe how we modify ALS (or SALSA) to handle the additional term $c_\varphi \varphi(x)$.
The classical ALS algorithm optimizes the component tensors $\{U_1,\ldots,U_{d'}\}$ in an alternating fashion.
For each $k=1,\ldots,d'$ all component tensors $\{U_j\}_{j\ne k}$ are fixed and only $U_k$ is optimized.
This procedure is then repeated alternatingly until a convergence criterion is met.
We modify this scheme by optimizing $c_\varphi$ as well as $U_k$ for each $k$.
Since the mapping $(U_k,c_\varphi) \mapsto v$ is linear, the resulting problem is a classical linear least squares problem
\[
(U_k, c_\varphi) = \argmin_{w,c} \frac{1}{m}\sum_{i=1}^{m} |Y^i - A_k^i(w,c)|^2 .
\]
To exemplify this, for $k=2$ the operator $A_k^m$ is diagrammatically represented by
\begin{center}
\begin{tikzpicture}
\begin{scope}[every node/.style={draw, fill=white}]
\node (A1) at (0.0,0) {$U_1$};
\node (A2) at (1.5,0) {$w$};
\node (A3) at (3.0,0) {$U_3$};
\node (A4) at (5.0,0) {$U_{d'}$};
\node (B1) at (0.0,-1) {$B(S_1^m)$};
\node (B2) at (1.5,-1) {$B(S_2^m)$};
\node (B3) at (3.0,-1) {$B(S_3^m)$};
\node (B4) at (5.0,-1) {$B(S_{d'}^m)$};
\end{scope}
\node (C0) at (-2.5,0) {$A_k^m (w, c_\varphi)$};
\node (C1) at (-1,0) {$=$};
\begin{scope}[every edge/.style={draw=black,thick}]
\path [-] (A1) edge node[midway,left,sloped] [above] {$r_1$} (A2);
\path [-] (A2) edge node[midway,left,sloped] [above] {$r_2$} (A3);
\path [-] (A1) edge node[midway,left] [right] {$p$} (B1);
\path [-] (A2) edge node[midway,left] [right] {$p$} (B2);
\path [-] (A3) edge node[midway,left] [right] {$p$} (B3);
\path [-] (A4) edge node[midway,left] [right] {$p$} (B4);
\path [-] (A3) edge ($(A3)+(0.56,0)$);
\draw[cheating dash=on 2pt off 2pt, thick] ($(A3)+(0.55,0)$) edge ($(A4)-(0.55,0)$);
\path [-] ($(A4)-(0.56,0)$) edge (A4);
\end{scope}
\node[right=0.25 of A4] (plus) {$+$};
\node[right=0.05 of plus] (g) {$c_\varphi \varphi(S^m)\ .$};
\end{tikzpicture}
\end{center}
After reshaping the pair $(w,c)\in\mathbb{R}^{r_1\times p\times r_2}\times\mathbb{R}$ into a vector of size $r_1pr_2+1$, the operator can be written as $A\in\mathbb{R}^{m\times (r_1pr_2+1)}$ and the problem becomes
\[
X = \argmin_{x} \frac{1}{m} \|\vec{Y} - Ax\|_2^2,
\]
where $\vec{Y} = [Y^1,\ldots,Y^m]$.
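A single micro-step of this modified ALS is a plain linear least-squares solve. In the following Python sketch the contractions of the fixed cores with the basis evaluations are replaced by random matrices (illustrative assumptions), which suffices to show how the design matrix $A$ is assembled and solved:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, r1, r2 = 200, 3, 2, 2               # samples, basis size, neighbouring ranks
# assumed stand-ins for the fixed contractions around the free core U_2:
L = rng.standard_normal((m, r1))          # left part: U_1 contracted with B(S_1^i)
Bm = rng.standard_normal((m, p))          # local basis evaluations B(S_2^i)
R = rng.standard_normal((m, r2))          # right part: U_3 ... U_{d'} contracted
phi = rng.standard_normal(m)              # payoff evaluations phi(S^i)
Y = rng.standard_normal(m)                # regression targets Y^i

# row i of A is vec(L_i x B_i x R_i) with the payoff column appended
A = np.einsum('ia,ib,ic->iabc', L, Bm, R).reshape(m, r1 * p * r2)
A = np.hstack([A, phi[:, None]])
sol, *_ = np.linalg.lstsq(A, Y, rcond=None)
w, c_phi = sol[:-1].reshape(r1, p, r2), sol[-1]
print(w.shape, c_phi)
```

In the actual algorithm the left and right contractions are updated incrementally while sweeping over $k$, so that each local solve remains cheap.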
\subsection*{Complexity analysis}
\label{sec:primal complexity}
Using a tensor train representation instead of the full tensor allows us to reduce the space complexity from $\mcal{O}\pars{p^{d'}}$ to $\mcal{O}\pars{d'pr^2}$ with $r = \max\braces{r_1,\ldots,r_{d'-1}}$.
For moderate $r$ this leads to a dramatic reduction in memory usage which we observe in our experiments.
Figure~\ref{fig:ranks_barplot} shows that the rank-adaptive algorithm computes solutions with $r<6$ and we numerically verify that for $d'>100$ a rank of $r=1$ is sufficient for obtaining values within the reference interval from the literature.
This allows us to compute the price of max-call options with up to $1000$ assets.
Since ALS is an iterative method, its time complexity can only be given per iteration and amounts to
\begin{equation}
\mcal{O}\pars{Nm\abs{\Lambda_p}^2r^4}
\end{equation}
floating point operations.
As with every iterative algorithm the number of iterations needed depends on the specific problem.
In our numerical tests we generally needed less than $10$ iterations.
\section{Dual martingale minimization with tensor trains}
\label{sec:dual minimization}
To use the tensor train format in the dual formulation, we define the set $\mcal{P}_{\hat{0}} = \braces{\widetilde{X} : \widetilde{X}_0 = 0}$ and rewrite \eqref{eq:dual_disc} as
\begin{equation}
U_0 = \inf_{\substack{\widetilde{X} \in \mcal{T}_{r} \cap \mcal{P}_{\hat{0}}}} V_{p,N}^m\pars{\widetilde{X}}, \label{eq:dual_disc_intersection}
\end{equation}
where $\mathcal{T}_{r}$ denotes the set of TT tensors of rank $r$ and $V_{p,N}^m$ is the cost function that is minimized in \eqref{eq:dual_disc}.
Performing this optimization directly on the parameters of the tensor train is ill-posed since its parametrization is not unique.
A common way to solve this is to use the manifold structure of $\mcal{T}_r$ and employ a Riemannian optimization algorithm.
For this~\eqref{eq:dual_disc_intersection} has to be rephrased as an unconstrained smooth optimization problem.
Define the projector $\pars{P_{\hat{0}} \widetilde{X}}_{{\alpha}} = (1-\delta_{{\alpha}0}) \widetilde{X}_\alpha$ and remove the constraint $\widetilde{X}\in\mcal{P}_{\hat{0}}$ by rewriting \eqref{eq:dual_disc_intersection} as
\begin{equation}
U_0 = \inf_{\widetilde{X} \in \mcal{T}_{r}} V_{p,N}^m\pars{P_{\hat{0}} \widetilde{X}} .
\end{equation}
Since $P_{\hat{0}}$ is a linear operator, the modified cost function $V_{p,N}^m\circ P_{\hat{0}}$ retains the convexity, continuity and piece-wise linearity of $V_{p,N}^m$.
We then mollify $V_{p,N}^m$ by replacing the maximum with the smooth approximation
\begin{equation}
\amax_{n=1,\ldots,N} x_n = \frac{\sum_{n=1}^N x_n e^{\alpha x_n}}{\sum_{n=1}^N e^{\alpha x_n}} .
\end{equation}
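A minimal NumPy sketch of this softmax-weighted approximation (illustrative code, not the repository implementation) shows its behaviour for growing $\alpha$:

```python
import numpy as np

def smooth_max(x, alpha=50.0):
    """Softmax-weighted smooth approximation of max(x).
    Shifting the exponents by max(x) avoids overflow for large alpha."""
    x = np.asarray(x, dtype=float)
    w = np.exp(alpha * (x - x.max()))
    return float(np.sum(x * w) / np.sum(w))
```

Since the weights form a convex combination, the approximation always lies below the true maximum and converges to it as $\alpha\to\infty$.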
The resulting cost function reads
\begin{equation}
V_{p,N}^{m,\alpha}\pars{\widetilde{X}} = \frac{1}{m}\sum_{i=1}^m \bracs*{\amax_{n=1,\ldots,N} \pars*{Z^{\pars{i}}_{t_n} - \sum_{\alpha\in\Lambda^n} \widetilde{X}_\alpha H_{\alpha_1}\pars{G_1^{\pars{i}}} \cdots H_{\alpha_n}\pars{G_n^{\pars{i}}}}}.
\end{equation}
The respective optimization problem
\begin{equation}
\label{eq:U0 optimization}
U_0 = \inf_{\widetilde{X} \in \mcal{T}_{r}} V_{p,N}^{m,\alpha}\pars{P_{\hat{0}} \widetilde{X}}
\end{equation}
can be solved by Riemannian algorithms.
We use a conjugate gradient method with the \textsc{FR-PR\textsubscript{+}} update rule as defined in~\cite{cg}.
We also have to address the choice of the initial value for the optimization.
Since the set $\mcal{T}_{r}$ is not convex, a careful choice is important in order to reach the global minimum.
We obtain such a value for polynomial degree $p$ by using the optimal value $\widetilde{X}^{\pars{p-1}}$ for the polynomial degree $p-1$.
This recursion stops at $p=0$ where we know the optimal value to be $\widetilde{X}^{\pars{0}} = 0$.
In our implementation we used a constant rank of $4$ and chose $\alpha = 50$, which empirically kept the smoothing-induced error below $10^{-3}$.
As a termination condition we check whether the error fails to decrease sufficiently over a period of $10$ iterations.
Of all iterates obtained during the optimization we choose the one that has the lowest value on a validation set.
In our implementation this validation set is chosen to have one ninth of the size of the training set.
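The degree-continuation strategy described above can be sketched as a simple loop. Here `minimize(p, x0)` stands for one Riemannian optimization run at degree $p$; this interface is an assumption for illustration, not the actual API of our implementation.

```python
def warm_start_over_degrees(minimize, p_max):
    """Initialise the optimisation at degree p with the optimum obtained
    at degree p-1; the recursion starts at the known optimum 0 for p = 0."""
    x = 0.0  # optimal value for polynomial degree 0
    for p in range(1, p_max + 1):
        x = minimize(p, x)
    return x
```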
\subsection*{Complexity analysis}
\label{sec:dual complexity}
In the dual method we observe the same dramatic reduction in space complexity as in the primal algorithm.
The space complexity of $\mcal{O}\pars{p^{Nd'}}$ for the full tensor is reduced to $\mcal{O}\pars{Nd'pr^2}$ for a tensor in the tensor train format with a rank uniformly bounded by $r$.
This allows us to use the dual algorithm to compute the price of a basket put option with $N=31$ exercise dates in Table~\ref{tbl:experiment_putbasket_lelong}.
Since gradient descent is again an iterative algorithm the time complexity can only be computed per iteration.
Assuming that $\widetilde{X}$ is a tensor train tensor with rank $r$, the contraction
\begin{equation}
\sum_{\alpha\in\Lambda} \widetilde{X}_\alpha H_{\alpha_1}\pars{G_1^{\pars{i}}} \cdots H_{\alpha_n}\pars{G_n^{\pars{i}}}
\end{equation}
can be computed with $\mcal{O}\pars{n\abs{\Lambda_p}r^2 + \pars{N-n}r^2}$ floating point operations.
This means that both $V_{p,N}^{m,\alpha}\pars{\widetilde{X}}$ and its gradient can be computed with $\mcal{O}\pars{m N^2 \abs{\Lambda_p}r^2}$ floating point operations.
Compare this to the $\mcal{O}\pars{mp^{Nd'}}$ floating point operations required for the full tensor and to the $\mcal{O}\pars{m\binom{Nd'+p}{Nd'}}$ operations for the sparse tensor.
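The contraction above can be carried out core by core. The following sketch (assuming cores stored as `(left rank, mode, right rank)` arrays, which is a common but implementation-specific convention) realises a cost of O(p r^2) per mode:

```python
import numpy as np

def tt_evaluate(cores, h):
    """Evaluate sum_alpha X_alpha * prod_k H_{alpha_k}(G_k) for a tensor
    train X.  cores[k] has shape (r_{k-1}, p, r_k); h[k] holds the basis
    function values at mode k."""
    v = np.ones((1,))
    for core, hk in zip(cores, h):
        # contract the mode index with the basis values, then the rank index
        v = v @ np.einsum("apb,p->ab", core, hk)
    return float(v[0])
```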
At least from a theoretical point of view, evaluation and optimization are faster in the tensor train format, namely
\begin{itemize}
\item exponentially faster when compared to the full tensor ansatz and
\item up to a polynomial factor faster than the sparse ansatz when $p>2$.
\end{itemize}
These statements obviously depend on the rank $r$, which is at most $4$ in our experiments, meaning that the represented objects are in fact of low rank.
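The storage counts behind this comparison can be made concrete. The figures below are illustrative, following the complexity statements above; exact constants depend on the implementation.

```python
from math import comb

def full_params(N, dp, p):
    """Full tensor: p^(N d') coefficients."""
    return p ** (N * dp)

def sparse_params(N, dp, p):
    """Sparse ansatz with total degree at most p."""
    return comb(N * dp + p, p)

def tt_params(N, dp, p, r):
    """Tensor train with uniform rank bound r: N d' p r^2 parameters."""
    return N * dp * p * r * r
```

For the basket put with $N=31$, $d'=5$, $p=3$ and $r=4$ this gives $7\,440$ TT parameters against $644\,956$ sparse coefficients and an astronomically large full tensor.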
\section{Numerical experiments}
\label{sec:experiments}
In this part, we present results obtained from the algorithms described above.
Implementations in Python can be found at \url{https://github.com/ptrunschke/tensor_option_pricing}.
For each experiment, we report estimators $v_0\pars{S_{t_0}}$ and $V_{p,N}^{m,\alpha}\pars{\widetilde{X}}$ based on re-simulated trajectories, see \cite{glasserman2013monte}. More precisely, we generate independent trajectories of the underlying price process $S$ and apply the stopping strategy implied by the already computed approximate value functions $v_k$, giving a low-biased approximation to the true option price. Conversely, the approximately optimal martingale parameterizations computed by the dual algorithm are used to compute a high-biased estimator, once again based on new trajectories that were not used to produce the parameterization in the first place.
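The resimulation yielding the low-biased estimator can be sketched as follows. The interfaces `payoff(n, s)` and `value_fns[n](s)` are assumptions for illustration, not the repository API.

```python
import numpy as np

def lower_bound_price(paths, payoff, value_fns):
    """Low-biased price estimate: on fresh trajectories, stop as soon as
    immediate (discounted) exercise beats the approximate value function.
    paths has shape (m, N+1, d)."""
    m, steps, _ = paths.shape
    total = 0.0
    for i in range(m):
        for n in range(steps):
            s = paths[i, n]
            if n == steps - 1 or payoff(n, s) >= value_fns[n](s):
                total += payoff(n, s)
                break
    return total / m
```

Because the stopping rule is merely admissible, not optimal, the resulting average is biased low.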
In the following, we denote by $n$ the number of possible exercise dates, including $0$, by $p$ the polynomial degree used in the approximation of the conditional expectations and in the Wiener--It\^o chaos expansion, and by $m$ the number of samples used.
We further denote by $m_{\mathrm{resim}}$ the number of samples used for the resimulation.
$V_{\text{LS}}$ is the price computed by the resimulation of the Longstaff--Schwartz method and $V_{\text{dual}}$ is the price computed by the dual method.
The corresponding reference values are denoted by $V_{\text{LS}}^{\text{ref}}$ and $V_{\text{dual}}^{\text{ref}}$ respectively, and were obtained in the literature -- see specific references for the individual examples.
\subsection{Options in the Black--Scholes model}
The $d$-dimensional Black--Scholes model for $j\in \{1, \ldots, d\}$ reads
\begin{equation}\label{eq:black_scholes}
\mathrm{d}S^j_t = S^j_t\left((r_t - \delta_t)\mathrm{d}t + \sigma^j L_j\,\mathrm{d}B_t\right),
\end{equation}
where $B$ is a Brownian motion with values in $\mathbb{R}^d$, $\sigma = (\sigma^1, \ldots, \sigma^d)$ is the vector of volatilities assumed to be deterministic and positive at all times, and $L_{j}$ is the $j$-th row of the matrix $L$ defined as a square root of the correlation matrix chosen to be of the form
\begin{equation}
\Gamma = \begin{pmatrix}
1 & \rho & \cdots & \rho \\
\rho & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \rho \\
\rho & \cdots & \rho & 1 \\
\end{pmatrix},
\end{equation}
where $\rho \in (-1/(d-1), 1]$ to ensure that $\Gamma$ is positive definite.
The initial condition for the SDE is given by the spot price $S_0$.
We will test the algorithms for different payoff functions $\phi$, dimensions $d$ and strike prices $K$.
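For constant coefficients, exact sampling of this model on an equidistant time grid can be sketched as follows (illustrative code, not the repository implementation; the constant-coefficient assumption is ours for simplicity):

```python
import numpy as np

def simulate_black_scholes(S0, r, delta, sigma, rho, T, N, m, rng):
    """Sample m paths of the correlated Black--Scholes model on N steps,
    assuming constant rate, dividend yield and volatility."""
    d = len(S0)
    Gamma = np.full((d, d), rho) + (1.0 - rho) * np.eye(d)
    L = np.linalg.cholesky(Gamma)  # square root of the correlation matrix
    dt = T / N
    S = np.empty((m, N + 1, d))
    S[:, 0] = S0
    for n in range(N):
        dB = rng.standard_normal((m, d)) @ L.T * np.sqrt(dt)
        S[:, n + 1] = S[:, n] * np.exp((r - delta - 0.5 * sigma**2) * dt + sigma * dB)
    return S
```

Since $\Gamma_{jj} = 1$, the It\^o correction in the exponent is $-\sigma^2/2\,\mathrm{d}t$ per asset.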
\subsection{A basket put option on correlated assets}
We first consider the case of a put basket option on correlated assets.
The payoff of this option reads $\phi\pars{S_t} = \left(K - \sum_{j=1}^d \omega_jS_t^j\right)_+$ where $\omega = \pars{\omega_1, \ldots, \omega_d}$ is a vector of real valued weights.
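In code, this payoff is a one-liner (batch version for an array of asset vectors):

```python
import numpy as np

def put_basket_payoff(S, weights, K):
    """(K - sum_j w_j S^j)_+ for asset vectors S of shape (..., d)."""
    return np.maximum(K - S @ np.asarray(weights), 0.0)
```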
We report in Table~\ref{tbl:experiment_putbasket_lelong} and Table~\ref{tbl:experiment_putbasket_lelong_100K} our values compared to the reference prices for two different sample sizes, $m = 20\hspace{0.1em}000$ and $m = 10^5$.
Blank cells in the tables indicate that reference values are not reported in the reference papers.
It can be seen that the values obtained by our version of Lelong's method are not as close to the reference price as are the values obtained by~\cite{lelong2018dual}.
From a theoretical perspective a lower value should always be possible given a sufficient rank.
We thus attribute this to the lack of a rank adaption strategy in the dual problem and highlight this as an interesting direction for further research.
It can moreover be seen that for $N=31$ the values of $V_{\text{dual}}$ increase with $p$.
Since the ansatz manifold for $p=2$ is contained in the one for $p=3$, one would expect that this is impossible.
Note however that the table shows resimulated prices only.
Therefore we interpret this observation to indicate that a larger value of $m$ is needed in this case.
This is confirmed in Table~\ref{tbl:experiment_putbasket_lelong_100K}.
For the Longstaff--Schwartz variant we use $m = 10^5$ and observe values close to the reference value.
Furthermore, in the case $N = 31$ and $S_0^j = 100$, we observe that the result for $p=2$ dominates the $p=3$ case, indicating sub-optimal results.
However, as seen in Table \ref{tbl:experiment_putbasket_lelong_100K} we obtain better results for polynomial degree $p=8$.
Note that we have capped the TT-rank at $4$ for the computation with $p=8$.
With this cap, the computational time increased only by a factor of about $3$ compared to the run time for $p=3$ ($40$ seconds versus $15$ seconds).
We also report that during the optimization within the Longstaff--Schwartz algorithm the TT-rank of the value function did not exceed $5$ in any test case, indicating a noticeable low-rank structure of the sought expectation values within the polynomial ansatz space.
This low-rank structure is a necessity for high-dimensional computation and will be analyzed in greater detail in the next example.
In this example, the number of samples used for training has a comparatively large effect not only on the variances but also on the values.
\begin{table}[!ht]
\centering
\begin{tabular}{ c c c | c c c | c c c }
$p$ & $N$ & $S_0^j$ & $V_{\text{dual}}$ & Stddev & $V_{\text{dual}}^{\text{ref}}$ & $V_{\text{LS}}$ & Stddev & $V^{\text{ref}}$ \\
\hline
$2$ & $4$ & $100$ & $2.34$ & $0.003$ & $2.29$ & $2.15$ & $0.009$ & $2.17$ \\
$3$ & $4$ & $100$ & $2.33$ & $0.003$ & $2.25$ & $2.16$ & $0.009$ & $2.17$ \\
$2$ & $7$ & $100$ & $2.64$ & $0.002$ & $2.62$ & $2.39$ & $0.008$ & $2.43$ \\
$3$ & $7$ & $100$ & $2.64$ & $0.002$ & $2.52$ & $2.40$ & $0.008$ & $2.43$ \\
$2$ & $31$ & $100$ & $3.08$ & $0.002$ & & $2.49$ & $0.01$ & \\
$3$ & $31$ & $100$ & $3.12$ & $0.002$ & & $2.36$ & $0.01$ & \\
\hline
$2$ & $4$ & $110$ & $0.67$ & $0.002$ & $0.57$ & $0.53$ & $0.006$ & $0.55$ \\
$3$ & $4$ & $110$ & $0.67$ & $0.002$ & $0.55$ & $0.53$ & $0.006$ & $0.55$ \\
$2$ & $7$ & $110$ & $0.78$ & $0.002$ & $0.64$ & $0.57$ & $0.007$ & $0.61$ \\
$3$ & $7$ & $110$ & $0.77$ & $0.002$ & $0.64$ & $0.57$ & $0.007$ & $0.61$ \\
$2$ & $31$ & $110$ & $3.94$ & $0.002$ & & $0.61$ & $0.008$ & \\
$3$ & $31$ & $110$ & $3.95$ & $0.002$ & & $0.61$ & $0.008$ & \\
\end{tabular}
\caption{Prices for the put basket option with parameters $d=5$, $T=3$, $r=0.05$, $\delta^j = 0$, $\sigma^j=0.2$, $\rho=0$, $K=100$, $\omega_j=\frac{1}{d}$, $m=20\hspace{0.1em}000$, $m_{\mathrm{resim}}=10^6$. Values for $V_{\text{dual}}^{\text{ref}}$ and $V^{\text{ref}}$ are taken from \cite{lelong2018dual}. Number of samples for Longstaff--Schwartz: $m_{\text{LS}} = 10^5$. Empty cells denote unavailable reference values.}
\label{tbl:experiment_putbasket_lelong}
\end{table}
\begin{table}[!ht]
\centering
\begin{tabular}[t]{ c c | c c }
$p$ & $S_0^j$ & $V_{\text{dual}}$ & Stddev \\
\hline
$2$ & $100$ & $2.88$ & $0.001$ \\
$3$ & $100$ & $2.88$ & $0.001$ \\
\hline
$2$ & $110$ & $0.80$ & $0.001$ \\
$3$ & $110$ & $0.80$ & $0.001$ \\
\end{tabular}
\qquad
\begin{tabular}[t]{ c c | c c }
$p$ & $S_0^j$ & $V_{\text{LS}}$ & Stddev \\
\hline
$8$ & $100$ & $2.56$ & $0.01$ \\
\end{tabular}
\caption{Prices for the put basket option with parameters $d=5$, $N=31$, $T=3$, $r=0.05$, $\delta^j = 0$, $\sigma^j=0.2$, $\rho=0$, $K=100$, $\omega_j=\frac{1}{d}$, $\mathbf{m=10^5}$, $m_{\mathrm{resim}}=10^6$.}
\label{tbl:experiment_putbasket_lelong_100K}
\end{table}
\FloatBarrier
\subsection{Bermudan max-call options} \label{sec:max-call}
In this section we consider max-call options and, in particular, the scalability of the tensor train approach for the Longstaff--Schwartz algorithm in higher dimensions.
The reference values for this problem were taken from~\cite{andersen2004primal, becker2019deep}.
The payoff function of a max-call option takes the form
\begin{equation}
\left( \max_{1 \leq i \leq d} \omega_i S_t^i - K \right)_+.
\end{equation}
In Table~\ref{tbl:experiment_maxcall_lelong} we report results for the dual algorithm.
In contrast to the case of the put basket option, we see that we are close to the values computed by the original method~\cite{lelong2018dual} and in some cases improve the previously reported results.
This indicates the viability of this approach.
A rank-adaptive algorithm could probably further improve the efficiency of our method in high dimensions.
\begin{table}[h!]
\centering
\begin{tabular}{ c c c c | c c c | c }
$p$ & $d$ & $m$ & $S_0^j$ & $V_{\text{dual}}$ & Stddev & $V_{\text{dual}}^{\text{ref}}$ & $V^{\text{ref}}$ \\
\hline
$2$ & $2$ & $20\hspace{0.1em}000$ & $90$ & $8.85$ & $0.004$ & $10.05$ & $8.15$ \\
$3$ & $2$ & $20\hspace{0.1em}000$ & $90$ & $8.83$ & $0.004$ & $8.6$ & $8.15$ \\
$2$ & $5$ & $20\hspace{0.1em}000$ & $90$ & $21.68$ & $0.014$ & $21.2$ & $16.77$ \\
$3$ & $5$ & $40\hspace{0.1em}000$ & $90$ & $21.40$ & $0.015$ & $20.13$ & $16.77$ \\
\hline
$2$ & $2$ & $20\hspace{0.1em}000$ & $100$ & $14.68$ & $0.004$ & $16.3$ & $14.01$ \\
$3$ & $2$ & $20\hspace{0.1em}000$ & $100$ & $14.65$ & $0.004$ & $15$ & $14.01$ \\
$2$ & $5$ & $20\hspace{0.1em}000$ & $100$ & $32.37$ & $0.017$ & $31.8$ & $26.34$ \\
$3$ & $5$ & $40\hspace{0.1em}000$ & $100$ & $31.95$ & $0.017$ & $29$ & $26.34$ \\
\end{tabular}
\caption{Prices for the call option on the maximum of $d$ assets with parameters $N=10$, $T=3$, $r=0.05$, $\delta^j = 0.1$, $\sigma^j=0.2$, $\rho=0$, $K=100$, $m_{\mathrm{resim}}=10^6$. Values for $V_{\text{dual}}^{\text{ref}}$ and $V^{\text{ref}}$ are taken from \cite{lelong2018dual}.}
\label{tbl:experiment_maxcall_lelong}
\end{table}
In Table~\ref{tbl:experiment_max_call} we consider the Longstaff--Schwartz algorithm in moderate to extreme dimensions. We increase the number of samples to $10^6$ and test every polynomial degree up to $p=7$.
We rarely observe any significant improvement when using polynomial degrees larger than $4$ or $5$.
However, throughout the table polynomial degree $p=6$ appears to obtain the overall best results, with small improvements over the other polynomial degrees.
Moreover, we see that while we do not quite reach the reference values for low dimensions, i.e. $d \leq 20$, the results for higher dimensions are accurate.
A possible explanation for this is that the value function might have a simpler structure in high dimensions.
Finally, in Table~\ref{tbl:experiment_max_call_sort} we use a trick: after sampling all the paths, we sort the assets at every time point by decreasing magnitude, see, e.g., \cite[p.~1230]{andersen2004primal}.
We observe that while the unsorted algorithm already performs well, sorting the assets yields an increase in performance in every dimension.
Moreover, for the sorted case, polynomial degree of $3$ appears to be sufficient to obtain optimal results.
Finally, we observe some numerical instabilities in our implementation of the sorted algorithm
when the dimension is $d=750$ or $d=1000$ and the polynomial degree is larger than $3$.
We assume that these instabilities can be resolved by using a better-conditioned polynomial basis.
However, as polynomial degree $3$ was sufficient in the lower-dimensional cases, we did not investigate this instability further.
We note that within these experiments the standard deviation of the resimulations was never larger than $0.1$.
It is worth noting that the results in very high dimensions were obtained by calculating only $10^6$ trajectories, while the reference values were computed using more than $24\times 10^6$ paths and state-of-the-art machine learning techniques, see \cite{becker2019deep}.
This underlines the potential of tensor train approaches for optimal stopping, especially in high dimensions.
In Figure \ref{fig:ranks_barplot} we analyze the average and the maximal rank of the value function and observe a decrease of the ranks in higher dimensions.
We note that from $d = 100$ onwards, a separate test run with the ranks fixed to $1$ yields comparable results, implying that a rank-$1$ solution can be close to optimal.
This means that the value function indeed has a simple structure in high dimensions.
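The reordering used for Table~\ref{tbl:experiment_max_call_sort} is a single sort per time point; it is admissible here because the max-call payoff with $\omega_j = 1$ is symmetric in the assets:

```python
import numpy as np

def sort_assets_descending(paths):
    """Sort the assets at every time point by decreasing magnitude;
    paths has shape (m, N+1, d)."""
    return np.sort(paths, axis=-1)[..., ::-1]
```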
\begin{table}[h!]
\centering
\begin{tabular}{ c | c c c c c c c | c}
$d$ & \multicolumn{7}{c}{p} & $V_\textrm{ref}$ \\
\hline
& $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & \\
\hline
$2$ & $13.66$ & $13.79$ & $13.81$ & $13.76$ & $13.80$ & $13.83$ & $13.78$ & $13.902$ \\
$3$ & $18.34$ & $18.30$ & $18.39$ & $18.48$ & $18.50$ & $18.55$ & $18.53$ & $18.69$ \\
$5$ & $25.66$ & $25.58$ & $25.70$ & $25.97$ & $25.75$ & $25.84$ & $25.93$ & $[26.115, 26.164]$\\
$10$ & $37.77$ & $37.65$ & $38.01$ & $38.12$ & $38.25$ & $38.27$ & $38.14$ & $[38.300, 38.367]$ \\
$20$ & $51.10$ & $51.34$ & $51.49$ & $51.64$ & $51.62$ & $51.63$ & $51.62$ & $[51.549, 51.803]$ \\
$30$ & $59.11$ & $59.30$ & $59.50$ & $59.63$ & $59.62$ & $59.63$ & $59.63$ & $[59.476, 59.872]$ \\
$50$ & $69.22$ & $69.23$ & $69.70$ & $69.56$ & $69.57$ & $69.51$ & $69.57$ & $ [69.560, 69.945]$ \\
$100$ & $83.14$ & $83.18$ & $83.29$ & $83.33$ & $83.37$ & $83.39$ & $83.16$ & $ [83.357, 83.862]$ \\
$200$ & $97.21$ & $97.07$ & $97.31$ & $97.43$ & $97.41$ & $97.46$ & $97.21$ & $[97.381, 97.889]$ \\
$500$ & $116.13$ & $116.07$ & $116.17$ & $116.31$ & $116.31$ & $116.36$ & $116.14$ & $[116.210, 116.685]$ \\
$750$ & $124.56$ & $124.56$ & $124.61$ & $124.72$ & $124.73$ & $124.78$ & $124.59$ \\
$1000$ & $130.65$ & $130.63$ & $130.66$ & $130.78$ & $130.83$ & $130.84$ & $130.67$ \\
\end{tabular}
\caption{Prices for the call option on the maximum of $d$ assets without reordering of the assets. Parameters: $n = 9$, $T=3$, $r=0.05$, $\delta = 0.1$, $\sigma=0.2$, $\rho=0$, $S_0^j=100$, $K=100$, $\omega_j=1$, $m=10^6$, $m_{\mathrm{resim}}=10^6$. Reference values are taken from \cite{andersen2004primal, becker2019deep}.}
\label{tbl:experiment_max_call}
\end{table}
\begin{table}[!ht]
\centering
\begin{tabular}{ c | c c c c c c c | c}
$d$ & \multicolumn{7}{c}{p} & $V_\textrm{ref}$ \\
\hline
& $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & \\
\hline
$2$ & $13.67$ & $13.76$ & $13.82$ & $11.63$ & $13.84$ & $13.84$ & $13.85$ & $13.902$ \\
$3$ & $18.39$ & $18.51$ & $18.60$ & $18.61$ & $18.61$ & $18.62$ & $18.62$ & $18.69$ \\
$5$ & $25.83$ & $26.01$ & $26.06$ & $26.07$ & $26.07$ & $26.07$ & $26.07$ & $[26.115, 26.164]$\\
$10$ & $38.08$ & $38.24$ & $38.29$ & $38.31$ & $38.31$ & $38.30$ & $38.30$ & $[38.300, 38.367]$ \\
$20$ & $51.48$ & $51.66$ & $51.71$ & $51.71$ & $51.71$ & $51.71$ & $51.71$ & $[51.549, 51.803]$ \\
$30$ & $59.50$ & $59.68$ & $59.71$ & $59.71$ & $59.72$ & $59.72$ & $59.72$ & $[59.476, 59.872]$ \\
$50$ & $69.58$ & $69.78$ & $69.80$ & $69.81$ & $69.81$ & $69.81$ & $69.81$ & $ [69.560, 69.945]$ \\
$100$ & $83.45$ & $83.65$ & $83.67$ & $83.67$ & $83.67$ & $83.66$ & $83.66$ & $ [83.357, 83.862]$ \\
$200$ & $97.56$ & $97.69$ & $97.70$ & $97.70$ & $97.70$ & $97.69$ & $97.69$ & $[97.381, 97.889]$ \\
$500$ & $116.45$ & $116.56$ & $116.56$ & $116.56$ & $116.56$ & $116.50$ & $116.52$ & $[116.210, 116.685]$ \\
$750$ & $124.91$ & $124.98$ & $124.99$ & $124.98$ & nan & nan & nan \\
$1000$ & $130.96$ & $131.06$ & $131.05$ & nan & nan & nan & nan \\
\end{tabular}
\caption{Prices for the call option on the maximum of $d$ assets with reordering of the assets. Parameters: $n = 9$, $T=3$, $r=0.05$, $\delta = 0.1$, $\sigma=0.2$, $\rho=0$, $S_0^j=100$, $K=100$, $\omega_j=1$, $m=10^6$, $m_{\mathrm{resim}}=10^6$. Reference values are taken from \cite{andersen2004primal, becker2019deep}.}
\label{tbl:experiment_max_call_sort}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{avgandmaxrank_both.pdf}
\caption{Average (blue) and maximal (black) rank of the value function for different dimensions.
For each dimension and bar, the results with the highest values are used.}
\label{fig:ranks_barplot}
\end{figure}
\FloatBarrier
\section*{Acknowledgements}
Christian Bayer gratefully acknowledges support by the DFG cluster of excellence MATH+, project AA4-2.
Leon Sallandt acknowledges support from the Research Training Group ``Differential Equation- and Data-driven Models in Life Sciences and Fluid Dynamics: An Interdisciplinary Research Training Group (DAEDALUS)'' (GRK 2433) funded by the German Research Foundation (DFG).
Philipp Trunschke acknowledges support by the Berlin International Graduate School in Model and Simulation based Research (BIMoS).
We thank Max Pfeffer and Reinhold Schneider for fruitful discussions.
\printbibliography
\end{document}
\section{Introduction}
Fatigue fracture is one of the most common causes of component failure. Still, the underlying physical phenomena, especially on the microscale \cite{bathias_fatigue_2010}, are not fully understood. Simulating fatigue fracture, e.\,g. via the Finite Element Method (FEM), is often very time-con\-su\-ming, as several hundred to millions of load cycles have to be simulated.
Generally, the fatigue life of a component until fracture can be divided into the crack initiation and the crack propagation stage. While crack initiation often accounts for the major part, in thin-walled specimens like fuselage shells, crack growth is crucial for the design process. The growth rate of the long, visible cracks determines the maintenance interval.
In particular, residual stresses created by the process of laser shock peening (LSP) can be used to deliberately influence the fatigue crack growth (FCG) rate of long cracks, as demonstrated for the aluminium alloy AA2024 \cite{hombergsmeier2014fatigue,tan2004laser}. The application of laser shock peening aims at the introduction of compressive residual stresses in regions susceptible to fatigue, where the process provides a relatively high penetration depth as well as surface quality \cite{peyre1996laser}. These compressive residual stresses interact with the applied stresses of the fatigue load cycles and reduce the fatigue crack driving quantity. However, compressive residual stresses are always accompanied by tensile residual stresses to satisfy stress equilibrium. While compressive residual stresses are expected to retard fatigue crack growth, tensile residual stresses may lead to increased fatigue crack growth rates and reduce the fatigue life of the component, as shown by \cite{keller_experimentally_2019}. Thus, the efficient application of residual stress modification techniques requires a precise prediction of the residual stress field to determine the FCG rate.
In order to estimate the fatigue life of residual stress affected components, mainly two types of models are used. Empirical concepts based on Wöhler curves mostly evaluate the fatigue life until crack \textit{initiation} and often treat residual stresses as mean stresses \cite{landersheim_analyse_2009}. For the remaining fatigue life during crack \textit{growth}, models based on fracture mechanics are often used. In this context, residual stresses can be applied with the eigenstrain approach. The residual stress field is applied with the help of a fictitious temperature field \cite{benedetti_numerical_2010,keller_experimentally_2019} and considered with effective stress intensity factors in the fatigue crack growth computations.
In contrast to the previously mentioned approaches, in this contribution, an FEM framework is applied which covers both crack initiation and growth -- the phase-field method. However, the focus of this paper lies on the fatigue crack \textit{growth}.
The phase-field method has become a popular tool to simulate fracture phenomena because of its capability to treat arbitrary crack paths in a straight-forward way. This is possible due to a second field variable which describes the crack topology, making mesh alterations due to crack growth redundant. Originally formulated for static brittle fracture \cite{miehe_thermodynamically_2010} by simply regularising the Griffith criterion \cite{griffith_phenomena_1921} for crack growth, the phase-field method has now been applied to a large variety of materials and phenomena like e.\,g. ductile fracture \cite{miehe_phase_2016,ambati_phasefield_2015}.
However, the phase-field modelling of \textit{fatigue} fracture has been addressed only recently. While some models reduce the crack resistance of the material as a result of its cyclic degradation \cite{carrara_framework_2019,seiler_efficient_2020,mesgarnejad_phasefield_2018}, others increase the crack driving force \cite{schreiber_phase_2020,haveroth_nonisothermal_2020}. There are first approaches to cover plastic \cite{ulloa_phasefield_2019} and viscous \cite{loew_fatigue_2020} materials. To tackle the crucial problem of computational time when simulating repetitive loading, Loew et al. \cite{loew_accelerating_2020} and Schreiber et al. \cite{schreiber_phase_2020} use cycle jump techniques. Seiler et al. \cite{seiler_efficient_2020} approached the problem by incorporating a classic fatigue concept -- the local strain approach (LSA) -- into the model, enabling the simulation of several load cycles within only one increment. However, this model has been studied only qualitatively.
In this contribution, the phase-field fatigue model of \cite{seiler_efficient_2020} is calibrated and validated using FCG experiments on aluminium AA2024. Moreover, a straightforward strategy to include residual stresses in the model is presented. Additional FCG experiments are conducted with specimens in which residual stresses are introduced deliberately with LSP. These residual stresses are analysed with the incremental hole drilling method and serve as an initial state for the fatigue simulation.
The paper is structured as follows: In Section~\ref{sec:model}, the model formulation is recapitulated, including the underlying phase-field equations for brittle fracture and its extension to fatigue, and the incorporation of residual stresses is explained. Section~\ref{sec:Exp} deals with the LSP experiments for the creation of residual stresses, the experimental determination of the resulting residual stress state, as well as the FCG experiments. Section~\ref{sec:Sim} contains the numerical predictions of the proposed model including the model parameter calibration and the comparison to experimental results. The conclusion follows in Section~\ref{sec:Conc}.
\section{Model framework}
\label{sec:model}
The model used in this publication is extensively described and qualitatively studied in \cite{seiler_efficient_2020}. Therefore, the model formulation is only outlined briefly here, starting with the basis of the framework, the phase-field method for brittle fracture, as well as its extension to fatigue. Then, going one step further, the incorporation of residual stresses in the model is described in detail.
\subsection{Phase-field method for fracture}
\label{sec:model_PF}
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/phase-field/phase-field.eps_tex}
\caption{Fractured domain $\Omega$ with crack surface $\Gamma$. \textbf{(a)} Sharp representation of crack topology. \textbf{(b)} Regularised representation: The crack is described by the phase-field variable $d=1$, while $d=0$ represents undamaged material. The crack is regularised over the length scale $\ell$.
\label{fig:Model_PF}}
\end{figure}
\begin{figure*}
\def\svgwidth{\textwidth}
\input{pics/LSA/LSA.eps_tex}
\caption{Scheme of local strain approach (LSA), with which the fatigue life variable $D$ is determined at every material point.
}
\label{fig:LSA}
\end{figure*}
The phase-field method for brittle fracture is based on the Griffith criterion \cite{griffith_phenomena_1921} for crack growth, which requires the energy release rate to be equal to the critical energy release rate or fracture toughness $\mathcal{G}_\mathrm{c}$. This criterion was brought to a variational form in \cite{francfort_revisiting_1998} and regularised for convenient numerical implementation in \cite{bourdin_variational_2008}.
During the regularisation, an additional field variable $d\in[0,1]$ is introduced. The diffuse description of the crack smoothly bridges the entirely intact ($d=0$) and totally broken ($d=1$) state. In this way, the crack topology can be described without any mesh modifications, allowing for a straightforward modelling of arbitrary crack paths. See Fig.~\ref{fig:Model_PF} for a graphical explanation of the regularisation.
Using the regularisation length scale $\ell$, the regularised energy functional $\Pi_\ell$ in the domain $\Omega$ can be written as
\begin{equation}
\Pi_\ell = \int_{\Omega} g(d)\,\psi^\mathrm{e}(\boldsymbol{\varepsilon}) \, \mathrm{d} V + \int_{\Omega} \mathcal{G}_\mathrm{c}{\frac{1}{2\ell}(d^2+\ell^2|\nabla d |^2)}\,\mathrm{d}V.
\end{equation}
Here, the small strain linear elastic setting is adopted. The elastic strain energy density is
\begin{equation}
\psi^\mathrm{e}=\frac{1}{2}\lambda\,\mathrm{tr}^2(\boldsymbol{\varepsilon})+\mu\,\mathrm{tr}(\boldsymbol{\varepsilon}^2) = \frac{1}{2}\,\boldsymbol{\varepsilon}: \mathbb{C} : \boldsymbol{\varepsilon}
\end{equation}
with the Lamé constants $\lambda$ and $\mu$, the elasticity tensor $\mathbb{C}$ and the strain
\begin{equation}
\boldsymbol{\varepsilon} = \frac{1}{2}\left(\nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^\top\right).
\end{equation}
The degradation function $g(d)=(1-d)^2$ models the loss of stiffness due to the developing crack, coupling displacement field $\boldsymbol{u}$ and phase-field $d$. Consequently, the stress is given by
\begin{equation}
\label{eq:stress}
\boldsymbol{\sigma}=g(d)\frac{\partial\psi^\mathrm{e}}{\partial \boldsymbol{\varepsilon}} = g(d)\, \mathbb{C}:\boldsymbol{\varepsilon}.
\end{equation}
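As a sketch, the degraded stress of the isotropic model can be evaluated directly from the strain tensor. This is illustrative NumPy code, not the FEM implementation used in this work.

```python
import numpy as np

def degraded_stress(eps, d, lam, mu):
    """sigma = (1 - d)^2 (lam tr(eps) I + 2 mu eps) for isotropic
    small-strain elasticity degraded by the phase field d."""
    return (1.0 - d) ** 2 * (lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps)
```

For $d=0$ the full stiffness is recovered, while $d=1$ yields a stress-free, fully broken material point.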
The governing equations of the coupled problem obtained from the variation $\delta\Pi_\ell=0$
\begin{equation} \label{eq:goveq}
\boldsymbol{0}=\mathrm{div}\,\boldsymbol{\sigma} \qquad \qquad
\mathcal{G}_\mathrm{c}\left(d-\ell^2\Delta d\right) = (1-d)2\ell\underbrace{\psi^\mathrm{e}(\boldsymbol{\varepsilon})}_\mathcal{H}
\end{equation}
are subject to the boundary conditions $\boldsymbol{n}\cdot\boldsymbol{\sigma}=\tilde{\boldsymbol{t}}$, $\boldsymbol{u}=\tilde{\boldsymbol{u}}$ and $\boldsymbol{n}\cdot\nabla d=0$, with $\tilde{\boldsymbol{t}}$ and $\tilde{\boldsymbol{u}}$ being the prescribed tractions and displacements on the corresponding boundaries, respectively.
To ensure crack irreversibility, in Eq.~(\ref{eq:goveq}), the crack driving force $\mathcal{H}$ in each point $\boldsymbol{x}$ is set to its temporal maximum \cite{miehe_phase_2016}
\begin{equation} \label{eq:irr}
\mathcal{H}(\boldsymbol{x},t) = \max_{s\in[t_0;t]} \psi^\mathrm{e}(\boldsymbol{\varepsilon}(\boldsymbol{x},s)).
\end{equation}
\subsection{Extension to fatigue}
\label{sec:model_Fat}
\subsubsection*{Fatigue degradation}
In order to incorporate fatigue into the phase-field framework, the fracture toughness $\mathcal{G}_\mathrm{c}$ is reduced as the material degradation due to repetitive stressing proceeds. This process is described by introducing a local lifetime variable $D$. An additional scalar fatigue degradation function \mbox{$\alpha(D):[0,1]\rightarrow[\alpha_0,1]$} with \mbox{$\alpha_0>0$} is introduced, which lowers the fracture toughness $\mathcal{G}_\mathrm{c}$ locally. The energy functional then reads
\begin{align}\label{eq:Pi_l}
\begin{split}
\Pi_\ell =& \int_{\Omega} g(d)\,\psi^\mathrm{e}(\boldsymbol{\varepsilon}) \, \mathrm{d} V \\
& + \int_{\Omega} \alpha(D) \,\mathcal{G}_\mathrm{c}\frac{1}{2\ell}(d^2+\ell^2|\nabla d|^2)\,\mathrm{d}V,
\end{split}
\end{align}
which leads to the modified phase-field evolution equation
\begin{equation}\label{eq:ev_PF}
\mathcal{G}_\mathrm{c}\left( \alpha\,d-\nabla\alpha\cdot\ell^2\nabla d-\alpha\,\ell^2\Delta d\right)=(1-d)\,\mathcal{H}\,2\ell
\end{equation}
with the dependency $\alpha(D)$ dropped for brevity.
The lifetime variable $D\in[0,1]$ is a history variable that is accumulated strictly locally. For $D=0$, the material has experienced no cyclic loads and therefore offers its full fracture toughness. Consequently, $\alpha(0)=1$ must hold. For $D=1$, the fracture toughness is reduced to the fraction $\alpha_0$ of its initial value. Therefore, the fatigue degradation function
\begin{equation}
\label{eq:alpha}
\alpha(D)=(1-\alpha_0)(1-D)^\xi+\alpha_0
\end{equation}
with the parameters $\alpha_0$ and $\xi$ is used. For a study of the influence of the model parameters $\alpha_0$ and $\xi$, see Sec.~\ref{sec:Sim_FCG} and \cite{seiler_efficient_2020}.
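The function $\alpha(D)$ is straightforward to evaluate; the default parameter values in the sketch below are illustrative only, not the calibrated ones.

```python
def fatigue_degradation(D, alpha0=0.05, xi=2.0):
    """alpha(D) = (1 - alpha0) (1 - D)^xi + alpha0, interpolating
    monotonically between alpha(0) = 1 and alpha(1) = alpha0."""
    return (1.0 - alpha0) * (1.0 - D) ** xi + alpha0
```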
\subsubsection*{Local strain approach}
The computation of the lifetime variable $D$ follows the LSA \cite{seeger_grundlagen_1996}. This method is generally used for fatigue life calculations of components, but is implemented here in the material routine of the FEM framework and therefore executed at each integration point.
The computation scheme is illustrated in Fig.~\ref{fig:LSA}.
At first, the stresses and strains from a linear elastic simulation are revaluated using the Neuber rule \cite{neuber_theory_1961}. The von Mises equivalent stress $\sigma$ is projected to the cyclic stress-strain curve (CSSC) yielding a virtual, revaluated stress-strain pair $(\sigma^*,\varepsilon^*)$ by assuming a constant strain energy $\frac{1}{2}\,\sigma\varepsilon = \frac{1}{2}\,\sigma^*\varepsilon^*$. The CSSC is thereby described by the Ramberg-Osgood equation \cite{ramberg_description_1943}
\begin{equation}
\varepsilon^* = \frac{\sigma^*}{E} + \left( \frac{\sigma^*}{K'} \right)^{1/n'}
\end{equation}
with the cyclic parameters $K'$ and $n'$ and Young's modulus $E$. The CSSC can be determined from standardised cyclic experiments such as the incremental step test. In this way, the complete virtual stress-strain path can be derived from the loading sequence. This stress-strain path is divided into hysteresis loops. For each loop $i$, the damage parameter by Smith, Watson and Topper \cite{smith_stressstrain_1970}
\begin{equation}
\label{eq:PSWT}
P_{\mathrm{SWT},i} = \sqrt{(\sigma_{\mathrm{a},i}^*+\sigma_{\mathrm{m},i}^*)\varepsilon_{\mathrm{a},i}^*E}
\end{equation}
can be determined from the stress and strain amplitudes $\sigma_\mathrm{a}^*$ and $\varepsilon_\mathrm{a}^*$ and the mean stress $\sigma_\mathrm{m}^*$. It quantifies the damaging effect of the loop. Only the tensile range contributes to $P_\mathrm{SWT}$, see~Fig.~\ref{fig:LSA}. From strain Wöhler curves (SWC) -- also generated with standardised experiments -- the matching virtual load cycle number $N_i$ for $P_{\mathrm{SWT},i}$ can be read. Finally, the fatigue life contributions of a single hysteresis loop $\Delta D_i$ and of the full loading path are
\begin{equation}
\label{eq:Miner}
\Delta D_i = 1/N_i \quad\text{and}\quad D=\sum_i \Delta D_i.
\end{equation}
Note that the revaluated stresses and strains $\sigma^*$ and $\varepsilon^*$ are used solely for the damage calculation and do not enter the coupled problem in any other way.
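The revaluation chain described above (Neuber rule on the Ramberg-Osgood CSSC, SWT damage parameter, cycle number from the SWC, linear accumulation) can be condensed into a short 1D sketch. This is a simplified illustration using the AA2024-T3 parameters listed later in Tab.~\ref{tab:matpar}: the strain amplitude is taken directly on the CSSC instead of on doubled hysteresis branches, and the function names and bisection brackets are illustrative, not the paper's implementation.

```python
import math

# AA2024-T3 parameters (cf. Tab. 1 of the paper), stresses in MPa
E = 74600.0                  # Young's modulus
Kp, n_p = 453.0, 0.201       # cyclic parameters K', n' of the CSSC
sf, ef = 314.0, 0.162        # SWC coefficients sigma'_f, eps'_f
b, c = -0.091, -0.452        # Basquin and Coffin-Manson exponents

def ramberg_osgood(s):
    """Strain on the cyclic stress-strain curve for a stress s >= 0."""
    return s / E + (s / Kp) ** (1.0 / n_p)

def bisect(f, lo, hi, it=200):
    """Plain bisection; f(lo) and f(hi) must bracket a root."""
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def neuber(sigma_el):
    """Revaluated stress s* with s* eps*(s*) = sigma_el^2 / E (Neuber rule)."""
    if abs(sigma_el) < 1e-9:
        return 0.0
    t = sigma_el ** 2 / E
    s = bisect(lambda x: x * ramberg_osgood(x) - t, 1e-9, abs(sigma_el) + 1.0)
    return math.copysign(s, sigma_el)

def delta_D(smax_el, smin_el):
    """Lifetime contribution 1/N of one loop given its elastic stress extremes."""
    smax, smin = neuber(smax_el), neuber(smin_el)
    s_a, s_m = 0.5 * (smax - smin), 0.5 * (smax + smin)
    if s_a + s_m <= 0.0:          # only the tensile range is damaging
        return 0.0
    e_a = ramberg_osgood(s_a)     # sketch: amplitude taken on the CSSC
    p_swt = math.sqrt((s_a + s_m) * e_a * E)
    # invert the SWT-parametrised SWC:
    # P(N) = sqrt(sf^2 (2N)^{2b} + sf ef E (2N)^{b+c}), monotone in N
    swc = lambda n: math.sqrt(sf**2 * (2*n)**(2*b) + sf*ef*E * (2*n)**(b + c))
    N = bisect(lambda n: swc(n) - p_swt, 1.0, 1.0e12)
    return 1.0 / N
```

For a pulsating tension loop the revaluated peak stress lies below the elastic one, and the lifetime contribution grows with the load level, as expected from Eqs.~(\ref{eq:PSWT}) and (\ref{eq:Miner}).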
In conclusion, the integration of fatigue in the phase-field model with the LSA is beneficial for the computational cost in three ways:
\begin{enumerate}
\item Local cyclic plasticity is covered by the Neuber rule, so no elastic-plastic material model is needed.
\item Since only amplitude and mean values of stress and strain enter the calculation of $D$, the loading path does not have to be resolved in the simulation, instead the reversal points are sufficient.
\item In case of constant load amplitudes and small crack growth rates, several load cycles can be simulated with only one increment, since the lifetime contributions are accumulated linearly according to Eq.~\ref{eq:Miner}. Especially for high cycle fatigue, this can save immense computational time.
\end{enumerate}
\subsection{Incorporation of residual stresses}
\label{sec:model_RS}
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/RS/RS.eps_tex}
\caption{\textbf{(a)} Formation of residual stresses $\sigma_0$ through plastic deformation. Remaining strain after unloading is $\varepsilon_0=\varepsilon_\mathrm{p}+\varepsilon_{0,\mathrm{el}}$.
\textbf{(b)} Material law for cyclic simulation. Initial state $(\sigma_0,\varepsilon_{0,\mathrm{el}})$ undergoes cyclic load with $(\sigma_\mathrm{c},\varepsilon_\mathrm{c})$. Crack driving force $\mathcal{H}$ is strain energy density of total stress-strain state $(\sigma,\varepsilon)$.
Schemes of 1D, undamaged ($d=0$) case.}
\label{fig:RS}
\end{figure}
Residual stresses result from plastic deformations which occur during the production process, e.\,g. due to forming, tempering or surface treatment. The stress remaining after unloading is the residual stress $\boldsymbol{\sigma}_0$. The associated strain
\begin{equation}
\boldsymbol{\varepsilon}_0 = \boldsymbol{\varepsilon}_{0,\mathrm{el}} + \boldsymbol{\varepsilon}_\mathrm{p}
\end{equation}
consists of an elastic part $\boldsymbol{\varepsilon}_{0,\mathrm{el}}$ and a plastic part $\boldsymbol{\varepsilon}_\mathrm{p}$, see Fig.~\ref{fig:RS}(a). While the total residual strain $\boldsymbol{\varepsilon}_0$ is geometrically compatible, this does not apply to its components $\boldsymbol{\varepsilon}_\mathrm{p}$ and $\boldsymbol{\varepsilon}_{0,\mathrm{el}}$.
Only the elastic part $\boldsymbol{\varepsilon}_{0,\mathrm{el}}$ of the residual strain $\boldsymbol{\varepsilon}_0$ is relevant for the fatigue life simulation. The plastic forming process is treated as completed and is not modelled. The plastic part $\boldsymbol{\varepsilon}_\mathrm{p}$ is not of further interest in the fatigue crack simulation, because it is assumed that the yielding process does not change the crack resistance properties of the material \cite{zerbst2016fatigue}. All material points are assigned the same material parameters initially, regardless of their (plastic) history.
Therefore the total stress-strain state $(\boldsymbol{\sigma},\boldsymbol{\varepsilon})$ in the model is the sum of the initial state $(\boldsymbol{\sigma}_0,\boldsymbol{\varepsilon}_{0,\mathrm{el}})$ and the stress-strain state caused by the cyclic loading $(\boldsymbol{\sigma}_\mathrm{c},\boldsymbol{\varepsilon}_\mathrm{c})$.
Hence, the total strain is
\begin{equation}
\boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}_{0,\mathrm{el}} + \boldsymbol{\varepsilon}_\mathrm{c}
\end{equation}
as displayed in Fig.~\ref{fig:RS}(b). Thereby, the strain $\boldsymbol{\varepsilon}_\mathrm{c}$ is
\begin{equation}
\boldsymbol{\varepsilon}_\mathrm{c} = \frac{1}{2}\left(\nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^\top\right).
\end{equation}
The regularised energy functional is, analogously to \linebreak Eq.~(\ref{eq:Pi_l}),
\begin{align}\label{eq:Pi_l_res}
\begin{split}
\Pi_\ell =& \int_{\Omega} g(d)\,\psi^\mathrm{e}(\boldsymbol{\varepsilon}_{0,\mathrm{el}} + \boldsymbol{\varepsilon}_\mathrm{c}) \, \mathrm{d} V \\
& + \int_{\Omega} \alpha(D) \,\mathcal{G}_\mathrm{c}\frac{1}{2\ell}(d^2+\ell^2|\nabla d|^2)\,\mathrm{d}V.
\end{split}
\end{align}
Consequentially, the stress is
\begin{equation}
\boldsymbol{\sigma} = g(d) \left(\boldsymbol{\sigma}_0 + \boldsymbol{\sigma}_\mathrm{c} \right) = g(d) \left(\boldsymbol{\sigma}_0 + \mathbb{C}:\boldsymbol{\varepsilon}_\mathrm{c} \right) \label{eq:stress_RS}
\end{equation}
and the evolution equation remains
\begin{equation}\label{eq:ev_PF_res}
\mathcal{G}_\mathrm{c}\left( \alpha\,d-\nabla\alpha\cdot\ell^2\nabla d-\alpha\,\ell^2\Delta d\right)=(1-d)\,\mathcal{H}\,2\ell.
\end{equation}
The crack driving force is the temporal maximum of the strain energy density of the total stress-strain state
\begin{align}
\mathcal{H}(t) &= \max_{s\in[t_0,t]} \left(\psi^\mathrm{e}(\boldsymbol{\varepsilon}_{0,\mathrm{el}} + \boldsymbol{\varepsilon}_\mathrm{c}(s)) \right)\\
&= \max_{s\in[t_0,t]}\left( \frac{1}{2} ( \boldsymbol{\varepsilon}_{0,\mathrm{el}} + \boldsymbol{\varepsilon}_\mathrm{c}(s)) : \mathbb{C} : (\boldsymbol{\varepsilon}_{0,\mathrm{el}} + \boldsymbol{\varepsilon}_\mathrm{c}(s)) \right).
\end{align}
The initial state at the time $t_0$ is hereby $\boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}_{0,\mathrm{el}}$, $\boldsymbol{\sigma}=g(d)\,\boldsymbol{\sigma}_0$.
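For intuition, the history-maximum crack driving force with a residual strain can be sketched in a scalar 1D setting (undamaged case $d=0$, names illustrative):

```python
def crack_driving_force(eps_c_history, eps0_el, E=74600.0):
    """1D sketch of H = max_s psi_e(eps0_el + eps_c(s)), cf. the equations above.

    eps_c_history: sequence of cyclic strains eps_c(s) up to the current time.
    eps0_el: elastic residual strain. A tensile residual strain raises the
    peak strain energy density and hence the crack driving force; a
    compressive one lowers it.
    """
    return max(0.5 * E * (eps0_el + ec) ** 2 for ec in eps_c_history)
```

Evaluating the same tensile load history with a tensile versus a compressive residual strain directly reproduces the crack-accelerating and crack-inhibiting tendencies discussed below.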
The superposition of the residual stress state and the stress state caused by external loading is also common in fracture-mechanical fatigue computations based on stress intensity factors \cite{larue_predicting_2007}. Note that since $\boldsymbol{\varepsilon}_{0,\mathrm{el}}$ is not necessarily geometrically compatible, the total strain $\boldsymbol{\varepsilon}$ is not, either.
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/mat_RS/mat_RS.eps_tex}
\caption{Comparison of virtual stress-strain path with and without residual stress $\sigma_0$. Constitutive stress-strain behaviour (\protect\tikz[baseline=-0.5ex]{\protect\draw[thick,dashed] (0,0) -- (0.5,0) ;}) for 1D, undamaged ($d=0$) case. For the computation of the lifetime variable $D$, the virtual stress-strain path (\protect\tikz[baseline=-0.5ex]{\protect\draw[thick,color=darkred] (0,0) -- (0.5,0) ;}) is used, which is determined from the CSSC (\protect\tikz[baseline=-0.5ex]{\protect\draw[thick,color=darkgreen] (0,0) -- (0.5,0) ;}). Mainly, the residual stress shifts the virtual mean stress $\sigma_\mathrm{m}^*$, which controls the damage parameter $P_\mathrm{SWT}$ (schematically).}
\label{fig:mat_RS}
\end{figure}
It is assumed that the plastic strains do not enter the crack driving force. Instead the crack is driven by the elastic strain energy density, which again only depends on the total elastic strain. This assumption is appropriate for typical HCF and higher LCF loads which do not exceed the static yield limit.
Fig.~\ref{fig:mat_RS} depicts how the residual stresses affect the LSA procedure: Due to the initial state, the Neuber rule yields a shifted stress-strain path. Although the stress and strain amplitudes stay the same, the damage parameter $P_\mathrm{SWT}$ is affected by the altered mean stress $\sigma_\mathrm{m}^*$ according to Eq.~\ref{eq:PSWT}. In this way, tensile residual stresses increase the damage parameter $P_\mathrm{SWT}$, while compressive residual stresses lead to a decrease.
In summary, residual stresses influence the crack development in two ways:
\begin{enumerate}
\item They change the peak stress-strain state of a load cycle which is decisive for the crack development in a cyclic load. In this way, tensile residual stresses increase, compressive residual stresses decrease the crack driving force $\mathcal{H}$ which depends on the strain energy density $\psi^\mathrm{e}(\boldsymbol{\varepsilon})$.
\item They shift the virtual stress-strain path and therefore influence the damaging effect and with that the lifetime variable $D$. Compressive residual stresses reduce $D$, while tensile residual stresses increase it.
\end{enumerate}
The initial residual stress tensor $\boldsymbol{\sigma}_0$ remains unchanged throughout the simulation. Residual stress redistribution due to wide-ranging plasticising is therefore not covered in the model, since this is only relevant for very low cycle fatigue with macroscopic plastic deformations \cite{benedetti_numerical_2010}. However, as cracks propagate, the stress state is rearranged due to degradation of the total stress (Eq.~\ref{eq:stress_RS}) which also affects the residual stresses. In this way, the residual stress state redistributes compared to the initial state $\boldsymbol{\sigma}_0$ due to FCG.
\section{Experiments}
\label{sec:Exp}
The fatigue crack growth influenced by residual stresses is investigated with compact tension (C(T)) specimens, in which significant residual stresses are introduced by LSP. The material under investigation is the aluminium alloy AA2024 in T3 heat treatment condition with 2\,mm and 4.8\,mm thickness, a representative aluminium alloy used in the aircraft industry for fuselage structures \cite{dursun2014recent}. A previous investigation \cite{keller_experimentally_2019} indicates that LSP allows the introduction of relatively high and deep residual stresses, while the microstructural changes in AA2024 do not influence the FCG behaviour significantly. Thus, differences in the FCG rates between untreated and laser-peened material are mainly linked to the effect of residual stresses. In the following, the LSP treatment, the experimental residual stress determination and the determination of the fatigue crack growth rate are described. The experimental data for specimens with a thickness of 4.8\,mm are taken from \cite{kallien2019effect} and \cite{keller_experimentally_2019}.
\subsection{Laser shock peening}
\label{sec:Exp_LSP}
LSP uses short-time high-energy laser pulses to generate plasma consisting of near-surface material, see Fig.~\ref{fig:LSP_Schema}. The extension of the plasma generates mechanical shock waves, which cause local plastic deformation of the subsurface material, see Fig.~\ref{fig:LSP_Schema}(b). After relaxation of the highly dynamic process, these local plastic deformations lead to a residual stress distribution, where compressive residual stresses remain in the subsurface region surrounded by balancing tensile residual stresses, Fig.~\ref{fig:LSP_Schema}(c). The penetration depth of these compressive residual stresses is in the millimetre range. The efficiency of the process can be increased by the use of a confinement layer; in this study, a laminar water layer serves as the confinement medium. The LSP treatment is conducted with an Nd:YAG laser. 5\,J laser pulses with a duration of 20\,ns (full width at half maximum) and a $3 \,\mathrm{mm} \times 3\,\mathrm{mm}$ square focus are used. The LSP treatment is performed without pulse overlap in five columns on the sheet material, as shown in Fig.~\ref{fig:RS_Specimen}(a). The laser pulse sequence of a rectangular peening patch with $15\, \mathrm{mm} \times 80\,\mathrm{mm}$ is applied twice on each side of the sheet material: the first side is treated twice before the second side is treated twice as well.
\begin{figure}
\includegraphics[width=\linewidth]{pics/LSP_Scheme/LSP_Schema.pdf}
\caption{\textbf{(a)} LSP laser system containing the pulsed Nd:YAG laser and the clamping system fixing the specimen. Laser pulses are used to vaporise near-surface material to initiate mechanical shock waves \textbf{(b)}. These shock waves cause local plastic deformations, which lead to a residual stress distribution after the process \textbf{(c)}. A water layer increases the efficiency of the process by confining the plasma.}
\label{fig:LSP_Schema}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{pics/LSP_Scheme/RS_Specimen.pdf}
\caption{Schematic of the specimens used to evaluate the LSP-induced residual stress state \textbf{(a)}. 5\,J laser pulses with a $3 \, \mathrm{mm} \times 3 \, \mathrm{mm}$ square laser focus are used to treat a region with size $15 \, \mathrm{mm} \times 80 \, \mathrm{mm}$. The laser pulse sequence consists of five columns in which the laser pulses are shot without overlap. The advancing direction of each column is kept constant. The sequence is applied twice at both sides of the sheet material. Residual stresses were experimentally determined by incremental hole drilling \textbf{(b)}, where the same depth profile of residual stresses is assumed below the peening patch.}
\label{fig:RS_Specimen}
\end{figure}
\subsection{Experimental residual stress determination}
\label{sec:Exp_RS}
The incremental hole drilling system PRISM from \linebreak Stresstech is used to determine the depth profile of the residual stresses. The hole drilling system uses electronic speckle pattern interferometry to determine material surface deformation after each increment of an incrementally drilled hole. These surface deformations are correlated with the residual stress at the respective increment depth via the integral method \cite{schajer2005full}. The interested reader is referred to \cite{ponslet2003residual,ponslet2003residual_PartIII,steinzig2003residual} for a detailed explanation of the incremental hole drilling method using electronic speckle pattern interferometry.
A driller with 2\,mm diameter is used to determine the residual stresses up to a depth of 1\,mm. This hole depth allows for the experimental determination of the through-thickness residual stress profile within the specimens with 2\,mm thickness, when the residual stresses are determined from both material sides. As it is recommended that the material thickness is four times larger than the hole diameter \cite{ponslet2003residual_PartIII}, the residual stress determinations of AA2024 with 2\,mm thickness were repeated with a hole diameter of 0.5\,mm as well. The residual stresses determined with the hole diameters of 0.5\,mm and 2\,mm match. Therefore, we focus on the residual stresses determined with the hole diameter of 2\,mm in the following. A relatively small increment size is used near the material surface, where relatively large residual stress gradients are expected. Residual stresses were determined within the area of the peening patch, see Fig.~\ref{fig:RS_Specimen}(b). The depth profile of residual stresses is assumed to be the same below the whole peening patch. Thus, the average value and the standard deviation of at least eight experimentally determined residual stress profiles for both material sides are depicted in the following.
\subsection{Fatigue crack growth}
\label{sec:Exp_FCG}
C(T) specimens with a width of 100\,mm are used to determine the FCG rate according to the ASTM~E647 standard. The FCG tests were performed with a servo-hydraulic testing machine from Schenk/Instron and a 25\,kN load cell. The specimen geometry is displayed in Fig.~\ref{fig:CT_Specimen}. A pre-crack of 5\,mm is introduced, extending the initial crack length to 25\,mm (20\,mm notch and 5\,mm pre-crack). Afterwards, the peening patch, as described in Sec.~\ref{sec:Exp_LSP}, is applied to the LSP-treated specimens 10\,mm in front of the initial crack front. The applied load ratio $R=0.1$ is kept constant during the FCG test. The maximum applied force of the fatigue load cycles is 1.65\,kN and 4.0\,kN for specimens with 2\,mm and 4.8\,mm thickness, respectively. All FCG experiments are repeated at least twice.
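The stress intensity factor range for this geometry follows the standard ASTM~E647 compliance polynomial for C(T) specimens. A sketch of its evaluation, with the 2\,mm specimen dimensions as illustrative defaults and the force range taken as $0.9\,\tilde{F}_\mathrm{max}$ for $R=0.1$:

```python
import math

def delta_K_CT(dP_N, a_mm, W_mm=100.0, B_mm=2.0):
    """Stress intensity factor range for a C(T) specimen (ASTM E647).

    dP_N: force range in N, a_mm: crack length, W_mm: width, B_mm: thickness.
    Returns dK in MPa*sqrt(m); the polynomial is valid for a/W >= 0.2.
    """
    al = a_mm / W_mm
    f = ((2.0 + al) / (1.0 - al) ** 1.5) * (
        0.886 + 4.64 * al - 13.32 * al**2 + 14.72 * al**3 - 5.6 * al**4)
    # unit conversion: N / (mm * sqrt(mm)) = MPa*sqrt(mm) = MPa*sqrt(m)/sqrt(1000)
    return dP_N / (B_mm * math.sqrt(W_mm)) * f / math.sqrt(1000.0)
```

For the 2\,mm specimen ($\Delta P = 0.9 \times 1650$\,N) at the initial crack length of 25\,mm this yields a $\Delta K$ of roughly 12\,MPa$\sqrt{\mathrm{m}}$, i.\,e. within the Paris regime, and $\Delta K$ grows with the crack length as expected.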
\begin{figure}
\includegraphics[width=\linewidth]{pics/LSP_Scheme/CT_Specimen.pdf}
\caption{C(T) specimen with 100\,mm width according to the ASTM-E647 standard. The specimen is pre-cracked by 5\,mm and subsequently peened with 10\,mm distance to the crack front of the pre-crack. C(T) specimens are treated twice from both surfaces of the sheet material. The introduction of compressive residual stresses below the peening patch leads to balancing tensile stresses in the surrounding material.}
\label{fig:CT_Specimen}
\end{figure}
\subsection{Experimental results}
\label{sec:Exp_result}
\subsubsection*{Residual stresses}
The LSP treatment leads to the introduction of significant compressive residual stresses over the thickness for the investigated AA2024, see Fig.~\ref{fig:RS_2mm} for 2\,mm thickness and Fig.~\ref{fig:RS_4.8mm} for 4.8\,mm thickness, respectively.
Since the residual stresses are only determined up to a depth of 1\,mm from each side with the incremental hole drilling technique, numerical simulations via an LSP process simulation, as described elsewhere \cite{keller_crack_2019,keller_experimentally_2019}, are used to estimate the residual stress profile along the entire material cross-section in the $z$ direction.
The residual stress components $\sigma_{xx}$ and $\sigma_{yy}$ in the surface plane differ, whereby the magnitude of the component perpendicular to the crack growth direction, $\sigma_{yy}$, is more pronounced. This difference of the residual stress components might be attributed to geometrical effects, such as the rectangular peening patch geometry, as experiments with a square peening patch do not indicate this significant difference between $\sigma_{xx}$ and $\sigma_{yy}$ in aluminium alloy AA2024 \cite{keller_experimentally_2019}.
The residual stress magnitude and gradient differ significantly depending on the material thickness. The LSP-treated aluminium alloy with 2\,mm thickness contains a lower maximum compressive residual stress of approximately 160\,MPa compared to the compressive maximum of approx\-imate\-ly 280\,MPa in the 4.8\,mm thick material. While tensile residual stresses occur at mid-thickness for the thicker material, the residual stress component $\sigma_{yy}$ is completely compressive along the $z$ direction for the 2\,mm thick material.
The resulting residual stress field depends on the order of the applied pulse sequences on the two sides. These differences are more pronounced for the thinner material. The non-symmetric residual stress profile is assumed to result from the interaction of mechanical shock waves initiated on the side peened second, at $z=2 \, \mathrm{mm}$, with residual stresses already introduced by the LSP treatment of the first side at $z=0 \, \mathrm{mm}$. These interactions result in increased residual stresses between 0.4\,mm and 0.9\,mm depth.
It has to be noted that the material surrounding the peening patch in the $x$-$y$ plane contains balancing tensile residual stresses. A detailed analysis of the overall residual stress field of the C(T) specimen after the LSP treatment of AA2024 with 4.8\,mm thickness can be found in \cite{keller_experimentally_2019}.
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/ResStress_2mm/ResStress_2mm.eps_tex}
\caption{Residual stress profile of the base material (BM) and after LSP in AA2024 with 2\,mm thickness. The average value and standard deviation are depicted. At least seven depth profiles were experimentally investigated. At first, LSP was applied at 0\,mm and secondly from the other side of the specimen at 2\,mm. The LSP treatment leads to compressive residual stresses along the entire material thickness.}
\label{fig:RS_2mm}
\end{figure}
\begin{figure} [h]
\def\svgwidth{\linewidth}
\input{pics/ResStress_4_8mm/ResStress_4_8mm.eps_tex}
\caption{Experimentally determined and numerically calculated (Sim.) residual stresses after LSP treatment in AA2024 with 4.8\,mm thickness. The firstly treated side is at 0\,mm. The numerically determined residual stress profile shows tensile residual stresses at mid-thickness (1.8-3.0\,mm). The residual stress profiles are taken from \cite{kallien2019effect} (BM) and \cite{keller_experimentally_2019} (LSP).}
\label{fig:RS_4.8mm}
\end{figure}
\subsubsection*{Fatigue crack growth}
Experimentally determined FCG rates of the untreated material show the typical exponential correlation between FCG rate $\mathrm{d}a/\mathrm{d}N$ and stress intensity factor range $\Delta K$ known for the Paris regime, see Fig.~\ref{fig:FCG_mm}. This characteristic FCG behaviour is significantly affected by the introduced residual stresses for both investigated material thicknesses. For the LSP-treated samples, the FCG rate increases between the initial crack front and the peening patch. This increased FCG rate is attributed to balancing tensile residual stresses, as indicated in Fig.~\ref{fig:CT_Specimen}. Thereafter, the FCG rate decreases to a minimum at $a \approx 49\,\mathrm{mm}$, when the crack front is located within the area of the peening patch. After the crack front has passed the peening patch, the FCG rate accelerates, but stays below the FCG rate of specimens without LSP treatment. This characteristic FCG behaviour is observed for both material thicknesses.
The observation of the increased FCG rate highlights the importance of the overall residual stress field for an efficient application of residual stress modification techniques such as LSP. Furthermore, tools for FCG rate calculation need to predict this possible increase of the FCG rate as well. While the material thickness of 2\,mm allows the experimental determination of residual stresses over the entire material thickness, the relatively thin material may lead to buckling during the fatigue testing at larger crack lengths. These buckling phenomena are indicated from $a > 50 \, \mathrm{mm}$ on and may cause the increased scatter of the experimentally determined FCG rate for the 2\,mm material thickness.
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/Comp_1061_1060/Comp_1061_1060.eps_tex}
\caption{FCG rate in AA2024 with 2\,mm and 4.8\,mm thickness in BM and LSP-treated material. The introduced residual stress field leads to an accelerated FCG in front of the peened area and a FCG retardation when the crack front is located within the peening patch. The different curves represent the repeated experiments. The data for the 4.8\,mm specimens are taken from \cite{keller_experimentally_2019}.}
\label{fig:FCG_mm}
\end{figure}
\section{Simulation of fatigue crack growth}
\label{sec:Sim}
In the following, the phase-field model described in Sec.~\ref{sec:model} is used to simulate the fatigue crack growth experiments described in Sec.~\ref{sec:Exp_FCG}. Starting with an unpeened specimen, model parameters are studied and the model is calibrated to one fatigue crack growth curve. With the calibrated model parameters, the other unpeened and peened specimens are simulated.
For all simulations, a staggered solution scheme \cite{hofacker_continuum_2012} is applied to solve the coupled problem with mechanical field and phase-field. A structured, locally refined mesh with a minimum mesh size of $h_{\min}=0.33$\,mm is used. Due to the thin specimens, a plane stress state is assumed. The plane stress assumption is supported by the experimental determination of the residual stress component perpendicular to the material surface ($\sigma_{zz} \approx 0\,\mathrm{MPa}$) after LSP application via synchrotron radiation in \cite{keller2018experimental}. As comparative computations showed, a tension-compression split, see \cite{miehe_phase_2010}, is not necessary for the simulations due to the simple stress state of the specimen. The characteristic length is set to $\ell=1$\,mm as a compromise between mesh refinement and accuracy. The elastic, cyclic and fracture-mechanical parameters for the AA2024-T3 material taken from the literature are specified in Tab.~\ref{tab:matpar}.
The specimens are loaded cyclically with a force ratio $\tilde{F}_\mathrm{min}/\tilde{F}_\mathrm{max}=0.1$. The force boundary condition is kept at $\tilde{F}_\mathrm{max}$ throughout the simulation. This is possible due to the model formulation with the LSA: the damage parameter $P_\mathrm{SWT}$ only needs amplitude and mean stress values as input, see Sec.~\ref{sec:model_Fat}. Therefore, a simulation of the full loading path is not necessary for the damage calculation.
Due to the constant amplitude loading, several load cycles $\Delta N$ can be simulated within one increment. $\Delta N$ is reduced adaptively depending on the number of Newton iterations required in the staggered loop, starting with $\Delta N=3000$.
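The text only states that $\Delta N$ is adapted with the Newton iteration count. A minimal sketch of such a cycle-jump controller could look as follows; the thresholds `iters_low`/`iters_high` and the halve/double policy are assumptions for illustration, not the exact rule used in the simulations.

```python
def next_cycle_jump(dN, newton_iters, dN_start=3000, dN_min=1,
                    iters_low=4, iters_high=8):
    """Heuristic cycle-jump control (assumed scheme, not the paper's exact rule).

    Shrink the number of cycles dN lumped into one increment when the
    staggered/Newton loop struggles, and grow it again (capped at the
    starting value) when convergence is easy.
    """
    if newton_iters > iters_high:      # hard increment: halve the jump
        dN = max(dN_min, dN // 2)
    elif newton_iters < iters_low:     # easy increment: grow cautiously
        dN = min(dN_start, dN * 2)
    return dN
```

Any controller of this kind preserves the key property exploited here: during stable Paris-regime growth, thousands of cycles are covered by a single increment.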
\begin{table}
\caption{Material parameters of aluminium AA2024-T3 used in phase-field simulations.}
\label{tab:matpar}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
Elastic constants \cite{boller_materials_1987} & $E=74.6$ GPa & $\nu=0.33$ \\
CSSC \cite{boller_materials_1987} & $K'=0.453$ GPa & $n'=0.201$ \\
SWC \cite{boller_materials_1987} & $\sigma'_\mathrm{f}=0.314$ GPa & $\varepsilon'_\mathrm{f}=0.162$ \\
& $b=-0.091$ & $c=-0.452$ \\
Fracture toughness \cite{kaufman_fracture_2001} & $\mathcal{G}_\mathrm{c}=0.165\, \mathrm{MPa\,m}$ & \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Unpeened specimens}
\label{sec:Sim_FCG}
\subsubsection*{Model parameters}
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/Comp_1066_1067/Comp_1066_1067.eps_tex}
\caption{FCG rate for range of stress intensity factor $\Delta K$ (Paris curves). Study of the exponent $\xi$ and the threshold $\alpha_0$ of the fatigue degradation function.}
\label{fig:alpha_xi}
\end{figure}
The parameters of the fatigue degradation function (\ref{eq:alpha}), $\alpha_0$ and $\xi$, are the only model parameters that have to be calibrated apart from the characteristic length $\ell$. All other parameters -- listed in Tab.~\ref{tab:matpar} -- are drawn from standardised experiments. The influence of the fatigue degradation function is studied on the 4.8\,mm thick specimen loaded with \mbox{$\tilde{F}_\mathrm{max}=4\,\mathrm{kN}$}. Fig.~\ref{fig:alpha_xi} shows the results as a Paris plot, i.\,e. the crack growth rate over the range of the stress intensity factor. The variation of the threshold value $\alpha_0$ while keeping $\xi=1000$ shows its influence on the inclination and curvature of the Paris curve. For \mbox{$\alpha_0=0.002$}, the graph is a straight line in the double logarithmic plot, which is typical for most crack growth experiments. Varying the exponent~$\xi$ while keeping $\alpha_0=0.002$ shifts the Paris curve in vertical direction.
For a study of the influence of the model parameters on crack \textit{initiation}, the reader is referred to \cite{seiler_efficient_2020}.
\subsubsection*{Model calibration}
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/Exp_488/Exp_488.eps_tex}
\caption{Fatigue crack growth of a 2\,mm thick specimen loaded with $\tilde{F}_\mathrm{max}=1.65\,\mathrm{kN}$. Model parameters are fitted to experimental results yielding $\alpha_0=0.0015$ and $\xi=500$. \textbf{a)} Fatigue degradation $\alpha$ and \textbf{b)} phase-field variable $d$ after $\approx251\,500$ load cycles. \textbf{c)} Paris curve.}
\label{fig:Exp488}
\end{figure}
The different effects of the two parameters allow for a convenient calibration of the model. Here, a fatigue crack growth experiment with the 2\,mm thick specimen loaded with \mbox{$\tilde{F}_\mathrm{max}=1.65\,\mathrm{kN}$}, which was repeated three times, is used for calibration. The fit of the Paris curve yielded the model parameters $\alpha_0=0.0015$ and $\xi=500$ as displayed in Fig.~\ref{fig:Exp488}. The figure also shows the distribution of the fatigue degradation $\alpha$ and the crack indicating phase-field variable $d$ after $251\,500$ load cycles.
\subsubsection*{Test loading case}
The calibrated parameters are tested with a different fatigue crack growth experiment, using a 4.8\,mm thick specimen loaded with $\tilde{F}_\mathrm{max}=4\,\mathrm{kN}$, also repeated three times. As displayed in Fig.~\ref{fig:Exp496}, the simulation meets the experiments quite well, yet underestimates the crack growth rate slightly. This could be due to the fact that the plane stress assumption used in the simulation is less accurate for a thicker (albeit still thin) specimen.
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/Exp_496/Exp_496.eps_tex}
\caption{Fatigue crack growth in a 4.8\,mm thick specimen loaded with $\tilde{F}_\mathrm{max}=4\,\mathrm{kN}$. Test loading case with calibrated model parameters $\alpha_0=0.0015$ and $\xi=500$.}
\label{fig:Exp496}
\end{figure}
\subsection{Peened specimens}
\label{sec:Sim_RS}
\begin{figure}
\def\svgwidth{\linewidth}
\input{pics/Comp_1074_1077/Comp_1074_1077.eps_tex}
\caption{Fatigue crack growth in 2\,mm and 4.8\,mm thick peened and unpeened specimen loaded with $\tilde{F}_\mathrm{max}=1.65\,\mathrm{kN}$ and $\tilde{F}_\mathrm{max}=4\,\mathrm{kN}$. Model parameters $\alpha_0=0.0015$ and $\xi=500$. \textbf{a)} Imposed residual stress component $\sigma_{0,yy}$ from residual stress measurements after LSP. The component $\sigma_{0,xx}$ is not shown here. \textbf{b)} and \textbf{c)} Crack growth rate.}
\label{fig:Comp1074_1077}
\end{figure}
Before the crack growth simulation, the initial residual stress state has to be established. For this purpose, the experimentally determined residual stresses are mapped to the mesh used. In this context, the integral mean of the depth profile of the experimentally determined residual stresses $\sigma_{0,xx}$ and $\sigma_{0,yy}$ is taken to fit the 2D plane stress simulation. The integrated shear stress as well as the stress in the thickness direction is close to zero. With a preliminary, load-free simulation, an equilibrium stress state is found which serves as the initial residual stress $\boldsymbol{\sigma}_0$ in the actual simulation. For both the 2\,mm and the 4.8\,mm thick specimens, the employed residual stress component $\sigma_{0,yy}$ is depicted exemplarily in Fig.~\ref{fig:Comp1074_1077}a).
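The integral mean over the measured depth profile can be sketched with a trapezoidal rule on the (generally non-uniform) drilling increments; the function name and interface are illustrative.

```python
def thickness_mean(z_mm, sigma_MPa):
    """Integral mean of a measured depth profile sigma(z) over the thickness.

    Uses the trapezoidal rule on the (possibly non-uniform) measurement
    depths z_mm. This collapses the through-thickness residual stress
    profile to the single in-plane value imposed in the 2D plane-stress
    simulation.
    """
    area = sum(0.5 * (sigma_MPa[i] + sigma_MPa[i + 1]) * (z_mm[i + 1] - z_mm[i])
               for i in range(len(z_mm) - 1))
    return area / (z_mm[-1] - z_mm[0])
```

A constant profile is reproduced exactly, and a linear profile averages to its midpoint value, as expected of an integral mean.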
Both peened specimens are now simulated with the parameters fitted in the previous section. Please note that no additional parameters are modified for the simulations including the residual stresses. The initial load cycle increments are $\Delta N=300$ and $\Delta N=1000$, respectively. Fig.~\ref{fig:Comp1074_1077} shows the FCG rates. The peened area is shaded. Within this area, compressive residual stresses dominate, while before and behind it the residual stresses are primarily tensile. Both simulations reproduce the effect of the peening qualitatively very well: The crack is accelerated in front of and behind the peened area, while within the peened area it is inhibited.
The model overestimates the influence of the residual stresses. This is true for both crack accelerating and inhibiting effects. One reason for the quantitative gap between experiment and simulation is that crack closure is not considered in the model. Another reason could be the simplification to a 2D stress state. The residual stresses introduced through LSP have a distinct profile over the thickness which influences the FCG rate.
The oscillations at the end of the peened area result from the fact that the very low FCG rates in this area lead to almost zero crack growth in some increments. The jump at the end of the peened area presumably stems from the high residual stress gradient applied in the simulation, since the residual stresses are measured only pointwise.
\section{Conclusion}
\label{sec:Conc}
This paper revisits a phase-field model for the computationally effective simulation of fatigue cracks \cite{seiler_efficient_2020}. The model is calibrated and validated with FCG experiments in aluminium metal sheets. It is able to reproduce different FCG experiments fairly well. In the second part, residual stresses are introduced into the metal sheets through LSP. A method for the incorporation of residual stresses into the model is presented. The model is able to reproduce the crack inhibiting effect of the compressive residual stresses qualitatively.
Future work will focus on low and very low cycle fatigue, where the elastic approximation is not valid anymore. Moreover, the degradation of residual stresses due to cyclic plasticity deserves closer attention. A 3D simulation which considers the distinct crack closure effects over the thickness of the specimen could also yield more realistic results.
\begin{acknowledgements}
The group of M. Kästner thanks the German Research Foundation DFG which supported this work within the Priority Programme 2013 ``Targeted Use of Forming Induced Residual Stresses in Metal Components'' with grant number KA 3309/7-1. The authors would like to thank M. Horstmann and H. Tek for the specimen preparation and performing the fatigue tests.
\end{acknowledgements}
\textbf{Conflict of interest}
On behalf of all authors, the corresponding author states that there is no conflict of interest. \\
This is a preprint of an article published in \textit{Archive of Applied Mechanics}. The final authenticated version is available online at: https://doi.org/10.1007/s00419-021-01897-2.
\bibliographystyle{spmpsci}
\section{Introduction}
Fundus images capture snapshots of the posterior portion of the eye, to detect retinal pathologies such as diabetic retinopathy (DR), glaucoma and macular edema. Several automated diagnostic systems have been developed over the past decade that utilize fundus images for primary-care physicians to generate a quick ``second opinion'' and enable decision-making regarding referrals and follow-up treatment \cite{fraz} \cite{DREAM}. Most such automated diagnostic systems using fundus images are primarily based on machine learning and decision making principles. With increasing dimensions and sizes of medical data, automated decision making processes may experience scalability issues due to the speed, volume, variety and complexity involved with ``large-scale'' medical image data. In this paper, we present a scalable cloud-computing framework using the Microsoft Azure Machine Learning Studio (MAMLS) platform to analyze and classify high-dimensional fundus image-based medical data sets and ensure high classification accuracy.
Large data sets with high dimensionality require substantial amount of computation time for data creation and data processing \cite{cloudref}. In such instances, data mining strategies such as feature reduction are found to be effective in enhancing manageability by significantly reducing the dimensionality and computational time complexity \cite{DREAM, Major}. In this work a novel cloud-computing framework is presented that is capable of generalizing the steps for fundus image-based classification tasks to ensure maximum accuracy and low computational time complexity for automated DR screening systems. Most existing automated screening systems for non-proliferative DR (NPDR) ensure pathology detection at the cost of high false positives \cite{DREAM}. Proliferative DR (PDR) detection systems on the other hand, focus on retinal blood vessel extraction followed by classification for detection of new-vessel like abnormalities in the retina \cite{Lee}. All such automated DR detection systems primarily focus on classification accuracies per image, rather than the classification accuracy per lesion (or per pathological manifestation). The proposed system is trained to focus on pathology level classification to find generalizable features that discriminate borderline pathological manifestations from their normal counterparts. Such a generalized large-scale cloud-computing based analysis is capable of performing exhaustive feature set analysis and optimal classifier identification, thereby improving the state-of-the-art pathology classification metrics, thus leading to improved prognosis.
This paper makes two key contributions. First, it introduces a novel cloud-computing framework that processes large fundus image data sets to evaluate optimal classification features. This MAMLS generalized flow analyzes over 229,386 samples from fundus images with 98 features per sample by performing feature ranking, reduction and classification in under 15 minutes of cloud-computing time. Second, several feature ranking strategies are comparatively analyzed and the minimal-redundancy-maximal-relevance (mRMR) \cite{mrmr} feature ranking strategy is found to be the best detector of optimal feature sets for fundus image classification tasks. These optimal feature sets are more discriminating than the full feature sets, increasing the overall classification accuracy by 0.2-1.2\% with an 11-23\% reduction in computational time complexity when compared to the full feature set in the MAMLS platform.
\section{Data and Method}
This work analyzes the image-based features that uniquely identify retinal pathologies such as NPDR and blood vessel abnormalities due to PDR. While large numbers of image-based features can be useful in generalizing automated pathology classification methodologies, the identification of the optimal feature sets that maximize classification accuracies is key for accurate detection of the borderline pathological images. In this work, region-based and pixel-based features are analyzed for their impact on binary and multi-class classification for two separate automated pathology detection tasks based on the fundus image data sets described below.
\subsection{Fundus Image Data}
\begin{itemize}
\item DIARETDB1 \cite{DB1}: data set consists of 89 fundus images with $50^{\circ}$ FOV, that are manually annotated for bright lesions (hard exudates and cotton wool spots) and red lesions (haemorrhages and microaneurysms) corresponding to varying severities of NPDR. A sample image and the lesions are shown in Fig. \ref{DB1}. Automated image filtering and segmentation can be used to detect bright regions and red regions separately \cite{DREAM}, where each region corresponds to a sample for classification. An optimal set of region-based features corresponding to the bright and red regions can then be used to maximize the overall classification accuracy for such a multi-class classification task for NPDR detection with 6 classes (corresponding to false positive bright regions, hard exudates, cotton wool spots, false positive red regions, haemorrhages and microaneurysms, respectively).
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 3.0in, height=1.75in]{pics/DB1ex}
\caption{A sample fundus image from DIARETDB1 data set with bright and red lesions corresponding to NPDR.}\label{DB1}
\end{center}
\end{figure}
\item STARE \cite{STARE}: data set contains 20 fundus images with $35^{\circ}$ FOV that are manually annotated for blood vessels by two independent human observers. Here, 10 images represent patients with retinal abnormalities while the remaining 10 represent normal retina. A sample image and its vessel annotations are shown in Fig. \ref{fundus}(a),(b), respectively. Vessels marked by the second manual observer are considered ground-truth. PDR is known to cause fine vessel-like growth to appear in fundus images. Although the major blood vessel regions are easily detectable by high-pass and morphological filtering as shown in \cite{Major}, detection of finer vessel-like regions is challenging. An optimal set of region-based and pixel-based features can then be used to classify the fine vessel regions from non-vessels (binary classification) to aid PDR detection. Here, each minor vessel region corresponds to a sample for classification.
\end{itemize}
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width = 1.5in,height=0.9in]{pics/fundus}}
\subfigure[]{\includegraphics[width = 1.5in, height=0.9in]{pics/fundus_maj}}
\subfigure[]{\includegraphics[width = 1.5in, height=0.9in]{pics/fundus_man}}
\subfigure[]{\includegraphics[width = 1.5in, height=0.9in]{pics/fundus_min}}
\caption{Blood vessel segmentation using fundus images (a) Fundus Image. (b) Manually marked blood vessels. (c) Major vessels detected using \cite{Major}. (d) Remaining minor vessel regions for binary classification.}
\label{fundus}
\end{center}
\end{figure}
The histogram of sample distributions from our two data sets is shown in Fig. \ref{hist}. Classification of both data sets poses challenges due to the unbalanced sample distributions. Once the various sample regions are extracted from the fundus images, the next steps to extract features and classification are described in the sections below.
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width = 1.4in,height=1.0in]{pics/DB1hist}}
\subfigure[]{\includegraphics[width = 1.4in, height=1.0in]{pics/STAREhist}}
\caption{Sample distribution for the 2 data sets. (a) Distribution of the 6 class types from DIARETDB1 data set. False positive red lesions regions (class label 3) have the largest number of samples while cotton wool spot regions (class label 2) have the smallest number of samples. (b) Distribution of the 2 class types from the STARE vessel data set. Number of non-vessel regions (class label 0) is greater than the number of actual vessel regions (class label 1).}
\label{hist}
\end{center}
\end{figure}
\subsection{Feature Extraction}
The features that are extracted for classifying the region-based samples extracted from the data sets can be categorized into 7 categories shown in Table \ref{notation}. As a pre-processing step, the green plane of each fundus image is resized to [500x500] pixels and the pixel intensities are rescaled in the range [0,1], resulting in image $I$. From the RGB to HSI converted image planes \cite{DREAM}, the other similarly resized and rescaled image planes include the red plane ($I^r$), hue plane ($I^h$), saturation plane ($I^s$) and intensity plane ($I^i$). The Gaussian derivative images corresponding to 6 coefficients from 0th to second order Gaussian filtering of image $I$ in the horizontal ($x$ direction) and vertical ($y$ direction) with $\sigma^2=8$ are denoted as $[I^{G},I^{G}_x,I^{G}_y,I^{G}_{x,y},I^{G}_{xx},I^{G}_{yy}]$, respectively \cite{DREAM}. First and second order gradient images in $(x,y)$ directions for various image planes are denoted by the subscript $_{(x,y)}, _{(xx,yy)}$, respectively.
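As an illustration only (not code from the paper), the preprocessing of one image described above might be sketched with NumPy/SciPy as follows; the random array stands in for a real fundus image, and the axis convention chosen for the derivative orders is an assumption:

```python
import numpy as np
from scipy import ndimage

# Hypothetical RGB fundus image; in practice this is loaded from the data set.
rng = np.random.default_rng(0)
rgb = rng.random((600, 600, 3))

# Green plane, resized to 500x500 and rescaled to [0, 1], giving image I.
green = rgb[:, :, 1]
I = ndimage.zoom(green, (500 / green.shape[0], 500 / green.shape[1]), order=1)
I = (I - I.min()) / (I.max() - I.min())

# Gaussian derivative images for sigma^2 = 8. The six (axis0, axis1) orders
# correspond to [I_G, I_Gx, I_Gy, I_Gxy, I_Gxx, I_Gyy], assuming axis 0 is
# vertical and axis 1 is horizontal.
sigma = np.sqrt(8.0)
orders = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 2), (2, 0)]
gauss_images = [ndimage.gaussian_filter(I, sigma, order=o) for o in orders]

print(I.shape, len(gauss_images))  # (500, 500) 6
```

The HSI planes $I^r$, $I^h$, $I^s$, $I^i$ and the gradient images are built the same way from the other color planes before the per-region statistics of Table 1 are collected.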
\begin{table}[ht]
\begin{center}
\caption{Definition of Features.}
\begin{tabular}{|c|l|l|}
\hline
\#&Category&Features\\ \hline \hline
14&Structural&Area, bounding box lengths, convex area,\\
&&filled area, Euler number, extent,\\
&& major and minor axes lengths, orientation,\\
&&eccentricity, perimeter, solidity.\\\hline
12&Gaussian&Mean and variance in Gaussian coefficient\\
&Coefficients&images $[I^{G},I^{G}_x,I^{G}_y,I^{G}_{xy},I^{G}_{xx},I^{G}_{yy}]$.\\ \hline
16&Regional&Regional Mean, minimum, maximum and\\
&Intensity&std. dev. for images [$I,I^r,I^h,I^i$]\\\hline
24&Gradient&Maximum, minimum and mean pixel intensities\\
&Intensity&in gradient images $[I_{(x,y)},I_{(xx,yy)},I^{r}_{(x,y)},$\\
&&$I^{r}_{(xx,yy)},I^{h}_{(x,y)},I^{h}_{(xx,yy)},I^{s}_{(x,y)},I^{s}_{(xx,yy)}]$\\ \hline
24&Gradient in &Maximum, minimum and mean pixel intensities\\
&Image&in $[I.I_{(x,y)},I.I_{(xx,yy)},I^{r}.I^{r}_{(x,y)},I^{r}.I^{r}_{(xx,yy)},$\\
&Intensity&$I^{h}.I^{h}_{(x,y)},I^{h}.I^{h}_{(xx,yy)},I^{s}.I^{s}_{(x,y)},I^{s}.I^{s}_{(xx,yy)}]$\\ \hline
4&Pixel-window&Pixel intensity: Max. in [3x3], mean in [5x5], \\
&based \cite{Major}&std. dev. in [5x5], neighbors in [5x5] window.\\ \hline
4&Pixel intensity&From images $[I^{G}_x,I^{G}_y,I^{G}_{xx},I^{G}_{yy}]$.\\ \hline
\end{tabular}
\label{notation}
\end{center}
\end{table}
For the DIARETDB1 data set, $n=15,945$ samples with $L=66$ region-based features per sample are extracted using the 14, 12, 16 and 24 features corresponding to Structural, Gaussian Coefficient, Regional intensity and Gradient Intensity in Table \ref{notation}, respectively. For the STARE data set, $n=229,386$ samples with $L=98$ region-based and pixel-based features per sample are extracted using all the features defined in Table \ref{notation}. The next step is identification of the most discriminating features for classification tasks.
\subsection{Feature Ranking and Classification}
The discriminating characteristic of each feature is evaluated using 3 ranking methods. First, the F-score of each feature ($\phi$) is evaluated using (\ref{feqn}). Here, for $c$ different class labels, the mean feature value ($v$) for all samples in class $c$ is denoted as $\overline{v^{c}_{\phi}}$, while the overall mean feature value is $\overline{v_{\phi}}$. The number of samples belonging to each class type is $n^{c}$ and total number of samples is $n$. The second feature ranking method utilizes the correlation coefficient between feature distributions as a metric for feature ranking in (2). Here, the underlying assumption is that the discriminating characteristic of a feature ($\phi_1$) can be improved by using it in combination with other strongly correlating features ($\phi_2$). Thus, features are ranked in the decreasing order of their correlation coefficients ($\rho$) with the remaining features using (2). The third feature ranking strategy uses the mRMR criterion \cite{mrmr} that is based on mutual information from the individual features. Here, features are ranked based on the top combination of features that have maximum relevance with the sample class labels and minimum redundancy.
\begin{eqnarray}\label{feqn}
\forall \phi, \quad F(\phi)=\frac{\sum_{j=0}^{c-1} (\overline{v^{j}_{\phi}}-\overline{v_{\phi}})^2}{\sum_{j=0}^{c-1} \frac{1}{n^j-1} \sum_{k=1}^{n^j}(v_{k,\phi}^{j}-\overline{v^{j}_{\phi}})^2}.\\
\rho(\phi_1,\phi_2)=\frac{\overline{v_{\phi_1}v_{\phi_2}}-\overline{v_{\phi_1}}\cdot\overline{v_{\phi_2}}}{\frac{1}{n}\sqrt{\sum_{k=1}^{n}(v_{k,\phi_1}-\overline{v_{\phi_1}})^2\sum_{k'=1}^{n}(v_{k',\phi_2}-\overline{v_{\phi_2}})^2}}.\\ \nonumber
\end{eqnarray}
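The per-feature F-score of equation (\ref{feqn}) can be sketched in a few lines; the toy data below is not from the paper, and the mRMR and correlation rankings would follow the same per-fold pattern:

```python
import numpy as np

def f_score(values, labels):
    """Per-feature F-score: between-class scatter of the class means over
    the summed (unbiased) within-class variances, as in eq. (1)."""
    classes = np.unique(labels)
    overall = values.mean()
    num = sum((values[labels == c].mean() - overall) ** 2 for c in classes)
    den = sum(values[labels == c].var(ddof=1) for c in classes)
    return num / den

# Toy data: feature 0 separates the two classes, feature 1 is pure noise.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 200)
X = rng.normal(size=(400, 2))
X[labels == 1, 0] += 3.0                     # shift class 1 in feature 0

scores = [f_score(X[:, j], labels) for j in range(X.shape[1])]
ranking = np.argsort(scores)[::-1]           # rank features, best first
print(ranking)                               # feature 0 comes out on top
```

In the actual scheme this ranking is computed on the 80\% ranking partition of each fold and the ranks are averaged over the 5 folds.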
For optimal feature ranking, 5-fold cross validation followed by classification is performed. First, each data set is partitioned into training data (30\% samples) and testing data (70\% samples) \cite{DREAM}. Next, the training data set is separated into 5-folds, where in each fold, 80\% of the data samples are used for feature ranking and classifier parametrization, while the remaining 20\% data samples are used for validation of the trained classifier. The averaged ranks across all the folds are analyzed for aggregated classification performance as shown in Fig. \ref{fold}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 2.7in, height=2.0in]{pics/folding}
\caption{Feature ranking process with 5-fold cross-validation.
}\label{fold}
\end{center}
\end{figure}
Finally, optimal classifier selection is performed for the two data sets from a family of classifiers including k-nearest neighbors, Gaussian Mixture Models, Support Vector Machines, Decision Forests (DF) and Boosted Decision Trees (BDT). It is observed that the BDT and DF classifiers have the least average validation error for the DIARETDB1 and STARE data sets, respectively.
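The MAMLS experiments themselves are configured in the cloud interface rather than scripted; purely as an illustrative stand-in, the same kind of classifier comparison can be sketched with scikit-learn on synthetic data (all model settings below are assumptions, not the tuned MAMLS parameters):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the region-based feature table; the real experiments
# use up to 98 features per sample.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           random_state=0)

candidates = {
    "kNN": KNeighborsClassifier(),
    "Decision Forest": RandomForestClassifier(random_state=0),
    "Boosted Decision Tree": GradientBoostingClassifier(random_state=0),
}
results = {}
for name, clf in candidates.items():
    # 5-fold cross-validated accuracy, mirroring the validation scheme above.
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:>22s}: {results[name]:.3f}")
```

The classifier with the lowest average validation error across folds is then retrained on the full 30\% training partition and evaluated on the held-out 70\%.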
\section{Results}\label{result}
In Fig. \ref{feat}, the average classification accuracies on the DIARETDB1 and STARE data sets are analyzed using the top $\phi$ combinations of ranked features, with $\phi \in [1,L]$, where $L=66$ and $L=98$ for the DIARETDB1 and STARE data sets, respectively. Here, it is observed that using the mRMR feature ranking strategy, the top 10-15 features are capable of achieving about 75-80\% classification accuracy, while the remaining 25-30 features contribute an additional 3-6\% increase in overall classification accuracy. Thus, the top 10-15 features may be adequate for initial screening purposes, but the complete set of 40 features becomes important in case of borderline decision making tasks, i.e. separating fundus images with moderate NPDR from severe NPDR.
\begin{figure}[ht]
\begin{center}
\subfigure[]{\includegraphics[width = 3.0in,height=1.55in]{pics/db1feat}}
\subfigure[]{\includegraphics[width = 3.0in, height=1.55in]{pics/starefeat}}
\caption{Classification performance assessment for top ranked features. Top 40 mRMR ranked features are most accurate for (a) DIARETDB1 data set, (b) STARE data set.}
\label{feat}
\end{center}
\end{figure}
For both DIARETDB1 and STARE data sets, the mRMR feature ranking strategy results in highest classification accuracy using top 40 features. For the DIARETDB1 data set, the top 40 features include the 14 structural, 11 Gaussian Coefficient, 9 regional intensity, 6 gradient intensity features from Table \ref{notation}. On this data set, the optimal feature set results in 1.2\% higher accuracy and 11.2\% lower computation time than the entire feature set. For the STARE data set, top 40 features include 4 pixel-window based, 4 pixel intensity-based, 14 structural, 10 Gaussian Coefficient, 8 regional intensity features from Table \ref{notation}. On this data set, the optimal feature set results in 0.24\% higher accuracy with 23.4\% lower processing time when compared to the entire feature set. The performance of the optimal feature set with respect to the existing methods is shown in Table \ref{res}.
\begin{table*}[ht]
\begin{center}
\caption{Classification Accuracy of Optimal Feature Set in comparison with full feature set and existing works. Computation time is measured in the MAMLS platform.}
\begin{tabular}{|c| c| c| c| c|}
\hline
Dataset&All Features(ACC)&Optimal Features (ACC)&Existing work/ Features (ACC)&Computation Time (seconds)\\ \hline\hline
DIARETDB1 \cite{DB1}&66 (0.89)&40 (0.901)&\cite{DREAM}/ 30 (0.886)&792\\ \hline
STARE \cite{STARE}&98 (0.832)&40 (0.835)& \cite{Major}/ 8 (0.751)&326\\ \hline
\end{tabular}
\label{res}
\end{center}
\end{table*}
The Receiver Operating Characteristic (ROC) curves and area under ROC curves (AUC) for the challenging classification tasks of hemorrhages from microaneurysms in the DIARETDB1 data set and for classification of minor blood vessels from non-vessels are shown in Fig. \ref{roc}. Using the optimal set of top 40 features, the observed [sensitivity (SEN), specificity (SPEC), AUC] for classification of red lesions from false positive regions is [0.9,0.7,0.895], which has better DR screening performance than [0.8,0.85,0.84] reported in \cite{DREAM}.
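As a self-contained illustration of how such ROC curves and AUC values are obtained (with synthetic scores, not the paper's data), a threshold sweep over classifier outputs suffices:

```python
import numpy as np

def roc_auc(scores, labels):
    """Empirical ROC points and AUC, obtained by sweeping a decision
    threshold from the highest score downwards."""
    order = np.argsort(scores)[::-1]
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()          # sensitivity
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # 1 - specificity
    auc = np.sum(0.5 * (tpr[1:] + tpr[:-1]) * np.diff(fpr))  # trapezoidal AUC
    return fpr, tpr, auc

# Toy scores standing in for classifier outputs on vessel / non-vessel regions.
rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 500)
scores = labels + rng.normal(scale=0.8, size=1000)  # imperfect separation
fpr, tpr, auc = roc_auc(scores, labels)
print(f"AUC = {auc:.2f}")
```

Picking an operating point on the curve fixes the [SEN, SPEC] pair reported above.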
\begin{figure}
\begin{center}
\subfigure[]{\includegraphics[width = 1.5in, height=1.5in]{pics/DB15n}}
\subfigure[]{\includegraphics[width = 1.5in, height=1.5in]{pics/STAREROCred}}
\caption{ROC curves generated by the MAMLS platform. (a) Classification of hemorrhages from microaneurysms in DIARETDB1 (AUC=0.78). (b) Classification of minor blood vessels in STARE (AUC=0.914).}
\label{roc}
\end{center}
\end{figure}
\section{Conclusions and Discussion} \label{conclusion}
In this paper optimal feature sets have been identified for classification of NPDR lesions and minor vessels that can aid automated DR screening systems \cite{DREAM}. It is observed that the mRMR feature ranking strategy is most efficient in detecting combinations of region-based and pixel-based features for DR classification tasks. Additionally, the Decision Forest and Boosted Decision Tree classifiers in the MAMLS platform were found to be most effective for such large-scale fundus image data classification. The data sets used for the proposed analysis are available for download and classification performance analysis \footnote[1]{https://sites.google.com/a/uw.edu/src/useful-links}. Future efforts will be directed towards evaluating the proposed large-scale screening systems for NPDR and PDR on additional fundus image data sets.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec1}
In elastodynamic wave theory, a propagator matrix is a square matrix that `propagates' a wave-field vector from one depth level to another.
It was originally introduced in geophysics for horizontally layered media \citep{Thomson50JAP, Haskell53BSSA, Gilbert66GEO} and
later extended for laterally varying media \citep{Kennett72GJRAS}. It has been used for modelling surface waves \citep{Woodhouse74GJR}
and reflection and transmission responses of heterogeneous media \citep{Haines88GJI, Kennett90GJI, Koketsu91GJI, Takenaka93WM}.
It has also been proposed as an operator for accurate seismic imaging schemes \citep{Kosloff83GEO}, accounting for multiple scattering \citep{Wapenaar86GP2}.
The wave-field vector that the propagator matrix acts on contains components (e.g. particle velocity and stress) of the full wave field.
Here `full' means that the wave field implicitly consists of
downgoing and upgoing, propagating and evanescent waves.
A Marchenko focusing function is a wave field that focuses at a designated point in space at zero time, accounting for primaries and multiples.
Marchenko focusing functions were originally introduced to express the wave field inside a horizontally layered medium in terms of the reflection response at the
boundary of that medium \citep{Rose2001PRA, Rose2002IP, Broggini2012EJP}. This was later extended for laterally varying media \citep{Wapenaar2014GEO, Slob2014GEO},
under the assumption that the wave field inside the medium can be decomposed into downgoing and upgoing components and that evanescent waves can be neglected.
It has recently been shown that the propagator matrix can be expressed in terms of Marchenko focusing functions and vice versa \citep{Wapenaar2022GEO}.
Via this relation, the usual assumptions underlying the focusing functions (such as ignoring evanescent waves) are circumvented.
A transfer matrix is a square matrix that `transfers' decomposed wave-field vectors (explicitly containing downgoing and upgoing waves) from one depth level to another \citep{Born65Book, Katsidis2002AO}.
In this paper we call this matrix
the transfer matrix, to distinguish it from the propagator matrix, which acts on full wave-field vectors
(but please note that in the literature there is not a clear distinction between the use of the terminologies `propagator matrix' and `transfer matrix').
It has recently been shown that
the transfer matrix can be expressed in terms of decomposed Marchenko focusing functions
\citep{Dukalski2022EAGE, Dukalski2022IMAGE}.
The aim of this paper is to present propagator matrices, transfer matrices and Marchenko focusing functions in a consistent way and to discuss their mutual relations.
We aim to set up the theory as general as possible, accounting for lateral and vertical variations of the medium parameters, accounting for evanescent waves, taking dissipation into account, and considering
wave phenomena ranging from acoustic to seismoelectric waves.
Only the numerical examples, which are meant as illustrations of the different quantities and their relations,
are restricted to oblique acoustic plane waves in a lossless horizontally layered medium.
We hope that this consistent treatment will contribute to the understanding of the mutual connections and provide insight in the assumptions and approximations that underly
Marchenko-type wave field retrieval schemes and how to cope with them
\citep{Slob2016PRL, Dukalski2019GJI, Dukalski2022IMAGE, Reinicke2020GEO, Reinicke2023GJI, Elison2020GJI, Diekmann2021PRR, Wapenaar2021GJI, Kiraz2023GJI}.
Moreover, we hope to stimulate new research directions.
The setup of this paper is as follows. In section \ref{sec2} we discuss the $2\times 2$ propagator matrix for acoustic wave fields and its relation with acoustic Marchenko focusing functions.
The advantage of starting with the acoustic situation is that all expressions are relatively simple and yet contain all essential aspects.
In section \ref{sec3} we discuss the $2\times 2$ transfer matrix for acoustic wave fields and its relation with decomposed acoustic Marchenko focusing functions.
Sections \ref{sec4} and \ref{sec5} are generalisations of sections \ref{sec2} and \ref{sec3} for other wave phenomena.
Here, the propagator and transfer matrices are $N\times N$ matrices, with $N$ ranging from 2 for acoustic waves to 12 for seismoelectric waves;
the Marchenko focusing functions are $\frac{N}{2}\times\frac{N}{2}$ matrices.
We derive their mutual relations
by exploiting general symmetry properties, which are derived in Appendix \ref{AppB}.
Sections \ref{sec4} and \ref{sec5} not only cover classical waves, but also quantum mechanical waves obeying the Schr\"odinger equation ($N=2$) and the Dirac equation ($N=4$, Appendix \ref{AppA}).
In section \ref{sec6} we present some conclusions.
\section{Acoustic propagator matrix and focusing functions}\label{sec2}
\subsection{Acoustic matrix-vector wave equation}
Our starting point is the following acoustic matrix-vector wave equation in the space-frequency domain
\begin{eqnarray}\label{eq2.0}
\partial_3{\bf q} = {{\mbox{\boldmath ${\cal A}$}}}\,{\bf q} +{\bf d}
\end{eqnarray}
\citep{Corones75JMAA, Ursin83GEO, Kosloff83GEO, Fishman84JMP, Wapenaar86GP2, Hoop96JMP}.
Here ${\bf q}$ is a vector containing the wave field components $p$ (acoustic pressure) and $v_3$ (vertical component of the particle velocity), both as a function
of the space coordinate vector ${\bf x}=(x_1,x_2,x_3)$ (with positive $x_3$ denoting depth) and the angular frequency $\omega$, hence,
\begin{eqnarray}\label{eq9996ge}
{\bf q}({\bf x},\omega) = \begin{pmatrix} p \\ v_3 \end{pmatrix}({\bf x},\omega).
\end{eqnarray}
Operator $\partial_3$ stands for the partial differential operator $\partial/\partial x_3$.
The space- and frequency-dependent operator matrix
${{{\mbox{\boldmath ${\cal A}$}}}}$ is defined as
\begin{eqnarray}
{{\mbox{\boldmath ${\cal A}$}}}({\bf x},\omega)&=&
\begin{pmatrix} 0 & i\omega \rho \\
i\omega\kappa-\frac{1}{i\omega}\partial_\alpha\frac{1}{\rho}\partial_\alpha & 0 \end{pmatrix}({\bf x},\omega),
\end{eqnarray}
where $\kappa({\bf x},\omega)$ is the compressibility, $\rho({\bf x},\omega)$ the mass density and $i$ the imaginary unit.
Operator $\partial_\alpha$ stands for the partial differential operator $\partial/\partial x_\alpha$.
Greek subscripts take on the values 1 and 2 and Einstein's summation convention applies to repeated Greek subscripts, unless otherwise noted.
In general the medium may be dissipative,
meaning that $\kappa$ and $\rho$ may be frequency-dependent and complex-valued, with (for positive $\omega$) $\Im(\kappa)\ge 0$ and $\Im(\rho)\ge 0$,
where $\Im$ denotes the imaginary part.
For later convenience we rewrite the operator matrix as follows
\begin{eqnarray}
{{\mbox{\boldmath ${\cal A}$}}}({\bf x},\omega)
&=&\begin{pmatrix} 0 & i\omega \rho \\
-\frac{1}{i\omega\sqrt{\rho}}{\cal H}_2\frac{1}{\sqrt{\rho}} & 0 \end{pmatrix}({\bf x},\omega).\label{eqAcoustic}
\end{eqnarray}
Here ${\cal H}_2({\bf x},\omega)$ is the Helmholtz operator, defined as
\begin{eqnarray}
{\cal H}_2({\bf x},\omega)=k^2({\bf x},\omega)+\partial_\alpha\partial_\alpha, \label{eqHelmholtz}
\end{eqnarray}
with wavenumber $k({\bf x},\omega)$ defined via
\begin{eqnarray}
k^2({\bf x},\omega)=\omega^2\kappa\rho-\frac{3(\partial_\alpha\rho)(\partial_\alpha\rho)}{4\rho^2}+\frac{(\partial_\alpha\partial_\alpha\rho)}{2\rho}\label{eqks}
\end{eqnarray}
\citep{Brekhovskikh60Book, Wapenaar2001RS}.
Finally, vector ${\bf d}$ in equation (\ref{eq2.0}) contains source terms, according to
\begin{eqnarray}\label{eq9996ged}
{\bf d}({\bf x},\omega) = \begin{pmatrix} \hat f_3 \\ \frac{1}{i\omega}\partial_\alpha(\frac{1}{\rho}\hat f_\alpha) + \hat q \end{pmatrix}({\bf x},\omega).
\end{eqnarray}
Here $\hat f_\alpha({\bf x},\omega)$ and $\hat f_3({\bf x},\omega)$ are the horizontal and vertical components, respectively, of the external force density
(the hats are used to distinguish external force components from focusing functions),
and $\hat q({\bf x},\omega)$ is the volume injection-rate density (where $\hat q$
is to be distinguished from the wave field vector ${\bf q}$). From here onward we simplify the notation by not explicitly mentioning the frequency-dependency in the argument lists.
\subsection{Acoustic propagator matrix}
We define a boundary $\partial\mathbb{D}_F$ at depth level $x_3=x_{3,F}$. We define a coordinate vector ${\bf x}_F$ at this boundary as ${\bf x}_F=(x_{1,F},x_{2,F},x_{3,F})$ (with fixed $x_{3,F}$).
We introduce the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$ as a solution of wave equation (\ref{eq2.0}) for the source-free situation, according to
\begin{eqnarray}\label{eq2.1}
\partial_3{\bf W}({\bf x},{\bf x}_F) = {{\mbox{\boldmath ${\cal A}$}}}({\bf x}){\bf W}({\bf x},{\bf x}_F),
\end{eqnarray}
with boundary condition
\begin{eqnarray}
{\bf W}({\bf x},{\bf x}_F)|_{x_3={x_{3,F}}} = {\bf I}\delta({{\bf x}_{\rm H}}-{{\bf x}_{{\rm H},F}}),\label{eq9998d}
\end{eqnarray}
where ${\bf I}$ is the identity matrix and ${\bf x}_{\rm H}$ and ${\bf x}_{{\rm H},F}$ denote the horizontal coordinates of ${\bf x}$ and ${\bf x}_F$, respectively, hence
${\bf x}_{\rm H}=(x_1,x_2)$ and ${\bf x}_{{\rm H},F}=(x_{1,F},x_{2,F})$. Since equations (\ref{eq2.0}) and (\ref{eq2.1}) are both linear, Huygens' superposition principle can be applied
to get a representation for ${\bf q}({\bf x})$ in terms of ${\bf W}({\bf x},{\bf x}_F)$. For a given depth level $x_3$, assuming there are no sources for ${\bf q}({\bf x})$ between $x_{3,F}$ and $x_3$,
we obtain
\begin{eqnarray}\label{eq1330}
{\bf q}({\bf x})&=&\int_{\partial\mathbb{D}_F} {\bf W}({\bf x},{\bf x}_F){\bf q}({\bf x}_F){\rm d}^2{\bf x}_F
\end{eqnarray}
\citep{Gilbert66GEO, Kennett72GJRAS, Woodhouse74GJR}.
Note that equation (\ref{eq1330}) expresses the `propagation' of ${\bf q}$ from depth level $x_{3,F}$ to depth level $x_3$, which is why ${\bf W}({\bf x},{\bf x}_F)$ is called the propagator matrix.
It is partitioned as follows
\begin{eqnarray}\label{eq424}
{\bf W}({\bf x},{\bf x}_F)= \begin{pmatrix}W^{p,p} & W^{p,v} \\
W^{v,p} & W^{v,v} \end{pmatrix}({\bf x},{\bf x}_F),
\end{eqnarray}
where $W^{p,p}$, $W^{p,v}$, $W^{v,p}$ and $W^{v,v}$ are the scalar components of the propagator matrix.
For each of these components, the second superscript refers to the quantity it acts on at ${\bf x}_F$,
whereas the first superscript refers to the quantity it contributes to at ${\bf x}$.
Equation (\ref{eq1330}) is illustrated in the upper-left frame of Figure \ref{Fig1}. The solid line at $x_{3,F}$ denotes the boundary $\partial\mathbb{D}_F$ (not necessarily a physical boundary).
The medium below $\partial\mathbb{D}_F$ may be inhomogeneous and dissipative. The dashed line at $x_3$ indicates an arbitrary depth level inside the inhomogeneous medium.
\begin{figure}
\centerline{\hspace{0cm}\epsfysize=9. cm \epsfbox{Fig1.pdf}}
\caption{
Relations between the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$, the transfer matrix ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$,
and the Marchenko focusing functions $F^p({\bf x},{\bf x}_F)$ and
$F^v({\bf x},{\bf x}_F)$ (right column of ${\bf Y}({\bf x},{\bf x}_F)$).
The green and yellow double-sided arrows indicate full wave fields (implicitly consisting of downgoing and upgoing components), whereas the red and blue
single-sided arrows indicate decomposed downgoing and upgoing wave fields, respectively.}\label{Fig1}
\end{figure}
By applying equation (\ref{eq1330}) recursively, it follows that ${\bf W}$ obeys the following recursive expression
\begin{eqnarray}\label{eq1330k}
{\bf W}({\bf x}',{\bf x}_F)&=&\int_{{{\partial\mathbb{D}}}}{\bf W}({\bf x}',{\bf x}){\bf W}({\bf x},{\bf x}_F){\rm d}^2{\bf x},
\end{eqnarray}
where ${{{\partial\mathbb{D}}}}$ is a horizontal boundary at depth level $x_3$. By taking $x_3'=x_{3,F}$, we obtain from equations (\ref{eq9998d}) and (\ref{eq1330k})
\begin{eqnarray}\label{eq1330kinv}
{\bf I}\delta({{\bf x}_{\rm H}'}-{{\bf x}_{{\rm H},F}})&=&\int_{{{\partial\mathbb{D}}}}{\bf W}({\bf x}',{\bf x}){\bf W}({\bf x},{\bf x}_F){\rm d}^2{\bf x},
\end{eqnarray}
from which it follows that ${\bf W}({\bf x}_F,{\bf x})$ is the inverse of ${\bf W}({\bf x},{\bf x}_F)$.
The propagator matrix accounts for primaries and multiples between $x_{3,F}$ and $x_3$ and holds for propagating and evanescent waves
(for example, \cite{Woodhouse74GJR} uses the elastodynamic version of the propagator matrix to analyse surface waves).
However, evanescent field components may lead to instability and should be handled with care \citep{Kennett79GJRAS}.
Since the underlying wave equation is based on the explicit Helmholtz operator ${\cal H}_2$ (rather than on its implicit square-root, appearing in one-way wave equations),
\cite{Kosloff83GEO} argue that the numerical evaluation of equation (\ref{eq1330}) converges much faster and for higher propagation angles than schemes based on one-way wave equations.
They exploit this property in wide-angle imaging of seismic reflection responses. However, they use filters to eliminate evanescent and downgoing waves, so they do not exploit the
fact that the propagator matrix can handle multiply scattered and evanescent waves.
\cite{Wapenaar86GP2} propose a seismic imaging scheme based on the propagator matrix that handles internal multiple scattering.
Since their scheme is very sensitive to the chosen background model, it has not found wide application.
In section \ref{sec2.3} we show that the propagator matrix can be expressed in terms
of Marchenko focusing functions. For a lossless medium, these focusing functions can be derived from seismic reflection data and a smooth background model (section \ref{sec2.4}).
Hence, this leads to a propagator matrix that can be used for seismic imaging, which properly handles internal multiple scattering without being highly sensitive to the background model.
\begin{figure}
\centerline{\epsfysize=7.cm \epsfbox{Fig2a.pdf}}
\vspace{0.cm}
\centerline{\epsfysize=7. cm \epsfbox{Fig2b.pdf}}
\vspace{0.4cm}
\centerline{\epsfysize=7. cm \epsfbox{Fig2c.pdf}}
\vspace{0.cm}
\caption{(a) Horizontally layered medium. (b) Propagator matrix component $W^{p,p}(s_1,x_3,x_{3,F},\tau)$ (for fixed $s_1=1/3500$ s/m).
(c) Propagator matrix component $W^{p,v}(s_1,x_3,x_{3,F},\tau)$. }\label{Figure2}
\end{figure}
We conclude this section with a numerical illustration of the propagator matrix for the horizontally layered lossless medium of Figure \ref{Figure2}(a).
In each layer the propagation velocity $c=1/\sqrt{\kappa\rho}$ is shown (in m/s).
We define the spatial Fourier transformation of a function $u({\bf x},\omega)$ along the horizontal coordinate ${\bf x}_{\rm H}$ for constant $x_3$ as
\begin{eqnarray}
\tilde u({{\bf s}},x_3,\omega)=\int_{{\mathbb{R}}^2}\exp\{-i\omega{{\bf s}}\cdot{{\bf x}_{\rm H}}\}u({{\bf x}_{\rm H}},x_3,\omega){\rm d}^2{{\bf x}_{\rm H}},\label{eq99950b}
\end{eqnarray}
where ${{\bf s}}=(s_1,s_2)$ is the horizontal slowness vector and ${\mathbb{R}}$ is the set of real numbers.
For a horizontally layered medium, this transformation decomposes $u({\bf x},\omega)$ into independent
plane waves, with propagation angle $\theta$ (with respect to the vertical axis) obeying $\sin\theta=c|{\bf s}|$.
Next, we define the inverse temporal transformation for constant ${\bf s}$ and $x_3$ as
\begin{eqnarray}
u({{\bf s}},x_3,\tau)=\frac{1}{\pi}\Re\int_0^\infty \tilde u({{\bf s}},x_3,\omega)\exp\{-i\omega\tau\}{\rm d}\omega,\label{eq500a}
\end{eqnarray}
where $\Re$ denotes the real part and $\tau$ is the intercept time \citep{Stoffa89Book}.
We apply these transformations to the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$, choosing ${\bf x}_F=(0,0,x_{3,F})$ and setting $s_2=0$.
This yields the transformed propagator matrix ${\bf W}(s_1,x_3,x_{3,F},\tau)$, with boundary condition ${\bf W}(s_1,x_{3,F},x_{3,F},\tau)={\bf I}\delta(\tau)$.
Analogous to equation (\ref{eq1330k}) it obeys the recursive expression
\begin{eqnarray}
{\bf W}(s_1,x_3',x_{3,F},\tau)={\bf W}(s_1,x_3',x_3,\tau)*{\bf W}(s_1,x_3,x_{3,F},\tau),
\end{eqnarray}
where the inline asterisk denotes temporal convolution.
The components $W^{p,p}(s_1,x_3,x_{3,F},\tau)$ and $W^{p,v}(s_1,x_3,x_{3,F},\tau)$,
with boundary conditions $W^{p,p}(s_1,x_{3,F},x_{3,F},\tau)=\delta(\tau)$ and\\ $W^{p,v}(s_1,x_{3,F},x_{3,F},\tau)=0$,
are shown in Figures \ref{Figure2}(b) and \ref{Figure2}(c) for fixed $s_1=1/3500$ s/m, as a function of intercept time $\tau$ and depth $x_3$. To get a smooth display,
at each depth the components are convolved with a Ricker wavelet with a central frequency of 50 Hz. The upper traces at $x_{3,F}=0$ m represent the aforementioned boundary conditions.
Note that $W^{p,p}$ and $W^{p,v}$ are, for each depth $x_3$, even and odd functions, respectively, of intercept time $\tau$.
The propagation velocity in the layer between $x_3=760$ m and $x_3=800$ m equals 3600 m/s, which implies that for the chosen horizontal slowness of $s_1=1/3500$ s/m we have $\sin\theta>1$
(i.e., $\theta$ is complex-valued), hence, waves become `evanescent' in this layer.
The propagator tunnels through this layer and the amplitudes below this layer are higher than above it.
In general, evanescent field components of the propagator matrix should be handled with care,
because next to exponentially decaying terms they contain exponentially growing terms that may cause numerical inaccuracies
\citep{Kennett79GJRAS}.
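The tunneling effect mentioned above is easily reproduced numerically. In the 3600 m/s layer the chosen slowness gives an imaginary vertical slowness, so $\cos(\omega s_3\Delta x_3)$ turns into a hyperbolic cosine, which contains an exponentially growing term (a sketch, assuming the 40 m layer thickness of Figure \ref{Figure2}(a) and a 50 Hz frequency):

```python
import cmath

# s1 = 1/3500 s/m in the 3600 m/s layer: c*s1 > 1, so the vertical
# slowness s3 = sqrt(1/c^2 - s1^2) is imaginary and
# cos(omega*s3*dz) becomes cosh: one constituent decays, the other
# grows exponentially across the layer.
s1 = 1.0 / 3500.0
s3 = cmath.sqrt(1.0 / 3600.0**2 - s1**2)
omega = 2 * cmath.pi * 50.0
Wpp = cmath.cos(omega * s3 * 40.0)   # W^{p,p} across the 40 m layer
```

The magnitude of `Wpp` exceeds 1, illustrating why the exponentially growing terms require careful handling in numerical work.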
\subsection{Relation with acoustic Marchenko focusing functions}\label{sec2.3}
In preparation for defining the focusing functions, we decompose operator matrix ${\mbox{\boldmath ${\cal A}$}}$ as
\begin{eqnarray}
{{{\mbox{\boldmath ${\cal A}$}}}}={{{\mbox{\boldmath ${\cal L}$}}}}{\bf \Lambda}{{{\mbox{\boldmath ${\cal L}$}}}}^{-1},
\end{eqnarray}
with
\begin{eqnarray}
&&\hspace{-.7cm}{\bf \Lambda}=\begin{pmatrix}i{\cal H}_1&0\\0&-i{\cal H}_1\end{pmatrix},\quad {\cal H}_1=\rho^{1/2}{\cal H}_2^{1/2}\rho^{-1/2},\label{eqkk18}\\
&&\hspace{-.7cm}{{{\mbox{\boldmath ${\cal L}$}}}}=\begin{pmatrix}1 & 1\\ \frac{1}{\omega\rho}{\cal H}_1 & -\frac{1}{\omega\rho}{\cal H}_1\end{pmatrix},\quad
{{{\mbox{\boldmath ${\cal L}$}}}}^{-1}=\frac{1}{2}\begin{pmatrix}1 & \omega{\cal H}_1^{-1}\rho \\1 &-\omega{\cal H}_1^{-1}\rho\end{pmatrix}\label{eqkk19}
\end{eqnarray}
\citep{Corones75JMAA, Fishman84JMP, Wapenaar86GP2, Hoop96JMP}.
The square-root operator ${\cal H}_2^{1/2}$ is symmetric in the following sense
\begin{eqnarray}
&&\hspace{-0.7cm}\int_{{\mathbb{R}}^2}\{{\cal H}_2^{1/2}g({\bf x}_{\rm H})\}h({\bf x}_{\rm H}){\rm d}^2{\bf x}_{\rm H}=
\int_{{\mathbb{R}}^2}g({\bf x}_{\rm H})\{{\cal H}_2^{1/2}h({\bf x}_{\rm H})\}{\rm d}^2{\bf x}_{\rm H}
\label{eq20kk}
\end{eqnarray}
\citep{Wapenaar2001RS}, where $g({\bf x}_{\rm H})$ and $h({\bf x}_{\rm H})$ are functions in the horizontal plane with `sufficient decay at infinity'. Operator ${\cal H}_1$, as defined in equation (\ref{eqkk18}), is not symmetric,
but operator $\frac{1}{\rho}{\cal H}_1$ and its inverse, both appearing in equation (\ref{eqkk19}), are symmetric.
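For a laterally invariant medium the operator ${\cal H}_1/\omega$ reduces to multiplication by the vertical slowness $s_3$ in the slowness domain, so the composition operator ${{\mbox{\boldmath ${\cal L}$}}}$ and its inverse become ordinary $2\times 2$ matrices. The following sketch (under that assumed reduction) verifies that they are indeed each other's inverse:

```python
import cmath

# Slowness-domain sketch of the composition operator L and its
# inverse: H1/omega reduces to multiplication by the vertical
# slowness s3 (assumed laterally invariant medium), so
# L = [[1, 1], [s3/rho, -s3/rho]] and
# L^{-1} = (1/2) [[1, rho/s3], [1, -rho/s3]].
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rho = 1000.0
s3 = cmath.sqrt(1.0 / 2000.0**2 - (1.0 / 3500.0)**2)
L = [[1.0, 1.0], [s3 / rho, -s3 / rho]]
Linv = [[0.5, 0.5 * rho / s3], [0.5, -0.5 * rho / s3]]
I2 = matmul(L, Linv)
```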
From here onward we assume that the medium at and above $\partial\mathbb{D}_F$ is homogeneous and may be dissipative, with mass density $\rho_0$ and propagation velocity $c_0$.
The medium below $\partial\mathbb{D}_F$ may be inhomogeneous and dissipative, and it is source-free.
In the upper half-space (i.e., at and above $\partial\mathbb{D}_F$) we express the wave field vector ${\bf q}({\bf x})$ in terms of downgoing and upgoing waves $p^+({\bf x})$ and $p^-({\bf x})$ via
\begin{eqnarray}\label{eq15ffrev}
{\bf q}({\bf x})={{{\mbox{\boldmath ${\cal L}$}}}}({\bf x}){\bf p}({\bf x}),
\end{eqnarray}
with
\begin{eqnarray}
{\bf p}({\bf x})=\begin{pmatrix}p^+ \\p^-\end{pmatrix}({\bf x}).\label{eq517rev}
\end{eqnarray}
Note that these equations imply $p({\bf x})=p^+({\bf x}) + p^-({\bf x})$, hence, the downgoing and upgoing waves $p^+$ and $p^-$ are pressure-normalised.
We will now use equation (\ref{eq15ffrev}) and the properties of operator ${\cal H}_1$
to derive focusing functions and express them in terms of the components of the propagator matrix and vice versa. Substituting
equation (\ref{eq15ffrev}), with ${\bf x}$ replaced by ${\bf x}_F$, into the right-hand side of equation (\ref{eq1330}) gives
\begin{eqnarray}
{\bf q}({\bf x})=\int_{\partial\mathbb{D}_F} {\bf Y}({\bf x},{\bf x}_F){\bf p}({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq1330dec}
\end{eqnarray}
for $x_3\ge x_{3,F}$, with
\begin{eqnarray}
{\bf Y}({\bf x},{\bf x}_F)={\bf W}({\bf x},{\bf x}_F){{{{\mbox{\boldmath ${\cal L}$}}}}}({\bf x}_F),\label{eq15k}
\end{eqnarray}
or
\begin{eqnarray}
&&\hspace{-.7cm}{\bf Y}({\bf x},{\bf x}_F)= \begin{pmatrix}W^{p,p} & W^{p,v} \\ W^{v,p} & W^{v,v} \end{pmatrix}({\bf x},{\bf x}_F)
\begin{pmatrix}1 & 1\\ \frac{1}{\omega\rho_0}{\cal H}_1 & - \frac{1}{\omega\rho_0}{\cal H}_1\end{pmatrix}({\bf x}_F).\nonumber\\
&&\label{eq519}
\end{eqnarray}
The operators $\pm\frac{1}{\omega\rho_0}{\cal H}_1({\bf x}_F)$ in equation (\ref{eq519}) act, via equation (\ref{eq1330dec}), on $p^\pm({\bf x}_F)$.
However, since these operators are symmetric (in the sense of equation (\ref{eq20kk})), we may replace the actions on $p^\pm({\bf x}_F)$ by actions on the elements $W^{p,v}({\bf x},{\bf x}_F)$ and $W^{v,v}({\bf x},{\bf x}_F)$.
To be more specific, if we partition ${\bf Y}({\bf x},{\bf x}_F)$ as follows
\begin{eqnarray}
{\bf Y}({\bf x},{\bf x}_F)=\begin{pmatrix}Y^{p,+} & Y^{p,-} \\ Y^{v,+} & Y^{v,-} \end{pmatrix}({\bf x},{\bf x}_F),\label{eq22nn}
\end{eqnarray}
we obtain from equation (\ref{eq519}) for the elements of this matrix
\begin{eqnarray}
&&\hspace{-0.7cm}Y^{p,\pm}({\bf x},{\bf x}_F)=W^{p,p}({\bf x},{\bf x}_F)\pm\frac{1}{\omega\rho_0}{\cal H}_1({\bf x}_F)W^{p,v}({\bf x},{\bf x}_F),\label{eq11}\\
&&\hspace{-0.7cm}Y^{v,\pm}({\bf x},{\bf x}_F)=W^{v,p}({\bf x},{\bf x}_F)\pm\frac{1}{\omega\rho_0}{\cal H}_1({\bf x}_F)W^{v,v}({\bf x},{\bf x}_F).\label{eq11v}
\end{eqnarray}
We analyse these expressions one by one. First we consider the element $Y^{p,-}$.
From equations (\ref{eq1330dec}) and (\ref{eq22nn}) it can be seen that the superscript $p$ refers to the acoustic pressure $p({\bf x})$ contained in ${\bf q}({\bf x})$
and superscript $-$ refers to the upgoing wave field component $p^-({\bf x}_F)$ in ${\bf p}({\bf x}_F)$.
Using equations (\ref{eq9998d}), (\ref{eq424}) and (\ref{eq11}) we obtain
\begin{eqnarray}
Y^{p,-}({\bf x},{\bf x}_F)|_{x_3=x_{3,F}} = \delta({\bf x}_{\rm H}-{\bf x}_{{\rm H},F}),\label{eq9998dag}
\end{eqnarray}
which is a focusing condition.
Hence, we define
\begin{eqnarray}
&&\hspace{-.5cm}Y^{p,-}({\bf x},{\bf x}_F)=F^p({\bf x},{\bf x}_F)
=W^{p,p}({\bf x},{\bf x}_F)-\frac{1}{\omega\rho_0}{\cal H}_1({\bf x}_F)W^{p,v}({\bf x},{\bf x}_F),\label{eq15b}
\end{eqnarray}
with $F^p({\bf x},{\bf x}_F)$ denoting a focusing function for the acoustic pressure $p$, which focuses at ${\bf x}={\bf x}_F$ and continues as an upgoing field in the homogeneous upper half-space,
see the lower frame of Figure \ref{Fig1}.
Next, we consider the element $Y^{v,-}$. Superscript $v$ refers to the vertical particle velocity $v_3({\bf x})$ contained in ${\bf q}({\bf x})$ and superscript
$-$ refers again to the upgoing wavefield component $p^-({\bf x}_F)$ in ${\bf p}({\bf x}_F)$.
Using equations (\ref{eq9998d}), (\ref{eq424}) and (\ref{eq11v}) we obtain
\begin{eqnarray}
Y^{v,-}({\bf x},{\bf x}_F)|_{x_3=x_{3,F}} = -\frac{1}{\omega\rho_0}{\cal H}_1({\bf x}_F)\delta({\bf x}_{\rm H}-{\bf x}_{{\rm H},F}),\label{eq9998dagg}
\end{eqnarray}
which is also a focusing condition,
but somewhat more complicated than before because of the mix of the involved wavefield components $v_3({\bf x})$ and $p^-({\bf x}_F)$.
Hence, we define
\begin{eqnarray}
&&\hspace{-.5cm}Y^{v,-}({\bf x},{\bf x}_F)=F^v({\bf x},{\bf x}_F)
=W^{v,p}({\bf x},{\bf x}_F)-\frac{1}{\omega\rho_0}{\cal H}_1({\bf x}_F)W^{v,v}({\bf x},{\bf x}_F),\label{eq15bb}
\end{eqnarray}
with $F^v({\bf x},{\bf x}_F)$ denoting the particle velocity counterpart of the focusing function $F^p({\bf x},{\bf x}_F)$
(note that the definition of $F^v({\bf x},{\bf x}_F)$ is different from that in \cite{Wapenaar2022JASA}, to facilitate the derivations below).
The focusing functions $F^p({\bf x},{\bf x}_F)$ and $F^v({\bf x},{\bf x}_F)$, which together form the right column of matrix ${\bf Y}({\bf x},{\bf x}_F)$,
are illustrated in the lower frame of Figure \ref{Fig1}. They resemble the focusing function $f_2$ introduced in previous work \citep{Wapenaar2014GEO, Slob2014GEO},
which also focuses at the upper boundary
(as opposed to the focusing function $f_1$, which focuses inside the medium). However, there are also some notable differences.
First, $f_2({\bf x},{\bf x}_F)$ is defined in a truncated
version of the actual medium and is obtained from a superposition of downgoing and upgoing components, $f_2^+({\bf x},{\bf x}_F)$
and $f_2^-({\bf x},{\bf x}_F)$ respectively, at ${\bf x}$ inside the medium (at the lower boundary of the truncated medium).
Moreover, representations involving $f_2^+$ and $f_2^-$ ignore evanescent waves at $x_{3,F}$ and $x_3$. In contrast,
$F^p({\bf x},{\bf x}_F)$ and $F^v({\bf x},{\bf x}_F)$ are defined in the actual (i.e., untruncated)
medium and represent the full pressure and vertical particle velocity at ${\bf x}$ of a field that focuses at ${\bf x}_F$ at the upper boundary.
Since they are derived from the propagator matrix, these focusing functions
account for evanescent waves. The only decomposition takes place at the boundary $\partial\mathbb{D}_F$.
This decomposition, formulated by equation (\ref{eq519}), accounts for evanescent waves.
Last but not least, $F^p$ and $F^v$ hold for dissipative media and they are normalised differently from $f_2$.
Before we analyse the elements in the left column of matrix ${\bf Y}({\bf x},{\bf x}_F)$, we introduce an adjoint medium, with parameters $\bar\kappa({\bf x})=\kappa^*({\bf x})$ and
$\bar\rho({\bf x})=\rho^*({\bf x})$. The bar denotes the adjoint medium and the superscript
asterisk denotes complex conjugation. Since the original medium is dissipative, the adjoint medium
is effectual, with (for positive $\omega$) $\Im(\kappa)\le 0$ and $\Im(\rho)\le 0$. Waves propagating through an effectual medium gain energy \citep{Bojarski83JASA, Hoop88JASA}.
Adjoint media are usually associated with a computational state. The operator matrix
${\,\,\,\bar{\mbox{\!\!\!\boldmath ${\cal A}$}}}$ and the Helmholtz operator $\bar{\cal H}_2$ of the adjoint medium are defined similarly as ${{\mbox{\boldmath ${\cal A}$}}}$ and ${\cal H}_2$
in equations (\ref{eqAcoustic}) and (\ref{eqHelmholtz}), respectively, but with $\kappa({\bf x})$ and $\rho({\bf x})$ replaced by $\bar\kappa({\bf x})$ and $\bar\rho({\bf x})$, respectively.
Hence, $\bar{\cal H}_2={\cal H}_2^*$.
Analogous to equations (\ref{eq2.1}) and (\ref{eq9998d}), we define the propagator matrix $\bar{\bf W}({\bf x},{\bf x}_F)$ of the adjoint medium as the solution of
$\partial_3\bar{\bf W}({\bf x},{\bf x}_F) = {\,\,\,\bar{\mbox{\!\!\!\boldmath ${\cal A}$}}}({\bf x})\bar{\bf W}({\bf x},{\bf x}_F)$, with boundary condition
$\bar{\bf W}({\bf x},{\bf x}_F)|_{x_3={x_{3,F}}} = {\bf I}\delta({{\bf x}_{\rm H}}-{{\bf x}_{{\rm H},F}})$.
In Appendix \ref{AppB} we derive
\begin{eqnarray}
\begin{pmatrix}\bar W^{p,p} & \bar W^{p,v} \\
\bar W^{v,p} & \bar W^{v,v} \end{pmatrix}({\bf x},{\bf x}_F)=
\begin{pmatrix}W^{p,p*} & -W^{p,v*} \\
-W^{v,p*} & W^{v,v*} \end{pmatrix}({\bf x},{\bf x}_F)\label{eq7}
\end{eqnarray}
(equation (\ref{eq65aws})).
For the square-root operator we have, as for the Helmholtz operator,
\begin{eqnarray}
\bar{\cal H}_1={\cal H}_1^*\label{eq9}
\end{eqnarray}
\citep{Wapenaar2001RS}.
Using equations (\ref{eq7}) and (\ref{eq9}) in equations (\ref{eq11}) and (\ref{eq11v}), we find $\bar Y^{p,+}({\bf x},{\bf x}_F)=Y^{p,-*}({\bf x},{\bf x}_F)$
and $\bar Y^{v,+}({\bf x},{\bf x}_F)=-Y^{v,-*}({\bf x},{\bf x}_F)$. Hence, using equations (\ref{eq15b}) and (\ref{eq15bb}), we find for the elements in the left column of matrix ${\bf Y}({\bf x},{\bf x}_F)$
\begin{eqnarray}
&&\hspace{-.7cm}Y^{p,+}({\bf x},{\bf x}_F)=\bar F^{p*}({\bf x},{\bf x}_F)
=W^{p,p}({\bf x},{\bf x}_F)+\frac{1}{\omega\rho_0}{\cal H}_1({\bf x}_F)W^{p,v}({\bf x},{\bf x}_F),\label{eq16}\\
&&\hspace{-.7cm}Y^{v,+}({\bf x},{\bf x}_F)=-\bar F^{v*}({\bf x},{\bf x}_F)
=W^{v,p}({\bf x},{\bf x}_F)+\frac{1}{\omega\rho_0}{\cal H}_1({\bf x}_F)W^{v,v}({\bf x},{\bf x}_F).\label{eq16bb}
\end{eqnarray}
For matrix ${\bf Y}({\bf x},{\bf x}_F)$ we thus obtain
\begin{eqnarray}
{\bf Y}({\bf x},{\bf x}_F)= \begin{pmatrix}\bar F^{p*} & F^p \\ -\bar F^{v*} & F^v \end{pmatrix}({\bf x},{\bf x}_F).\label{eq21}
\end{eqnarray}
Note that $F^p$, $F^v$, $\bar F^{p*}$ and $\bar F^{v*}$ are expressed in terms of the components of the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$ via equations
(\ref{eq15b}), (\ref{eq15bb}), (\ref{eq16}) and (\ref{eq16bb}). Conversely,
we can express the components of the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$ in terms of the focusing functions $F^p$, $F^v$, $\bar F^{p*}$ and $\bar F^{v*}$.
Inverting equation (\ref{eq15k}) yields
\begin{eqnarray}
{\bf W}({\bf x},{\bf x}_F)={\bf Y}({\bf x},{\bf x}_F){{{{\mbox{\boldmath ${\cal L}$}}}}}^{-1}({\bf x}_F),\label{eqWYD}
\end{eqnarray}
with ${{{{\mbox{\boldmath ${\cal L}$}}}}}^{-1}$ defined in equation (\ref{eqkk19}).
Since operator $\frac{1}{\rho}{\cal H}_1$ is symmetric, its inverse ${\cal H}_1^{-1}\rho$ is symmetric as well.
Hence, in equation (\ref{eqWYD}) these operators can be taken to act on the elements of matrix ${\bf Y}({\bf x},{\bf x}_F)$. This yields
\begin{eqnarray}
&&\hspace{-.4cm}W^{p,p}({\bf x},{\bf x}_F) = \frac{1}{2}\bigl( \bar F^{p*} + F^p\bigr)({\bf x},{\bf x}_F),\label{eq14}\\
&&\hspace{-.4cm}W^{p,v}({\bf x},{\bf x}_F)=\frac{\omega\rho_0}{2}{\cal H}_1^{-1}({\bf x}_F)\bigl( \bar F^{p*} - F^p\bigr)({\bf x},{\bf x}_F),\label{eq15}\\
&&\hspace{-.4cm}W^{v,p}({\bf x},{\bf x}_F) = \frac{1}{2}\bigl( -\bar F^{v*} + F^v\bigr)({\bf x},{\bf x}_F),\label{eq14v}\\
&&\hspace{-.4cm}W^{v,v}({\bf x},{\bf x}_F)=-\frac{\omega\rho_0}{2}{\cal H}_1^{-1}({\bf x}_F)\bigl( \bar F^{v*} + F^v\bigr)({\bf x},{\bf x}_F).\label{eq15v}
\end{eqnarray}
For the special case of a lossless medium, ignoring evanescent wave modes at $\partial\mathbb{D}_F$, we can omit the bars on $F^p$ and $F^v$.
For this situation equations (\ref{eq14}) $-$ (\ref{eq15v}) simplify to
\begin{eqnarray}
&&\hspace{-.2cm}W^{p,p}({\bf x},{\bf x}_F) = \Re \{F^p({\bf x},{\bf x}_F)\},\label{eq14g}\\
&&\hspace{-.2cm}W^{p,v}({\bf x},{\bf x}_F)=-i\omega\rho_0{\cal H}_1^{-1}({\bf x}_F)\Im\{F^p({\bf x},{\bf x}_F)\},\label{eq15g}\\
&&\hspace{-.2cm}W^{v,p}({\bf x},{\bf x}_F) = i\Im\{F^v({\bf x},{\bf x}_F)\},\label{eq14vg}\\
&&\hspace{-.2cm}W^{v,v}({\bf x},{\bf x}_F)=-\omega\rho_0{\cal H}_1^{-1}({\bf x}_F)\Re\{F^v({\bf x},{\bf x}_F)\}.\label{eq15vg}
\end{eqnarray}
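The lossless relations above can be verified for the simplest case of a single homogeneous layer, where the focusing function in the slowness-frequency domain reduces to a pure upgoing phase shift (an assumed convention in this sketch). Taking the real and imaginary parts of this phase factor should return the propagator components:

```python
import cmath

# For a single homogeneous lossless layer the focusing function is
# F^p = exp(-1j*theta) with theta = omega*s3*dz (assumed sign
# convention).  Re{F^p} and Im{F^p} should then return the
# propagator components W^{p,p} = cos(theta) and
# W^{p,v} = 1j*(rho0/s3)*sin(theta), as in the lossless relations.
c, rho0, s1, omega, dz = 2000.0, 1000.0, 1.0 / 3500.0, 2 * cmath.pi * 50.0, 200.0
s3 = cmath.sqrt(1.0 / c**2 - s1**2)       # propagating regime: s1 < 1/c
theta = (omega * s3 * dz).real
Fp = cmath.exp(-1j * theta)
Wpp_rec = Fp.real                          # W^{p,p} = Re{F^p}
Wpv_rec = -1j * (rho0 / s3) * Fp.imag      # -i omega rho0 H1^{-1} Im{F^p}
```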
We illustrate the focusing function and its relation with the propagator matrix with a numerical example.
Applying the transformations of equations (\ref{eq99950b}) and (\ref{eq500a}) to equation (\ref{eq15b}) (assuming a laterally invariant medium), taking
${\bf x}_F=(0,0,x_{3,F})$ and $s_2=0$, we obtain
\begin{eqnarray}\label{eq41W}
F^p(s_1,x_3,x_{3,F},\tau)&=&W^{p,p}(s_1,x_3,x_{3,F},\tau)
-\frac{s_{3,0}}{\rho_0}W^{p,v}(s_1,x_3,x_{3,F},\tau),
\end{eqnarray}
with vertical slowness $s_{3,0}=\sqrt{1/c_0^2-s_1^2}$ being the spatial Fourier transform of $\frac{1}{\omega}{\cal H}_1$ for the laterally invariant medium
(here we assumed $s_1^2<1/c_0^2$).
Equation (\ref{eq41W}) shows how the superposition of the even component $W^{p,p}$ of Figure \ref{Figure2}(b) and the odd component $W^{p,v}$ of Figure \ref{Figure2}(c)
yields the focusing function $F^p(s_1,x_3,x_{3,F},\tau)$. This focusing function is shown in Figure \ref{Figure4}(a) for $s_1=1/3500$ s/m.
The upper trace at $x_{3,F}=0$ m represents the focusing condition $F^p(s_1,x_{3,F},x_{3,F},\tau)=\delta(\tau)$.
At and above $x_{3,F}$ the focusing function is an upgoing field.
The time-reversed focusing function $F^p(s_1,x_3,x_{3,F},-\tau)$ is shown in Figure \ref{Figure4}(b).
The focusing function of Figure \ref{Figure4}(a) and its time-reversed version of Figure \ref{Figure4}(b) can be combined to give the components of the propagator matrix.
To this end, equations (\ref{eq14g}) and (\ref{eq15g}) are transformed to (assuming $s_1^2<1/c_0^2$)
%
\begin{eqnarray}\label{eq42W}
&&\hspace{-.7cm}W^{p,p}(s_1,x_3,x_{3,F},\tau)=
\frac{1}{2}\Bigl(F^p(-s_1,x_3,x_{3,F},-\tau)+F^p(s_1,x_3,x_{3,F},\tau)\Bigr),\label{eq43FW}\\
&&\hspace{-.7cm}W^{p,v}(s_1,x_3,x_{3,F},\tau)=
\frac{\rho_0}{2s_{3,0}}\Bigl(F^p(-s_1,x_3,x_{3,F},-\tau)-F^p(s_1,x_3,x_{3,F},\tau)\Bigr).\label{eq44FW}
\end{eqnarray}
For the acoustic case all components are symmetric in $s_1$, i.e., \\
$F^p(-s_1,x_3,x_{3,F},-\tau)=F^p(s_1,x_3,x_{3,F},-\tau)$, etc.
Hence, equations (\ref{eq43FW}) and (\ref{eq44FW})
show how the even and odd components $W^{p,p}(s_1,x_3,x_{3,F},\tau)$ and $W^{p,v}(s_1,x_3,x_{3,F},\tau)$ of Figures \ref{Figure2}(b) and \ref{Figure2}(c)
are obtained from the focusing function and its time-reversal of Figure \ref{Figure4}.
\begin{figure}
\centerline{\epsfysize=7.cm \epsfbox{Fig4a.pdf}}
\vspace{0.4cm}
\centerline{\epsfysize=7. cm \epsfbox{Fig4b.pdf}}
\vspace{0.cm}
\caption{(a) Focusing function $F^p(s_1,x_3,x_{3,F},\tau)$ (for fixed $s_1=1/3500$ s/m).
(b) Time-reversed focusing function $F^p(s_1,x_3,x_{3,F},-\tau)$. }\label{Figure4}
\end{figure}
\subsection{Representations with acoustic Marchenko focusing functions}\label{sec2.4}
Substituting the expressions for ${\bf q}({\bf x})$, ${\bf p}({\bf x}_F)$ and ${\bf Y}({\bf x},{\bf x}_F)$
(equations (\ref{eq9996ge}), (\ref{eq517rev}) and (\ref{eq21})) into equation (\ref{eq1330dec}) gives
the following representations for the acoustic pressure $p({\bf x})$ and the vertical particle velocity $v_3({\bf x})$ inside the inhomogeneous medium
\begin{eqnarray}
p({\bf x})&=&\int_{\partial\mathbb{D}_F} \bar F^{p*}({\bf x},{\bf x}_F)p^+({\bf x}_F){\rm d}^2{\bf x}_F
+\int_{\partial\mathbb{D}_F} F^p({\bf x},{\bf x}_F)p^-({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq13}\\
v_3({\bf x})&=&-\int_{\partial\mathbb{D}_F} \bar F^{v*}({\bf x},{\bf x}_F)p^+({\bf x}_F){\rm d}^2{\bf x}_F
+\int_{\partial\mathbb{D}_F} F^v({\bf x},{\bf x}_F)p^-({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq13bb}
\end{eqnarray}
for $x_3\ge x_{3,F}$. These expressions are exact and hold for dissipative media.
Equation (\ref{eq13}) is a generalisation of equation (17) of \cite{Wapenaar2022GEO} for dissipative media.
We use equations (\ref{eq13}) and (\ref{eq13bb}) to derive representations for Green's functions between the boundary $\partial\mathbb{D}_F$ and any position ${\bf x}$ inside the medium.
To this end, we define a unit point source of vertical force at ${\bf x}_S$ just above $\partial\mathbb{D}_F$.
For the downgoing field at $\partial\mathbb{D}_F$ (i.e., just below the source), we then have $p^+({\bf x}_F)=\frac{1}{2}\delta({\bf x}_{{\rm H},F}-{\bf x}_{{\rm H},S})$, where
${\bf x}_{{\rm H},S}$ denotes the horizontal coordinates of ${\bf x}_S$. The upgoing field at $\partial\mathbb{D}_F$ is the reflection response to this downgoing source field, hence
$p^-({\bf x}_F)=\frac{1}{2} R({\bf x}_F,{\bf x}_S)$.
The field at ${\bf x}$ inside the medium is the Green's response to the source at ${\bf x}_S$, hence
$p({\bf x})=G^{p,f}({\bf x},{\bf x}_S)$ and $v_3({\bf x})=G^{v,f}({\bf x},{\bf x}_S)$.
Here the second superscript ($f$) refers to the vertical force source at ${\bf x}_S$, whereas the
first superscripts ($p$ and $v$) refer to the observed quantities (pressure and vertical particle velocity) at ${\bf x}$.
Substitution of these expressions for $p^\pm({\bf x}_F)$, $p({\bf x})$ and $v_3({\bf x})$ into equations (\ref{eq13}) and (\ref{eq13bb}) gives
\begin{eqnarray}
2G^{p,f}({\bf x},{\bf x}_S)&=& \int_{\partial\mathbb{D}_F} F^p({\bf x},{\bf x}_F)R({\bf x}_F,{\bf x}_S){\rm d}^2{\bf x}_F
+\bar F^{p*}({\bf x},{\bf x}_S),\label{eq13G}\\
2G^{v,f}({\bf x},{\bf x}_S)&=& \int_{\partial\mathbb{D}_F} F^v({\bf x},{\bf x}_F)R({\bf x}_F,{\bf x}_S){\rm d}^2{\bf x}_F
-\bar F^{v*}({\bf x},{\bf x}_S),\label{eq13Gbb}
\end{eqnarray}
for $x_3\ge x_{3,F}$. \cite{Slob2016PRL} derived similar representations for decomposed wave fields in dissipative media. In the present derivation we only used decomposition at the
boundary $\partial\mathbb{D}_F$ (similar to \cite{Diekmann2021PRR} and \cite{Wapenaar2021GJI}).
This implies that inside the medium the wavefield does not need to be decomposed into downgoing and upgoing waves and that evanescent waves can be present.
When the medium is lossless and when evanescent waves are neglected at $\partial\mathbb{D}_F$, the bars on $F^p$ and $F^v$ in representations (\ref{eq13G}) and (\ref{eq13Gbb}) can be omitted.
Using the Marchenko method, these focusing functions can then be retrieved from the reflection response $R({\bf x}_F,{\bf x}_S)$ and
a smooth background model \citep{Wapenaar2014GEO, Elison2020GJI}.
Since representations (\ref{eq13G}) and (\ref{eq13Gbb}) account for evanescent waves inside the medium,
the retrieved focusing functions potentially also account for evanescent waves inside the
medium (this is subject of current research). Once the focusing functions are found, they can be used to retrieve the Green's functions
$G^{p,f}({\bf x},{\bf x}_S)$ and $G^{v,f}({\bf x},{\bf x}_S)$
(from equations (\ref{eq13G}) and (\ref{eq13Gbb}))
and all components of the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$ (from equations (\ref{eq14}) $-$ (\ref{eq15v})).
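A minimal sanity check of the Green's function representation (\ref{eq13G}) is the degenerate case of a homogeneous lossless half-space below the boundary, for which the reflection response vanishes and $2G^{p,f}=F^{p*}$. The result should coincide with direct downgoing propagation of the source field $p^+=\frac{1}{2}$ (a slowness-domain sketch under an assumed sign convention):

```python
import cmath

# With R = 0 (homogeneous lossless half-space below the boundary),
# the representation reduces to 2*G^{p,f} = F^{p*}; this should
# equal the downgoing source field p^+ = 1/2 propagated over dz.
c, s1, omega, dz = 2000.0, 1.0 / 3500.0, 2 * cmath.pi * 20.0, 250.0
s3 = cmath.sqrt(1.0 / c**2 - s1**2)
theta = (omega * s3 * dz).real
Fp = cmath.exp(-1j * theta)                # focusing function of the half-space
G = 0.5 * Fp.conjugate()                   # representation with R = 0
G_direct = 0.5 * cmath.exp(1j * theta)     # direct downgoing propagation
```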
\section{Acoustic transfer matrix and decomposed focusing functions}\label{sec3}
\subsection{Acoustic transfer matrix}
Given the downgoing and upgoing fields $p^+({\bf x}_F)$ and $p^-({\bf x}_F)$ at the boundary $\partial\mathbb{D}_F$,
we `transfer' these fields to downgoing and upgoing fields $p^+({\bf x})$ and $p^-({\bf x})$ at any depth level $x_3$ inside the medium using the following expression
\begin{eqnarray}
{\bf p}({\bf x})=\int_{\partial\mathbb{D}_F} {{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F){\bf p}({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq1330trans}
\end{eqnarray}
for $x_3\ge x_{3,F}$. Vectors ${\bf p}({\bf x}_F)$ and ${\bf p}({\bf x})$
contain the downgoing and upgoing fields (equation (\ref{eq517rev})) and ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$
is the transfer matrix, which we partition as follows
\begin{eqnarray}\label{eq424T}
{{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)= \begin{pmatrix}{\cal T}^{+,+} & {\cal T}^{+,-} \\
{\cal T}^{-,+} & {\cal T}^{-,-} \end{pmatrix}({\bf x},{\bf x}_F).
\end{eqnarray}
For each component of this matrix, the superscripts refer to the propagation direction at ${\bf x}$ and at ${\bf x}_F$, respectively.
Equation (\ref{eq1330trans}) is illustrated in the upper-right frame of Figure \ref{Fig1}.
For horizontally layered media, the transfer matrix is usually built up recursively from interface to interface
\citep{Born65Book, Katsidis2002AO, Elison2020PHD, Dukalski2022EAGE, Dukalski2022IMAGE}. Here we follow a different approach to derive an expression for
${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$ for laterally varying media.
Substituting equation (\ref{eq15ffrev}) into equation (\ref{eq1330}) we obtain equation (\ref{eq1330trans}), with
\begin{eqnarray}
{{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)={{{\mbox{\boldmath ${\cal L}$}}}}^{-1}({\bf x}){\bf W}({\bf x},{\bf x}_F){{{{\mbox{\boldmath ${\cal L}$}}}}}({\bf x}_F),\label{eq31}
\end{eqnarray}
with ${{{{\mbox{\boldmath ${\cal L}$}}}}}({\bf x})$ and its inverse defined in equation (\ref{eqkk19}).
Equation (\ref{eq31}), which relates the transfer matrix ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$ to the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$,
is illustrated in the upper half of Figure \ref{Fig1}.
In the next section we show that the transfer matrix can be expressed in terms of decomposed Marchenko focusing functions.
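Equation (\ref{eq31}) can be illustrated numerically: for a single homogeneous layer the transfer matrix should be diagonal, with pure phase shifts for the down- and upgoing constituents. The sketch below (same assumed sign convention and slowness-domain reduction as in the earlier sketches) computes ${{{\mbox{\boldmath ${\cal T}$}}}}={{{\mbox{\boldmath ${\cal L}$}}}}^{-1}{\bf W}{{{\mbox{\boldmath ${\cal L}$}}}}$ explicitly:

```python
import cmath

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def prop(c, rho, s1, omega, dz):
    # homogeneous-layer propagator (assumed sign convention)
    s3 = cmath.sqrt(1.0 / c**2 - s1**2)
    th = omega * s3 * dz
    return [[cmath.cos(th), 1j * (rho / s3) * cmath.sin(th)],
            [1j * (s3 / rho) * cmath.sin(th), cmath.cos(th)]]

# T = L^{-1} W L: for a single homogeneous layer the transfer
# matrix is diagonal, with phase shifts exp(+/- 1j*omega*s3*dz).
c, rho, s1, omega, dz = 2000.0, 1000.0, 1.0 / 3500.0, 2 * cmath.pi * 40.0, 120.0
s3 = cmath.sqrt(1.0 / c**2 - s1**2)
L = [[1.0, 1.0], [s3 / rho, -s3 / rho]]
Linv = [[0.5, 0.5 * rho / s3], [0.5, -0.5 * rho / s3]]
T = matmul(Linv, matmul(prop(c, rho, s1, omega, dz), L))
```

The columns of `L` are eigenvectors of the system matrix, which is why the similarity transform diagonalises the propagator of a homogeneous layer.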
\subsection{Relation with decomposed acoustic Marchenko focusing functions}
From equations (\ref{eq15k}) and (\ref{eq31}) we find
\begin{eqnarray}
{{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)={{{\mbox{\boldmath ${\cal L}$}}}}^{-1}({\bf x}){\bf Y}({\bf x},{\bf x}_F).\label{eq31k}
\end{eqnarray}
According to equation (\ref{eq21}), the right column of ${\bf Y}({\bf x},{\bf x}_F)$ contains $F^p({\bf x},{\bf x}_F)$ and $F^v({\bf x},{\bf x}_F)$,
i.e., the pressure and vertical particle velocity components at ${\bf x}$ of the focusing function.
Hence, analogous to ${\bf p}({\bf x})={{{\mbox{\boldmath ${\cal L}$}}}}^{-1}({\bf x}){\bf q}({\bf x})$,
we obtain for the right column of ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$
\begin{eqnarray}\label{eq55b}
\begin{pmatrix} F^+({\bf x},{\bf x}_F) \\ F^-({\bf x},{\bf x}_F) \end{pmatrix}=
\frac{1}{2}\begin{pmatrix} 1 & \omega{\cal H}_1^{-1}({\bf x})\rho({\bf x})\\1 & -\omega{\cal H}_1^{-1}({\bf x})\rho({\bf x})\end{pmatrix}
\begin{pmatrix} F^p({\bf x},{\bf x}_F) \\ F^v({\bf x},{\bf x}_F) \end{pmatrix},
\end{eqnarray}
with $F^+({\bf x},{\bf x}_F)$ and $F^-({\bf x},{\bf x}_F)$
being the downgoing and upgoing parts at ${\bf x}$ of the focusing function $F^p({\bf x},{\bf x}_F)$.
According to equation (\ref{eq21}), the left column of ${\bf Y}({\bf x},{\bf x}_F)$ contains $\bar F^{p*}({\bf x},{\bf x}_F)$ and $-\bar F^{v*}({\bf x},{\bf x}_F)$.
Hence, for the left column of ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$ we obtain
\begin{eqnarray}
\frac{1}{2}\begin{pmatrix} 1 & \omega{\cal H}_1^{-1}({\bf x})\rho({\bf x})\\1 & -\omega{\cal H}_1^{-1}({\bf x})\rho({\bf x})\end{pmatrix}
\begin{pmatrix} \bar F^{p*}({\bf x},{\bf x}_F) \\ -\bar F^{v*}({\bf x},{\bf x}_F) \end{pmatrix},
\end{eqnarray}
or, using ${\cal H}_1=\bar{\cal H}_1^*$ (equation (\ref{eq9})) and $\rho=\bar\rho^*$,
\begin{eqnarray}
\frac{1}{2}\begin{pmatrix} 1 & -\omega\bar{\cal H}_1^{-1}({\bf x})\bar\rho({\bf x})\\1 & \omega\bar{\cal H}_1^{-1}({\bf x})\bar\rho({\bf x})\end{pmatrix}^*
\begin{pmatrix} \bar F^p({\bf x},{\bf x}_F) \\ \bar F^v({\bf x},{\bf x}_F) \end{pmatrix}^*.
\end{eqnarray}
Comparing this with equation (\ref{eq55b}) we find that this gives a vector with $\bar F^{-*}({\bf x},{\bf x}_F)$ and $\bar F^{+*}({\bf x},{\bf x}_F)$.
This is the left column of ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$. Hence, we have obtained
\begin{eqnarray}\label{eq58vv}
{{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)=\begin{pmatrix} \bar F^{-*}({\bf x},{\bf x}_F) & F^+({\bf x},{\bf x}_F)\\
\bar F^{+*}({\bf x},{\bf x}_F) & F^-({\bf x},{\bf x}_F)\end{pmatrix},
\end{eqnarray}
see the upper-right frame of Figure \ref{Fig1}.
Hence, the transfer matrix for an inhomogeneous dissipative acoustic medium is expressed in terms of decomposed focusing functions of the medium and its adjoint.
We consider the special case of a horizontally layered medium.
Applying the transformations of equations (\ref{eq99950b}) and (\ref{eq500a}) to equation (\ref{eq58vv}), taking ${\bf x}_F=(0,0,x_{3,F})$,
we obtain
\begin{eqnarray}\label{eq59ff}
&&\hspace{-.7cm}{{{\mbox{\boldmath ${\cal T}$}}}}({\bf s},x_3,x_{3,F},\tau)=
\begin{pmatrix} \bar F^-(-{\bf s},x_3,x_{3,F},-\tau) & F^+({\bf s},x_3,x_{3,F},\tau)\\
\bar F^+(-{\bf s},x_3,x_{3,F},-\tau) & F^-({\bf s},x_3,x_{3,F},\tau)\end{pmatrix}.
\end{eqnarray}
\cite{Dukalski2022EAGE, Dukalski2022IMAGE} used a recursive approach and obtained an expression similar to equation (\ref{eq59ff}). In their derivation
they used a path-reversal operator ${\cal P}$, which is equivalent to
(i) taking the adjoint medium,
(ii) taking the complex conjugate (or, in the time domain, applying time-reversal) and (iii) changing the sign of the horizontal slowness.
Hence, ${\cal P}\{F^\pm({\bf s},x_3,x_{3,F},\tau)\}$ is equivalent to $\bar F^\pm(-{\bf s},x_3,x_{3,F},-\tau)$.
For the lossless medium of Figure \ref{Figure2}(a), the decomposed focusing functions $F^-(s_1,x_3,x_{3,F},\tau)$ and $F^+(s_1,x_3,x_{3,F},\tau)$
for $s_1=1/3500$ m/s and $s_2=0$
are shown in Figures \ref{Figure5}(a) and \ref{Figure5}(b), respectively. For each $x_3$, the function $F^-(s_1,x_3,x_{3,F},\tau)$ can be seen as the complicated
field that needs to be emitted upward from $x_3$ to arrive as a single upward propagating field at the focal depth $x_{3,F}$ at $\tau=0$. For the same $x_3$,
the function $F^+(s_1,x_3,x_{3,F},\tau)$ is the downward reflected response to $F^-(s_1,x_3,x_{3,F},\tau)$.
Figures \ref{Figure5}(b) and \ref{Figure5}(a) together form the right column of the transformed transfer matrix ${{{\mbox{\boldmath ${\cal T}$}}}}(s_1,x_3,x_{3,F},\tau)$.
Their superposition gives the focusing function $F^p(s_1,x_3,x_{3,F},\tau)$, shown in Figure \ref{Figure4}(a).
\begin{figure}
\centerline{\epsfysize=7.cm \epsfbox{Fig5a.pdf}}
\vspace{.4cm}
\centerline{\epsfysize=7. cm \epsfbox{Fig5b.pdf}}
\vspace{0.cm}
\caption{ (a) Decomposed focusing function $F^-(s_1,x_3,x_{3,F},\tau)$ (for fixed $s_1=1/3500$ m/s).
(b) Decomposed focusing function $F^+(s_1,x_3,x_{3,F},\tau)$. }\label{Figure5}
\end{figure}
\subsection{Representations with decomposed acoustic Marchenko focusing functions}\label{sec3.3}
Substituting the expressions for ${\bf p}({\bf x})$ and ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$ (equations (\ref{eq517rev}) and (\ref{eq58vv}))
into equation (\ref{eq1330trans}), gives
the following representations for the downgoing and upgoing components of the acoustic pressure, $p^+({\bf x})$ and $p^-({\bf x})$ respectively, inside the inhomogeneous medium
\begin{eqnarray}
p^+({\bf x})&=&\int_{\partial\mathbb{D}_F} \bar F^{-*}({\bf x},{\bf x}_F)p^+({\bf x}_F){\rm d}^2{\bf x}_F
+\int_{\partial\mathbb{D}_F} F^+({\bf x},{\bf x}_F)p^-({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq13deco}\\
p^-({\bf x})&=&\int_{\partial\mathbb{D}_F} \bar F^{+*}({\bf x},{\bf x}_F)p^+({\bf x}_F){\rm d}^2{\bf x}_F
+\int_{\partial\mathbb{D}_F} F^-({\bf x},{\bf x}_F)p^-({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq13bbdeco}
\end{eqnarray}
for $x_3\ge x_{3,F}$. These expressions are exact and hold for dissipative media.
Making substitutions similar to those in section \ref{sec2.4}, we obtain
\begin{eqnarray}
2G^{+,f}({\bf x},{\bf x}_S)&=& \int_{\partial\mathbb{D}_F} F^+({\bf x},{\bf x}_F)R({\bf x}_F,{\bf x}_S){\rm d}^2{\bf x}_F
+\bar F^{-*}({\bf x},{\bf x}_S),\label{eq13Gdeco}\\
2G^{-,f}({\bf x},{\bf x}_S)&=& \int_{\partial\mathbb{D}_F} F^-({\bf x},{\bf x}_F)R({\bf x}_F,{\bf x}_S){\rm d}^2{\bf x}_F
+\bar F^{+*}({\bf x},{\bf x}_S),\label{eq13Gbbdeco}
\end{eqnarray}
for $x_3\ge x_{3,F}$. Here $G^{\pm,f}({\bf x},{\bf x}_S)$ stands for the downgoing ($+$) and upgoing ($-$) part of the Green's function $G^{p,f}({\bf x},{\bf x}_S)$.
When the medium is lossless and when evanescent waves are neglected at $\partial\mathbb{D}_F$ and at depth level $x_3$ inside the medium,
the bars on $F^+$ and $F^-$ in representations (\ref{eq13Gdeco}) and (\ref{eq13Gbbdeco}) can be omitted.
Using the Marchenko method, these decomposed focusing functions can then be retrieved from the reflection response $R({\bf x}_F,{\bf x}_S)$ and
a smooth background model \citep{Wapenaar2014GEO, Slob2014GEO}.
Once the focusing functions are found, they can be used to retrieve the decomposed Green's functions
$G^{+,f}({\bf x},{\bf x}_S)$ and $G^{-,f}({\bf x},{\bf x}_S)$
(from equations (\ref{eq13Gdeco}) and (\ref{eq13Gbbdeco}))
and all components of the transfer matrix ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$ (from equation (\ref{eq58vv})).
\section{Unified propagator matrix and focusing functions}\label{sec4}
We extend the theory of section \ref{sec2} to unified wave fields.
\subsection{Unified matrix-vector wave equation}\label{sec4.1}
We consider again matrix-vector wave equation (\ref{eq2.0}), but this time for unified wave fields. We partition the $N\times 1$ wave field vector ${\bf q}$,
the $N\times 1$ source vector ${\bf d}$ and
the $N\times N$ operator matrix $\mbox{\boldmath ${\cal A}$}$ as follows
\begin{eqnarray}
{\bf q}=\begin{pmatrix}{\bf q}_1\\ {\bf q}_2\end{pmatrix},
\quad{\bf d}=\begin{pmatrix}{\bf d}_1\\ {\bf d}_2\end{pmatrix},
\quad \mbox{\boldmath ${\cal A}$}=\begin{pmatrix}\mbox{\boldmath ${\cal A}$}_{11}&\mbox{\boldmath ${\cal A}$}_{12}\\
\mbox{\boldmath ${\cal A}$}_{21}&\mbox{\boldmath ${\cal A}$}_{22}\end{pmatrix}.
\label{eq7em}
\end{eqnarray}
The vectors and operator matrix for different wave phenomena can be found in various references
\citep{Ursin83GEO, Stralen97PHD, Loseth2007GJI, Woodhouse74GJR, Gelinsky97GEO, Haartsen97JGR, White2006SIAM}.
A comprehensive overview is given by \citet{Wapenaar2019GJI} for acoustic waves ($N=2$),
quantum mechanical waves obeying the Schr\"odinger equation ($N=2$), electromagnetic waves ($N=4$),
elastodynamic waves ($N=6$), poroelastodynamic waves ($N=8$), piezoelectric waves ($N=10$) and seismoelectric waves ($N=12$).
For all these wave phenomena, the operator matrix $\mbox{\boldmath ${\cal A}$}$ obeys the following symmetry properties
\begin{eqnarray}
\mbox{\boldmath ${\cal A}$}^t&=&-{\bf N}\mbox{\boldmath ${\cal A}$}{\bf N}^{-1},\label{eqsym1f}\\
\mbox{\boldmath ${\cal A}$}^\dagger&=&-{\bf K}{\,\,\,\bar{\mbox{\!\!\!\boldmath ${\cal A}$}}}{\bf K}^{-1},\label{eqsym2f}\\
\mbox{\boldmath ${\cal A}$}^*&=&{\bf J}{\,\,\,\bar{\mbox{\!\!\!\boldmath ${\cal A}$}}}{\bf J}^{-1},\label{eqsym3f}
\end{eqnarray}
where
\begin{eqnarray}
{\bf N}=\begin{pmatrix} {\bf O} & {\bf I} \\ -{\bf I} & {\bf O}\end{pmatrix},\,
{\bf K}=\begin{pmatrix} {\bf O} & {\bf I} \\ {\bf I} & {\bf O}\end{pmatrix},\,
{\bf J}=\begin{pmatrix} {\bf I}& {\bf O}\\ {\bf O} & -{\bf I}\end{pmatrix},\label{eq23a}
\end{eqnarray}
where ${\bf O}$ and ${\bf I}$ are zero and identity matrices of appropriate size.
Superscript $t$ denotes transposition of the matrix and the operators contained in it, with $\partial_\alpha^t=-\partial_\alpha$.
Superscript $\dagger$ denotes transposition and complex conjugation. For further details we refer to the aforementioned references.
In Appendix \ref{AppA} we discuss one more application of the operator matrix.
Starting with the Dirac equation \citep{Book67Sakurai} we derive an expression for the $4\times 4$ operator matrix $\mbox{\boldmath ${\cal A}$}$.
This matrix obeys symmetry relations (\ref{eqsym1f}) $-$ (\ref{eqsym3f}), but with
\begin{eqnarray}
{\bf N}=\begin{pmatrix} {\bf O} & i\mbox{\boldmath $\sigma$}_1 \\ i\mbox{\boldmath $\sigma$}_1 & {\bf O}\end{pmatrix},\,
{\bf K}=\begin{pmatrix} {\bf O} & \mbox{\boldmath $\sigma$}_3 \\ \mbox{\boldmath $\sigma$}_3 & {\bf O}\end{pmatrix},\,
{\bf J}=\begin{pmatrix} \mbox{\boldmath $\sigma$}_2& {\bf O}\\ {\bf O} & \mbox{\boldmath $\sigma$}_2\end{pmatrix},\label{eq23b}
\end{eqnarray}
where $\mbox{\boldmath $\sigma$}_1$, $\mbox{\boldmath $\sigma$}_2$ and $\mbox{\boldmath $\sigma$}_3$ are the Pauli matrices, defined as
\begin{eqnarray}
\mbox{\boldmath $\sigma$}_1=\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix},\quad
\mbox{\boldmath $\sigma$}_2=\begin{pmatrix} 0 & -i \\ i & 0\end{pmatrix},\quad
\mbox{\boldmath $\sigma$}_3=\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}.\label{eqPauli}
\end{eqnarray}
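As a quick numerical sanity check (not part of the derivation), the matrices ${\bf N}$, ${\bf K}$ and ${\bf J}$ of equations (\ref{eq23a}) and (\ref{eq23b}) can be built explicitly and some easily testable algebraic consequences verified, namely ${\bf N}^2=-{\bf I}$, ${\bf K}^2={\bf I}$ and ${\bf J}^2={\bf I}$ for both the general and the Dirac case. The choice of these particular properties is ours; the sketch below assumes $\frac{N}{2}=2$ for the general case.

```python
# Sanity check of the symmetry matrices N, K, J of equations (23a)
# and (23b): verify N^2 = -I, K^2 = I, J^2 = I numerically, for the
# general case (with N/2 = 2) and for the Dirac case built from the
# Pauli matrices.
import numpy as np

I2 = np.eye(2)
O2 = np.zeros((2, 2))

# General case, equation (23a):
N = np.block([[O2, I2], [-I2, O2]])
K = np.block([[O2, I2], [I2, O2]])
J = np.block([[I2, O2], [O2, -I2]])

# Pauli matrices, equation (eqPauli):
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac case, equation (23b):
Nd = np.block([[0 * s1, 1j * s1], [1j * s1, 0 * s1]])
Kd = np.block([[0 * s3, s3], [s3, 0 * s3]])
Jd = np.block([[s2, 0 * s2], [0 * s2, s2]])

I4 = np.eye(4)
checks = {
    "N^2 = -I": np.allclose(N @ N, -I4),
    "K^2 = I": np.allclose(K @ K, I4),
    "J^2 = I": np.allclose(J @ J, I4),
    "Nd^2 = -I": np.allclose(Nd @ Nd, -I4),
    "Kd^2 = I": np.allclose(Kd @ Kd, I4),
    "Jd^2 = I": np.allclose(Jd @ Jd, I4),
}
```

In particular, ${\bf K}^2={\bf I}$ and ${\bf J}^2={\bf I}$ imply ${\bf K}^{-1}={\bf K}$ and ${\bf J}^{-1}={\bf J}$, which simplifies the symmetry relations in practice.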
Although the Schr\"odinger and Dirac equations have no direct geophysical applications,
they are included in all derivations below, since this comes almost for free.
For these two equations, the term `medium' should be understood as the `potential'.
\subsection{Unified propagator matrix}
We define the unified $N\times N$ propagator matrix ${\bf W}({\bf x},{\bf x}_F)$ as the solution of wave equation (\ref{eq2.1}),
with boundary condition (\ref{eq9998d}) and with the operator matrix replaced by the unified operator matrix $\mbox{\boldmath ${\cal A}$}$ discussed in section \ref{sec4.1}.
Using equation (\ref{eq1330}), the unified vector ${\bf q}({\bf x})$ can be propagated from $x_{3,F}$ to any depth level $x_3$, assuming there are no sources between these depth levels.
We partition ${\bf W}({\bf x},{\bf x}_F)$ as
\begin{eqnarray}\label{eq424b}
{\bf W}({\bf x},{\bf x}_F)= \begin{pmatrix}{\bf W}_{11} & {\bf W}_{12} \\
{\bf W}_{21}& {\bf W}_{22} \end{pmatrix}({\bf x},{\bf x}_F),
\end{eqnarray}
where ${\bf W}_{11}$, ${\bf W}_{12}$, ${\bf W}_{21}$ and ${\bf W}_{22}$ are $\frac{N}{2}\times\frac{N}{2}$ submatrices of ${\bf W}$.
For each of these submatrices, the second subscript refers to the quantity it acts on at ${\bf x}_F$, whereas the first subscript refers to the quantity it contributes to at ${\bf x}$.
${\bf W}({\bf x},{\bf x}_F)$ obeys the recursive relation (\ref{eq1330k}), and from equation (\ref{eq1330kinv}) it follows that ${\bf W}({\bf x}_F,{\bf x})$ is the inverse of ${\bf W}({\bf x},{\bf x}_F)$.
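To make the recursive composition and the inverse property concrete, here is a minimal toy sketch for the simplest case, $N=2$ acoustic waves in a laterally invariant stack of homogeneous layers. The sign convention $\partial_3{\bf q}={\bf A}{\bf q}$ with ${\bf A}=\bigl(\begin{smallmatrix}0 & i\omega\rho\\ i\omega s_3^2/\rho & 0\end{smallmatrix}\bigr)$, the closed-form matrix exponential and all parameter values are assumptions of this sketch, not taken from the text.

```python
# Toy illustration of propagator-matrix composition and of the inverse
# property W(x_F, x) = {W(x, x_F)}^{-1}, for 1-D acoustic waves (N = 2)
# in a stack of homogeneous layers. All parameter values are arbitrary
# illustrative choices.
import numpy as np

def layer_propagator(omega, rho, s3, dx3):
    """Closed-form expm(A*dx3) for a homogeneous layer, with
    A = [[0, i*omega*rho], [i*omega*s3**2/rho, 0]] (so A^2 has the
    scalar eigenvalue -(omega*s3)**2)."""
    th = omega * s3 * dx3
    return np.array([
        [np.cos(th), 1j * (rho / s3) * np.sin(th)],
        [1j * (s3 / rho) * np.sin(th), np.cos(th)],
    ])

omega = 2.0 * np.pi * 25.0            # angular frequency (arbitrary)
layers = [(1000.0, 2.0e-4, 100.0),    # (rho, s3, thickness) per layer
          (1500.0, 1.5e-4, 150.0)]

# Recursive composition: multiply layer propagators bottom-up.
W = np.eye(2, dtype=complex)
for rho, s3, dx3 in layers:
    W = layer_propagator(omega, rho, s3, dx3) @ W

# Propagating back through the stack in reverse order and with
# negated thicknesses gives the inverse propagator.
W_back = np.eye(2, dtype=complex)
for rho, s3, dx3 in reversed(layers):
    W_back = layer_propagator(omega, rho, s3, -dx3) @ W_back
```

The check that `W_back @ W` is the identity, that a zero-thickness layer yields the identity (the boundary condition), and that each propagator has unit determinant mirrors the recursion, the boundary condition and the invertibility stated above.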
\subsection{Relation with unified Marchenko focusing functions}\label{sec4.3}
We assume again that the medium at and above $\partial\mathbb{D}_F$ is homogeneous and may be dissipative.
The medium below $\partial\mathbb{D}_F$ may be inhomogeneous and dissipative, and it is source-free.
In preparation for defining the focusing functions, in the upper half-space (i.e., at and above $\partial\mathbb{D}_F$) we apply eigenvalue decomposition to matrix $\tilde{\bf A}$
(the spatial Fourier transform of operator matrix $\mbox{\boldmath ${\cal A}$}$), as follows
\begin{eqnarray}
\tilde{\bf A}=\tilde{\bf L}\tilde{\bf \Lambda}\tilde{\bf L}^{-1},
\end{eqnarray}
with
\begin{eqnarray}\label{Aeq7mvbbprffBBrev}
\tilde{{{\mbox{\boldmath $\Lambda$}}}}= \begin{pmatrix} i\omega{\bf S}_3^+ &{\bf O}\\
{\bf O} & -i\omega{\bf S}_3^- \end{pmatrix},\quad
\tilde{\bf L}= \begin{pmatrix}\tilde{\bf L}_1^+ & \tilde{\bf L}_1^- \\
\tilde{\bf L}_2^+ & \tilde{\bf L}_2^- \end{pmatrix},
\end{eqnarray}
with ${\bf S}_3^+$ and ${\bf S}_3^-$ being diagonal matrices containing vertical slownesses for downgoing and upgoing waves, respectively.
In the upper half-space
we express the Fourier transformed wave field vector $\tilde {\bf q}({\bf s},x_3)$
in terms of downgoing and upgoing waves $\tilde{\bf p}^+({\bf s},x_3)$ and $\tilde{\bf p}^-({\bf s},x_3)$ via
\begin{eqnarray}\label{eq2.10rev}
\tilde {\bf q}({\bf s},x_3)=\tilde{\bf L}({\bf s},x_3) \tilde {\bf p}({\bf s},x_3),
\end{eqnarray}
with
\begin{eqnarray}
\tilde {\bf p}({\bf s},x_3)&=&\begin{pmatrix}\tilde{\bf p}^+\\ \tilde{\bf p}^-\end{pmatrix}({\bf s},x_3).\label{eq2.11rev}
\end{eqnarray}
Note that these equations imply $\tilde {\bf q}_1= \tilde {\bf L}_1^+\tilde {\bf p}^+ + \tilde {\bf L}_1^-\tilde {\bf p}^-$.
As in section \ref{sec2.3}, we continue with downgoing and upgoing waves $\tilde {\bf q}_1^+$ and $\tilde {\bf q}_1^-$ which are normalised such that
$\tilde{\bf q}_1=\tilde{\bf q}_1^++\tilde{\bf q}_1^-$. To this end, we define $\tilde {\bf q}_1^\pm=\tilde{\bf L}_1^\pm \tilde {\bf p}^\pm$
and we replace equation (\ref{eq2.10rev}) by
\begin{eqnarray}\label{eq2.7rev}
\tilde {\bf q}({\bf s},x_3)=\tilde{\bf D}({\bf s},x_3) \tilde {\bf b}({\bf s},x_3),
\end{eqnarray}
where
\begin{eqnarray}
\tilde{\bf D}({\bf s},x_3)&=&\begin{pmatrix}{\bf I} &{\bf I}\\ \tilde {\bf D}_1^+& \tilde {\bf D}_1^-\end{pmatrix}({\bf s},x_3),\label{eq2.9rev}\\
\tilde {\bf b}({\bf s},x_3)&=&\begin{pmatrix}\tilde{\bf q}_1^+\\ \tilde{\bf q}_1^-\end{pmatrix}({\bf s},x_3),\label{eq2.8rev}
\end{eqnarray}
with
\begin{eqnarray}\label{eq2.14rev}
\tilde{\bf D}_1^\pm=\tilde{\bf L}_2^\pm (\tilde{\bf L}_1^\pm)^{-1}.
\end{eqnarray}
Whereas there is ambiguity in the normalisation of the matrices $\tilde{\bf L}_1^\pm$ and $\tilde{\bf L}_2^\pm$, the matrices $\tilde {\bf D}_1^\pm$ are uniquely defined.
Some examples of matrix $\tilde {\bf D}_1^\pm$ (for acoustic, electromagnetic and elastodynamic waves) are given by \cite{Wapenaar2022JASA}.
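This uniqueness is easy to confirm numerically: rescaling the columns of $\tilde{\bf L}_1^\pm$ and $\tilde{\bf L}_2^\pm$ by the same invertible matrix leaves $\tilde{\bf D}_1^\pm=\tilde{\bf L}_2^\pm (\tilde{\bf L}_1^\pm)^{-1}$ unchanged. In the sketch below, random complex matrices stand in for the physical eigenvector submatrices of a particular wave phenomenon.

```python
# Check that D1 = L2 @ inv(L1), equation (2.14rev), is insensitive to
# the normalisation of the eigenvector matrices: replacing L1 -> L1 @ S
# and L2 -> L2 @ S (any invertible S) leaves D1 unchanged, since the
# factors S and inv(S) cancel. Random matrices stand in for L1, L2.
import numpy as np

rng = np.random.default_rng(0)
n = 3                                   # N/2, arbitrary
L1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
L2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = np.diag(rng.uniform(0.5, 2.0, n))   # a change of normalisation

D1 = L2 @ np.linalg.inv(L1)
D1_rescaled = (L2 @ S) @ np.linalg.inv(L1 @ S)
```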
In Appendix \ref{AppB} we derive for any wave phenomenon
\begin{eqnarray}\label{eq34rev}
\tilde{\bar{{\bf D}}}_1^\pm({\bf s},x_3)={\bf J}_{22}\{{\tilde{\bf D}}_1^\mp(-{\bf s},x_3)\}^*{\bf J}_{11}^{-1},
\end{eqnarray}
with ${\bf J}_{11}$ and ${\bf J}_{22}$ being the $\frac{N}{2}\times\frac{N}{2}$ submatrices of $N\times N$ matrix ${\bf J}$. From equation (\ref{eq23a})
we have for all wave phenomena except for the Dirac equation ${\bf J}_{11}=-{\bf J}_{22}={\bf I}$, and
from equation (\ref{eq23b}) we have for the Dirac equation ${\bf J}_{11}={\bf J}_{22}=\mbox{\boldmath $\sigma$}_2$.
We will now use equation (\ref{eq2.7rev}) and the properties of matrix $\tilde{\bar{{\bf D}}}_1^\pm({\bf s},x_3)$
to derive unified focusing functions and express them in terms of the components of the unified propagator matrix, and vice versa.
First we aim to substitute equation (\ref{eq2.7rev}) for $x_3=x_{3,F}$ into a transformed version of equation (\ref{eq1330}).
This equation contains the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$.
For a function of two space variables, $u({{\bf x}},{\bf x}_F)$ (with ${\bf x}_F$ at $\partial\mathbb{D}_F$),
we define the spatial Fourier transformation along the horizontal components of the second space variable as
\begin{eqnarray}
&&\hspace{-.5cm}\tilde u({{\bf x}},{{\bf s}},x_{3,F})=
\int_{{\mathbb{R}}^2}u({{\bf x}},{\bf x}_{{\rm H},F},x_{3,F})\exp\{i\omega{{\bf s}}\cdot{\bf x}_{{\rm H},F}\}{\rm d}^2{\bf x}_{{\rm H},F}\label{eq999329}
\end{eqnarray}
and its inverse as
\begin{eqnarray}
&&\hspace{-.5cm}u({{\bf x}},{\bf x}_{{\rm H},F},x_{3,F})=
\frac{\omega^2}{4\pi^2}\int_{{\mathbb{R}}^2}\tilde u({{\bf x}},{{\bf s}},x_{3,F})\exp\{-i\omega{{\bf s}}\cdot{\bf x}_{{\rm H},F}\}{\rm d}^2{\bf s}.\label{eq999329inv}
\end{eqnarray}
Note that the sign in the exponential of equation (\ref{eq999329}) is opposite to that in equation (\ref{eq99950b}).
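As a consistency check of this transform pair, the following 1-D sketch (one horizontal coordinate, so the factor $\omega^2/4\pi^2$ becomes $\omega/2\pi$) transforms a test function forward with $\exp\{+i\omega s x\}$ and reconstructs it with $\exp\{-i\omega s x\}$. The Gaussian test function and all grid parameters are arbitrary choices of this sketch.

```python
# 1-D numerical check of the Fourier pair of equations (999329) and
# (999329inv): forward transform with exp(+i*omega*s*x), inverse with
# exp(-i*omega*s*x) and the 1-D factor omega/(2*pi). The integrals are
# approximated by Riemann sums on grids wide enough that the Gaussian
# integrands are negligible at the endpoints.
import numpy as np

omega = 10.0
dx, ds = 0.02, 0.001
x = np.arange(-10.0, 10.0, dx)          # horizontal coordinate
s = np.arange(-1.0, 1.0, ds)            # horizontal slowness
u = np.exp(-x**2 / 2.0)                 # Gaussian test function

# Forward transform: u_tilde(s) = int u(x) exp(+i omega s x) dx
u_tilde = np.exp(1j * omega * np.outer(s, x)) @ u * dx

# Inverse transform at a few points:
# u(x) = (omega / 2 pi) int u_tilde(s) exp(-i omega s x) ds
xr = np.array([-1.5, 0.0, 0.7])
u_rec = (omega / (2.0 * np.pi)) * (
    np.exp(-1j * omega * np.outer(xr, s)) @ u_tilde
) * ds
```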
Using these definitions and Parseval's theorem, we rewrite equation (\ref{eq1330}) as
\begin{eqnarray}
{\bf q}({{\bf x}})=\frac{\omega^2}{4\pi^2}\int_{{\mathbb{R}}^2} \tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}})\tilde{\bf q}({{\bf s}},{x_{3,F}}){\rm d}^2{{\bf s}},
\label{eq99910}
\end{eqnarray}
with $\tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}})$ obeying the boundary condition
\begin{eqnarray}
\tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}})|_{x_3={x_{3,F}}}={\bf I}\exp\{i\omega{{\bf s}}\cdot{{\bf x}_{{\rm H}}}\}.\label{eq99930}
\end{eqnarray}
Substitution of equation (\ref{eq2.7rev}) into equation (\ref{eq99910}) gives
\begin{eqnarray}
{\bf q}({{\bf x}})=\frac{\omega^2}{4\pi^2}\int_{{\mathbb{R}}^2} \tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})\tilde{\bf b}({{\bf s}},{x_{3,F}}){\rm d}^2{{\bf s}}
\label{eq99910y}
\end{eqnarray}
for $x_3\ge x_{3,F}$, with
\begin{eqnarray}
\tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})=\tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}})\tilde{\bf D}({{\bf s}},x_{3,F}).\label{eq99943y}
\end{eqnarray}
We partition matrix $\tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})$ as follows
\begin{eqnarray}\label{eq2.20}
\tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})=\begin{pmatrix}\tilde {\bf Y}_1^+ & \tilde {\bf Y}_1^-\\ \tilde {\bf Y}_2^+ & \tilde {\bf Y}_2^-\end{pmatrix}({{\bf x}},{{\bf s}},{x_{3,F}}).
\end{eqnarray}
Using equation (\ref{eq2.9rev}) and the spatial Fourier transformation of equation (\ref{eq424b}), we obtain
\begin{eqnarray}
&&\hspace{-0.5cm}\tilde{\bf Y}_1^\pm({{\bf x}},{{\bf s}},{x_{3,F}})=
\tilde{\bf W}_{11}({{\bf x}},{{\bf s}},{x_{3,F}})+\tilde{\bf W}_{12}({{\bf x}},{{\bf s}},{x_{3,F}})\tilde{\bf D}_1^\pm({\bf s},x_{3,F}),\label{eq43}\\
&&\hspace{-0.5cm}\tilde{\bf Y}_2^\pm({{\bf x}},{{\bf s}},{x_{3,F}})=
\tilde{\bf W}_{21}({{\bf x}},{{\bf s}},{x_{3,F}})+\tilde{\bf W}_{22}({{\bf x}},{{\bf s}},{x_{3,F}})\tilde{\bf D}_1^\pm({\bf s},x_{3,F}).\label{eq44}
\end{eqnarray}
We analyse these expressions one by one. First consider $\tilde{\bf Y}_1^-({{\bf x}},{{\bf s}},{x_{3,F}})$. Via equation (\ref{eq99910y}) it can be seen that subscript $1$ refers to wavefield component ${\bf q}_1$ at ${\bf x}$ and superscript
$-$ refers to the upgoing wavefield component $\tilde {\bf q}_1^-$ at $x_{3,F}$. Moreover, for $x_3=x_{3,F}$ we obtain, using equation
(\ref{eq99930}), $\tilde{\bf Y}_1^-({{\bf x}},{{\bf s}},{x_{3,F}})|_{x_3={x_{3,F}}}={\bf I}\exp\{i\omega{{\bf s}}\cdot{{\bf x}_{{\rm H}}}\}$,
or, applying the inverse spatial Fourier transformation of equation (\ref{eq999329inv}),
${\bf Y}_1^-({\bf x},{\bf x}_F)|_{x_3={x_{3,F}}} = {\bf I}\delta({{\bf x}_{\rm H}}-{{\bf x}_{{\rm H},F}})$,
which is a focusing condition. Hence, we define
\begin{eqnarray}
\tilde{\bf Y}_1^-({{\bf x}},{{\bf s}},{x_{3,F}})=\tilde{\bf F}_1({{\bf x}},{{\bf s}},{x_{3,F}}),\label{eqy1min}
\end{eqnarray}
with $\tilde{\bf F}_1({{\bf x}},{{\bf s}},{x_{3,F}})$ denoting the spatial Fourier transform of the focusing function ${\bf F}_1({\bf x},{\bf x}_F)$
for wavefield component ${\bf q}_1$, which focuses at ${\bf x}={\bf x}_F$ and continues as an upgoing field in the homogeneous upper half-space.
Note that the focusing function is an $\frac{N}{2}\times\frac{N}{2}$ matrix.
Next, we consider $\tilde{\bf Y}_2^-({{\bf x}},{{\bf s}},{x_{3,F}})$. Subscript $2$ refers to wavefield component ${\bf q}_2$ at ${\bf x}$ and superscript
$-$ refers again to the upgoing wavefield component $\tilde {\bf q}_1^-$ at $x_{3,F}$. For $x_3=x_{3,F}$ we obtain, using equation (\ref{eq99930}),
$\tilde{\bf Y}_2^-({{\bf x}},{{\bf s}},{x_{3,F}})|_{x_3={x_{3,F}}}=\tilde{\bf D}_1^-({\bf s},x_{3,F})\exp\{i\omega{{\bf s}}\cdot{{\bf x}_{{\rm H}}}\}$, which is a focusing condition,
but somewhat more complicated than before because of the mix of wavefield components ${\bf q}_2$ and $\tilde {\bf q}_1^-$.
Hence, we define
\begin{eqnarray}
\tilde{\bf Y}_2^-({{\bf x}},{{\bf s}},{x_{3,F}})=\tilde{\bf F}_2({{\bf x}},{{\bf s}},{x_{3,F}}),\label{eqy2min}
\end{eqnarray}
with $\tilde{\bf F}_2({{\bf x}},{{\bf s}},{x_{3,F}})$ denoting the spatial
Fourier transform of the focusing function ${\bf F}_2({\bf x},{\bf x}_F)$ for wavefield component ${\bf q}_2$,
which focuses at ${\bf x}={\bf x}_F$ and continues as an upgoing field in the homogeneous upper half-space
(note that the definition of $\tilde{\bf F}_2$ is different from that in \cite{Wapenaar2022JASA}, to facilitate the derivations below).
The focusing functions $\tilde{\bf F}_1({{\bf x}},{{\bf s}},{x_{3,F}})$ and $\tilde{\bf F}_2({{\bf x}},{{\bf s}},{x_{3,F}})$ together form the right column of matrix $ \tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})$.
For the analysis of the submatrices in the left column of $\tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})$, we use symmetry relation (\ref{eq34rev}) and we need a similar relation for the
submatrices of $\tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}})$.
In Appendix \ref{AppB} we derive
\begin{eqnarray}\label{eq65awss}
\bar{\bf W}({\bf x},{\bf x}_F)={\bf J}{\bf W}^*({\bf x},{\bf x}_F){\bf J}^{-1}.
\end{eqnarray}
From the spatial Fourier transform of this equation we obtain for the submatrices of $\tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}})$
\begin{eqnarray}\label{eq25}
\tilde{\bar{{\bf W}}}_{\alpha\beta}({{\bf x}},{{\bf s}},{x_{3,F}})={\bf J}_{\alpha\alpha}{\tilde{\bf W}}_{\alpha\beta}^*({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{\beta\beta}^{-1}
\end{eqnarray}
(no summation for repeated subscripts).
Substituting equations (\ref{eq34rev}) and (\ref{eq25}) into equations (\ref{eq43}) and (\ref{eq44}) yields
\begin{eqnarray}
\tilde{\bar{{\bf Y}}}_1^+({{\bf x}},{{\bf s}},{x_{3,F}})&=&{\bf J}_{11}\tilde{\bf Y}_1^{-*}({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{11}^{-1},\\
\tilde{\bar{{\bf Y}}}_2^+({{\bf x}},{{\bf s}},{x_{3,F}})&=&{\bf J}_{22}\tilde{\bf Y}_2^{-*}({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{11}^{-1}.
\end{eqnarray}
Hence, using equations (\ref{eqy1min}) and (\ref{eqy2min}), we find for the submatrices in the left column of $\tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})$
\begin{eqnarray}
\tilde{{\bf Y}}_1^+({{\bf x}},{{\bf s}},{x_{3,F}})&=&{\bf J}_{11}{\tilde{\bar{\bf F}}}_1^*({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{11}^{-1},\label{eq96k}\\
\tilde{{\bf Y}}_2^+({{\bf x}},{{\bf s}},{x_{3,F}})&=&{\bf J}_{22}{\tilde{\bar{\bf F}}}_2^*({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{11}^{-1}.\label{eq97k}
\end{eqnarray}
Hence, matrix $\tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})$ becomes
\begin{eqnarray}\label{eq67}
&&\hspace{-0.7cm}\tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}})=\begin{pmatrix} {\bf J}_{11}{\tilde{\bar{\bf F}}}_1^*({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{11}^{-1} &\tilde{\bf F}_1({{\bf x}},{{\bf s}},{x_{3,F}})\\
{\bf J}_{22}{\tilde{\bar{\bf F}}}_2^*({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{11}^{-1} &\tilde{\bf F}_2({{\bf x}},{{\bf s}},{x_{3,F}})\end{pmatrix},\nonumber\\
&&
\end{eqnarray}
or, using the inverse Fourier transformation of equation (\ref{eq999329inv}),
\begin{eqnarray}\label{eq66}
{\bf Y}({\bf x},{\bf x}_F)=\begin{pmatrix} {\bf J}_{11}{{\bar{\bf F}}}_1^*({\bf x},{\bf x}_F){\bf J}_{11}^{-1} &{\bf F}_1({\bf x},{\bf x}_F)\\
{\bf J}_{22}{{\bar{\bf F}}}_2^*({\bf x},{\bf x}_F){\bf J}_{11}^{-1} &{\bf F}_2({\bf x},{\bf x}_F)\end{pmatrix}.
\end{eqnarray}
This is a generalisation of equation (\ref{eq21}).
Note that $\tilde{\bf F}_1$, $\tilde{\bf F}_2$, ${\tilde{\bar{\bf F}}}_1^*$ and ${\tilde{\bar{\bf F}}}_2^*$ are expressed in terms of the
submatrices of the propagator matrix $\tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}})$ via equations (\ref{eq43}) $-$ (\ref{eqy2min}), (\ref{eq96k}) and (\ref{eq97k}). Conversely,
we can express the submatrices of the propagator matrix $\tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}})$
in terms of the focusing functions $\tilde{\bf F}_1$, $\tilde{\bf F}_2$, ${\tilde{\bar{\bf F}}}_1^*$ and ${\tilde{\bar{\bf F}}}_2^*$. To this end, we start with inverting equation (\ref{eq99943y}), according to
\begin{eqnarray}\label{eq62}
\tilde{\bf W}({{\bf x}},{{\bf s}},{x_{3,F}}) =\tilde{\bf Y}({{\bf x}},{{\bf s}},{x_{3,F}}) \{\tilde{\bf D}({\bf s},x_{3,F})\}^{-1},
\end{eqnarray}
with
\begin{eqnarray}\label{eq39}
\{\tilde{\bf D}({\bf s},x_3)\}^{-1}=\begin{pmatrix}-(\tilde{\bf \Delta}_1)^{-1}\tilde{\bf D}_1^-& (\tilde{\bf \Delta}_1)^{-1}\\
(\tilde{\bf \Delta}_1)^{-1}\tilde{\bf D}_1^+ & -(\tilde{\bf \Delta}_1)^{-1} \end{pmatrix}({\bf s},x_3),
\end{eqnarray}
\begin{eqnarray}
\tilde{\bf \Delta}_1=\tilde{\bf D}_1^+- \tilde{\bf D}_1^-.\label{eq2107}
\end{eqnarray}
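The block inverse of equation (\ref{eq39}) can be verified numerically: for $\tilde{\bf D}=\bigl(\begin{smallmatrix}{\bf I} & {\bf I}\\ \tilde{\bf D}_1^+ & \tilde{\bf D}_1^-\end{smallmatrix}\bigr)$ with $\tilde{\bf \Delta}_1=\tilde{\bf D}_1^+-\tilde{\bf D}_1^-$ invertible, the stated block matrix multiplies to the identity from both sides. Random complex matrices stand in for $\tilde{\bf D}_1^\pm$ in this sketch.

```python
# Numerical verification of the block inverse in equation (39):
# for D = [[I, I], [D1p, D1m]] and Delta1 = D1p - D1m, the matrix
# Dinv = [[-inv(Delta1) @ D1m,  inv(Delta1)],
#         [ inv(Delta1) @ D1p, -inv(Delta1)]]
# satisfies Dinv @ D = D @ Dinv = identity.
import numpy as np

rng = np.random.default_rng(1)
n = 3                                   # N/2, arbitrary
D1p = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D1m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

I = np.eye(n)
D = np.block([[I, I], [D1p, D1m]])

inv_Delta1 = np.linalg.inv(D1p - D1m)   # (Delta_1)^{-1}
Dinv = np.block([[-inv_Delta1 @ D1m, inv_Delta1],
                 [inv_Delta1 @ D1p, -inv_Delta1]])
```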
Using equations (\ref{eq67}) and (\ref{eq39}), we obtain
\begin{eqnarray}
\tilde{\bf W}_{\alpha 1}({{\bf x}},{{\bf s}},{x_{3,F}})&=&
-{\bf J}_{\alpha\alpha}{\tilde{\bar{\bf F}}}_\alpha^*({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{11}^{-1}\{\tilde{\bf \Delta}_1({\bf s},x_{3,F})\}^{-1}\tilde{\bf D}_1^-({\bf s},x_{3,F})
\nonumber\\%\hspace{-0.5cm}
&+&\tilde{\bf F}_\alpha({{\bf x}},{{\bf s}},{x_{3,F}})\{\tilde{\bf \Delta}_1({\bf s},x_{3,F})\}^{-1}\tilde{\bf D}_1^+({\bf s},x_{3,F}),\label{eqWa1}\\
\tilde{\bf W}_{\alpha 2}({{\bf x}},{{\bf s}},{x_{3,F}})&=&
{\bf J}_{\alpha\alpha}{\tilde{\bar{\bf F}}}_\alpha^*({{\bf x}},-{{\bf s}},{x_{3,F}}){\bf J}_{11}^{-1}\{\tilde{\bf \Delta}_1({\bf s},x_{3,F})\}^{-1}
\nonumber\\%\hspace{-0.5cm}
&-&\tilde{\bf F}_\alpha({{\bf x}},{{\bf s}},{x_{3,F}})\{\tilde{\bf \Delta}_1({\bf s},x_{3,F})\}^{-1}\label{eqWa2}
\end{eqnarray}
(no summation for repeated subscripts). These expressions are a generalisation of equations (\ref{eq14}) $-$ (\ref{eq15v}). Those equations follow as a special case from equations (\ref{eqWa1}) and (\ref{eqWa2})
by substituting ${\bf J}_{11}=-{\bf J}_{22}=1$, $\tilde {\bf D}_1^\pm({{\bf s}},x_{3,F}) = \pm{s_{3,0}}/{\rho_0}$, $\{\tilde{\bf \Delta}_1({{\bf s}},x_{3,F})\}^{-1}={\rho_0}/{2s_{3,0}}$,
and applying an inverse spatial Fourier transformation, which involves replacing $s_{3,0}$ by operator $\frac{1}{\omega}{\cal H}_1({\bf x}_F)$.
\subsection{Representations with unified Marchenko focusing functions}\label{sec4.4}
Applying Parseval's theorem to equation (\ref{eq99910y}) and substituting the expressions for ${\bf q}({\bf x})$, ${\bf b}({\bf x}_F)$ and ${\bf Y}({\bf x},{\bf x}_F)$
(equations (\ref{eq7em}), (\ref{eq2.8rev}) and (\ref{eq66})), gives
the following representation for the quantities ${\bf q}_1({\bf x})$ and ${\bf q}_2({\bf x})$ inside the inhomogeneous medium
\begin{eqnarray}
{\bf q}_\alpha({\bf x})&=&\int_{\partial\mathbb{D}_F} {\bf J}_{\alpha\alpha}{{\bar{\bf F}}}_\alpha^*({\bf x},{\bf x}_F){\bf J}_{11}^{-1}{\bf q}_1^+({\bf x}_F){\rm d}^2{\bf x}_F
+\int_{\partial\mathbb{D}_F} {\bf F}_\alpha({\bf x},{\bf x}_F){\bf q}_1^-({\bf x}_F){\rm d}^2{\bf x}_F\label{eq13gen}
\end{eqnarray}
(no summation for repeated subscripts) for $x_3\ge x_{3,F}$. This is a generalisation of equations (\ref{eq13}) and (\ref{eq13bb}).
We use equation (\ref{eq13gen}) to derive representations for Green's functions between the boundary $\partial\mathbb{D}_F$ and any position ${\bf x}$ inside the medium.
We define a unit ${\bf d}_2$-type source (see equation (\ref{eq7em})) at ${\bf x}_S$ just above $\partial\mathbb{D}_F$. The $\frac{N}{2}\times\frac{N}{2}$ Green's matrix
${\bf G}_{12}({\bf x},{\bf x}_S)$ stands for the ${\bf q}_1$-type field observed at ${\bf x}$, in response to this source.
The spatial Fourier transform of the downgoing component at $\partial\mathbb{D}_F$ (i.e., just below the source) is proportional to the upper-right submatrix of the decomposition operator of
equation (\ref{eq39}), according to
\begin{eqnarray}
\tilde{\bf G}_{12}^+({\bf x}_F,{\bf s},x_{3,S})
=\{\tilde{\bf \Delta}_1({{\bf s}},x_{3,F})\}^{-1}\exp\{i\omega{\bf s}\cdot{\bf x}_{{\rm H},F}\}\label{eq330k}
\end{eqnarray}
\citep{Wapenaar2022JASA}. To compensate for the effects of the inverse matrix $\{\tilde{\bf \Delta}_1({{\bf s}},x_{3,F})\}^{-1}$, we define a modified Green's matrix as
\begin{eqnarray}
\tilde{\bf \Gamma}_{12}({\bf x},{\bf s},x_{3,S})=\tilde{\bf G}_{12}({\bf x},{\bf s},x_{3,S})\tilde{\bf \Delta}_1({\bf s}, x_{3,F}),\label{eq331}
\end{eqnarray}
such that its downgoing component at $\partial\mathbb{D}_F$ is given by
\begin{eqnarray}
\tilde{\bf \Gamma}_{12}^+ ({\bf x}_F,{\bf s},x_{3,S})
={\bf I}\exp\{i\omega{\bf s}\cdot{\bf x}_{{\rm H},F}\},\label{eq544}
\end{eqnarray}
or, after applying an inverse spatial Fourier transformation,
\begin{eqnarray}
{\bf \Gamma}_{12}^+({\bf x}_F,{\bf x}_S)&=&{\bf I}\delta({\bf x}_{{\rm H},F}-{\bf x}_{{\rm H},S}).\label{eq4Gag}
\end{eqnarray}
The upgoing response at $\partial\mathbb{D}_F$ to this downgoing source field is by definition the reflection response, hence
\begin{eqnarray}
{\bf \Gamma}_{12}^-({\bf x}_F,{\bf x}_S)&=&{\bf R}({\bf x}_F,{\bf x}_S).\label{eq4Gagd}
\end{eqnarray}
The field at ${\bf x}$ inside the medium consists of ${\bf \Gamma}_{12}({\bf x},{\bf x}_S)$ and ${\bf \Gamma}_{22}({\bf x},{\bf x}_S)$,
where $\tilde{\bf \Gamma}_{22}({\bf x},{\bf s},x_{3,S})=\tilde{\bf G}_{22}({\bf x},{\bf s},x_{3,S})\tilde{\bf \Delta}_1({\bf s}, x_{3,F})$, with $\tilde{\bf G}_{22}({\bf x},{\bf s},x_{3,S})$ being
the Green's function for the ${\bf q}_2$-type field observed at ${\bf x}$.
Substituting ${\bf q}_\alpha({\bf x})={\bf \Gamma}_{\alpha 2}({\bf x},{\bf x}_S)$ and ${\bf q}_1^\pm({\bf x}_F)={\bf \Gamma}_{12}^\pm({\bf x}_F,{\bf x}_S)$
into equation (\ref{eq13gen}), using equations (\ref{eq4Gag}) and (\ref{eq4Gagd}), we obtain
\begin{eqnarray}
{\bf \Gamma}_{\alpha 2}({\bf x},{\bf x}_S)&=&
\int_{\partial\mathbb{D}_F} {\bf F}_\alpha({\bf x},{\bf x}_F){\bf R}({\bf x}_F,{\bf x}_S){\rm d}^2{\bf x}_F
+{\bf J}_{\alpha\alpha}{{\bar{\bf F}}}_\alpha^*({\bf x},{\bf x}_S){\bf J}_{11}^{-1}, \label{eq13genG}
\end{eqnarray}
(no summation for repeated subscripts) for $x_3\ge x_{3,F}$. This is a generalisation of equations (\ref{eq13G}) and (\ref{eq13Gbb}) and a starting point for developing a unified Marchenko method
for full wave fields, accounting for evanescent waves inside the medium (this is the subject of current research).
Once the focusing functions are found, they can be used to retrieve the Green's matrices
${\bf \Gamma}_{\alpha 2}({\bf x},{\bf x}_S)$ for $\alpha=1,2$
(from equation (\ref{eq13genG})) and all components of the propagator matrix ${\bf W}({\bf x},{\bf x}_F)$ (from equations (\ref{eqWa1}) and (\ref{eqWa2})).
\section{Unified transfer matrix and decomposed focusing functions}\label{sec5}
We extend the theory of section \ref{sec3} to unified wave fields.
\subsection{Unified transfer matrix}
Given the downgoing and upgoing fields ${\bf q}_1^+({\bf x}_F)$ and ${\bf q}_1^-({\bf x}_F)$ contained in vector ${\bf b}({\bf x}_F)$ at the boundary $\partial\mathbb{D}_F$,
we transfer these fields to depth level $x_3$ via
\begin{eqnarray}
{\bf b}({\bf x})=\int_{\partial\mathbb{D}_F} {{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F){\bf b}({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq1330transunif}
\end{eqnarray}
for $x_3\ge x_{3,F}$.
Here the transfer matrix is partitioned as follows
\begin{eqnarray}\label{eq424TU}
{{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)= \begin{pmatrix}{{{\mbox{\boldmath ${\cal T}$}}}}^{+,+} & {{{\mbox{\boldmath ${\cal T}$}}}}^{+,-} \\
{{{\mbox{\boldmath ${\cal T}$}}}}^{-,+} & {{{\mbox{\boldmath ${\cal T}$}}}}^{-,-} \end{pmatrix}({\bf x},{\bf x}_F),
\end{eqnarray}
with ${{{\mbox{\boldmath ${\cal T}$}}}}^{\pm,\pm}$ being $\frac{N}{2}\times\frac{N}{2}$ submatrices. Analogous to equation (\ref{eq31}),
matrix ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$ is related to the unified propagator matrix of equation (\ref{eq424b}) via
\begin{eqnarray}
{{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)={{{\mbox{\boldmath ${\cal D}$}}}}^{-1}({\bf x}){\bf W}({\bf x},{\bf x}_F){{{{\mbox{\boldmath ${\cal D}$}}}}}({\bf x}_F),\label{eq31unif}
\end{eqnarray}
with ${{{{\mbox{\boldmath ${\cal D}$}}}}}({\bf x}_F)$ and ${{{\mbox{\boldmath ${\cal D}$}}}}^{-1}({\bf x})$
being the inverse spatial Fourier transforms of $\tilde{\bf D}({\bf s},x_{3,F})$ and $\{\tilde{\bf D}({\bf s},x_3)\}^{-1}$, defined in equations (\ref{eq2.9rev}) and (\ref{eq39}), respectively.
Unlike in the acoustic situation, where ${{{{\mbox{\boldmath ${\cal L}$}}}}}({\bf x}_F)$ and ${{{\mbox{\boldmath ${\cal L}$}}}}^{-1}({\bf x})$ in equation (\ref{eq31}) account
for lateral variations of the medium parameters at depths $x_{3,F}$ and $x_3$, the unified matrices $\tilde{\bf D}({\bf s},x_{3,F})$ and $\{\tilde{\bf D}({\bf s},x_3)\}^{-1}$
are defined for laterally invariant medium parameters at $x_{3,F}$ and $x_3$.
For $\tilde{\bf D}({\bf s},x_{3,F})$ this is not a restriction, since $x_{3,F}$ is the depth of the boundary $\partial\mathbb{D}_F$
between the inhomogeneous medium and the homogeneous upper half-space.
However, for $\{\tilde{\bf D}({\bf s},x_3)\}^{-1}$ it implies that this operator can only be applied at depths where no lateral variations occur.
\subsection{Relation with decomposed unified Marchenko focusing functions}
Assuming there are no lateral variations at a specific depth level $x_3$, we use the spatial Fourier transformation of equation (\ref{eq99950b}) to express the transfer matrix
(analogous to equation (\ref{eq31k})) as
\begin{eqnarray}\label{eq62b}
\tilde{{{\mbox{\boldmath ${\cal T}$}}}}({\bf s},x_3,{\bf x}_F) = \{\tilde{\bf D}({\bf s},x_3)\}^{-1}\tilde{\bf Y}({\bf s},x_3,{\bf x}_F),
\end{eqnarray}
with $\{\tilde{\bf D}({\bf s},x_3)\}^{-1}$ defined in equation (\ref{eq39}).
Analogous to equation (\ref{eq55b}), we obtain for the right column of $\tilde{{{\mbox{\boldmath ${\cal T}$}}}}({\bf s},x_3,{\bf x}_F)$
\begin{eqnarray}\label{eq55}
&&\hspace{-0.5cm}\begin{pmatrix}\tilde{\bf F}^+({\bf s},x_3,{\bf x}_F)\\ \tilde{\bf F}^-({\bf s},x_3,{\bf x}_F)\end{pmatrix}=
\begin{pmatrix}-(\tilde{\bf \Delta}_1)^{-1}\tilde{\bf D}_1^-& (\tilde{\bf \Delta}_1)^{-1}\\
(\tilde{\bf \Delta}_1)^{-1}\tilde{\bf D}_1^+ & -(\tilde{\bf \Delta}_1)^{-1} \end{pmatrix}({\bf s},x_3)
\begin{pmatrix}\tilde{\bf F}_1({\bf s},x_3,{\bf x}_F)\\\tilde{\bf F}_2({\bf s},x_3,{\bf x}_F)\end{pmatrix},
\end{eqnarray}
with $\tilde{\bf F}^+({\bf s},x_3,{\bf x}_F)$ and $\tilde{\bf F}^-({\bf s},x_3,{\bf x}_F)$ being the downgoing and upgoing parts at $x_3$ of $\tilde{\bf F}_1({\bf s},x_3,{\bf x}_F)$.
For the left column of $\tilde{{{\mbox{\boldmath ${\cal T}$}}}}({\bf s},x_3,{\bf x}_F)$ we analyse the following expression
\begin{eqnarray}\label{eq56}
&&\hspace{-0.7cm}\begin{pmatrix}-(\tilde{\bf \Delta}_1)^{-1}\tilde{\bf D}_1^-& (\tilde{\bf \Delta}_1)^{-1}\\
(\tilde{\bf \Delta}_1)^{-1}\tilde{\bf D}_1^+ & -(\tilde{\bf \Delta}_1)^{-1} \end{pmatrix}({\bf s},x_3)
\begin{pmatrix}{\bf J}_{11}{\tilde{\bar{\bf F}}}_1^*(-{\bf s},x_3,{\bf x}_F){\bf J}_{11}^{-1}\\{\bf J}_{22}{\tilde{\bar{\bf F}}}_2^*(-{\bf s},x_3,{\bf x}_F){\bf J}_{11}^{-1}\end{pmatrix}.\nonumber\\
&&
\end{eqnarray}
Using equation (\ref{eq34rev}) and
\begin{eqnarray}
\{{\tilde{\bar{\bf \Delta}}}_1({\bf s},x_3)\}^{-1}=-{\bf J}_{11}\{{\tilde{{\bf \Delta}}}_1^*(-{\bf s},x_3)\}^{-1}{\bf J}_{22}^{-1}
\end{eqnarray}
in equation (\ref{eq56}) gives
\begin{eqnarray}\label{eq58}
&&\hspace{-0.5cm}\begin{pmatrix}{\bf J}_{11}({\tilde{\bar{\bf \Delta}}}_1^*)^{-1}({\tilde{\bar{\bf D}}}_1^+)^*{\bf J}_{11}^{-1}& -{\bf J}_{11}({\tilde{\bar{\bf \Delta}}}_1^*)^{-1}{\bf J}_{22}^{-1}\\
-{\bf J}_{11}({\tilde{\bar{\bf \Delta}}}_1^*)^{-1}({\tilde{\bar{\bf D}}}_1^-)^*{\bf J}_{11}^{-1} & {\bf J}_{11}({\tilde{\bar{\bf \Delta}}}_1^*)^{-1}{\bf J}_{22}^{-1} \end{pmatrix}(-{\bf s},x_3)
\begin{pmatrix}{\bf J}_{11}{\tilde{\bar{\bf F}}}_1^*(-{\bf s},x_3,{\bf x}_F){\bf J}_{11}^{-1}\\{\bf J}_{22}{\tilde{\bar{\bf F}}}_2^*(-{\bf s},x_3,{\bf x}_F){\bf J}_{11}^{-1}\end{pmatrix}.
\end{eqnarray}
By comparing this with equation (\ref{eq55}) we find that the expression in equation (\ref{eq58}) is equal to
\begin{eqnarray}\label{eq59}
\begin{pmatrix}{\bf J}_{11}\{{\tilde{\bar{\bf F}}}^-(-{\bf s},x_3,{\bf x}_F)\}^*{\bf J}_{11}^{-1}\\{\bf J}_{11}\{{\tilde{\bar{\bf F}}}^+(-{\bf s},x_3,{\bf x}_F)\}^*{\bf J}_{11}^{-1}\end{pmatrix}.
\end{eqnarray}
Combining the right column (equation (\ref{eq55})) and the left column (equation (\ref{eq59})),
we obtain the following expression for the unified transfer matrix
\begin{eqnarray}\label{eq60}
&&\hspace{-0.7cm}\tilde{{{\mbox{\boldmath ${\cal T}$}}}}({\bf s},x_3,{\bf x}_F)=
\begin{pmatrix} {\bf J}_{11}\{{\tilde{\bar{\bf F}}}^-(-{\bf s},x_3,{\bf x}_F)\}^*{\bf J}_{11}^{-1}&\tilde{\bf F}^+({\bf s},x_3,{\bf x}_F)\\
{\bf J}_{11}\{{\tilde{\bar{\bf F}}}^+(-{\bf s},x_3,{\bf x}_F)\}^*{\bf J}_{11}^{-1}&\tilde{\bf F}^-({\bf s},x_3,{\bf x}_F)\end{pmatrix},\nonumber\\
&&
\end{eqnarray}
or, in the space domain,
\begin{eqnarray}\label{eq60b}
&&\hspace{-0.7cm}{{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)=
\begin{pmatrix} {\bf J}_{11}\{{{\bar{\bf F}}}^-({\bf x},{\bf x}_F)\}^*{\bf J}_{11}^{-1}&{\bf F}^+({\bf x},{\bf x}_F)\\
{\bf J}_{11}\{{{\bar{\bf F}}}^+({\bf x},{\bf x}_F)\}^*{\bf J}_{11}^{-1}&{\bf F}^-({\bf x},{\bf x}_F)\end{pmatrix}.
\end{eqnarray}
This is the generalisation of equation (\ref{eq58vv}).
\subsection{Representations with decomposed unified Marchenko focusing functions}
Substituting the expressions for ${\bf b}({\bf x})$ and ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$
into equation (\ref{eq1330transunif}), gives
the following representations for the downgoing and upgoing fields, ${\bf q}_1^+({\bf x})$ and ${\bf q}_1^-({\bf x})$ respectively, inside the inhomogeneous medium
\begin{eqnarray}
{\bf q}_1^+({\bf x})&=&\int_{\partial\mathbb{D}_F} {\bf J}_{11}\{{{\bar{\bf F}}}^-({\bf x},{\bf x}_F)\}^*{\bf J}_{11}^{-1}{\bf q}_1^+({\bf x}_F){\rm d}^2{\bf x}_F
+\int_{\partial\mathbb{D}_F} {\bf F}^+({\bf x},{\bf x}_F){\bf q}_1^-({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq13decoun}\\
{\bf q}_1^-({\bf x})&=&\int_{\partial\mathbb{D}_F} {\bf J}_{11}\{{{\bar{\bf F}}}^+({\bf x},{\bf x}_F)\}^*{\bf J}_{11}^{-1}{\bf q}_1^+({\bf x}_F){\rm d}^2{\bf x}_F
+\int_{\partial\mathbb{D}_F} {\bf F}^-({\bf x},{\bf x}_F){\bf q}_1^-({\bf x}_F){\rm d}^2{\bf x}_F,\label{eq13bbdecoun}
\end{eqnarray}
for $x_3\ge x_{3,F}$. These expressions are exact and hold for dissipative media.
Making similar substitutions as in section \ref{sec4.4} we obtain
\begin{eqnarray}
{\bf \Gamma}_{12}^+({\bf x},{\bf x}_S)&=&
\int_{\partial\mathbb{D}_F} {\bf F}^+({\bf x},{\bf x}_F){\bf R}({\bf x}_F,{\bf x}_S){\rm d}^2{\bf x}_F
+{\bf J}_{11}\{{{\bar{\bf F}}}^-({\bf x},{\bf x}_S)\}^*{\bf J}_{11}^{-1}, \label{eq13genGdeco}\\
{\bf \Gamma}_{12}^-({\bf x},{\bf x}_S)&=&
\int_{\partial\mathbb{D}_F} {\bf F}^-({\bf x},{\bf x}_F){\bf R}({\bf x}_F,{\bf x}_S){\rm d}^2{\bf x}_F
+{\bf J}_{11}\{{{\bar{\bf F}}}^+({\bf x},{\bf x}_S)\}^*{\bf J}_{11}^{-1}, \label{eq13genGdecoup}
\end{eqnarray}
for $x_3\ge x_{3,F}$. Here ${\bf \Gamma}_{12}^+({\bf x},{\bf x}_S)$ and ${\bf \Gamma}_{12}^-({\bf x},{\bf x}_S)$ stand for the downgoing and upgoing part of the
Green's function ${\bf \Gamma}_{12}({\bf x},{\bf x}_S)$.
These equations are generalisations of equations (\ref{eq13Gdeco}) and (\ref{eq13Gbbdeco}) and
form a starting point for developing a unified Marchenko method for decomposed wave fields.
Once the focusing functions are found, they can be used to retrieve the decomposed Green's functions
${\bf \Gamma}_{12}^+({\bf x},{\bf x}_S)$ and ${\bf \Gamma}_{12}^-({\bf x},{\bf x}_S)$
(from equations (\ref{eq13genGdeco}) and (\ref{eq13genGdecoup}))
and all components of the transfer matrix ${{{\mbox{\boldmath ${\cal T}$}}}}({\bf x},{\bf x}_F)$ (from equation (\ref{eq60b})).
Versions of the Marchenko method based on expressions similar to equations (\ref{eq13genGdeco}) and (\ref{eq13genGdecoup}) have already been implemented for
the retrieval of decomposed elastodynamic Green's functions in lossless media, ignoring evanescent waves \citep{Wapenaar2014GJI, Costa2014PRE, Reinicke2019WM, Reinicke2020GEO}.
\section{Conclusions}\label{sec6}
We have derived relations between propagator matrices, transfer matrices and Marchenko focusing functions for different wave phenomena, ranging from acoustic to seismoelectric waves.
All expressions hold for heterogeneous dissipative media and account for propagating and evanescent waves.
Only for the transfer matrix beyond the acoustic situation did we assume that there are no lateral variations at the depth level of decomposition.
The derived relations provide insight into the connections between the propagator matrices, transfer matrices and Marchenko focusing functions and may lead to new modelling algorithms for these quantities.
Moreover, several of the derived relations may be useful to develop improved Marchenko-type wave field retrieval schemes
for different wave phenomena, possibly accounting for evanescent waves inside the medium.
\section*{Acknowledgements}
The research of KW has received funding from the European Union's Horizon 2020 research and innovation programme: European Research Council (grant agreement 742703).
\section*{Data Availability}
No data have been used for this study.
\subsection{Sufficient Information}
\label{app:sufficient_information}
We compare conditions (i)-(iii) of Definition \ref{def:sufficient} to the conditions of Definition 2 in \cite{companion}; for ease of readability, we include the definition from \cite{companion} below.
\begin{definition}[Sufficient private information \cite{companion}]
We say $S_t^i=\zeta_t^i(P_t^i,C_t;g_{1:t-1})$, $i\in\mathcal{N}$, $t\in\mathcal{T}$, is \textit{sufficient private information} for the agents if,
\begin{enumerate}[(i)]
\item it can be updated recursively as \vspace*{-2pt}
\begin{gather}
S_t^{i}=\phi_t^i(S_{t-1}^{i},H_t^i\backslash H_{t-1}^i;g_{1:t-1}) \text{ for } t\in\mathcal{T}\backslash\{1\}, \label{eq:sufficientupdate}
\end{gather}
\item for any strategy profile $g$ and for all realizations $\{c_t,p_t,p_{t+1},z_{t+1},a_t\}\in\mathcal{C}_t\times\mathcal{P}_t\times\mathcal{P}_{t+1}\times\mathcal{Z}_{t+1}\times\mathcal{A}_t$ of positive probability,
\begin{align}
\mathbb{P}^{g_{1:t}}\left\{\hspace*{-2pt}s_{t+1}\hspace*{-1pt},\hspace*{-1pt}z_{t+1}\hspace*{-1pt}\mid p_t\hspace*{-1pt},\hspace*{-1pt}c_t\hspace*{-1pt},\hspace*{-1pt}a_t\hspace*{-2pt}\right\}\hspace*{-3pt}=\hspace*{-2pt}\mathbb{P}^{g_{1:t}}\hspace*{-1pt}\left\{\hspace*{-2pt}s_{t+1}\hspace*{-1pt},\hspace*{-1pt}z_{t+1}\hspace*{-1pt}\mid s_t\hspace*{-1pt},\hspace*{-1pt}c_t\hspace*{-1pt},\hspace*{-1pt}a_t\hspace*{-2pt}\right\}\hspace*{-1pt},\hspace*{-4pt}\label{eq:sufficientdynamic}
\end{align}
where $s_{\tau}^{1:N}=\zeta_{\tau}^{1:N}(p_{\tau}^{1:N},c_{\tau};g_{1\hspace*{-1pt}:\tau-1})$ for $\tau\in\mathcal{T}$;
\item for every strategy profile $\tilde{g}$ of the form
$\tilde{g}\hspace*{-2pt}:=\hspace*{-2pt}\{\hspace*{-1pt}\tilde{g}^{i}_t\hspace*{-1pt}:\hspace*{-1pt}\mathcal{S}_t^i\times \mathcal{C}_t\rightarrow \Delta(\mathcal{A}_t^i), i\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{N}\hspace*{-1pt},\hspace*{-1pt} t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}\}$ and $a_t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{A}_t$, $t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}$;
\begin{align}
\mathbb{E}^{\tilde{g}_{1:t-1}\hspace*{-2pt}}\left\{\hspace*{-2pt}u_t^i(\hspace*{-1pt}X_t\hspace*{-1pt},\hspace*{-1pt}A_t\hspace*{-1pt})\hspace*{-1pt}\mid c_t\hspace*{-1pt},\hspace*{-1pt}p_t^i\hspace*{-1pt},\hspace*{-1pt}a_t\hspace*{-2pt}\right\}\hspace*{-3pt}=\hspace*{-2pt}\mathbb{E}^{\tilde{g}_{1:t-1}\hspace*{-2pt}}\left\{\hspace*{-2pt}u_t^i(\hspace*{-1pt}X_t\hspace*{-1pt},\hspace*{-1pt}A_t\hspace*{-1pt})\hspace*{-1pt}\mid c_t\hspace*{-1pt},\hspace*{-1pt}s_t^{i}\hspace*{-1pt},\hspace*{-1pt}a_t\hspace*{-2pt}\right\}\hspace*{-2pt},\hspace*{-5pt}\label{eq:payoff-relevant2}
\end{align}
for all realizations $\{\hspace*{-1pt}c_{t}\hspace*{-1pt},\hspace*{-1pt}p_{t}^i\}\hspace*{-3pt}\in\hspace*{-2pt}\mathcal{C}_{t}\hspace*{-1pt}\times\hspace*{-1pt}\mathcal{P}_{t}^i$ of positive probability where $s_{\tau}^{1:N}\hspace*{-3pt}=\hspace*{-2pt}\zeta_{\tau}^{1:N}\hspace*{-2pt}(p_{\tau}^{1:N}\hspace*{-1pt},\hspace*{-1pt}c_{\tau};\hspace*{-1pt}\tilde{g}_{1\hspace*{-1pt}:\tau-1}\hspace*{-1pt})$ for $\tau\in\mathcal{T}$;\vspace{5pt} \item given an arbitrary strategy profile $\tilde{g}$ of the form $\tilde{g}\hspace*{-1pt}:=\hspace*{-1pt}\{\tilde{g}^{i}_t:\mathcal{S}_t^i\hspace*{-1pt}\times \hspace*{-1pt}\mathcal{C}_t\rightarrow \Delta(\mathcal{A}_t^i), i\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{N}, t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}\}$, $i\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{N}$, and $t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}$,
\begin{align}
\mathbb{P}^{\tilde{g}_{1:t-1}}\hspace*{-2pt}\left\{\hspace*{-2pt}s_t^{-i}\hspace*{-1pt}\mid p_t^i\hspace*{-1pt},\hspace*{-1pt}c_t\hspace*{-2pt}\right\}\hspace*{-3pt}=\hspace*{-2pt}\mathbb{P}^{\tilde{g}_{1:t-1}}\hspace*{-2pt}\left\{\hspace*{-1pt}s_t^{-i}\hspace*{-1pt}\mid s_t^i\hspace*{-1pt},\hspace*{-1pt}c_t\hspace*{-2pt}\right\}\hspace*{-1pt},\hspace*{-4pt}\label{eq:sufficientinfo}
\end{align}
for all realizations $\{c_{t}\hspace*{-1pt},\hspace*{-1pt}p_{t}^i\}\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{C}_{t}\hspace*{-2pt}\times\hspace*{-2pt}\mathcal{P}_{t}^i$ of positive probability where $s_{\tau}^{1:N}\hspace*{-3pt}=\hspace*{-2pt}\zeta_{\tau}^{1:N}\hspace*{-2pt}(p_{\tau}^{1:N}\hspace*{-1pt},\hspace*{-1pt}c_{\tau};\hspace*{-1pt}\tilde{g}_{1\hspace*{-1pt}:\tau-1}\hspace*{-1pt})$ for $\tau\in\mathcal{T}$.
\end{enumerate}
\label{def:sufficient-part1}
\end{definition}
Condition (i) of Definition \ref{def:sufficient} appears in the definition of $S^i_t$ in Definition \ref{def:sufficient-part1}, and condition (ii) of Definition \ref{def:sufficient} on recursive update is the same as condition (i) in Definition \ref{def:sufficient-part1}.
Condition (iii) of Definition \ref{def:sufficient} directly leads to (iii) and (iv) of Definition \ref{def:sufficient-part1};
the utility $u_t^i(X_t, A_t)$ in
condition (iii) and the random variable $s^{-i}_t$ in condition (iv) of Definition \ref{def:sufficient-part1} are functions of $(x_t, s_t)$ whose distribution conditioned on $(p^i_t, c_t)$ is the same as conditioned on $(s^i_t, c_t)$ under condition (iii) of Definition \ref{def:sufficient}.
However, condition (ii) of Definition \ref{def:sufficient-part1} may not hold for sufficient private information satisfying Definition \ref{def:sufficient}.
Consider the following example.
Suppose $X_1 = Y^1_1 \text{ XOR } Y^2_1$, where $Y^1_1, Y^2_1$ take values in $\{0, 1\}$ with equal probability, $Z_1 = \emptyset$ and $Z_2 = X_1$. Then $S^1_1=S^2_1=\emptyset$ satisfies Definition \ref{def:sufficient} because $\prob(x_1, s^{-i}_1\mid p^i_1, c_1) = \prob(x_1\mid y^i_1) = 0.5 = \prob(x_1, s^{-i}_1\mid s^i_1, c_1)$. However, they don't satisfy condition (ii) of Definition \ref{def:sufficient-part1} because $\prob(z_2\mid p_1, c_1, a_1) = \prob(x_1\mid y^1_1, y^2_1) = \mathds{1}(x_1 = y^1_1 \text{ XOR } y^2_1) \neq \prob(z_2\mid s_1, c_1, a_1) = \prob(x_1) = 0.5$.
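The probabilities in this counterexample can be checked by direct enumeration. The following sketch is illustrative only (the function names are ours, not part of the formal development); it confirms that each agent's posterior on $X_1$ is uniform given its own observation, while $Z_2$ is deterministic given both private observations:

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)
# Joint distribution of (Y^1_1, Y^2_1): independent, uniform on {0, 1}.
joint = {(y1, y2): half * half for y1, y2 in product([0, 1], repeat=2)}

def p_x_given_yi(x, i, y):
    # P(X_1 = x | Y^i_1 = y) with X_1 = Y^1_1 XOR Y^2_1 (i = 0 or 1).
    keys = [k for k in joint if k[i] == y]
    num = sum(joint[k] for k in keys if (k[0] ^ k[1]) == x)
    den = sum(joint[k] for k in keys)
    return num / den

# S^1_1 = S^2_1 = empty is sufficient: either agent's posterior on X_1,
# given only its own observation, is uniform.
assert all(p_x_given_yi(x, i, y) == half
           for x in [0, 1] for i in [0, 1] for y in [0, 1])

def p_z2_given_private(z, y1, y2):
    # P(Z_2 = z | y^1_1, y^2_1): deterministic, since Z_2 = X_1.
    return Fraction(int((y1 ^ y2) == z))

# Deterministic given both private observations, but uniform given the
# (empty) sufficient information -- so the dynamics condition fails.
assert p_z2_given_private(1, 1, 0) == 1 and p_z2_given_private(1, 1, 1) == 0
assert sum(joint[k] for k in joint if (k[0] ^ k[1]) == 1) == half
```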
\subsection{Proof of the generalized better reply secure property for the augmented stage-game}
\label{appendix:example}
We show that when $c > 24$ the augmented stage-game $\hat G_1$ in Section \ref{sec:example} is generalized better reply secure. To this end, we set $\beta^*(q) = \mathds{1}(q \leq 1/3)$ and consider the following five cases.
\begin{enumerate}[{Case} (i), leftmargin=1.2cm]
\item $r^0_1(\bar\alpha, \bar q) \neq 0$.
In this case Bayes' rule doesn't hold at $(\bar\alpha, \bar q)$. We focus on agent $0$ and select the belief to satisfy Bayes' rule as follows:
\begin{align}
\phi^0(\tilde\alpha, \tilde q) = (
\tilde \alpha_2 p + \tilde \alpha_1 (1-p), \tilde \alpha_2 (1 - p) + \tilde \alpha_1 p)
\end{align}
Then this $\phi^0$ is a closed correspondence. From this construction of $\phi^0$, we can pick $\epsilon > 0$ such that
\begin{align*}
r^0_1(\tilde\alpha, \phi^0(\tilde\alpha, \tilde q))
= 0 > r^0_1(\bar\alpha, \bar q ) + \epsilon
\end{align*}
\item $r^0_1(\bar\alpha, \bar q) = 0$, $\bar q_{-1} \neq 1/3$ and $\bar q_1 \neq 1/3$.
Since $\beta^*(q) = 1$ if $q < 1/3$ and $\beta^*(q) = 0$ if $q > 1/3$, $\beta^*(\cdot)$ is continuous at every point where $q \neq 1/3$. Hence, we can find $\epsilon > 0$ s.t. $\beta^*(\tilde q_{-1}) = \beta^*(\bar q_{-1})$ for all $\tilde q_{-1} \in (\bar q_{-1} - \epsilon, \bar q_{-1} + \epsilon)$, and $\beta^*(\tilde q_{1}) = \beta^*(\bar q_{1})$ for all $\tilde q_{1} \in (\bar q_{1} - \epsilon, \bar q_{1} + \epsilon)$. In this region we have
\begin{align}
r^A_1(\alpha, \tilde q) = r^A_1(\alpha, \bar q)
\end{align}
for all $\alpha$. Let
\begin{align}
\phi^A(\tilde\alpha, \tilde q) = \argmax_\alpha r^A_1(\alpha, \tilde q)
\end{align}
Because $r^A_1(\cdot)$ is continuous in the region under consideration, $\phi^A(\cdot)$ has a closed graph from Berge's maximum theorem. Note that for all $\tilde q_{-1} \in (\bar q_{-1} - \epsilon, \bar q_{-1} + \epsilon)$, $\tilde q_{1} \in (\bar q_{1} - \epsilon, \bar q_{1} + \epsilon)$
\begin{align}
r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) = \max_\alpha r^A_1(\alpha, \tilde q) = \max_\alpha r^A_1(\alpha, \bar q)
\end{align}
If $\max_\alpha r^A_1(\alpha, \bar q) > r^A_1(\bar\alpha, \bar q)$ we can find $\epsilon > 0$ such that for $\tilde q_{-1} \in (\bar q_{-1} - \epsilon, \bar q_{-1} + \epsilon)$, $\tilde q_{1} \in (\bar q_{1} - \epsilon, \bar q_{1} + \epsilon)$, $ r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) = \max_\alpha r^A_1(\alpha, \tilde q) \geq r^A_1(\bar\alpha, \bar q) + \epsilon$.
If $\max_\alpha r^A_1(\alpha, \bar q) = r^A_1(\bar\alpha, \bar q)$, then Alice has no profitable deviation. Furthermore, since $r^0_1(\bar\alpha, \bar q) = 0$, agent $0$ has no profitable deviation. Consequently, $(\bar\alpha, \bar q)$ is an equilibrium if $\max_\alpha r^A_1(\alpha, \bar q) = r^A_1(\bar\alpha, \bar q)$.
\item $r^0_1(\bar\alpha, \bar q) = 0$, $\bar q_{-1} = 1/3$ and $\bar q_1 \neq 1/3$.
Note that $\bar q_{-1} = 0.8\bar\alpha_1 + 0.2\bar\alpha_2 = 1/3$ and $\beta^*(\bar q_{-1}) = 1$.
Since $\bar q_1 \neq 1/3$, we can find $\epsilon > 0$ s.t. $\beta^*(\tilde q_{1}) = \beta^*(\bar q_{1})$ for all $\tilde q_{1} \in (\bar q_{1} - \epsilon, \bar q_{1} + \epsilon)$. Therefore,
\begin{align}
r^A_1(\bar\alpha, \bar q) = 0.5c(1 - \bar\alpha_1 + \bar\alpha_2) + 0.5(2 - \bar\alpha_1 - \bar\alpha_2) + 0.5(3\bar q_{1} - 1) \beta^*(\bar q_1)
\end{align}
Pick for Alice
\begin{align}
\phi^A(\tilde\alpha, \tilde q) = (0, 1)
\end{align}
for all $\tilde \alpha_i \in (\bar \alpha_i - \epsilon, \bar \alpha_i + \epsilon), i=1,2$, $\tilde q_i \in (\bar q_i - \epsilon, \bar q_i + \epsilon), i=-1,1$. We get
\begin{align}
r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) = &c + 0.5 + 0.5( 0.6 - 1)\beta^*(\tilde q_{-1}) + 0.5(2.4 - 1)\beta^*(\tilde q_{1})
\notag\\
= &c + 0.5 - 0.2\beta^*(\tilde q_{-1}) + 0.7\beta^*(\bar q_{1})
\end{align}
and
\begin{align}
& r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) - r^A_1(\bar\alpha, \bar q) - \epsilon
\notag\\
= & 0.5c(1 + \bar\alpha_1 - \bar\alpha_2) -0.5(1 - \bar\alpha_1 - \bar\alpha_2)
\notag\\
& - 0.2\beta^*(\tilde q_{-1}) + 0.5(2.4 - 3\bar q_1)\beta^*(\bar q_{1}) - \epsilon
\notag\\
\geq & 0.5c(1 + \bar\alpha_1 - \bar\alpha_2) - 0.5 - 0.2 - 0.5 * 0.6 - \epsilon
\end{align}
When $\bar q_{-1} = 1/3$, then $0.8\bar\alpha_1 + 0.2\bar\alpha_2 = 1/3 \Rightarrow \bar \alpha_1 = 5/12 - 3/12 \bar \alpha_2$. Therefore,
\begin{align}
1 + \bar \alpha_1 -\bar \alpha_2 = 17/12 - 15/12 \bar \alpha_2 \geq 1/6
\end{align}
where the minimum is at $\bar\alpha_1 = 1/6$ and $\bar\alpha_2=1$.
When $c > 24$, then
\begin{align}
0.5c(1 + \bar \alpha_1 -\bar \alpha_2) \geq c/12 > 2
\end{align}
and
$r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) - r^A_1(\bar\alpha, \bar q) - \epsilon > 0$.
\item $r^0_1(\bar\alpha, \bar q) = 0$, $\bar q_1 = 1/3$ and $\bar q_{-1} \neq 1/3$.
This case is similar to case (iii).
Since $\bar q_{-1} \neq 1/3$, we can find $\epsilon > 0$ s.t. $\beta^*(\tilde q_{-1}) = \beta^*(\bar q_{-1})$ for all $\tilde q_{-1} \in (\bar q_{-1} - \epsilon, \bar q_{-1} + \epsilon)$. Furthermore,
\begin{align}
& r^A_1(\bar\alpha, \bar q)
\notag\\
= & 0.5c(1 - \bar\alpha_1 + \bar\alpha_2) + 0.5(2 - \bar\alpha_1 - \bar\alpha_2) + 0.5(3\bar q_{-1} - 1) \beta^*(\bar q_{-1})
\end{align}
Pick for Alice the closed correspondence (as in case (iii))
\begin{align}
\phi^A(\tilde\alpha, \tilde q) = (0, 1)
\end{align}
for all $\tilde \alpha_i \in (\bar \alpha_i - \epsilon, \bar \alpha_i + \epsilon), i=1,2$, $\tilde q_i \in (\bar q_i - \epsilon, \bar q_i + \epsilon), i=-1,1$. Then
\begin{align}
& r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q)
\notag\\
= &c + 0.5 - 0.2\beta^*(\bar q_{-1}) + 0.7\beta^*(\tilde q_{1})
\end{align}
and
\begin{align}
& r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) - r^A_1(\bar\alpha, \bar q) - \epsilon
\notag\\
= & 0.5c(1 + \bar\alpha_1 - \bar\alpha_2) -0.5(1 - \bar\alpha_1 - \bar\alpha_2)
\notag\\
& + 0.5( 0.6 - 3\bar q_{-1})\beta^*(\bar q_{-1}) + 0.7\beta^*(\tilde q_{1}) - \epsilon
\notag\\
\geq & 0.5c(1 + \bar\alpha_1 - \bar\alpha_2) - 0.5 - 0.5 * 2.4 - \epsilon
\end{align}
When $\bar q_{1} = 1/3$, $0.2\bar\alpha_1 + 0.8\bar\alpha_2 = 1/3 \Rightarrow \bar \alpha_2 = 5/12 - 3/12 \bar \alpha_1$. Therefore,
\begin{align}
1 + \bar \alpha_1 -\bar \alpha_2 = 7/12 + 15/12 \bar \alpha_1 \geq 7/12.
\end{align}
When $c > 24$, then
\begin{align}
0.5c(1 + \bar \alpha_1 -\bar \alpha_2) \geq 7/24 c > 2.7
\end{align}
and
$r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) - r^A_1(\bar\alpha, \bar q) - \epsilon > 0$.
\item $r^0_1(\bar\alpha, \bar q) = 0$, $\bar q_1 = 1/3$ and $\bar q_{-1} = 1/3$.
We have
\begin{align}
r^A_1(\bar\alpha, \bar q) = 0.5c(1 - \bar\alpha_1 + \bar\alpha_2) + 0.5(2 - \bar\alpha_1 - \bar\alpha_2)
\end{align}
Pick for Alice the closed correspondence (as in cases (iii) and (iv))
\begin{align}
\phi^A(\tilde\alpha, \tilde q) = (0, 1)
\end{align}
for all $\tilde \alpha_i \in (\bar \alpha_i - \epsilon, \bar \alpha_i + \epsilon), i=1,2$, $\tilde q_i \in (\bar q_i - \epsilon, \bar q_i + \epsilon), i=-1,1$. Then
\begin{align}
& r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) - r^A_1(\bar\alpha, \bar q) - \epsilon
\notag\\
= &0.5c(1 + \bar\alpha_1 - \bar\alpha_2) -0.5(1 - \bar\alpha_1 - \bar\alpha_2) -0.2\beta^*(\tilde q_{-1}) + 0.7\beta^*(\tilde q_{1}) - \epsilon
\notag\\
\geq & 0.5c(1 + \bar\alpha_1 - \bar\alpha_2) - 0.5 - 0.2 - \epsilon
\end{align}
Then we have $r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) - r^A_1(\bar\alpha, \bar q) - \epsilon > 0$ following the steps in (iv).
\end{enumerate}
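The key step in case (iii) — that when $c > 24$ the deviation $(\bar\alpha_1, \bar\alpha_2) = (0, 1)$ is strictly profitable along the line $\bar q_{-1} = 1/3$ — can be checked numerically. The sketch below is illustrative only: $\beta^*$, the augmented-game utility $r^A_1$ and $p = 0.2$ are taken from the text, $c = 25$ is one arbitrary value above $24$, and the Python names are ours.

```python
from fractions import Fraction

p, c = Fraction(1, 5), Fraction(25)      # c > 24, as assumed in case (iii)
third = Fraction(1, 3)

def beta(q):
    # Bob's SIB strategy beta*(q) = 1{q <= 1/3} fixed in the appendix.
    return Fraction(int(q <= third))

def rA(a1, a2, qm1, q1):
    # Alice's augmented-game utility r^A_1(alpha, q) with p = 0.2.
    a1, a2 = Fraction(a1), Fraction(a2)  # keep arithmetic exact
    return (c * (1 - a1 + a2) / 2 + (2 - a1 - a2) / 2
            + (3 * (a2 * p + a1 * (1 - p)) - 1) / 2 * beta(qm1)
            + (3 * (a2 * (1 - p) + a1 * p) - 1) / 2 * beta(q1))

# Along the line q_{-1} = 1/3 we have alpha_1 = 5/12 - alpha_2/4;
# deviating to (alpha_1, alpha_2) = (0, 1) is strictly profitable.
for k in range(11):
    a2 = Fraction(k, 10)
    a1 = Fraction(5, 12) - a2 / 4
    qm1 = a2 * p + a1 * (1 - p)
    q1 = a2 * (1 - p) + a1 * p
    assert qm1 == third
    assert rA(0, 1, qm1, q1) > rA(a1, a2, qm1, q1)
```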
\section{An illustrative example (Step 4)}
\label{sec:example}
In Section \ref{sec:sequential_decomposition} we argued (cf. Remark \ref{remark:no_dp_solution}) that the sequential decomposition equations defined by \eqref{eq:dp_last}-\eqref{eq:dp_bne_eq} for all $i\in\mathcal N, t\in\mathcal T$ may not have a solution, and that the value functions defined by \eqref{eq:dp_last}-\eqref{eq:dp_bne_eq} may not be continuous in the CIB belief $\Pi^{\psi^\sigma}_t$ (cf. Remark \ref{remark:valuefunction_discont}). In this section we present an example that illustrates the above remarks. In the example, a two-stage stochastic dynamic game, the agents' utilities depend on a parameter $c$. We show that: (i) the value functions of the corresponding sequential decomposition equations are not continuous in the CIB belief $\Pi^{\psi^\sigma}_t$; (ii) for certain values of $c$ a SIB-BNE exists.
\subsection{Model}
We consider the following two-stage stochastic dynamic game. There are two players/agents, Alice and Bob. At stage one, $t = 1$, the system's state $X_1$ is distributed on $\{-1, 1\}$ with $\mu_0(-1) = \prob(X_1 = -1) = 0.5$ and $\mu_0(1) = \prob(X_1 = 1)=0.5$. Alice observes $X_1$ perfectly, i.e., $Y^{Alice}_1=X_1$, and takes action $A^{Alice}_1 \in
\{-1, 1\}$; $A^{Alice}_1$ is not observable by Bob and $Y^{Bob}_1=\emptyset$. Bob does not act at $t=1$.
At stage $2$, $t=2$, the system state is $X_2 = X_1 A^{Alice}_1$.
Alice and Bob have a common observation $Z_2 = X_2A^{Alice}_1W_1 = X_1W_1$, where $W_1 \in \{-1, 1\}$ and $\prob(Z_2=i \mid X_1 = i) = 1- p = 0.8$, $i \in \{-1, 1\}$, and there are no private observations, i.e., $Y^{Alice}_2=Y^{Bob}_2=\emptyset$.
Here $p = 0.2 = \prob(W_1 = -1)$.
Bob acts at $t=2$. Alice does not act at $t=2$.
Bob's action $A^{Bob}_2 \in \{-1, 1\}$. Alice's payoffs at $t=1$ and $t=2$ are
\begin{align}
u^{Alice}_1(X_1, A_1) = &\left\{
\begin{array}{ll}
c& \text{ if }A^{Alice}_1 = 1 \\
0& \text{ if }A^{Alice}_1 = -1
\end{array}
\right.
\end{align}
and
\begin{align}
u^{Alice}_2(X_2, A_2) = &\left\{
\begin{array}{ll}
2& \text{ if }X_2 = 1, A^{Bob}_2=1\\
1& \text{ if }X_2 = -1, A^{Bob}_2=-1\\
0& \text{ otherwise }
\end{array}
\right.
\end{align}
respectively. Bob's payoffs are $u^{Bob}_t(X_t, A_t) = -u^{Alice}_t(X_t, A_t), t= 1, 2$.
The game's information structure is
\begin{align}
H^{Alice}_1 = & \{X_1\} \\
H^{Alice}_2 = & \{X_1, A^{Alice}_1, X_2, Z_2\} \\
H^{Bob}_1 = & \emptyset \\
H^{Bob}_2 = & \{Z_2\}
\end{align}
where $H^{Alice}_t, H^{Bob}_t, t=1,2$, describe the information available to Alice and Bob, respectively, at stages $1$ and $2$.
This example has the same dynamics and utility functions as Example 3 in \cite{tang2022dynamic}, but, unlike in \cite[Example 3]{tang2022dynamic}, Bob doesn't observe Alice's action.
\subsection{Sequential decomposition}
Since Alice perfectly knows the state at both times, i.e., $Y^{Alice}_1 = X_1$ and $X_2 = X_1 A^{Alice}_1$ is determined by her information, and Bob doesn't have private information, $S^{Alice}_1 = X_1, S^{Bob}_1 = \emptyset$ are sufficient private information for Alice and Bob at stage $t=1$, respectively, and $S^{Alice}_2 = X_2, S^{Bob}_2 = \emptyset$ are sufficient private information for Alice and Bob, respectively, at stage $t=2$ according to Definition \ref{def:sufficient}.
Suppose $\sigma = (\sigma_1, \sigma_2) = (\sigma^{Alice}_1, \sigma^{Bob}_2)$ is a SIB strategy and $\psi^\sigma$ is the corresponding update rule. Here $\sigma$ is an equilibrium strategy candidate which serves as the strategy prediction for Alice and Bob.
Note that
$\Pi_1^{\psi^\sigma, Alice}(x_1) = \mu_0(x_1)$ and $\Pi_1^{\psi^\sigma, Bob}(x_1) = \mu_0(x_1)$ for all $x_1 \in \mathcal X_1$.
To get a BNE using the sequential decomposition of Theorem \ref{thm:sequential_decomposition}, we first consider the stage-game $G_2(0, \pi_2^{\psi^\sigma})$ at time $2$. Since Bob is the only agent who acts at time $2$ and $S^{Bob}_2 = \emptyset$, any BNE $\sigma_2$ of $G_2(0, \pi_2^{\psi^\sigma})$ must satisfy
\begin{align}
\hat\sigma_2^{Bob} = & \argmax_{\tilde\sigma^{Bob}_2} \ee^{\tilde\sigma^{Bob}_2, \psi^\sigma} [ u^{Bob}_2(X_2, A_2)]
\notag\\
= & \argmax_{\tilde\sigma^{Bob}_2}\Big(
-2 \prob^{\tilde\sigma^{Bob}_2, \psi^\sigma}(X_2 = A^{Bob}_2 = 1)
-
\prob^{\tilde\sigma^{Bob}_2, \psi^\sigma}(X_2 = A^{Bob}_2 = -1)
\Big)
\notag\\
= & \argmax_{\tilde\sigma^{Bob}_2}\Big(
-2 \pi^{\psi^\sigma,Bob}_2(1)\tilde\sigma^{Bob}_2(1 \mid \pi_2^{\psi^\sigma})
\notag\\
& \hspace{2cm} - (1 - \pi^{\psi^\sigma,Bob}_2(1))(1 - \tilde\sigma^{Bob}_2(1 \mid \pi_2^{\psi^\sigma}))
\Big)
\label{eq:example_bne_time2}
\end{align}
From \eqref{eq:example_bne_time2} we conclude that
one of the equilibrium SIB strategies is given by
\begin{align*}
& \sigma^{Bob}_2(\pi_2^{\psi^\sigma}) = 1 \text{, if } \pi^{\psi^\sigma,Bob}_2(1) \leq 1/3,
\\
& \sigma^{Bob}_2(\pi_2^{\psi^\sigma}) = 0 \text{, if } \pi^{\psi^\sigma,Bob}_2(1) > 1/3,
\end{align*}
or equivalently
\begin{align}
& \sigma^{Bob}_2(\pi_2^{\psi^\sigma}) = \mathds{1}(\pi^{\psi^\sigma,Bob}_2(1) \leq 1/3)
\label{eq:bob_bestresponse}
\end{align}
Note that $\sigma^{Bob}_2(\pi_2^{\psi^\sigma})$ can take any value in $[0, 1]$ if $\pi^{\psi^\sigma,Bob}_2(1) = 1/3$ and $\sigma_2$ is still a BNE of the stage-game.
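Bob's threshold at $1/3$ can be verified by comparing his expected payoffs for the two actions as a function of $\pi = \pi^{\psi^\sigma,Bob}_2(1)$; the sketch below is a quick illustrative check (the function names are ours):

```python
from fractions import Fraction

def bob_payoff(pi, a):
    # E[u^Bob_2 | A^Bob_2 = a] with pi = P(X_2 = 1); u^Bob_2 = -u^Alice_2,
    # so Bob loses 2 when (X_2, A) = (1, 1) and 1 when (X_2, A) = (-1, -1).
    return -2 * pi if a == 1 else -(1 - pi)

def bob_best_response(pi):
    return 1 if bob_payoff(pi, 1) >= bob_payoff(pi, -1) else -1

third = Fraction(1, 3)
assert bob_best_response(third - Fraction(1, 100)) == 1    # pi < 1/3: play 1
assert bob_best_response(third + Fraction(1, 100)) == -1   # pi > 1/3: play -1
assert bob_payoff(third, 1) == bob_payoff(third, -1)       # indifferent at 1/3
```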
Alice's sufficient private information at time $2$ is $S^{Alice}_2 = X_2$.
With the stage-game equilibrium SIB strategy $\sigma^{Bob}_2(\pi_2)$ given by \eqref{eq:bob_bestresponse}, the value function for Alice at $t=2$ is then given, according to \eqref{eq:dp_valupdate}, by
\begin{align}
V^{Alice}_2(\pi_2^{\psi^\sigma}, x_2) = & \ee^{\sigma_2, \psi^\sigma} [ u^{Alice}_2(X_2, A_2) \mid x_2]
\notag\\
= & \left\{
\begin{array}{ll}
2 \mathds{1}(\pi^{\psi^\sigma,Bob}_2(1) \leq 1/3) & \text{if } x_2 = 1
\\
1 - \mathds{1}(\pi^{\psi^\sigma,Bob}_2(1) \leq 1/3) & \text{if } x_2 = - 1
\end{array}\right.
\end{align}
Given the above value functions at time $t=2$, we now consider the stage-game $G_1(V_2, \pi^{\psi^\sigma}_1)$ at time $t=1$.
The utility for the stage-game for Alice is given as follows.
\begin{align}
U^{Alice}_{G_1(V_2, \pi^{\psi^\sigma}_1)}
= u^{Alice}_1(X_1, A_1) + V^{Alice}_2(\psi^\sigma_2(\pi_1, Z), X_2)
\end{align}
If Alice uses the SIB strategy $\tilde \sigma^{Alice}_1$, the expected utility of the stage-game can be calculated for $X_1=-1$ and $X_1=1$, according to \eqref{eq:stagegame_exputil}, by
\begin{align}
& \ee^{\tilde\sigma^{Alice}_1, \psi^\sigma}[
U^{Alice}_{G_1(V_2, \pi^{\psi^\sigma}_1)} \mid X_1 = -1]
\notag\\
= & c\tilde\sigma^{Alice}_1(1\mid -1) + \ee^{\tilde\sigma^{Alice}_1, \psi^\sigma}[V^A_2(\psi_{2}^\sigma(\pi_1^{\psi^\sigma}, X_1 W_1), X_1 A^{Alice}_1) \mid X_1 = -1]
\notag\\
= & (1 + c)(1 - \tilde\alpha_1) + (3\tilde\alpha_1 - 1)((1-p)
\mathds{1}(q_{-1} \leq 1/3)
+ p\mathds{1}(q_{1} \leq 1/3) )
\notag\\
=: & r^A_{-1}(\tilde\alpha_1, q)
\\
& \ee^{\tilde\sigma^{Alice}_1, \psi^\sigma}[
U^{Alice}_{G_1(V_2, \pi^{\psi^\sigma}_1)} \mid X_1 = 1]
\notag\\
= & c\tilde\sigma^{Alice}_1(1\mid 1) + \ee^{\tilde\sigma^{Alice}_1, \psi^\sigma}[V^A_2(\psi^\sigma_{2}(\pi_1^{\psi^\sigma}, X_1 W_1), X_1 A^{Alice}_1) \mid X_1 = 1]
\notag\\
= & 1 + (c - 1)\tilde\alpha_2 + (3\tilde\alpha_2 - 1)((1-p)\mathds{1}(q_{1} \leq 1/3) + p\mathds{1}(q_{-1} \leq 1/3) )
\notag\\
=: & r^A_{1}(\tilde\alpha_2, q)
\end{align}
where $q = (q_{-1}, q_1)$, $q_{-1} = \psi_{2}^{\sigma, Bob}(\pi_1^{\psi^\sigma}, -1)(1)$ and $q_1= \psi_{2}^{\sigma, Bob}(\pi_1^{\psi^\sigma}, 1)(1)$ are the CIB beliefs $\pi^{\psi^\sigma,Bob}_2(1)$ of $\{X_2 = 1\}$ when $Z=-1$ and $Z=1$, respectively, and $\tilde\alpha = (\tilde\alpha_1, \tilde\alpha_2)$, $\tilde\alpha_1 = \tilde\sigma^{Alice}_1(-1\mid -1), \tilde\alpha_2 = \tilde\sigma^{Alice}_1(1\mid 1)$ represents Alice's SIB strategy $\tilde\sigma^{Alice}_1$.
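The closed forms for $r^A_{-1}$ and $r^A_{1}$ can be verified by enumerating $(A^{Alice}_1, W_1)$ given $X_1$. The following sketch is illustrative only: it fixes $c = 10$ and arbitrary belief values, and the Python names are ours.

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 5)
c = Fraction(10)          # arbitrary value of the parameter c for this check
third = Fraction(1, 3)

def V2(q_z, x2):
    # Alice's stage-2 value when Bob plays the SIB strategy 1{q <= 1/3}.
    b = int(q_z <= third)
    return 2 * b if x2 == 1 else 1 - b

def expected_util(x1, prob_play_one, qm1, q1):
    # E[u^Alice_1 + V^Alice_2 | X_1 = x1], enumerating (A^Alice_1, W_1).
    total = Fraction(0)
    for a, w in product([-1, 1], repeat=2):
        pa = prob_play_one if a == 1 else 1 - prob_play_one
        pw = p if w == -1 else 1 - p
        u1 = c if a == 1 else 0
        z = x1 * w                        # common observation Z_2 = X_1 W_1
        q_z = qm1 if z == -1 else q1
        total += pa * pw * (u1 + V2(q_z, x1 * a))
    return total

qm1, q1 = Fraction(1, 4), Fraction(1, 2)  # arbitrary test beliefs
B = (1 - p) * int(qm1 <= third) + p * int(q1 <= third)
Bp = (1 - p) * int(q1 <= third) + p * int(qm1 <= third)
for t in [Fraction(0), Fraction(1, 3), Fraction(1)]:
    # r^A_{-1}(t, q): t = P(A = -1 | X_1 = -1), so Alice plays 1 w.p. 1 - t.
    assert expected_util(-1, 1 - t, qm1, q1) == (1 + c) * (1 - t) + (3 * t - 1) * B
    # r^A_{1}(t, q): t = P(A = 1 | X_1 = 1).
    assert expected_util(1, t, qm1, q1) == 1 + (c - 1) * t + (3 * t - 1) * Bp
```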
Note that from Bayes' rule in Definition \ref{def:CIB_update_n}, under the SIB strategy $\sigma^{Alice}_1$, represented by $\alpha_1 = \sigma^{Alice}_1(-1\mid -1)$ and $\alpha_2 = \sigma^{Alice}_1(1\mid 1)$, we have
\begin{align}
& q_{-1} = \psi_{2}^{\sigma, Bob}(\pi_1^{\psi^\sigma}, -1)(1) = \frac{\prob^{\alpha}(X_2 = 1, Z= -1)}{\prob^{\alpha}(Z = -1)} = \alpha_2 p + \alpha_1 (1-p)
\label{eq:qm1}
\\
& q_1 = \psi_{2}^{\sigma, Bob}(\pi_1^{\psi^\sigma}, 1)(1) = \frac{\prob^{\alpha}(X_2 = 1, Z= 1)}{\prob^{\alpha}(Z = 1)} = \alpha_2 (1 - p) + \alpha_1 p
\label{eq:q1}
\end{align}
Therefore, a SIB strategy $\hat\sigma^{Alice}_1$, represented by $\hat\alpha_1 = \hat\sigma^{Alice}_1(-1\mid -1)$ and $\hat\alpha_2 = \hat\sigma^{Alice}_1(1\mid 1)$, is a BNE of the stage-game $G_1(V_2, \pi^{\psi^\sigma}_1)$ at time $t=1$ if
\begin{align}
& \hat\alpha_1 \in
\argmax_{\tilde\alpha_1}
r^A_{-1}(\tilde\alpha_1, ( \alpha_2 p + \alpha_1 (1-p), \alpha_2 (1 - p) + \alpha_1 p ))
\\
& \hat\alpha_2 \in
\argmax_{\tilde\alpha_2}
r^A_{1}(\tilde\alpha_2, ( \alpha_2 p + \alpha_1 (1-p), \alpha_2 (1 - p) + \alpha_1 p ))
\end{align}
Consequently, the SIB strategy $\sigma^{Alice}_1$, represented by $\alpha_1 = \sigma^{Alice}_1(-1\mid -1)$ and $\alpha_2 = \sigma^{Alice}_1(1\mid 1)$ will satisfy the sequential decomposition equations \eqref{eq:dp_bne_max}-\eqref{eq:dp_bne_eq} if
\begin{align}
& \alpha_1 \in
\argmax_{\tilde\alpha_1}
r^A_{-1}(\tilde\alpha_1, ( \alpha_2 p + \alpha_1 (1-p), \alpha_2 (1 - p) + \alpha_1 p ))
\label{eq:br_rm1}
\\
& \alpha_2 \in
\argmax_{\tilde\alpha_2}
r^A_{1}(\tilde\alpha_2, ( \alpha_2 p + \alpha_1 (1-p), \alpha_2 (1 - p) + \alpha_1 p ))
\label{eq:br_r1}
\end{align}
\begin{remark}
Note that the functions $r^A_{-1}(\tilde\alpha_1, q)$ and $r^A_{1}(\tilde\alpha_2, q)$ are not continuous in $q$. Thus the existence of equilibria cannot be established by the standard method relying on the continuity of the utility functions, and there may be no equilibria in general.
\end{remark}
\subsection{Existence of SIB-BNE under conditions on the instantaneous utility.}
The stage-game $G_1(V_2, \pi^{\psi^\sigma}_1)$ is a normal-form game with a fixed $\sigma_1$.
According to Remark \ref{remark:stage-game}, a BNE $\hat \sigma$ of $G_1(V_2, \pi^{\psi^\sigma}_1)$ could be different from $\sigma_1$ and the existence of a regular BNE of $G_1(V_2, \pi^{\psi^\sigma}_1)$ is not sufficient to satisfy \eqref{eq:dp_bne_eq} at time $t=1$. In order to apply equilibrium existence results for normal-form games to the sequential decomposition at time $t=1$, we introduce an agent $0$ who picks the $q$-belief $q = (q_{-1}, q_{1})$ so that \eqref{eq:dp_bne_eq} is satisfied.
Formally, we construct an augmented stage-game $\hat G_1$ between Alice and agent $0$. Alice chooses $\tilde\alpha = (\tilde\alpha_1, \tilde\alpha_2)$ and agent $0$ chooses $\tilde q = (\tilde q_{-1}, \tilde q_{1})$.
Alice's utility is
\begin{align}
r^A_1(\tilde \alpha, \tilde q)
= & 0.5 r^A_{-1}(\tilde \alpha_1, \tilde q) + 0.5 r^A_{1}(\tilde \alpha_2, \tilde q)
\notag\\
= & 0.5c(1 - \tilde \alpha_1 + \tilde \alpha_2) + 0.5(2 - \tilde \alpha_1-\tilde \alpha_2)
\notag\\
& + 0.5(3(\tilde \alpha_2 p + \tilde \alpha_1 (1-p))- 1)\mathds{1}(\tilde q_{-1} \leq 1/3)
\notag\\
& + 0.5(3 (\tilde \alpha_2 (1 - p) + \tilde \alpha_1 p) - 1)\mathds{1}(\tilde q_{1} \leq 1/3).
\label{eq:agument_alice}
\end{align}
Agent $0$'s utility is
\begin{align}
r^0_1(\tilde\alpha, \tilde q) =
-(\tilde q_{-1} - \tilde \alpha_2 p - \tilde \alpha_1 (1-p) )^2
- (\tilde q_{1} - \tilde \alpha_2 (1 - p) - \tilde \alpha_1 p)^2.
\label{eq:agument_zero}
\end{align}
Both Alice and agent $0$ are utility maximizers.
The game $\hat G_1$ with utilities \eqref{eq:agument_alice}-\eqref{eq:agument_zero} is a normal-form game with strategies $\tilde \alpha = (\tilde \alpha_1, \tilde \alpha_2)$ and $\tilde q = (\tilde q_{-1}, \tilde q_{1})$.
Since the utility \eqref{eq:agument_zero} of agent $0$ is a strictly concave quadratic function of $\tilde q$, any best response by agent $0$ must satisfy $\tilde q_{-1} = \tilde \alpha_2 p + \tilde \alpha_1 (1-p)$ and $\tilde q_{1} = \tilde \alpha_2 (1 - p) + \tilde \alpha_1 p$.
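To illustrate, agent $0$'s best response can be checked numerically: the sketch below (with illustrative values for $p$ and for Alice's mixed actions, none of which are prescribed by the example) recovers the stated affine map by grid search.

```python
import itertools

def r0(alpha1, alpha2, q_m1, q_p1, p):
    """Agent 0's utility (negative squared distance to the induced belief)."""
    return -(q_m1 - alpha2 * p - alpha1 * (1 - p)) ** 2 \
           - (q_p1 - alpha2 * (1 - p) - alpha1 * p) ** 2

p = 0.7  # illustrative value; p is a fixed model parameter
grid = [k / 100 for k in range(101)]
for alpha1, alpha2 in [(0.2, 0.9), (0.5, 0.5), (1.0, 0.0)]:
    # Grid-search agent 0's best response to (alpha1, alpha2).
    best = max(itertools.product(grid, grid),
               key=lambda q: r0(alpha1, alpha2, q[0], q[1], p))
    # The affine map obtained from the quadratic first-order conditions.
    target = (alpha2 * p + alpha1 * (1 - p), alpha2 * (1 - p) + alpha1 * p)
    assert abs(best[0] - target[0]) < 1e-6
    assert abs(best[1] - target[1]) < 1e-6
```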
Note that in the augmented stage-game $\hat G_1$, the utility function $r^A_1(\tilde \alpha, \tilde q)$ is not continuous in $\tilde q$. To show the existence of a Nash equilibrium for $\hat G_1$, we proceed to apply existence results for games with discontinuous utilities in \cite{barelli2013note}.
Specifically, Proposition 2.4 of \cite{barelli2013note} guarantees the existence of a Nash equilibrium for games satisfying the generalized better reply secure property.
From Definition 2.3 in \cite{barelli2013note}, the stage game is generalized better reply secure if, for any $(\bar\alpha, \bar q)$ that is not an equilibrium, at least one of the following is true:
\begin{itemize}
\item We can find an $\epsilon > 0$ and a closed correspondence $\phi^0(\tilde\alpha, \tilde q)$ such that
\begin{align}
r^0_1(\tilde\alpha, \phi^0(\tilde\alpha, \tilde q)) \geq r^0_1(\bar\alpha, \bar q) + \epsilon
\end{align}
for all $\tilde\alpha_1 \in (\bar\alpha_1 - \epsilon, \bar\alpha_1 + \epsilon)$, $\tilde\alpha_2 \in (\bar\alpha_2 - \epsilon, \bar\alpha_2 + \epsilon)$, $\tilde q_{-1} \in (\bar q_{-1} - \epsilon, \bar q_{-1} + \epsilon)$, $\tilde q_{1} \in (\bar q_{1} - \epsilon, \bar q_{1} + \epsilon)$
\item We can find an $\epsilon > 0$ and a closed correspondence $\phi^A(\tilde\alpha, \tilde q)$ such that
\begin{align}
r^A_1(\phi^A(\tilde\alpha, \tilde q), \tilde q) \geq r^A_1(\bar\alpha, \bar q) + \epsilon
\end{align}
for all $\tilde\alpha_1 \in (\bar\alpha_1 - \epsilon, \bar\alpha_1 + \epsilon)$, $\tilde\alpha_2 \in (\bar\alpha_2 - \epsilon, \bar\alpha_2 + \epsilon)$, $\tilde q_{-1} \in (\bar q_{-1} - \epsilon, \bar q_{-1} + \epsilon)$, $\tilde q_{1} \in (\bar q_{1} - \epsilon, \bar q_{1} + \epsilon)$
\end{itemize}
In Appendix \ref{appendix:example}, we show that when $c > 24$ the augmented stage-game $\hat G_1$ is generalized better reply secure.
Thus, there exists a Nash equilibrium of the augmented stage-game $\hat G_1$ according to \cite[Proposition 2.4]{barelli2013note}.
Consider any Nash equilibrium $(\alpha, q)$ of $\hat G_1$. Since $q$ is a best response to $\alpha$ for agent $0$, from agent $0$'s utility \eqref{eq:agument_zero} we have
\begin{align}
& q_{-1} = \alpha_2 p + \alpha_1 (1-p)
\\
& q_1 = \alpha_2 (1 - p) + \alpha_1 p
\end{align}
Furthermore, since $\alpha$ is a best response to $q$ for Alice in $\hat G_1$,
\begin{align}
\alpha \in &\argmax_{\tilde \alpha }
\Big(
0.5 r^A_{-1}(\tilde \alpha_1, q) + 0.5 r^A_{1}(\tilde \alpha_2, q)
\Big)
\notag\\
= & \argmax_{\tilde \alpha }
\Big(
0.5 r^A_{-1}(\tilde \alpha_1, (\alpha_2 p + \alpha_1 (1-p), \alpha_2 (1 - p) + \alpha_1 p ))
\notag\\
& \hspace{2cm}+ 0.5 r^A_{1}(\tilde \alpha_2, (\alpha_2 p + \alpha_1 (1-p), \alpha_2 (1 - p) + \alpha_1 p ))
\Big)
\notag\\
= & \Big(\argmax_{\tilde \alpha_1 } r^A_{-1}(\tilde \alpha_1, (\alpha_2 p + \alpha_1 (1-p), \alpha_2 (1 - p) + \alpha_1 p )) ,
\notag\\
& \hspace{1cm} \argmax_{\tilde \alpha_2 }r^A_{1}(\tilde \alpha_2, (\alpha_2 p + \alpha_1 (1-p), \alpha_2 (1 - p) + \alpha_1 p ))
\Big)
\end{align}
Therefore, \eqref{eq:br_rm1}-\eqref{eq:br_r1} hold for $\alpha$, and consequently the sequential decomposition requirement \eqref{eq:dp_bne_max}-\eqref{eq:dp_bne_eq} is satisfied at $t=1$ by the SIB strategy $\sigma^{Alice}_1$ represented by $\alpha$, and we establish the existence of a SIB equilibrium based on Theorem \ref{thm:sequential_decomposition}.
\section{Introduction}
We study, in discrete time, a general class of sequential stochastic dynamic games with asymmetric information. We consider a setting where the underlying system has Markovian dynamics controlled by the agents’ joint actions. Each agent's instantaneous utility depends on the agents’ joint actions and the system state. At each time instant each agent makes a private noisy observation that depends on the current system state and the agents’ actions in the previous time instant.
In addition, at each time instant all agents may have a common noisy observation of the system state and their actions in the previous time instant.
The agents' actions are hidden, that is, each agent's actions are not directly observable by the other agents.
Therefore, at every time instant agents have asymmetric and imperfect information about the game's history.
Dynamic games with the above features arise in engineering (cybersecurity, transportation, energy markets), in economics (industrial organization), and in socio-technological applications.
As pointed out in \cite{tang2022dynamic}, the key challenges in the study of dynamic games with asymmetric information are:
(i) The domain of agents' strategies increases with time, as the agents acquire information over time. Thus, the computational complexity of the agents' strategies increases with time.
(ii) Due to signaling\footnote{
Signaling in games is more complex than signaling in teams because the agents have diverging incentives and their strategies are their own private information.
}
\citep{Ho:1980}, in many instances an agent's assessment of the game's status at time $t$, therefore his strategy at time $t$, depends on the strategies of agents who acted before him. Consequently, we cannot obtain the standard sequential decomposition (that sequentially determines the components of an equilibrium strategy profile) of the kind provided by the standard dynamic programming algorithm (where the agent's optimal strategy at any time $t$ does not depend on past strategies \cite[Chapter 6.5]{kumar1986stochastic}).
To address these challenges, we can look for equilibrium strategy profiles that are based on a compressed version of the agents' information and can be sequentially computed. However, such equilibrium strategy profiles may not exist.
In this paper we propose an approach, described in detail in Section \ref{sec:Methodology}, that addresses the above-stated challenges. According to this approach, we first compress the agents' private and common information at each time instant. Then, we define strategies based on the compressed information and show that Bayesian Nash Equilibria (BNE) based on these strategies can be determined sequentially in time moving backwards, if each step of this backwards procedure has a solution. Finally, we provide an example where a BNE strategy profile based on compressed information exists.
We show that the proposed approach works for the case where the agents have no common observations and their actions are hidden.
\subsection{Related Literature}
Dynamic games with asymmetric information have been extensively investigated in the literature in the context of repeated discounted games; see \cite{zamir1992repeated,forges1992repeated,aumann1995repeated,mailath2006repeated} and the references therein. The key feature of these games is the absence of a dynamic system. Moreover, the works on repeated games study primarily their asymptotic properties when the horizon is infinite and agents are sufficiently patient (i.e. the discount factor is close to one). In repeated games, agents play a stage (static) game repeatedly over time. The main objective of this strand of literature is to explore situations where agents can form self-enforcing punishment/reward mechanisms so as to create additional equilibria that improve upon the payoffs they can get by simply playing an equilibrium of the stage game over time. Recent works (see \cite{horner2011recursive,escobar2013efficiency,sugaya2012}) adopt approaches similar to those used in repeated games to study infinite horizon dynamic games with asymmetric information when there is an underlying dynamic Markovian system. Under certain conditions on the system dynamics and information structure, the authors of \cite{horner2011recursive,escobar2013efficiency,sugaya2012} characterize a set of asymptotic equilibria attained when the agents are sufficiently patient.
The problem we study in this paper is different from the ones in \cite{zamir1992repeated,forges1992repeated,aumann1995repeated,mailath2006repeated,horner2011recursive,escobar2013efficiency,sugaya2012} in two aspects. First, we consider a class of dynamic games where the underlying system has general Markovian dynamics and a general information structure, and we do not restrict attention to asymptotic behaviors when the horizon is infinite and the agents are sufficiently patient. Second, we study situations where the decision problem that each agent faces, in the absence of strategic interactions with other agents, is a Partially Observed Markov Decision Process (POMDP), which is a complex problem to solve by itself. Therefore, reaching (and computing) a set of equilibrium strategies, which take into account the strategic interactions among the agents, is a very challenging task. As a result, it is not very plausible for the agents to seek reaching equilibria that are generated by the formation of self-enforcing punishment/reward mechanisms similar to those used in infinitely repeated games. We believe that our results provide new insight into the behavior of strategic agents in complex and dynamic environments, and complement the existing results in the repeated games literature.
Stochastic dynamic zero-sum games with asymmetric information have been studied in \cite{renault2006value,cardaliaguet2015markov,gensbittel2015value,li2017solving,kartik2021upper,zheng2013decomposition,li2014lp}. The authors of \cite{renault2006value,cardaliaguet2015markov,zheng2013decomposition,li2014lp} study zero-sum games with Markovian dynamics and lack of information on one side (i.e. one informed and one uninformed agent).
The authors of \cite{gensbittel2015value,li2017solving,kartik2021upper} study zero-sum games with Markovian dynamics and lack of information on both sides.
The works of \cite{renault2006value,cardaliaguet2015markov,gensbittel2015value,li2017solving,kartik2021upper,zheng2013decomposition,li2014lp} consider specific information structures.
Specifically: the actions of both agents are publicly observed; in \cite{renault2006value,cardaliaguet2015markov,zheng2013decomposition,li2014lp} the informed agent observes perfectly the state of the dynamic system, the other agent has no direct observation of the system's state; in \cite{gensbittel2015value,li2017solving} each agent observes perfectly part of the system's state and the states observed by the two agents are either independent or conditionally independent (given the observed actions). The authors of \cite{kartik2021upper} consider a general information structure where each agent has some private information and the agents share some information about the dynamic system's state and their actions. The authors of \cite{renault2006value,cardaliaguet2015markov,gensbittel2015value,li2017solving,kartik2021upper,zheng2013decomposition,li2014lp} derive their results by taking advantage of properties of zero-sum games such as the interchangeability of equilibrium strategies and the unique value of the game. These properties do not extend to non-zero sum games. We study a general class of stochastic dynamic games that include zero-sum stochastic dynamic games with asymmetric information as a special case. We consider general Markovian dynamics for the underlying system in contrast to \cite{renault2006value,cardaliaguet2015markov,gensbittel2015value,li2017solving,zheng2013decomposition,li2014lp}, where the system has the special structure described above. We consider a general information structure that allows us to capture scenarios with unobservable actions and imperfect observations that are not captured by \cite{renault2006value,cardaliaguet2015markov,gensbittel2015value,li2017solving,zheng2013decomposition,li2014lp}.
The problems investigated in \cite{tang2022dynamic, nayyar2014Common, gupta2014common, ouyang2015CDC, ouyang2016TAC, vasal2016signaling, sinha2016structured, gupta2016dynamic,nayyar2013common} are the most closely related to our problem. The authors of \cite{nayyar2014Common, gupta2014common,gupta2016dynamic,nayyar2013common} study a class of dynamic games where the agents’ common information based belief (defined in \cite{nayyar2014Common}) is independent of their strategies, that is, there is no signaling among them. This property allows them to apply ideas from the common information approach developed in \cite{nayyar2011optimal, nayyar2013decentralized}, and define an equivalent dynamic game with symmetric information among fictitious agents. Consequently, they characterize a class of equilibria for dynamic games called Common Information based Markov Perfect Equilibria.
Our results are different from those in \cite{nayyar2014Common, gupta2014common,gupta2016dynamic,nayyar2013common} in two aspects. First, we consider a general class of dynamic games where the agents' CIB beliefs are strategy-dependent, thus, signaling is present. Second, the proposed approach in \cite{nayyar2014Common, gupta2014common,gupta2016dynamic,nayyar2013common} requires the agents to keep track of all of their private information over time. We propose an approach to effectively compress the agents’ private information, and consequently, reduce the number of variables which the agents need to form CIB beliefs.
The authors of \cite{tang2022dynamic, ouyang2015CDC, ouyang2016TAC, vasal2016signaling, sinha2016structured} study a class of dynamic games with asymmetric information where signaling occurs. When the horizon is finite, the authors of \cite{ouyang2015CDC, ouyang2016TAC} introduce the notion of Common Information Based Perfect Bayesian Equilibrium, and provide a sequential decomposition of the game over time. The authors of \cite{vasal2016signaling, sinha2016structured} extend the results of \cite{ouyang2015CDC, ouyang2016TAC} to finite horizon Linear-Quadratic-Gaussian (LQG) dynamic games and infinite horizon dynamic games, respectively.
The work of \cite{tang2022dynamic} extends the model of \cite{ouyang2016TAC} to games among teams of agents. Each agent has his own private information which he shares with the members of his own team with delay $d$; teams also have common information. The authors of \cite{tang2022dynamic} consider two classes of strategies: sufficient private information based (SPIB) strategies, which only compress private information, and sufficient private and common information based (SPCIB) strategies, which compress both common and private information. They show that SPIB-strategy-based BNE exist and the set of payoff profiles of such equilibria is the same as the set of all BNE. They develop a backward inductive sequential procedure, whose solution, if it exists, provides a SPCIB BNE, and identify instances which guarantee the existence of SPCIB BNE. The class of dynamic games studied in
\cite{tang2022dynamic, ouyang2015CDC, ouyang2016TAC, vasal2016signaling, sinha2016structured}
satisfies the following assumptions: (i) agents' actions are observable; (ii) each agent has a perfect observation of his own local state/type; (iii) conditioned on the agents' actions, the evolution of the local states is independent.
We relax assumptions (i)-(iii) of \cite{tang2022dynamic, ouyang2015CDC, ouyang2016TAC, vasal2016signaling, sinha2016structured}, and study a general class of dynamic games with asymmetric information, hidden actions, imperfect observations, and controlled and coupled dynamics.
\subsection{Contribution}
We study, in discrete time, a general class of sequential stochastic dynamic games with asymmetric information, where the underlying system is dynamic, the information structure is non-classical, at each time instant the agents have private and common information, and their actions are hidden (each agent's actions are not directly observable by the other agents). Our key contribution is a methodology for the discovery of Bayesian Nash Equilibrium (BNE) strategy profiles that are based on the agents' compressed private and common information and can be determined sequentially in time moving backwards, if each step of this backward procedure has a solution. We present an example where such a BNE strategy profile exists.
We show that our methodology works also for the case where the agents have no common observations and their actions are hidden.
\subsection{Organization}
The rest of the paper is organized as follows: We present the game's model along with the equilibrium concept in Section \ref{sec:model}. We state our objective and present the methodology that achieves it in Section \ref{sec:Methodology}. In Section \ref{sec:compression} we first introduce compressed versions of the agents' private and common information that are sufficient for decision making purposes; then we define Sufficient Information Based (SIB) strategies that are based on the agents' compressed information. In Section \ref{sec:sequential_decomposition} we first introduce Sufficient Information Based Bayesian Nash Equilibrium (SIB-BNE); then we present a sequential decomposition of the game, that is, a backward inductive procedure that determines SIB-BNE if each step of this procedure has a solution. In Section \ref{sec:example} we present an example that highlights our solution methodology and where a SIB-BNE exists. In Section \ref{sec:specialcase} we show that our solution methodology works for stochastic dynamic games where the agents have no common observations and each agent's actions are part of his private information.
The comparison of the definitions of compressed private information as it appears in this paper and in \cite{companion}, along with
some of the technical details related to the existence of SIB-BNE for the example of Section \ref{sec:example} are presented in the Appendices.
\section{Conclusion}
We considered stochastic dynamic games where the underlying system is dynamic, the strategic agents' actions are hidden (not observable) and their information is asymmetric. We presented an approach for the computation of BNE strategy profiles that are based on a compressed version of the agents' information and can be determined sequentially in time moving backwards, if each step of this backward procedure has a solution. The approach highlights: (i) the importance of common information/common knowledge in identifying BNE strategy profiles that can be sequentially computed; (ii) the difference between common information that is sufficient for decision-making purposes in games and common information that is sufficient for decision-making purposes in teams. The difference is due to the fact that agents have an incentive to deviate from their predicted strategies in games whereas they do not have such an incentive in teams. As a consequence of this incentive, at each time instant each agent has his own view/belief of the game's status based on the common information, but all these different views/beliefs are common knowledge among all agents. As a result the CIB belief system is described by the sequence $\Pi^\psi_{1:T}$ specified by Definition \ref{def:CIB_belief_system}.
Our investigation focused on determining SIB-BNE strategy profiles for the games under consideration. We note that the SIB-BNE strategy profiles determined by our methodology are also Perfect Bayesian Equilibrium (PBE) strategy profiles when the agents have no common observations (i.e., for the model of Section \ref{sec:specialcase}), but this is not true when the agents have common observations (the general model of Section \ref{sec:model}). Determining PBE strategy profiles for the general model of Section \ref{sec:model} is an interesting problem worthy of investigation.
\input{appendix}
\section{Model}
\label{sec:model}
We present our model for dynamic decision problems with strategic agents (dynamic games) below; this model is analogous to the model of \cite{companion} for dynamic decision problems with non-strategic agents.
\subsection{System Dynamics} There are $N$ strategic agents who live in a dynamic Markovian world over horizon $\mathcal{T}\hspace*{-2pt}:=\hspace*{-2pt}\{1,2,...,T\}$, $T\hspace*{-2pt}<\hspace*{-2pt}\infty$. Let $X_t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{X}_t$ denote the state of the world at $t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}$. At time $t$, each agent, indexed by $i\hspace*{-2pt}\in\hspace*{-2pt} \mathcal{N}\hspace*{-2pt}:=\hspace*{-2pt}\{1,2,...,N\}$, chooses an action $a^i_t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{A}^i_t$, where $\mathcal{A}^i_t$ denotes the set of available actions to him at $t$. Given the collective action profile $A_t\hspace*{-2pt}:=\hspace*{-2pt}(A_t^1,...,A_t^N)$, the state of the world evolves according to the following stochastic dynamic equation,\vspace*{-2pt}
\begin{align}
X_{t+1}=f_t(X_t,A_t,W_t^x), \label{eq:systemdynamic1} \vspace*{-2pt}
\end{align}
where $W_{1:T-1}^x$ is a sequence of independent random variables.
The initial state $X_1$ is a random variable that has a probability distribution $\mu_0\in\Delta(\mathcal{X}_1)$.
At every time $t\in\mathcal{T}$, before taking an action, agent $i$ receives a noisy private observation $Y_t^i\in\mathcal{Y}_t^i$ of the current state of the world $X_t$ and the action profile $A_{t-1}$, given by\vspace*{-2pt}
\begin{align}
Y_t^i=O_t^i(X_t,A_{t-1},W_t^i), \label{eq:systemdynamic2}\vspace*{-2pt}
\end{align}
where $W_{1:T}^i$, $i\in\mathcal{N}$, are sequences of independent random variables. Moreover, at every $t\in\mathcal{T}$, all agents receive a common observation $Z_t\in\mathcal{Z}_t$ of the current state of the world $X_t$ and the action profile $A_{t-1}$, given by\vspace*{-2pt}
\begin{align}
Z_t=O_t^c(X_t,A_{t-1},W_t^c), \label{eq:systemdynamic3}\vspace*{-3pt}
\end{align}
where $W_{1:T}^c$ is a sequence of independent random variables.
We assume that the random variables $X_1$, $W_{1:T-1}^x$, $W_{1:T}^c$, and $W_{1:T}^i$, $i\in\mathcal{N}$ are mutually independent.
To avoid measure-theoretic technical difficulties and for clarity and convenience of exposition, we assume that all the random variables take values in finite sets.
\begin{assumption}\label{assump:finite}(finite game)
The sets $\mathcal{N}$, $\mathcal{X}_t$, $\mathcal{Z}_t$, $\mathcal{Y}_t^i$, $\mathcal{A}_t^i$, $ i \in \mathcal N$, are finite.
\end{assumption}
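Under Assumption \ref{assump:finite} the dynamics \eqref{eq:systemdynamic1}-\eqref{eq:systemdynamic3} can be simulated directly once the primitives are fixed. The sketch below instantiates a hypothetical two-state, two-agent example; the transition and observation functions are placeholders (for brevity, the observations' dependence on $A_{t-1}$ is dropped), not part of the model.

```python
import random

random.seed(0)
T, N = 5, 2
STATES, ACTIONS = [0, 1], [0, 1]

def f(x, a, w):
    """Hypothetical controlled dynamics: flip the state when the joint
    action disagrees, perturbed by binary noise w."""
    return (x + (a[0] != a[1]) + w) % 2

def noisy_obs(x, w):
    """Hypothetical observation channel: report x, flipped when w is set."""
    return x if not w else 1 - x

x = random.choice(STATES)                # X_1 ~ mu_0 (uniform here)
trajectory = []
for t in range(T):
    y = tuple(noisy_obs(x, random.random() < 0.1) for _ in range(N))  # Y_t^i
    z = noisy_obs(x, random.random() < 0.1)                           # Z_t
    a = tuple(random.choice(ACTIONS) for _ in range(N))               # A_t (arbitrary play)
    trajectory.append((x, y, z, a))
    x = f(x, a, random.random() < 0.05)                               # X_{t+1}

assert len(trajectory) == T
```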
\subsection{Information Structure}
Let $H_t$ denote the aggregate information of all agents at time $t$. Assuming that agents have perfect recall, we have $H_t=\{Z_{1:t},Y_{1:t}^{1:N},A_{1:t-1}^{1:N}\}$, \textit{i.e.} $H_t$ denotes the set of all agents' past and present observations and all agents' past actions. The set of all possible realizations of the agents' aggregate information is given by $\mathcal{H}_t:=\prod_{\tau\leq t}\mathcal{Z}_\tau\times\prod_{i\in\mathcal{N}}\prod_{\tau\leq t}\mathcal{Y}_\tau^i\times \prod_{i\in\mathcal{N}}\prod_{\tau< t}\mathcal{A}_\tau^i$.
At time $t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}$, the aggregate information $H_t$ is not fully known to all agents.
Let $C_t\hspace*{-2pt}:=\hspace*{-2pt}\{Z_{1:t}\}\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{C}_t$ denote the agents' common information about $H_t$ and $P_t^i\hspace*{-2pt}:=\hspace*{-2pt}\{Y_{1:t}^i,A_{1:t-1}^i\}\backslash C_t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{P}_t^i$ denote agent $i$'s private information about $H_t$, where $\mathcal{P}_t^i$ and $\mathcal{C}_t$ denote the set of all possible realizations of agent $i$'s private and common information at time $t$, respectively.
We assume that observations $Y_\tau^i$, $\tau\in\{1,2...,t\}$, and actions $A_\tau^i$, $\tau\in\{1,2...,t-1\}$, are known to agent $i$ but are not necessarily fully known to all other agents, denoted by $-i$, at $t\in\mathcal{T}$. Therefore, we have $P_t^i\subseteq \{Y_{1:t}^i,A_{1:t-1}^i\}$ for all $i\in\mathcal{N}$, and $H_t=\left(\bigcup_{i\in\mathcal{N}}P_t^i\right)\cup C_t$ for all $t\in\mathcal{T}$. As such, $\left\{C_t,P_t^i,i\in\mathcal{N}\right\}$ form a partition of $\mathcal{H}_t$ at every time $t\in\mathcal{T}$.
In Section \ref{sec:model:special}, we discuss several instances of information structures that can be captured as special cases of our model.
\subsection{Strategies and Utilities:} Let $H_t^i:=\{C_t,P_t^i\}\in \mathcal{H}_t^i$ denote the information available to agent $i$ at $t$, where $\mathcal{H}_t^i$ denotes the set of all possible realizations of agent $i$'s information at $t$. Agent $i$'s \textit{behavioral strategy} at $t$, denoted by $g_t^i$, is defined by
\begin{align}
g^i_t:\mathcal{H}_t^i\rightarrow \Delta (\mathcal{A}_t^i)
\label{eq:git}
\end{align}
where $\Delta (\mathcal{A}_t^i)$ is the set of Probability Mass Functions (PMFs) on $\mathcal{A}_t^i$.
We denote by
\begin{align}
g^i := (g^i_1,g^i_2,\ldots, g^i_T)
\label{eq:gi}
\end{align}
a strategy of agent $i$; $g^i \in \mathcal G^i$, where $\mathcal G^i$ is the set of admissible strategies described by \eqref{eq:git}-\eqref{eq:gi}.
We denote a strategy profile $g$ by
\begin{align}
g:= (g^1,g^2,\ldots,g^N)
\label{eq:g}
\end{align}
$g \in \mathcal G$, where $\mathcal G$ is the set of admissible strategy profiles described by \eqref{eq:git}-\eqref{eq:g}. We denote by
\begin{align}
g^{-i}:= (g^1,\ldots,g^{i-1},g^{i+1},\ldots,g^N)
\label{eq:gminusi}
\end{align}
the strategy profile of all agents other than agent $i$.
Agent $i$'s instantaneous utility at $t$ depends on the system state $X_t$ and the collective action profile $A_t$, and is given by $u_t^i\hspace*{-1pt}(\hspace*{-1pt}X_t,\hspace*{-1pt}A_t\hspace*{-1pt})$. Agent $i$'s total utility over horizon $\mathcal{T}$, is given by,\vspace*{-2pt}
\begin{align}
U^i(X_{1:T},A_{1:T})=\sum_{t\in\mathcal{T}}u_t^i(X_t,A_t). \label{eq:totalutility}\vspace*{-2pt}
\end{align}
\subsection{Equilibrium Concept:} We consider Bayesian Nash Equilibrium (BNE) as the solution concept \citep{fudenberg1991game}.
A strategy profile $g^* = (g^{*1}, g^{*2}, \ldots,g^{*N})$ is a BNE if for all $i \in \mathcal N$
\begin{align}
\mathbb{E}^{g^*}\{U^i(X_{1:T},A_{1:T})\} \geq \mathbb{E}^{g^{*-i},\hat{g}^{i}}\{U^i(X_{1:T},A_{1:T})\},\quad\hspace{-4pt} \forall \hat{g}^i \in \mathcal G^i.
\end{align}
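For intuition, in the degenerate one-shot case ($T=1$, a single commonly known state) the BNE inequality reduces to the familiar Nash condition, which can be verified mechanically; the payoffs below are illustrative only.

```python
import itertools

# Hypothetical 2x2 zero-sum stage game (matching pennies): u^1 = -u^2.
u1 = {(0, 0): 1, (0, 1): -1, (1, 0): -1, (1, 1): 1}

def expected_u1(s1, s2):
    """Agent 1's expected utility under mixed strategies s1, s2 over {0, 1}."""
    return sum(s1[a1] * s2[a2] * u1[(a1, a2)]
               for a1, a2 in itertools.product([0, 1], repeat=2))

star = [0.5, 0.5]                      # candidate equilibrium for both agents
eq_value = expected_u1(star, star)
# No unilateral deviation (over a grid of mixed strategies) is profitable.
for k in range(101):
    dev = [k / 100, 1 - k / 100]
    assert expected_u1(dev, star) <= eq_value + 1e-12      # agent 1's deviations
    assert -expected_u1(star, dev) <= -eq_value + 1e-12    # agent 2's deviations
```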
\subsection{Special Cases}\label{sec:model:special}
We discuss several instances of dynamic games with asymmetric information that are special cases of the general model described above.
\vspace{3pt}
\textit{1) Nested information structure:} Consider a two-player game with one informed player and one uninformed player and general Markovian dynamics. At every time $t\hspace*{-2pt}\in\hspace*{-2pt} \mathcal{T}$,
the informed player makes a private perfect observation of the state $X_t$, \textit{i.e.} $Y_t^1\hspace*{-2pt}=\hspace*{-2pt}X_t$. The uninformed player does not have any observation of the state $X_t$. Both the informed and uninformed players observe each other's actions, \textit{i.e.} $Z_t\hspace*{-2pt}=\hspace*{-2pt}\{A_{t-1}\}$. Therefore, we have $P_t^1=\{X_{1:t}\}$, $P_t^2=\emptyset$, and $C_t\hspace*{-2pt}=\hspace*{-2pt}\{A_{1:t-1}^1\hspace*{-1pt},\hspace*{-1pt}A_{1:t-1}^2\}$ for all $t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}$. The above nested information structure corresponds to dynamic games considered in \cite{renault2006value,cardaliaguet2015markov,renault2012value,li2014lp,li2017efficient,zheng2013decomposition}, where in \cite{renault2012value,li2017efficient} the state $X_t$ is static.
\vspace{3pt}
\textit{2) Delayed sharing information structure:} Consider an $N$-player game with observable actions where agents observe each other's observations with a $d$-step delay. That is, $P_t^i=\{Y_{t-d+1:t}^i\}$ and $C_t=\{Y_{1:t-d},A_{1:t-1}\}$. We note that in our model the agents' common observation $Z_t$ at $t$ is only a function of $X_t$ and $A_{t-1}$. Therefore, to describe the game with delayed sharing information structure within the context of our model we need to augment our state space to include the agents' last $d$ observations as part of the augmented state. Define $\tilde{X}_t:=\{X_t,M^1_t,M^2_t,...,M^d_t\}$ as the augmented system state, where $M_t^i:=\{A_{t-i},Y_{t-i}\}\in\mathcal{A}_{t-i}\times\mathcal{Y}_{t-i}$, $i\in\{1,2,...,d\}$; that is, $M_t^i$ serves as a temporal memory for the agents' observation $Y_{t-i}$ at $t-i$. Then, we have $\tilde{X}_{t+1}=\{X_{t+1},M_{t+1}^1,M_{t+1}^2,...,M_{t+1}^d\}=\{f_t(X_t,A_t,W_t^x),(A_t,Y_t),M_t^1,...,M_t^{d-1}\}$ and $Z_t=\{M_t^d, A_{t-1}\}=\{Y_{t-d}, A_{t-1}\}$.
The above environment captures a connection between symmetric and asymmetric information structures: the information asymmetry among the agents increases as $d$ increases. The above delayed sharing information structure corresponds to the dynamic game considered in \cite{tavafoghi2016stochastic}.
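The augmented-state bookkeeping above amounts to a length-$d$ shift register for the pairs $(A_{t-i}, Y_{t-i})$; a minimal hypothetical sketch (the scalar actions and observations and the helper names are illustrative only):

```python
from collections import deque

d = 3                       # sharing delay (illustrative)
M = deque(maxlen=d)         # M_t^1, ..., M_t^d: the last d (action, observation) pairs

def advance(a_prev, y_prev):
    """Push the newest pair (A_{t-1}, Y_{t-1}) into the register; the pair
    aging out is released to the common information d steps after it entered."""
    released = M[-1] if len(M) == d else None
    M.appendleft((a_prev, y_prev))
    # Common observation: the released delayed observation plus the last action.
    return (released[1], a_prev) if released else (None, a_prev)

# Drive the register with hypothetical scalar pairs a_t = t, y_t = 10 + t.
outputs = [advance(t, 10 + t) for t in range(1, 8)]
```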
\vspace{5pt}
\textit{3) Perfectly controlled dynamics with hidden actions:} Consider an $N$-player game where the state $X_t\hspace*{-2pt}:=\hspace*{-2pt}(X_t^1\hspace*{-1pt},\hspace*{-1pt}X_t^2\hspace*{-1pt},\hspace*{-1pt}...,\hspace*{-1pt}X_t^N)$ has $N$ components. Agent $i$, $i\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{N}$, perfectly controls $X_t^i$, \textit{i.e.} $X_{t+1}^i=A_t^i$. Agent $i$'s actions $A_t^i$, $t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}$, are not observable by the other agents $-i$. Every agent $i$, $i\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{N}$, makes a noisy private observation $Y_t^i=O_t^i(X_t,W_t^i)$ of the system state at $t\hspace*{-2pt}\in\hspace*{-2pt}\mathcal{T}$. Therefore, we have $P_t^i\hspace*{-2pt}:=\hspace*{-2pt}\{A_{1:t-1}^i,Y_{1:t}^i\}$, $C_t\hspace*{-2pt}=\hspace*{-2pt}\emptyset$.
\section{Objective and Methodology}
\label{sec:Methodology}
\subsection{Objective}
Our objective is twofold: (i) To determine BNE strategy profiles that are based on compressed versions of the agents' private and common information. (ii) To compute the above-mentioned strategy profiles by a sequential decomposition of the game, that is, by a backward inductive sequential procedure that identifies an equilibrium strategy profile when every step of the procedure has a solution.
\subsection{Methodology}
We present a methodology that achieves the above-stated objective and proceeds as follows:
\begin{itemize}
\item Step 1. We determine a mutually consistent compression of the agents' private information that is sufficient for decision-making purposes (such a mutually consistent compression may not be unique). Based on this compression we introduce the Sufficient Private Information Based (SPIB) belief system.
\item Step 2. Based on the result of Step 1, we determine a compression of the agents' common information that is sufficient for decision-making purposes by defining the Common Information Based (CIB) belief system. The CIB belief system ensures that at each time instant each agent's CIB belief is consistent with his SPIB belief even when the agent deviates from his equilibrium strategy and plays an arbitrary strategy. Such a consistency implies that each agent forms his own CIB belief system, and each agent's CIB belief system is common knowledge among all agents.
\item Step 3. Based on the compression of the agents' private and common information we introduce Sufficient Information Based (SIB) strategies for each agent (i.e., strategies that depend at each time on the agent's sufficient private information and the CIB belief system) and SIB BNE. We show that SIB strategies satisfy a key closedness of best response property. Based on this property we provide a sequential decomposition of the game, that is, a backward inductive sequential procedure that determines a SIB BNE if each step of the procedure has a solution.
\item Step 4. We provide an example of a stochastic dynamic game with asymmetric information and hidden/unobservable actions where a SIB BNE exists.
\end{itemize}
\section{The case with no common observations}
\label{sec:specialcase}
We consider the model of Section \ref{sec:model} but we assume that the agents have no common observations, that is,
\begin{align}
Z_t = \emptyset \quad \forall t \in \mathcal T .
\end{align}
The system's dynamics, the agents' private observations, the functional form of the agents' strategies, their utilities, and the equilibrium concept (BNE) remain the same as in Section \ref{sec:model}.
Even though the agents have no common observations in this special case, we can still define SIB strategies by Definition \ref{def:SIB_strategy_n}, and construct the consistent CIB belief system according to Definition \ref{def:CIB_update_n} with $Z_t = \emptyset \, \forall t \in \mathcal T$.
Since there are no common observations, for any realization we always have
\begin{align}
& \sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_{t}^i(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})(\pi^{\psi^\sigma}_t;\sigma^{-i}_t)
\notag\\
= & \sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_{t}^i(\hat{x}_{t+1},\hat{s}_{t+1})(\pi^{\psi^\sigma}_t;\sigma^{-i}_t) = 1 > 0
\end{align}
Therefore, case (ii) in Definition \ref{def:CIB_update_n} would never happen, and \eqref{eq:cib_bayesrule} can be simplified to
\begin{align}
& \pi_{t+1}^{\psi^{\sigma},i}(x_{t+1},s_{t+1})
\notag\\
= & \frac{F_{t}^i(x_{t+1},s_{t+1})(\pi^{\psi^\sigma}_t;\sigma^{-i}_t)}{\sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_{t}^i(\hat{x}_{t+1},\hat{s}_{t+1})(\pi_t^{\psi^\sigma};\sigma^{-i}_t)}
\notag\\
= & F_{t}^i(x_{t+1},s_{t+1})(\pi^{\psi^\sigma}_t;\sigma^{-i}_t)
\notag\\
= &
\sum_{y_{t+1},x_t,s_t,a_t} \Bigg[\mathbb{P}\{y_{t+1},x_{t+1}\mid x_t,a_t\}\left(\prod_{j}\mathbbm{1}\{s_{t+1}^j = \phi_{t+1}^j(s_t^j,y_{t+1}^j,a_t^j)\}\right)\nonumber\\
& \hspace{50pt} \left(\frac{1}{\vert A_t^i\vert}\prod_{j\neq i} \sigma^j_t(a^j_t)(\pi^\psi_t,s_t^j) \right) \pi_t^{\psi,i}(x_t,s_t) \Bigg].
\label{eq:cib_bayesrule_nocommon}
\end{align}
Based on \eqref{eq:cib_bayesrule_nocommon} we can write
\begin{align}
&\Pi_{t+1}^{\psi^\sigma,i} = \psi_{t+1}^{\sigma,i}(\Pi_t^{\psi^\sigma}) \quad \forall i \in \mathcal N,
\label{eq:psi_it_nocommon}
\\
&\Pi_{t+1}^{\psi^\sigma} = \psi^{\sigma}_{t+1}(\Pi_t^{\psi^\sigma}).
\label{eq:psi_t_nocommon}
\end{align}
In other words, given a SIB strategy profile $\sigma$, the update rule $\psi^\sigma$ consists of deterministic functions given by \eqref{eq:psi_it_nocommon}-\eqref{eq:psi_t_nocommon}, and the corresponding consistent CIB belief system $\Pi_{t}^{\psi^\sigma}, t \in \mathcal T$, evolves in a deterministic manner.
Furthermore, since case (ii) in Definition \ref{def:CIB_update_n} never happens without common observations, the update rule $\psi_{t+1}^{\sigma,i}$ given by \eqref{eq:cib_bayesrule_nocommon} becomes exactly Bayes' rule. As a result, the CIB belief $\Pi_{t}^{\psi^\sigma,i}$ becomes a regular PMF given by
\begin{align}
\Pi_{t}^{\psi^\sigma,i}(x_t, s_t) = \prob^{\tilde g^i, \sigma^{-i}}(x_t, s_t) \quad \forall i \in \mathcal N
\end{align}
where $\tilde g^i$ denotes the uniform strategy (i.e., the strategy that chooses every action $a^i_t \in \mathcal A^i_t$ with equal probability for all $t \in \mathcal T$).
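To make the update \eqref{eq:cib_bayesrule_nocommon} concrete, the following Python sketch performs one step of the deterministic belief evolution for a hypothetical two-agent toy model; the binary state, observation, and action spaces, the kernels, agent $j$'s stationary SIB strategy, and the choice of sufficient statistic are all illustrative assumptions, not part of the model above. Agent $i$'s own action is averaged uniformly, as in the construction of $F^i_t$:

```python
import itertools

# All spaces binary; every primitive below is an illustrative assumption.
X = S = A = Y = [0, 1]

def P_dyn(y1, x1, x, a):                 # P{y^i_{t+1}, x_{t+1} | x_t, a_t}
    px = 0.9 if x1 == (x ^ (a[0] ^ a[1])) else 0.1   # toy state dynamics
    py = 0.8 if y1 == x1 else 0.2                    # agent i's channel
    return px * py

def obs_j(yj1, x1):                      # agent j's channel (independent, toy)
    return 0.8 if yj1 == x1 else 0.2

def phi(s, y1, a):                       # sufficient-information update
    return y1                            # toy choice: remember last observation

def sigma_j(aj, sj):                     # agent j's SIB strategy (belief arg dropped)
    return 0.7 if aj == sj else 0.3

def cib_step(pi):
    """pi: dict {(x, s_i, s_j): prob} -> agent i's next CIB belief."""
    out = {k: 0.0 for k in itertools.product(X, S, S)}
    for (x, si, sj), p in pi.items():
        for ai, aj in itertools.product(A, A):
            w = p * sigma_j(aj, sj) / len(A)   # uniform over agent i's actions
            for x1, yi1, yj1 in itertools.product(X, Y, Y):
                q = w * P_dyn(yi1, x1, x, (ai, aj)) * obs_j(yj1, x1)
                out[(x1, phi(si, yi1, ai), phi(sj, yj1, aj))] += q
    return out

pi0 = {k: 1.0 / 8 for k in itertools.product(X, S, S)}
pi1 = cib_step(pi0)
print(round(sum(pi1.values()), 9))       # the update maps a PMF to a PMF
```

Because no common observation enters, no conditioning (and hence no normalization) is required: the step is a deterministic map from PMFs to PMFs.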
\begin{remark}
If the $N$ agents have identical utilities, i.e. we have a dynamic team problem, then $\Pi_{t}^{\psi^\sigma}, t \in \mathcal T$ is similar to the common knowledge that appears in \cite{witsenhausen1973standard} where a dynamic team is analyzed. The common knowledge in \cite{witsenhausen1973standard} is a sequence (over time) of PMFs on the system's history $H_t, t \in \mathcal T$. These PMFs evolve in a deterministic manner, similar to \eqref{eq:cib_bayesrule_nocommon} for $\Pi_{t}^{\psi^\sigma}, t \in \mathcal T$, in the model of this section.
\end{remark}
For this special case with no common observations, Theorem \ref{thm:sequential_decomposition} becomes
\begin{corollary}
\label{cor:no_common}
Consider a SIB strategy profile $\sigma = \{\sigma_t, t \in \mathcal T\}$ and the corresponding update rule $\psi^\sigma = \{\psi^\sigma_t, t \in \mathcal T\}$ defined by \eqref{eq:psi_it_nocommon}-\eqref{eq:psi_t_nocommon} for the model of this section.
Define
\begin{align}
V_{T+1}^i(\cdot, \cdot) = 0 \text{ for all }i
\end{align}
and, for $t = T, T-1, \ldots, 1$,
\begin{align}
& V^i_t(\pi^{\psi^\sigma}_t, s^i_t) = \ee^{\sigma_t, \psi^\sigma} [ U^i_{G_t(V_{t+1}, \pi^{\psi^\sigma}_t)}\mid s^i_t ]
\end{align}
where $U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)} = u^i_t(X_t, A_t) + V^i_{t+1}(\psi^{\sigma}_{t+1}(\pi^{\psi^\sigma}_t), S^i_{t+1})$, and in the conditional expectation $\ee^{\sigma_t, \psi^\sigma}[\cdot]$: the distribution of $(X_t, S_t)$ conditioned on $S^i_t$ is given by $\pi^{\psi^\sigma,i}_t(x_t, s^{-i}_t)$; $A^i_t, i \in \mathcal N$, are generated by $\sigma^i_t(a^i_t\mid s^i_t, \pi^{\psi^\sigma}_t)$; and $S^i_{t+1}$ conditioned on $(X_t, S_t, A_t)$ follows the conditional probability $\sum_{x_{t+1}, s^{-i}_{t+1}}\prob(x_{t+1}, s_{t+1} \mid x_t, s_t, a_t)$, with $\prob(x_{t+1}, s_{t+1} \mid x_t, s_t, a_t)$ given by
\begin{align}
& \prob(x_{t+1}, s_{t+1} \mid x_t, s_t, a_t)
\notag\\
=& \sum_{y_{t+1}} \mathbb{P}\{x_{t+1}\mid x_t,a_t\}
\mathbb{P}\{y_{t+1}\mid x_{t+1},a_t\}
\left(\prod_{j}\mathbbm{1}\{s_{t+1}^j = \phi_{t+1}^j(s_t^j,y_{t+1}^j,a_t^j)\}\right).
\label{eq:update_condprob_common}
\end{align}
If for all $t \in \mathcal T$, there is a SIB strategy profile $\hat\sigma_t$ such that
$\hat\sigma_t$ is a BNE of the stage-game $G_t(V_{t+1}, \pi^{\psi^\sigma}_t)$, that is,
\begin{align}
& \ee^{\hat \sigma^i_t, \hat\sigma^{-i}_t, \psi^\sigma}[U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)}\mid s^i_t]
= \max_{\tilde \sigma^i_t \in \Lambda^i_t}\ee^{\tilde \sigma^i_t, \hat\sigma^{-i}_t, \psi^\sigma}[U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)}\mid s^i_t]
\label{eq:dp_bne_max_nocommon}
\end{align}
for all $i \in \mathcal N$, and
\begin{align}
\hat\sigma_t = \sigma_t,
\label{eq:dp_bne_eq_nocommon}
\end{align}
then the SIB strategy profile $\sigma$ is a SIB-BNE of the dynamic game without common observations defined in this section.
\end{corollary}
\begin{remark}
The SIB-BNE strategy profiles $\{\sigma_t, t \in \mathcal T\}$ determined by sequential decomposition in Corollary \ref{cor:no_common}, along with the beliefs $\{\Pi^{\psi^{\sigma}}_t, t \in \mathcal T\}$, are also Perfect Bayesian Equilibria (PBE) \cite{fudenberg1991game}. This is true because $\{\sigma_t, t \in \mathcal T\}$ satisfy sequential rationality (Eq. \eqref{eq:dp_bne_max_nocommon}) and consistency holds because the beliefs $\{\Pi^{\psi^{\sigma}}_t, t \in \mathcal T\}$ are always updated by Bayes' rule.
\end{remark}
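One step of the backward induction in Corollary \ref{cor:no_common} requires finding a BNE of a finite stage game. As a hypothetical illustration: once $V_{t+1}$ and $\pi^{\psi^\sigma}_t$ are fixed and beliefs are degenerate, the stage game reduces to a bimatrix game whose pure equilibria can be found by exhaustive best-response checks. The payoff entries below are placeholders standing in for $u^i_t + V^i_{t+1}$, not values from any model in this paper:

```python
import itertools

U1 = [[2, 0], [0, 1]]    # hypothetical stage payoffs u^1_t + V^1_{t+1}
U2 = [[1, 0], [0, 2]]    # hypothetical stage payoffs u^2_t + V^2_{t+1}

def is_stage_bne(a1, a2):
    """Pure profile (a1, a2) is a stage BNE iff each action is a best response."""
    best1 = all(U1[a1][a2] >= U1[b][a2] for b in range(2))
    best2 = all(U2[a1][a2] >= U2[a1][b] for b in range(2))
    return best1 and best2

bne = [prof for prof in itertools.product(range(2), range(2))
       if is_stage_bne(*prof)]
print(bne)   # -> [(0, 0), (1, 1)]
```

In general the search is over behavioral (mixed) SIB strategies for each realization of $(s^i_t, \pi^{\psi^\sigma}_t)$; the pure, complete-information search above is only the simplest instance of condition \eqref{eq:dp_bne_max_nocommon}.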
\section{Compression of Private and Common Information}
\label{sec:compression}
In Section \ref{subsec:privatecompress}
we characterize mutually consistent compressions of all agents' private information that are sufficient for decision-making purposes.
In Section \ref{subsec:SIBbelief} we introduce the common information based belief, a compressed version of the agents' common information, that is sufficient for decision making purposes.
\subsection{Sufficient private information (Step 1)}
\label{subsec:privatecompress}
We consider a compression of the agents' private information that is done in a mutually consistent manner so that the compressed information is sufficient for decision-making purposes.
\begin{definition}[Sufficient private information]\label{def:sufficient}
We say that $S^i_t, i=1,\ldots,N$, is sufficient private information for the agents if
\begin{enumerate}[(i)]
\item $S^i_t$ is a function of $H^i_t$ such that
$S^i_t = \zeta^i_t(H^i_t)$ for some commonly known functions $\zeta^i_t, i=1,2,\ldots,N$.
\item $S^i_t$ can be sequentially updated as
$S^i_t = \phi^i_t(S^i_{t-1}, Y^i_t, Z_t, A^i_{t-1})$ using some commonly known functions $\phi^i_t,i=1,2,\ldots,N$.
\item For any realization $x_t, p^{-i}_t, p^i_t, c_t$, and the corresponding $s^{-i}_t =\zeta^{-i}_t(p^{-i}_t, c_t)$ and $s^i_t =\zeta^i_t(p^i_t, c_t)$, and any strategy profile $g$, where $g_t^i:\mathcal{S}_t^i\times C_t\rightarrow \Delta(\mathcal{A}_t^i),\forall i,\forall t$, such that $\prob^g(p^i_t, c_t) > 0$,
\begin{align}
\prob^g(x_t, s^{-i}_t \mid s^i_t, c_t)
= \prob^g(x_t, s^{-i}_t \mid p^i_t, c_t)
\label{eq:sufficient-3}
\end{align}
\end{enumerate}
\end{definition}
\begin{remark}
A similar definition of sufficient private information for dynamic teams appears in \cite[Definition 2]{companion}.
This definition is slightly different from Definition \ref{def:sufficient} above because the objectives in \cite{companion} and this paper are different.
In Appendix \ref{app:sufficient_information} we show that sufficient private information satisfying Definition \ref{def:sufficient} may violate condition (ii) of Definition 2 in \cite{companion}.
In \cite{companion} the compression of private (and common) information must entail no loss in performance, that is, we must be able to determine globally optimal team strategy profiles that are based on compressed private and common information. In this paper the goal is to determine BNE strategy profiles that are based on compressed information and be sequentially computed (if such BNE strategy profiles exist). We are not concerned about the equilibria we may lose when we compress information; therefore, we don't need condition (ii) of Definition 2 in \cite{companion}.
\end{remark}
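A minimal Python sketch of parts (i) and (ii) of Definition \ref{def:sufficient}, with a toy compression (the running sum of private observations) chosen purely for illustration; the common observations $Z_t$ are omitted. The point is only that the statistic computed from the full history by $\zeta$ coincides with the recursively updated one:

```python
def zeta(history):                       # S_t = zeta^i_t(H^i_t)
    return sum(y for (y, a) in history)

def phi(s_prev, y, a):                   # S_t = phi^i_t(S_{t-1}, Y_t, A_{t-1})
    return s_prev + y

history, s = [], 0
for y, a in [(1, 0), (0, 1), (1, 1), (1, 0)]:
    history.append((y, a))
    s = phi(s, y, a)
    assert s == zeta(history)            # recursive update reproduces zeta
print(s)   # -> 3
```

Condition (iii), by contrast, is a statistical requirement on the whole profile of compressions and cannot be checked agent by agent.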
Definition \ref{def:sufficient} characterizes a set of compressions of the agents' private information. In the following, we show that the set of sufficient private information $S_t^i$, $i\in\mathcal{N}$, $t\in\mathcal{T}$, is rich enough to form belief systems on information sets of realizations with positive or zero probability.
Let $\tilde g^i$ denote the uniform strategy that assigns equal probability to every action of agent $i \in \mathcal N$.
Below we show that the policy-independence property of beliefs \cite[Theorem 1]{companion} for agent $i$ is still true when the private information $p_t^i$ is replaced with the sufficient private information $s_t^i$. That is, $\prob^{\tilde g^i, g^{-i}}(x_t, s^{-i}_t \mid s^i_t, c_t)$ constructed by $(\tilde g^i, g^{-i})$ captures agent $i$'s belief based on $h^i_t$ even when he plays an arbitrary strategy $\hat{g}^i$, not necessarily the same as $g^i$ or $\tilde g^i$, provided that agents $-i$ play $g^{-i}$.
\begin{lemma}
\label{prop:policy-independent}
For $h^i_t$ such that $\prob^{\hat g^i, g^{-i}}(h^i_t) > 0$, we have $\prob^{\tilde g^i, g^{-i}}(h^i_t) > 0$ and
\begin{align}
\prob^{\hat{g}^i,g^{-i}}(x_t, s^{-i}_{t} \mid h^i_t) = \prob^{\tilde{g}^i, g^{-i}}(x_t, s^{-i}_{t} \mid h^i_t)
=
\prob^{\tilde{g}^i, g^{-i}}(x_t, s^{-i}_{t} \mid s^i_t, c_t) .
\end{align}
\end{lemma}
\begin{proof}
Note that $\prob^{\tilde g^i}(a^i_t) = 1 / \vert\mathcal A^i_t \vert > 0$, so $\prob^{\tilde g^i, g^{-i}}(h^i_t) > 0$ given that $\prob^{\hat g^i, g^{-i}}(h^i_t) > 0$. Then
from part (i) of the definition of sufficient private information and part (i) of Theorem 1 in \cite{companion} we have
\begin{align}
\prob^{\hat{g}^i, g^{-i}}(x_t, s^{-i}_t \mid h^i_t)
= &\sum_{h^{-i}_{t}: \zeta^{-i}_t(h^{-i}_{t}) = s^{-i}_t }\prob^{\hat{g}^i, g^{-i}}(x_t, h^{-i}_{t} \mid h^i_t)
\notag\\
= & \sum_{h^{-i}_{t}: \zeta^{-i}_t(h^{-i}_{t}) = s^{-i}_t } \prob^{\tilde{g}^i, g^{-i}}(x_t,h_t^{-i} \mid h_t^i)
\notag\\
= &
\prob^{\tilde{g}^i, g^{-i}}(x_t, s^{-i}_{t} \mid h^i_t).
\end{align}
Furthermore, from condition (iii) of the definition of sufficient private information we have
\begin{align}
\prob^{\tilde{g}^i, g^{-i}}(x_t, s^{-i}_{t} \mid h^i_t)
= \prob^{\tilde{g}^i, g^{-i}}(x_t, s^{-i}_{t} \mid s^i_t, c_t).
\end{align}
\end{proof}
\subsection{CIB Belief System (Step 2)}
\label{subsec:SIBbelief}
Given the compressed private information, we next compress the agents' common information in the form of a belief system.
We call such a compressed belief system the Common Information Based (CIB) belief system.
Similar to \cite{tang2022dynamic, ouyang2016TAC}, the CIB belief system is sufficient for decision-making if it is common knowledge among all agents, and every agent $i$ can compute his belief about the system state and the other agents' sufficient private information using the CIB belief system and his compressed private information. More specifically,
agent $i$ should be able to compute $\prob^{\hat g^i, g^{-i}}(x_t, s_t \mid h^i_t)$ using the CIB belief system and his sufficient private information $s^i_t$ whenever other agents follow the strategy profile $g^{-i}$ and agent $i$ plays an arbitrary strategy $\hat g^i$.
To determine a CIB belief system that satisfies the above sufficiency requirement we proceed as follows. We first define $N$ CIB belief systems $\Pi^\psi:= \{\Pi^{\psi, 1}, \Pi^{\psi, 2}, \ldots, \Pi^{\psi, N}\}$, one for each agent (Definition \ref{def:CIB_belief_system} below). Each belief system $\Pi^{\psi, i}$ consists of a sequence of PMFs on $\mathcal X_t \times \mathcal S_t$ that are sequentially updated according to an update rule $\psi=(\psi^1, \psi^2,\ldots,\psi^N)$ that is common knowledge among the agents;
for each realization $c_t$ of the common information available at $t$, $\pi^{\psi,i}_t$ describes the belief on $\mathcal X_t \times \mathcal S_t$ based on $c_t$ from agent $i$'s point of view.
We want $\pi^{\psi,i}_t$, combined with $s^i_t$, to enable agent $i$ to form his own sufficient information-based private belief (given by $\prob^{\hat g^i, g^{*-i}}(x_t, s_t \mid s^i_t, c_t)$) about the current status of the game. Furthermore, we want the CIB belief system to capture the current status of the game when agents utilize strategies based on $(S_t, \Pi^\psi_t)$. To this end, we define the notion of a Sufficient Information Based (SIB) strategy profile
$\sigma:= (\sigma^{i},i \in \mathcal N)$, $\sigma^i:= (\sigma^i_t, t \in \mathcal T), i \in \mathcal N$. Each component $\sigma^i_t$ of $\sigma$ is a function of $s^i_t$, agent $i$'s sufficient private information at $t$, and $\pi^\psi_t = (\pi^{\psi,i}_t, i \in \mathcal N)$ (see Definition \ref{def:SIB_strategy_n} below). Using the $N$ CIB belief systems and the SIB strategy profile $\sigma$ we define update equations for each $\pi^{\psi,i}_t$ so that each $\pi^{\psi,i}_t$ is consistent with $s^i_t$ and with agent $i$'s sufficient private information-based belief $\prob^{\hat g^i, g^{*-i}}(x_t, s_t \mid s^i_t, c_t)$, defined in Section \ref{subsec:privatecompress} (Definition \ref{def:sufficient}), and each $\pi^{\psi,i}_t$ is common knowledge among all agents (see Definition \ref{def:CIB_update_n} below). We proceed with the (formal) definitions.
\begin{definition}[Common information based (CIB) belief system]
\label{def:CIB_belief_system}
Given a sequence of update functions $\psi =\{\psi^{i}_t, i \in \mathcal N, t \in \mathcal T\}$
that are common knowledge among the $N$ agents, sequentially define
\begin{align}
\Pi_{t}^{\psi,i} = \psi_{t}^i(\Pi^{\psi}_{t-1},Z_{t}), i \in \mathcal N, t \in \mathcal T
\end{align}
where
\begin{align}
&\Pi^{\psi}_t:= \left[\begin{array}{c}
\Pi^{\psi, 1}_t \\
\vdots\\
\Pi^{\psi, N}_t
\end{array}\right], t \in \mathcal T
\\
& \Pi^{\psi}_0:= \left[\begin{array}{c}
\mu_0 \\
\vdots\\
\mu_0
\end{array}\right]
\end{align}
The sequence $\Pi^\psi_{1:T} = (\Pi^\psi_1, \Pi^\psi_2, \ldots, \Pi^\psi_T)$ defines a CIB belief system; $\Pi^{\psi, i}_t$ denotes the CIB belief over $\mathcal X_t \times \mathcal S_t$ based on $C_t$ from agent $i$'s point of view.
\label{def:SIB_belief_new_n}
\end{definition}
\begin{definition}[SIB strategy]
\label{def:SIB_strategy_n}
Given a CIB belief system $\Pi^\psi_{1:T}$, we define a Sufficient Information Based (SIB) strategy profile $\sigma:= (\sigma^1, \sigma^2,\ldots, \sigma^N)$, $\sigma^i:= (\sigma^i_1, \sigma^i_2, \ldots, \sigma^i_T)$ by the maps
\begin{align}
\sigma_t^i:\mathcal{S}^i_t\times [\Delta(\mathcal{X}_t\times \mathcal{S}_t)]^N \rightarrow \Delta(\mathcal{A}^i_t),
t=1,2,\ldots,T, \; i=1,2,\ldots,N.
\end{align}
\end{definition}
Based on Definitions \ref{def:CIB_belief_system} and \ref{def:SIB_strategy_n} we present a set of conditions that an individual CIB belief system $(\Pi^{\psi,i}_t, t \in\mathcal T)$ must satisfy so as to ensure that each agent $i$ can form his own (private) belief about the current status of the game, given by $(X_t, S_t)$, using $\Pi^\psi_t$ and $S^i_t$ when all other agents $-i$ employ SIB strategies $\sigma^{-i}$. This set of conditions describes a sequential update rule of $\Pi^{\psi,i}_t$; the update rule depends on whether or not the (new) common observation at $t$ is feasible under the agents' strategies.
\begin{definition}[Consistent CIB belief system]
\label{def:CIB_update_n}
Consider a SIB strategy profile $\sigma$.
Let $F_{t}^i(x_{t+1},s_{t+1},z_{t+1})(\pi^\psi_t;\sigma^{-i}_t)$ denote the CIB belief about $(x_{t+1},s_{t+1},z_{t+1})$ constructed recursively by assuming that (i) $(x_t,s_t)$ is distributed according to $\pi_t^{\psi,i}$, (ii) agent $i$ employs the uniform strategy $\tilde g^i$ at $t$ (i.e., the strategy that chooses every action $a^i_t \in \mathcal A^i_t$ with equal probability), and (iii) agents $-i$ play according to $\sigma_t^{-i}$. That is,
\begin{align}
F_{0}^i(x_{1},s_{1},z_{1})
=& \sum_{y_{1}} \Bigg[\mathbb{P}\{z_{1},y_{1} \mid x_{1}\}\mu_0(x_{1})\left(\prod_{j}\mathbbm{1}\{s_{1}^j = \phi_{1}^j(z_{1},y_{1}^j)\}\right) \Bigg]
\end{align}
at $t=1$, and for $t \geq 1$,
\begin{align}
& F_{t}^i(x_{t+1},s_{t+1},z_{t+1})(\pi^\psi_t;\sigma^{-i}_t)
\notag\\
=& \sum_{y_{t+1},x_t,s_t,a_t} \Bigg[\mathbb{P}
\{z_{t+1},y_{t+1},x_{t+1} \mid x_t,a_t\}\left(\prod_{j}\mathbbm{1}\{s_{t+1}^j = \phi_{t+1}^j(s_t^j,z_{t+1},y_{t+1}^j,a_t^j)\}\right)\nonumber\\
&
\hspace{50pt} \left(\frac{1}{\vert \mathcal A_t^i \vert}\prod_{j\neq i} \sigma^j_t(a^j_t)(\pi^\psi_t,s_t^j) \right) \pi_t^{\psi,i}(x_t,s_t) \Bigg]
\end{align}
We define the update rule $\psi^\sigma = (\psi^{\sigma,i}_t, i \in \mathcal N, t \in \mathcal T)$ and the corresponding CIB belief system $\Pi^{\psi^\sigma}_{1:T}$ as follows. At any $t$
\begin{enumerate}[(i)]
\item If $\sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_{t}^i(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})(\pi^{\psi^\sigma}_t;\sigma^{-i}_t)>0$ (\textit{i.e.} the new common observation $z_{t+1}$ is feasible from agent $i$'s point of view), then $\pi^{\psi^\sigma, i}_{t+1}$ can be updated recursively as
\begin{align}
\pi_{t+1}^{\psi^{\sigma},i}(x_{t+1},s_{t+1}) = \frac{F_{t}^i(x_{t+1},s_{t+1},z_{t+1})(\pi^{\psi^\sigma}_t;\sigma^{-i}_t)}{\sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_{t}^i(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})(\pi_t^{\psi^\sigma};\sigma^{-i}_t)},
\label{eq:cib_bayesrule}
\end{align}
via Bayes rule.
\item If $\sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_{t}^i(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})(\pi^{\psi^\sigma}_t;\sigma^{-i}_t)
=0$ (\textit{i.e.} the new common observation $z_{t+1}$ is infeasible from agent $i$'s point of view), then the update rule is
\begin{align}
\pi_{t+1}^{\psi^\sigma,i}(x_{t+1},s_{t+1}) = \frac{1}{\left\vert\mathcal{X}_{t+1}\times\mathcal{S}_{t+1}\right\vert}.
\label{eq:update-2_n}
\end{align}
\end{enumerate}
\end{definition}
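Computationally, the two cases of Definition \ref{def:CIB_update_n} amount to normalizing the weights $F^i_t(\cdot,\cdot,z_{t+1})$ when they have positive mass, and falling back to the uniform belief otherwise. A minimal Python sketch, where the weight dictionaries are hypothetical stand-ins for $F^i_t$:

```python
def cib_update(F_z):
    """F_z: dict mapping (x, s) to the unnormalized weight of (x, s) jointly
    with the observed common observation z (a stand-in for F^i_t)."""
    total = sum(F_z.values())
    if total > 0:                                    # case (i): Bayes' rule
        return {k: v / total for k, v in F_z.items()}
    return {k: 1.0 / len(F_z) for k in F_z}          # case (ii): uniform fallback

print(cib_update({(0, 0): 0.2, (1, 0): 0.6}))        # feasible z: normalized
print(cib_update({(0, 0): 0.0, (1, 0): 0.0}))        # infeasible z: uniform
```

The uniform fallback in case (ii) is one specific choice; as noted in the remark below Definition \ref{def:CIB_update_n}'s consequences, any fixed PMF would serve.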
Based on \eqref{eq:cib_bayesrule} and \eqref{eq:update-2_n} we can write
\begin{align}
&\Pi_{t+1}^{\psi^\sigma,i} = \psi_{t+1}^{\sigma,i}(\Pi_t^{\psi^\sigma},Z_{t+1}),
\label{eq:psi_it}
\\
&\Pi_{t+1}^{\psi^\sigma} = \psi^{\sigma}_{t+1}(\Pi_t^{\psi^\sigma},Z_{t+1}).
\label{eq:psi_t}
\end{align}
Furthermore, for all $i\in\mathcal N$, each agent can determine whether
$\sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_t^i(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})(\pi_t^{\psi^\sigma};\sigma^{-i}_t)$ is positive or zero; thus each agent knows how agent $i$ computes $\pi^{\psi^\sigma, i}_{t+1}$ from $\pi^{\psi^\sigma}_t$, $z_{t+1}$, $\sigma^{-i}_t$, and $\psi^\sigma$. Therefore, $\pi^{\psi^\sigma, i}_{t}$ (hence $\pi^{\psi^\sigma}_{t}$) is common knowledge among all agents. We call $\Pi^{\psi^\sigma}_{1:T}$ the CIB belief system consistent with the SIB strategy profile $\sigma$.
\begin{remark}
Since the sufficient private information is a function of the agent's available information, a SIB strategy $\sigma^i_t$ corresponds to a strategy $g_t^{i, \sigma}$ given by
$g_t^{i, \sigma}(h^i_t) := \sigma^i_t(\zeta^i_t(h^i_t), \pi_t^{\psi^\sigma})$.
Therefore, in the rest of the paper we use the following convention: $\prob^{\sigma}(\cdot) = \prob^{g^\sigma}(\cdot)$ and $\ee^{\sigma}[\cdot] = \ee^{g^\sigma}[\cdot]$.
\end{remark}
\begin{remark}
There are many alternative specifications of the update rule $\psi^\sigma_{t}, t \in \mathcal T$, defined by \eqref{eq:psi_it}-\eqref{eq:psi_t}, that result in consistent CIB belief systems, that is, CIB belief systems which ensure that (i) agent $i$ can form his private belief over $(X_t, S^{-i}_t)$ by incorporating his private sufficient information $S^i_t$ into his CIB belief $\Pi^{\psi^\sigma, i}_t$ given that agents $-i$ play according to $\sigma^{-i}$, (ii) agent $i$'s private belief formed according to (i) is identical to the probability distribution over $(X_t, S^{-i}_t)$ conditional on his complete history $H^i_t$ even when he plays an arbitrary strategy $\hat g^i$ different from $\sigma^i$. An example of such an alternative update rule is described by \eqref{eq:cib_bayesrule} (Bayes' rule) when $\sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_t^i(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})(\pi_t^{\psi^\sigma};\sigma^{-i}_t)>0$ and an arbitrary PMF $\pi^{\psi^\sigma, i}_{t+1}(\cdot, \cdot)$ on $\mathcal X_{t+1} \times \mathcal S_{t+1}$ when $\sum_{\hat{x}_{t+1},\hat{s}_{t+1}} F_t^i(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})(\pi_t^{\psi^\sigma};\sigma^{-i}_t)=0$.
\end{remark}
Definition \ref{def:CIB_update_n} ensures that agent $i$ can form his beliefs over $(X_t,S_t^{-i})$ by incorporating his sufficient private information $S_t^i$ into his CIB belief $\Pi_t^{\psi^\sigma,i}$ given that agents $-i$ play according to $\sigma^{-i}$.
Moreover, this belief is sufficient to compute the probability distribution over $(X_t,S_t^{-i})$ conditional on his complete history $H_t^i$ even when he plays an arbitrary strategy $\hat{g}^i$ different from $\sigma^i$.
We formalize the above discussion in Lemma \ref{lemma:CIBbelief-privatebelief} below, by using the notation
$\prob^{\hat g^i, \sigma^{-i}, \psi^\sigma}(\cdot)$ to indicate the belief resulting when agent $i$ plays $\hat g^i$ and agents $-i$ play $g^{-i,\sigma}(h^{-i}_t)=\sigma^{-i}_t(\zeta^{-i}_t(h^{-i}_t), \pi_t^{\psi^\sigma})$ using the update rule $\psi^\sigma$.
\begin{lemma}\label{lemma:CIBbelief-privatebelief}
Consider a SIB strategy profile $\sigma$, along with an associated consistent CIB belief system $\Pi_t^{\psi^\sigma}$. Suppose $(x_t, h^i_t, h^{-i}_t)$ is a realization with positive probability under $(\hat g^i, \sigma^{-i})$, where $\hat g^i$ denotes an arbitrary strategy for agent $i$.
Let $s^i_t = \zeta^i_t(h^i_t)$ and $s^{-i}_t = \zeta^{-i}_t(h^{-i}_t)$ be the associated sufficient private information.
Then agent $i$'s belief at time $t$ can be computed using $\pi_t^{\psi^\sigma}$ as
\begin{align}
\prob^{\hat g^i, \sigma^{-i}, \psi^\sigma}
(x_t, s^{-i}_t \mid h^i_t)
= \frac{\pi^{\psi^\sigma,i}_t(x_t, s_t)}
{\sum_{s^{-i}_t, x_t}\pi^{\psi^\sigma,i}_t(x_t, s^i_t, s^{-i}_t)}
\label{eq:lemma_cibbelief}
\end{align}
\end{lemma}
\begin{proof}
From Lemma \ref{prop:policy-independent}
we have
\begin{align}
\prob^{\hat g^i, \sigma^{-i}, \psi^\sigma}(x_t, s^{-i}_t \mid h^i_t)
= \prob^{\tilde g^i, \sigma^{-i}, \psi^\sigma}(x_t, s^{-i}_t \mid h^i_t)
= \prob^{\tilde g^i, \sigma^{-i}, \psi^\sigma}(x_t, s^{-i}_t \mid c_t, s^i_t).
\label{eq:lemma_cibbelief_pf1}
\end{align}
By Bayes' rule we obtain
\begin{align}
\prob^{\tilde{g}^i, \sigma^{-i}, \psi^\sigma}(x_t, s^{-i}_t \mid c_t, s^i_t) =
\frac{\prob^{\tilde{g}^i, \sigma^{-i}, \psi^\sigma}(x_t, s_t \mid c_t)}{\prob^{\tilde{g}^i, \sigma^{-i}, \psi^\sigma}(s^{i}_t \mid c_t)} = \frac{\pi^{\psi^\sigma,i}_t(x_t, s_t)}
{\sum_{s^{-i}_t, x_t}\pi^{\psi^\sigma,i}_t(x_t, s^i_t, s^{-i}_t)}.
\label{eq:lemma_cibbelief_pf2}
\end{align}
Combination of \eqref{eq:lemma_cibbelief_pf1} and \eqref{eq:lemma_cibbelief_pf2} establishes the assertion of Lemma \ref{lemma:CIBbelief-privatebelief}.
\end{proof}
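Equation \eqref{eq:lemma_cibbelief} is a conditioning of the CIB belief on agent $i$'s own sufficient private information. A small Python sketch, where the joint belief below is an arbitrary illustrative PMF over $(x_t, s^i_t, s^{-i}_t)$:

```python
def private_belief(pi, s_i):
    """pi: dict {(x, s_i, s_minus_i): prob}.  Returns P(x, s_{-i} | s_i),
    i.e. the CIB belief restricted to {s^i_t = s_i} and renormalized."""
    denom = sum(p for (x, si, sm), p in pi.items() if si == s_i)
    return {(x, sm): p / denom
            for (x, si, sm), p in pi.items() if si == s_i}

pi = {(0, 0, 0): 0.1, (0, 0, 1): 0.3, (1, 0, 0): 0.2, (1, 1, 1): 0.4}
print(private_belief(pi, 0))   # conditional belief over (x, s_{-i}); sums to 1
```

The computation uses only $\pi^{\psi^\sigma,i}_t$ and $s^i_t$, which is precisely why $(S^i_t, \Pi^{\psi^\sigma}_t)$ can serve as agent $i$'s information state later on.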
\begin{remark}
\label{remark:condindepend}
Suppose $X_t = (X^1_t, X^2_t, \ldots, X^N_t)$ and we have the conditional independence property, namely, that for any strategy profile $g$, $\prob^g(x_t, s_t \mid c_t) = \prod_i \prob^{g^i}(x^i_t, s^i_t \mid c_t)$. Then one can show for any $i$ that
\begin{align*}
\pi^{\psi^\sigma,i}_t(x_t, s_t) = \prod_j \pi^{\psi^\sigma,i}_t(x^j_t, s^j_t) =
\prob^{\tilde g^{i}_t}(x^i_t, s^{i}_t \mid c_t)
\prod_{j \neq i} \prob^{\sigma^{j}}(x^j_t, s^{j}_t \mid c_t)
\end{align*}
Therefore, for settings with the conditional independence property as in \cite{tang2022dynamic, ouyang2016TAC}, one can use the simplified beliefs $\prob^{\tilde g^{i}_t}(x^i_t, s^{i}_t \mid c_t)$ and $\prob^{\sigma^{j}}(x^j_t, s^{j}_t \mid c_t)$ as the compressed common information to compute the CIB belief $\pi^{\psi^\sigma, i}_t(x_t,s_t)$.
The conditional independence among the system components in the models of \cite{tang2022dynamic, ouyang2016TAC} could be lost when the agents' actions are not observable.
\end{remark}
\section{
Sequential decomposition (Step 3)
}
\label{sec:sequential_decomposition}
In this section we present a sequential decomposition of the game, that is, a backward inductive sequential procedure that determines a Sufficient Information Based Bayesian Nash Equilibrium (SIB-BNE), defined below, if each step of this procedure has a solution.
We proceed as follows. We first establish a key closedness of best response property (Section \ref{subsec:closedness}); we then use this property to provide a sequential decomposition of the game (Section \ref{subsec:dynamicprogram}).
\begin{definition}[SIB-BNE]
\label{def:SIBBNE_original}
Consider a SIB strategy profile $\sigma^* = (\sigma^{*1}, \sigma^{*2}, \ldots, \sigma^{*N})$ and its corresponding consistent update rule $\psi^{\sigma^*}$.
The SIB strategy profile $\sigma^*$ is a SIB-BNE if it is a BNE of the dynamic game. That is, for all $i \in \mathcal N$,
\begin{align}
\mathbb{E}^{\hat{g}^{i},\sigma^{*-i}, \psi^{\sigma^*}}\{U^i(X_{1:T},A_{1:T})\}
\leq
\mathbb{E}^{\sigma^*, \psi^{\sigma^*}}\{U^i(X_{1:T},A_{1:T})\},
\notag\\
\text{ for all strategies (not necessarily SIB strategies) }\hat{g}^i.
\end{align}
\end{definition}
\subsection{Closedness of best response}
\label{subsec:closedness}
The key result of this subsection is presented in the following theorem.
\begin{theorem}
\label{thm:closedness}
Consider a fixed and known SIB strategy profile $\sigma$ and the corresponding update rule $\psi^\sigma$. Suppose agents $-i$ use $\sigma^{-i}$ with $\psi^\sigma$. Then, there exists a SIB strategy $\hat\sigma^{i}$ that uses $\psi^\sigma$ and is a best response to $\sigma^{-i}$ with $\psi^\sigma$.
\end{theorem}
The proof is based on Lemmas \ref{thm:POMDP_new_n}, \ref{lemma:markov_new_n}, and \ref{lemma:utility_new_n} that we state and prove below.
\begin{lemma}
\label{thm:POMDP_new_n}
Consider a SIB strategy profile $\sigma$ and the corresponding update rule $\psi^\sigma$ along with the consistent CIB belief system $\Pi_{1:T}^{\psi^\sigma}$.
If agents $-i$ play according to the SIB strategies $\sigma^{-i}$ and use the update rule $\psi^\sigma$, the best response problem for agent $i$ is a POMDP with state and observation processes
\begin{align}
& \tilde X_t = (S_t, \Pi_t^{\psi^\sigma}, X_t), t \in \mathcal T\\
& \tilde Y_t = (Y^i_t, Z_t), t \in \mathcal T
\end{align}
respectively, and instantaneous utility
\begin{align}
\tilde u^i_t(\tilde X_t, A^i_t)
=
\sum_{a^{-i}_t}
\big(\prod_{j \neq i}
\sigma^{j}_{t}(a^j_t \mid S^j_t, \Pi_t^{\psi^\sigma})
\big)
u^i_t(X_t, a^{-i}_t, A^i_t), t \in \mathcal T
\end{align}
\end{lemma}
The assertion of Lemma \ref{thm:POMDP_new_n} is a direct consequence of Lemmas \ref{lemma:markov_new_n} and \ref{lemma:utility_new_n}.
\begin{lemma}
\label{lemma:markov_new_n}
Consider a SIB strategy profile $\sigma$ and the corresponding update rule $\psi^\sigma$. Suppose agents $-i$ play according to the SIB strategies $\sigma^{-i}$ using $\psi^\sigma$ and agent $i$ follows an arbitrary strategy $\hat{g}^i$ (not necessarily a SIB strategy). Then
\begin{align}
\prob^{\hat{g}^i,\sigma^{-i},\psi^\sigma}(\tilde x_{t+1}, \tilde y_{t+1} \mid
\tilde x_{1:t}, \tilde y_{1:t}, a^i_{1:t})
= \prob^{\hat{g}^i,\sigma^{-i},\psi^\sigma}(\tilde x_{t+1}, \tilde y_{t+1} \mid
\tilde x_{t}, a^i_{t})
\end{align}
\end{lemma}
\begin{proof}
The probability for the next state and observation $\tilde x_{t+1}, \tilde y_{t+1}$ can be computed by
\begin{align}
& \prob^{\hat{g}^i,\sigma^{-i},\psi^\sigma}(\tilde x_{t+1}, \tilde y_{t+1} \mid
\tilde x_{1:t}, \tilde y_{1:t}, a^i_{1:t})
\notag\\
= & \prob^{\hat{g}^i,\sigma^{-i},\psi^\sigma}(x_{t+1}, \pi^{\psi^\sigma}_{t+1}, s_{t+1}, y^i_{t+1}, z_{t+1}\mid
x_{1:t}, \pi^{\psi^\sigma}_{1:t}, s_{1:t}, y^i_{1:t}, z_{1:t}, a^i_{1:t})
\notag\\
= & \sum_{y^{-i}_{t+1}, a^{-i}_t} \prob^{\hat{g}^i,\sigma^{-i},\psi^\sigma}(
x_{t+1}, \pi^{\psi^\sigma}_{t+1}, s_{t+1}, y_{t+1}, z_{t+1}, a^{-i}_t\mid
x_{1:t}, \pi^{\psi^\sigma}_{1:t}, s_{1:t}, y^i_{1:t}, z_{1:t}, a^i_{1:t})
\notag\\
= & \sum_{y^{-i}_{t+1}, a^{-i}_t}
\big(\prod_{j}\mathds{1}(s^j_{t+1} = \phi^j_{t+1} (s^j_t, y^j_{t+1}, z_{t+1}, a^j_t))
\big)
\mathbb{P}\{z_{t+1},y_{t+1}, x_{t+1}\mid x_{t},a_{t}\}
\notag\\
& \hspace{2cm}
\mathds{1}(\pi^{\psi^\sigma}_{t+1} = \psi^\sigma_{t+1}(\pi^{\psi^\sigma}_t,z_{t+1}))
\big(\prod_{j \neq i}
\sigma^{j}_{t}(a^j_t \mid s^j_t, \pi^{\psi^\sigma}_t)
\big)
\label{eq:markov_update_1_n}
\end{align}
where the last equality follows from the system dynamics, part (ii) of Definition \ref{def:sufficient}, Definition \ref{def:CIB_update_n}, and the form of SIB strategies of agents ${-i}$.
Since the right hand side of \eqref{eq:markov_update_1_n} depends only on $(\tilde x_{t}, a^i_{t})$
we conclude that
\begin{align}
\prob^{\hat{g}^i,\sigma^{-i},\psi^\sigma}(\tilde x_{t+1}, \tilde y_{t+1} \mid
\tilde x_{1:t}, \tilde y_{1:t}, a^i_{1:t})
= \prob^{\hat{g}^i,\sigma^{-i},\psi^\sigma}(\tilde x_{t+1}, \tilde y_{t+1} \mid
\tilde x_{t}, a^i_{t})
\end{align}
\end{proof}
Lemma \ref{lemma:markov_new_n} shows that $\{\tilde{X}_t, \tilde{Y}_t, t \in \mathcal T\}$ is a Markov process conditional on $\{A^i_t, t \in \mathcal T\}$.
\begin{lemma}
\label{lemma:utility_new_n}
Consider a SIB strategy profile $\sigma$ and the corresponding update rule $\psi^\sigma$. Suppose agents $-i$ follow the SIB strategies $\sigma^{-i}$ using $\psi^\sigma$ and agent $i$ follows an arbitrary strategy $\hat{g}^i$ (not necessarily a SIB strategy).
Then there are utility functions $\tilde u^i_t$ such that
$\ee^{\hat{g}^i,\sigma^{-i},\psi^\sigma}[\tilde u^i_t(\tilde X_t, A^i_t)] = \ee^{\hat{g}^i,\sigma^{-i},\psi^\sigma}[u^i_t(X_t, A_t)]$ for all $t\in\mathcal T$.
\end{lemma}
\begin{proof}
Recall that $\tilde X_t = (S_t, \Pi^{\psi^\sigma}_t, X_t)$. Then
\begin{align}
& \ee^{\hat{g}^i,\sigma^{-i},\psi^\sigma}[u^i_t(X_t, A_t)]
\notag\\
= & \ee^{\hat{g}^i,\sigma^{-i},\psi^\sigma}[u^i_t(X_t, A^{-i}_t, A^i_t)]
\notag\\
= & \ee^{\hat{g}^i,\sigma^{-i},\psi^\sigma}\big[
\ee^{\hat{g}^i,\sigma^{-i},\psi^\sigma}[u^i_t(X_t, A^{-i}_t, A^i_t)\mid \tilde X_t, A^i_t]\big]
\notag\\
= & \ee^{\hat{g}^i,\sigma^{-i},\psi^\sigma}\big[
\sum_{a^{-i}_t} \prob^{\hat{g}^i,\sigma^{-i},\psi^\sigma}(a^{-i}_t \mid S_t, \Pi^{\psi^\sigma}_t, X_t, A^i_t) u^i_t(X_t, a^{-i}_t, A^i_t)
\big]
\notag\\
= & \ee^{\hat{g}^i,\sigma^{-i},\psi^\sigma}\big[
\sum_{a^{-i}_t}
\big(\prod_{j \neq i}
\sigma^{j}_{t}(a^j_t \mid S^j_t, \Pi^{\psi^\sigma}_t)
\big)
u^i_t(X_t, a^{-i}_t, A^i_t)
\big]
\end{align}
Therefore, we establish the claim of the lemma by defining
\begin{align}
\tilde u^i_t(\tilde X_t, A^i_t) = \sum_{a^{-i}_t}
\big(\prod_{j \neq i}
\sigma^{j}_{t}(a^j_t \mid S^j_t, \Pi^{\psi^\sigma}_t)
\big)
u^i_t(X_t, a^{-i}_t, A^i_t)
\end{align}
\end{proof}
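The reduced utility $\tilde u^i_t$ constructed in the proof above is simply an average of $u^i_t$ over the other agents' SIB-randomized actions. A two-agent Python sketch, where the stage utility and agent $j$'s strategy are illustrative assumptions:

```python
def u_tilde(x, s_j, a_i, u, sigma_j, actions_j):
    """Reduced utility: expectation of u over agent j's SIB-randomized action."""
    return sum(sigma_j(a_j, s_j) * u(x, a_j, a_i) for a_j in actions_j)

u = lambda x, a_j, a_i: x + a_i - a_j                  # hypothetical stage utility
sigma_j = lambda a_j, s_j: 0.7 if a_j == s_j else 0.3  # hypothetical SIB strategy
print(u_tilde(1, 0, 1, u, sigma_j, [0, 1]))
```

Because $\sigma^j_t$ depends only on $(S^j_t, \Pi^{\psi^\sigma}_t)$, which are components of $\tilde X_t$, the reduced utility is indeed a function of $(\tilde X_t, A^i_t)$ alone.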
\begin{proof}[Proof of Theorem \ref{thm:closedness}]
From Lemma \ref{thm:POMDP_new_n} we conclude that the best response of agent $i$ to $\sigma^{-i}$ is a POMDP with state $\tilde X_t$. From the theory of POMDP \cite[Chapter 6]{kumar1986stochastic} we know that: (i) the belief on the state $\tilde X_t = (S_t, \Pi^{\psi^\sigma}_t, X_t)$ conditioned on available information $h^i_t$ is an information state for the agent; (ii)
for each $t \in \mathcal T$ there exists an optimal strategy for agent $i$ that is a function of the information state at $t$.
We now prove that $(S^i_t, \Pi^{\psi^\sigma}_t)$ is an information state for agent $i$ at $t, t \in \mathcal T$.
\\
We note that $S^i_{t+1}=\phi^i_t(S^i_t, Y^i_{t+1}, Z_{t+1}, A^i_t)$ from part (ii) of Definition \ref{def:sufficient}, and $\Pi^{\psi^\sigma}_{t+1} = \psi^\sigma_{t+1}(\Pi^{\psi^\sigma}_t, Z_{t+1})$ from \eqref{eq:psi_t}.
Thus, we only need to show that
for any strategy $\hat{g}^i$ and any realization $h^i_t$ such that $\prob^{\hat{g}^i, \sigma^{-i},\psi^\sigma}(h^i_t) > 0$ the following equality is true:
\begin{align}
\prob^{\hat{g}^i, \sigma^{-i},\psi^\sigma} (s_t, \pi^{\psi^\sigma}_t, x_t \mid h^i_t)
=
\prob^{\hat{g}^i, \sigma^{-i},\psi^\sigma} (s_t, \pi^{\psi^\sigma}_t, x_t \mid s^i_t, \pi^{\psi^\sigma}_t)
\label{eq:thmclosedness_pf1}
\end{align}
To that end, we note that $s^i_t, \pi^{\psi^\sigma}_t$ are perfectly known to agent $i$. Furthermore, from the definition of sufficient private information and Lemma \ref{lemma:CIBbelief-privatebelief} we have
\begin{align}
\prob^{\hat{g}^i, \sigma^{-i},\psi^\sigma} (s^{-i}_t, x_t \mid h^i_t)
= \frac{\pi^{{\psi^\sigma}, i}_t(s_t, x_t)}
{
\sum_{\tilde s^{-i}_t, \tilde x_t}\pi^{{\psi^\sigma}, i}_t(s^i_t, \tilde s^{-i}_t, \tilde x_t)},
\label{eq:thmclosedness_pf2}
\end{align}
which is a function of $(s^i_t,\pi^{\psi^\sigma}_t)$. Therefore,
\begin{align}
\prob^{\hat{g}^i, \sigma^{-i},\psi^\sigma} (s_t, \pi^{\psi^\sigma}_t, x_t \mid h^i_t)
= \mathds{1}(s^i_t = \zeta^i_t(h^i_t))\mathds{1}(\pi^{\psi^\sigma}_t = \gamma^{\psi^\sigma}(h^i_t))
\prob^{\hat{g}^i, \sigma^{-i},\psi^\sigma} (s^{-i}_t, x_t \mid h^i_t)
\label{eq:thmclosedness_pf3}
\end{align}
where $\gamma^{\psi^\sigma}(h^i_t) = \psi^\sigma_t(\psi^\sigma_{t-1}(\cdots), z_t)$ denotes the composition of the update rules $\psi^\sigma_1$ through $\psi^\sigma_t$ applied to the common observations contained in $h^i_t$.
Then, equation \eqref{eq:thmclosedness_pf1} is true because of \eqref{eq:thmclosedness_pf2} and \eqref{eq:thmclosedness_pf3}.
Consequently, $(S^i_t,\Pi^{\psi^\sigma}_t), t \in \mathcal T$ is an information state for the best response problem for agent $i$ and the assertion of Theorem \ref{thm:closedness} is true.
\end{proof}
As a result of Theorem \ref{thm:closedness}, a definition of SIB BNE equivalent to Definition \ref{def:SIBBNE_original} is the following
\begin{definition}[Equivalent definition of SIB BNE]
\label{def:sibbne}
Consider a SIB strategy profile $\sigma^* = (\sigma^{*1}, \sigma^{*2}, \ldots, \sigma^{*n})$ and its corresponding consistent update rule $\psi^{\sigma^*}$.
The SIB strategy profile $\sigma^*$ is a SIB BNE if for all $i \in \mathcal N$,
\begin{align}
\mathbb{E}^{\sigma^{i},\sigma^{*-i}, \psi^{\sigma^*}}\{U^i(X_{1:T},A_{1:T})\}
\leq
\mathbb{E}^{\sigma^*, \psi^{\sigma^*}}\{U^i(X_{1:T},A_{1:T})\}
\end{align}
for all $\sigma^i \in \Lambda^i$, where $\Lambda^i$ is the set of SIB strategies of agent $i$.
\end{definition}
A consequence of Lemmas \ref{thm:POMDP_new_n}-\ref{lemma:utility_new_n} and Theorem \ref{thm:closedness} is the following. Consider a SIB strategy profile $\sigma$ and the corresponding update rule $\psi^\sigma$, along with the consistent CIB belief system $\Pi^{\psi^\sigma}_{1:T}$; if agents $-i$ play according to $\sigma^{-i}$, then the best response of agent $i$ can be determined by the dynamic program
\begin{align}
\breve V_{T+1}^i(\cdot, \cdot) = 0 \text{ for all }i
\end{align}
\begin{align}
\breve V^i_t(\pi^{\psi^\sigma}_t, s^i_t) =
& \max_{\tilde \sigma^i_t \in \Lambda^i_t}\ee^{\tilde \sigma^i_t, \sigma^{-i}_t, \psi^\sigma}\{u^i_t(X_t, A_t) + \breve V^i_{t+1}(\psi^{\sigma}_{t+1}(\pi^{\psi^\sigma}_t, Z_{t+1}), S^i_{t+1})\mid s^i_t\},
\notag\\
& \,
\forall \pi^{\psi^\sigma}_t \in \Delta(\mathcal X_t \times \mathcal S_t)^N, \forall s^i_t \in \mathcal S^i_t, t \in \mathcal T
\end{align}
where $\Lambda^i_t$ is the set of SIB strategies of agent $i$ at time $t$.
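When the belief simplex is discretized to a finite grid, the best-response dynamic program above reduces to a standard backward induction. The sketch below is a hypothetical finite-grid illustration, not the paper's algorithm: `states` stands for a finite set of information states $(\pi^{\psi^\sigma}_t, s^i_t)$, while `reward` and `transition` stand for the induced stage reward and controlled Markov kernel, all of which are assumptions of this example.

```python
def best_response_values(T, states, actions, reward, transition):
    """Backward induction for the best-response DP: V^i_{T+1} = 0, then
    V^i_t(state) = max_a [reward(t, state, a) + E V^i_{t+1}(next state)].
    `transition(state, a)` returns a list of (probability, next_state)
    pairs; the finite grid of information states is an illustrative
    discretization assumption."""
    V = {s: 0.0 for s in states}          # terminal condition V^i_{T+1} = 0
    policy = {}
    for t in range(T, 0, -1):             # t = T, T-1, ..., 1
        V_new = {}
        for s in states:
            best_q, best_a = None, None
            for a in actions:
                q = reward(t, s, a) + sum(p * V[s2] for p, s2 in transition(s, a))
                if best_q is None or q > best_q:
                    best_q, best_a = q, a
            V_new[s] = best_q
            policy[(t, s)] = best_a
        V = V_new
    return V, policy
```

The returned `policy` maps each pair $(t, \text{state})$ to a maximizing action, mirroring the fact that an optimal strategy can be chosen as a function of the information state at each $t$.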
\subsection{Sequential decomposition}
\label{subsec:dynamicprogram}
Given a set of value functions
$V_{t+1} = \{V^i_{t+1}: \mathbf\Pi_{t+1} \times \mathcal S^i_{t+1} \rightarrow \mathbb{R}, i \in \mathcal N\}$, a SIB strategy profile $\sigma$, the corresponding update rule $\psi^{\sigma}_{t+1}$ defined by \eqref{eq:psi_t}, and the consistent CIB belief $\pi^{\psi^\sigma}_t$,
define the stage-game $G_t(V_{t+1},\pi^{\psi^\sigma}_t)$ as follows.
(i) There are $N$ agents.
(ii) The system state is $X_t$.
(iii) Each agent $i$ observes private information $S^i_t$ and common information $\pi^{\psi^\sigma}_t$.
(iv) Agent $i$'s belief about the state $X_t$ and other agents' private information $S^{-i}_t$ is given by $\pi^{\psi^\sigma,i}_t(x_t, s^{-i}_t)$, that is,
\begin{align}
\pi^{\psi^\sigma,i}_t(x_t, s^{-i}_t) \in \Delta(\mathcal X_t \times \mathcal S^{-i}_t).
\end{align}
(v) Each agent $i$ selects action $A^i_t$ based on his available information; let $\hat \sigma^i_t$ denote agent $i$'s strategy for this stage-game; then,
\begin{align}
\prob^{\hat \sigma_t, \psi^\sigma}(A^i_t = a^i_t \mid s^i_t, \pi^{\psi^\sigma}_t) = \hat\sigma^i_t(a^i_t\mid s^i_t, \pi^{\psi^\sigma}_t).
\end{align}
(vi) Each agent $i$ has utility
\begin{align}
U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)} = u^i_t(X_t, A_t) + V^i_{t+1}(\psi^{\sigma}_{t+1}(\pi^{\psi^\sigma}_t, Z_{t+1}), S^i_{t+1})
\label{eq:stagegame_utility}
\end{align}
where
$(Z_{t+1},S^i_{t+1})$ conditioned on $(X_t, S_t, A_t)$ is distributed according to the marginal $\sum_{x_{t+1}, s^{-i}_{t+1}}\prob(z_{t+1}, x_{t+1}, s_{t+1} \mid x_t, s_t, a_t)$, where the conditional probability $\prob(z_{t+1}, x_{t+1}, s_{t+1} \mid x_t, s_t, a_t)$ is given by
\begin{align}
& \prob(z_{t+1}, x_{t+1}, s_{t+1} \mid x_t, s_t, a_t)
\notag\\
=& \sum_{y_{t+1}} \mathbb{P}\{x_{t+1}\mid x_t,a_t\}
\mathbb{P}\{z_{t+1},y_{t+1}\mid x_{t+1},a_t\}
\notag\\
& \hspace{1cm}\left(\prod_{j}\mathbbm{1}\{s_{t+1}^j = \phi_{t+1}^j(s_t^j,z_{t+1},y_{t+1}^j,a_t^j)\}\right)
\label{eq:update_condprob}
\end{align}
(vii) Given a strategy profile $\hat\sigma_t$ for the stage-game, the expected utility of each player $i$ is given by
\begin{align}
&\ee^{\hat\sigma_t, \psi^\sigma} [ U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)} \mid s^i_t ]
\notag\\
= &\sum_{x_t, s_t^{-i}, a_t, z_{t+1}, x_{t+1}, s_{t+1}}
\hspace{-1cm} \pi^{\psi^\sigma,i}_t(x_t, s_t^{-i}) \prod_j \hat\sigma^j_t(a^j_t\mid s^j_t, \pi^{\psi^\sigma}_t ) \prob(z_{t+1}, x_{t+1}, s_{t+1} \mid x_t, s_t, a_t)
\notag\\
&\hspace{2cm} (u^i_t(x_t, a_t) + V^i_{t+1}(\psi^\sigma_{t+1}(\pi_t^{\psi^\sigma}, z_{t+1}), s^i_{t+1}))
\label{eq:stagegame_exputil}
\end{align}
Note that the random variables of the stage-game $G_t(V_{t+1},\pi^{\psi^\sigma}_t)$ need not coincide with their counterparts in the original dynamic game, since each agent $i$ is allowed to choose an arbitrary SIB strategy $\hat \sigma^i_t$ that may differ from the strategy $\sigma^i_t$ specified by the SIB strategy profile $\sigma$. The stage-game random variables coincide with their counterparts in the original game when all agents follow $\sigma$.
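For finite sets, the kernel \eqref{eq:update_condprob} can be tabulated by summing over the private observations $y_{t+1}$ and forcing the next private states through the deterministic update maps $\phi^j_{t+1}$. The Python sketch below is illustrative; the callable/tuple encodings of the kernels are assumptions of this example.

```python
def joint_transition(x_t, s_t, a_t, P_x, P_zy, phi):
    """Tabulate P(z_{t+1}, x_{t+1}, s_{t+1} | x_t, s_t, a_t): sum over
    y_{t+1} of P(x_{t+1} | x_t, a_t) * P(z_{t+1}, y_{t+1} | x_{t+1}, a_t),
    with s^j_{t+1} = phi^j(s^j_t, z_{t+1}, y^j_{t+1}, a^j_t) deterministic.
    P_x and P_zy return lists of (outcome, probability) pairs; these
    encodings are illustrative assumptions."""
    n = len(s_t)
    out = {}
    for x_next, p_x in P_x(x_t, a_t):            # P(x_{t+1} | x_t, a_t)
        for (z, y), p_zy in P_zy(x_next, a_t):   # P(z_{t+1}, y_{t+1} | x_{t+1}, a_t)
            s_next = tuple(phi[j](s_t[j], z, y[j], a_t[j]) for j in range(n))
            key = (z, x_next, s_next)
            out[key] = out.get(key, 0.0) + p_x * p_zy
    return out
```

Since each $s^j_{t+1}$ is a deterministic function of the summed-over variables, the returned probabilities sum to one over $(z_{t+1}, x_{t+1}, s_{t+1})$.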
\begin{theorem}[Sequential decomposition]
\label{thm:sequential_decomposition}
Consider a SIB strategy profile $\sigma = \{\sigma_t, t \in \mathcal T\}$ and the corresponding update rule $\psi^\sigma = \{\psi^\sigma_t, t \in \mathcal T\}$ defined by \eqref{eq:psi_it}-\eqref{eq:psi_t}.
Define
\begin{align}
V_{T+1}^i(\cdot, \cdot) = 0 \text{ for all }i
\label{eq:dp_last}
\end{align}
\begin{align}
& V^i_t(\pi^{\psi^\sigma}_t, s^i_t) = \ee^{\sigma_t, \psi^\sigma} [ U^i_{G_t(V_{t+1}, \pi^{\psi^\sigma}_t)}\mid s^i_t ]
\label{eq:dp_valupdate}
\end{align}
where the right hand side of \eqref{eq:dp_valupdate} is given by \eqref{eq:stagegame_exputil}. If for all $t \in \mathcal T$, there is a SIB strategy profile $\hat\sigma_t$ such that
$\hat\sigma_t$ is a BNE of the stage-game $G_t(V_{t+1}, \pi^{\psi^\sigma}_t)$, that is,
\begin{align}
& \ee^{\hat \sigma^i_t, \hat\sigma^{-i}_t, \psi^\sigma}[U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)}\mid s^i_t]
= \max_{\tilde \sigma^i_t \in \Lambda^i_t}\ee^{\tilde \sigma^i_t, \hat\sigma^{-i}_t, \psi^\sigma}[U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)}\mid s^i_t]
\label{eq:dp_bne_max}
\end{align}
for all $i \in \mathcal N$ where $\Lambda^i_t$ is the set of SIB strategies of agent $i$ at time $t$, and
\begin{align}
\hat\sigma_t = \sigma_t,
\label{eq:dp_bne_eq}
\end{align}
then the SIB strategy profile $\sigma$ is a SIB-BNE of the original dynamic game.
\end{theorem}
\begin{proof}
Suppose that for all $t\in\mathcal T$ there is a SIB strategy profile $\hat \sigma_t = (\hat\sigma^1_t, \hat\sigma^2_t,\ldots,\hat \sigma^N_t)$ that is a BNE of the stage-game $G_t(V_{t+1}, \pi^{\psi^\sigma}_t)$. Then for all $\pi^{\psi^\sigma}_t \in \Delta(\mathcal X_t \times \mathcal S_t)^N, s^i_t \in \mathcal S^i_t$
\begin{align}
& \ee^{\hat \sigma^i_t, \hat\sigma^{-i}_t, \psi^\sigma}[U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)}\mid s^i_t]
\notag\\
= &
\max_{\tilde \sigma^i_t \in \Lambda^i_t}\ee^{\tilde \sigma^i_t, \hat\sigma^{-i}_t,\psi^\sigma}[u^i_t(X_t, A_t) + V^i_{t+1}(\psi^{\sigma}_{t+1}(\pi^{\psi^\sigma}_t, Z_{t+1}), S^i_{t+1})\mid s^i_t].
\label{eq:dp_single}
\end{align}
Equation \eqref{eq:dp_single} holds for all $t \in \mathcal T$ with $V^i_{T+1}(\cdot, \cdot) = 0$ and for all $i\in\mathcal N$.
When $\hat \sigma_t = \sigma_t$ for all $t\in\mathcal T$, Equation \eqref{eq:dp_single} gives, for all $\pi^{\psi^\sigma}_t \in \Delta(\mathcal X_t \times \mathcal S_t)^N, s^i_t \in \mathcal S^i_t$,
\begin{align}
V^i_t(\pi^{\psi^\sigma}_t, s^i_t) =
& \ee^{\sigma^i_t, \sigma^{-i}_t,\psi^\sigma}[U^i_{G_t(V_{t+1},\pi^{\psi^\sigma}_t)}\mid s^i_t]
\notag\\
= &
\max_{\tilde\sigma^i_t \in \Lambda^i_t}\ee^{\tilde \sigma^i_t, \sigma^{-i}_t,\psi^\sigma}[u^i_t(X_t, A_t) + V^i_{t+1}(\psi^{\sigma}_{t+1}(\pi^{\psi^\sigma}_t, Z_{t+1}), S^i_{t+1})\mid s^i_t]
\label{eq:thmsq_pf1}
\end{align}
for all $i\in\mathcal N$.
By induction, \eqref{eq:thmsq_pf1}, and the fact that the update rule $\psi^\sigma$ is consistent with $\sigma$, we have, for all $i\in\mathcal N$, all $t\in\mathcal T$, and every SIB strategy $\tilde\sigma^i_{t:T}$ of agent $i$,
\begin{align}
\ee^{\tilde\sigma^i_{t:T}, \sigma^{-i}_{t:T},\psi^\sigma}[
\sum_{\tau = t}^T
u^i_\tau(X_\tau, A_\tau)\mid s^i_t]
\leq
\ee^{\sigma^i_{t:T}, \sigma^{-i}_{t:T},\psi^\sigma}[
\sum_{\tau = t}^T
u^i_\tau(X_\tau, A_\tau)\mid s^i_t]
\label{eq:thmsq_pf2}
\end{align}
Then \eqref{eq:thmsq_pf2} at time $t=1$ gives
\begin{align}
\mathbb{E}^{\tilde\sigma^{i},\sigma^{-i}, \psi^{\sigma}}\{U^i(X_{1:T},A_{1:T})\}
\leq
\mathbb{E}^{\sigma, \psi^{\sigma}}\{U^i(X_{1:T},A_{1:T})\}
\end{align}
for all $\tilde \sigma^i \in \Lambda^i$ for all $i\in\mathcal N$.
Therefore, the strategy profile $\sigma$ is a SIB-BNE of the original dynamic game (cf. Definition \ref{def:sibbne}).
\end{proof}
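The value update \eqref{eq:dp_valupdate} requires evaluating the stage-game expected utility \eqref{eq:stagegame_exputil}. For finite sets this is a direct sum over the belief, the joint action, and the transition outcomes; the Python sketch below is illustrative, and the dict encodings of beliefs, strategies, and the transition table are assumptions of this example.

```python
import itertools

def expected_stage_utility(i, s_i, belief, strategies, action_sets,
                           u_i, trans, V_next, psi_update, pi_t):
    """Evaluate agent i's expected stage-game payoff: sum the belief over
    (x_t, s^{-i}_t), the independent SIB strategies over a_t, and the
    transition table over (z_{t+1}, x_{t+1}, s_{t+1}), weighting each term
    by u^i_t(x_t, a_t) + V^i_{t+1}(psi(pi_t, z_{t+1}), s^i_{t+1}).
    Illustrative encodings: `belief` maps (x, s_minus) -> probability with
    s_minus a tuple of (agent, state) pairs; `trans(x, s, a)` returns a dict
    {(z, x_next, s_next): probability}."""
    total = 0.0
    agents = sorted(strategies)
    for (x, s_minus), p_b in belief.items():
        s = dict(s_minus)
        s[i] = s_i                                  # full private-state profile
        for a_vec in itertools.product(*[action_sets[j] for j in agents]):
            a = dict(zip(agents, a_vec))
            p_a = 1.0
            for j in agents:                        # independent SIB strategies
                p_a *= strategies[j](a[j], s[j], pi_t)
            for (z, x_next, s_next), p_t in trans(x, s, a).items():
                cont = V_next(psi_update(pi_t, z), dict(s_next)[i])
                total += p_b * p_a * p_t * (u_i(x, a) + cont)
    return total
```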
\begin{remark}
\label{remark:stage-game}
Note that even when the stage-game $G_t(V_{t+1},\pi^{\psi^\sigma}_t)$ has a BNE $\hat\sigma_t$, it is possible that $\hat\sigma_t \neq \sigma_t$. Thus, the existence of BNE for every stage-game $G_t(V_{t+1},\pi^{\psi^\sigma}_t)$ is not sufficient to establish the existence of BNE for the original dynamic game.
\end{remark}
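The backward pass of Theorem \ref{thm:sequential_decomposition}, including the consistency requirement $\hat\sigma_t = \sigma_t$ emphasized in Remark \ref{remark:stage-game}, can be organized as a simple loop. The sketch below treats the stage-game BNE solver as an assumed oracle, and all names are illustrative rather than part of the paper's construction.

```python
def sequential_decomposition(T, info_states, stage_bne, sigma):
    """Backward pass: start from V^i_{T+1} = 0, solve each stage-game BNE
    given V_{t+1}, and verify it equals the candidate SIB profile sigma_t.
    If every check passes, sigma is certified as a SIB-BNE; a single failed
    check means this certificate does not apply.  `stage_bne(t, V)` is an
    assumed oracle returning (equilibrium, updated value functions)."""
    V = {s: 0.0 for s in info_states}     # terminal values (one agent shown)
    for t in range(T, 0, -1):
        bne_t, V = stage_bne(t, V)        # BNE of G_t(V_{t+1}, pi) and its values
        if bne_t != sigma[t]:             # consistency check, eq. (dp_bne_eq)
            return False, V
    return True, V
```

Note that returning `False` does not prove $\sigma$ fails to be a SIB-BNE; it only means the sufficient condition of the theorem is not met for this choice of stage-game equilibria.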
\begin{remark}
In the model of \cite{tang2022dynamic}, when each team consists of a single agent, a SIB BNE coincides with a SPCIB BNE introduced in \cite{tang2022dynamic} under an appropriate mapping of the information state, as discussed in Remark \ref{remark:condindepend}.
\end{remark}
\begin{remark}
\label{remark:no_dp_solution}
A solution for the set of value functions in the sequential decomposition equations \eqref{eq:dp_last}-\eqref{eq:dp_bne_eq}, for all $i\in\mathcal N$ and all $t\in\mathcal T$, may not exist.
\end{remark}
\begin{remark}
In Definition \ref{def:CIB_update_n}, \eqref{eq:update-2_n} could be defined differently; different choices of \eqref{eq:update-2_n} lead to different update rules $\psi$. For any such choice, the claim of Theorem \ref{thm:sequential_decomposition} still holds.
\end{remark}
\begin{remark}
\label{remark:valuefunction_discont}
The value functions of the sequential decomposition equations defined by Theorem \ref{thm:sequential_decomposition} (Eqs. \eqref{eq:dp_last}-\eqref{eq:dp_bne_eq} for all $i\in\mathcal N, t\in\mathcal T$) may not be continuous in the CIB belief $\Pi^{\psi^\sigma}_t$.
\end{remark}